Robust Sensitivity Analysis of Courses of Action Using an Additive Value Model
2008-03-01
According to Clemen, sensitivity analysis answers, “What makes a difference in this decision?” (2001:175). Sensitivity analysis can also indicate... alternative to change. These models look for the new weighting that causes a specific alternative to rank above all others. Barron and Schmidt first... (Schmidt, 1988:123). A smaller objective function value indicates greater sensitivity. Wolters and Mareschal propose a similar approach using goal
Optimum sensitivity derivatives of objective functions in nonlinear programming
NASA Technical Reports Server (NTRS)
Barthelemy, J.-F. M.; Sobieszczanski-Sobieski, J.
1983-01-01
The feasibility of eliminating second derivatives from the input of optimum sensitivity analyses of optimization problems is demonstrated. This elimination restricts the sensitivity analysis to the first-order sensitivity derivatives of the objective function. It is also shown that when a complete first-order sensitivity analysis is performed, second-order sensitivity derivatives of the objective function are available at little additional cost. An expression is derived whose application to linear programming is presented.
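The central claim, that first-order optimum sensitivities need no second derivatives of the objective, can be illustrated with the envelope theorem on a toy quadratic program (a minimal sketch under assumed names, not the paper's derivation): the derivative of the optimal value with respect to a problem parameter is just the partial derivative of the objective evaluated at the optimum.

```python
def f(x, p):
    # toy parametric objective: minimize over x for a given parameter p
    return (x - p) ** 2 + x ** 2

def argmin_f(p):
    # analytic minimizer of this toy objective: x*(p) = p / 2
    return p / 2.0

def optimal_value(p):
    return f(argmin_f(p), p)

def dV_dp_envelope(p):
    # first-order optimum sensitivity: partial df/dp evaluated at x*(p);
    # the derivative of x*(p) itself is not needed (envelope theorem)
    x = argmin_f(p)
    return -2.0 * (x - p)

p, h = 3.0, 1e-6
fd = (optimal_value(p + h) - optimal_value(p - h)) / (2 * h)
print(dV_dp_envelope(p), fd)  # both ≈ 3.0
```

The finite-difference check requires re-solving the optimization twice; the envelope expression reuses the single optimal point, which is why eliminating second derivatives is attractive.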
Ackerman, L K; Noonan, G O; Begley, T H
2009-12-01
The ambient ionization technique direct analysis in real time (DART) was characterized and evaluated for the screening of food packaging for the presence of packaging additives using a benchtop mass spectrometer (MS). Approximate optimum conditions were determined for 13 common food-packaging additives, including plasticizers, anti-oxidants, colorants, grease-proofers, and ultraviolet light stabilizers. Method sensitivity and linearity were evaluated using solutions and characterized polymer samples. Additionally, the response of a model additive (di-ethyl-hexyl-phthalate) was examined across a range of sample positions and DART and MS conditions (temperature, voltage, and helium flow). Under optimal conditions, the protonated molecule ([M+H]+) was the major ion for most additives. Additive responses were highly sensitive to sample and DART source orientation, as well as to DART flow rates, temperatures, and MS inlet voltages. DART-MS response was neither consistently linear nor quantitative in this setting, and sensitivity varied by additive. All additives studied were rapidly identified in multiple food-packaging materials by DART-MS/MS, suggesting this technique can be used to screen food packaging rapidly. However, method sensitivity and quantitation require further study and improvement.
Sensitivity analysis of a ground-water-flow model
Torak, Lynn J.; ,
1991-01-01
A sensitivity analysis was performed on 18 hydrological factors affecting steady-state groundwater flow in the Upper Floridan aquifer near Albany, southwestern Georgia. Computations were based on a calibrated, two-dimensional, finite-element digital model of the stream-aquifer system and the corresponding data inputs. Flow-system sensitivity was analyzed by computing water-level residuals obtained from simulations involving individual changes to each hydrological factor. Hydrological factors to which computed water levels were most sensitive were those that produced the largest change in the sum-of-squares of residuals for the smallest change in factor value. Plots of the sum-of-squares of residuals against multiplier or additive values that effect change in the hydrological factors are used to evaluate the influence of each factor on the simulated flow system. The shapes of these 'sensitivity curves' indicate the importance of each hydrological factor to the flow system. Because the sensitivity analysis can be performed during the preliminary phase of a water-resource investigation, it can be used to identify the types of hydrological data required to accurately characterize the flow system prior to collecting additional data or making management decisions.
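The "sensitivity curve" idea above, plotting the sum-of-squares of residuals against a multiplier applied to each factor, can be sketched on a toy stand-in for the calibrated model (the two-factor linear model and its names are hypothetical, not the Upper Floridan aquifer model):

```python
# toy sensitivity curve: sum-of-squares of residuals (SSR) versus a
# multiplier applied to each hydrological factor of a simple linear model
def simulate(transmissivity, recharge, xs):
    return [transmissivity * x + recharge for x in xs]

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
observed = simulate(2.0, 1.0, xs)  # "calibrated" water levels

def ssr(multiplier, factor):
    t, r = 2.0, 1.0
    if factor == "transmissivity":
        t *= multiplier
    else:
        r *= multiplier
    sim = simulate(t, r, xs)
    return sum((s - o) ** 2 for s, o in zip(sim, observed))

for m in (0.9, 1.0, 1.1):
    print(m, ssr(m, "transmissivity"), ssr(m, "recharge"))
# the factor whose curve rises more steeply around m = 1 is the one the
# simulated flow system is most sensitive to
```

Here a 10% change in the first factor produces a much larger SSR than the same change in the second, which is exactly the shape comparison the abstract describes.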
SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool
Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda
2008-01-01
Background It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. Systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML-compatible software tools are limited in their ability to perform global sensitivity analyses of these models. Results This work introduces a freely downloadable software package, SBML-SAT, which implements algorithms for simulation, steady-state analysis, robustness analysis, and local and global sensitivity analysis for SBML models. This software tool extends current capabilities through its execution of global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficient, Sobol's method, and the weighted average of local sensitivity analyses, as well as through its ability to handle systems with discontinuous events and its intuitive graphical user interface. Conclusion SBML-SAT provides the community of systems biologists a new tool for the analysis of their SBML models of biochemical and cellular processes. PMID:18706080
DOT National Transportation Integrated Search
2017-02-08
The study re-evaluates distress prediction models using the Mechanistic-Empirical Pavement Design Guide (MEPDG) and expands the sensitivity analysis to a wide range of pavement structures and soils. In addition, an extensive validation analysis of th...
Wu, Yiping; Yu, Wenfang; Yang, Benhong; Li, Pan
2018-05-15
The use of different food additives and their active metabolites has been found to cause serious problems to human health. Thus, considering the potential effects on human health, developing a sensitive and credible analytical method for different foods is important. Herein, the application of solvent-driven self-assembled Au nanoparticles (Au NPs) for the rapid and sensitive detection of food additives in different commercial products is reported. The assembled substrates are highly sensitive and exhibit excellent uniformity and reproducibility because of uniformly distributed and high-density hot spots. The sensitive analyses of ciprofloxacin (CF), diethylhexyl phthalate (DEHP), tartrazine and azodicarbonamide at the 0.1 ppm level using this surface-enhanced Raman spectroscopy (SERS) substrate are given, and the results show that Au NP arrays can serve as efficient SERS substrates for the detection of food additives. More importantly, SERS spectra of several commercial liquors and sweet drinks are obtained to evaluate the addition of illegal additives. This SERS active platform can be used as an effective strategy in the detection of prohibited additives in food.
ERIC Educational Resources Information Center
Akturk, Ahmet Oguz
2015-01-01
Purpose: The purpose of this paper is to determine the cyberbullying sensitivity levels of high school students and their perceived social supports levels, and analyze the variables that predict cyberbullying sensitivity. In addition, whether cyberbullying sensitivity levels and social support levels differed according to gender was also…
An initial investigation into methods of computing transonic aerodynamic sensitivity coefficients
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1991-01-01
Continuing studies associated with the development of the quasi-analytical (QA) sensitivity method for three dimensional transonic flow about wings are presented. Furthermore, initial results using the quasi-analytical approach were obtained and compared to those computed using the finite difference (FD) approach. The basic goals achieved were: (1) carrying out various debugging operations pertaining to the quasi-analytical method; (2) addition of section design variables to the sensitivity equation in the form of multiple right hand sides; (3) reconfiguring the analysis/sensitivity package in order to facilitate the execution of analysis/FD/QA test cases; and (4) enhancing the display of output data to allow careful examination of the results and to permit various comparisons of sensitivity derivatives obtained using the FD/QA methods to be conducted easily and quickly. In addition to discussing the above goals, the results of executing subcritical and supercritical test cases are presented.
Global Sensitivity and Data-Worth Analyses in iTOUGH2: User's Guide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wainwright, Haruko Murakami; Finsterle, Stefan
2016-07-15
This manual explains the use of local sensitivity analysis, the global Morris OAT and Sobol’ methods, and a related data-worth analysis as implemented in iTOUGH2. In addition to input specification and output formats, it includes some examples to show how to interpret results.
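The Morris one-at-a-time (OAT) screening described in this manual can be sketched in a few lines (a hedged illustration of the method, not the iTOUGH2 implementation; the toy model is hypothetical):

```python
import random

def morris_elementary_effects(model, k, trajectories=50, delta=0.1, seed=0):
    """Crude Morris OAT screening on the unit hypercube [0, 1]^k.

    Returns mu* (mean absolute elementary effect) per parameter; a
    minimal sketch, not the full trajectory design of the Morris method.
    """
    rng = random.Random(seed)
    mu_star = [0.0] * k
    for _ in range(trajectories):
        x = [rng.uniform(0.0, 1.0 - delta) for _ in range(k)]
        y0 = model(x)
        for i in range(k):
            xp = list(x)
            xp[i] += delta          # perturb one parameter at a time
            ee = (model(xp) - y0) / delta
            mu_star[i] += abs(ee)
    return [m / trajectories for m in mu_star]

# toy model: strongly driven by x0, weakly by x1, not at all by x2
f = lambda x: 10.0 * x[0] + 0.1 * x[1]
print(morris_elementary_effects(f, 3))  # ≈ [10.0, 0.1, 0.0]
```

Large mu* flags influential parameters cheaply, which is why Morris screening is typically run before a more expensive Sobol' analysis.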
Revisiting inconsistency in large pharmacogenomic studies
Safikhani, Zhaleh; Smirnov, Petr; Freeman, Mark; El-Hachem, Nehme; She, Adrian; Quevedo, Rene; Goldenberg, Anna; Birkbak, Nicolai J.; Hatzis, Christos; Shi, Leming; Beck, Andrew H.; Aerts, Hugo J.W.L.; Quackenbush, John; Haibe-Kains, Benjamin
2017-01-01
In 2013, we published a comparative analysis of mutation and gene expression profiles and drug sensitivity measurements for 15 drugs characterized in the 471 cancer cell lines screened in the Genomics of Drug Sensitivity in Cancer (GDSC) and Cancer Cell Line Encyclopedia (CCLE). While we found good concordance in gene expression profiles, there was substantial inconsistency in the drug responses reported by the GDSC and CCLE projects. We received extensive feedback on the comparisons that we performed. This feedback, along with the release of new data, prompted us to revisit our initial analysis. We present a new analysis using these expanded data, where we address the most significant suggestions for improvements on our published analysis — that targeted therapies and broad cytotoxic drugs should have been treated differently in assessing consistency, that consistency of both molecular profiles and drug sensitivity measurements should be compared across cell lines, and that the software analysis tools provided should have been easier to run, particularly as the GDSC and CCLE released additional data. Our re-analysis supports our previous finding that gene expression data are significantly more consistent than drug sensitivity measurements. Using new statistics to assess data consistency allowed identification of two broad effect drugs and three targeted drugs with moderate to good consistency in drug sensitivity data between GDSC and CCLE. For three other targeted drugs, there were not enough sensitive cell lines to assess the consistency of the pharmacological profiles. We found evidence of inconsistencies in pharmacological phenotypes for the remaining eight drugs. Overall, our findings suggest that the drug sensitivity data in GDSC and CCLE continue to present challenges for robust biomarker discovery. 
This re-analysis provides additional support for the argument that experimental standardization and validation of pharmacogenomic response will be necessary to advance the broad use of large pharmacogenomic screens. PMID:28928933
NASA Astrophysics Data System (ADS)
Hameed, M.; Demirel, M. C.; Moradkhani, H.
2015-12-01
The Global Sensitivity Analysis (GSA) approach helps identify the effectiveness of model parameters or inputs and thus provides essential information about model performance. In this study, the effects of the Sacramento Soil Moisture Accounting (SAC-SMA) model parameters, forcing data, and initial conditions are analysed using two GSA methods: Sobol' and the Fourier Amplitude Sensitivity Test (FAST). The simulations are carried out over five sub-basins within the Columbia River Basin (CRB) for three periods: one year, four years, and seven years. Four factors are considered and evaluated using the two sensitivity analysis methods: the simulation length, the parameter range, the model initial conditions, and the reliability of the global sensitivity analysis methods. The reliability of the sensitivity analysis results is compared based on (1) the agreement between the two methods (Sobol' and FAST) in highlighting the same parameters or inputs as the most influential and (2) how the methods cohere in ranking these sensitive parameters under the same conditions (sub-basins and simulation length). The results show coherence between the Sobol' and FAST sensitivity analysis methods. Additionally, the FAST method is found to be sufficient to evaluate the main effects of the model parameters and inputs. Another conclusion of this study is that the smaller the parameter or initial-condition ranges, the more consistent and coherent the results of the two sensitivity analysis methods.
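A first-order Sobol' index of the kind compared above can be estimated with a simple pick-freeze Monte Carlo scheme (a minimal sketch with a hypothetical two-input toy model, not the SAC-SMA study setup):

```python
import random

def sobol_first_order(model, k, n=20000, seed=1):
    """Pick-freeze Monte Carlo estimate of first-order Sobol' indices
    for a model with independent Uniform(0, 1) inputs (minimal sketch)."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(k)] for _ in range(n)]
    B = [[rng.random() for _ in range(k)] for _ in range(n)]
    yA = [model(a) for a in A]
    mean = sum(yA) / n
    var = sum((y - mean) ** 2 for y in yA) / n
    indices = []
    for i in range(k):
        # freeze coordinate i from sample A, resample the rest from B
        yABi = [model(b[:i] + [a[i]] + b[i + 1:]) for a, b in zip(A, B)]
        cov = sum(ya * y for ya, y in zip(yA, yABi)) / n - mean ** 2
        indices.append(cov / var)
    return indices

f = lambda x: 3.0 * x[0] + 1.0 * x[1]   # analytic S1 = 0.9, S2 = 0.1
print(sobol_first_order(f, 2))
```

For this additive toy model the first-order indices sum to one; gaps between first-order and total-effect indices (not computed here) would indicate interactions.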
NASA Astrophysics Data System (ADS)
Ke, Jyh-Bin; Lee, Wen-Chiung; Wang, Kuo-Hsiung
2007-07-01
This paper presents the reliability and sensitivity analysis of a system with M primary units, W warm standby units, and R unreliable service stations, where warm standby units switching to the primary state might fail. Failure times of primary and warm standby units are assumed to have exponential distributions, and service times of the failed units are exponentially distributed. In addition, breakdown times and repair times of the service stations also follow exponential distributions. Expressions for the system reliability, RY(t), and the mean time to system failure, MTTF, are derived. Sensitivity and relative sensitivity analyses of the system reliability and the mean time to failure with respect to system parameters are also investigated.
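The exponential-failure-time assumption makes MTTF expressions tractable. As a much simplified stand-in for the M-primary/W-standby model (one primary unit plus one cold standby, perfect switching; all names hypothetical), the closed form MTTF = 1/lam1 + 1/lam2 can be checked against simulation:

```python
import random

# MTTF of a toy 1-primary + 1-cold-standby system with exponential
# failure rates lam1 and lam2; closed form is 1/lam1 + 1/lam2.
def mttf_closed_form(lam1, lam2):
    return 1.0 / lam1 + 1.0 / lam2

def mttf_simulated(lam1, lam2, n=100000, seed=2):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        # system fails when the primary and then the standby have failed
        total += rng.expovariate(lam1) + rng.expovariate(lam2)
    return total / n

print(mttf_closed_form(0.5, 1.0))   # 3.0
print(mttf_simulated(0.5, 1.0))     # ≈ 3.0
```

A sensitivity such as d(MTTF)/d(lam1) = -1/lam1**2 follows directly from the closed form, which is the kind of parametric derivative the paper studies for its far richer model.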
Adjoint-Based Sensitivity and Uncertainty Analysis for Density and Composition: A User’s Guide
Favorite, Jeffrey A.; Perko, Zoltan; Kiedrowski, Brian C.; ...
2017-03-01
The ability to perform sensitivity analyses using adjoint-based first-order sensitivity theory has existed for decades. This paper provides guidance on how adjoint sensitivity methods can be used to predict the effect of material density and composition uncertainties in critical experiments, including when these uncertain parameters are correlated or constrained. Two widely used Monte Carlo codes, MCNP6 (Ref. 2) and SCALE 6.2 (Ref. 3), are both capable of computing isotopic density sensitivities in continuous energy and angle. Additionally, Perkó et al. have shown how individual isotope density sensitivities, easily computed using adjoint methods, can be combined to compute constrained first-order sensitivities that may be used in the uncertainty analysis. This paper provides details on how the codes are used to compute first-order sensitivities and how the sensitivities are used in an uncertainty analysis. Constrained first-order sensitivities are computed in a simple example problem.
The Volatility of Data Space: Topology Oriented Sensitivity Analysis
Du, Jing; Ligmann-Zielinska, Arika
2015-01-01
Despite the difference among specific methods, existing Sensitivity Analysis (SA) technologies are all value-based, that is, the uncertainties in the model input and output are quantified as changes of values. This paradigm provides only limited insight into the nature of models and the modeled systems. In addition to the value of data, a potentially richer information about the model lies in the topological difference between pre-model data space and post-model data space. This paper introduces an innovative SA method called Topology Oriented Sensitivity Analysis, which defines sensitivity as the volatility of data space. It extends SA into a deeper level that lies in the topology of data. PMID:26368929
Stability, performance and sensitivity analysis of I.I.D. jump linear systems
NASA Astrophysics Data System (ADS)
Chávez Fuentes, Jorge R.; González, Oscar R.; Gray, W. Steven
2018-06-01
This paper presents a symmetric Kronecker product analysis of independent and identically distributed jump linear systems to develop new, lower dimensional equations for the stability and performance analysis of this type of systems than what is currently available. In addition, new closed form expressions characterising multi-parameter relative sensitivity functions for performance metrics are introduced. The analysis technique is illustrated with a distributed fault-tolerant flight control example where the communication links are allowed to fail randomly.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Estep, Donald
2015-11-30
This project addressed the challenge of predictive computational analysis of strongly coupled, highly nonlinear multiphysics systems characterized by multiple physical phenomena that span a large range of length- and time-scales. Specifically, the project was focused on computational estimation of numerical error and sensitivity analysis of computational solutions with respect to variations in parameters and data. In addition, the project investigated the use of accurate computational estimates to guide efficient adaptive discretization. The project developed, analyzed and evaluated new variational adjoint-based techniques for integration, model, and data error estimation/control and sensitivity analysis, in evolutionary multiphysics multiscale simulations.
Application of design sensitivity analysis for greater improvement on machine structural dynamics
NASA Technical Reports Server (NTRS)
Yoshimura, Masataka
1987-01-01
Methodologies are presented for greatly improving machine structural dynamics by using design sensitivity analyses and evaluative parameters. First, design sensitivity coefficients and evaluative parameters of structural dynamics are described. Next, the relations between the design sensitivity coefficients and the evaluative parameters are clarified. Then, design improvement procedures of structural dynamics are proposed for the following three cases: (1) addition of elastic structural members, (2) addition of mass elements, and (3) substantial changes of joint design variables. Cases (1) and (2) correspond to the changes of the initial framework or configuration, and (3) corresponds to the alteration of poor initial design variables. Finally, numerical examples are given for demonstrating the availability of the methods proposed.
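For case (2), addition of mass elements, the relevant design sensitivity coefficient can be illustrated on the simplest possible structure, a one-degree-of-freedom oscillator (a hedged toy example, not the paper's multi-member formulation):

```python
import math

# design sensitivity of a natural frequency for a 1-DOF oscillator,
# omega = sqrt(k/m): analytic coefficient versus finite difference
def omega(k, m):
    return math.sqrt(k / m)

def domega_dm(k, m):
    # analytic sensitivity: adding mass lowers the natural frequency
    return -omega(k, m) / (2.0 * m)

k, m, h = 400.0, 4.0, 1e-6
fd = (omega(k, m + h) - omega(k, m - h)) / (2 * h)
print(domega_dm(k, m), fd)  # both ≈ -1.25
```

The sign and magnitude of such coefficients tell the designer where added mass or stiffness buys the largest improvement per unit change, which is the core of the proposed improvement procedures.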
2014-12-26
additive value function, which assumes mutual preferential independence (Gregory S. Parnell, 2013). In other words, this method can be used if the... additive value function method to calculate the aggregate value of multiple objectives. Step 9 : Sensitivity Analysis Once the global values are...gravity metric, the additive method will be applied using equal weights for each axis value function. Pilot Satisfaction (Usability) As expressed
Global sensitivity analysis in stochastic simulators of uncertain reaction networks.
Navarro Jimenez, M; Le Maître, O P; Knio, O M
2016-12-28
Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability with the uncertain kinetic parameters of the first statistical moments of model predictions. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subset of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of systems.
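The birth-death model used as an illustration in the paper can be simulated with a minimal Gillespie stochastic simulation algorithm (SSA). This sketch shows only the simulator whose inherent randomness the proposed method decomposes; it does not implement the Sobol'/random-time-change analysis itself:

```python
import random

# minimal Gillespie (SSA) run of the birth-death model:
# 0 -> X at rate kb, X -> 0 at propensity kd * x
def birth_death(kb, kd, x0=0, t_end=50.0, seed=3):
    rng = random.Random(seed)
    t, x = 0.0, x0
    while t < t_end:
        a_birth, a_death = kb, kd * x
        a_total = a_birth + a_death
        t += rng.expovariate(a_total)       # exponential waiting time
        if rng.random() * a_total < a_birth:
            x += 1
        else:
            x -= 1
    return x

# stationary mean is kb/kd; average a few seeded runs as a sanity check
samples = [birth_death(10.0, 1.0, seed=s) for s in range(200)]
print(sum(samples) / len(samples))  # ≈ 10
```

In the paper's framework, the seed-driven draws above correspond to the Poisson processes of the random-time-change representation, so channel-level randomness can be treated as an input alongside the kinetic parameters kb and kd.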
Quantifying uncertainty and sensitivity in sea ice models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urrego Blanco, Jorge Rolando; Hunke, Elizabeth Clare; Urban, Nathan Mark
The Los Alamos Sea Ice model has a number of input parameters for which accurate values are not always well established. We conduct a variance-based sensitivity analysis of hemispheric sea ice properties to 39 input parameters. The method accounts for non-linear and non-additive effects in the model.
SEP thrust subsystem performance sensitivity analysis
NASA Technical Reports Server (NTRS)
Atkins, K. L.; Sauer, C. G., Jr.; Kerrisk, D. J.
1973-01-01
This is a two-part report on solar electric propulsion (SEP) performance sensitivity analysis. The first part describes the preliminary analysis of the SEP thrust system performance for an Encke rendezvous mission. A detailed description of thrust subsystem hardware tolerances on mission performance is included together with nominal spacecraft parameters based on these tolerances. The second part describes the method of analysis and graphical techniques used in generating the data for Part 1. Included is a description of both the trajectory program used and the additional software developed for this analysis. Part 2 also includes a comprehensive description of the use of the graphical techniques employed in this performance analysis.
Maternal sensitivity: a concept analysis.
Shin, Hyunjeong; Park, Young-Joo; Ryu, Hosihn; Seomun, Gyeong-Ae
2008-11-01
The aim of this paper is to report a concept analysis of maternal sensitivity. Maternal sensitivity is a broad concept encompassing a variety of interrelated affective and behavioural caregiving attributes. It is used interchangeably with the terms maternal responsiveness or maternal competency, with no consistency of use. There is a need to clarify the concept of maternal sensitivity for research and practice. A search was performed on the CINAHL and Ovid MEDLINE databases using 'maternal sensitivity', 'maternal responsiveness' and 'sensitive mothering' as key words. The searches yielded 54 records for the years 1981-2007. Rodgers' method of evolutionary concept analysis was used to analyse the material. Four critical attributes of maternal sensitivity were identified: (a) dynamic process involving maternal abilities; (b) reciprocal give-and-take with the infant; (c) contingency on the infant's behaviour and (d) quality of maternal behaviours. Maternal identity and infant's needs and cues are antecedents for these attributes. The consequences are infant's comfort, mother-infant attachment and infant development. In addition, three positive affecting factors (social support, maternal-foetal attachment and high self-esteem) and three negative affecting factors (maternal depression, maternal stress and maternal anxiety) were identified. A clear understanding of the concept of maternal sensitivity could be useful for developing ways to enhance maternal sensitivity and to maximize the developmental potential of infants. Knowledge of the attributes of maternal sensitivity identified in this concept analysis may be helpful for constructing measuring items or dimensions.
Efficient sensitivity analysis method for chaotic dynamical systems
NASA Astrophysics Data System (ADS)
Liao, Haitao
2016-05-01
The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time-averaged quantities for chaotic dynamical systems. The key idea is to recast the time-averaged integration term in the form of a differential equation before applying the sensitivity analysis method. An additional constraint-based equation, which forms the augmented equations of motion, is proposed to calculate the time-averaged integration variable, and the sensitivity coefficients are obtained by solving the augmented differential equations. Applying the least squares shadowing formulation to the augmented equations yields an explicit expression for the sensitivity coefficient that depends on the final state of the Lagrange multipliers. The LU factorization technique used to calculate the Lagrange multipliers leads to better convergence behaviour and lower computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches, and some short impulsive sensitivity coefficients are observed when using the direct differentiation sensitivity analysis method.
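The direct-differentiation idea, augmenting the state equations with a tangent equation and a running time average, can be sketched on a simple non-chaotic ODE (a hedged toy, dx/dt = -p*x with J the time average of x; the chaotic case is precisely where this naive tangent approach fails and the shadowing machinery is needed):

```python
# tangent (direct-differentiation) sensitivity of a time-averaged
# quantity J = avg(x) for dx/dt = -p*x, integrated with explicit Euler
def avg_and_sensitivity(p, x0=1.0, T=10.0, n=100000):
    dt = T / n
    x, s = x0, 0.0          # s = dx/dp, the tangent variable
    J, dJ = 0.0, 0.0
    for _ in range(n):
        J += x * dt
        dJ += s * dt
        # augmented update: state equation plus its p-derivative
        x, s = x + dt * (-p * x), s + dt * (-p * s - x)
    return J / T, dJ / T

p, h = 1.0, 1e-5
J, dJ = avg_and_sensitivity(p)
Jp, _ = avg_and_sensitivity(p + h)
Jm, _ = avg_and_sensitivity(p - h)
fd = (Jp - Jm) / (2 * h)
print(dJ, fd)  # tangent and finite-difference sensitivities agree
```

Because the tangent update is the exact derivative of the discrete Euler map, the two estimates match to high precision here; for chaotic systems the tangent solution grows exponentially, motivating the least squares shadowing reformulation.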
NASA Astrophysics Data System (ADS)
Rieger, Vanessa S.; Dietmüller, Simone; Ponater, Michael
2017-10-01
Different strengths and types of radiative forcings cause variations in the climate sensitivities and efficacies. To relate these changes to their physical origin, this study tests whether a feedback analysis is a suitable approach. To this end, we apply the partial radiative perturbation method. Combining the forward and backward calculation turns out to be indispensable to ensure the additivity of feedbacks and to yield a closed forcing-feedback balance at the top of the atmosphere. For a set of CO2-forced simulations, the climate sensitivity changes with increasing forcing. The albedo, cloud and combined water vapour and lapse rate feedback are found to be responsible for the variations in the climate sensitivity. An O3-forced simulation (induced by enhanced NOx and CO surface emissions) causes a smaller efficacy than a CO2-forced simulation with a similar magnitude of forcing. We find that the Planck, albedo and most likely the cloud feedback are responsible for this effect. Reducing the radiative forcing impedes the statistical separability of feedbacks. We additionally discuss formal inconsistencies between the common ways of comparing climate sensitivities and feedbacks. Moreover, methodical recommendations for future work are given.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Yan; Vyas, Anant D.; Guo, Zhaomiao
This report summarizes our evaluation of the potential energy-use and GHG-emissions reduction achieved by shifting freight from truck to rail under a most-likely scenario. A sensitivity analysis is also included. The sensitivity analysis shows changes in energy use and GHG emissions when key parameters are varied. The major contribution and distinction from previous studies is that this study considers the rail level of service (LOS) and commodity movements at the origin-destination (O-D) level. In addition, this study considers the fragility and time sensitivity of each commodity type.
Scale/TSUNAMI Sensitivity Data for ICSBEP Evaluations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rearden, Bradley T; Reed, Davis Allan; Lefebvre, Robert A
2011-01-01
The Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) software developed at Oak Ridge National Laboratory (ORNL) as part of the Scale code system provide unique methods for code validation, gap analysis, and experiment design. For TSUNAMI analysis, sensitivity data are generated for each application and each existing or proposed experiment used in the assessment. The validation of diverse sets of applications requires potentially thousands of data files to be maintained and organized by the user, and a growing number of these files are available through the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE) distributed through the International Criticality Safety Benchmark Evaluation Program (ICSBEP). To facilitate the use of the IHECSBE benchmarks in rigorous TSUNAMI validation and gap analysis techniques, ORNL generated SCALE/TSUNAMI sensitivity data files (SDFs) for several hundred benchmarks for distribution with the IHECSBE. For the 2010 edition of IHECSBE, the sensitivity data were generated using 238-group cross-section data based on ENDF/B-VII.0 for 494 benchmark experiments. Additionally, ORNL has developed a quality assurance procedure to guide the generation of Scale inputs and sensitivity data, as well as a graphical user interface to facilitate the use of sensitivity data in identifying experiments and applying them in validation studies.
Comparative Sensitivity Analysis of Muscle Activation Dynamics
Günther, Michael; Götz, Thomas
2015-01-01
We mathematically compared two models of mammalian striated muscle activation dynamics proposed by Hatze and Zajac. Both models are representative of a broad variety of biomechanical models formulated as ordinary differential equations (ODEs). These models incorporate parameters that directly represent known physiological properties; other parameters have been introduced to reproduce empirical observations. We used sensitivity analysis to investigate the influence of model parameters on the ODE solutions. In addition, we expanded an existing approach to treat initial conditions as parameters and to calculate second-order sensitivities. Furthermore, we used a global sensitivity analysis approach to include finite ranges of parameter values. Hence, a theoretician striving for model reduction could use the method to identify particularly low sensitivities and detect superfluous parameters, while an experimenter could use it to identify particularly high sensitivities and improve parameter estimation. Hatze's nonlinear model incorporates some parameters to which activation dynamics is clearly more sensitive than to any parameter in Zajac's linear model. Unlike Zajac's model, however, Hatze's model can reproduce measured shifts in optimal muscle length with varied muscle activity. Accordingly, we extracted a specific parameter set for Hatze's model that combines best with a particular muscle force-length relation.
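As a minimal illustration of the kind of first-order parameter sensitivity computed in such studies, the sketch below Euler-integrates a Zajac-style linear activation ODE, da/dt = (u - a)/tau, and estimates the sensitivity of the final activation to the time constant tau by central finite differences. The model form, parameter values, and step sizes here are illustrative assumptions, not the paper's actual implementation:

```python
def simulate(tau, u=1.0, a0=0.0, dt=1e-3, t_end=0.2):
    """Euler-integrate Zajac-style linear activation dynamics da/dt = (u - a)/tau."""
    a = a0
    for _ in range(int(t_end / dt)):
        a += dt * (u - a) / tau
    return a

def sensitivity(tau, h=1e-6, **kw):
    """Central finite-difference estimate of d a(t_end) / d tau."""
    return (simulate(tau + h, **kw) - simulate(tau - h, **kw)) / (2 * h)
```

For the analytic solution a(t) = u(1 - exp(-t/tau)), the sensitivity is -u (t/tau^2) exp(-t/tau), so with tau = 0.04 s and t = 0.2 s the estimate should come out near -0.84; the finite-difference value can be checked against that closed form.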
Wegelin, Olivier; Bartels, Diny W M; Tromp, Ellen; Kuypers, Karel C; van Melick, Harm H E
2015-10-01
To evaluate the effects of cystoscopy on urine cytology and additional cytokeratin-20 (CK-20) staining in patients presenting with gross hematuria. For 83 patients presenting with gross hematuria, spontaneous and instrumented paired urine samples were analyzed. Three patients were excluded. Spontaneous samples were collected within 1 hour before cystoscopy, and the instrumented samples were tapped through the cystoscope. Subsequently, patients underwent cystoscopic evaluation and imaging of the urinary tract. If tumor suspicious lesions were found on cystoscopy or imaging, subjects underwent transurethral resection or ureterorenoscopy. Two blinded uropathological reviewers (DB, KK) evaluated 160 urine samples. Reference standards were results of cystoscopy, imaging, or histopathology. Thirty-seven patients (46.3%) underwent transurethral resection or ureterorenoscopy procedures. In 30 patients (37.5%) tumor presence was confirmed by histopathology. The specificity of urine analysis was significantly higher for spontaneous samples than instrumented samples for both cytology alone (94% vs 72%, P = .01) and for cytology combined with CK-20 analysis (98% vs 84%, P = .02). The difference in sensitivity between spontaneous and instrumented samples was not significant for both cytology alone (40% vs 53%) and combined with CK-20 analysis (67% vs 67%). The addition of CK-20 analysis to cytology significantly increases test sensitivity in spontaneous urine cytology (67% vs 40%, P = .03). Instrumentation significantly decreases specificity of urine cytology. This may lead to unnecessary diagnostic procedures. Additional CK-20 staining in spontaneous urine cytology significantly increases sensitivity but did not improve the already high specificity. We suggest performing urine cytology and CK-20 analysis on spontaneously voided urine.
Enhanced electrochemical nanoring electrode for analysis of cytosol in single cells.
Zhuang, Lihong; Zuo, Huanzhen; Wu, Zengqiang; Wang, Yu; Fang, Danjun; Jiang, Dechen
2014-12-02
A microelectrode array has been applied to single-cell analysis with relatively high throughput; however, the cells were typically cultured on the microelectrodes under cell-size microwell traps, making it difficult to functionalize the electrode surface for higher detection sensitivity. Here, nanoring electrodes embedded under the microwell traps were fabricated to isolate the electrode surface from the cell support, so that the electrode surface can be modified to obtain enhanced electrochemical sensitivity for single-cell analysis. Moreover, the nanometer-sized electrode permitted faster diffusion of analyte to the surface for an additional improvement in sensitivity, which was evidenced by electrochemical characterization and simulation. To demonstrate the concept of the functionalized nanoring electrode for single-cell analysis, the electrode surface was deposited with Prussian blue to detect intracellular hydrogen peroxide at a single cell. Currents of hundreds of picoamperes were observed at the functionalized nanoring electrode, exhibiting the enhanced electrochemical sensitivity. The successful realization of a functionalized nanoring electrode will benefit the development of high-throughput single-cell electrochemical analysis.
[Numerical simulation and operation optimization of biological filter].
Zou, Zong-Sen; Shi, Han-Chang; Chen, Xiang-Qiang; Xie, Xiao-Qing
2014-12-01
BioWin software and two sensitivity analysis methods were used to simulate the Denitrification Biological Filter (DNBF) + Biological Aerated Filter (BAF) process in the Yuandang Wastewater Treatment Plant. Based on the BioWin model of the DNBF + BAF process, the operation data of September 2013 were used for sensitivity analysis and model calibration, and the operation data of October 2013 were used for model validation. The results indicated that the calibrated model could accurately simulate practical DNBF + BAF processes, and the most sensitive parameters were those related to biofilm, OHOs, and aeration. After calibration and validation, the model was used for process optimization by simulating operation results under different conditions. The results showed that the best operating condition for discharge standard B was: reflux ratio = 50%, no methanol addition, influent C/N = 4.43; while the best operating condition for discharge standard A was: reflux ratio = 50%, influent COD = 155 mg·L(-1) after methanol addition, influent C/N = 5.10.
Laurence Lin; J.R. Webster
2012-01-01
The constant nutrient addition technique has been used extensively to measure nutrient uptake in streams. However, this technique is impractical for large streams, and the pulse nutrient addition (PNA) has been suggested as an alternative. We developed a computer model to simulate Monod kinetics nutrient uptake in large rivers and used this model to evaluate the...
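The Monod (saturating) uptake kinetics at the core of such a model can be sketched in a few lines. This is a hypothetical well-mixed-parcel simplification with made-up parameter values, not the authors' river model:

```python
def monod_uptake(c0, u_max=5.0, k_s=20.0, dt=0.1, t_end=10.0):
    """Euler-integrate dC/dt = -u_max * C / (k_s + C): Monod (saturating)
    uptake of a nutrient pulse in a well-mixed parcel of stream water.
    Returns the concentration remaining at t_end."""
    c = c0
    for _ in range(int(t_end / dt)):
        c += dt * (-u_max * c / (k_s + c))
        c = max(c, 0.0)  # concentration cannot go negative
    return c
```

Because uptake saturates at high concentration, the fractional removal of a large pulse (C0 well above k_s) is smaller than that of a small pulse, which is the behavior a pulse-addition experiment exploits.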
Monochromatic Measurements of the JPSS-1 VIIRS Polarization Sensitivity
NASA Technical Reports Server (NTRS)
McIntire, Jeff; Moyer, David; Brown, Steven W.; Lykke, Keith R.; Waluschka, Eugene; Oudrari, Hassan; Xiong, Xiaoxiong
2016-01-01
Polarization sensitivity is a critical property that must be characterized for spaceborne remote sensing instruments designed to measure reflected solar radiation. Broadband testing of the first Joint Polar-orbiting Satellite System (JPSS-1) Visible Infrared Imaging Radiometer Suite (VIIRS) showed unexpectedly large polarization sensitivities for the bluest bands on VIIRS (centered between 400 and 600 nm). Subsequent ray-trace modeling indicated that large diattenuation on the edges of the bandpass for these spectral bands was the driver behind these large sensitivities. Additional testing using the National Institute of Standards and Technology's Traveling Spectral Irradiance and Radiance Responsivity Calibrations Using Uniform Sources was added to the test program to verify and enhance the model. The testing was limited in scope to two spectral bands at two scan angles; nonetheless, this additional testing provided valuable insight into the polarization sensitivity. Analysis has shown that the derived diattenuation agreed with the broadband measurements to within an absolute difference of about 0.4 and that the ray-trace model reproduced the general features of the measured data. Additionally, by deriving the spectral responsivity, the linear diattenuation is shown to be explicitly dependent on the changes in bandwidth with polarization state.
Design sensitivity analysis of rotorcraft airframe structures for vibration reduction
NASA Technical Reports Server (NTRS)
Murthy, T. Sreekanta
1987-01-01
Optimization of rotorcraft structures for vibration reduction was studied. The objective of this study is to develop practical computational procedures for structural optimization of airframes subject to steady-state vibration response constraints. One of the key elements of any such computational procedure is design sensitivity analysis. A method for design sensitivity analysis of airframes under vibration response constraints is presented. The mathematical formulation of the method and its implementation as a new solution sequence in MSC/NASTRAN are described. The results of the application of the method to a simple finite element stick model of the AH-1G helicopter airframe are presented and discussed. Selection of design variables that are most likely to bring about changes in the response at specified locations in the airframe is based on consideration of forced response strain energy. Sensitivity coefficients are determined for the selected design variable set. Constraints on the natural frequencies are also included in addition to the constraints on the steady-state response. Sensitivity coefficients for these constraints are determined. Results of the analysis and insights gained in applying the method to the airframe model are discussed. The general nature of future work to be conducted is described.
Tanko, Zita; Shab, Arna; Diepgen, Thomas Ludwig; Weisshaar, Elke
2009-06-01
Fragrances are very common in everyday products. A metalworker with chronic hand eczema and previously diagnosed type IV sensitizations to epoxy resin, balsam of Peru, fragrance mix and fragrance mix II was diagnosed with additional type IV sensitizations to geraniol, hydroxycitronellal, lilial, tree moss, oak moss absolute, citral, citronellol, farnesol, Lyral, fragrance mix II and fragrance mix (with sorbitan sesquioleate). In addition, a type IV sensitization to the skin protection cream containing geraniol and citronellol used at the workplace was detected and deemed occupationally relevant in this case. The patient could have had contact with fragrances through private use of cosmetics and detergents; on the other hand, the fragrance-containing skin protection cream supports occupational exposure. This case report demonstrates that fragrance contact allergy has to be investigated and clarified individually, which requires a thorough history and a detailed analysis of the workplace.
Mess, Aylin; Vietzke, Jens-Peter; Rapp, Claudius; Francke, Wittko
2011-10-01
Tackifier resins play an important role as additives in pressure-sensitive adhesives (PSAs), modulating their desired properties. Depending on their origin and processing, tackifier resins can be multicomponent mixtures. Once they have been incorporated in a polymer matrix, conventional chemical analysis of tackifiers tends to be challenging because a suitable sample pretreatment and/or separation is necessary and all characteristic components have to be detected for an unequivocal identification of the resin additive. Nevertheless, a reliable analysis of tackifiers is essential for product quality and safety reasons. A promising approach for the examination of tackifier resins in PSAs is the novel direct analysis in real time mass spectrometry (DART-MS) technique, which enables screening analysis without time-consuming sample preparation. In the present work, four key classes of tackifier resins were studied (rosin, terpene phenolic, polyterpene, and hydrocarbon resins). Their corresponding complex mass spectra were interpreted and used as reference spectra for subsequent analyses. These data were used to analyze tackifier additives in synthetic rubber and acrylic adhesive matrixes. To prove the efficiency of the developed method, complete PSA products containing two or three different tackifiers were analyzed. The tackifier resins were successfully identified, while measurement and interpretation took less than 10 minutes per sample. Determination of resin additives in PSAs can be performed down to 0.1% (w/w, limit of detection) using the three most abundant signals for each tackifier. In summary, DART-MS is a rapid and efficient screening method for the analysis of various tackifiers in PSAs.
General methods for sensitivity analysis of equilibrium dynamics in patch occupancy models
Miller, David A.W.
2012-01-01
Sensitivity analysis is a useful tool for the study of ecological models that has many potential applications for patch occupancy modeling. Drawing from the rich foundation of existing methods for Markov chain models, I demonstrate new methods for sensitivity analysis of the equilibrium state dynamics of occupancy models. Estimates from three previous studies are used to illustrate the utility of the sensitivity calculations: a joint occupancy model for a prey species, its predators, and habitat used by both; occurrence dynamics from a well-known metapopulation study of three butterfly species; and Golden Eagle occupancy and reproductive dynamics. I show how to deal efficiently with multistate models and how to calculate sensitivities involving derived state variables and lower-level parameters. In addition, I extend methods to incorporate environmental variation by allowing for spatial and temporal variability in transition probabilities. The approach used here is concise and general and can fully account for environmental variability in transition parameters. The methods can be used to improve inferences in occupancy studies by quantifying the effects of underlying parameters, aiding prediction of future system states, and identifying priorities for sampling effort.
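For the simplest single-species case, the equilibrium occupancy of a two-state (occupied/unoccupied) patch chain has a closed form whose sensitivity to a lower-level parameter can be checked numerically. The two-parameter chain below is a textbook simplification, not one of the paper's three case studies:

```python
def equilibrium_occupancy(gamma, eps):
    """Stationary occupancy psi* = gamma / (gamma + eps) of a two-state
    patch chain with colonization probability gamma and extinction
    probability eps."""
    return gamma / (gamma + eps)

def numeric_sensitivity(gamma, eps, h=1e-7):
    """Central finite-difference sensitivity of psi* to gamma."""
    return (equilibrium_occupancy(gamma + h, eps)
            - equilibrium_occupancy(gamma - h, eps)) / (2 * h)

def analytic_sensitivity(gamma, eps):
    """Closed form: d psi* / d gamma = eps / (gamma + eps)**2."""
    return eps / (gamma + eps) ** 2
```

With gamma = 0.3 and eps = 0.1, psi* = 0.75 and the sensitivity is 0.625; the finite-difference estimate reproduces the closed form, which is the kind of consistency check that generalizes to multistate chains where no closed form exists.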
Wu, Bin; Ye, Ming; Chen, Huafeng; Shen, Jinfang F
2012-02-01
Adding trastuzumab to a conventional regimen of chemotherapy can improve survival in patients with human epidermal growth factor receptor 2 (HER2)-positive advanced gastric or gastroesophageal junction (GEJ) cancer, but the economic impact of this practice is unknown. The purpose of this cost-effectiveness analysis was to estimate the health and economic effects of adding trastuzumab to standard chemotherapy in patients with HER2-positive advanced gastric or GEJ cancer in China. A Markov model was developed to simulate the clinical course of typical patients with HER2-positive advanced gastric or GEJ cancer. Five-year quality-adjusted life-years (QALYs), costs, and incremental cost-effectiveness ratios (ICERs) were estimated. Model inputs were derived from the published literature and government sources. Direct costs were estimated from the perspective of Chinese society. One-way and probabilistic sensitivity analyses were conducted. On baseline analysis, the addition of trastuzumab increased cost and QALYs by $56,004.30 (year-2010 US $) and 0.18, respectively, relative to conventional chemotherapy, resulting in an ICER of $251,667.10/QALY gained. Probabilistic sensitivity analyses supported the conclusion that the addition of trastuzumab was not cost-effective. Budgetary impact analysis estimated that the annual increase in fiscal expenditures would be ~$1 billion. On univariate sensitivity analysis, the median overall survival time for conventional chemotherapy was the most influential factor with respect to the robustness of the model. The findings from the present analysis suggest that the addition of trastuzumab to conventional chemotherapy might not be cost-effective in patients with HER2-positive advanced gastric or GEJ cancer.
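The ICER at the heart of such a model is simply the incremental cost divided by the incremental effectiveness, compared against a willingness-to-pay threshold. The numbers below are made up for illustration and are not the study's inputs:

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    d_cost = cost_new - cost_old
    d_qaly = qaly_new - qaly_old
    if d_qaly <= 0:
        raise ValueError("new strategy gains no QALYs; ICER undefined")
    return d_cost / d_qaly

def cost_effective(icer_value, willingness_to_pay):
    """A strategy is conventionally deemed cost-effective if its ICER
    falls below the willingness-to-pay threshold."""
    return icer_value < willingness_to_pay
```

For example, a hypothetical new strategy costing $56,000 more while gaining 0.18 QALYs has an ICER of about $311,111/QALY; against a hypothetical threshold of $37,000/QALY it would not be cost-effective. Probabilistic sensitivity analysis repeats this calculation over Monte Carlo draws of the model inputs.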
NASA Technical Reports Server (NTRS)
Hou, Jean W.
1985-01-01
The thermal analysis and the calculation of thermal sensitivity of a cure cycle in autoclave processing of thick composite laminates were studied. A finite element program for the thermal analysis and for calculating design derivatives of the temperature distribution and the degree of cure was developed and verified. It was found that direct differentiation was the best approach for the thermal design sensitivity analysis. In addition, the direct differentiation approach provided time histories of the design derivatives, which are of great value to cure cycle designers. The direct differentiation approach is to be used for further study, namely optimal cure cycle design.
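Direct differentiation means differentiating the governing equation with respect to the design parameter and integrating the resulting sensitivity equation alongside the state, which yields the sensitivity's full time history. The sketch below applies the idea to a single lumped cooling ODE, dT/dt = -k(T - T_env), whose sensitivity S = dT/dk obeys dS/dt = -(T - T_env) - kS; the model and numbers are an illustrative stand-in, not the paper's finite element formulation:

```python
def direct_differentiation(k, t_end=2.0, dt=1e-4, T0=400.0, T_env=300.0):
    """Euler-integrate dT/dt = -k (T - T_env) together with the sensitivity
    equation dS/dt = -(T - T_env) - k S obtained by differentiating the
    ODE with respect to k (direct differentiation). Returns (T, S) at t_end."""
    T, S = T0, 0.0
    for _ in range(int(t_end / dt)):
        dT = -k * (T - T_env)
        dS = -(T - T_env) - k * S
        T += dt * dT
        S += dt * dS
    return T, S
```

The analytic solution T = T_env + (T0 - T_env) e^(-kt) gives S = -(T0 - T_env) t e^(-kt), so for k = 1 at t = 2 the pair should be about (313.53, -27.07), which the integration reproduces.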
NASA Astrophysics Data System (ADS)
Honarvar, Elahe; Venter, Andre R.
2017-06-01
Analysis of proteins by desorption electrospray ionization mass spectrometry (DESI-MS) is considered impractical due to a mass-dependent loss in sensitivity with increasing protein molecular weight. With the addition of ammonium bicarbonate, the sensitivity of DESI-MS toward proteins was improved. The signal-to-noise ratio (S/N) for a variety of proteins increased 2- to 3-fold relative to solvent systems containing formic acid and more than seven-fold relative to aqueous methanol spray solvents. Three methods for ammonium bicarbonate addition during DESI-MS were investigated. The additive delivered improvements in S/N whether it was mixed with the analyte prior to sample deposition, applied over pre-prepared samples, or simply added to the desorption spray solvent. The improvement correlated well with protein pI but not with protein size. Other ammonium or bicarbonate salts did not produce similar improvements in S/N, nor was this improvement observed for ESI of the same samples. As was previously described for ESI, DESI also caused extensive protein unfolding upon the addition of ammonium bicarbonate.
Conversion of paper sludge to ethanol, II: process design and economic analysis.
Fan, Zhiliang; Lynd, Lee R
2007-01-01
Process design and economics are considered for conversion of paper sludge to ethanol. A particular site, a bleached kraft mill operated in Gorham, NH by Fraser Papers (15 dry tons of sludge processed per day), is considered. In addition, profitability is examined for a larger plant (50 dry tons per day), and sensitivity analysis is carried out with respect to capacity, tipping fee, and ethanol price. Conversion based on simultaneous saccharification and fermentation with intermittent feeding is examined, with ethanol recovery provided by distillation and molecular sieve adsorption. It was found that the Fraser plant achieves positive cash flow with or without xylose conversion and mineral recovery. Sensitivity analysis indicates that the economics are very sensitive to ethanol selling price and scale; sensitive, though less so, to the tipping fee; and rather insensitive to the prices of cellulase and power. Internal rates of return exceeding 15% are projected for larger plants at most combinations of scale, tipping fee, and ethanol price. Our analysis lends support to the proposition that paper sludge is a leading point of entry and proving ground for emergent industrial processes featuring enzymatic hydrolysis of cellulosic biomass.
Analytical methods for Multi-Criteria Decision Analysis (MCDA) support the non-monetary valuation of ecosystem services for environmental decision making. Many published case studies transform ecosystem service outcomes into a common metric and aggregate the outcomes to set land ...
NASA Astrophysics Data System (ADS)
Luo, Jiannan; Lu, Wenxi
2014-06-01
Sobol' sensitivity analyses based on different surrogates were performed on a trichloroethylene (TCE)-contaminated aquifer to assess the sensitivity of remediation efficiency to the design variables: remediation duration, surfactant concentration, and injection rates at four wells. First, surrogate models of a multi-phase flow simulation model were constructed by applying radial basis function artificial neural network (RBFANN) and Kriging methods, and the two models were compared. Based on the developed surrogate models, the Sobol' method was used to calculate the sensitivity indices of the design variables affecting remediation efficiency. The coefficient of determination (R2) and the mean square error (MSE) of these two surrogate models demonstrated that both models had acceptable approximation accuracy; furthermore, the approximation accuracy of the Kriging model was slightly better than that of the RBFANN model. The Sobol' sensitivity analysis results demonstrated that remediation duration was the most important variable influencing remediation efficiency, followed by the injection rates at wells 1 and 3, while the injection rates at wells 2 and 4 and the surfactant concentration had negligible influence on remediation efficiency. In addition, the high-order sensitivity indices were all smaller than 0.01, indicating that interaction effects of these six factors were practically insignificant. The proposed surrogate-based Sobol' sensitivity analysis is an effective tool for calculating sensitivity indices, because it shows the relative contributions of the design variables (individual and interaction effects) to the output performance variability with a limited number of runs of a computationally expensive simulation model. The sensitivity analysis results lay a foundation for optimization of the groundwater remediation process.
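First-order Sobol' indices apportion output variance to individual inputs. A minimal pick-freeze Monte Carlo estimator (in the style of Saltelli's scheme) on a toy linear function is sketched below; the function, sample size, and stdlib-only implementation are illustrative stand-ins for the paper's RBFANN/Kriging surrogates:

```python
import random

def sobol_first_order(f, n_inputs, n_samples=50000, seed=0):
    """Pick-freeze Monte Carlo estimate of first-order Sobol' indices
    S_i = V_i / V for a function of independent U(0,1) inputs."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(n_inputs)] for _ in range(n_samples)]
    B = [[rng.random() for _ in range(n_inputs)] for _ in range(n_samples)]
    f_A = [f(x) for x in A]
    f_B = [f(x) for x in B]
    both = f_A + f_B
    mean = sum(both) / len(both)
    var = sum((y - mean) ** 2 for y in both) / (len(both) - 1)
    indices = []
    for i in range(n_inputs):
        # AB_i: rows of A with column i swapped in from B ("freeze" all but x_i)
        AB = [a[:i] + [b[i]] + a[i + 1:] for a, b in zip(A, B)]
        f_AB = [f(x) for x in AB]
        cov = sum(fb * (fab - fa)
                  for fb, fab, fa in zip(f_B, f_AB, f_A)) / n_samples
        indices.append(cov / var)
    return indices
```

For f(x) = 3·x1 + x2 with independent U(0,1) inputs, the exact first-order indices are 0.9 and 0.1, so the estimator's output can be checked directly; in a surrogate-based workflow, f would be the fitted surrogate rather than the expensive simulator.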
Mokhtari, Amirhossein; Christopher Frey, H; Zheng, Junyu
2006-11-01
Sensitivity analyses of exposure or risk models can help identify the most significant factors to aid in risk management or to prioritize additional research to reduce uncertainty in the estimates. However, sensitivity analysis is challenged by non-linearity, interactions between inputs, and multiple days or time scales. Selected sensitivity analysis methods are evaluated with respect to their applicability to human exposure models with such features using a testbed. The testbed is a simplified version of the US Environmental Protection Agency's Stochastic Human Exposure and Dose Simulation (SHEDS) model. The methods evaluated include Pearson and Spearman correlation, sample and rank regression, analysis of variance, the Fourier amplitude sensitivity test (FAST), and Sobol's method. The first five methods are known as "sampling-based" techniques, whereas the latter two are known as "variance-based" techniques. The main objective of the test cases was to identify the main and total contributions of individual inputs to the output variance. Sobol's method and FAST directly quantified these measures of sensitivity. Results show that the sensitivity of an input typically changed when evaluated under different time scales (e.g., daily versus monthly). All methods provided similar insights regarding less important inputs; however, Sobol's method and FAST provided more robust insights with respect to the sensitivity of important inputs compared with the sampling-based techniques. Thus, the sampling-based methods can be used in a screening step to identify unimportant inputs, followed by application of more computationally intensive refined methods to a smaller set of inputs. The implications of time variation in sensitivity results for risk management are briefly discussed.
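The screening step recommended above can be as simple as ranking inputs by the magnitude of their Spearman rank correlation with the output. A stdlib-only sketch (assuming continuous samples with no ties) follows; it is an illustration of the technique, not the SHEDS testbed itself:

```python
def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    Assumes no ties, as with continuous Monte Carlo samples."""
    def rank(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for pos, i in enumerate(order):
            r[i] = pos + 1.0  # 1-based rank
        return r
    rx, ry = rank(x), rank(y)
    n = len(x)
    m = (n + 1) / 2.0  # mean rank is the same for both series
    num = sum((a - m) * (b - m) for a, b in zip(rx, ry))
    den = (sum((a - m) ** 2 for a in rx)
           * sum((b - m) ** 2 for b in ry)) ** 0.5
    return num / den
```

Because it operates on ranks, this statistic captures any monotone input-output relationship, linear or not, which is why it is a common sampling-based screen before variance-based methods are applied to the surviving inputs.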
NASA Astrophysics Data System (ADS)
Siadaty, Moein; Kazazi, Mohsen
2018-04-01
Convective heat transfer, entropy generation, and pressure drop of two water-based nanofluids (Cu-water and Al2O3-water) in horizontal annular tubes are scrutinized by means of computational fluid dynamics (CFD), response surface methodology (RSM), and sensitivity analysis. First, central composite design is used to set up a series of experiments varying diameter ratio, length-to-diameter ratio, Reynolds number, and solid volume fraction. Then, CFD is used to calculate the Nusselt number, Euler number, and entropy generation. After that, RSM is applied to fit second-order polynomials to the responses. Finally, sensitivity analysis is conducted to assess the influence of the above-mentioned parameters inside the tube. In total, 62 different cases are examined. The CFD results show that Cu-water and Al2O3-water have the highest and lowest heat transfer rates, respectively. In addition, analysis of variance indicates that an increase in solid volume fraction increases the dimensionless pressure drop for Al2O3-water; moreover, it has a significant negative effect on the Cu-water Nusselt number but an insignificant effect on the Cu-water Euler number. Analysis of the Bejan number indicates that frictional and thermal entropy generation are the dominant irreversibilities in Al2O3-water and Cu-water flows, respectively. Sensitivity analysis indicates that the sensitivity of the dimensionless pressure drop to tube length for Cu-water is independent of the diameter ratio at different Reynolds numbers.
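Fitting a second-order polynomial and differentiating it is the essence of the RSM sensitivity step. A one-factor sketch that fits a quadratic exactly through three design points and evaluates the local sensitivity dy/dx is shown below; real central composite designs span several factors and use least-squares fitting, and these design points are made up:

```python
def quadratic_surface(points):
    """Fit y = c0 + c1*x + c2*x**2 exactly through three design points
    (a minimal one-factor response surface)."""
    (x0, y0), (x1, y1), (x2, y2) = points
    denom = (x0 - x1) * (x0 - x2) * (x1 - x2)
    c2 = (x2 * (y1 - y0) + x1 * (y0 - y2) + x0 * (y2 - y1)) / denom
    c1 = (x2**2 * (y0 - y1) + x1**2 * (y2 - y0) + x0**2 * (y1 - y2)) / denom
    c0 = y0 - c1 * x0 - c2 * x0**2
    return c0, c1, c2

def local_sensitivity(coeffs, x):
    """Derivative of the fitted surface: dy/dx = c1 + 2*c2*x."""
    c0, c1, c2 = coeffs
    return c1 + 2 * c2 * x
```

For the points (0, 0), (1, 1), (2, 4), the fit recovers y = x^2, and the local sensitivity at x = 1.5 is 3; with more factors the same derivative of the fitted polynomial gives the per-parameter sensitivities reported in RSM studies.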
Hu, Xiangdong; Liu, Yujiang; Qian, Linxue
2017-10-01
Real-time elastography (RTE) and shear wave elastography (SWE) are noninvasive and easily available imaging techniques that measure tissue strain, and it has been reported that the sensitivity and specificity of elastography in differentiating between benign and malignant thyroid nodules were better than those of conventional technologies. Relevant articles were searched in multiple databases; the comparison of the elasticity index (EI) was conducted with Review Manager 5.0. Forest plots of sensitivity and specificity and SROC curves of RTE and SWE were produced with STATA 10.0 software. In addition, sensitivity analysis and bias analysis of the studies were conducted to examine the quality of the articles; to estimate possible publication bias, a funnel plot was used and the Egger test was conducted. Finally, 22 articles that satisfied the inclusion criteria were included in this study. After exclusions, there were 2106 benign and 613 malignant nodules. The meta-analysis suggested that the difference in EI between benign and malignant nodules was statistically significant (SMD = 2.11, 95% CI [1.67, 2.55], P < .00001). The overall sensitivities of RTE and SWE were roughly comparable, whereas the difference in specificities between these 2 methods was statistically significant. In addition, a statistically significant difference in AUC between RTE and SWE was observed (P < .01). The specificity of RTE was statistically higher than that of SWE, which suggests that, compared with SWE, RTE may be more accurate in differentiating benign and malignant thyroid nodules.
Regier, Dean A; Friedman, Jan M; Marra, Carlo A
2010-05-14
Array genomic hybridization (AGH) provides a higher detection rate than does conventional cytogenetic testing when searching for chromosomal imbalance causing intellectual disability (ID). AGH is more costly than conventional cytogenetic testing, and it remains unclear whether AGH provides good value for money. Decision analytic modeling was used to evaluate the trade-off between costs, clinical effectiveness, and benefit of an AGH testing strategy compared to a conventional testing strategy. The trade-off between cost and effectiveness was expressed via the incremental cost-effectiveness ratio. Probabilistic sensitivity analysis was performed via Monte Carlo simulation. The baseline AGH testing strategy led to an average cost increase of $217 (95% CI $172-$261) per patient and an additional 8.2 diagnoses in every 100 tested (0.082; 95% CI 0.044-0.119). The mean incremental cost per additional diagnosis was $2646 (95% CI $1619-$5296). Probabilistic sensitivity analysis demonstrated that there was a 95% probability that AGH would be cost effective if decision makers were willing to pay $4550 for an additional diagnosis. Our model suggests that using AGH instead of conventional karyotyping for most ID patients provides good value for money. Deterministic sensitivity analysis found that employing AGH after first-line cytogenetic testing had proven uninformative did not provide good value for money when compared to using AGH as first-line testing.
Griffiths, Rian L; Bunch, Josephine
2012-07-15
Matrix-assisted laser desorption/ionization (MALDI) is a powerful technique for the direct analysis of lipids in complex mixtures and thin tissue sections, making it an extremely attractive method for profiling lipids in health and disease. Lipids are readily detected as [M+H](+), [M+Na](+) and [M+K](+) ions in positive ion MALDI mass spectrometry (MS) experiments. Splitting the signal among several adduct species not only decreases sensitivity, but can also lead to overlapping m/z values of the various adducts of different lipids. Additives can be used to promote formation of a particular adduct, improving sensitivity, reducing spectral complexity and enhancing structural characterization in collision-induced dissociation (CID) experiments. Li(+), Na(+), K(+), Cs(+) and NH(4)(+) cations were considered as a range of salt types (acetates, chlorides and nitrates) incorporated into DHB matrix solutions at concentrations between 5 and 80 mM. The study was extended to evaluate the effect of these additives on CID experiments of a lipid standard, after optimization of collision energy parameters. Experiments were performed on a hybrid quadrupole time-of-flight (QqTOF) instrument. The systematic evaluation of new and existing additives in MALDI-MS and MS/MS of lipids demonstrated the importance of additive cation and anion choice and concentration for tailoring spectral results. The recommended choice of additive depends on the desired outcomes of the experiment to be performed (MS or MS/MS). Nitrates are found to be particularly useful additives for lipid analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldstein, Peter
2014-01-24
This report describes the sensitivity of predicted nuclear fallout to a variety of model input parameters, including yield, height of burst, particle and activity size distribution parameters, wind speed, wind direction, topography, and precipitation. We investigate sensitivity over a wide but plausible range of model input parameters. In addition, we investigate a specific example with a relatively narrow range to illustrate the potential for evaluating uncertainties in predictions when there are more precise constraints on model parameters.
Additional EIPC Study Analysis. Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hadley, Stanton W; Gotham, Douglas J.; Luciani, Ralph L.
Between 2010 and 2012 the Eastern Interconnection Planning Collaborative (EIPC) conducted a major long-term resource and transmission study of the Eastern Interconnection (EI). With guidance from a Stakeholder Steering Committee (SSC) that included representatives from the Eastern Interconnection States Planning Council (EISPC) among others, the project was conducted in two phases. Phase 1 involved a long-term capacity expansion analysis that involved creation of eight major futures plus 72 sensitivities. Three scenarios were selected for more extensive transmission-focused evaluation in Phase 2. Five power flow analyses, nine production cost model runs (including six sensitivities), and three capital cost estimations were developed during this second phase. The results from Phase 1 and 2 provided a wealth of data that could be examined further to address energy-related questions. A list of 14 topics was developed for further analysis. This paper brings together the earlier interim reports of the first 13 topics plus one additional topic into a single final report.
Aida, Mari; Iwai, Takahiro; Okamoto, Yuki; Kohno, Satoshi; Kakegawa, Ken; Miyahara, Hidekazu; Seto, Yasuo; Okino, Akitoshi
2017-01-01
We developed a dual plasma desorption/ionization system using two plasmas for the semi-invasive analysis of compounds on heat-sensitive substrates such as skin. The first plasma was used for the desorption of the surface compounds, whereas the second was used for the ionization of the desorbed compounds. Using the two plasmas, each process can be optimized individually. A successful analysis of phenyl salicylate and 2-isopropylpyridine was achieved using the developed system. Furthermore, we showed that it was possible to detect the mass signals derived from a sample even at a distance 50 times greater than the distance from the position at which the samples were detached. In addition, to increase the intensity of the mass signal, 0%–0.02% (v/v) of hydrogen gas was added to the base gas generated in the ionizing plasma. We found that by optimizing the gas flow rate through the addition of a small amount of hydrogen gas, it was possible to obtain the intensity of the mass signal that was 45–824 times greater than that obtained without the addition of hydrogen gas.
A flexible, interpretable framework for assessing sensitivity to unmeasured confounding.
Dorie, Vincent; Harada, Masataka; Carnegie, Nicole Bohme; Hill, Jennifer
2016-09-10
When estimating causal effects, unmeasured confounding and model misspecification are both potential sources of bias. We propose a method to simultaneously address both issues in the form of a semi-parametric sensitivity analysis. In particular, our approach incorporates Bayesian Additive Regression Trees into a two-parameter sensitivity analysis strategy that assesses sensitivity of posterior distributions of treatment effects to choices of sensitivity parameters. This results in an easily interpretable framework for testing for the impact of an unmeasured confounder that also limits the number of modeling assumptions. We evaluate our approach in a large-scale simulation setting and with high blood pressure data taken from the Third National Health and Nutrition Examination Survey. The model is implemented as open-source software, integrated into the treatSens package for the R statistical programming language. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
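The two-parameter idea above can be illustrated in a deliberately simplified linear setting (not the BART-based model of the paper): an unmeasured confounder U shifts both treatment and outcome, and two sensitivity parameters control by how much. All coefficients below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
true_effect = 1.0            # treatment effect we are trying to recover
zeta_t, zeta_y = 0.8, 0.6    # sensitivity parameters: U -> treatment, U -> outcome

u = rng.normal(size=n)                        # unmeasured confounder
t = zeta_t * u + rng.normal(size=n)           # treatment depends on U
y = true_effect * t + zeta_y * u + rng.normal(size=n)

# Naive estimate ignoring U: slope of y on t
naive = np.cov(t, y)[0, 1] / np.var(t)

# In this linear case the bias has a closed form driven by the two
# sensitivity parameters: cov(t, u)/var(t) * zeta_y
predicted_bias = zeta_t / (1 + zeta_t**2) * zeta_y
```

Scanning a grid of (zeta_t, zeta_y) pairs and asking how large they must be to explain away the estimated effect is what makes a two-parameter sensitivity analysis interpretable.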
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, Haitao, E-mail: liaoht@cae.ac.cn
The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time averaged quantities for chaotic dynamical systems. The key idea is to recast the time averaged integration term in the form of a differential equation before applying the sensitivity analysis method. An additional constraint-based equation which forms the augmented equations of motion is proposed to calculate the time averaged integration variable, and the sensitivity coefficients are obtained as a result of solving the augmented differential equations. The application of the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient which is dependent on the final state of the Lagrange multipliers. The LU factorization technique to calculate the Lagrange multipliers leads to a better performance for the convergence problem and the computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches, and some short impulsive sensitivity coefficients are observed by using the direct differentiation sensitivity analysis method.
NASA Astrophysics Data System (ADS)
Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.
2015-07-01
Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
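The Sobol' analysis described above can be sketched with the standard pick-freeze (Saltelli) estimator for first-order indices. The toy model and sample sizes below are assumptions for illustration, not the Utah Energy Balance setup:

```python
import numpy as np

def first_order_sobol(f, d, n=100_000, seed=0):
    """Estimate first-order Sobol' indices with a Saltelli-style estimator."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                       # A with column i taken from B
        S[i] = np.mean(fB * (f(ABi) - fA)) / var  # first-order index estimate
    return S

# toy "forcing error" model: the output is dominated by the first input
f = lambda X: 4 * X[:, 0] + 2 * X[:, 1] + X[:, 2]
S = first_order_sobol(f, 3)   # analytic values: 16/21, 4/21, 1/21
```

Ranking the resulting indices is exactly the "which forcing error matters most" question posed above, here answered for a trivially known linear model.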
Optimization Issues with Complex Rotorcraft Comprehensive Analysis
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.; Young, Katherine C.; Tarzanin, Frank J.; Hirsh, Joel E.; Young, Darrell K.
1998-01-01
This paper investigates the use of the general purpose automatic differentiation (AD) tool called Automatic Differentiation of FORTRAN (ADIFOR) as a means of generating sensitivity derivatives for use in Boeing Helicopter's proprietary comprehensive rotor analysis code (VII). ADIFOR transforms an existing computer program into a new program that performs a sensitivity analysis in addition to the original analysis. In this study both the pros (exact derivatives, no step-size problems) and cons (more CPU, more memory) of ADIFOR are discussed. The size (based on the number of lines) of the VII code after ADIFOR processing increased by 70 percent and resulted in substantial computer memory requirements at execution. The ADIFOR derivatives took about 75 percent longer to compute than the finite-difference derivatives. However, the ADIFOR derivatives are exact and are not functions of step-size. The VII sensitivity derivatives generated by ADIFOR are compared with finite-difference derivatives. The ADIFOR and finite-difference derivatives are used in three optimization schemes to solve a low vibration rotor design problem.
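ADIFOR is a source-transformation tool for FORTRAN, but the underlying trade-off discussed above — exact derivatives with no step size, at extra compute cost — can be sketched with a minimal forward-mode dual-number class. This is an illustration of the AD idea, not the ADIFOR mechanism itself:

```python
import math

class Dual:
    """Minimal forward-mode AD: carries a value and its derivative together."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def sin(x):
    # chain rule for sin; falls back to math.sin for plain floats
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot) if isinstance(x, Dual) else math.sin(x)

def f(x):
    return x * x + sin(x)

x0 = 1.3
exact = f(Dual(x0, 1.0)).dot       # AD: exact derivative, no step-size choice
h = 1e-6
fd = (f(x0 + h) - f(x0)) / h       # finite difference: truncation error ~ h
```

The AD result matches the analytic derivative 2*x0 + cos(x0) to machine precision, while the finite difference carries a step-size-dependent error, which is the pro/con balance the paper weighs against CPU and memory cost.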
Digital PCR Improves Mutation Analysis in Pancreas Fine Needle Aspiration Biopsy Specimens.
Sho, Shonan; Court, Colin M; Kim, Stephen; Braxton, David R; Hou, Shuang; Muthusamy, V Raman; Watson, Rabindra R; Sedarat, Alireza; Tseng, Hsian-Rong; Tomlinson, James S
2017-01-01
Applications of precision oncology strategies rely on accurate tumor genotyping from clinically available specimens. Fine needle aspirations (FNA) are frequently obtained in cancer management and often represent the only source of tumor tissues for patients with metastatic or locally advanced diseases. However, FNAs obtained from pancreas ductal adenocarcinoma (PDAC) are often limited in cellularity and/or tumor cell purity, precluding accurate tumor genotyping in many cases. Digital PCR (dPCR) is a technology with exceptional sensitivity and low DNA template requirement, characteristics that are necessary for analyzing PDAC FNA samples. In the current study, we sought to evaluate dPCR as a mutation analysis tool for pancreas FNA specimens. To this end, we analyzed alterations in the KRAS gene in pancreas FNAs using dPCR. The sensitivity of dPCR mutation analysis was first determined using serial dilution cell spiking studies. Single-cell laser-microdissection (LMD) was then utilized to identify the minimal number of tumor cells needed for mutation detection. Lastly, dPCR mutation analysis was performed on 44 pancreas FNAs (34 formalin-fixed paraffin-embedded (FFPE) and 10 fresh (non-fixed)), including samples highly limited in cellularity (100 cells) and tumor cell purity (1%). We found dPCR to detect mutations with allele frequencies as low as 0.17%. Additionally, a single tumor cell could be detected within an abundance of normal cells. Using clinical FNA samples, dPCR mutation analysis was successful in all preoperative FNA biopsies tested, and its accuracy was confirmed via comparison with resected tumor specimens. Moreover, dPCR revealed additional KRAS mutations representing minor subclones within a tumor that were not detected by the current clinical gold standard method of Sanger sequencing. 
In conclusion, dPCR performs sensitive and accurate mutation analysis in pancreas FNAs, detecting not only the dominant mutation subtype, but also the additional rare mutation subtypes representing tumor heterogeneity.
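Digital PCR quantification rests on Poisson statistics over partitions: if a fraction p of partitions is positive, the mean template count per partition is λ = −ln(1 − p), which corrects for partitions holding more than one template. A minimal sketch of a mutant-allele-fraction calculation follows; the partition counts are hypothetical, not from this study:

```python
import math

def copies_per_partition(n_positive, n_total):
    """Poisson correction: lambda = -ln(1 - p), p = fraction of positive partitions."""
    return -math.log(1.0 - n_positive / n_total)

def mutant_fraction(mut_pos, wt_pos, n_total):
    lam_mut = copies_per_partition(mut_pos, n_total)
    lam_wt = copies_per_partition(wt_pos, n_total)
    return lam_mut / (lam_mut + lam_wt)

# hypothetical counts from a 20,000-partition run: a rare subclone
# of ~0.1% mutant alleles against an abundant wild-type background
frac = mutant_fraction(mut_pos=34, wt_pos=15_000, n_total=20_000)
```

Because each partition is read independently, a handful of mutant-positive partitions remains detectable even when wild-type template saturates most partitions, which is why allele frequencies well below Sanger's detection limit are reachable.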
The work and social adjustment scale: reliability, sensitivity and value.
Zahra, Daniel; Qureshi, Adam; Henley, William; Taylor, Rod; Quinn, Cath; Pooler, Jill; Hardy, Gillian; Newbold, Alexandra; Byng, Richard
2014-06-01
To investigate the psychometric properties of the Work and Social Adjustment Scale (WSAS) as an outcome measure for the Improving Access to Psychological Therapies programme, assessing its value as an addition to the Patient Health Questionnaire (PHQ-9) and the Generalised Anxiety Disorder questionnaire (GAD-7). Little research has investigated these properties to date. Reliability and responsiveness to change were assessed using data from 4,835 patients. Principal components analysis was used to determine whether the WSAS measures a factor distinct from the PHQ-9 and GAD-7. The WSAS measures a distinct social functioning factor, has high internal reliability, and is sensitive to treatment effects. The WSAS, PHQ-9 and GAD-7 perform comparably on measures of reliability and sensitivity. The WSAS also measures a distinct social functioning component, suggesting it has potential as an additional outcome measure.
Pseudotargeted MS Method for the Sensitive Analysis of Protein Phosphorylation in Protein Complexes.
Lyu, Jiawen; Wang, Yan; Mao, Jiawei; Yao, Yating; Wang, Shujuan; Zheng, Yong; Ye, Mingliang
2018-05-15
In this study, we presented an enrichment-free approach for the sensitive analysis of protein phosphorylation in minute amounts of samples, such as purified protein complexes. This method takes advantage of the high sensitivity of parallel reaction monitoring (PRM). Specifically, low confident phosphopeptides identified from the data-dependent acquisition (DDA) data set were used to build a pseudotargeted list for PRM analysis to allow the identification of additional phosphopeptides with high confidence. The development of this targeted approach is very easy as the same sample and the same LC-system were used for the discovery and the targeted analysis phases. No sample fractionation or enrichment was required for the discovery phase which allowed this method to analyze minute amount of sample. We applied this pseudotargeted MS method to quantitatively examine phosphopeptides in affinity purified endogenous Shc1 protein complexes at four temporal stages of EGF signaling and identified 82 phospho-sites. To our knowledge, this is the highest number of phospho-sites identified from the protein complexes. This pseudotargeted MS method is highly sensitive in the identification of low abundance phosphopeptides and could be a powerful tool to study phosphorylation-regulated assembly of protein complex.
Hasan, Nazim; Gopal, Judy; Wu, Hui-Fen
2011-11-01
Biofilm studies have extensive significance since their results can provide insights into the behavior of bacteria on material surfaces when exposed to natural water. This is the first attempt of using matrix-assisted laser desorption/ionization-mass spectrometry (MALDI-MS) for detecting the polysaccharides formed in a complex biofilm consisting of a mixed consortium of marine microbes. MALDI-MS has been applied to directly analyze exopolysaccharides (EPS) in the biofilm formed on aluminum surfaces exposed to seawater. The optimal conditions for MALDI-MS applied to EPS analysis of biofilm have been described. In addition, microbiologically influenced corrosion of aluminum exposed to sea water by a marine fungus was also observed and the fungus identity established using MALDI-MS analysis of EPS. Rapid, sensitive and direct MALDI-MS analysis on biofilm would dramatically speed up and provide new insights into biofilm studies due to its excellent advantages such as simplicity, high sensitivity, high selectivity and high speed. This study introduces a novel, fast, sensitive and selective platform for biofilm study from natural water without the need of tedious culturing steps or complicated sample pretreatment procedures. Copyright © 2011 John Wiley & Sons, Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Ying
My graduate research has focused on separation science and bioanalytical analysis, with an emphasis on method development. It includes three major areas: enantiomeric separations using high performance liquid chromatography (HPLC), super/subcritical fluid chromatography (SFC), and capillary electrophoresis (CE); drug-protein binding behavior studies using CE; and carbohydrate analysis using liquid chromatography-electrospray ionization mass spectrometry (LC-ESI-MS). Enantiomeric separations continue to be extremely important in the pharmaceutical industry. An in-depth evaluation of the enantiomeric separation capabilities of macrocyclic glycopeptide CSPs with SFC mobile phases was conducted using a set of over 100 chiral compounds. It was found that the macrocyclic-based CSPs were able to separate enantiomers of various compounds with different polarities and functionalities. Seventy percent of all separations were achieved in less than 4 min due to the high flow rate (4.0 ml/min) that can be used in SFC. Drug-protein binding is an important process in determining the activity and fate of a drug once it enters the body. Two drug/protein systems have been studied using the frontal analysis CE method. More sensitive fluorescence detection was introduced in this assay, which overcame the problem of low sensitivity that is common when using UV detection for drug-protein studies. In addition, the first use of an argon ion laser with a 257 nm beam coupled with a CCD camera as a frontal analysis detection method enabled the simultaneous observation of drug fluorescence as well as protein fluorescence. LC-ESI-MS was used for the separation and characterization of underivatized oligosaccharide mixtures. With limits of detection as low as 50 picograms, all individual components of oligosaccharide mixtures (up to 11 glucose units long) were baseline resolved on a Cyclobond I 2000 column and detected using ESI-MS.
This system is characterized by high chromatographic resolution, high column stability, and high sensitivity. In addition, this method showed potential usefulness for the sensitive and quick analysis of hydrolysis products of polysaccharides, and for trace level analysis of individual oligosaccharides or oligosaccharide isomers from biological systems.
LSENS, The NASA Lewis Kinetics and Sensitivity Analysis Code
NASA Technical Reports Server (NTRS)
Radhakrishnan, K.
2000-01-01
A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS (the NASA Lewis kinetics and sensitivity analysis code), are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include: static system; steady, one-dimensional, inviscid flow; incident-shock initiated reaction in a shock tube; and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method (LSODE, the Livermore Solver for Ordinary Differential Equations), which works efficiently for the extremes of very fast and very slow reactions, is used to solve the "stiff" ordinary differential equation systems that arise in chemical kinetics. For static reactions, the code uses the decoupled direct method to calculate sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of dependent variables and/or the rate coefficient parameters. Solution methods for the equilibrium and post-shock conditions and for perfectly stirred reactor problems are either adapted from or based on the procedures built into the NASA code CEA (Chemical Equilibrium and Applications).
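The stiffness that LSODE handles comes from rate coefficients spanning many orders of magnitude. A minimal sketch with SciPy's LSODA solver (a descendant of the same Livermore ODEPACK family) is shown below; the two-step mechanism and rate constants are invented for illustration, not taken from LSENS:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Stiff toy mechanism A -> B (very fast), B -> C (slow):
# rate constants differing by six orders of magnitude make explicit
# integrators take prohibitively small steps, while an implicit/stiff
# solver handles the system easily.
k1, k2 = 1.0e6, 1.0

def rhs(t, y):
    a, b, c = y
    return [-k1 * a, k1 * a - k2 * b, k2 * b]

sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0, 0.0],
                method="LSODA", rtol=1e-8, atol=1e-12)
mass = sol.y.sum(axis=0)   # total A + B + C should remain 1 throughout
```

Mass conservation across the run is a quick sanity check that the stiff integration is accurate at both the fast and slow time scales.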
Assessment of energy and economic performance of office building models: a case study
NASA Astrophysics Data System (ADS)
Song, X. Y.; Ye, C. T.; Li, H. S.; Wang, X. L.; Ma, W. B.
2016-08-01
Energy consumption by buildings accounts for more than 37.3% of total energy consumption in China, while energy-saving buildings make up just 5% of the stock. In this paper, to assess energy-saving potential, an office building in Southern China was selected as a test case for energy consumption characteristics. The base building model was developed in the TRNSYS software and validated against data recorded during six days of field work in August-September 2013. Sensitivity analysis was conducted for the energy performance of building envelope retrofitting; five envelope parameters were analyzed to assess the thermal responses. Results indicated that the key sensitivity factors were the heat-transfer coefficient of exterior walls (U-wall), the infiltration rate, and the shading coefficient (SC), whose summed sensitivity factor was about 89.32%. In addition, the results were evaluated in terms of energy and economic analysis. The sensitivity analysis was consistent with important results of previous studies. Furthermore, the cost-effectiveness method improved the efficiency of investment management in building energy.
Rotator cuff crepitus: could Codman really feel a cuff tear?
Ponce, Brent A; Kundukulam, Joseph A; Sheppard, Evan D; Determann, Jason R; McGwin, Gerald; Narducci, Carl A; Crowther, Marshall J
2014-07-01
The objective of this study was to assess the accuracy of palpating crepitus to diagnose rotator cuff tears. Seventy consecutive consenting patients who presented with shoulder pain and no previous imaging or surgery on the affected shoulder were prospectively enrolled during a 10-month period. A standardized patient history and examination, including the crepitus test, were recorded in addition to obtaining standard radiographs. Additional imaging after initial evaluation was performed with magnetic resonance imaging and interpreted by a musculoskeletal radiologist blinded to the examination findings. Statistical analysis was used to determine sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of the crepitus test in the clinical diagnosis of a rotator cuff tear. Sixty-three patients had histories, examinations, and imaging studies available for analysis. The crepitus test had a sensitivity of 67%, specificity of 80%, PPV of 91%, and NPV of 43% for all types of rotator cuff tears. The sensitivity and specificity for full-thickness or high-grade partial tears was 82% and 73%, respectively; the PPV and NPV were 77% and 79%. Increasing age improved accuracy as the presence of crepitus in patients older than 55 years had a sensitivity of 76%, specificity of 100%, PPV of 100%, and NPV of 38%. The crepitus test has a favorable sensitivity, specificity, PPV, and NPV to assess the integrity of the rotator cuff and may be a useful examination in the clinical diagnosis of a rotator cuff tear. Published by Mosby, Inc.
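The four reported statistics come straight from a 2×2 table of test result versus disease status. The cell counts below are an assumption chosen to be consistent with the reported 67%/80%/91%/43% and the 63 analyzed patients, not the study's actual table:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV from a 2x2 diagnostic table."""
    return {
        "sensitivity": tp / (tp + fn),   # of those with a tear, fraction test-positive
        "specificity": tn / (tn + fp),   # of those without, fraction test-negative
        "ppv": tp / (tp + fp),           # positive test -> probability of a tear
        "npv": tn / (tn + fn),           # negative test -> probability of no tear
    }

# hypothetical table: 48 patients with tears, 15 without (63 total)
m = diagnostic_metrics(tp=32, fp=3, fn=16, tn=12)
```

Note that PPV and NPV, unlike sensitivity and specificity, shift with disease prevalence in the examined population, which is one reason the crepitus test performed differently in the older subgroup.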
Improving the analysis of slug tests
McElwee, C.D.
2002-01-01
This paper examines several techniques that have the potential to improve the quality of slug test analysis. These techniques are applicable in the range from low hydraulic conductivities with overdamped responses to high hydraulic conductivities with nonlinear oscillatory responses. Four techniques for improving slug test analysis will be discussed: use of an extended capability nonlinear model, sensitivity analysis, correction for acceleration and velocity effects, and use of multiple slug tests. The four-parameter nonlinear slug test model used in this work is shown to allow accurate analysis of slug tests with widely differing character. The parameter β represents a correction to the water column length caused primarily by radius variations in the wellbore and is most useful in matching the oscillation frequency and amplitude. The water column velocity at slug initiation (V0) is an additional model parameter, which would ideally be zero but may not be due to the initiation mechanism. The remaining two model parameters are A (parameter for nonlinear effects) and K (hydraulic conductivity). Sensitivity analysis shows that in general β and V0 have the lowest sensitivity and K usually has the highest. However, for very high K values the sensitivity to A may surpass the sensitivity to K. Oscillatory slug tests involve higher accelerations and velocities of the water column; thus, the pressure transducer responses are affected by these factors and the model response must be corrected to allow maximum accuracy for the analysis. The performance of multiple slug tests will allow some statistical measure of the experimental accuracy and of the reliability of the resulting aquifer parameters. © 2002 Elsevier Science B.V. All rights reserved.
Gamma Ray Observatory (GRO) OBC attitude error analysis
NASA Technical Reports Server (NTRS)
Harman, R. R.
1990-01-01
This analysis involves an in-depth look into the onboard computer (OBC) attitude determination algorithm. A review of TRW error analysis and necessary ground simulations to understand the onboard attitude determination process are performed. In addition, a plan is generated for the in-flight calibration and validation of OBC computed attitudes. Pre-mission expected accuracies are summarized and sensitivity of onboard algorithms to sensor anomalies and filter tuning parameters are addressed.
ANSYS-based birefringence property analysis of side-hole fiber induced by pressure and temperature
NASA Astrophysics Data System (ADS)
Zhou, Xinbang; Gong, Zhenfeng
2018-03-01
In this paper, we theoretically investigate the influences of pressure and temperature on the birefringence property of side-hole fibers with different shapes of holes using the finite element analysis method. A physical mechanism of the birefringence of the side-hole fiber is discussed with the presence of different external pressures and temperatures. The strain field distribution and birefringence values of circular-core, rectangular-core, and triangular-core side-hole fibers are presented. Our analysis shows the triangular-core side-hole fiber has low temperature sensitivity which weakens the cross sensitivity of temperature and strain. Additionally, an optimized structure design of the side-hole fiber is presented which can be used for the sensing application.
A novel bi-level meta-analysis approach: applied to biological pathway analysis.
Nguyen, Tin; Tagett, Rebecca; Donato, Michele; Mitrea, Cristina; Draghici, Sorin
2016-02-01
The accumulation of high-throughput data in public repositories creates a pressing need for integrative analysis of multiple datasets from independent experiments. However, study heterogeneity, study bias, outliers and the lack of power of available methods present real challenge in integrating genomic data. One practical drawback of many P-value-based meta-analysis methods, including Fisher's, Stouffer's, minP and maxP, is that they are sensitive to outliers. Another drawback is that, because they perform just one statistical test for each individual experiment, they may not fully exploit the potentially large number of samples within each study. We propose a novel bi-level meta-analysis approach that employs the additive method and the Central Limit Theorem within each individual experiment and also across multiple experiments. We prove that the bi-level framework is robust against bias, less sensitive to outliers than other methods, and more sensitive to small changes in signal. For comparative analysis, we demonstrate that the intra-experiment analysis has more power than the equivalent statistical test performed on a single large experiment. For pathway analysis, we compare the proposed framework versus classical meta-analysis approaches (Fisher's, Stouffer's and the additive method) as well as against a dedicated pathway meta-analysis package (MetaPath), using 1252 samples from 21 datasets related to three human diseases, acute myeloid leukemia (9 datasets), type II diabetes (5 datasets) and Alzheimer's disease (7 datasets). Our framework outperforms its competitors to correctly identify pathways relevant to the phenotypes. The framework is sufficiently general to be applied to any type of statistical meta-analysis. The R scripts are available on demand from the authors. sorin@wayne.edu Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. 
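The p-value combination methods compared above have compact closed forms. Below is a sketch of Stouffer's method and the Central Limit Theorem approximation to the additive (Edgington) method, using illustrative p-values rather than the paper's data:

```python
from scipy.stats import norm

def stouffer(pvals):
    """Stouffer's method: average the z-scores, rescale, convert back to a p-value."""
    z = sum(norm.isf(p) for p in pvals) / len(pvals) ** 0.5
    return norm.sf(z)

def additive_clt(pvals):
    """Additive (Edgington) method with the CLT approximation:
    under H0 each p-value is Uniform(0,1), so their sum is ~ N(k/2, k/12)."""
    k = len(pvals)
    z = (sum(pvals) - k / 2) / (k / 12) ** 0.5
    return norm.cdf(z)

# four moderately significant, consistent studies combine to strong evidence
ps = [0.04, 0.03, 0.05, 0.06]
```

Because the additive method depends on the sum of p-values rather than any single extreme value, one outlying p-value moves the combined statistic far less than it moves Fisher's or minP, which is the robustness property the paper builds on.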
Adjoint-Based Aerodynamic Design of Complex Aerospace Configurations
NASA Technical Reports Server (NTRS)
Nielsen, Eric J.
2016-01-01
An overview of twenty years of adjoint-based aerodynamic design research at NASA Langley Research Center is presented. Adjoint-based algorithms provide a powerful tool for efficient sensitivity analysis of complex large-scale computational fluid dynamics (CFD) simulations. Unlike alternative approaches for which computational expense generally scales with the number of design parameters, adjoint techniques yield sensitivity derivatives of a simulation output with respect to all input parameters at the cost of a single additional simulation. With modern large-scale CFD applications often requiring millions of compute hours for a single analysis, the efficiency afforded by adjoint methods is critical in realizing a computationally tractable design optimization capability for such applications.
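The scaling argument — the gradient with respect to all parameters for the cost of one extra solve — can be sketched on a linear model problem. This is a generic illustration of the discrete adjoint idea under assumed matrices, not NASA's CFD implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
n_state, n_params = 50, 200
A = np.eye(n_state) + 0.01 * rng.random((n_state, n_state))  # well-conditioned "flow" operator
M = rng.random((n_state, n_params))   # forcing linear in the parameters: b(p) = M @ p
c = rng.random(n_state)               # output functional J(p) = c^T u, with A u = b(p)

p = rng.random(n_params)

# Forward differencing would need one solve per parameter (n_params solves).
# The adjoint gives the full gradient with a single extra solve:
lam = np.linalg.solve(A.T, c)         # adjoint equation A^T lam = c
grad = M.T @ lam                      # dJ/dp = (db/dp)^T lam, all 200 components at once

# finite-difference check on one component
def J(p):
    return c @ np.linalg.solve(A, M @ p)

h = 1e-6
e0 = np.zeros(n_params); e0[0] = 1.0
fd0 = (J(p + h * e0) - J(p)) / h
```

Here 200 design parameters cost two linear solves total instead of 201, which is the same economy that makes adjoint CFD design tractable when a single analysis takes millions of compute hours.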
Noise spectroscopy as an equilibrium analysis tool for highly sensitive electrical biosensing
NASA Astrophysics Data System (ADS)
Guo, Qiushi; Kong, Tao; Su, Ruigong; Zhang, Qi; Cheng, Guosheng
2012-08-01
We demonstrate an approach for highly sensitive bio-detection based on silicon nanowire field-effect transistors by employing low frequency noise spectroscopy analysis. The inverse of the noise amplitude of the device exhibits an enhanced gate coupling effect in the strong inversion regime when measured in buffer solution compared with that in air. The approach was further validated by the detection of cardiac troponin I at 0.23 ng/ml in fetal bovine serum, in which a change of 2 orders of magnitude in noise amplitude was characterized. The selectivity of the proposed approach was also assessed by the addition of a 10 μg/ml bovine serum albumin solution.
NASA Technical Reports Server (NTRS)
Bittker, David A.; Radhakrishnan, Krishnan
1994-01-01
LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 3 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 3 explains the kinetics and kinetics-plus-sensitivity analysis problems supplied with LSENS and presents sample results. These problems illustrate the various capabilities of, and reaction models that can be solved by, the code and may provide a convenient starting point for the user to construct the problem data file required to execute LSENS. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions.
Hu, Xiangdong; Liu, Yujiang; Qian, Linxue
2017-01-01
Background: Real-time elastography (RTE) and shear wave elastography (SWE) are noninvasive and easily available imaging techniques that measure tissue strain, and it has been reported that the sensitivity and specificity of elastography in differentiating between benign and malignant thyroid nodules are better than those of conventional technologies. Methods: Relevant articles were searched in multiple databases; the comparison of elasticity index (EI) was conducted with Review Manager 5.0. Forest plots of sensitivity and specificity and SROC curves for RTE and SWE were produced with STATA 10.0 software. In addition, sensitivity analysis and bias analysis of the studies were conducted to examine the quality of the articles; to estimate possible publication bias, a funnel plot was used and the Egger test was conducted. Results: Twenty-two articles that satisfied the inclusion criteria were included in this study. After exclusions, there were 2106 benign and 613 malignant nodules. The meta-analysis suggested that the difference in EI between benign and malignant nodules was statistically significant (SMD = 2.11, 95% CI [1.67, 2.55], P < .00001). The overall sensitivities of RTE and SWE were roughly comparable, whereas the difference in specificities between these 2 methods was statistically significant. In addition, a statistically significant difference in AUC between RTE and SWE was observed (P < .01). Conclusion: The specificity of RTE was statistically higher than that of SWE, which suggests that, compared with SWE, RTE may be more accurate in differentiating benign and malignant thyroid nodules.
NASA Astrophysics Data System (ADS)
Ahmed, Hytham M.; Ebeid, Wael B.
2015-05-01
Analysis of complex samples is a challenge in pharmaceutical and biopharmaceutical analysis. In this work, tobramycin (TOB) analysis in human urine samples and recombinant human erythropoietin (rhEPO) analysis in the presence of a similar protein were selected as representative examples of such analyses. Assays of TOB in urine are difficult because of poor detectability. Therefore, a laser-induced fluorescence (LIF) detector was combined with a separation technique, micellar electrokinetic chromatography (MEKC), to determine TOB through derivatization with fluorescein isothiocyanate (FITC). Borate was used as the background electrolyte (BGE) with negatively charged mixed micelles as an additive. The method was successfully applied to urine samples. The LOD and LOQ for tobramycin in urine were 90 and 200 ng/ml, respectively, and recovery was >98% (n = 5). All urine samples were analyzed by direct injection without sample pre-treatment. Another hyphenated analytical technique, capillary zone electrophoresis (CZE) coupled to an ultraviolet (UV) detector, was used for sensitive analysis of rhEPO at low levels (2000 IU) in the presence of a large amount of human serum albumin (HSA). Analysis of rhEPO was achieved by the use of electrokinetic injection (EI) with discontinuous buffers. Phosphate buffer was used as the BGE with metal ions as an additive. The proposed method can be used for the estimation of a large number of quality control rhEPO samples in a short period.
2014-07-09
Rivera. Highly Sensitive Filter Paper Substrate for SERS Trace Explosives Detection , International Journal of Spectroscopy, (09 2012): 0. doi: 10.1155...Highly Sensitive Filter Paper Substrate for SERS Field Detection of Trace Threat Chemicals”, PITTCON-2013: Forensic Analysis in the Lab and Crime Scene...the surface. In addition, built-in algorithms were used for nearly real-time sample detection . Trace and bulk concentrations of the other substances
NASA Astrophysics Data System (ADS)
Khorashadi Zadeh, Farkhondeh; Nossent, Jiri; van Griensven, Ann; Bauwens, Willy
2017-04-01
Parameter estimation is a major concern in hydrological modeling, which may limit the use of complex simulators with a large number of parameters. To support the selection of parameters to include in or exclude from the calibration process, Global Sensitivity Analysis (GSA) is widely applied in modeling practice. Based on the results of GSA, the influential and the non-influential parameters are identified (i.e. parameter screening). Nevertheless, the choice of the screening threshold below which parameters are considered non-influential is a critical issue, which has recently received more attention in the GSA literature. In theory, the sensitivity index of a non-influential parameter has a value of zero. However, since numerical approximations, rather than analytical solutions, are utilized in GSA methods to calculate the sensitivity indices, small but non-zero values may be obtained for the indices of non-influential parameters. In order to assess the threshold that identifies non-influential parameters in GSA methods, we propose to calculate the sensitivity index of a "dummy parameter". This dummy parameter has no influence on the model output, but will have a non-zero sensitivity index, representing the error due to the numerical approximation. Hence, the parameters whose indices are above the sensitivity index of the dummy parameter can be classified as influential, whereas the parameters whose indices are below this index are within the range of the numerical error and should be considered non-influential. To demonstrate the effectiveness of the proposed "dummy parameter approach", 26 parameters of a Soil and Water Assessment Tool (SWAT) model are analyzed and screened, using the variance-based Sobol' and moment-independent PAWN methods. The sensitivity index of the dummy parameter is calculated from sampled data, without changing the model equations.
Moreover, the calculation does not even require additional model evaluations for the Sobol' method. A formal statistical test validates these parameter screening results. Based on the dummy parameter screening, 11 model parameters are identified as influential. It can therefore be noted that the "dummy parameter approach" can facilitate the parameter screening process and provide guidance for GSA users in defining a screening threshold, with only limited additional resources. Key words: Parameter screening, Global sensitivity analysis, Dummy parameter, Variance-based method, Moment-independent method
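The screening rule described above can be sketched numerically. A minimal illustration, assuming a toy analytical model and a crude binned estimator of first-order sensitivity (not the actual Sobol'/PAWN implementations or the SWAT model used in the study); the dummy parameter's index serves as the numerical-error floor:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Toy model: only the first two parameters influence the output.
    return x[:, 0] + 0.5 * x[:, 1] ** 2

n = 20000
# Three real parameters plus one appended "dummy" that the model ignores.
X = rng.uniform(0.0, 1.0, size=(n, 4))
y = model(X)

def first_order_index(xcol, y, bins=30):
    # Crude first-order sensitivity: variance of the conditional mean
    # E[y | x], estimated by quantile binning, divided by total variance.
    edges = np.quantile(xcol, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.searchsorted(edges, xcol, side="right") - 1, 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    counts = np.array([(idx == b).sum() for b in range(bins)])
    return np.average((cond_means - y.mean()) ** 2, weights=counts) / y.var()

indices = [first_order_index(X[:, j], y) for j in range(4)]
# The dummy's index reflects pure estimation noise; use it as the threshold.
threshold = indices[3]
influential = [j for j in range(3) if indices[j] > threshold]
```

Parameters whose estimated indices exceed the dummy's are flagged as influential; indices at or below it are indistinguishable from numerical error.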
Sensitivity Analysis for Studying Impacts of Aging on Population Toxicokinetics and Toxicodynamics
Assessing the impacts of toxicant exposures upon susceptible populations such as the elderly requires adequate characterization of prior long-term exposures, reductions in various organ functions, and potential intake of multiple drugs. Additionally, significant uncertainties and...
NASA Astrophysics Data System (ADS)
Aravindran, S.; Sahilah, A. M.; Aminah, A.
2014-09-01
Halal surveillance of halal ingredients incorporated in surimi-based products was studied using polymerase chain reaction (PCR)-Southern hybridization on-chip analysis. The primers used in this technique targeted the mitochondrial DNA (mtDNA) cytochrome b (cyt b) gene sequence, which is able to differentiate 7 species (beef, chicken, duck, goat, buffalo, lamb and pork) on a single chip. 17 (n = 17*3) different brands of surimi-based product were purchased randomly from a Selangor local market in January 2013. Of the 17 brands, 3 (n = 3*3) were positive for chicken DNA, 1 (n = 1*3) was positive for goat DNA, and in the remaining 13 brands (n = 13*3) no DNA species was detected. The sensitivity of the PCR-Southern hybridization primers in detecting each meat species was 0.1 ng. In the present study, it is evident that PCR-Southern hybridization analysis offered reliable results, owing to its highly specific and sensitive properties in detecting non-halal additives such as plasma protein incorporated in surimi-based products.
DOE Office of Scientific and Technical Information (OSTI.GOV)
FINSTERLE, STEFAN; JUNG, YOOJIN; KOWALSKY, MICHAEL
2016-09-15
iTOUGH2 (inverse TOUGH2) provides inverse modeling capabilities for TOUGH2, a simulator for multi-dimensional, multi-phase, multi-component, non-isothermal flow and transport in fractured porous media. iTOUGH2 performs sensitivity analyses, data-worth analyses, parameter estimation, and uncertainty propagation analyses in geosciences, reservoir engineering, and other application areas. iTOUGH2 supports a number of different combinations of fluids and components (equation-of-state (EOS) modules). In addition, the optimization routines implemented in iTOUGH2 can also be used for sensitivity analysis, automatic model calibration, and uncertainty quantification of any external code that uses text-based input and output files using the PEST protocol. iTOUGH2 solves the inverse problem by minimizing a non-linear objective function of the weighted differences between model output and the corresponding observations. Multiple minimization algorithms (derivative-free, gradient-based, and second-order; local and global) are available. iTOUGH2 also performs Latin Hypercube Monte Carlo simulations for uncertainty propagation analyses. A detailed residual and error analysis is provided. This upgrade includes (a) global sensitivity analysis methods, (b) dynamic memory allocation, (c) additional input features and output analyses, (d) increased forward simulation capabilities, (e) parallel execution on multicore PCs and Linux clusters, and (f) bug fixes. More details can be found at http://esd.lbl.gov/iTOUGH2.
Breathing dynamics based parameter sensitivity analysis of hetero-polymeric DNA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Talukder, Srijeeta; Sen, Shrabani; Chaudhury, Pinaki, E-mail: pinakc@rediffmail.com
We study the parameter sensitivity of hetero-polymeric DNA within the purview of DNA breathing dynamics. The degree of correlation between the mean bubble size and the model parameters is estimated for this purpose for three different DNA sequences. The analysis leads us to a better understanding of the sequence-dependent nature of the breathing dynamics of hetero-polymeric DNA. Out of the 14 model parameters for DNA stability in the statistical Poland-Scheraga approach, the hydrogen bond interaction ε_hb(AT) for an AT base pair and the ring factor ξ turn out to be the most sensitive parameters. In addition, the stacking interaction ε_st(TA-TA) for a TA-TA nearest-neighbor pair of base pairs is found to be the most sensitive one among all stacking interactions. Moreover, we also establish that the nature of the stacking interaction has a deciding effect on the DNA breathing dynamics, not the number of times a particular stacking interaction appears in a sequence. We show that the sensitivity analysis can be used as an effective measure to guide a stochastic optimization technique to find the kinetic rate constants related to the dynamics, as opposed to the case where the rate constants are measured using the conventional unbiased way of optimization.
A techno-economic assessment of grid connected photovoltaic system for hospital building in Malaysia
NASA Astrophysics Data System (ADS)
Mat Isa, Normazlina; Tan, Chee Wei; Yatim, AHM
2017-07-01
Conventionally, electricity in hospital buildings is supplied by the utility grid, which uses a mix of fuels including coal and gas. Owing to advances in renewable technology, many buildings are moving toward installing their own PV panels alongside the grid to capture the advantages of renewable energy. This paper presents an analysis of a grid-connected photovoltaic (GCPV) system for a hospital building in Malaysia. The discussion emphasizes the economic analysis based on the Levelized Cost of Energy (LCOE) and the Total Net Present Cost (TNPC) with respect to the annual interest rate. The analysis is performed using the Hybrid Optimization Model for Electric Renewables (HOMER) software, which produces optimization and sensitivity analysis results. The optimization results, followed by the sensitivity analysis, are also discussed in this article so that the impact of the grid-connected PV system can be evaluated. In addition, the benefit from the Net Metering (NeM) mechanism is discussed.
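The two economic metrics named above have simple discounted-cash-flow definitions. A minimal sketch with purely hypothetical cost and energy figures (not the HOMER inputs used in the paper): TNPC is capital plus discounted operating cost, and LCOE divides total discounted cost by total discounted energy:

```python
import numpy as np

# Hypothetical figures for a grid-connected PV system (illustrative only).
capital_cost = 250_000.0   # upfront cost, $
annual_om = 3_000.0        # yearly operation & maintenance cost, $
annual_energy = 180_000.0  # energy delivered per year, kWh
lifetime = 25              # project lifetime, years
rate = 0.06                # annual discount (interest) rate

years = np.arange(1, lifetime + 1)
discount = (1.0 + rate) ** -years  # discount factor for each year

# Total Net Present Cost: capital plus discounted O&M over the lifetime.
tnpc = capital_cost + np.sum(annual_om * discount)

# Levelized Cost of Energy: discounted cost per discounted kWh.
lcoe = tnpc / np.sum(annual_energy * discount)
```

With these inputs the LCOE lands near $0.125/kWh; the sensitivity analysis in the paper amounts to sweeping `rate` and re-evaluating both quantities.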
dos Santos, Marcelo R.; Sayegh, Ana L.C.; Armani, Rafael; Costa-Hong, Valéria; de Souza, Francis R.; Toschi-Dias, Edgar; Bortolotto, Luiz A.; Yonamine, Mauricio; Negrão, Carlos E.; Alves, Maria-Janieire N.N.
2018-01-01
OBJECTIVES: Misuse of anabolic androgenic steroids by athletes is a strategy used to enhance strength and skeletal muscle hypertrophy. However, their abuse leads to an imbalance in muscle sympathetic nerve activity, increased vascular resistance, and increased blood pressure, and the mechanisms underlying these alterations are still unknown. Therefore, we tested whether anabolic androgenic steroids could impair resting baroreflex sensitivity and cardiac sympathovagal control. In addition, we evaluated pulse wave velocity to ascertain the arterial stiffness of large vessels. METHODS: Fourteen male anabolic androgenic steroid users and 12 nonusers were studied. Heart rate, blood pressure, and respiratory rate were recorded. Baroreflex sensitivity was estimated by the sequence method, and cardiac autonomic control by analysis of the R-R interval. Pulse wave velocity was measured using a noninvasive automatic device. RESULTS: Mean spontaneous baroreflex sensitivity, baroreflex sensitivity to activation of the baroreceptors, and baroreflex sensitivity to deactivation of the baroreceptors were significantly lower in users than in nonusers. In the spectral analysis of heart rate variability, high-frequency activity was lower, while low-frequency activity was higher, in users than in nonusers. Moreover, the sympathovagal balance was higher in users. Users showed higher pulse wave velocity than nonusers, indicating arterial stiffness of the large vessels. Single linear regression analysis showed significant correlations of mean blood pressure with baroreflex sensitivity and pulse wave velocity. CONCLUSIONS: Our results provide evidence for lower baroreflex sensitivity and sympathovagal imbalance in anabolic androgenic steroid users, who also showed arterial stiffness. Together, these alterations might be the mechanisms triggering the increased blood pressure in this population. PMID:29791601
Lansdorp-Vogelaar, Iris; van Ballegooijen, Marjolein; Boer, Rob; Zauber, Ann; Habbema, J Dik F
2009-06-01
Estimates of the fecal occult blood test (FOBT) (Hemoccult II) sensitivity differed widely between screening trials and led to divergent conclusions on the effects of FOBT screening. We used microsimulation modeling to estimate a preclinical colorectal cancer (CRC) duration and sensitivity for unrehydrated FOBT from the data of 3 randomized controlled trials of Minnesota, Nottingham, and Funen. In addition to 2 usual hypotheses on the sensitivity of FOBT, we tested a novel hypothesis where sensitivity is linked to the stage of clinical diagnosis in the situation without screening. We used the MISCAN-Colon microsimulation model to estimate sensitivity and duration, accounting for differences between the trials in demography, background incidence, and trial design. We tested 3 hypotheses for FOBT sensitivity: sensitivity is the same for all preclinical CRC stages, sensitivity increases with each stage, and sensitivity is higher for the stage in which the cancer would have been diagnosed in the absence of screening than for earlier stages. Goodness-of-fit was evaluated by comparing expected and observed rates of screen-detected and interval CRC. The hypothesis with a higher sensitivity in the stage of clinical diagnosis gave the best fit. Under this hypothesis, sensitivity of FOBT was 51% in the stage of clinical diagnosis and 19% in earlier stages. The average duration of preclinical CRC was estimated at 6.7 years. Our analysis corroborated a long duration of preclinical CRC, with FOBT most sensitive in the stage of clinical diagnosis. (c) 2009 American Cancer Society.
A global sensitivity analysis approach for morphogenesis models.
Boas, Sonja E M; Navarro Jimenez, Maria I; Merks, Roeland M H; Blom, Joke G
2015-11-21
Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such 'black-box' models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operative mechanisms in morphogenesis. The workflow is applicable to all 'black-box' models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
Mukherjee, Shalini; Yadav, Rajeev; Yung, Iris; Zajdel, Daniel P.; Oken, Barry S.
2011-01-01
Objectives To determine 1) whether heart rate variability (HRV) was a sensitive and reliable measure in mental effort tasks carried out by healthy seniors and 2) whether non-linear approaches to HRV analysis, in addition to traditional time and frequency domain approaches, were useful to study such effects. Methods Forty healthy seniors performed two visual working memory tasks requiring different levels of mental effort, while ECG was recorded. They underwent the same tasks and recordings two weeks later. Traditional indices and 13 non-linear indices of HRV, including Poincaré, entropy and detrended fluctuation analysis (DFA), were determined. Results Time domain (especially mean R-R interval/RRI), frequency domain and, among nonlinear parameters, Poincaré and DFA were the most reliable indices. Mean RRI, time domain and Poincaré were also the most sensitive to different mental effort task loads and had the largest effect size. Conclusions Overall, linear measures were the most sensitive and reliable indices of mental effort. Among non-linear measures, Poincaré was the most reliable and sensitive, suggesting possible usefulness as an independent marker in cognitive function tasks in healthy seniors. Significance A large number of HRV parameters were both reliable and sensitive indices of mental effort, although the simple linear methods were the most sensitive. PMID:21459665
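The Poincaré descriptors mentioned above are computed from successive R-R interval pairs. A minimal sketch, assuming the standard SD1/SD2 definitions (spread across and along the identity line of the (RR_n, RR_{n+1}) scatter plot):

```python
import numpy as np

def poincare_sd1_sd2(rr):
    """Poincaré descriptors of an R-R interval series (e.g. in ms).

    SD1 captures short-term (beat-to-beat) variability, SD2 longer-term
    variability, from successive-interval pairs (RR_n, RR_{n+1}).
    """
    rr = np.asarray(rr, dtype=float)
    x, y = rr[:-1], rr[1:]
    sd1 = np.std((y - x) / np.sqrt(2.0), ddof=1)  # across the identity line
    sd2 = np.std((y + x) / np.sqrt(2.0), ddof=1)  # along the identity line
    return sd1, sd2
```

A useful sanity check is the identity SD1² = Var(ΔRR)/2, which follows directly from the definition.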
Loomba, Rohit S; Shah, Parinda H; Nijhawan, Karan; Aggarwal, Saurabh; Arora, Rohit
2015-03-01
Increased cardiothoracic ratio noted on chest radiographs often prompts concern and further evaluation with additional imaging. This study pools available data assessing the utility of cardiothoracic ratio in predicting left ventricular dilation. A systematic review of the literature was conducted to identify studies comparing cardiothoracic ratio by chest x-ray to left ventricular dilation by echocardiography. Electronic databases were used to identify studies which were then assessed for quality and bias, with those with adequate quality and minimal bias ultimately being included in the pooled analysis. The pooled data were used to determine the sensitivity, specificity, positive predictive value and negative predictive value of cardiomegaly in predicting left ventricular dilation. A total of six studies consisting of 466 patients were included in this analysis. Cardiothoracic ratio had 83.3% sensitivity, 45.4% specificity, 43.5% positive predictive value and 82.7% negative predictive value. When a secondary analysis was conducted with a pediatric study excluded, a total of five studies consisting of 371 patients were included. Cardiothoracic ratio had 86.2% sensitivity, 25.2% specificity, 42.5% positive predictive value and 74.0% negative predictive value. Cardiothoracic ratio as determined by chest radiograph is sensitive but not specific for identifying left ventricular dilation. Cardiothoracic ratio also has a strong negative predictive value for identifying left ventricular dilation.
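The four pooled metrics reported above follow directly from a 2x2 contingency table of test result versus echocardiographic dilation. A minimal sketch with hypothetical counts (the abstract does not give the pooled table):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard screening-test metrics from a 2x2 table, as fractions.

    tp/fp/fn/tn: true positives, false positives, false negatives,
    true negatives of the index test against the reference standard.
    """
    sensitivity = tp / (tp + fn)  # dilated hearts correctly flagged
    specificity = tn / (tn + fp)  # non-dilated hearts correctly cleared
    ppv = tp / (tp + fp)          # positive predictive value
    npv = tn / (tn + fn)          # negative predictive value
    return sensitivity, specificity, ppv, npv

# Illustrative counts only, not the pooled study data.
sens, spec, ppv, npv = diagnostic_metrics(tp=90, fp=10, fn=10, tn=90)
```

The abstract's pattern (high sensitivity and NPV, low specificity and PPV) is exactly what this table yields when false positives dominate false negatives.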
NASA Astrophysics Data System (ADS)
Chen, Tao; Ye, Meng-li; Liu, Shu-liang; Deng, Yan
2018-03-01
In view of the principle underlying the occurrence of cross-sensitivity, a series of calibration experiments is carried out to solve the cross-sensitivity problem of embedded fiber Bragg gratings (FBGs) using the reference grating method. Moreover, an ultrasonic-vibration-assisted grinding (UVAG) model is established, and finite element analysis (FEA) is carried out under the monitoring environment of the embedded temperature measurement system. In addition, the related temperature acquisition tests are set up in accordance with the requirements of the reference grating method. Finally, comparative analyses of the simulation and experimental results are performed, and it may be concluded that the reference grating method can be utilized to effectively resolve the cross-sensitivity of embedded FBGs.
Post Flight Analysis of Optical Specimens from MISSE7
NASA Technical Reports Server (NTRS)
Stewart, Alan F.; Finckenor, Miria
2012-01-01
More than 100 optical specimens were flown on the MISSE7 platform. These included bare substrates in addition to coatings designed to exhibit clearly defined or enhanced sensitivity to the accumulation of contamination. Measurements were performed using spectrophotometers operating from the UV through the IR as well as ellipsometry. Results will be presented in addition to discussion of the best options for design of samples for future exposure experiments.
Qian, Yushen; Pollom, Erqi L.; King, Martin T.; Dudley, Sara A.; Shaffer, Jenny L.; Chang, Daniel T.; Gibbs, Iris C.; Goldhaber-Fiebert, Jeremy D.; Horst, Kathleen C.
2016-01-01
Purpose The Clinical Evaluation of Pertuzumab and Trastuzumab (CLEOPATRA) study showed a 15.7-month survival benefit with the addition of pertuzumab to docetaxel and trastuzumab (THP) as first-line treatment for patients with human epidermal growth factor receptor 2 (HER2) –overexpressing metastatic breast cancer. We performed a cost-effectiveness analysis to assess the value of adding pertuzumab. Patient and Methods We developed a decision-analytic Markov model to evaluate the cost effectiveness of docetaxel plus trastuzumab (TH) with or without pertuzumab in US patients with metastatic breast cancer. The model followed patients weekly over their remaining lifetimes. Health states included stable disease, progressing disease, hospice, and death. Transition probabilities were based on the CLEOPATRA study. Costs reflected the 2014 Medicare rates. Health state utilities were the same as those used in other recent cost-effectiveness studies of trastuzumab and pertuzumab. Outcomes included health benefits expressed as discounted quality-adjusted life-years (QALYs), costs in US dollars, and cost effectiveness expressed as an incremental cost-effectiveness ratio. One- and multiway deterministic and probabilistic sensitivity analyses explored the effects of specific assumptions. Results Modeled median survival was 39.4 months for TH and 56.9 months for THP. The addition of pertuzumab resulted in an additional 1.81 life-years gained, or 0.62 QALYs, at a cost of $472,668 per QALY gained. Deterministic sensitivity analysis showed that THP is unlikely to be cost effective even under the most favorable assumptions, and probabilistic sensitivity analysis predicted 0% chance of cost effectiveness at a willingness to pay of $100,000 per QALY gained. Conclusion THP in patients with metastatic HER2-positive breast cancer is unlikely to be cost effective in the United States. PMID:26351332
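The incremental cost-effectiveness ratio used above divides incremental cost by incremental health benefit. A minimal sketch; the dollar and QALY totals below are hypothetical, chosen only so that 0.62 incremental QALYs reproduce the reported ~$472,668 per QALY gained:

```python
def icer(cost_new, cost_ref, effect_new, effect_ref):
    """Incremental cost-effectiveness ratio: extra dollars per extra QALY."""
    return (cost_new - cost_ref) / (effect_new - effect_ref)

# Hypothetical lifetime totals for THP vs. TH: a $293,054 incremental cost
# over 0.62 incremental QALYs gives roughly $472,668 per QALY gained.
ratio = icer(cost_new=500_000.0, cost_ref=206_946.0,
             effect_new=5.00, effect_ref=4.38)
```

A willingness-to-pay threshold of $100,000/QALY is then simply a comparison: the strategy is cost effective only if `ratio` falls below the threshold, which is why the probabilistic analysis reports a 0% chance here.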
Longitudinal study of factors affecting taste sense decline in old-old individuals.
Ogawa, T; Uota, M; Ikebe, K; Arai, Y; Kamide, K; Gondo, Y; Masui, Y; Ishizaki, T; Inomata, C; Takeshita, H; Mihara, Y; Hatta, K; Maeda, Y
2017-01-01
The sense of taste plays a pivotal role for personal assessment of the nutritional value, safety and quality of foods. Although it is commonly recognised that taste sensitivity decreases with age, alterations in that sensitivity over time in an old-old population have not been previously reported. Furthermore, no known studies utilised comprehensive variables regarding taste changes and related factors for assessments. Here, we report novel findings from a 3-year longitudinal study model aimed to elucidate taste sensitivity decline and its related factors in old-old individuals. We utilised 621 subjects aged 79-81 years who participated in the Septuagenarians, Octogenarians, Nonagenarians Investigation with Centenarians Study for baseline assessments performed in 2011 and 2012, and then conducted follow-up assessments 3 years later in 328 of those. Assessment of general health, an oral examination and determination of taste sensitivity were performed for each. We also evaluated cognitive function using Montreal Cognitive Assessment findings, then excluded from analysis those with a score lower than 20 in order to secure the validity and reliability of the subjects' answers. Contributing variables were selected using univariate analysis, then analysed with multivariate logistic regression analysis. We found that males showed significantly greater declines in taste sensitivity for sweet and sour tastes than females. Additionally, subjects with lower cognitive scores showed a significantly greater decrease in sensitivity for the salty taste in multivariate analysis. In conclusion, our longitudinal study revealed that gender and cognitive status are major factors affecting taste sensitivity in geriatric individuals. © 2016 John Wiley & Sons Ltd.
Sensitivity analysis of consumption cycles
NASA Astrophysics Data System (ADS)
Jungeilges, Jochen; Ryazanova, Tatyana; Mitrofanova, Anastasia; Popova, Irina
2018-05-01
We study the special case of a nonlinear stochastic consumption model taking the form of a 2-dimensional, non-invertible map with an additive stochastic component. Applying the concept of the stochastic sensitivity function and the related technique of confidence domains, we establish the conditions under which the system's complex consumption attractor is likely to become observable. It is shown that the level of noise intensities beyond which the complex consumption attractor is likely to be observed depends on the weight given to past consumption in an individual's preference adjustment.
Mukhtar, Hussnain; Lin, Yu-Pin; Shipin, Oleg V; Petway, Joy R
2017-07-12
This study presents an approach for obtaining realization sets of parameters for nitrogen removal in a pilot-scale waste stabilization pond (WSP) system. The proposed approach was designed for optimal parameterization, local sensitivity analysis, and global uncertainty analysis of a dynamic simulation model for the WSP by using the R software package Flexible Modeling Environment (R-FME) with the Markov chain Monte Carlo (MCMC) method. Additionally, generalized likelihood uncertainty estimation (GLUE) was integrated into the FME to evaluate the major parameters that affect the simulation outputs in the study WSP. Comprehensive modeling analysis was used to simulate and assess nine parameters and the concentrations of ON-N, NH₃-N and NO₃-N. Results indicate that the integrated FME-GLUE-based model, with good Nash-Sutcliffe coefficients (0.53-0.69) and correlation coefficients (0.76-0.83), successfully simulates the concentrations of ON-N, NH₃-N and NO₃-N. Moreover, the Arrhenius constant was the only parameter sensitive to the model's performance in the ON-N and NH₃-N simulations. However, the Nitrosomonas growth rate, the denitrification constant, and the maximum growth rate at 20 °C were sensitive to the ON-N and NO₃-N simulations, as measured by global sensitivity analysis.
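The Nash-Sutcliffe coefficient cited above is a standard goodness-of-fit measure for simulated versus observed series. A minimal sketch of its definition (1 is a perfect fit; 0 means the model does no better than the observed mean):

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency of simulated vs. observed values."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    # 1 - (residual sum of squares) / (variance of observations about their mean)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```

The 0.53-0.69 range reported above thus means the WSP model explains roughly half to two-thirds of the observed variance beyond a constant-mean predictor.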
Nakagawa, Hiroko; Yuno, Tomoji; Itho, Kiichi
2009-03-01
Recently, a specific method for detecting bacteria by flow cytometry with nucleic acid staining was developed as a function of automated urine formed-element analyzers for routine urine testing. Here, we performed a basic study of this bacteria analysis method. In addition, we compared it with urine sediment analysis, urine Gram staining and quantitative urine culture, the conventional methods used to date. As a result, bacteria analysis by flow cytometry with nucleic acid staining showed excellent reproducibility and higher sensitivity than microscopic urinary sediment analysis. Based on ROC curve analysis with the urine culture method as the reference standard, a cut-off level of 120/microL was defined, giving a sensitivity of 85.7% and a specificity of 88.2%. In the scattergram analysis, accompanied by the urine culture method, in 90% of rod-positive samples 80% of the dots appeared within 30 degrees of the X axis. In addition, one case even indicated that flow cytometric bacteria analysis and time-series scattergram analysis might be helpful in tracing the progress of the causative bacteria, so this information is expected to be clinically significant. Reporting bacteria information with the nucleic acid staining flow cytometry method is expected to contribute to rapid diagnosis and treatment of urinary tract infections. Moreover, its contribution to screening in microbiology and clinical chemistry will deliver a more efficient solution for urine analysis.
Mohammadkhani, Parvaneh; Pourshahbaz, Abbas; Kami, Maryam; Mazidi, Mahdi; Abasi, Imaneh
2016-01-01
Objective: Generalized anxiety disorder is one of the most common anxiety disorders in the general population. Several studies suggest that anxiety sensitivity is a vulnerability factor in generalized anxiety severity. However, some other studies suggest that negative repetitive thinking and experiential avoidance, as response factors, can explain this relationship. Therefore, this study aimed to investigate the mediating role of experiential avoidance and negative repetitive thinking in the relationship between anxiety sensitivity and generalized anxiety severity. Method: This was a cross-sectional and correlational study. A sample of 475 university students was selected through a stratified sampling method. The participants completed the Anxiety Sensitivity Inventory-3, Acceptance and Action Questionnaire-II, Perseverative Thinking Questionnaire, and Generalized Anxiety Disorder 7-item Scale. Data were analyzed by Pearson correlation, multiple regression analysis and path analysis. Results: The results revealed a positive relationship between anxiety sensitivity, particularly cognitive anxiety sensitivity, experiential avoidance, repetitive thinking and generalized anxiety severity. In addition, findings showed that repetitive thinking, but not experiential avoidance, fully mediated the relationship between cognitive anxiety sensitivity and generalized anxiety severity. The α level was p < 0.005. Conclusion: Consistent with the trans-diagnostic hypothesis, anxiety sensitivity predicts generalized anxiety severity, but its effect operates through the generation of repetitive negative thought. PMID:27928245
Cost benefit analysis of anti-strip additives in hot mix asphalt with various aggregates.
DOT National Transportation Integrated Search
2015-05-01
This report documents research on moisture sensitivity testing of hot-mix asphalt (HMA) mixes in Pennsylvania and the : associated use of antistrip. The primary objective of the research was to evaluate and compare benefit/cost ratios of mandatory us...
Naing, Cho; Poovorawan, Yong; Mak, Joon Wah; Aung, Kyan; Kamolratankul, Pirom
2015-06-01
The present study aimed to assess the cost-utility of using adjunctive recombinant activated factor VIIa (rFVIIa) in children for controlling life-threatening bleeding in dengue haemorrhagic fever (DHF)/dengue shock syndrome (DSS). We constructed a decision-tree model comparing standard care with the use of an additional adjuvant rFVIIa for controlling life-threatening bleeding in children with DHF/DSS. Cost and utility benefit were estimated from the societal perspective. The outcome measure was cost per quality-adjusted life year (QALY). Overall, treatment with adjuvant rFVIIa gained QALYs, but the total cost was higher. The incremental cost-utility ratio for the introduction of adjuvant rFVIIa was $4241.27 per additional QALY. Sensitivity analyses showed that the utility value assigned for the calculation of QALYs was the most sensitive parameter. We concluded that, despite the high cost, there is a role for rFVIIa in the treatment of life-threatening bleeding in patients with DHF/DSS.
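The incremental cost-utility ratio quoted above is the extra cost divided by the extra QALYs of the new strategy versus the comparator. A minimal sketch, with placeholder figures that are not the study's inputs:

```python
# Incremental cost-utility (cost-effectiveness) ratio sketch.
# All numbers below are illustrative placeholders.

def icer(cost_new, qaly_new, cost_std, qaly_std):
    """Extra cost per extra QALY of the new strategy vs. standard care."""
    return (cost_new - cost_std) / (qaly_new - qaly_std)

ratio = icer(cost_new=12000.0, qaly_new=14.0,
             cost_std=8000.0, qaly_std=13.0)   # $4000 per additional QALY
```

A one-way sensitivity analysis then re-evaluates this ratio while varying one input (e.g., the utility weight) across its plausible range.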
Probabilistic analysis of a materially nonlinear structure
NASA Technical Reports Server (NTRS)
Millwater, H. R.; Wu, Y.-T.; Fossum, A. F.
1990-01-01
A probabilistic finite element program is used to perform probabilistic analysis of a materially nonlinear structure. The program used in this study is NESSUS (Numerical Evaluation of Stochastic Structure Under Stress), under development at Southwest Research Institute. The cumulative distribution function (CDF) of the radial stress of a thick-walled cylinder under internal pressure is computed and compared with the analytical solution. In addition, sensitivity factors showing the relative importance of the input random variables are calculated. Significant plasticity is present in this problem and has a pronounced effect on the probabilistic results. The random input variables are the material yield stress and internal pressure with Weibull and normal distributions, respectively. The results verify the ability of NESSUS to compute the CDF and sensitivity factors of a materially nonlinear structure. In addition, the ability of the Advanced Mean Value (AMV) procedure to assess the probabilistic behavior of structures which exhibit a highly nonlinear response is shown. Thus, the AMV procedure can be applied with confidence to other structures which exhibit nonlinear behavior.
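The kind of analysis described above, propagating a Weibull-distributed yield stress and a normally distributed pressure through a response function to get a CDF and input-importance (sensitivity) factors, can be sketched with plain Monte Carlo. The response function below is a hypothetical stand-in, not the thick-walled-cylinder stress solution, and the distribution parameters are invented.

```python
# Monte Carlo sketch of probabilistic analysis: Weibull yield stress and
# normal pressure propagated through a stand-in response function; a CDF
# point is estimated and inputs are ranked by correlation with the output.
import math
import random

random.seed(1)

def weibull(shape, scale):
    """Inverse-CDF sampling of a 2-parameter Weibull variate."""
    u = random.random()
    return scale * (-math.log(1.0 - u)) ** (1.0 / shape)

def response(yield_stress, pressure):
    # Hypothetical limit-state, NOT the cylinder radial-stress solution.
    return yield_stress - 2.5 * pressure

n = 20000
ys = [weibull(shape=4.0, scale=300.0) for _ in range(n)]   # yield stress
ps = [random.gauss(80.0, 10.0) for _ in range(n)]          # internal pressure
g = [response(y, p) for y, p in zip(ys, ps)]

p_fail = sum(1 for v in g if v < 0.0) / n   # CDF of the response at zero

def corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den

# crude sensitivity factors: correlation of each input with the response
sens = {"yield": corr(ys, g), "pressure": corr(ps, g)}
```

Methods like the Advanced Mean Value procedure reach the same CDF far more cheaply than brute-force sampling, which is the point of codes such as NESSUS.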
Potential Interactions of Calcium-Sensitive Reagents with Zinc Ion in Different Cultured Cells
Fujikawa, Koichi; Fukumori, Ryo; Nakamura, Saki; Kutsukake, Takaya; Takarada, Takeshi; Yoneda, Yukio
2015-01-01
Background Several chemicals have been widely used to evaluate the involvement of free Ca2+ in mechanisms underlying a variety of biological responses for decades. Here, we report high reactivity to zinc of well-known Ca2+-sensitive reagents in diverse cultured cells. Methodology/Principal Findings In rat astrocytic C6 glioma cells loaded with the fluorescent Ca2+ dye Fluo-3, the addition of ZnCl2 gradually increased the fluorescence intensity in a manner sensitive to the Ca2+ chelator EGTA, irrespective of added CaCl2. The addition of the Ca2+ ionophore A23187 drastically increased Fluo-3 fluorescence in the absence of ZnCl2, while the addition of the Zn2+ ionophore pyrithione rapidly and additionally increased the fluorescence in the presence of ZnCl2, but not in its absence. In cells loaded with the zinc dye FluoZin-3 along with Fluo-3, a similarly gradual increase was seen in the fluorescence of Fluo-3, but not of FluoZin-3, in the presence of both CaCl2 and ZnCl2. Further addition of pyrithione drastically increased the fluorescence intensity of both dyes, while the addition of the Zn2+ chelator N,N,N',N'-tetrakis(2-pyridylmethyl)ethane-1,2-diamine (TPEN) rapidly and drastically decreased FluoZin-3 fluorescence. In cells loaded with FluoZin-3 alone, the addition of ZnCl2 induced a gradual increase in the fluorescence in a fashion independent of added CaCl2 but sensitive to EGTA. In C6 glioma cells exposed to ZnCl2, the ability to reduce 3-(4,5-dimethyl-2-thiazolyl)-2,5-diphenyl-2H-tetrazolium bromide was significantly inhibited in a manner sensitive to TPEN, EDTA and BAPTA, with pyrithione accelerating the inhibition. Similar inhibition occurred in an EGTA-sensitive fashion after brief exposure to ZnCl2 in pluripotent P19 cells, neuronal Neuro2A cells and microglial BV2 cells, which all expressed mRNA for particular zinc transporters.
Conclusions/Significance Taken together, comprehensive analysis is absolutely required to demonstrate the variety of physiological and pathological responses mediated by Ca2+ in diverse cells enriched in Zn2+. PMID:26010609
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan; Bittker, David A.
1993-01-01
A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS, are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include static system, steady, one-dimensional, inviscid flow, shock initiated reaction, and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method, which works efficiently for the extremes of very fast and very slow reaction, is used for solving the 'stiff' differential equation systems that arise in chemical kinetics. For static reactions, sensitivity coefficients of all dependent variables and their temporal derivatives with respect to the initial values of dependent variables and/or the rate coefficient parameters can be computed. This paper presents descriptions of the code and its usage, and includes several illustrative example problems.
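The reason an implicit integrator is essential for "stiff" kinetics can be shown on the simplest fast-decay model y' = -k·y: at a step size much larger than 1/k, backward (implicit) Euler stays stable while forward (explicit) Euler diverges. The rate constant and step size below are illustrative.

```python
# Stiffness sketch: implicit vs. explicit Euler on y' = -k*y with a step
# size far larger than the 1/k time scale of the fast mode.
k = 1000.0     # fast reaction rate (1/s), illustrative
h = 0.01       # step size >> 1/k
y_be, y_fe = 1.0, 1.0
for _ in range(100):
    y_be = y_be / (1.0 + k * h)    # backward Euler: stable, decays toward 0
    y_fe = y_fe * (1.0 - k * h)    # forward Euler: |1 - k*h| > 1, blows up
```

Production codes like LSENS use higher-order implicit (BDF-type) methods, but the stability contrast is the same: the implicit update damps the fast mode at any step size, so the step can be chosen for accuracy rather than stability.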
McConnell, Bradley K; Singh, Sonal; Fan, Qiying; Hernandez, Adriana; Portillo, Jesus P; Reiser, Peter J; Tikunova, Svetlana B
2015-01-01
The physiological consequences of aberrant Ca(2+) binding and exchange with cardiac myofilaments are not clearly understood. In order to examine the effect of decreasing Ca(2+) sensitivity of cTnC on cardiac function, we generated knock-in mice carrying a D73N mutation (not known to be associated with heart disease in human patients) in cTnC. The D73N mutation was engineered into the regulatory N-domain of cTnC in order to reduce Ca(2+) sensitivity of reconstituted thin filaments by increasing the rate of Ca(2+) dissociation. In addition, the D73N mutation drastically blunted the extent of Ca(2+) desensitization of reconstituted thin filaments induced by cTnI pseudo-phosphorylation. Compared to wild-type mice, heterozygous knock-in mice carrying the D73N mutation exhibited a substantially decreased Ca(2+) sensitivity of force development in skinned ventricular trabeculae. Kaplan-Meier survival analysis revealed that median survival time for knock-in mice was 12 weeks. Echocardiographic analysis revealed that knock-in mice exhibited increased left ventricular dimensions with thinner walls. Echocardiographic analysis also revealed that measures of systolic function, such as ejection fraction (EF) and fractional shortening (FS), were dramatically reduced in knock-in mice. In addition, knock-in mice displayed electrophysiological abnormalities, namely prolonged QRS and QT intervals. Furthermore, ventricular myocytes isolated from knock-in mice did not respond to β-adrenergic stimulation. Thus, knock-in mice developed pathological features similar to those observed in human patients with dilated cardiomyopathy (DCM). In conclusion, our results suggest that decreasing Ca(2+) sensitivity of the regulatory N-domain of cTnC is sufficient to trigger the development of DCM.
Auction development for the price-based electric power industry
NASA Astrophysics Data System (ADS)
Dekrajangpetch, Somgiat
The restructuring of the electric power industry is a move away from the cost-based monopolistic environment of the past to a price-based competitive environment. As the electric power industry restructures in many places, many problems still need to be solved. The work in this dissertation contributes to solving some of the electric power auction problems, with the majority aimed at helping to develop good markets. A Lagrangian relaxation (LR) Centralized Daily Commitment Auction (CDCA) has been implemented. It has been shown that the solution may be neither optimal nor fair to some generation companies (GENCOs) when identical or similar generating units participate in an LR-based CDCA. Supporting information for bidding strategies on how to change unit data to enhance the chances of bid acceptance has been developed. The majority of this work is based on the Single Period Commodity Auction (SPCA). Alternative structures for the SPCA are outlined. Whether the optimal solution is degenerate is investigated. Good pricing criteria are summarized, and a pricing method following these criteria is developed. Electricity is generally considered a homogeneous product; when availability level is used as an additional distinguishing characteristic, electricity can be considered a heterogeneous product, and a procedure to trade it as such is developed. The SPCA is formulated as a linear program. The basic IPLP algorithm has been extended so that sensitivity analysis can be performed as in the simplex method. Sensitivity analysis is used to determine market reach. Additionally, sensitivity analysis is used in combination with the investigation of historical auction results to provide raw data for power system expansion. Market power is a critical issue in electric power deregulation. Firms with market power have an advantage over other competitor firms in terms of market reach.
Various approaches to determining market power and market reach are investigated, including how firms can acquire additional customers or transactions given the auction results, and how firms can utilize their market power to enhance their chances of success.
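The LP sensitivity analysis mentioned above amounts to asking how the optimal auction cost changes as a constraint's right-hand side moves, i.e., finding shadow (market-clearing) prices. A toy sketch under stated assumptions: a single-period dispatch with additive linear costs, where merit-order (cheapest-first) dispatch solves the LP exactly, and the shadow price is estimated by perturbing demand. The unit data are hypothetical.

```python
# Toy single-period auction: merit-order dispatch solves the LP
# (minimize cost s.t. meeting demand, unit capacity limits), and
# perturbing demand by 1 MW reveals the shadow (clearing) price.

def dispatch_cost(demand, units):
    """Greedy merit-order dispatch. units = [(capacity_MW, price_per_MWh)]."""
    cost, remaining = 0.0, demand
    for cap, price in sorted(units, key=lambda u: u[1]):
        take = min(cap, remaining)
        cost += take * price
        remaining -= take
        if remaining <= 0:
            break
    return cost

units = [(100.0, 20.0), (80.0, 35.0), (50.0, 60.0)]   # hypothetical bids
base = dispatch_cost(150.0, units)                     # 100@20 + 50@35
shadow_price = dispatch_cost(151.0, units) - base      # marginal unit's price
```

In a general LP the same information comes from the dual variables at the optimum; the finite-difference form above is just the most transparent way to see it.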
Ahmed, Hytham M; Ebeid, Wael B
2015-05-15
The analysis of complex samples is a challenge in pharmaceutical and biopharmaceutical analysis. In this work, the analysis of tobramycin (TOB) in human urine and of recombinant human erythropoietin (rhEPO) in the presence of a similar protein were selected as representative examples of such samples. Assays of TOB in urine are difficult because of poor detectability; therefore, a laser-induced fluorescence (LIF) detector was combined with a separation technique, micellar electrokinetic chromatography (MEKC), to determine TOB after derivatization with fluorescein isothiocyanate (FITC). Borate was used as the background electrolyte (BGE) with negatively charged mixed micelles as an additive. The method was successfully applied to urine samples. The LOD and LOQ for tobramycin in urine were 90 and 200 ng/ml, respectively, and recovery was >98% (n=5). All urine samples were analyzed by direct injection without sample pre-treatment. Another hyphenated analytical technique, capillary zone electrophoresis (CZE) with ultraviolet (UV) detection, was used for the sensitive analysis of rhEPO at low levels (2000 IU) in the presence of a large amount of human serum albumin (HSA). Analysis of rhEPO was achieved using electrokinetic injection (EI) with discontinuous buffers. Phosphate buffer was used as the BGE with metal ions as an additive. The proposed method can be used for the estimation of large numbers of quality-control rhEPO samples in a short period. Copyright © 2015 Elsevier B.V. All rights reserved.
Analysis of ZDDP Content and Thermal Decomposition in Motor Oils Using NAA and NMR
NASA Astrophysics Data System (ADS)
Ferguson, S.; Johnson, J.; Gonzales, D.; Hobbs, C.; Allen, C.; Williams, S.
Zinc dialkyldithiophosphates (ZDDPs) are one of the most common anti-wear additives present in commercially-available motor oils. The ZDDP concentrations of motor oils are most commonly determined using inductively coupled plasma atomic emission spectroscopy (ICP-AES). As part of an undergraduate research project, we have determined the Zn concentrations of eight commercially-available motor oils and one oil additive using neutron activation analysis (NAA), which has potential for greater accuracy and less sensitivity to matrix effects as compared to ICP-AES. The 31P nuclear magnetic resonance (31P-NMR) spectra were also obtained for several oil additive samples which have been heated to various temperatures in order to study the thermal decomposition of ZDDPs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steinbrink, Nicholas M.N.; Weinheimer, Christian; Glück, Ferenc
The KATRIN experiment aims to determine the absolute neutrino mass by measuring the endpoint region of the tritium β-spectrum. As a large-scale experiment with a sharp energy resolution, high source luminosity and low background, it may also be capable of testing certain theories of neutrino interactions beyond the standard model (SM). An example of a non-SM interaction is right-handed currents mediated by right-handed W bosons in the left-right symmetric model (LRSM). In this extension of the SM, an additional SU(2)_R symmetry in the high-energy limit is introduced, which naturally includes sterile neutrinos and predicts the seesaw mechanism. In tritium β decay, this leads to an additional term from interference between left- and right-handed interactions, which enhances or suppresses certain regions near the endpoint of the beta spectrum. In this work, the sensitivity of KATRIN to right-handed currents is estimated for the scenario of a light sterile neutrino with a mass of a few eV. This analysis has been performed with a Bayesian analysis using Markov Chain Monte Carlo (MCMC). The simulations show that, in principle, KATRIN will be able to set sterile neutrino mass-dependent limits on the interference strength. The sensitivity is significantly increased if the Q value of the β decay can be sufficiently constrained. However, the sensitivity is not high enough to improve current upper limits from right-handed W boson searches at the LHC.
Santurtún, Ana; Riancho, José A; Arozamena, Jana; López-Duarte, Mónica; Zarrabeitia, María T
2017-01-01
Several methods have been developed to determine genetic profiles from mixed samples and for chimerism analysis in transplanted patients. The aim of this study was to explore the effectiveness of droplet digital PCR (ddPCR) for mixed chimerism detection (a mixture of genetic profiles resulting from allogeneic hematopoietic stem cell transplantation (HSCT)). We analyzed 25 DNA samples from patients who had undergone HSCT and compared the performance of ddPCR with two established methods for chimerism detection, based on Indel and STR analysis, respectively. Additionally, eight artificial DNA mixtures were created to evaluate the sensitivity of ddPCR. Our results show that the chimerism percentages estimated by the analysis of a single Indel using ddPCR were very similar to those calculated by the amplification of 15 STRs (r² = 0.970) and to the results obtained by the amplification of 38 Indels (r² = 0.975). Moreover, the amplification of a single Indel by ddPCR was sensitive enough to detect a minor DNA contributor comprising as little as 0.5% of the sample. We conclude that ddPCR can be a powerful tool for determining the genetic profile of forensic mixtures and for clinical chimerism analysis when traditional techniques are not sensitive enough.
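The chimerism percentage in a ddPCR experiment is typically derived by Poisson-correcting the droplet counts of a recipient-specific marker and a reference assay, then taking their ratio. A minimal sketch, with invented droplet counts (not the study's data):

```python
# ddPCR chimerism sketch: Poisson-correct droplet counts to per-droplet
# target concentrations, then express the recipient-specific marker as a
# percentage of a reference assay. Counts are hypothetical.
import math

def copies_per_droplet(negative, total):
    """Poisson correction: lambda = -ln(fraction of negative droplets)."""
    return -math.log(negative / total)

recipient = copies_per_droplet(negative=19800, total=20000)  # recipient Indel
reference = copies_per_droplet(negative=7500, total=20000)   # total-DNA assay
chimerism_pct = 100.0 * recipient / reference                # ~1% recipient
```

The Poisson correction is what lets ddPCR quantify a minor contributor absolutely, without a standard curve, which underlies the ~0.5% detection capability reported above.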
Temperature Compensation Fiber Bragg Grating Pressure Sensor Based on Plane Diaphragm
NASA Astrophysics Data System (ADS)
Liang, Minfu; Fang, Xinqiu; Ning, Yaosheng
2018-06-01
Pressure sensors are essential equipment in the field of pressure measurement. In this work, we propose a temperature-compensated fiber Bragg grating (FBG) pressure sensor based on a plane diaphragm. The plane diaphragm and a pressure-sensitive FBG (PS FBG) are used as the pressure-sensing components, and a temperature-compensation FBG (TC FBG) is used to mitigate temperature cross-sensitivity. A mechanical deformation model and a simulation analysis of the diaphragm's deformation characteristics are presented. The measurement principle and a theoretical analysis of the mathematical relationship between the FBG central wavelength shift and the applied pressure are introduced. The sensitivity and measurement range can be adjusted by using diaphragms of different materials and sizes to accommodate different measurement environments. Performance experiments were carried out, and the results indicate that the pressure sensitivity of the sensor is 35.7 pm/MPa over a range of 0 MPa to 50 MPa, with good linearity (linear-fit correlation coefficient of 99.95%). In addition, the sensor has the advantages of low frequency chirp and high stability, and can be used to measure pressure in mining engineering, civil engineering, or other complex environments.
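A sensitivity figure like 35.7 pm/MPa is just the slope of a least-squares line fitted to (pressure, wavelength-shift) calibration points. A sketch with fabricated, idealized data (the real calibration points are not given in the abstract):

```python
# Least-squares calibration sketch: slope = pressure sensitivity (pm/MPa).
# Data points are fabricated and noise-free for illustration.

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept of y = slope*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

pressures = [0.0, 10.0, 20.0, 30.0, 40.0, 50.0]   # MPa
shifts = [p * 35.7 for p in pressures]            # Bragg wavelength shift, pm
sensitivity, intercept = linear_fit(pressures, shifts)
```

With real (noisy) data the same fit also yields the correlation coefficient used to report linearity.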
Acevedo, Joseph R; Fero, Katherine E; Wilson, Bayard; Sacco, Assuntina G; Mell, Loren K; Coffey, Charles S; Murphy, James D
2016-11-10
Purpose Recently, a large randomized trial found a survival advantage among patients who received elective neck dissection in conjunction with primary surgery for clinically node-negative oral cavity cancer compared with those receiving primary surgery alone. However, elective neck dissection comes with greater upfront cost and patient morbidity. We present a cost-effectiveness analysis of elective neck dissection for the initial surgical management of early-stage oral cavity cancer. Methods We constructed a Markov model to simulate primary, adjuvant, and salvage therapy; disease recurrence; and survival in patients with T1/T2 clinically node-negative oral cavity squamous cell carcinoma. Transition probabilities were derived from clinical trial data; costs (in 2015 US dollars) and health utilities were estimated from the literature. Incremental cost-effectiveness ratios, expressed as dollar per quality-adjusted life-year (QALY), were calculated with incremental cost-effectiveness ratios less than $100,000/QALY considered cost effective. We conducted one-way and probabilistic sensitivity analyses to examine model uncertainty. Results Our base-case model found that over a lifetime the addition of elective neck dissection to primary surgery reduced overall costs by $6,000 and improved effectiveness by 0.42 QALYs compared with primary surgery alone. The decrease in overall cost despite the added neck dissection was a result of less use of salvage therapy. On one-way sensitivity analysis, the model was most sensitive to assumptions about disease recurrence, survival, and the health utility reduction from a neck dissection. Probabilistic sensitivity analysis found that treatment with elective neck dissection was cost effective 76% of the time at a willingness-to-pay threshold of $100,000/QALY. 
Conclusion Our study found that the addition of elective neck dissection reduces costs and improves health outcomes, making this a cost-effective treatment strategy for patients with early-stage oral cavity cancer.
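A probabilistic sensitivity analysis like the one reported (76% cost effective at $100,000/QALY) draws incremental costs and QALYs from assumed distributions and reports the fraction of draws with positive net monetary benefit at the willingness-to-pay threshold. The distribution parameters below are invented, loosely echoing the base case, and are not the study's inputs.

```python
# Probabilistic sensitivity analysis sketch: fraction of Monte Carlo draws
# that are cost effective at a willingness-to-pay (WTP) threshold.
import random

random.seed(7)
WTP = 100_000.0
n = 10000
wins = 0
for _ in range(n):
    d_cost = random.gauss(-6000.0, 8000.0)   # incremental cost ($), assumed
    d_qaly = random.gauss(0.42, 0.30)        # incremental QALYs, assumed
    # cost effective when net monetary benefit WTP*dQALY - dCost > 0
    if WTP * d_qaly - d_cost > 0:
        wins += 1
prob_cost_effective = wins / n
```

Repeating this over a range of WTP values produces a cost-effectiveness acceptability curve.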
Liu, Wei; Xu, Libin; Lamberson, Connor; Haas, Dorothea; Korade, Zeljka; Porter, Ned A.
2014-01-01
We describe a highly sensitive method for the detection of 7-dehydrocholesterol (7-DHC), the biosynthetic precursor of cholesterol, based on its reactivity with 4-phenyl-1,2,4-triazoline-3,5-dione (PTAD) in a Diels-Alder cycloaddition reaction. Samples of biological tissues and fluids with added deuterium-labeled internal standards were derivatized with PTAD and analyzed by LC-MS. This protocol permits fast processing of samples, short chromatography times, and high sensitivity. We applied this method to the analysis of cells, blood, and tissues from several sources, including human plasma. Another innovative aspect of this study is that it provides a reliable and highly reproducible measurement of 7-DHC in 7-dehydrocholesterol reductase (Dhcr7)-HET mouse (a model for Smith-Lemli-Opitz syndrome) samples, showing regional differences in the brain tissue. We found that the levels of 7-DHC are consistently higher in Dhcr7-HET mice than in controls, with the spinal cord and peripheral nerve showing the biggest differences. In addition to 7-DHC, sensitive analysis of desmosterol in tissues and blood was also accomplished with this PTAD method by assaying adducts formed from the PTAD “ene” reaction. The method reported here may provide a highly sensitive and high throughput way to identify at-risk populations having errors in cholesterol biosynthesis. PMID:24259532
Anomaly metrics to differentiate threat sources from benign sources in primary vehicle screening.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cohen, Israel Dov; Mengesha, Wondwosen
2011-09-01
Discrimination of benign sources from threat sources at Ports of Entry (POE) is of great importance for the efficient screening of cargo and vehicles using Radiation Portal Monitors (RPM). Currently, the RPMs' ability to distinguish these radiological sources is seriously hampered by the energy resolution of the deployed detectors. As naturally occurring radioactive materials (NORM) are ubiquitous in commerce, false alarms are problematic: they require additional resources in secondary inspection and impact commerce. To increase the sensitivity of such detection systems without increasing false alarm rates, alarm metrics need to incorporate the ability to distinguish benign and threat sources. Principal component analysis (PCA) and clustering techniques were implemented in the present study and investigated for their potential to lower false alarm rates and/or increase sensitivity to weaker threat sources without loss of specificity. Results of the investigation demonstrated improved sensitivity and specificity in discriminating benign sources from threat sources.
Lu, Hongzhi; Quan, Shuai; Xu, Shoufang
2017-11-08
In this work, we developed a simple and sensitive ratiometric fluorescent assay for sensing trinitrotoluene (TNT) based on the inner filter effect (IFE) between gold nanoparticles (AuNPs) and ratiometric fluorescent nanoparticles (RFNs), designed by hybridizing green-emissive carbon dots (CDs) and red-emissive quantum dots (QDs) into a silica sphere as a fluorophore pair. In the IFE-based assay, dispersed AuNPs act as a powerful absorber that quenches the CDs, while aggregated AuNPs quench the QDs, owing to the complementary overlap between the absorption spectrum of the AuNPs and the emission spectrum of the RFNs. Because TNT induces the aggregation of AuNPs, the addition of TNT quenches the fluorescence of the QDs while the fluorescence of the CDs recovers, making ratiometric fluorescent detection of TNT feasible. The present IFE-based ratiometric fluorescent sensor can detect TNT over a range of 0.1 to 270 nM, with a detection limit of 0.029 nM. In addition, the developed method was successfully applied to determine TNT in water and soil samples, with satisfactory recoveries ranging from 95 to 103% and precision below 4.5%. The simple sensing approach proposed here could improve the sensitivity of colorimetric analysis by converting ultraviolet analysis into ratiometric fluorescent analysis, and could promote the development of dual-mode detection systems.
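A detection limit like 0.029 nM is conventionally estimated as three times the standard deviation of the blank signal divided by the calibration slope. A sketch with invented blank readings and slope (not the paper's data):

```python
# 3-sigma detection-limit sketch for a response linear in concentration.
# Blank readings and calibration slope are hypothetical.

def detection_limit(blank_signals, slope):
    """LOD = 3 * (sample std dev of blank signal) / calibration slope."""
    n = len(blank_signals)
    mean = sum(blank_signals) / n
    var = sum((r - mean) ** 2 for r in blank_signals) / (n - 1)
    return 3.0 * var ** 0.5 / slope

blanks = [0.101, 0.099, 0.100, 0.102, 0.098]   # blank fluorescence ratios
lod_nM = detection_limit(blanks, slope=0.16)   # ratio change per nM, assumed
```

For a ratiometric assay the "signal" is the intensity ratio of the two emission bands, which cancels much of the instrumental drift that limits single-channel measurements.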
Inverse gene-for-gene interactions contribute additively to tan spot susceptibility in wheat.
Liu, Zhaohui; Zurn, Jason D; Kariyawasam, Gayan; Faris, Justin D; Shi, Gongjun; Hansen, Jana; Rasmussen, Jack B; Acevedo, Maricelis
2017-06-01
Tan spot susceptibility is conferred by multiple interactions of necrotrophic effector and host sensitivity genes. Tan spot of wheat, caused by Pyrenophora tritici-repentis, is an important disease in almost all wheat-growing areas of the world. The disease system is known to involve at least three fungal-produced necrotrophic effectors (NEs) that interact with the corresponding host sensitivity (S) genes in an inverse gene-for-gene manner to induce disease. However, it is unknown if the effects of these NE-S gene interactions contribute additively to the development of tan spot. In this work, we conducted disease evaluations using different races and quantitative trait loci (QTL) analysis in a wheat recombinant inbred line (RIL) population derived from a cross between two susceptible genotypes, LMPG-6 and PI 626573. The two parental lines each harbored a single known NE sensitivity gene with LMPG-6 having the Ptr ToxC sensitivity gene Tsc1 and PI 626573 having the Ptr ToxA sensitivity gene Tsn1. Transgressive segregation was observed in the population for all races. QTL mapping revealed that both loci (Tsn1 and Tsc1) were significantly associated with susceptibility to race 1 isolates, which produce both Ptr ToxA and Ptr ToxC, and the two genes contributed additively to tan spot susceptibility. For isolates of races 2 and 3, which produce only Ptr ToxA and Ptr ToxC, only Tsn1 and Tsc1 were associated with tan spot susceptibility, respectively. This work clearly demonstrates that tan spot susceptibility in this population is due primarily to two NE-S interactions. Breeders should remove both sensitivity genes from wheat lines to obtain high levels of tan spot resistance.
Phi, Xuan-Anh; Saadatmand, Sepideh; De Bock, Geertruida H; Warner, Ellen; Sardanelli, Francesco; Leach, Martin O; Riedl, Christopher C; Trop, Isabelle; Hooning, Maartje J; Mandel, Rodica; Santoro, Filippo; Kwan-Lim, Gek; Helbich, Thomas H; Tilanus-Linthorst, Madeleine MA; van den Heuvel, Edwin R; Houssami, Nehmat
2016-01-01
Background: We investigated the additional contribution of mammography to screening accuracy in BRCA1/2 mutation carriers screened with MRI at different ages using individual patient data from six high-risk screening trials. Methods: Sensitivity and specificity of MRI, mammography and the combination of these tests were compared stratified for BRCA mutation and age using generalised linear mixed models with random effect for studies. Number of screens needed (NSN) for additional mammography-only detected cancer was estimated. Results: In BRCA1/2 mutation carriers of all ages (BRCA1=1219 and BRCA2=732), adding mammography to MRI did not significantly increase screening sensitivity (increased by 3.9% in BRCA1 and 12.6% in BRCA2 mutation carriers, P>0.05). However, in women with BRCA2 mutation younger than 40 years, one-third of breast cancers were detected by mammography only. Number of screens needed for mammography to detect one breast cancer not detected by MRI was much higher for BRCA1 compared with BRCA2 mutation carriers at initial and repeat screening. Conclusions: Additional screening sensitivity from mammography above that from MRI is limited in BRCA1 mutation carriers, whereas mammography contributes to screening sensitivity in BRCA2 mutation carriers, especially those ⩽40 years. The evidence from our work highlights that a differential screening schedule by BRCA status is worth considering. PMID:26908327
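A rough way to see why mammography adds little on top of a highly sensitive MRI screen is to combine the two tests' sensitivities: under an (idealized, almost certainly optimistic) assumption that the tests miss cancers independently, a cancer is detected if either test flags it. The sensitivities below are illustrative round numbers, not the pooled estimates from this study.

```python
# Back-of-envelope combined sensitivity of two screening tests, assuming
# independent misses. Input sensitivities are illustrative.

def combined_sensitivity(sens_a, sens_b):
    """P(detected by A or B) = 1 - P(missed by A) * P(missed by B)."""
    return 1.0 - (1.0 - sens_a) * (1.0 - sens_b)

mri, mammo = 0.90, 0.40
combined = combined_sensitivity(mri, mammo)
added = combined - mri   # extra sensitivity from adding mammography
```

With a 90%-sensitive first test, even a 40%-sensitive second test adds only a few percentage points, which is why the number of screens needed per additional mammography-only detection is so high in BRCA1 carriers.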
NASA Astrophysics Data System (ADS)
Cao, Ensi; Yang, Yuqing; Cui, Tingting; Zhang, Yongjia; Hao, Wentao; Sun, Li; Peng, Hua; Deng, Xiao
2017-01-01
LaFeO3-δ nanoparticles were prepared by a citric sol-gel method with different raw materials and calcination processes. Choosing polyethylene glycol instead of ethylene glycol as the raw material, and adding a pre-calcination step at 400 °C rather than calcining directly at 600 °C, decreased the resistance owing to a reduction of the activation energy Ea. Meanwhile, choosing ethylene glycol as the raw material with the additional pre-calcination enhanced the sensitivity to ethanol. Comprehensive analysis of the sensitivity together with the XRD, SEM, TEM and XPS results indicates that the sensing performance of LaFeO3-δ is mainly determined by the oxygen species adsorbed on Fe ions, with a certain contribution from native active oxygen. The best sensitivity, 46.1 toward 200 ppm ethanol at the optimum working temperature of 112 °C, was obtained for the sample using ethylene glycol as the raw material with additional pre-calcination, and originates from its uniformly sized, well-dispersed particles as well as the high atomic ratio of Fe/La in the surface region.
Kim, Eun Ju; Lee, Dong Hun; Kim, Yeon Kyung; Kim, Min-Kyoung; Kim, Jung Yun; Lee, Min Jung; Choi, Won Woo; Eun, Hee Chul; Chung, Jin Ho
2014-12-01
Sensitive skin presents hyperactive sensory symptoms, with exaggerated reactions in response to internal stimulants or external irritants. Although sensitive skin is a very common condition, affecting an estimated 50% of the population, its pathophysiology remains largely elusive, particularly with regard to its metabolic aspects. The objective of our study was to investigate the pathogenesis of sensitive skin. We recruited healthy participants with 'sensitive' or 'non-sensitive' skin based on standardized questionnaires and a 10% lactic acid stinging test, and obtained skin samples for microarray analysis and subsequent experiments. Microarray transcriptome profiling revealed that genes involved in muscle contraction, carbohydrate and lipid metabolism, and ion transport and balance were significantly decreased in sensitive skin. These altered genes could account for the abnormal muscle contraction and decreased ATP levels in sensitive skin. In addition, pain-related transcripts such as TRPV1, ASIC3 and CGRP were significantly up-regulated in sensitive skin compared with non-sensitive skin. Our findings suggest that sensitive skin is closely associated with dysfunction of muscle contraction and metabolic homeostasis. Copyright © 2014 Japanese Society for Investigative Dermatology. Published by Elsevier Ireland Ltd. All rights reserved.
Mukherjee, Shalini; Yadav, Rajeev; Yung, Iris; Zajdel, Daniel P; Oken, Barry S
2011-10-01
To determine (1) whether heart rate variability (HRV) was a sensitive and reliable measure in mental effort tasks carried out by healthy seniors and (2) whether non-linear approaches to HRV analysis, in addition to traditional time- and frequency-domain approaches, were useful for studying such effects. Forty healthy seniors performed two visual working memory tasks requiring different levels of mental effort while ECG was recorded. They underwent the same tasks and recordings 2 weeks later. Traditional indices and 13 non-linear indices of HRV, including Poincaré, entropy and detrended fluctuation analysis (DFA), were determined. Time-domain indices, especially mean R-R interval (RRI), frequency-domain indices and, among the non-linear parameters, Poincaré and DFA were the most reliable. Mean RRI, time-domain indices and Poincaré were also the most sensitive to the different mental effort task loads and had the largest effect sizes. Overall, linear measures were the most sensitive and reliable indices of mental effort. Among the non-linear measures, Poincaré was the most reliable and sensitive, suggesting possible usefulness as an independent marker in cognitive function tasks in healthy seniors. A large number of HRV parameters were both reliable and sensitive indices of mental effort, although the simple linear methods were the most sensitive. Copyright © 2011 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
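Among the non-linear indices listed above, the Poincaré descriptors SD1 and SD2 can be computed directly from an R-R interval series. The sketch below uses population variances and synthetic intervals; the study's exact preprocessing is not specified, so this is illustrative only:

```python
import math

def poincare_sd1_sd2(rri):
    """Poincaré plot descriptors from successive R-R intervals (ms).
    SD1: short-term variability (spread perpendicular to the identity line);
    SD2: long-term variability (spread along the identity line)."""
    diffs = [b - a for a, b in zip(rri, rri[1:])]
    mean_d = sum(diffs) / len(diffs)
    var_d = sum((d - mean_d) ** 2 for d in diffs) / len(diffs)
    mean_r = sum(rri) / len(rri)
    var_r = sum((r - mean_r) ** 2 for r in rri) / len(rri)
    sd1 = math.sqrt(var_d / 2)
    sd2 = math.sqrt(max(2 * var_r - var_d / 2, 0.0))
    return sd1, sd2

# Synthetic R-R intervals (ms), for illustration only.
sd1, sd2 = poincare_sd1_sd2([800, 810, 790, 805, 795, 815])
```

In practice SD1 tracks beat-to-beat (parasympathetic) variation, which is why Poincaré indices can respond to mental-effort load.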
Hierarchical Nanogold Labels to Improve the Sensitivity of Lateral Flow Immunoassay
NASA Astrophysics Data System (ADS)
Serebrennikova, Kseniya; Samsonova, Jeanne; Osipov, Alexander
2018-06-01
Lateral flow immunoassay (LFIA) is a widely used express method that offers advantages such as a short analysis time and simplicity of testing and result evaluation. However, LFIA based on gold nanospheres lacks the desired sensitivity, limiting its wider application. In this study, spherical nanogold labels along with new types of nanogold labels, gold nanopopcorns and nanostars, were prepared, characterized, and applied to LFIA of the model protein antigen procalcitonin. It was found that a label with a structure close to spherical provided a more uniform distribution of specific antibodies on its surface, indicative of its suitability for this type of analysis. LFIA using gold nanopopcorns as a label allowed procalcitonin detection over a linear range of 0.5-10 ng mL-1 with a limit of detection of 0.1 ng mL-1, fivefold better than the sensitivity of the assay with gold nanospheres. Another approach to improving the sensitivity of the assay was the silver enhancement method, which was used for comparison as an amplified LFIA for procalcitonin detection. The sensitivity of procalcitonin determination by this method was 10 times better than that of the conventional LFIA with a gold nanosphere label. The proposed LFIA based on gold nanopopcorns improved the detection sensitivity without additional steps and without increased consumption of specific reagents (antibodies).
Sensitivity curves for searches for gravitational-wave backgrounds
NASA Astrophysics Data System (ADS)
Thrane, Eric; Romano, Joseph D.
2013-12-01
We propose a graphical representation of detector sensitivity curves for stochastic gravitational-wave backgrounds that takes into account the increase in sensitivity that comes from integrating over frequency in addition to integrating over time. This method is valid for backgrounds that have a power-law spectrum in the analysis band. We call these graphs “power-law integrated curves.” For simplicity, we consider cross-correlation searches for unpolarized and isotropic stochastic backgrounds using two or more detectors. We apply our method to construct power-law integrated sensitivity curves for second-generation ground-based detectors such as Advanced LIGO, space-based detectors such as LISA and the Big Bang Observer, and timing residuals from a pulsar timing array. The code used to produce these plots is available at https://dcc.ligo.org/LIGO-P1300115/public for researchers interested in constructing similar sensitivity curves.
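The construction described — scale each candidate power-law background so that its broadband signal-to-noise ratio reaches a detection threshold, then take the envelope over spectral indices — can be sketched as follows. The flat noise curve, observation time, threshold, and reference frequency below are hypothetical placeholders, and the SNR expression is the idealized two-detector cross-correlation formula from the paper, discretized crudely:

```python
import math

def pi_curve(freqs, omega_eff, t_obs, snr_threshold, betas, f_ref=25.0):
    """Power-law integrated (PI) sensitivity curve sketch.
    For each spectral index beta, scale Omega_beta*(f/f_ref)**beta so the
    broadband SNR equals the threshold, where
    SNR^2 ~ 2*T * sum_f df * (Omega_gw(f)/Omega_eff(f))^2,
    then take the pointwise envelope over all beta."""
    df = freqs[1] - freqs[0]  # assumes a uniform frequency grid
    curves = []
    for beta in betas:
        integral = df * sum(((f / f_ref) ** beta / oe) ** 2
                            for f, oe in zip(freqs, omega_eff))
        omega_beta = snr_threshold / math.sqrt(2 * t_obs * integral)
        curves.append([omega_beta * (f / f_ref) ** beta for f in freqs])
    return [max(c[i] for c in curves) for i in range(len(freqs))]

# Hypothetical flat effective noise over an arbitrary band:
freqs = [10.0 + i for i in range(90)]
noise = [1e-9] * len(freqs)
envelope = pi_curve(freqs, noise, t_obs=1.0e7, snr_threshold=1.0,
                    betas=[-4, -2, 0, 2, 4])
```

A background lying above the envelope at any frequency would, for its best-matching power law, be detectable at the chosen threshold.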
Barreiros, Willian; Teodoro, George; Kurc, Tahsin; Kong, Jun; Melo, Alba C. M. A.; Saltz, Joel
2017-01-01
We investigate efficient sensitivity analysis (SA) of algorithms that segment and classify image features in a large dataset of high-resolution images. Algorithm SA is the process of evaluating variations of methods and parameter values to quantify differences in the output. An SA can be very compute-demanding because it requires re-processing the input dataset several times with different parameters to assess variations in the output. In this work, we introduce strategies to speed up SA via runtime optimizations targeting distributed hybrid systems and reuse of computations from runs with different parameters. We evaluate our approach using a cancer image analysis workflow on a hybrid cluster with 256 nodes, each with an Intel Phi and a dual-socket CPU. The SA attained a parallel efficiency of over 90% on 256 nodes. Cooperative execution using the CPUs and the Phi available in each node, with smart task assignment strategies, resulted in an additional speedup of about 2×. Finally, multi-level computation reuse led to an additional speedup of up to 2.46× in the parallel version. The level of performance attained with the proposed optimizations will allow the use of SA in large-scale studies. PMID:29081725
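The scaling figures quoted (over 90% parallel efficiency on 256 nodes, roughly 2× cooperative speedup) follow the standard definitions; a minimal sketch with hypothetical timings, not the paper's measurements:

```python
def speedup(t_serial, t_parallel):
    """How many times faster the parallel run is than the serial run."""
    return t_serial / t_parallel

def parallel_efficiency(t_serial, t_parallel, n_nodes):
    """Fraction of ideal linear scaling achieved on n_nodes."""
    return speedup(t_serial, t_parallel) / n_nodes

# Hypothetical timings: 1 node takes 4,608 s, 256 nodes take 20 s.
eff = parallel_efficiency(4608.0, 20.0, 256)  # 0.9, i.e. 90% efficiency
```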
First- and second-order sensitivity analysis of linear and nonlinear structures
NASA Technical Reports Server (NTRS)
Haftka, R. T.; Mroz, Z.
1986-01-01
This paper employs the principle of virtual work to derive sensitivity derivatives of structural response with respect to stiffness parameters using both direct and adjoint approaches. The computations required are based on additional load conditions characterized by imposed initial strains, body forces, or surface tractions. As such, they are equally applicable to numerical or analytical solution techniques. The relative efficiency of various approaches for calculating first and second derivatives is assessed. It is shown that for the evaluation of second derivatives the most efficient approach is one that makes use of both the first-order sensitivities and adjoint vectors. Two example problems are used for demonstrating the various approaches.
Global sensitivity analysis of a dynamic agroecosystem model under different irrigation treatments
USDA-ARS?s Scientific Manuscript database
Savings in consumptive use through limited or deficit irrigation in agriculture has become an increasingly viable source of additional water for places with high population growth such as the Colorado Front Range, USA. Crop models provide a mechanism to evaluate various management methods without pe...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-13
... number. Comments also should not include any sensitive health information, such as medical records or other individually identifiable health information. In addition, comments should not include any ``[t... fertilizers: nitrogen, phosphate, and potash, as well as control release fertilizers and micronutrients...
Mass Spectrometry contamination from Tinuvin 770, a common additive in laboratory plastics
USDA-ARS?s Scientific Manuscript database
The superior sensitivity of current mass spectrometers makes them prone to contamination issues which can have deleterious effects on sample analysis. Here, Bis(2,2,6,6-tetramethyl-4-piperidyl) sebacate (marketed under the name Tinuvin 770) is identified as a major contaminant in applications utiliz...
Analysis of JPSS J1 VIIRS Polarization Sensitivity Using the NIST T-SIRCUS
NASA Technical Reports Server (NTRS)
McIntire, Jeffrey W.; Young, James B.; Moyer, David; Waluschka, Eugene; Oudrari, Hassan; Xiong, Xiaoxiong
2015-01-01
The polarization sensitivity of the Joint Polar Satellite System (JPSS) J1 Visible Infrared Imaging Radiometer Suite (VIIRS) measured pre-launch using a broadband source was observed to be larger than expected for many reflective bands. Ray trace modeling predicted that the observed polarization sensitivity was the result of larger diattenuation at the edges of the focal plane filter spectral bandpass. Additional ground measurements were performed using a monochromatic source (the NIST T-SIRCUS) to input linearly polarized light at a number of wavelengths across the bandpass of two VIIRS spectral bands and two scan angles. This work describes the data processing, analysis, and results derived from the T-SIRCUS measurements, comparing them with broadband measurements. Results have shown that the observed degree of linear polarization, when weighted by the sensor's spectral response function, is generally larger on the edges and smaller in the center of the spectral bandpass, as predicted. However, phase angle changes in the center of the bandpass differ between model and measurement. Integration of the monochromatic polarization sensitivity over wavelength produced results consistent with the broadband source measurements, for all cases considered.
Cox, Jonathan T.; Kronewitter, Scott R.; Shukla, Anil K.; ...
2014-09-15
Subambient pressure ionization with nanoelectrospray (SPIN) has proven to be effective in producing ions with high efficiency and transmitting them to low pressures for high-sensitivity mass spectrometry (MS) analysis. Here we present evidence that the SPIN source not only improves MS sensitivity but also allows for gentler ionization conditions. The gentleness of a conventional heated capillary electrospray ionization (ESI) source and the SPIN source was compared by liquid chromatography mass spectrometry (LC-MS) analysis of colominic acid. Colominic acid is a mixture of sialic acid polymers of different lengths containing labile glycosidic linkages between monomer units, necessitating a gentle ion source. By coupling the SPIN source with high resolution mass spectrometry and using advanced data processing tools, we demonstrate much extended coverage of sialic acid polymer chains as compared to using the conventional ESI source. Additionally, we show that SPIN-LC-MS is effective in elucidating polymer features with high efficiency and high sensitivity previously unattainable by conventional ESI-LC-MS methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karaulanov, Todor; Savukov, Igor; Kim, Young Jin
We constructed a spin-exchange relaxation-free (SERF) magnetometer with a small angle between the pump and probe beams, facilitating a multi-channel design with a flat pancake cell. This configuration provides almost complete overlap of the beams in the cell and prevents the pump beam from entering the probe detection channel. By coupling the lasers into multi-mode fibers, without an optical isolator or field modulation, we demonstrate a sensitivity of 10 fT/√Hz for frequencies between 10 Hz and 100 Hz. In addition to the experimental study of sensitivity, we present a theoretical analysis of the SERF magnetometer response to magnetic fields for small-angle and parallel-beam configurations, and show that at optimal DC offset fields the magnetometer response is comparable to that in the orthogonal-beam configuration. Based on the analysis, we also derive fundamental and probe-limited sensitivities for the arbitrary non-orthogonal geometry. The expected practical and fundamental sensitivities are of the same order as those in the orthogonal geometry. As a result, we anticipate that our design will be useful for magnetoencephalography (MEG) and magnetocardiography (MCG) applications.
Commercial test kits for detection of Lyme borreliosis: a meta-analysis of test accuracy
Cook, Michael J; Puri, Basant K
2016-01-01
The clinical diagnosis of Lyme borreliosis can be supported by various test methodologies; test kits are available from many manufacturers. Literature searches were carried out to identify studies that reported characteristics of the test kits. Of 50 searched studies, 18 were included where the tests were commercially available and samples were proven to be positive using serology testing, evidence of an erythema migrans rash, and/or culture. Additional requirements were a test specificity of ≥85% and publication in the last 20 years. The weighted mean sensitivity for all tests and for all samples was 59.5%. Individual study means varied from 30.6% to 86.2%. Sensitivity for each test technology varied from 62.4% for Western blot kits, and 62.3% for enzyme-linked immunosorbent assay tests, to 53.9% for synthetic C6 peptide ELISA tests and 53.7% when the two-tier methodology was used. Test sensitivity increased as dissemination of the pathogen affected different organs; however, the absence of data on the time from infection to serological testing and the lack of standard definitions for “early” and “late” disease prevented analysis of test sensitivity versus time of infection. The lack of standardization of the definitions of disease stage and the possibility of retrospective selection bias prevented clear evaluation of test sensitivity by “stage”. The sensitivity for samples classified as acute disease was 35.4%, with a corresponding sensitivity of 64.5% for samples from patients defined as convalescent. Regression analysis demonstrated an improvement of 4% in test sensitivity over the 20-year study period. The studies did not provide data to indicate the sensitivity of tests used in a clinical setting since the effect of recent use of antibiotics or steroids or other factors affecting antibody response was not factored in. The tests were developed for only specific Borrelia species; sensitivities for other species could not be calculated. PMID:27920571
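The pooled figures above are sample-size-weighted means of per-study sensitivities; a minimal sketch with hypothetical study entries, not the meta-analysis data:

```python
def weighted_mean_sensitivity(studies):
    """Pool per-study sensitivities weighted by sample size.
    studies: list of (n_samples, sensitivity_percent) tuples."""
    total_n = sum(n for n, _ in studies)
    return sum(n * s for n, s in studies) / total_n

# Hypothetical studies: (number of proven-positive samples, sensitivity %).
pooled = weighted_mean_sensitivity([(50, 30.6), (120, 60.0), (80, 86.2)])
```

Weighting by sample size keeps small studies with extreme sensitivities from dominating the pooled estimate.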
Characterization of Homopolymer and Polymer Blend Films by Phase Sensitive Acoustic Microscopy
NASA Astrophysics Data System (ADS)
Ngwa, Wilfred; Wannemacher, Reinhold; Grill, Wolfgang
2003-03-01
We have used phase sensitive acoustic microscopy (PSAM) to study homopolymer thin films of polystyrene (PS) and poly(methyl methacrylate) (PMMA), as well as PS/PMMA blend films. Our results show that PSAM can be used as a complementary and highly valuable technique for elucidating the three-dimensional (3D) morphology and micromechanical properties of thin films. Three-dimensional image acquisition with vector contrast provides the basis for complex V(z) analysis (per image pixel), 3D image processing, height profiling, and subsurface image analysis of the polymer films. Results show good agreement with previous studies. In addition, important new information on the three-dimensional structure and properties of polymer films is obtained. Homopolymer film structure analysis reveals (pseudo-)dewetting by retraction of droplets, resulting in a morphology that can serve as a starting point for the analysis of polymer blend thin films. The outcomes of confocal laser scanning microscopy studies performed on the same samples are correlated with the obtained results. Advantages and limitations of PSAM are discussed.
NASA Astrophysics Data System (ADS)
Yang, Xiaofeng; Cui, Yu; Li, Yexin; Zheng, Luyi; Xie, Lijun; Ning, Rui; Liu, Zheng; Lu, Junling; Zhang, Gege; Liu, Chunxiang; Zhang, Guangyou
2015-02-01
A new probe was synthesized by incorporating an α,β-unsaturated ketone into a diketopyrrolopyrrole fluorophore. The probe exhibited a selective and sensitive response to sulfite against thirteen other anions and biothiols (Cys, Hcy and GSH), through the nucleophilic addition of sulfite to the alkene of the probe, with a detection limit of 0.1 μM in HEPES (10 mM, pH 7.4) THF/H2O (1:1, v/v). Meanwhile, 20 min after sulfite addition, the probe solution visibly changed from pink to colorless to the naked eye, and from pink to blue under a UV lamp. NMR and mass spectral analysis confirmed the expected addition of sulfite to the C=C bonds.
Cost-effectiveness of Chlamydia Vaccination Programs for Young Women
Chesson, Harrell W.; Gift, Thomas L.; Brunham, Robert C.; Bolan, Gail
2015-01-01
We explored potential cost-effectiveness of a chlamydia vaccine for young women in the United States by using a compartmental heterosexual transmission model. We tracked health outcomes (acute infections and sequelae measured in quality-adjusted life-years [QALYs]) and determined incremental cost-effectiveness ratios (ICERs) over a 50-year analytic horizon. We assessed vaccination of 14-year-old girls and catch-up vaccination for 15–24-year-old women in the context of an existing chlamydia screening program and assumed 2 prevaccination prevalences of 3.2% by main analysis and 3.7% by additional analysis. Estimated ICERs of vaccinating 14-year-old girls were $35,300/QALY by main analysis and $16,200/QALY by additional analysis compared with only screening. Catch-up vaccination for 15–24-year-old women resulted in estimated ICERs of $53,200/QALY by main analysis and $26,300/QALY by additional analysis. The ICER was most sensitive to prevaccination prevalence for women, followed by cost of vaccination, duration of vaccine-conferred immunity, and vaccine efficacy. Our results suggest that a successful chlamydia vaccine could be cost-effective. PMID:25989525
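The incremental cost-effectiveness ratios reported follow the standard ICER definition; a minimal sketch in which the dollar and QALY totals are hypothetical, chosen only to reproduce a $35,300/QALY-style ratio:

```python
def icer(cost_new, qaly_new, cost_base, qaly_base):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained
    by the new strategy relative to the comparator."""
    return (cost_new - cost_base) / (qaly_new - qaly_base)

# Hypothetical totals: vaccination adds $3.53M in cost and 100 QALYs
# over screening alone.
print(icer(13_530_000, 1100, 10_000_000, 1000))  # -> 35300.0
```

A strategy is then judged by comparing its ICER against a willingness-to-pay threshold per QALY.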
Kauppila, T J; Flink, A; Pukkila, J; Ketola, R A
2016-02-28
Fast methods that allow the in situ analysis of explosives from a variety of surfaces are needed in crime scene investigations and home-land security. Here, the feasibility of the ambient mass spectrometry technique desorption atmospheric pressure photoionization (DAPPI) in the analysis of the most common nitrogen-based explosives is studied. DAPPI and desorption electrospray ionization (DESI) were compared in the direct analysis of trinitrotoluene (TNT), trinitrophenol (picric acid), octogen (HMX), cyclonite (RDX), pentaerythritol tetranitrate (PETN), and nitroglycerin (NG). The effect of different additives in DAPPI dopant and in DESI spray solvent on the ionization efficiency was tested, as well as the suitability of DAPPI to detect explosives from a variety of surfaces. The analytes showed ions only in negative ion mode. With negative DAPPI, TNT and picric acid formed deprotonated molecules with all dopant systems, while RDX, HMX, PETN and NG were ionized by adduct formation. The formation of adducts was enhanced by addition of chloroform, formic acid, acetic acid or nitric acid to the DAPPI dopant. DAPPI was more sensitive than DESI for TNT, while DESI was more sensitive for HMX and picric acid. DAPPI could become an important method for the direct analysis of nitroaromatics from a variety of surfaces. For compounds that are thermally labile, or that have very low vapor pressure, however, DESI is better suited. Copyright © 2016 John Wiley & Sons, Ltd.
Sensitivity analysis of Repast computational ecology models with R/Repast.
Prestes García, Antonio; Rodríguez-Patón, Alfonso
2016-12-01
Computational ecology is an emerging interdisciplinary discipline founded mainly on modeling and simulation methods for studying ecological systems. Among the existing modeling formalisms, individual-based modeling is particularly well suited for capturing the complex temporal and spatial dynamics, as well as the nonlinearities, arising in ecosystems, communities, or populations due to individual variability. In addition, being a bottom-up approach, it is useful for providing new insights into the local mechanisms generating some observed global dynamics. Of course, no conclusions about model results can be taken seriously if they are based on a single model execution and are not analyzed carefully. Therefore, a sound methodology should always be used to underpin the interpretation of model results. Sensitivity analysis is a methodology for quantitatively assessing the effect of input uncertainty on the simulation output, and it should be incorporated into every work based on an in-silico experimental setup. In this article, we present R/Repast, a GNU R package for running and analyzing Repast Simphony models, accompanied by two worked examples of how to perform global sensitivity analysis and how to interpret the results.
Byers, Helen; Wallis, Yvonne; van Veen, Elke M; Lalloo, Fiona; Reay, Kim; Smith, Philip; Wallace, Andrew J; Bowers, Naomi; Newman, William G; Evans, D Gareth
2016-11-01
The sensitivity of testing BRCA1 and BRCA2 remains unresolved as the frequency of deep intronic splicing variants has not been defined in high-risk familial breast/ovarian cancer families. This variant category is reported at significant frequency in other tumour predisposition genes, including NF1 and MSH2. We carried out comprehensive whole gene RNA analysis on 45 high-risk breast/ovary and male breast cancer families with no identified pathogenic variant on exonic sequencing and copy number analysis of BRCA1/2. In addition, we undertook variant screening of a 10-gene high/moderate risk breast/ovarian cancer panel by next-generation sequencing. DNA testing identified the causative variant in 50/56 (89%) breast/ovarian/male breast cancer families with Manchester scores of ≥50 with two variants being confirmed to affect splicing on RNA analysis. RNA sequencing of BRCA1/BRCA2 on 45 individuals from high-risk families identified no deep intronic variants and did not suggest loss of RNA expression as a cause of lost sensitivity. Panel testing in 42 samples identified a known RAD51D variant, a high-risk ATM variant in another breast ovary family and a truncating CHEK2 mutation. Current exonic sequencing and copy number analysis variant detection methods of BRCA1/2 have high sensitivity in high-risk breast/ovarian cancer families. Sequence analysis of RNA does not identify any variants undetected by current analysis of BRCA1/2. However, RNA analysis clarified the pathogenicity of variants of unknown significance detected by current methods. The low diagnostic uplift achieved through sequence analysis of the other known breast/ovarian cancer susceptibility genes indicates that further high-risk genes remain to be identified.
NASA Astrophysics Data System (ADS)
Steinbrink, Nicholas M. N.; Behrens, Jan D.; Mertens, Susanne; Ranitzsch, Philipp C.-O.; Weinheimer, Christian
2018-03-01
We investigate the sensitivity of the Karlsruhe Tritium Neutrino Experiment (KATRIN) to keV-scale sterile neutrinos, which are promising dark matter candidates. Since the active-sterile mixing would lead to a second component in the tritium β-spectrum with a weak relative intensity of order sin²θ ≲ 10⁻⁶, additional experimental strategies are required to extract this small signature and to eliminate systematics. A possible strategy is to run the experiment in an alternative time-of-flight (TOF) mode, yielding differential TOF spectra in contrast to the integrating standard mode. In order to estimate the sensitivity from a reduced sample size, a new analysis method, called self-consistent approximate Monte Carlo (SCAMC), has been developed. The simulations show that an ideal TOF mode would be able to achieve a statistical sensitivity of sin²θ ≈ 5 × 10⁻⁹ at one σ, improving on the standard mode by approximately a factor of two. This relative benefit grows significantly if additional exemplary systematics are considered. A possible implementation of the TOF mode with existing hardware, called gated filtering, is investigated, which, however, comes at the price of a reduced average signal rate.
Nanowire size dependence on sensitivity of silicon nanowire field-effect transistor-based pH sensor
NASA Astrophysics Data System (ADS)
Lee, Ryoongbin; Kwon, Dae Woong; Kim, Sihyun; Kim, Sangwan; Mo, Hyun-Sun; Kim, Dae Hwan; Park, Byung-Gook
2017-12-01
In this study, we investigated the effects of nanowire size on the current sensitivity of silicon nanowire (SiNW) ion-sensitive field-effect transistors (ISFETs). The changes in on-current (I_on) and resistance with pH were measured in fabricated SiNW ISFETs of various lengths and widths. The results reveal that the sensitivity, expressed as the relative I_on change, improves as the width decreases. Through technology computer-aided design (TCAD) simulation analysis, the width dependence of the relative I_on change can be explained by the observation that target molecules located at the edge region along the channel width have a stronger effect on the sensitivity as the SiNW width is reduced. Additionally, the length dependence of the sensitivity can be understood in terms of the ratio of the fixed parasitic resistance, including source/drain resistance, to the varying channel resistance as a function of channel length.
Mukhtar, Hussnain; Lin, Yu-Pin; Shipin, Oleg V.; Petway, Joy R.
2017-01-01
This study presents an approach for obtaining realization sets of parameters for nitrogen removal in a pilot-scale waste stabilization pond (WSP) system. The proposed approach was designed for optimal parameterization, local sensitivity analysis, and global uncertainty analysis of a dynamic simulation model of the WSP, using the R software package Flexible Modeling Environment (R-FME) with the Markov chain Monte Carlo (MCMC) method. Additionally, generalized likelihood uncertainty estimation (GLUE) was integrated into the FME to evaluate the major parameters that affect the simulation outputs in the study WSP. Comprehensive modeling analysis was used to simulate and assess nine parameters and the concentrations of ON-N, NH3-N and NO3-N. Results indicate that the integrated FME-GLUE-based model, with good Nash–Sutcliffe coefficients (0.53–0.69) and correlation coefficients (0.76–0.83), successfully simulates the concentrations of ON-N, NH3-N and NO3-N. Moreover, the Arrhenius constant was the only parameter to which the model performance of the ON-N and NH3-N simulations was sensitive. However, the Nitrosomonas growth rate, the denitrification constant, and the maximum growth rate at 20 °C were sensitive in the ON-N and NO3-N simulations, as measured by global sensitivity analysis. PMID:28704958
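The goodness-of-fit values quoted above are Nash–Sutcliffe coefficients; a minimal sketch of the efficiency, with toy observed/simulated series rather than the study's data:

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.
    1.0 is a perfect fit; values <= 0 mean the model is no better than
    simply predicting the observed mean."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    var = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / var

# Toy series for illustration only.
nse = nash_sutcliffe([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
```

Values in the 0.53–0.69 range, as reported, indicate the simulations explain a bit over half of the observed variance.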
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan
1994-01-01
LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 1 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 1 derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved. The accuracy and efficiency of LSENS are examined by means of various test problems, and comparisons with other methods and codes are presented. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as a static system; steady, one-dimensional, inviscid flow; reaction behind an incident shock wave, including boundary layer correction; and a perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions.
Wang, Heye; Dou, Peng; Lü, Chenchen; Liu, Zhen
2012-07-13
Erythropoietin (EPO) is an important glycoprotein hormone. Recombinant human EPO (rhEPO) is an important therapeutic drug and can also be used as a doping agent in sports. The analysis of EPO glycoforms in the pharmaceutical and sports areas greatly challenges analytical scientists in several respects, among which sensitive detection and effective, facile sample preparation are two essential issues. Herein, we investigated new possibilities for these two aspects. Deep-UV laser-induced fluorescence detection (deep UV-LIF) was established to detect the intrinsic fluorescence of EPO, while an immuno-magnetic beads-based extraction (IMBE) was developed to specifically extract EPO glycoforms. Combined with capillary zone electrophoresis (CZE), CZE-deep UV-LIF allows high-resolution glycoform profiling with improved sensitivity. The detection sensitivity was improved by one order of magnitude compared with UV absorbance detection. An additional advantage is that the original glycoform distribution is completely preserved because no fluorescent labeling is needed. By combining IMBE with CZE-deep UV-LIF, the overall detection sensitivity was 1.5 × 10⁻⁸ mol/L, enhanced by two orders of magnitude relative to conventional CZE with UV absorbance detection. This is applicable to the analysis of pharmaceutical preparations of EPO, but the sensitivity is insufficient for anti-doping analysis of EPO in blood and urine. IMBE can be a straightforward and effective approach for sample preparation. However, antibodies with high specificity are key for application to urine samples because some urinary proteins can severely interfere with the immuno-extraction. Copyright © 2012 Elsevier B.V. All rights reserved.
Recent development of electrochemiluminescence sensors for food analysis.
Hao, Nan; Wang, Kun
2016-10-01
Food quality and safety are closely related to human health. In the face of unceasing food safety incidents, various analytical techniques, such as mass spectrometry, chromatography, spectroscopy, and electrochemistry, have been applied in food analysis. High sensitivity usually requires expensive instruments and complicated procedures. Although these modern analytical techniques are sensitive enough to ensure food safety, their applications are sometimes limited by the cost, usability, and speed of analysis. Electrochemiluminescence (ECL) is a powerful analytical technique that is attracting more and more attention because of its outstanding performance. In this review, the mechanisms of ECL and common ECL luminophores are briefly introduced. Then an overall review of the principles and applications of ECL sensors for food analysis is provided. ECL can be flexibly combined with various separation techniques. Novel materials (e.g., various nanomaterials) and strategies (e.g., immunoassays, aptasensors, and microfluidics) have been progressively introduced into the design of ECL sensors. By illustrating selected representative works, we summarize the state of the art in the development of ECL sensors for toxins, heavy metals, pesticides, residual drugs, illegal additives, viruses, and bacteria. Compared with other methods, ECL can provide rapid, low-cost, and sensitive detection of various food contaminants in complex matrixes. However, there are also some limitations and challenges. Improvements suited to the characteristics of food analysis are still necessary.
Analysis of airfoil leading edge separation bubbles
NASA Technical Reports Server (NTRS)
Carter, J. E.; Vatsa, V. N.
1982-01-01
A local inviscid-viscous interaction technique was developed for the analysis of low-speed airfoil leading-edge transitional separation bubbles. In this approach, an inverse boundary-layer finite-difference analysis is solved iteratively with a Cauchy integral representation of the inviscid flow, which is assumed to be a linear perturbation to a known global viscous airfoil analysis. Favorable comparisons with data indicate the overall validity of the present localized interaction approach. In addition, numerical tests were performed to assess the sensitivity of the computed results to the mesh size, the limits on the Cauchy integral, and the location of the transition region.
NASA Astrophysics Data System (ADS)
da Silva, Ricardo Siqueira; Kumar, Lalit; Shabani, Farzin; Picanço, Marcelo Coutinho
2018-04-01
A sensitivity analysis can categorize levels of parameter influence on a model's output. Identifying the most influential parameters facilitates establishing the best parameter values for models, with useful implications for species modelling of crops and associated insect pests. The aim of this study was to quantify the response of species models through a CLIMEX sensitivity analysis. Using open-field Solanum lycopersicum and Neoleucinodes elegantalis distribution records and 17 fitting parameters, including growth and stress parameters, model performance was compared by altering one parameter value at a time relative to the best-fit parameter values. Parameters found to have a greater effect on the model results are termed "sensitive". Through the use of two species, we show that even when the Ecoclimatic Index changes substantially under upward or downward parameter alterations, the effect on the species is dependent on the selection of suitability categories and regions of modelling. Two parameters showed the greatest sensitivity, dependent on the suitability categories of each species in the study. The results enhance user understanding of which climatic factors had a greater impact on both species' distributions in our model, in terms of suitability categories and areas, when parameter values were perturbed above or below the best-fit values. Thus, sensitivity analyses have the potential to provide additional information for end users, in terms of improving management, by identifying the climatic variables to which the models are most sensitive.
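The one-at-a-time perturbation scheme described above can be sketched in a few lines. Everything here (the toy suitability model, the parameter names, and the ±10% step) is an invented illustration, not CLIMEX:

```python
# One-at-a-time (OAT) sensitivity sketch: perturb each parameter up/down
# around the best fit and rank parameters by their effect on the output.

def model(params):
    # Stand-in for a species-suitability model returning a single index.
    return params["growth"] * 2.0 + params["cold_stress"] * -1.0 + params["heat_stress"] * -0.5

def oat_sensitivity(model, best_fit, rel_step=0.10):
    """Perturb each parameter by +/- rel_step and record the largest
    change in model output relative to the best-fit run."""
    base = model(best_fit)
    effects = {}
    for name, value in best_fit.items():
        perturbed = dict(best_fit)
        perturbed[name] = value * (1 + rel_step)
        up = model(perturbed)
        perturbed[name] = value * (1 - rel_step)
        down = model(perturbed)
        effects[name] = max(abs(up - base), abs(down - base))
    return effects

best_fit = {"growth": 1.0, "cold_stress": 0.4, "heat_stress": 0.2}
effects = oat_sensitivity(model, best_fit)
# Parameters sorted from most to least influential ("sensitive").
ranking = sorted(effects, key=effects.get, reverse=True)
```

A full CLIMEX-style analysis would replace the toy model with the fitted species model and inspect effects per suitability category and region, as the study does.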
Wsol, Agnieszka; Wydra, Wioletta; Chmielewski, Marek; Swiatowiec, Andrzej; Kuch, Marek
2017-01-01
A retrospective study was designed to investigate P-wave duration changes in the exercise stress test (EST) for the prediction of angiographically documented substantial coronary artery disease (CAD). We analyzed 265 patients who underwent EST and subsequent coronary angiography. Analysis of P-wave duration was performed in leads II and V5 at rest and in the recovery period. The sensitivity and specificity of isolated ST-segment depression were only 31% and 76%, respectively. The combination of ST-depression with other exercise-induced clinical and electrocardiographic abnormalities (chest pain, ventricular arrhythmia, hypotension, left bundle branch block) was characterized by 41% sensitivity and 69% specificity. The combination of abnormal recovery P-wave duration (≥ 120 ms) with ST-depression and other exercise-induced abnormalities had 83% sensitivity but only 20% specificity. Combined analysis of increased delta P-wave duration, ST-depression and other exercise-induced abnormalities had 69% sensitivity and 42% specificity. The sensitivity and specificity of the increase in delta P-wave duration were 69% and 47% for left CAD, and 70% and 50% for 3-vessel CAD, respectively. The presence of arterial hypertension negatively influenced the prognostic value of P-wave changes in the stress test. The results of the study show that adding P-wave duration change assessment to the analysis of ST-depression and other exercise-induced abnormalities increases the sensitivity of EST, especially for left CAD and 3-vessel coronary disease. We have also provided evidence for the negative influence of arterial hypertension on the predictive value of P-wave changes in the stress test. (Cardiol J 2017; 24, 2: 159-166).
Alrajab, Saadah; Youssef, Asser M; Akkus, Nuri I; Caldito, Gloria
2013-09-23
Ultrasonography is being increasingly utilized in acute care settings with expanding applications. Pneumothorax evaluation by ultrasonography is a fast, safe, easy and inexpensive alternative to chest radiographs. In this review, we provide a comprehensive analysis of the current literature comparing ultrasonography and chest radiography for the diagnosis of pneumothorax. We searched English-language articles in MEDLINE, EMBASE and Cochrane Library dealing with both ultrasonography and chest radiography for diagnosis of pneumothorax. In eligible studies that met strict inclusion criteria, we conducted a meta-analysis to evaluate the diagnostic accuracy of pleural ultrasonography in comparison with chest radiography for the diagnosis of pneumothorax. We reviewed 601 articles and selected 25 original research articles for detailed review. Only 13 articles met all of our inclusion criteria and were included in the final analysis. One study used lung sliding sign alone, 12 studies used lung sliding and comet tail signs, and 6 studies searched for lung point in addition to the other two signs. Ultrasonography had a pooled sensitivity of 78.6% (95% CI, 68.1 to 98.1) and a specificity of 98.4% (95% CI, 97.3 to 99.5). Chest radiography had a pooled sensitivity of 39.8% (95% CI, 29.4 to 50.3) and a specificity of 99.3% (95% CI, 98.4 to 100). Our meta-regression and subgroup analyses indicate that consecutive sampling of patients compared to convenience sampling provided higher sensitivity results for both ultrasonography and chest radiography. Consecutive versus nonconsecutive sampling and trauma versus nontrauma settings were significant sources of heterogeneity. In addition, subgroup analysis showed significant variations related to operator and type of probe used. Our study indicates that ultrasonography is more accurate than chest radiography for detection of pneumothorax. 
The results support the previous investigations in this field, add new valuable information obtained from subgroup analysis, and provide accurate estimates for the performance parameters of both bedside ultrasonography and chest radiography for pneumothorax evaluation.
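The pooled sensitivities reported above come from combining per-study counts. A minimal sketch of the simplest pooling (summing true positives and false negatives across studies) is below; the counts are invented, not the meta-analysis data, and the actual analysis likely uses a weighted model to produce the confidence intervals:

```python
# Naive pooled sensitivity across studies: total true positives divided
# by total condition-positive cases. Study counts here are hypothetical.

def pooled_sensitivity(studies):
    """studies: list of (true_positives, false_negatives) pairs."""
    tp = sum(s[0] for s in studies)
    fn = sum(s[1] for s in studies)
    return tp / (tp + fn)

# Hypothetical per-study pneumothorax counts for each modality:
ultrasound = [(40, 10), (35, 10), (22, 8)]
radiograph = [(20, 30), (18, 27), (12, 18)]

us_sens = pooled_sensitivity(ultrasound)   # fraction of pneumothoraces detected
cxr_sens = pooled_sensitivity(radiograph)
```

With these invented counts, ultrasound pools to a much higher sensitivity than radiography, mirroring the direction (though not the values) of the reported result.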
Marquezin, Maria Carolina Salomé; Pedroni-Pereira, Aline; Araujo, Darlle Santos; Rosar, João Vicente; Barbosa, Taís S; Castelo, Paula Midori
2016-08-01
To better understand salivary and masticatory characteristics, this study evaluated the relationship among salivary parameters, bite force (BF), masticatory performance (MP) and gustatory sensitivity in healthy children; the secondary outcome was to evaluate possible gender differences. One hundred and sixteen eutrophic subjects aged 7-11 years were evaluated, all caries-free and with no definite need for orthodontic treatment. Salivary flow rate and pH and total protein (TP), alpha-amylase (AMY), calcium (CA) and phosphate (PHO) concentrations were determined in stimulated (SS) and unstimulated saliva (US). BF and MP were evaluated using a digital gnathodynamometer and the fractional sieving method, respectively. Gustatory sensitivity was determined by detecting the four primary tastes (sweet, salty, sour and bitter) at three different concentrations. Data were evaluated using descriptive statistics, Mann-Whitney/t-test, Spearman correlation and multiple regression analysis, considering α = 0.05. A significant positive correlation between taste and age was observed. CA and PHO concentrations correlated negatively with salivary flow and pH; sweet taste scores correlated with AMY concentrations, and bitter taste sensitivity correlated with US flow rate (p < 0.05). No significant differences between genders in salivary or masticatory characteristics or gustatory sensitivity were observed. The regression analysis showed a weak relationship between the distribution of chewed particles among the different sieves and BF. The concentration of some analytes was influenced by salivary flow and pH. Age, saliva flow and AMY concentrations influenced gustatory sensitivity. In addition, salivary, masticatory and taste characteristics did not differ between genders, and only a weak relation between MP and BF was observed.
The Role of Attention in Somatosensory Processing: A Multi-trait, Multi-method Analysis
Puts, Nicolaas A. J.; Mahone, E. Mark; Edden, Richard A. E.; Tommerdahl, Mark; Mostofsky, Stewart H.
2016-01-01
Sensory processing abnormalities in autism have largely been described by parent report. This study used a multi-method (parent-report and measurement), multi-trait (tactile sensitivity and attention) design to evaluate somatosensory processing in ASD. Results showed multiple significant within-method (e.g., parent report of different traits)/cross-trait (e.g., attention and tactile sensitivity) correlations, suggesting that parent-reported tactile sensory dysfunction and performance-based tactile sensitivity describe different behavioral phenomena. Additionally, both parent-reported tactile functioning and performance-based tactile sensitivity measures were significantly associated with measures of attention. Findings suggest that sensory (tactile) processing abnormalities in ASD are multifaceted, and may partially reflect a more global deficit in behavioral regulation (including attention). Challenges of relying solely on parent-report to describe sensory difficulties faced by children/families with ASD are also highlighted. PMID:27448580
Tropospheric Ozone Near-Nadir-Viewing IR Spectral Sensitivity and Ozone Measurements from NAST-I
NASA Technical Reports Server (NTRS)
Zhou, Daniel K.; Smith, William L.; Larar, Allen M.
2001-01-01
Infrared ozone spectra from near-nadir observations provide atmospheric ozone information from the sensor to the Earth's surface. Simulations of the NPOESS Airborne Sounder Testbed-Interferometer (NAST-I) flown on the NASA ER-2 aircraft (approximately 20 km altitude) with a spectral resolution of 0.25/cm were used for sensitivity analysis. The spectral sensitivity of ozone retrievals to uncertainties in atmospheric temperature and water vapor is assessed in order to understand the relationship between the IR emissions and the atmospheric state. In addition, the sensitivity of ozone spectral radiance to ozone layer densities, together with the radiance weighting functions, reveals the limits of ozone profile retrieval accuracy from NAST-I measurements. Statistical retrievals of ozone, along with temperature and moisture retrievals from NAST-I spectra, have been investigated, and preliminary results from NAST-I field campaigns are presented.
A retrospective analysis of preoperative staging modalities for oral squamous cell carcinoma.
Kähling, Ch; Langguth, T; Roller, F; Kroll, T; Krombach, G; Knitschke, M; Streckbein, Ph; Howaldt, H P; Wilbrand, J-F
2016-12-01
An accurate preoperative assessment of cervical lymph node status is a prerequisite for individually tailored cancer therapies in patients with oral squamous cell carcinoma. The detection of malignant spread and its treatment crucially influence the prognosis. The aim of the present study was to analyze the different staging modalities used among patients with a diagnosis of primary oral squamous cell carcinoma between 2008 and 2015. An analysis of preoperative staging findings, collected by clinical palpation, ultrasound, and computed tomography (CT), was performed. The results obtained were compared with the final histopathological findings of the neck dissection specimens. A statistical analysis using McNemar's test was performed. The sensitivity of CT for the detection of malignant cervical tumor spread was 74.5%; ultrasound achieved a sensitivity of 60.8%. Both CT and ultrasound demonstrated significantly enhanced sensitivity compared with clinical palpation, which had a sensitivity of 37.1%. No significant difference was observed between CT and ultrasound. A combination of different staging modalities increased the sensitivity significantly compared with ultrasound staging alone. No significant difference in sensitivity was found between the combined use of different staging modalities and CT staging alone. The highest sensitivity, 80.0%, was obtained by a combination of all three staging modalities: clinical palpation, ultrasound and CT. The present study indicates that CT has an essential role in the preoperative staging of patients with oral squamous cell carcinoma. Its use not only significantly increases the sensitivity of cervical lymph node metastasis detection but also offers a preoperative assessment of local tumor spread and resection borders. An additional non-invasive cervical lymph node examination increases the sensitivity of the tumor staging process and reduces the risk of occult metastasis.
Copyright © 2016 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
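McNemar's test, used above to compare staging modalities, works on paired results from the same patients and is driven only by the discordant pairs. A minimal sketch of the exact (binomial) version is below; the counts are hypothetical, not the study's data:

```python
# Exact two-sided McNemar test for paired diagnostic results.
from math import comb

def mcnemar_exact_p(b, c):
    """b = cases modality A detected but B missed,
    c = cases B detected but A missed.
    Under H0 (equal sensitivity), discordant pairs split 50/50."""
    n = b + c
    k = min(b, c)
    # Two-sided exact binomial tail probability at p = 0.5.
    tail = sum(comb(n, i) for i in range(0, k + 1)) / 2**n
    return min(1.0, 2 * tail)

# Hypothetical example: CT detected 20 metastases that palpation missed,
# while palpation detected only 3 that CT missed.
p = mcnemar_exact_p(20, 3)   # very small p: CT significantly more sensitive
```

A balanced split of discordant pairs (e.g. 5 vs 5) yields p = 1.0, i.e. no evidence of a sensitivity difference, which is the kind of null result reported for CT versus ultrasound.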
Component resolution reveals additional major allergens in patients with honeybee venom allergy.
Köhler, Julian; Blank, Simon; Müller, Sabine; Bantleon, Frank; Frick, Marcel; Huss-Marp, Johannes; Lidholm, Jonas; Spillner, Edzard; Jakob, Thilo
2014-05-01
Detection of IgE to recombinant Hymenoptera venom allergens has been suggested to improve the diagnostic precision in Hymenoptera venom allergy. However, the frequency of sensitization to the only available recombinant honeybee venom (HBV) allergen, rApi m 1, in patients with HBV allergy is limited, suggesting that additional HBV allergens might be of relevance. We performed an analysis of sensitization profiles of patients with HBV allergy to a panel of HBV allergens. Diagnosis of HBV allergy (n = 144) was based on history, skin test results, and allergen-specific IgE levels to HBV. IgE reactivity to 6 HBV allergens devoid of cross-reactive carbohydrate determinants (CCD) was analyzed by ImmunoCAP. IgE reactivity to rApi m 1, rApi m 2, rApi m 3, nApi m 4, rApi m 5, and rApi m 10 was detected in 72.2%, 47.9%, 50.0%, 22.9%, 58.3%, and 61.8% of the patients with HBV allergy, respectively. Positive results to at least 1 HBV allergen were detected in 94.4%. IgE reactivity to Api m 3, Api m 10, or both was detected in 68.0% and represented the only HBV allergen-specific IgE in 5% of the patients. Limited inhibition of IgE binding by therapeutic HBV and limited induction of Api m 3- and Api m 10-specific IgG4 in patients obtaining immunotherapy supports recent reports on the underrepresentation of these allergens in therapeutic HBV preparations. Analysis of a panel of CCD-free HBV allergens improved diagnostic sensitivity compared with use of rApi m 1 alone, identified additional major allergens, and revealed sensitizations to allergens that have been reported to be absent or underrepresented in therapeutic HBV preparations. Copyright © 2014 The Authors. Published by Mosby, Inc. All rights reserved.
Field-sensitivity To Rheological Parameters
NASA Astrophysics Data System (ADS)
Freund, Jonathan; Ewoldt, Randy
2017-11-01
We ask this question: where in a flow is a quantity of interest Q quantitatively sensitive to the model parameters θ describing the rheology of the fluid? This field sensitivity is computed via the numerical solution of the adjoint flow equations, developed to expose the target sensitivity δQ/δθ(x) via the constraint of satisfying the flow equations. Our primary example is a sphere settling in Carbopol, for which we have experimental data. For this Carreau-model configuration, we simultaneously calculate how much a local change in the fluid's intrinsic time scale λ, limiting viscosities η₀ and η∞, and exponent n would affect the drag D. Such field sensitivities can show where different fluid physics in the model (time scales, elastic versus viscous components, etc.) are important for the target observable and can generally guide model refinement based on predictive goals. In this case, the computational cost of solving the local sensitivity problem is negligible relative to that of the flow. The Carreau-fluid/sphere example is illustrative; the utility of field sensitivity lies in the design and analysis of less intuitive flows, for which we provide some additional examples.
Eliminating Survivor Bias in Two-stage Instrumental Variable Estimators.
Vansteelandt, Stijn; Walter, Stefan; Tchetgen Tchetgen, Eric
2018-07-01
Mendelian randomization studies commonly focus on elderly populations. This makes the instrumental variables analysis of such studies sensitive to survivor bias, a type of selection bias. A particular concern is that the instrumental variable conditions, even when valid for the source population, may be violated for the selective population of individuals who survive the onset of the study. This is potentially very damaging because Mendelian randomization studies are known to be sensitive to bias due to even minor violations of the instrumental variable conditions. Interestingly, the instrumental variable conditions continue to hold within certain risk sets of individuals who are still alive at a given age when the instrument and unmeasured confounders exert additive effects on the exposure, and moreover, the exposure and unmeasured confounders exert additive effects on the hazard of death. In this article, we will exploit this property to derive a two-stage instrumental variable estimator for the effect of exposure on mortality, which is insulated against the above described selection bias under these additivity assumptions.
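For context, the generic two-stage instrumental variable estimator that this work builds on can be sketched as follows. This is the plain one-instrument, two-stage least-squares recipe on noise-free synthetic data, not the paper's survivor-bias-adjusted estimator for mortality outcomes:

```python
# Two-stage instrumental variable estimation, simplest case:
# stage 1 regresses the exposure on the instrument; stage 2 regresses
# the outcome on the fitted exposure.

def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

def two_stage_iv(z, x, y):
    """z: instrument (e.g. genotype), x: exposure, y: outcome."""
    a = ols_slope(z, x)                       # stage 1
    mz, mx = sum(z) / len(z), sum(x) / len(x)
    x_hat = [mx + a * (zi - mz) for zi in z]  # fitted exposure
    return ols_slope(x_hat, y)                # stage 2

# Noise-free toy data: x = 2z and y = 3x, so the causal effect is 3.
z = [0.0, 1.0, 2.0, 3.0]
x = [2 * zi for zi in z]
y = [3 * xi for xi in x]
beta = two_stage_iv(z, x, y)
```

The paper's contribution is showing when (under additive effects on exposure and on the hazard of death) this estimator family remains valid within risk sets of survivors.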
Test and Analysis of a Buckling-Critical Large-Scale Sandwich Composite Cylinder
NASA Technical Reports Server (NTRS)
Schultz, Marc R.; Sleight, David W.; Gardner, Nathaniel W.; Rudd, Michelle T.; Hilburger, Mark W.; Palm, Tod E.; Oldfield, Nathan J.
2018-01-01
Structural stability is an important design consideration for launch-vehicle shell structures and it is well known that the buckling response of such shell structures can be very sensitive to small geometric imperfections. As part of an effort to develop new buckling design guidelines for sandwich composite cylindrical shells, an 8-ft-diameter honeycomb-core sandwich composite cylinder was tested under pure axial compression to failure. The results from this test are compared with finite-element-analysis predictions and overall agreement was very good. In particular, the predicted buckling load was within 1% of the test and the character of the response matched well. However, it was found that the agreement could be improved by including composite material nonlinearity in the analysis, and that the predicted buckling initiation site was sensitive to the addition of small bending loads to the primary axial load in analyses.
Additional EIPC Study Analysis: Interim Report on High Priority Topics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hadley, Stanton W
Between 2010 and 2012 the Eastern Interconnection Planning Collaborative (EIPC) conducted a major long-term resource and transmission study of the Eastern Interconnection (EI). With guidance from a Stakeholder Steering Committee (SSC) that included representatives from the Eastern Interconnection States Planning Council (EISPC) among others, the project was conducted in two phases. Phase 1 involved a long-term capacity expansion analysis that involved creation of eight major futures plus 72 sensitivities. Three scenarios were selected for more extensive transmission-focused evaluation in Phase 2. Five power flow analyses, nine production cost model runs (including six sensitivities), and three capital cost estimations were developed during this second phase. The results from Phase 1 and 2 provided a wealth of data that could be examined further to address energy-related questions. A list of 13 topics was developed for further analysis; this paper discusses the first five.
NASA Astrophysics Data System (ADS)
Schumacher, Florian; Friederich, Wolfgang
Due to increasing computational resources, the development of new, numerically demanding methods and software for imaging Earth's interior remains of high interest in the Earth sciences. Here, we give a description, from both a user's and a programmer's perspective, of the highly modular, flexible and extendable software package ASKI (Analysis of Sensitivity and Kernel Inversion), recently developed for iterative scattering-integral-based seismic full waveform inversion. In ASKI, the three fundamental steps of solving the seismic forward problem, computing waveform sensitivity kernels and deriving a model update are performed by independent software programs that interact via file output/input only. Furthermore, the spatial discretizations of the model space used for solving the seismic forward problem and for deriving model updates are kept completely independent. For this reason, ASKI does not contain a specific forward solver but instead provides a general interface to established community wave propagation codes. Moreover, the third fundamental step of deriving a model update can be repeated at relatively low cost, applying different kinds of model regularization or re-selecting/weighting the inverted dataset, without the need to re-solve the forward problem or re-compute the kernels. Additionally, ASKI offers the user sensitivity and resolution analysis tools based on the full sensitivity matrix and allows the user to compose customized workflows in a consistent computational environment. ASKI is written in modern Fortran and Python, is well documented and is freely available under the terms of the GNU General Public License (http://www.rub.de/aski).
Hanna, T P; Shafiq, J; Delaney, G P; Vinod, S K; Thompson, S R; Barton, M B
2018-02-01
To describe the population benefit of radiotherapy in a high-income setting if evidence-based guidelines were routinely followed. Australian decision tree models were utilized. Radiotherapy alone (RT) benefit was defined as the absolute proportional benefit of radiotherapy compared with no treatment for radical indications, and of radiotherapy over surgery alone for adjuvant indications. Chemoradiotherapy (CRT) benefit was the absolute incremental benefit of concurrent chemoradiotherapy over RT. Five-year local control (LC) and overall survival (OS) benefits were measured. Citation databases were systematically queried for benefit data. Meta-analysis and sensitivity analysis were performed. 48% of all cancer patients have indications for radiotherapy, 34% curative and 14% palliative. RT provides 5-year LC benefit in 10.4% of all cancer patients (95% Confidence Interval 9.3, 11.8) and 5-year OS benefit in 2.4% (2.1, 2.7). CRT provides 5-year LC benefit in an additional 0.6% of all cancer patients (0.5, 0.6), and 5-year OS benefit for an additional 0.3% (0.2, 0.4). RT benefit was greatest for head and neck (LC 32%, OS 16%), and cervix (LC 33%, OS 18%). CRT LC benefit was greatest for rectum (6%) and OS for cervix (3%) and brain (3%). Sensitivity analysis confirmed a robust model. Radiotherapy provides significant 5-year LC and OS benefits as part of evidence-based cancer care. CRT provides modest additional benefits. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
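The population-level figures above aggregate per-site indications and per-patient benefits. A sketch of that arithmetic, per the decision-tree logic described, is below; the site shares, indication fractions, and benefit values are invented placeholders, not the study's model inputs:

```python
# Population benefit aggregation: for each cancer site, multiply
# (share of all cancers) x (fraction with an RT indication) x
# (absolute per-patient 5-year local-control benefit), then sum.
# All numbers below are hypothetical.

sites = [
    # (site, share of all cancers, fraction indicated for RT, 5-yr LC benefit)
    ("head and neck", 0.04, 0.74, 0.32),
    ("cervix",        0.02, 0.71, 0.33),
    ("rectum",        0.05, 0.60, 0.10),
]

population_lc_benefit = sum(share * indicated * benefit
                            for _, share, indicated, benefit in sites)
```

Summing such terms over every tumor site is what yields whole-population figures like the "5-year LC benefit in 10.4% of all cancer patients" reported above.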
Optimal frequency-response sensitivity of compressible flow over roughness elements
NASA Astrophysics Data System (ADS)
Fosas de Pando, Miguel; Schmid, Peter J.
2017-04-01
Compressible flow over a flat plate with two localised and well-separated roughness elements is analysed by global frequency-response analysis. This analysis reveals a sustained feedback loop consisting of a convectively unstable shear-layer instability, triggered at the upstream roughness, and an upstream-propagating acoustic wave, originating at the downstream roughness and regenerating the shear-layer instability at the upstream protrusion. A typical multi-peaked frequency response is recovered from the numerical simulations. In addition, the optimal forcing and response clearly extract the components of this feedback loop and isolate flow regions of pronounced sensitivity and amplification. An efficient parametric-sensitivity framework is introduced and applied to the reference case, showing that first-order increases in Reynolds number and roughness height have a destabilising effect on the flow, while changes in Mach number or roughness separation cause corresponding shifts in the peak frequencies. This information is gained with negligible effort beyond the reference case and can easily be applied to more complex flows.
Bashyam, Ashvin; Li, Matthew; Cima, Michael J
2018-07-01
Single-sided NMR has the potential for broad utility and has found applications in healthcare, materials analysis, food quality assurance, and the oil and gas industry. These sensors require a remote, strong, uniform magnetic field to perform high sensitivity measurements. We demonstrate a new permanent magnet geometry, the Unilateral Linear Halbach, that combines design principles from "sweet-spot" and linear Halbach magnets to achieve this goal through more efficient use of magnetic flux. We perform sensitivity analysis using numerical simulations to produce a framework for Unilateral Linear Halbach design and assess tradeoffs between design parameters. Additionally, the use of hundreds of small, discrete magnets within the assembly allows for a tunable design, improved robustness to variability in magnetization strength, and increased safety during construction. Experimental validation using a prototype magnet shows close agreement with the simulated magnetic field. The Unilateral Linear Halbach magnet increases the sensitivity, portability, and versatility of single-sided NMR. Copyright © 2018 Elsevier Inc. All rights reserved.
Arkusz, Joanna; Stępnik, Maciej; Sobala, Wojciech; Dastych, Jarosław
2010-11-10
The aim of this study was to find differentially regulated genes in THP-1 monocytic cells exposed to sensitizers and nonsensitizers, and to investigate whether such genes could be reliable markers for an in vitro predictive method for the identification of skin-sensitizing chemicals. Changes in the expression of 35 genes in the THP-1 cell line following treatment with chemicals of different sensitizing potential (from nonsensitizers to extreme sensitizers) were assessed using real-time PCR. Verification of 13 candidate genes by testing a larger number of chemicals (an additional 22 sensitizers and 8 nonsensitizers) revealed that prediction of contact sensitization potential was possible based on evaluation of changes in three genes: IL8, HMOX1 and PAIMP1. In total, changes in the expression of these genes allowed correct detection of the sensitization potential of 21 out of 27 (78%) test sensitizers. The gene expression levels within potency groups varied and did not allow estimation of the sensitization potency of test chemicals. The results of this study indicate that evaluating changes in the expression of the proposed biomarkers in THP-1 cells could be a valuable model for preliminary screening of chemicals to discriminate the majority of sensitizers from nonsensitizers. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
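A marker-gene screening rule of the kind described can be sketched as follows. The gene names come from the abstract, but the fold-change values and the 1.5× call threshold are invented for illustration, not the study's decision rule:

```python
# Toy sensitizer-screening rule: call a chemical a sensitizer if the
# expression fold-change of any marker gene exceeds a threshold.

THRESHOLD = 1.5  # hypothetical fold-change cutoff

def predict_sensitizer(fold_changes):
    """fold_changes: dict of marker gene -> expression fold-change
    versus vehicle control after chemical treatment."""
    return any(fc >= THRESHOLD for fc in fold_changes.values())

# Hypothetical profiles for a strong sensitizer and a nonsensitizer:
sensitizer_profile = {"IL8": 4.2, "HMOX1": 2.1, "PAIMP1": 1.8}
nonsensitizer_profile = {"IL8": 1.1, "HMOX1": 0.9, "PAIMP1": 1.2}
```

The study's point that such a rule classifies potential (sensitizer vs. nonsensitizer) but not potency is visible here: the rule returns only a yes/no call, not a graded strength.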
Vinnakota, Kalyan C; Beard, Daniel A; Dash, Ranjan K
2009-01-01
Identification of a complex biochemical system model requires appropriate experimental data. Models constructed on the basis of data from the literature often contain parameters that are not identifiable with high sensitivity and therefore require additional experimental data to identify those parameters. Here we report the application of a local sensitivity analysis to design experiments that will improve the identifiability of previously unidentifiable model parameters in a model of mitochondrial oxidative phosphorylation and the tricarboxylic acid cycle. Experiments were designed based on measurable biochemical reactants in a dilute suspension of purified cardiac mitochondria with experimentally feasible perturbations to this system. The experimental perturbations and variables yielding the greatest number of parameters above a 5% sensitivity level are presented and discussed.
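The idea of a local sensitivity screen for identifiability can be sketched with a toy model: parameters whose normalized sensitivity falls below the 5% level mentioned above would be flagged as needing additional experiments. The model and all numbers here are illustrative, not the mitochondrial model:

```python
# Local (normalized) parameter sensitivity by central finite differences,
# used as a crude identifiability screen.

def output(k1, k2):
    # Toy measurable output; k2 barely influences it by construction.
    return k1 / (1.0 + 0.001 * k2)

def normalized_sensitivity(f, params, name, h=1e-6):
    """(dy/dp) * (p/y): fractional change in output per fractional
    change in parameter `name`, estimated by central difference."""
    p = dict(params)
    y0 = f(**p)
    step = params[name] * h
    p[name] = params[name] + step
    up = f(**p)
    p[name] = params[name] - step
    down = f(**p)
    return (up - down) / (2 * step) * params[name] / y0

params = {"k1": 2.0, "k2": 5.0}
s_k1 = normalized_sensitivity(output, params, "k1")
s_k2 = normalized_sensitivity(output, params, "k2")
# Parameters below the 5% sensitivity level are poorly identified by
# this measurement and would motivate a new experimental design.
poorly_identified = [n for n, s in [("k1", s_k1), ("k2", s_k2)] if abs(s) < 0.05]
```

In the study, this screen is run across candidate perturbations and measurable variables, and the designs that lift the most parameters above the threshold are selected.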
Akula, Sravani; Kamasani, Swapna; Sivan, Sree Kanth; Manga, Vijjulatha; Vudem, Dashavantha Reddy; Kancha, Rama Krishna
2018-05-01
A significant proportion of patients with lung cancer carry mutations in the EGFR kinase domain. The presence of a deletion mutation in exon 19 or the L858R point mutation in the EGFR kinase domain has been shown to enhance the efficacy of inhibitor treatment in patients with NSCLC. Several less frequent (uncommon) mutations in the EGFR kinase domain with potential implications for treatment response have also been reported. The role of a limited number of uncommon mutations in drug sensitivity was experimentally verified; however, a large number of these mutations remain uncharacterized for inhibitor sensitivity or resistance. A large-scale computational analysis of 298 clinically reported point mutants of the EGFR kinase domain was performed, and drug sensitivity profiles for each mutant toward seven kinase inhibitors were determined by molecular docking. In addition, the relative inhibitor binding affinity, compared with that of adenosine triphosphate, was calculated for each mutant. The inhibitor sensitivity profiles predicted in this study for a set of previously characterized mutants correlated well with published clinical, experimental, and computational data. Both single and compound mutations displayed differential inhibitor sensitivity toward first- and next-generation kinase inhibitors. The present study provides predicted drug sensitivity profiles for a large panel of uncommon EGFR mutations toward multiple inhibitors, which may help clinicians in deciding mutant-specific treatment strategies. Copyright © 2018 International Association for the Study of Lung Cancer. Published by Elsevier Inc. All rights reserved.
Glaubitz, Ulrike; Li, Xia; Schaedel, Sandra; Erban, Alexander; Sulpice, Ronan; Kopka, Joachim; Hincha, Dirk K; Zuther, Ellen
2017-01-01
Transcript and metabolite profiling were performed on leaves from six rice cultivars under high night temperature (HNT) conditions. Six genes were identified as central to the HNT response, encoding proteins involved in transcription regulation, signal transduction, protein-protein interactions, the jasmonate response and the biosynthesis of secondary metabolites. Sensitive cultivars showed specific changes in transcript abundance, including abiotic stress responses and changes in cell wall-related genes, ABA signaling and secondary metabolism. Additionally, metabolite profiles revealed a highly activated TCA cycle under HNT and concomitantly increased levels in its branching pathways, which could be corroborated by enzyme activity measurements. Integrated data analysis using clustering based on one-dimensional self-organizing maps identified two profiles highly correlated with HNT sensitivity. The sensitivity profile included genes of the functional bins abiotic stress, hormone metabolism, cell wall, signaling, redox state, transcription factors, secondary metabolites and defence genes. In the tolerance profile, similar bins were affected, with slight differences in hormone metabolism and transcription factor responses. Metabolites of the two profiles revealed the involvement of GABA signaling, providing a link to TCA cycle status in sensitive cultivars, and of myo-inositol as a precursor for inositol phosphates linking jasmonate signaling to the HNT response specifically in tolerant cultivars. © 2016 John Wiley & Sons Ltd.
Comparative peptidomics analysis of neural adaptations in rats repeatedly exposed to amphetamine.
Romanova, Elena V; Lee, Ji Eun; Kelleher, Neil L; Sweedler, Jonathan V; Gulley, Joshua M
2012-10-01
Repeated exposure to amphetamine (AMPH) induces long-lasting behavioral changes, referred to as sensitization, that are accompanied by various neuroadaptations in the brain. To investigate the chemical changes that occur during behavioral sensitization, we applied a comparative proteomics approach to screen for neuropeptide changes in a rodent model of AMPH-induced sensitization. By measuring peptide profiles with matrix-assisted laser desorption/ionization time-of-flight mass spectrometry and comparing signal intensities using principal component analysis and variance statistics, subsets of peptides are found with significant differences in the dorsal striatum, nucleus accumbens, and medial prefrontal cortex of AMPH-sensitized male Sprague-Dawley rats. These biomarker peptides, identified in follow-up analyses using liquid chromatography and tandem mass spectrometry, suggest that behavioral sensitization to AMPH is associated with complex chemical adaptations that regulate energy/metabolism, neurotransmission, apoptosis, neuroprotection, and neuritogenesis, as well as cytoskeleton integrity and neuronal morphology. Our data contribute to a growing number of reports showing that in addition to the mesolimbic dopamine system, which is the best known signaling pathway involved with reinforcing the effect of psychostimulants, concomitant chemical changes in other pathways and in neuronal organization may play a part in the overall effect of chronic AMPH exposure on behavior. © 2012 The Authors Journal of Neurochemistry © 2012 International Society for Neurochemistry.
Luo, Chuan; Li, Zhaofu; Li, Hengpeng; Chen, Xiaomin
2015-09-02
The application of hydrological and water quality models is an efficient approach to better understand the processes of environmental deterioration. This study evaluated the ability of the Annualized Agricultural Non-Point Source (AnnAGNPS) model to predict runoff, total nitrogen (TN) and total phosphorus (TP) loading in a typical small watershed of a hilly region near Taihu Lake, China. Runoff was calibrated and validated at both an annual and monthly scale, and parameter sensitivity analysis was performed for TN and TP before the two water quality components were calibrated. The results showed that the model satisfactorily simulated runoff at annual and monthly scales, during both the calibration and validation processes. Additionally, the sensitivity analysis showed that TN output was most sensitive to the parameters Fertilizer rate, Fertilizer organic, Canopy cover and Fertilizer inorganic. In terms of TP, the parameters Residue mass ratio, Fertilizer rate, Fertilizer inorganic and Canopy cover were the most sensitive. Based on these sensitive parameters, calibration was performed. TN loading produced satisfactory results for both the calibration and validation processes, whereas the performance of TP loading was slightly poorer. The simulation results showed that AnnAGNPS has the potential to be used as a valuable tool for the planning and management of watersheds.
Dehghan-Nayeri, Nasrin; Eshghi, Peyman; Pour, Kourosh Goudarzi; Rezaei-Tavirani, Mostafa; Omrani, Mir Davood; Gharehbaghian, Ahmad
2017-07-01
Dexamethasone is considered a direct chemotherapeutic agent in the treatment of pediatric acute lymphoblastic leukemia (ALL). Despite the advantages of the drug, dose-related side effects are a challenging issue during treatment. Accordingly, classifying patients into dexamethasone-sensitive and -resistant groups can help to optimize the therapeutic dose with the lowest adverse effects, particularly in sensitive cases. For this purpose, we investigated proliferation inhibition and cytotoxicity induction in NALM-6 cells, as sensitive cells, after dexamethasone treatment. In addition, comparative protein expression analysis using the 2DE-MALDI-TOF MS technique was performed to identify the specific altered proteins, and we evaluated mRNA expression levels of the identified proteins in bone-marrow samples from pediatric ALL patients using the real-time q-PCR method. Eventually, proteomic analysis revealed a combination of biomarkers, including capping proteins (CAPZA1 and CAPZB), chloride channel (CLIC1), purine nucleoside phosphorylase (PNP), and proteasome activator (PSME1), in response to the dexamethasone treatment. Our results indicated low expression of the identified proteins at both the mRNA and protein levels after drug treatment. Moreover, quantitative real-time PCR data analysis indicated that, independent of the molecular subtypes of the leukemia, CAPZA1, CAPZB, CLIC1, and PNP expression levels were lower in ALL samples than in normal samples, whereas the PSME1 expression level was higher in ALL samples than in normal samples. Furthermore, the expression level of all proteins (except PSME1) differed between high-risk and standard-risk patients, suggesting their prognostic value.
In conclusion, our study suggests a panel of biomarkers comprising CAPZA1, CAPZB, CLIC1, PNP, and PSME1 as early diagnosis and treatment-evaluation markers that may differentiate cancer cells presumed to benefit from dexamethasone-based chemotherapy and may facilitate the prediction of clinical outcome.
Zhang, Pengfei; Wen, Feng; Fu, Ping; Yang, Yu; Li, Qiu
2017-07-31
The effectiveness of the addition of docetaxel and/or zoledronic acid to the standard of care (SOC) for hormone-naive prostate cancer has been evaluated in the STAMPEDE trial. The objective of the present analysis was to evaluate the cost-effectiveness of these treatment options in the treatment of advanced hormone-naive prostate cancer in China. A cost-effectiveness analysis using a Markov model was carried out from the Chinese societal perspective. The efficacy data were obtained from the STAMPEDE trial and health utilities were derived from previous studies. Transition probabilities were calculated based on the survival in each group. The primary endpoint in the analysis was the incremental cost-effectiveness ratio (ICER), and model uncertainties were explored by one-way sensitivity analysis and probabilistic sensitivity analysis. SOC alone generated an effectiveness of 2.65 quality-adjusted life years (QALYs) at a lifetime cost of $20,969.23. At a cost of $25,001.34, SOC plus zoledronic acid was associated with 2.69 QALYs, resulting in an ICER of $100,802.75/QALY compared with SOC alone. SOC plus docetaxel gained an effectiveness of 2.85 QALYs at a cost of $28,764.66, while the effectiveness and cost data in the SOC plus zoledronic acid/docetaxel group were 2.78 QALYs and $32,640.95. Based on the results of the analysis, SOC plus zoledronic acid, SOC plus docetaxel, and SOC plus zoledronic acid/docetaxel are unlikely to be cost-effective options in patients with advanced hormone-naive prostate cancer compared with SOC alone.
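The reported ICER can be reproduced directly from the figures quoted in the abstract; a minimal sketch (Python is used for illustration only, not the authors' tooling):

```python
# Sketch: the incremental cost-effectiveness ratio (ICER) compares a new
# strategy with a baseline as extra cost per extra QALY gained.
def icer(cost_new, qaly_new, cost_base, qaly_base):
    """Extra cost per extra quality-adjusted life year (QALY) gained."""
    return (cost_new - cost_base) / (qaly_new - qaly_base)

# SOC plus zoledronic acid ($25,001.34, 2.69 QALYs) vs SOC alone
# ($20,969.23, 2.65 QALYs), as stated in the abstract:
ratio = icer(25001.34, 2.69, 20969.23, 2.65)
print(round(ratio, 2))  # about 100802.75 $/QALY, matching the reported ICER
```

The other comparisons in the abstract can be checked the same way by swapping in the corresponding cost and QALY pairs.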
Gabriel, A A; Salazar, S K P
2014-08-01
This study evaluated the use of sodium benzoate (SB) and licorice root extract (LRE) as heat-sensitizing additives against Escherichia coli O157:H7 in mildly heated young coconut liquid endosperm. Consumer acceptance scoring showed that maximum permissible supplementation (MPS) levels for SB and LRE were 300 and 250 ppm, respectively. The MPS values were considered in the generation of a 2-factor rotatable central composite design for the tested SB and LRE concentration combinations. Liquid endosperm with various SB and LRE supplementation combinations was inoculated with E. coli O157:H7 and heated to 55°C. The susceptibility of the cells to heating was expressed in terms of the decimal reduction time (D55). Response surface analysis showed that only the individual linear effect of benzoate significantly influenced the D55 value, with increasing supplementation level resulting in increasing susceptibility. The results reported could serve as baseline information in further investigating other additives that could be used as heat-sensitizing agents against pathogens in heat-labile food systems. Fruit juice products have been linked to outbreaks of microbial infection, where unpasteurized products were proven vectors of disease. Processors often opt not to apply a heat process to juice products as the preservation technique often compromises sensorial quality. This work evaluated two common additives for their heat-sensitizing effects against E. coli O157:H7 in coconut liquid endosperm, the results of which may serve as baseline information to small- and medium-scale processors, and researchers in the establishment of a mild heat process schedule for the test commodity and other similar products. © 2014 The Society for Applied Microbiology.
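The D55 value above is the time at 55°C needed for a one-log10 reduction in viable cells. A sketch of how a D-value is estimated from a linear survivor curve, using hypothetical counts rather than the study's data:

```python
# Sketch: estimating a decimal reduction time (D-value) from survivor counts.
# The survivor curve log10(N) vs time is fit by least squares; D = -1/slope.
# The counts below are hypothetical, not data from the study.
times = [0, 2, 4, 6, 8]            # minutes at 55 C
log_n = [6.0, 5.5, 5.0, 4.5, 4.0]  # log10 CFU/mL (exactly linear here)

n = len(times)
mean_t = sum(times) / n
mean_y = sum(log_n) / n
slope = sum((t - mean_t) * (y - mean_y) for t, y in zip(times, log_n)) / \
        sum((t - mean_t) ** 2 for t in times)
d_value = -1.0 / slope
print(d_value)  # 4.0 min: one log10 reduction every 4 minutes
```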
Li, Liang; Wang, Yiying; Xu, Jiting; Flora, Joseph R V; Hoque, Shamia; Berge, Nicole D
2018-08-01
Hydrothermal carbonization (HTC) is a wet, low temperature thermal conversion process that continues to gain attention for the generation of hydrochar. The importance of specific process conditions and feedstock properties on hydrochar characteristics is not well understood. To evaluate this, linear and non-linear models were developed to describe hydrochar characteristics based on data collected from HTC-related literature. A Sobol analysis was subsequently conducted to identify parameters that most influence hydrochar characteristics. Results from this analysis indicate that for each investigated hydrochar property, the model fit and predictive capability associated with the random forest models is superior to both the linear and regression tree models. Based on results from the Sobol analysis, the feedstock properties and process conditions most influential on hydrochar yield, carbon content, and energy content were identified. In addition, a variational process parameter sensitivity analysis was conducted to determine how feedstock property importance changes with process conditions. Copyright © 2018 Elsevier Ltd. All rights reserved.
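A variance-based (Sobol) first-order index, the kind of quantity the analysis above estimates, can be sketched with a pick-and-freeze estimator. The toy model below, y = 2*x1 + x2 with x1, x2 ~ U(0,1), stands in for the hydrochar models and has analytic indices S1 = 0.8 and S2 = 0.2:

```python
# Sketch of a Sobol first-order sensitivity index via pick-and-freeze
# (Saltelli-style) Monte Carlo estimation, on a toy model with known answers.
import random

random.seed(0)
N = 20000

def model(x1, x2):
    return 2.0 * x1 + x2

A = [(random.random(), random.random()) for _ in range(N)]
B = [(random.random(), random.random()) for _ in range(N)]
fA = [model(x1, x2) for x1, x2 in A]
fB = [model(x1, x2) for x1, x2 in B]
mean = sum(fA) / N
var = sum((y - mean) ** 2 for y in fA) / N

def first_order(i):
    # AB: matrix A with column i swapped in from B ("pick and freeze")
    total = 0.0
    for (a1, a2), (b1, b2), fa, fb in zip(A, B, fA, fB):
        fab = model(b1, a2) if i == 0 else model(a1, b2)
        total += fb * (fab - fa)
    return total / N / var

s1, s2 = first_order(0), first_order(1)
print(round(s1, 2), round(s2, 2))  # close to the analytic 0.8 and 0.2
```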
Jacobson, Sheldon H; Yu, Ge; Jokela, Janet A
2016-07-01
This paper provides an alternative policy for Ebola entry screening at airports in the United States. This alternative policy considers a social contact tracing (SCT) risk level, in addition to the current health risk level used by the CDC. The performances of both policies are compared based on the scenarios that occur and the expected cost associated with implementing such policies. Sensitivity analysis is performed to identify conditions under which one policy dominates the other policy. This analysis takes into account that the alternative policy requires additional data collection, which is balanced by a more cost-effective allocation of resources. Copyright © 2016 Elsevier Inc. All rights reserved.
Tsuchiyama, Hiromi; Maeda, Akihisa; Nakajima, Mayumi; Kitsukawa, Mika; Takahashi, Kei; Miyoshi, Tomoya; Mutsuga, Mayu; Asaoka, Yoshiji; Miyamoto, Yohei; Oshida, Keiyu
2017-10-05
The murine local lymph node assay (LLNA) is widely used to test the potential of chemicals to induce skin sensitization. Exposure of mouse auricle skin to a sensitizer results in proliferation of local lymph node T cells, which has been measured by in vivo incorporation of 3H-methyl thymidine or 5-bromo-2'-deoxyuridine (BrdU). The stimulation index (SI), the ratio of the mean proliferation in each treated group to that in the concurrent vehicle control group, is frequently used as a regulatory-authorized endpoint for the LLNA. However, some non-sensitizing irritants, such as sodium dodecyl sulfate (SDS) or methyl salicylate (MS), have been reported as false positives by this endpoint. In search of a potential endpoint to enhance the specificity of existing endpoints, we evaluated three contact sensitizers (hexyl cinnamic aldehyde [HCA], oxazolone [OXA], and 2,4-dinitrochlorobenzene [DNCB]), one respiratory sensitizer (toluene 2,4-diisocyanate [TDI]), and two non-sensitizing irritants (MS and SDS) by several endpoints in the LLNA. Each test substance was applied to both ears of female CBA/Ca mice daily for 3 consecutive days. The ears and auricle lymph node cells were analyzed on day 5 for endpoints including the SI value, lymph node cell count, cytokine release from lymph node cells, and histopathological changes and gene expression profiles in auricle skin. The SI values indicated that all the test substances induced significant proliferation of lymph node cells. The lymph node cell counts showed no significant changes for the non-sensitizers assessed. The inflammatory findings of histopathology were similar among the auricle skins treated with sensitizers and irritants. Gene expression profiles of the cytokines IFN-γ, IL-4, and IL-17 in auricle skin were similar to the cytokine release profiles in draining lymph node cells. In addition, the gene expression of the chemokines CXCL1 and/or CXCL2 showed potential to discriminate sensitizers from non-sensitizing irritants.
Our results suggest that multi-endpoint analysis in the LLNA leads to a better determination of the sensitizing potential of test substances. We also show that the gene expression of CXCL1 and/or CXCL2, which is involved in elicitation of contact hypersensitivity (CHS), can be a possible additional endpoint for discrimination of sensitizing compounds in LLNA. Copyright © 2017 Elsevier B.V. All rights reserved.
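The stimulation index used throughout is a simple ratio of group means; a sketch with hypothetical proliferation values (an SI of 3 or more is the conventional positive cut-off in the LLNA):

```python
# Sketch: the LLNA stimulation index (SI) is the mean proliferation of a
# treated group divided by that of the concurrent vehicle control group.
# The measurements below are hypothetical, not data from this study.
def stimulation_index(treated, vehicle):
    return (sum(treated) / len(treated)) / (sum(vehicle) / len(vehicle))

vehicle = [1.0, 1.2, 0.9, 1.1]   # e.g. BrdU incorporation, arbitrary units
treated = [3.8, 4.1, 3.6, 4.5]

si = stimulation_index(treated, vehicle)
print(si >= 3.0)  # True -> classified positive at the conventional cut-off
```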
Nonindependence and sensitivity analyses in ecological and evolutionary meta-analyses.
Noble, Daniel W A; Lagisz, Malgorzata; O'dea, Rose E; Nakagawa, Shinichi
2017-05-01
Meta-analysis is an important tool for synthesizing research on a variety of topics in ecology and evolution, including molecular ecology, but can be susceptible to nonindependence. Nonindependence can affect two major interrelated components of a meta-analysis: (i) the calculation of effect size statistics and (ii) the estimation of overall meta-analytic estimates and their uncertainty. While some solutions to nonindependence exist at the statistical analysis stages, there is little advice on what to do when complex analyses are not possible, or when studies with nonindependent experimental designs exist in the data. Here we argue that exploring the effects of procedural decisions in a meta-analysis (e.g. inclusion of different quality data, choice of effect size) and statistical assumptions (e.g. assuming no phylogenetic covariance) using sensitivity analyses are extremely important in assessing the impact of nonindependence. Sensitivity analyses can provide greater confidence in results and highlight important limitations of empirical work (e.g. impact of study design on overall effects). Despite their importance, sensitivity analyses are seldom applied to problems of nonindependence. To encourage better practice for dealing with nonindependence in meta-analytic studies, we present accessible examples demonstrating the impact that ignoring nonindependence can have on meta-analytic estimates. We also provide pragmatic solutions for dealing with nonindependent study designs, and for analysing dependent effect sizes. Additionally, we offer reporting guidelines that will facilitate disclosure of the sources of nonindependence in meta-analyses, leading to greater transparency and more robust conclusions. © 2017 John Wiley & Sons Ltd.
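One simple sensitivity analysis of the kind advocated above is leave-one-out re-pooling, which flags studies that drive the overall estimate (and may hide nonindependence). A minimal fixed-effect inverse-variance sketch with hypothetical effect sizes:

```python
# Sketch: fixed-effect inverse-variance pooling plus a leave-one-out
# sensitivity analysis.  Effect sizes and variances are hypothetical.
effects = [0.30, 0.25, 0.40, 0.10, 0.35]   # e.g. standardized mean differences
variances = [0.02, 0.03, 0.05, 0.01, 0.04]

def pool(es, vs):
    weights = [1.0 / v for v in vs]
    return sum(w * e for w, e in zip(weights, es)) / sum(weights)

overall = pool(effects, variances)
# Re-pool with each study removed: large shifts flag influential studies
loo = [pool(effects[:i] + effects[i + 1:], variances[:i] + variances[i + 1:])
       for i in range(len(effects))]
print(round(overall, 3), [round(x, 3) for x in loo])
```

Random-effects pooling and phylogenetic covariance structures require more machinery, but the leave-one-out logic is the same.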
Mendes, Paula; Nunes, Luis Miguel; Teixeira, Margarida Ribau
2014-09-01
This article demonstrates how decision-makers can be guided in the process of defining performance target values in the balanced scorecard system. We apply a method based on sensitivity analysis with Monte Carlo simulation to the municipal solid waste management system in Loulé Municipality (Portugal). The method includes two steps: sensitivity analysis of performance indicators to identify those performance indicators with the highest impact on the balanced scorecard model outcomes; and sensitivity analysis of the target values for the previously identified performance indicators. Sensitivity analysis shows that four strategic objectives (IPP1: Comply with the national waste strategy; IPP4: Reduce nonrenewable resources and greenhouse gases; IPP5: Optimize the life-cycle of waste; and FP1: Meet and optimize the budget) alone contribute 99.7% of the variability in overall balanced scorecard value. Thus, these strategic objectives had a much stronger impact on the estimated balanced scorecard outcome than did others, with the IPP1 and the IPP4 accounting for over 55% and 22% of the variance in overall balanced scorecard value, respectively. The remaining performance indicators contribute only marginally. In addition, a change in the value of a single indicator's target value made the overall balanced scorecard value change by as much as 18%. This may lead to involuntarily biased decisions by organizations regarding performance target-setting, if not prevented with the help of methods such as that proposed and applied in this study. © The Author(s) 2014.
Processing companies' preferences for attributes of beef in Switzerland.
Boesch, Irene
2014-01-01
The aim of this work was to assess processing companies' preferences for attributes of Swiss beef. To this end, qualitative interviews were used to derive the product attributes that determine the buying decision. Through an adaptive choice-based conjoint analysis survey and latent class analysis of choice data, we compute class preferences. Results show that there are two distinct classes. A smaller class emphasizes traceability back to the birth farm and low producer price; a larger class focuses on environmental effects and origin. Additionally, we see that larger companies are more price-sensitive and smaller companies are more sensitive to the origin of the animals. The results outlined in this paper may be used to target market segments and to derive differentiation strategies based on product characteristics. Copyright © 2013 Elsevier Ltd. All rights reserved.
The economics of a pharmacy-based central intravenous additive service for paediatric patients.
Armour, D J; Cairns, C J; Costello, I; Riley, S J; Davies, E G
1996-10-01
This study was designed to compare the costs of a pharmacy-based Central Intravenous Additive Service (CIVAS) with those of traditional ward-based preparation of intravenous doses for a paediatric population. Labour costs were derived from timings of the preparation of individual doses in both the pharmacy and the ward by an independent observer. The use of disposables and diluents was recorded and their acquisition costs apportioned to the cost of each dose prepared. Data were collected from 20 CIVAS sessions (501 doses) and 26 ward-based sessions (30 doses). In addition, the costs avoided by the use of part vials in CIVAS were calculated, derived from a total of 50 CIVAS sessions. Labour, disposable and diluent costs were significantly lower for CIVAS compared with ward-based preparation (p < 0.001). The ratio of costs per dose [in 1994 pounds sterling] between ward and pharmacy was 2.35:1 (2.51 pounds:1.07 pounds). Sensitivity analysis of the best and worst staff mixes in both locations ranged from 2.3:1 to 4.0:1, always in favour of CIVAS. There were considerable costs avoided in CIVAS from the multiple use of vials; the estimated annual sum derived from the study was 44,000 pounds. In addition, CIVAS was less vulnerable to unanticipated interruptions in work flow than ward-based preparation. CIVAS for children was more economical than traditional ward-based preparation, because of a cost-minimisation effect. Sensitivity analysis showed that these advantages were maintained over a full range of skill mixes. Additionally, significant savings accrued from the multiple use of vials in CIVAS.
Hegedus, Eric J; Goode, Adam P; Cook, Chad E; Michener, Lori; Myer, Cortney A; Myer, Daniel M; Wright, Alexis A
2012-11-01
To update our previously published systematic review and meta-analysis by subjecting the literature on shoulder physical examination (ShPE) to careful analysis in order to determine each test's clinical utility. This review is an update of previous work; therefore, the terms in the Medline and CINAHL search strategies remained the same, with the exception that the search was confined to the dates November 2006 through February 2012. The previous study dates were 1966 - October 2006. Further, the original search was expanded, without date restrictions, to include two new databases: EMBASE and the Cochrane Library. The Quality Assessment of Diagnostic Accuracy Studies, version 2 (QUADAS 2) tool was used to critique the quality of each new paper. Where appropriate, data from the prior review and this review were combined to perform meta-analysis using the updated hierarchical summary receiver operating characteristic and bivariate models. Since the publication of the 2008 review, 32 additional studies were identified and critiqued. For subacromial impingement, the meta-analysis revealed that the pooled sensitivity and specificity were 72% and 60%, respectively, for the Neer test; 79% and 59%, respectively, for the Hawkins-Kennedy test; and 53% and 76%, respectively, for the painful arc. Also from the meta-analysis, regarding superior labral anterior to posterior (SLAP) tears, the test with the best sensitivity (52%) was the relocation test; the test with the best specificity (95%) was Yergason's test; and the test with the best positive likelihood ratio (2.81) was the compression-rotation test. Regarding new (to this series of reviews) ShPE tests, where meta-analysis was not possible because of a lack of sufficient studies or heterogeneity between studies, there are some individual tests that warrant further investigation. A highly specific test (specificity >80%, LR+ ≥ 5.0) from a low bias study is the passive distraction test for a SLAP lesion.
This test may rule in a SLAP lesion when positive. A sensitive test (sensitivity >80%, LR- ≤ 0.20) of note is the shoulder shrug sign, for stiffness-related disorders (osteoarthritis and adhesive capsulitis) as well as rotator cuff tendinopathy. There are six additional tests with higher sensitivities, specificities, or both but caution is urged since all of these tests have been studied only once and more than one ShPE test (ie, active compression, biceps load II) has been introduced with great diagnostic statistics only to have further research fail to replicate the results of the original authors. The belly-off and modified belly press tests for subscapularis tendinopathy, bony apprehension test for bony instability, olecranon-manubrium percussion test for bony abnormality, passive compression for a SLAP lesion, and the lateral Jobe test for rotator cuff tear give reason for optimism since they demonstrated both high sensitivities and specificities reported in low bias studies. Finally, one additional test was studied in two separate papers. The dynamic labral shear may be sensitive for SLAP lesions but, when modified, be diagnostic of labral tears generally. Based on data from the original 2008 review and this update, the use of any single ShPE test to make a pathognomonic diagnosis cannot be unequivocally recommended. There exist some promising tests but their properties must be confirmed in more than one study. Combinations of ShPE tests provide better accuracy, but marginally so. These findings seem to provide support for stressing a comprehensive clinical examination including history and physical examination. However, there is a great need for large, prospective, well-designed studies that examine the diagnostic accuracy of the many aspects of the clinical examination and what combinations of these aspects are useful in differentially diagnosing pathologies of the shoulder.
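The accuracy statistics pooled throughout this review all derive from a 2x2 diagnostic table; a sketch with hypothetical counts:

```python
# Sketch: sensitivity, specificity and likelihood ratios from a 2x2 table
# (true/false positives and negatives).  Counts are hypothetical.
def diagnostics(tp, fn, fp, tn):
    se = tp / (tp + fn)       # sensitivity: P(test+ | disease)
    sp = tn / (tn + fp)       # specificity: P(test- | no disease)
    lr_pos = se / (1 - sp)    # positive likelihood ratio
    lr_neg = (1 - se) / sp    # negative likelihood ratio
    return se, sp, lr_pos, lr_neg

se, sp, lrp, lrn = diagnostics(tp=45, fn=5, fp=10, tn=90)
print(round(se, 2), round(sp, 2), round(lrp, 1), round(lrn, 2))  # 0.9 0.9 9.0 0.11
```

A test with LR+ at or above 5 helps rule a diagnosis in when positive; an LR- at or below 0.2 helps rule it out when negative, which is the logic behind the thresholds quoted above.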
Woolgar, Alexandra; Golland, Polina; Bode, Stefan
2014-09-01
Multivoxel pattern analysis (MVPA) is a sensitive and increasingly popular method for examining differences between neural activation patterns that cannot be detected using classical mass-univariate analysis. Recently, Todd et al. ("Confounds in multivariate pattern analysis: Theory and rule representation case study", 2013, NeuroImage 77: 157-165) highlighted a potential problem for these methods: high sensitivity to confounds at the level of individual participants due to the use of directionless summary statistics. Unlike traditional mass-univariate analyses where confounding activation differences in opposite directions tend to approximately average out at group level, group level MVPA results may be driven by any activation differences that can be discriminated in individual participants. In Todd et al.'s empirical data, factoring out differences in reaction time (RT) reduced a classifier's ability to distinguish patterns of activation pertaining to two task rules. This raises two significant questions for the field: to what extent have previous multivoxel discriminations in the literature been driven by RT differences, and by what methods should future studies take RT and other confounds into account? We build on the work of Todd et al. and compare two different approaches to remove the effect of RT in MVPA. We show that in our empirical data, in contrast to that of Todd et al., the effect of RT on rule decoding is negligible, and results were not affected by the specific details of RT modelling. We discuss the meaning of and sensitivity for confounds in traditional and multivoxel approaches to fMRI analysis. We observe that the increased sensitivity of MVPA comes at a price of reduced specificity, meaning that these methods in particular call for careful consideration of what differs between our conditions of interest. 
We conclude that the additional complexity of the experimental design, analysis and interpretation needed for MVPA is still not a reason to favour a less sensitive approach. Copyright © 2014 Elsevier Inc. All rights reserved.
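One common way to "factor out" a confound such as RT, not necessarily the procedure used by Todd et al., is to residualize each voxel's signal on RT before pattern classification; a sketch on simulated data:

```python
# Sketch: removing a reaction-time (RT) confound by residualization --
# regress each voxel's activation on RT and keep the residuals.
# All data here are simulated for illustration.
import random

random.seed(1)
n_trials = 200
rt = [random.gauss(0.6, 0.1) for _ in range(n_trials)]
cond = [i % 2 for i in range(n_trials)]  # two task conditions
# Simulated voxel: condition signal plus an RT-driven component plus noise
voxel = [0.5 * c + 2.0 * t + random.gauss(0, 0.05) for c, t in zip(cond, rt)]

def residualize(y, x):
    """Return residuals of y after ordinary least-squares regression on x."""
    mx = sum(x) / len(x); my = sum(y) / len(y)
    beta = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
           sum((a - mx) ** 2 for a in x)
    return [b - my - beta * (a - mx) for a, b in zip(x, y)]

def corr(x, y):
    mx = sum(x) / len(x); my = sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

clean = residualize(voxel, rt)
c = corr(clean, rt)
print(abs(c) < 1e-9)  # True: the residuals are uncorrelated with RT
```

The condition signal survives residualization here because condition and RT were simulated independently; when they covary, removing RT also removes shared condition variance, which is exactly the interpretive issue discussed above.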
NASA Astrophysics Data System (ADS)
Thober, S.; Cuntz, M.; Mai, J.; Samaniego, L. E.; Clark, M. P.; Branch, O.; Wulfmeyer, V.; Attinger, S.
2016-12-01
Land surface models incorporate a large number of processes, described by physical, chemical and empirical equations. The agility of the models to react to different meteorological conditions is artificially constrained by having hard-coded parameters in their equations. Here we searched for hard-coded parameters in the computer code of the land surface model Noah with multiple process options (Noah-MP) to assess the model's agility during parameter estimation. We found 139 hard-coded values in all Noah-MP process options in addition to the 71 standard parameters. We performed a Sobol' global sensitivity analysis to variations of the standard and hard-coded parameters. The sensitivities of the hydrologic output fluxes latent heat and total runoff, their component fluxes, as well as photosynthesis and sensible heat were evaluated at twelve catchments of the Eastern United States with very different hydro-meteorological regimes. Noah-MP's output fluxes are sensitive to two thirds of its standard parameters. The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for evaporation, which proved to be oversensitive in other land surface models as well. Latent heat and total runoff show very similar sensitivities towards standard and hard-coded parameters. They are sensitive to both soil and plant parameters, which means that model calibrations of hydrologic or land surface models should take both soil and plant parameters into account. Sensible and latent heat exhibit almost the same sensitivities so that calibration or sensitivity analysis can be performed with either of the two. Photosynthesis has almost the same sensitivities as transpiration, which are different from the sensitivities of latent heat. Including photosynthesis and latent heat in model calibration might therefore be beneficial. Surface runoff is sensitive to almost all hard-coded snow parameters. 
These sensitivities are, however, diminished in total runoff. It is thus recommended to include the most sensitive hard-coded model parameters exposed in this study when calibrating Noah-MP.
Survey of methods for calculating sensitivity of general eigenproblems
NASA Technical Reports Server (NTRS)
Murthy, Durbha V.; Haftka, Raphael T.
1987-01-01
A survey of methods for sensitivity analysis of the algebraic eigenvalue problem for non-Hermitian matrices is presented. In addition, a modification of one method based on a better normalizing condition is proposed. Methods are classified as Direct or Adjoint and are evaluated for efficiency. Operation counts are presented in terms of matrix size, number of design variables and number of eigenvalues and eigenvectors of interest. The effect of the sparsity of the matrix and its derivatives is also considered, and typical solution times are given. General guidelines are established for the selection of the most efficient method.
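For a simple eigenvalue of a non-Hermitian matrix A(p), the adjoint-type formula surveyed here uses both left and right eigenvectors: dλ/dp = yᴴ (dA/dp) x / (yᴴ x). A sketch verified against an analytic derivative on a small matrix chosen purely for illustration:

```python
# Sketch: eigenvalue sensitivity of a non-Hermitian matrix via left/right
# eigenvectors.  The 2x2 example matrix and parameter are illustrative.
import numpy as np

p = 2.0
A = np.array([[2.0, p], [1.0, 3.0]])
dA_dp = np.array([[0.0, 1.0], [0.0, 0.0]])  # elementwise derivative of A wrt p

w, V = np.linalg.eig(A)        # right eigenvectors (columns of V)
wl, U = np.linalg.eig(A.T)     # left eigenvectors of A via eigenvectors of A^T
i = np.argmax(w.real)          # track the dominant eigenvalue (lambda = 4)
j = np.argmin(abs(wl - w[i]))  # match the corresponding left eigenvector
x, y = V[:, i], U[:, j]

dlam = (y.conj() @ dA_dp @ x) / (y.conj() @ x)
# Analytic check: lambda = (5 + sqrt(1 + 4p))/2, so dlambda/dp = 1/sqrt(1+4p)
print(abs(dlam - 1.0 / 3.0) < 1e-8)  # True at p = 2
```

The eigenvector normalization cancels in the ratio, which is one reason this form is convenient in surveys of direct versus adjoint methods.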
Chemical Analysis through CL-Detection Assisted by Periodate Oxidation
Evmiridis, Nicholaos P.; Vlessidis, Athanasios G.; Thanasoulias, Nicholas C.
2007-01-01
The progress of the research work of the author and his colleagues in the field of CL-emission generated by pyrogallol oxidation, and its further application to the direct determination of periodate and the indirect or direct determination of other compounds through a flow-injection manifold/CL-detection setup, is described. The instrumentation used for these studies was a simple flow-injection manifold that provides good reproducibility, coupled to a red-sensitive photomultiplier that gives sensitive CL-detection. In addition, recent reports on studies and analytical methods based on CL-emission generated by periodate oxidation by other authors are included. PMID:17611611
Optimal dynamic pricing for deteriorating items with reference-price effects
NASA Astrophysics Data System (ADS)
Xue, Musen; Tang, Wansheng; Zhang, Jianxiong
2016-07-01
In this paper, a dynamic pricing problem for deteriorating items with the consumers' reference-price effect is studied. An optimal control model is established to maximise the total profit, where the demand not only depends on the current price, but also is sensitive to the historical price. The continuous-time dynamic optimal pricing strategy with reference-price effect is obtained through solving the optimal control model on the basis of Pontryagin's maximum principle. In addition, numerical simulations and sensitivity analysis are carried out. Finally, some managerial suggestions that firm may adopt to formulate its pricing policy are proposed.
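The abstract does not give the model's equations, so the following is an illustration only: a discrete-time simulation of demand that depends on both the current price and an adapting reference price, in the spirit of the paper. All functional forms and parameter values below are invented for the sketch:

```python
# Illustrative sketch of a reference-price effect (NOT the paper's model):
# demand falls with the posted price and with the "surcharge" above the
# consumers' reference price, which itself adapts toward the posted price.
def simulate(price, periods=50, a=10.0, b=1.0, c=0.5, beta=0.3, r0=5.0, cost=2.0):
    r, profit = r0, 0.0
    for _ in range(periods):
        demand = max(0.0, a - b * price - c * (price - r))
        profit += (price - cost) * demand
        r += beta * (price - r)   # reference price adapts toward the price
    return profit

# A constant price above cost earns more than pricing at cost:
print(simulate(4.0) > simulate(2.0))  # True
```

The continuous-time version in the paper replaces this recursion with an ordinary differential equation for the reference price and solves for the price path via Pontryagin's maximum principle.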
Bone Composition Diagnostics: Photoacoustics Versus Ultrasound
NASA Astrophysics Data System (ADS)
Yang, Lifeng; Lashkari, Bahman; Mandelis, Andreas; Tan, Joel W. Y.
2015-06-01
Ultrasound (US) backscatter from bones depends on the mechanical properties and the microstructure of the interrogated bone. On the other hand, photoacoustics (PA) is sensitive to the optical properties of tissue and can detect composition variation. Therefore, PA can provide complementary information about bone health and integrity. In this work, a comparative study of US backscattering and PA back-propagating signals from animal trabecular bones was performed. Both methods were applied using a linear frequency modulation chirp and matched filtering. A 2.2 MHz ultrasonic transducer was employed to detect both signals. The use of the frequency domain facilitates spectral analysis. The variation of the signals shows that, in addition to sensitivity to mineral changes, PA exhibits sensitivity to changes in the organic part of the bone. It is, therefore, concluded that the combination of both modalities can provide more detailed, complementary information on bone health than either method separately. In addition, comparison of PA and US depthwise images shows the higher penetration of US. Surface scan images exhibit very weak correlation between US and PA, which could be caused by the different signal generation origins in mechanical versus optical properties, respectively.
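Both modalities above rely on linear frequency modulation (LFM) chirp excitation with matched filtering, i.e. pulse compression. A sketch on a synthetic delayed echo, with illustrative parameters rather than those of the experiment:

```python
# Sketch: matched filtering of an LFM chirp.  The echo is a synthetic,
# attenuated, delayed copy of the chirp in noise; the matched-filter
# (cross-correlation) output peaks at the true delay.
import numpy as np

fs = 1.0e6                       # sample rate, 1 MHz
t = np.arange(0, 1e-3, 1 / fs)   # 1 ms sweep
f0, f1 = 1.0e5, 3.0e5            # 100 -> 300 kHz chirp
chirp = np.sin(2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / t[-1] * t ** 2))

delay = 250                      # samples
echo = np.zeros(len(t) + 500)
echo[delay:delay + len(chirp)] += 0.3 * chirp
echo += 0.05 * np.random.default_rng(0).standard_normal(len(echo))

# Matched filter = cross-correlation with the transmitted chirp
out = np.correlate(echo, chirp, mode="valid")
peak = int(np.argmax(out))
print(peak)  # 250: the compression peak sits at the true delay
```

Pulse compression is what lets a long, low-peak-power chirp achieve the time resolution of a short pulse, which benefits both the US backscatter and PA back-propagation measurements described above.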
Chevance, Aurélie; Schuster, Tibor; Steele, Russell; Ternès, Nils; Platt, Robert W
2015-10-01
Robustness of an existing meta-analysis can justify decisions on whether to conduct an additional study addressing the same research question. We illustrate the graphical assessment of the potential impact of an additional study on an existing meta-analysis using published data on statin use and the risk of acute kidney injury. A previously proposed graphical augmentation approach is used to assess the sensitivity of the current test and heterogeneity statistics extracted from existing meta-analysis data. In addition, we extended the graphical augmentation approach to assess potential changes in the pooled effect estimate after updating a current meta-analysis and applied the three graphical contour definitions to data from meta-analyses on statin use and acute kidney injury risk. In the considered example data, the pooled effect estimates and heterogeneity indices proved considerably robust to the addition of a future study. Notably, for some previously inconclusive meta-analyses, a study update might yield a statistically significant increase in kidney injury risk associated with higher statin exposure. The illustrated contour approach should become a standard tool for the assessment of the robustness of meta-analyses. It can guide decisions on whether to conduct additional studies addressing a relevant research question. Copyright © 2015 Elsevier Inc. All rights reserved.
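The impact of one additional study on a fixed-effect pooled estimate can be sketched with inverse-variance weighting; the effect sizes and variances below are hypothetical, not the statin data:

```python
def pooled(effects, variances):
    """Fixed-effect (inverse-variance) pooled estimate and its variance."""
    w = [1 / v for v in variances]
    est = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    return est, 1 / sum(w)

# Hypothetical log-risk-ratios and variances from an existing meta-analysis
effects, variances = [0.10, 0.25, 0.18], [0.04, 0.09, 0.05]
before, var_before = pooled(effects, variances)

# Impact of one additional (hypothetical) study on the pooled estimate
after, var_after = pooled(effects + [0.30], variances + [0.02])
```

Sweeping the added study's effect and variance over a grid of plausible values, and contouring where the updated estimate crosses significance, is the essence of the graphical augmentation approach the abstract extends.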
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Guoping; Mayes, Melanie; Parker, Jack C
2010-01-01
We implemented the widely used CXTFIT code in Excel to provide flexibility and added sensitivity and uncertainty analysis functions to improve transport parameter estimation and to facilitate model discrimination for multi-tracer experiments on structured soils. Analytical solutions for one-dimensional equilibrium and nonequilibrium convection dispersion equations were coded as VBA functions so that they could be used as ordinary math functions in Excel for forward predictions. Macros with user-friendly interfaces were developed for optimization, sensitivity analysis, uncertainty analysis, error propagation, response surface calculation, and Monte Carlo analysis. As a result, any parameter with transformations (e.g., dimensionless, log-transformed, species-dependent reactions, etc.) could be estimated with uncertainty and sensitivity quantification for multiple tracer data at multiple locations and times. Prior information and observation errors could be incorporated into the weighted nonlinear least squares method with a penalty function. Users are able to change selected parameter values and view the results via embedded graphics, resulting in a flexible tool applicable to modeling transport processes and to teaching students about parameter estimation. The code was verified by comparing to a number of benchmarks with CXTFIT 2.0. It was applied to improve parameter estimation for four typical tracer experiment data sets in the literature using multi-model evaluation and comparison. Additional examples were included to illustrate the flexibilities and advantages of CXTFIT/Excel. The VBA macros were designed for general purpose and could be used for any parameter estimation/model calibration when the forward solution is implemented in Excel. A step-by-step tutorial, example Excel files and the code are provided as supplemental material.
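A CXTFIT-style fit, weighted nonlinear least squares against an analytical convection-dispersion solution, can be sketched in Python. This uses only the first erfc term of the continuous-injection solution as an approximation, and all parameter values and data are synthetic assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def breakthrough(t, v, D, x=10.0):
    """Approximate 1-D equilibrium convection-dispersion solution
    (continuous injection, first erfc term only)."""
    return 0.5 * erfc((x - v * t) / (2 * np.sqrt(D * t)))

# Synthetic breakthrough-curve data at x = 10 with small observation noise
t = np.linspace(0.5, 40, 60)
true_v, true_D = 1.2, 0.8
rng = np.random.default_rng(1)
obs = breakthrough(t, true_v, true_D) + 0.01 * rng.standard_normal(t.size)

# Weighted nonlinear least squares, the estimation core of CXTFIT-style fitting;
# bounds keep v and D physically positive during optimization.
(v_hat, D_hat), cov = curve_fit(breakthrough, t, obs, p0=[1.0, 1.0],
                                sigma=np.full(t.size, 0.01),
                                bounds=([0.1, 0.1], [5.0, 5.0]))
```

The covariance matrix returned by the fit is what uncertainty quantification builds on; response surfaces and Monte Carlo analysis, as in the Excel tool, repeat such fits over perturbed data or parameter grids.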
Distributed Evaluation of Local Sensitivity Analysis (DELSA), with application to hydrologic models
Rakovec, O.; Hill, Mary C.; Clark, M.P.; Weerts, A. H.; Teuling, A. J.; Uijlenhoet, R.
2014-01-01
This paper presents a hybrid local-global sensitivity analysis method termed the Distributed Evaluation of Local Sensitivity Analysis (DELSA), which is used here to identify important and unimportant parameters and evaluate how model parameter importance changes as parameter values change. DELSA uses derivative-based “local” methods to obtain the distribution of parameter sensitivity across the parameter space, which promotes consideration of sensitivity analysis results in the context of simulated dynamics. This work presents DELSA, discusses how it relates to existing methods, and uses two hydrologic test cases to compare its performance with the popular global, variance-based Sobol' method. The first test case is a simple nonlinear reservoir model with two parameters. The second test case involves five alternative “bucket-style” hydrologic models with up to 14 parameters applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both examples, Sobol' and DELSA identify similar important and unimportant parameters, with DELSA enabling more detailed insight at much lower computational cost. For example, in the real-world problem the time delay in runoff is the most important parameter in all models, but DELSA shows that for about 20% of parameter sets it is not important at all and alternative mechanisms and parameters dominate. Moreover, the time delay was identified as important in regions producing poor model fits, whereas other parameters were identified as more important in regions of the parameter space producing better model fits. The ability to understand how parameter importance varies through parameter space is critical to inform decisions about, for example, additional data collection and model development. The ability to perform such analyses with modest computational requirements provides exciting opportunities to evaluate complicated models as well as many alternative models.
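The DELSA idea, derivative-based local sensitivities evaluated across many sampled parameter sets, can be sketched with a toy two-parameter model; the model form and parameter ranges are illustrative assumptions:

```python
import numpy as np

def model(theta):
    """Toy nonlinear reservoir-style model with two parameters."""
    k, s = theta
    return s * (1.0 - np.exp(-k))

def local_sensitivity(theta, eps=1e-6):
    """Finite-difference local (first-order) sensitivities at one parameter set."""
    base = model(theta)
    grads = []
    for i in range(len(theta)):
        pert = list(theta)
        pert[i] += eps
        grads.append((model(pert) - base) / eps)
    return grads

# DELSA-style: evaluate local sensitivities across many sampled parameter sets,
# yielding a distribution of parameter importance over the parameter space.
rng = np.random.default_rng(0)
samples = rng.uniform([0.1, 0.5], [2.0, 3.0], size=(100, 2))
sens = np.array([local_sensitivity(list(th)) for th in samples])
```

Examining the distribution of `sens` across samples, rather than a single global index, is what lets DELSA reveal regions of parameter space where a nominally important parameter stops mattering.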
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan; Bittker, David A.
1994-01-01
LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part II of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part II describes the code, how to modify it, and its usage, including preparation of the problem data file required to execute LSENS. Code usage is illustrated by several example problems, which further explain preparation of the problem data file and show how to obtain desired accuracy in the computed results. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions. Part I (NASA RP-1328) derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved by LSENS. Part III (NASA RP-1330) explains the kinetics and kinetics-plus-sensitivity-analysis problems supplied with LSENS and presents sample results.
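The sensitivity coefficients LSENS computes, derivatives of the dependent variables with respect to rate-coefficient parameters, can be illustrated on the simplest kinetics problem, first-order decay, where the sensitivity is known in closed form:

```python
import numpy as np

# First-order decay y' = -k*y, y(0) = y0, has solution y = y0*exp(-k*t).
# The sensitivity of y to the rate coefficient k is dy/dk = -t*y0*exp(-k*t).

def y(t, k, y0=1.0):
    return y0 * np.exp(-k * t)

def sens_analytic(t, k, y0=1.0):
    return -t * y0 * np.exp(-k * t)

# Cross-check with a finite-difference estimate, the simplest numerical
# route to the sensitivity coefficients a code like LSENS computes.
t, k, eps = 2.0, 0.5, 1e-6
fd = (y(t, k + eps) - y(t, k)) / eps
```

For realistic mechanisms the sensitivities are obtained by integrating auxiliary ODEs alongside the kinetics rather than by finite differences, but the quantity computed is the same.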
Schuff, M M; Gore, J P; Nauman, E A
2013-12-01
The treatment of cancerous tumors is dependent upon the delivery of therapeutics through the blood by means of the microcirculation. Differences in the vasculature of normal and malignant tissues have been recognized, but it is not fully understood how these differences affect transport and the applicability of existing mathematical models has been questioned at the microscale due to the complex rheology of blood and fluid exchange with the tissue. In addition to determining an appropriate set of governing equations it is necessary to specify appropriate model parameters based on physiological data. To this end, a two stage sensitivity analysis is described which makes it possible to determine the set of parameters most important to the model's calibration. In the first stage, the fluid flow equations are examined and a sensitivity analysis is used to evaluate the importance of 11 different model parameters. Of these, only four substantially influence the intravascular axial flow providing a tractable set that could be calibrated using red blood cell velocity data from the literature. The second stage also utilizes a sensitivity analysis to evaluate the importance of 14 model parameters on extravascular flux. Of these, six exhibit high sensitivity and are integrated into the model calibration using a response surface methodology and experimental intra- and extravascular accumulation data from the literature (Dreher et al. in J Natl Cancer Inst 98(5):335-344, 2006). The model exhibits good agreement with the experimental results for both the mean extravascular concentration and the penetration depth as a function of time for inert dextran over a wide range of molecular weights.
Jo, J A; Fang, Q; Papaioannou, T; Qiao, J H; Fishbein, M C; Dorafshar, A; Reil, T; Baker, D; Freischlag, J; Marcu, L
2004-01-01
This study investigates the ability of new analytical methods of time-resolved laser-induced fluorescence spectroscopy (TR-LIFS) data to characterize tissue in-vivo, such as the composition of atherosclerotic vulnerable plaques. A total of 73 TR-LIFS measurements were taken in-vivo from the aorta of 8 rabbits, and subsequently analyzed using the Laguerre deconvolution technique. The investigated spots were classified as normal aorta, thin or thick lesions, and lesions rich in either collagen or macrophages/foam-cells. Different linear and nonlinear classification algorithms (linear discriminant analysis, stepwise linear discriminant analysis, principal component analysis, and feedforward neural networks) were developed using spectral and TR features (ratios of intensity values and Laguerre expansion coefficients, respectively). Normal intima and thin lesions were discriminated from thick lesions (sensitivity >90%, specificity 100%) using only spectral features. However, both spectral and time-resolved features were necessary to discriminate thick lesions rich in collagen from thick lesions rich in foam cells (sensitivity >85%, specificity >93%), and thin lesions rich in foam cells from normal aorta and thin lesions rich in collagen (sensitivity >85%, specificity >94%). Based on these findings, we believe that TR-LIFS information derived from the Laguerre expansion coefficients can provide a valuable additional dimension for in-vivo tissue characterization.
Chromatographic-ICPMS methods for trace element and isotope analysis of water and biogenic calcite
NASA Astrophysics Data System (ADS)
Klinkhammer, G. P.; Haley, B. A.; McManus, J.; Palmer, M. R.
2003-04-01
ICP-MS is a powerful technique because of its sensitivity and speed of analysis. This is especially true for refractory elements that are notoriously difficult using TIMS and less energetic techniques. However, as ICP-MS instruments become more sensitive to elements of interest they also become more sensitive to interference. This becomes a pressing issue when analyzing samples with high total dissolved solids. This paper describes two trace element methods that overcome these problems by using chromatographic techniques to precondition samples prior to analysis by ICP-MS: separation of rare earth elements (REEs) from seawater using HPLC-ICPMS, and flow-through dissolution of foraminiferal calcite. Using HPLC in combination with ICP-MS it is possible to isolate the REEs from matrix, other transition elements, and each other. This method has been developed for small volume samples (5ml) making it possible to analyze sediment pore waters. As another example, subjecting foram shells to flow-through reagent addition followed by time-resolved analysis in the ICP-MS allows for systematic cleaning and dissolution of foram shells. This method provides information about the relationship between dissolution tendency and elemental composition. Flow-through is also amenable to automation thus yielding the high sample throughput required for paleoceanography, and produces a highly resolved elemental matrix that can be statistically analyzed.
Sensitivity analyses for sparse-data problems-using weakly informative bayesian priors.
Hamra, Ghassan B; MacLehose, Richard F; Cole, Stephen R
2013-03-01
Sparse-data problems are common, and approaches are needed to evaluate the sensitivity of parameter estimates based on sparse data. We propose a Bayesian approach that uses weakly informative priors to quantify sensitivity of parameters to sparse data. The weakly informative prior is based on accumulated evidence regarding the expected magnitude of relationships using relative measures of disease association. We illustrate the use of weakly informative priors with an example of the association of lifetime alcohol consumption and head and neck cancer. When data are sparse and the observed information is weak, a weakly informative prior will shrink parameter estimates toward the prior mean. Additionally, the example shows that when data are not sparse and the observed information is not weak, a weakly informative prior is not influential. Advancements in implementation of Markov Chain Monte Carlo simulation make this sensitivity analysis easily accessible to the practicing epidemiologist.
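The shrinkage behaviour described, strong when the data are sparse and negligible when they are not, can be sketched with a conjugate normal approximation; the prior variance and data values below are hypothetical:

```python
def posterior_mean(obs_logor, obs_var, prior_mean=0.0, prior_var=0.5):
    """Conjugate normal posterior mean: precision-weighted average of the
    observed log-odds-ratio and a weakly informative prior."""
    w_obs, w_pr = 1 / obs_var, 1 / prior_var
    return (w_obs * obs_logor + w_pr * prior_mean) / (w_obs + w_pr)

# Weak data (large variance): the prior pulls the estimate strongly toward 0
sparse = posterior_mean(obs_logor=2.0, obs_var=1.5)

# Strong data (small variance): the prior is barely influential
rich = posterior_mean(obs_logor=2.0, obs_var=0.01)
```

The same precision-weighting logic underlies the MCMC-based analysis in the abstract; the conjugate case simply makes the shrinkage visible in one line of arithmetic.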
Spin-exchange relaxation-free magnetometer with nearly parallel pump and probe beams
Karaulanov, Todor; Savukov, Igor; Kim, Young Jin
2016-03-22
We constructed a spin-exchange relaxation-free (SERF) magnetometer with a small angle between the pump and probe beams facilitating a multi-channel design with a flat pancake cell. This configuration provides almost complete overlap of the beams in the cell, and prevents the pump beam from entering the probe detection channel. By coupling the lasers in multi-mode fibers, without an optical isolator or field modulation, we demonstrate a sensitivity of 10 fT/√Hz for frequencies between 10 Hz and 100 Hz. In addition to the experimental study of sensitivity, we present a theoretical analysis of SERF magnetometer response to magnetic fields for small-angle and parallel-beam configurations, and show that at optimal DC offset fields the magnetometer response is comparable to that in the orthogonal-beam configuration. Based on the analysis, we also derive fundamental and probe-limited sensitivities for the arbitrary non-orthogonal geometry. The expected practical and fundamental sensitivities are of the same order as those in the orthogonal geometry. As a result, we anticipate that our design will be useful for magnetoencephalography (MEG) and magnetocardiography (MCG) applications.
Characterization of iron-doped lithium niobate for holographic storage applications
NASA Technical Reports Server (NTRS)
Shah, R. R.; Kim, D. M.; Rabson, T. A.; Tittel, F. K.
1976-01-01
A comprehensive characterization of chemical and holographic properties of eight systematically chosen Fe:LiNbO3 crystals is performed in order to determine optimum performance of the crystals in holographic storage and display applications. The discussion covers determination of Fe(2+) and Fe(3+) ion concentrations in the Fe:LiNbO3 system from optical absorption and EPR measurements; establishment of the relation between the photorefractive sensitivity and the Fe(2+) and Fe(3+) concentrations; study of the spectral dependence, the effect of oxygen annealing, and the effect of other impurities on the photorefractive sensitivity; analysis of the diffraction efficiency curves for different crystals and corresponding sensitivities with the dynamic theory of hologram formation; and determination of the bulk photovoltaic fields as a function of Fe(2+) concentrations. In addition to the absolute Fe(2+) concentration, the relative concentrations of Fe(2+) and Fe(3+) ions are also important in determining the photorefractive sensitivity. There exists an optimal set of crystal characteristics for which the photorefractive sensitivity is most favorable.
Cost-effectiveness of additional catheter-directed thrombolysis for deep vein thrombosis.
Enden, T; Resch, S; White, C; Wik, H S; Kløw, N E; Sandset, P M
2013-06-01
Additional treatment with catheter-directed thrombolysis (CDT) has recently been shown to reduce post-thrombotic syndrome (PTS). To estimate the cost effectiveness of additional CDT compared with standard treatment alone. Using a Markov decision model, we compared the two treatment strategies in patients with a high proximal deep vein thrombosis (DVT) and a low risk of bleeding. The model captured the development of PTS, recurrent venous thromboembolism and treatment-related adverse events within a lifetime horizon and the perspective of a third-party payer. Uncertainty was assessed with one-way and probabilistic sensitivity analyses. Model inputs from the CaVenT study included PTS development, major bleeding from CDT and utilities for post DVT states including PTS. The remaining clinical inputs were obtained from the literature. Costs obtained from the CaVenT study, hospital accounts and the literature are expressed in US dollars ($); effects in quality adjusted life years (QALY). In base-case analyses, additional CDT accumulated 32.31 QALYs compared with 31.68 QALYs after standard treatment alone. Direct medical costs were $64,709 for additional CDT and $51,866 for standard treatment. The incremental cost-effectiveness ratio (ICER) was $20,429/QALY gained. One-way sensitivity analysis showed model sensitivity to the clinical efficacy of both strategies, but the ICER remained < $55,000/QALY over the full range of all parameters. The probability that CDT is cost effective was 82% at a willingness to pay threshold of $50,000/QALY gained. Additional CDT is likely to be a cost-effective alternative to the standard treatment for patients with a high proximal DVT and a low risk of bleeding. © 2013 International Society on Thrombosis and Haemostasis.
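The base-case ICER follows directly from the figures reported above; a one-way sensitivity step then varies a single input while holding the others at base case (the alternative QALY value below is a hypothetical illustration):

```python
def icer(cost_new, cost_std, qaly_new, qaly_std):
    """Incremental cost-effectiveness ratio in $ per QALY gained."""
    return (cost_new - cost_std) / (qaly_new - qaly_std)

# Base-case figures reported in the abstract
base = icer(64709, 51866, 32.31, 31.68)

# One-way sensitivity: shrink the incremental QALY gain (hypothetical value)
# while holding costs fixed; the ICER rises as the gain shrinks.
low_eff = icer(64709, 51866, 32.31, 31.88)
```

Computed from the rounded inputs this gives roughly $20,400/QALY, consistent with the reported $20,429/QALY gained; the small difference reflects rounding of the published QALY and cost figures.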
Factor weighting in DRASTIC modeling.
Pacheco, F A L; Pires, L M G R; Santos, R M B; Sanches Fernandes, L F
2015-02-01
Evaluation of aquifer vulnerability comprehends the integration of very diverse data, including soil characteristics (texture), hydrologic settings (recharge), aquifer properties (hydraulic conductivity), environmental parameters (relief), and ground water quality (nitrate contamination). It is therefore a multi-geosphere problem to be handled by a multidisciplinary team. The DRASTIC model remains the most popular technique in use for aquifer vulnerability assessments. The algorithm calculates an intrinsic vulnerability index based on a weighted addition of seven factors. In many studies, the method is subject to adjustments, especially in the factor weights, to meet the particularities of the studied regions. However, adjustments made by different techniques may lead to markedly different vulnerabilities and hence to insecurity in the selection of an appropriate technique. This paper reports the comparison of 5 weighting techniques, an enterprise not attempted before. The studied area comprises 26 aquifer systems located in Portugal. The tested approaches include: the Delphi consensus (original DRASTIC, used as reference), Sensitivity Analysis, Spearman correlations, Logistic Regression and Correspondence Analysis (used as adjustment techniques). In all cases but Sensitivity Analysis, adjustment techniques have privileged the factors representing soil characteristics, hydrologic settings, aquifer properties and environmental parameters, by leveling their weights to ≈4.4, and have subordinated the factors describing the aquifer media by downgrading their weights to ≈1.5. Logistic Regression predicts the highest and Sensitivity Analysis the lowest vulnerabilities. Overall, the vulnerability indices may be separated by a maximum value of 51 points. This represents an uncertainty of 2.5 vulnerability classes, because they are 20 points wide. 
Given this ambiguity, the selection of a weighting technique to integrate a vulnerability index may require additional expertise to be set up satisfactorily. Following a general criterion that weights must be proportional to the range of the ratings, Correspondence Analysis may be recommended as the best adjustment technique. Copyright © 2014 Elsevier B.V. All rights reserved.
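The DRASTIC index itself is a weighted addition of seven factor ratings. A minimal sketch using the original Delphi-consensus weights, with hypothetical ratings for a single map cell:

```python
# DRASTIC intrinsic vulnerability index: weighted sum of seven factor ratings.
# Weights are the original Delphi-consensus values; ratings are hypothetical.
weights = {"D": 5, "R": 4, "A": 3, "S": 2, "T": 1, "I": 5, "C": 3}
ratings = {"D": 7, "R": 6, "A": 8, "S": 6, "T": 9, "I": 8, "C": 4}

index = sum(weights[f] * ratings[f] for f in weights)
```

The adjustment techniques compared in the abstract all leave this additive form intact and only replace the weight values, which is why different techniques can shift a cell across vulnerability classes.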
Briso, André Luiz Fraga; Rahal, Vanessa; Azevedo, Fernanda Almeida de; Gallinari, Marjorie de Oliveira; Gonçalves, Rafael Simões; Santos, Paulo Henrique Dos; Cintra, Luciano Tavares Angelo
2018-01-01
Objective The objective of this study was to evaluate dental sensitivity using visual analogue scale, a Computerized Visual Analogue Scale (CoVAS) and a neurosensory analyzer (TSA II) during at-home bleaching with 10% carbamide peroxide, with and without potassium oxalate. Materials and Methods Power Bleaching 10% containing potassium oxalate was used on one maxillary hemi-arch of the 25 volunteers, and Opalescence 10% was used on the opposite hemi-arch. Bleaching agents were used daily for 3 weeks. Analysis was performed before treatment, 24 hours later, 7, 14, and 21 days after the start of the treatment, and 7 days after its conclusion. The spontaneous tooth sensitivity was evaluated using the visual analogue scale and the sensitivity caused by a continuous 0°C stimulus was analyzed using CoVAS. The cold sensation threshold was also analyzed using the TSA II. The temperatures obtained were statistically analyzed using ANOVA and Tukey's test (α=5%). Results The data obtained with the other methods were also analyzed. At 24 hours and at 7 and 14 days after the beginning of the treatment, over 20% of the teeth presented spontaneous sensitivity; the normal condition was restored after the end of the treatment. Regarding the cold sensation temperatures, both products sensitized the teeth (p<0.05) and no differences were detected between the products in each period (p>0.05). In addition, when they were compared using CoVAS, Power Bleaching caused the highest levels of sensitivity in all study periods, with the exception of the 14th day of treatment. Conclusion We concluded that the bleaching treatment sensitized the teeth and the product with potassium oxalate was not able to modulate tooth sensitivity.
Hearon, Keith; Besset, Celine J.; Lonnecker, Alexander T.; Ware, Taylor; Voit, Walter E.; Wilson, Thomas S.; Wooley, Karen L.; Maitland, Duncan J.
2014-01-01
The synthetic design and thermomechanical characterization of shape memory polymers (SMPs) built from a new polyurethane chemistry that enables facile, bulk and tunable cross-linking of low-molecular weight thermoplastics by electron beam irradiation is reported in this study. SMPs exhibit stimuli-induced geometry changes and are being proposed for applications in numerous fields. We have previously reported a polyurethane SMP system that exhibits the complex processing capabilities of thermoplastic polymers and the mechanical robustness and tunability of thermomechanical properties that are often characteristic of thermoset materials. These previously reported polyurethanes suffer practically because the thermoplastic molecular weights needed to achieve target cross-link densities severely limit high-throughput thermoplastic processing and because thermally unstable radiation-sensitizing additives must be used to achieve high enough cross-link densities to enable desired tunable shape memory behavior. In this study, we demonstrate the ability to manipulate cross-link density in low-molecular weight aliphatic thermoplastic polyurethane SMPs (Mw as low as ~1.5 kDa) without radiation-sensitizing additives by incorporating specific structural motifs into the thermoplastic polymer side chains that we hypothesized would significantly enhance susceptibility to e-beam cross-linking. A custom diol monomer was first synthesized and then implemented in the synthesis of neat thermoplastic polyurethane SMPs that were irradiated at doses ranging from 1 to 500 kGy. Dynamic mechanical analysis (DMA) demonstrated rubbery moduli to be tailorable between 0.1 and 55 MPa, and both DMA and sol/gel analysis results provided fundamental insight into our hypothesized mechanism of electron beam cross-linking, which enables controllable bulk cross-linking to be achieved in highly processable, low-molecular weight thermoplastic shape memory polymers without sensitizing additives. 
PMID:25411511
Avila, Jacob; Smith, Ben; Mead, Therese; Jurma, Duane; Dawson, Matthew; Mallin, Michael; Dugan, Adam
2018-04-24
It is unknown whether the addition of M-mode to B-mode ultrasound (US) has any effect on the overall accuracy of interpretation of lung sliding in the evaluation of a pneumothorax by emergency physicians. This study aimed to determine what effect, if any, this addition has on US interpretation by emergency physicians of varying training levels. One hundred forty emergency physicians were randomized via online software to receive a quiz with B-mode clips alone or B-mode with corresponding M-mode images and asked to identify the presence or absence of lung sliding. The sensitivity, specificity, and accuracy of the diagnosis of lung sliding with and without M-mode US were compared. Overall, the sensitivities, specificities, and accuracies of B-mode + M-mode US versus B-mode US alone were 93.1% and 93.2% (P = .8), 96.0% and 89.8% (P < .0001), and 91.5% and 94.5% (P = .0091), respectively. A subgroup analysis showed that in those providers with fewer than 250 total US scans done previously, M-mode US increased accuracy from 88.2% (95% confidence interval, 86.2%-90.2%) to 94.4% (92.8%-96.0%; P = .001) and increased the specificity from 87.0% (84.5%-89.5%) to 97.2% (95.4%-99.0%; P < .0001) compared with B-mode US alone. There was no statistically significant difference observed in the sensitivity, specificity, and accuracy of B-mode + M-mode US compared with B-mode US alone in those with more than 250 scans. The addition of M-mode images to B-mode clips aids in the accurate diagnosis of lung sliding by emergency physicians. The subgroup analysis showed that the benefit of M-mode US disappears after emergency physicians have performed more than 250 US examinations. © 2018 by the American Institute of Ultrasound in Medicine.
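The reported sensitivity, specificity, and accuracy figures are standard 2×2-table quantities. A minimal sketch; the counts below are hypothetical, not the study's raw data:

```python
def diagnostics(tp, fn, tn, fp):
    """Sensitivity, specificity and accuracy from 2x2 confusion-table counts."""
    sens = tp / (tp + fn)            # true positives among all actual positives
    spec = tn / (tn + fp)            # true negatives among all actual negatives
    acc = (tp + tn) / (tp + fn + tn + fp)
    return sens, spec, acc

# Hypothetical counts for illustration only
sens, spec, acc = diagnostics(tp=93, fn=7, tn=96, fp=4)
```

Comparing two modes, as the study does, amounts to computing these quantities per arm and testing the paired differences; accuracy can move opposite to specificity when the modes trade false positives for false negatives.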
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-02
... responsible for making sure that your comment does not include any sensitive health information, like medical records or other individually identifiable health information. In addition, do not include any ``[t]rade... between CoStar and Xceligent, Inc. (``Xceligent''), and increasing the likelihood that CoStar will...
Wei, Binnian; McGuffey, James E; Blount, Benjamin C; Wang, Lanqing
2016-01-01
Maternal exposure to marijuana during the lactation period-either active or passive-has prompted concerns about transmission of cannabinoids to breastfed infants and possible subsequent adverse health consequences. Assessing these health risks requires a sensitive analytical approach that is able to quantitatively measure trace-level cannabinoids in breast milk. Here, we describe a saponification-solid phase extraction approach combined with ultra-high-pressure liquid chromatography-tandem mass spectrometry for simultaneously quantifying Δ9-tetrahydrocannabinol (THC), cannabidiol (CBD), and cannabinol (CBN) in breast milk. We demonstrate for the first time that constraints on sensitivity can be overcome by utilizing alkaline saponification of the milk samples. After extensively optimizing the saponification procedure, the validated method exhibited limits of detection of 13, 4, and 66 pg/mL for THC, CBN, and CBD, respectively. Notably, the sensitivity achieved was substantially improved; for instance, the limit of detection for THC is at least 100-fold lower than values previously reported in the literature. This is essential for monitoring cannabinoids in breast milk resulting from passive or nonrecent active maternal exposure. Furthermore, we simultaneously acquired multiple reaction monitoring transitions for 12C- and 13C-analyte isotopes. This combined analysis largely facilitated data acquisition by reducing the repetitive analysis rate for samples exceeding the linear limits of the 12C-analytes. In addition to high sensitivity and a broad quantitation range, this method delivers excellent accuracy (relative error within ±10%), precision (relative standard deviation <10%), and efficient analysis. In future studies, we expect this method to play a critical role in assessing infant exposure to cannabinoids through breastfeeding.
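Detection limits like the pg/mL figures above are commonly estimated from blank variability and the calibration slope; a generic ICH-style sketch (the constant k = 3.3 and the inputs are illustrative, not the authors' validation procedure):

```python
def limit_of_detection(sd_blank, slope, k=3.3):
    """Generic LOD estimate: k times the blank's standard deviation
    divided by the calibration-curve slope (ICH-style convention)."""
    return k * sd_blank / slope

# hypothetical: blank noise of 0.5 response units, slope of 100 units per (pg/mL)
lod = limit_of_detection(sd_blank=0.5, slope=100.0)
```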
Photon spectroscopy by picoseconds differential Geiger-mode Si photomultiplier
NASA Astrophysics Data System (ADS)
Yamamoto, Masanobu; Hernandez, Keegan; Robinson, J. Paul
2018-02-01
The pixel-array silicon photomultiplier (SiPM) is known as an excellent photon sensor, with a picosecond avalanche process capable of million-fold amplification of photoelectrons. In addition, its higher quantum efficiency (QE), small size, low bias voltage, and light durability are attractive features for biological applications. The primary disadvantage is the limited dynamic range due to the 50 ns recharge process; a high dark count is an additional hurdle. We have developed a wide-dynamic-range Si photon detection system applying ultra-fast differentiation signal processing, temperature control by a thermoelectric device, and a giga-photon counter with a 9-decimal-digit dynamic range. The tested performance is six orders of magnitude, with 600 ps pulse width and sub-fW sensitivity. Combined with 405 nm laser illumination and a motorized monochromator, Laser Induced Fluorescence Photon Spectrometry (LIPS) has been developed with a scan range from 200 to 900 nm at a maximum of 500 nm/s and 1 nm FWHM. Based on the Planck equation E = hν, this photon-counting spectrum provides a fundamental advance in spectral analysis by digital processing. Advantages include its ultimate sensitivity and theoretical linearity, as well as quantitative and logarithmic analysis without the use of arbitrary units. Laser excitation is also useful for evaluating photobleaching or oxidation in materials under higher-energy illumination. The typical photocurrent detection limit of traditional instruments is about 1 pW, which corresponds to millions of photons; using our system, however, it is possible to evaluate the photon spectrum and determine background noise and autofluorescence (AFL) in the optics of any cytometry or imaging system component. In addition, the photon-stream digital signal opens up a new approach for picosecond time-domain analysis. Photon spectroscopy is a powerful method for the analysis of fluorescence and optical properties in biology.
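The photon arithmetic behind the 1 pW remark follows from E = hν = hc/λ; a quick illustrative check at the 405 nm wavelength mentioned above:

```python
H = 6.62607015e-34  # Planck constant, J*s
C = 2.99792458e8    # speed of light, m/s

def photons_per_second(power_w, wavelength_m):
    """Photon flux of monochromatic light: n = P / (h*c / lambda)."""
    return power_w * wavelength_m / (H * C)

# ~1 pW at 405 nm is roughly 2 million photons per second, consistent
# with the "millions of photons" detection-limit remark in the abstract.
n = photons_per_second(1e-12, 405e-9)
```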
Cost-effectiveness analysis of implants versus autologous perforator flaps using the BREAST-Q.
Matros, Evan; Albornoz, Claudia R; Razdan, Shantanu N; Mehrara, Babak J; Macadam, Sheina A; Ro, Teresa; McCarthy, Colleen M; Disa, Joseph J; Cordeiro, Peter G; Pusic, Andrea L
2015-04-01
Reimbursement has been recognized as a physician barrier to autologous reconstruction. Autologous reconstructions are more expensive than prosthetic reconstructions, but provide greater health-related quality of life. The authors' hypothesis is that autologous tissue reconstructions are cost-effective compared with prosthetic techniques when considering health-related quality of life and patient satisfaction. A cost-effectiveness analysis from the payer perspective, including patient input, was performed for unilateral and bilateral reconstructions with deep inferior epigastric perforator (DIEP) flaps and implants. The effectiveness measure was derived using the BREAST-Q and interpreted as the cost for obtaining 1 year of perfect breast health-related quality-adjusted life-year. Costs were obtained from the 2010 Nationwide Inpatient Sample. The incremental cost-effectiveness ratio was generated. A sensitivity analysis for age and stage at diagnosis was performed. BREAST-Q scores from 309 patients with implants and 217 DIEP flap reconstructions were included. The additional cost for obtaining 1 year of perfect breast-related health for a unilateral DIEP flap compared with implant reconstruction was $11,941. For bilateral DIEP flaps compared with implant reconstructions, the cost for an additional breast health-related quality-adjusted life-year was $28,017. The sensitivity analysis demonstrated that the cost for an additional breast health-related quality-adjusted life-year for DIEP flaps compared with implants was less for younger patients and earlier stage breast cancer. DIEP flaps are cost-effective compared with implants, especially for unilateral reconstructions. Cost-effectiveness of autologous techniques is maximized in women with longer life expectancy. Patient-reported outcomes findings can be incorporated into cost-effectiveness analyses to demonstrate the relative value of reconstructive procedures.
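The $11,941 and $28,017 figures are incremental cost-effectiveness ratios; the calculation itself is one line (the numbers below are hypothetical, not from the study):

```python
def icer(cost_new, cost_ref, effect_new, effect_ref):
    """Incremental cost-effectiveness ratio:
    extra cost per extra unit of effect (e.g. per QALY)."""
    return (cost_new - cost_ref) / (effect_new - effect_ref)

# hypothetical: the new technique costs $3,000 more and yields 0.25 extra QALYs
ratio = icer(cost_new=20_000, cost_ref=17_000, effect_new=0.85, effect_ref=0.60)
```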
Yan, Jun; Shi, Songshan; Wang, Hongwei; Liu, Ruimin; Li, Ning; Chen, Yonglin; Wang, Shunchun
2016-01-20
A novel analytical method for neutral monosaccharide composition analysis of plant-derived oligo- and polysaccharides was developed using hydrophilic interaction liquid chromatography coupled to a charged aerosol detector. The effects of column type, additives, pH and column temperature on retention and separation were evaluated. Additionally, the method could distinguish potential impurities in samples, including chloride, sulfate and sodium, from sugars. The results of validation demonstrated that this method had good linearity (R(2) ≥ 0.9981), high precision (relative standard deviation ≤ 4.43%), and adequate accuracy (94.02-103.37% recovery) and sensitivity (detection limit: 15-40 ng). Finally, the monosaccharide compositions of the polysaccharide from Eclipta prostrasta L. and stachyose were successfully profiled through this method. This report represents the first time that all of these common monosaccharides could be well-separated and determined simultaneously by high performance liquid chromatography without additional derivatization. This newly developed method is convenient, efficient and reliable for monosaccharide analysis. Copyright © 2015 Elsevier Ltd. All rights reserved.
Sensitivity analysis for aeroacoustic and aeroelastic design of turbomachinery blades
NASA Technical Reports Server (NTRS)
Lorence, Christopher B.; Hall, Kenneth C.
1995-01-01
A new method for computing the effect that small changes in the airfoil shape and cascade geometry have on the aeroacoustic and aeroelastic behavior of turbomachinery cascades is presented. The nonlinear unsteady flow is assumed to be composed of a nonlinear steady flow plus a small perturbation unsteady flow that is harmonic in time. First, the full potential equation is used to describe the behavior of the nonlinear mean (steady) flow through a two-dimensional cascade. The small disturbance unsteady flow through the cascade is described by the linearized Euler equations. Using rapid distortion theory, the unsteady velocity is split into a rotational part that contains the vorticity and an irrotational part described by a scalar potential. The unsteady vorticity transport is described analytically in terms of the drift and stream functions computed from the steady flow. Hence, the solution of the linearized Euler equations may be reduced to a single inhomogeneous equation for the unsteady potential. The steady flow and small disturbance unsteady flow equations are discretized using bilinear quadrilateral isoparametric finite elements. The nonlinear mean flow solution and streamline computational grid are computed simultaneously using Newton iteration. At each step of the Newton iteration, LU decomposition is used to solve the resulting set of linear equations. The unsteady flow problem is linear, and is also solved using LU decomposition. Next, a sensitivity analysis is performed to determine the effect small changes in cascade and airfoil geometry have on the mean and unsteady flow fields. The sensitivity analysis makes use of the nominal steady and unsteady flow LU decompositions so that no additional matrices need to be factored. Hence, the present method is computationally very efficient. 
To demonstrate how the sensitivity analysis may be used to redesign cascades, a compressor is redesigned for improved aeroelastic stability and two different fan exit guide vanes are redesigned for reduced downstream radiated noise. In addition, a framework detailing how the two-dimensional version of the method may be used to redesign three-dimensional geometries is presented.
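The factorization-reuse idea is the standard direct-differentiation trick: differentiating A(p)x(p) = b(p) gives A·(dx/dp) = db/dp − (dA/dp)·x, so sensitivities come from the same LU factors as the nominal solve. A tiny self-contained sketch on a toy 2×2 system (illustrative only, not the paper's flow solver):

```python
def lu_decompose(A):
    """Doolittle LU factorization (no pivoting; assumes a well-conditioned A)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def lu_solve(L, U, b):
    """Forward/back substitution with precomputed LU factors."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

# nominal system A x = b
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
L, U = lu_decompose(A)
x = lu_solve(L, U, b)

# Sensitivity: A (dx/dp) = db/dp - (dA/dp) x reuses the SAME factors,
# so no additional matrix needs to be factored.
dA = [[1.0, 0.0], [0.0, 0.0]]   # example dA/dp
db = [0.0, 0.0]                 # example db/dp
rhs = [db[i] - sum(dA[i][j] * x[j] for j in range(2)) for i in range(2)]
dx = lu_solve(L, U, rhs)
```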
Automatic differentiation evaluated as a tool for rotorcraft design and optimization
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.; Young, Katherine C.
1995-01-01
This paper investigates the use of automatic differentiation (AD) as a means of generating sensitivity analyses in rotorcraft design and optimization. This technique transforms an existing computer program into a new program that performs sensitivity analysis in addition to the original analysis. Where the original FORTRAN program calculates a set of dependent (output) variables from a set of independent (input) variables, the new FORTRAN program also calculates the partial derivatives of the dependent variables with respect to the independent variables. The AD technique is a systematic implementation of the chain rule of differentiation; it produces derivatives to machine accuracy at a cost comparable with that of finite-differencing methods. For this study, an analysis code that consists of the Langley-developed hover analysis HOVT, the comprehensive rotor analysis CAMRAD/JA, and associated preprocessors is processed through the AD preprocessor ADIFOR 2.0. The resulting derivatives are compared with derivatives obtained from finite-differencing techniques. The derivatives obtained with ADIFOR 2.0 are exact within machine accuracy and, unlike the derivatives obtained with finite-differencing techniques, do not depend on the selection of a step size.
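The chain-rule bookkeeping that ADIFOR performs on FORTRAN source can be illustrated with a toy forward-mode AD type (Python here, purely illustrative; ADIFOR itself is a source-transformation tool, not an operator-overloading library):

```python
class Dual:
    """Minimal forward-mode AD value: carries f and df together,
    propagating derivatives via the chain rule."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)  # product rule
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1   # f'(x) = 6x + 2

x = Dual(2.0, 1.0)   # seed the derivative of the independent variable
y = f(x)             # y carries both f(2) and f'(2), exact to machine precision
```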
Wavelet analysis enables system-independent texture analysis of optical coherence tomography images.
Lingley-Papadopoulos, Colleen A; Loew, Murray H; Zara, Jason M
2009-01-01
Texture analysis for tissue characterization is a current area of optical coherence tomography (OCT) research. We discuss some of the differences between OCT systems and the effects those differences have on the resulting images and subsequent image analysis. In addition, as an example, two algorithms for the automatic recognition of bladder cancer are compared: one that was developed on a single system with no consideration for system differences, and one that was developed to address the issues associated with system differences. The first algorithm had a sensitivity of 73% and specificity of 69% when tested using leave-one-out cross-validation on data taken from a single system. When tested on images from another system with a different central wavelength, however, the method classified all images as cancerous regardless of the true pathology. By contrast, with the use of wavelet analysis and the removal of system-dependent features, the second algorithm reported sensitivity and specificity values of 87 and 58%, respectively, when trained on images taken with one imaging system and tested on images taken with another.
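A one-level Haar decomposition illustrates the kind of multiresolution split that wavelet texture features are built from (a toy sketch, not the authors' feature set):

```python
def haar_step(signal):
    """One level of the (unnormalized) Haar wavelet transform:
    pairwise averages (approximation) and differences (detail)."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

# Detail-band statistics (e.g. energy) capture local texture while being less
# tied to a particular scanner's absolute intensity scale than raw pixels.
approx, detail = haar_step([4.0, 2.0, 6.0, 8.0])
detail_energy = sum(c * c for c in detail)
```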
Branched-chain amino acids for people with hepatic encephalopathy.
Gluud, Lise Lotte; Dam, Gitte; Les, Iñigo; Córdoba, Juan; Marchesini, Giulio; Borre, Mette; Aagaard, Niels Kristian; Vilstrup, Hendrik
2015-02-25
Hepatic encephalopathy is a brain dysfunction with neurological and psychiatric changes associated with liver insufficiency or portal-systemic shunting. The severity ranges from minor symptoms to coma. A Cochrane systematic review including 11 randomised clinical trials on branched-chain amino acids (BCAA) versus control interventions has evaluated if BCAA may benefit people with hepatic encephalopathy. To evaluate the beneficial and harmful effects of BCAA versus any control intervention for people with hepatic encephalopathy. We identified trials through manual and electronic searches in The Cochrane Hepato-Biliary Group Controlled Trials Register, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, and Science Citation Index on 2 October 2014. We included randomised clinical trials, irrespective of the bias control, language, or publication status. The authors independently extracted data based on published reports and collected data from the primary investigators. We changed our primary outcomes in this update of the review to include mortality (all cause), hepatic encephalopathy (number of people without improved manifestations of hepatic encephalopathy), and adverse events. The analyses included random-effects and fixed-effect meta-analyses. We performed subgroup, sensitivity, regression, and trial sequential analyses to evaluate sources of heterogeneity (including intervention, and participant and trial characteristics), bias (using The Cochrane Hepato-Biliary Group method), small-study effects, and the robustness of the results after adjusting for sparse data and multiplicity. We graded the quality of the evidence using the GRADE approach. We found 16 randomised clinical trials including 827 participants with hepatic encephalopathy classed as overt (12 trials) or minimal (four trials). Eight trials assessed oral BCAA supplements and seven trials assessed intravenous BCAA. 
The control groups received placebo/no intervention (two trials), diets (10 trials), lactulose (two trials), or neomycin (two trials). In 15 trials, all participants had cirrhosis. Based on the combined Cochrane Hepato-Biliary Group score, we classed seven trials as low risk of bias and nine trials as high risk of bias (mainly due to lack of blinding or for-profit funding). In a random-effects meta-analysis of mortality, we found no difference between BCAA and controls (risk ratio (RR) 0.88, 95% confidence interval (CI) 0.69 to 1.11; 760 participants; 15 trials; moderate quality of evidence). We found no evidence of small-study effects. Sensitivity analyses of trials with a low risk of bias found no beneficial or detrimental effect of BCAA on mortality. Trial sequential analysis showed that the required information size was not reached, suggesting that additional evidence was needed. BCAA had a beneficial effect on hepatic encephalopathy (RR 0.73, 95% CI 0.61 to 0.88; 827 participants; 16 trials; high quality of evidence). We found no small-study effects and confirmed the beneficial effect of BCAA in a sensitivity analysis that only included trials with a low risk of bias (RR 0.71, 95% CI 0.52 to 0.96). The trial sequential analysis showed that firm evidence was reached. In a fixed-effect meta-analysis, we found that BCAA increased the risk of nausea and vomiting (RR 5.56; 2.93 to 10.55; moderate quality of evidence). We found no beneficial or detrimental effects of BCAA on nausea or vomiting in a random-effects meta-analysis or on quality of life or nutritional parameters. We did not identify predictors of the intervention effect in the subgroup, sensitivity, or meta-regression analyses. In sensitivity analyses that excluded trials with a lactulose or neomycin control, BCAA had a beneficial effect on hepatic encephalopathy (RR 0.76, 95% CI 0.63 to 0.92). 
Additional sensitivity analyses found no difference between BCAA and lactulose or neomycin (RR 0.66, 95% CI 0.34 to 1.30). In this updated review, we included five additional trials. The analyses showed that BCAA had a beneficial effect on hepatic encephalopathy. We found no effect on mortality, quality of life, or nutritional parameters, but we need additional trials to evaluate these outcomes. Likewise, we need additional randomised clinical trials to determine the effect of BCAA compared with interventions such as non-absorbable disaccharides, rifaximin, or other antibiotics.
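The pooled RR figures above come from inverse-variance weighting of per-trial log risk ratios; a fixed-effect sketch with invented inputs (the review also uses random-effects models, which add a between-trial variance term):

```python
import math

def pooled_rr_fixed(log_rrs, variances):
    """Inverse-variance (fixed-effect) pooled risk ratio with a 95% CI."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * lr for w, lr in zip(weights, log_rrs)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))          # standard error of the pooled log RR
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se),
            math.exp(pooled + 1.96 * se))

# two invented trials, RR 0.8 and 0.6, equal variance on the log scale
rr, lo, hi = pooled_rr_fixed([math.log(0.8), math.log(0.6)], [0.04, 0.04])
```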
Branched-chain amino acids for people with hepatic encephalopathy.
Gluud, Lise Lotte; Dam, Gitte; Les, Iñigo; Marchesini, Giulio; Borre, Mette; Aagaard, Niels Kristian; Vilstrup, Hendrik
2017-05-18
Hepatic encephalopathy is a brain dysfunction with neurological and psychiatric changes associated with liver insufficiency or portal-systemic shunting. The severity ranges from minor symptoms to coma. A Cochrane systematic review including 11 randomised clinical trials on branched-chain amino acids (BCAA) versus control interventions has evaluated if BCAA may benefit people with hepatic encephalopathy. To evaluate the beneficial and harmful effects of BCAA versus any control intervention for people with hepatic encephalopathy. We identified trials through manual and electronic searches in The Cochrane Hepato-Biliary Group Controlled Trials Register, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, Science Citation Index Expanded and Conference Proceedings Citation Index - Science, and LILACS (May 2017). We included randomised clinical trials, irrespective of the bias control, language, or publication status. The authors independently extracted data based on published reports and collected data from the primary investigators. We changed our primary outcomes in this update of the review to include mortality (all cause), hepatic encephalopathy (number of people without improved manifestations of hepatic encephalopathy), and adverse events. The analyses included random-effects and fixed-effect meta-analyses. We performed subgroup, sensitivity, regression, and trial sequential analyses to evaluate sources of heterogeneity (including intervention, and participant and trial characteristics), bias (using The Cochrane Hepato-Biliary Group method), small-study effects, and the robustness of the results after adjusting for sparse data and multiplicity. We graded the quality of the evidence using the GRADE approach. We found 16 randomised clinical trials including 827 participants with hepatic encephalopathy classed as overt (12 trials) or minimal (four trials). 
Eight trials assessed oral BCAA supplements and seven trials assessed intravenous BCAA. The control groups received placebo/no intervention (two trials), diets (10 trials), lactulose (two trials), or neomycin (two trials). In 15 trials, all participants had cirrhosis. We classed seven trials as low risk of bias and nine trials as high risk of bias (mainly due to lack of blinding or for-profit funding). In a random-effects meta-analysis of mortality, we found no difference between BCAA and controls (risk ratio (RR) 0.88, 95% confidence interval (CI) 0.69 to 1.11; 760 participants; 15 trials; moderate quality of evidence). We found no evidence of small-study effects. Sensitivity analyses of trials with a low risk of bias found no beneficial or detrimental effect of BCAA on mortality. Trial sequential analysis showed that the required information size was not reached, suggesting that additional evidence was needed. BCAA had a beneficial effect on hepatic encephalopathy (RR 0.73, 95% CI 0.61 to 0.88; 827 participants; 16 trials; high quality of evidence). We found no small-study effects and confirmed the beneficial effect of BCAA in a sensitivity analysis that only included trials with a low risk of bias (RR 0.71, 95% CI 0.52 to 0.96). The trial sequential analysis showed that firm evidence was reached. In a fixed-effect meta-analysis, we found that BCAA increased the risk of nausea and vomiting (RR 5.56; 2.93 to 10.55; moderate quality of evidence). We found no beneficial or detrimental effects of BCAA on nausea or vomiting in a random-effects meta-analysis or on quality of life or nutritional parameters. We did not identify predictors of the intervention effect in the subgroup, sensitivity, or meta-regression analyses. In sensitivity analyses that excluded trials with a lactulose or neomycin control, BCAA had a beneficial effect on hepatic encephalopathy (RR 0.76, 95% CI 0.63 to 0.92). 
Additional sensitivity analyses found no difference between BCAA and lactulose or neomycin (RR 0.66, 95% CI 0.34 to 1.30). In this updated review, we included five additional trials. The analyses showed that BCAA had a beneficial effect on hepatic encephalopathy. We found no effect on mortality, quality of life, or nutritional parameters, but we need additional trials to evaluate these outcomes. Likewise, we need additional randomised clinical trials to determine the effect of BCAA compared with interventions such as non-absorbable disaccharides, rifaximin, or other antibiotics.
Branched-chain amino acids for people with hepatic encephalopathy.
Gluud, Lise Lotte; Dam, Gitte; Les, Iñigo; Córdoba, Juan; Marchesini, Giulio; Borre, Mette; Aagaard, Niels Kristian; Vilstrup, Hendrik
2015-09-17
Hepatic encephalopathy is a brain dysfunction with neurological and psychiatric changes associated with liver insufficiency or portal-systemic shunting. The severity ranges from minor symptoms to coma. A Cochrane systematic review including 11 randomised clinical trials on branched-chain amino acids (BCAA) versus control interventions has evaluated if BCAA may benefit people with hepatic encephalopathy. To evaluate the beneficial and harmful effects of BCAA versus any control intervention for people with hepatic encephalopathy. We identified trials through manual and electronic searches in The Cochrane Hepato-Biliary Group Controlled Trials Register, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, and Science Citation Index (August 2015). We included randomised clinical trials, irrespective of the bias control, language, or publication status. The authors independently extracted data based on published reports and collected data from the primary investigators. We changed our primary outcomes in this update of the review to include mortality (all cause), hepatic encephalopathy (number of people without improved manifestations of hepatic encephalopathy), and adverse events. The analyses included random-effects and fixed-effect meta-analyses. We performed subgroup, sensitivity, regression, and trial sequential analyses to evaluate sources of heterogeneity (including intervention, and participant and trial characteristics), bias (using The Cochrane Hepato-Biliary Group method), small-study effects, and the robustness of the results after adjusting for sparse data and multiplicity. We graded the quality of the evidence using the GRADE approach. We found 16 randomised clinical trials including 827 participants with hepatic encephalopathy classed as overt (12 trials) or minimal (four trials). Eight trials assessed oral BCAA supplements and seven trials assessed intravenous BCAA. 
The control groups received placebo/no intervention (two trials), diets (10 trials), lactulose (two trials), or neomycin (two trials). In 15 trials, all participants had cirrhosis. We classed seven trials as low risk of bias and nine trials as high risk of bias (mainly due to lack of blinding or for-profit funding). In a random-effects meta-analysis of mortality, we found no difference between BCAA and controls (risk ratio (RR) 0.88, 95% confidence interval (CI) 0.69 to 1.11; 760 participants; 15 trials; moderate quality of evidence). We found no evidence of small-study effects. Sensitivity analyses of trials with a low risk of bias found no beneficial or detrimental effect of BCAA on mortality. Trial sequential analysis showed that the required information size was not reached, suggesting that additional evidence was needed. BCAA had a beneficial effect on hepatic encephalopathy (RR 0.73, 95% CI 0.61 to 0.88; 827 participants; 16 trials; high quality of evidence). We found no small-study effects and confirmed the beneficial effect of BCAA in a sensitivity analysis that only included trials with a low risk of bias (RR 0.71, 95% CI 0.52 to 0.96). The trial sequential analysis showed that firm evidence was reached. In a fixed-effect meta-analysis, we found that BCAA increased the risk of nausea and vomiting (RR 5.56; 2.93 to 10.55; moderate quality of evidence). We found no beneficial or detrimental effects of BCAA on nausea or vomiting in a random-effects meta-analysis or on quality of life or nutritional parameters. We did not identify predictors of the intervention effect in the subgroup, sensitivity, or meta-regression analyses. In sensitivity analyses that excluded trials with a lactulose or neomycin control, BCAA had a beneficial effect on hepatic encephalopathy (RR 0.76, 95% CI 0.63 to 0.92). Additional sensitivity analyses found no difference between BCAA and lactulose or neomycin (RR 0.66, 95% CI 0.34 to 1.30). 
In this updated review, we included five additional trials. The analyses showed that BCAA had a beneficial effect on hepatic encephalopathy. We found no effect on mortality, quality of life, or nutritional parameters, but we need additional trials to evaluate these outcomes. Likewise, we need additional randomised clinical trials to determine the effect of BCAA compared with interventions such as non-absorbable disaccharides, rifaximin, or other antibiotics.
Smoking increases the risk of diabetic foot amputation: A meta-analysis.
Liu, Min; Zhang, Wei; Yan, Zhaoli; Yuan, Xiangzhen
2018-02-01
Accumulating evidence suggests that smoking is associated with diabetic foot amputation. However, the currently available results are inconsistent and controversial. Therefore, the present study performed a meta-analysis to systematically review the association between smoking and diabetic foot amputation and to investigate the risk factors of diabetic foot amputation. Public databases, including PubMed and Embase, were searched prior to 29 February 2016. Heterogeneity was assessed using Cochran's Q statistic and the I² statistic, and odds ratios (OR) and 95% confidence intervals (CI) were calculated and pooled appropriately. Sensitivity analysis was performed to evaluate the stability of the results. In addition, Egger's test was applied to assess any potential publication bias. A total of eight studies, including five cohort studies and three case-control studies, were included. The data indicated that smoking significantly increased the risk of diabetic foot amputation (OR=1.65; 95% CI, 1.09-2.50; P<0.0001) compared with non-smoking. Sensitivity analysis demonstrated that the pooled analysis did not vary substantially following the exclusion of any one study. Additionally, there was no evidence of publication bias (Egger's test, t=0.1378; P=0.8958). Furthermore, no significant difference was observed between the minor and major amputation groups in patients who smoked (OR=0.79; 95% CI, 0.24-2.58). The results of the present meta-analysis suggested that smoking is a notable risk factor for diabetic foot amputation. Smoking cessation appears to reduce the risk of diabetic foot amputation.
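Cochran's Q and I² quantify between-study heterogeneity; a minimal sketch with invented effect estimates (not the eight studies pooled above):

```python
def heterogeneity(effects, variances):
    """Cochran's Q and the I^2 statistic for a set of study effect estimates."""
    w = [1.0 / v for v in variances]
    pooled = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - pooled) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    # I^2: share of total variation attributable to heterogeneity, floored at 0
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return q, i2

# two invented log-odds-ratio estimates with equal variance
q, i2 = heterogeneity([0.2, 0.8], [0.01, 0.01])
```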
DOE Office of Scientific and Technical Information (OSTI.GOV)
Desai, Meera Jay
The purpose of this research was to develop sensitive LC-MS methods for enantiomeric separation and detection, and then apply these methods to the determination of enantiomeric composition and to the study of the pharmacokinetic and pharmacodynamic properties of a chiral nutraceutical. Our first study evaluated the use of reverse-phase and polar organic modes for chiral LC-API/MS method development. Reverse-phase methods containing high proportions of water were found to decrease ionization efficiency in electrospray, while polar organic methods offered good compatibility and low limits of detection with ESI. The use of lower flow rates dramatically increased the sensitivity by an order of magnitude. Additionally, for rapid chiral screening, the coupled Chirobiotic column afforded great applicability for LC-MS method development. Our second study continued chiral LC-MS method development, in this case for the normal-phase mode. Ethoxynonafluorobutane (ENFB), a fluorocarbon with low flammability and no flashpoint, was used as a substitute solvent for hexane/heptane mobile phases for LC-APCI/MS. Comparable chromatographic resolutions and selectivities were found using ENFB-substituted mobile-phase systems, although peak efficiencies were significantly diminished. Limits of detection for ENFB-MS were either comparable to or better than those for heptane-PDA detection. The miscibility of ENFB with a variety of commonly used organic modifiers provided flexibility in method development. For APCI, lower flow rates did not increase sensitivity as significantly as was previously found for ESI-MS detection. The chiral analysis of native amino acids was evaluated using both APCI and ESI sources. For free amino acids and small peptides, APCI was found to have better sensitivity than ESI at high flow rates. For larger peptides, however, sensitivity was greatly improved with the use of electrospray. 
Additionally, sensitivity was enhanced with the use of non-volatile additives. This optimized method was then used to simultaneously separate all 19 native amino acids enantiomerically in less than 20 minutes, making it suitable for complex biological analysis. The previously developed amino acid method was then used to enantiomerically separate theanine, a free amino acid found in tea leaves. Native theanine was found to have lower limits of detection and better sensitivity than derivatized theanine samples. The native theanine method was then used to determine the enantiomeric composition of six commercially available L-theanine products. Five out of the six samples were found to be a racemic mixture of both D- and L-theanine. Concern over the efficacy of these theanine products led to our final study evaluating the pharmacokinetics and pharmacodynamics of theanine in rats using LC-ESI/MS. Rats were administered D-, L-, and DL-theanine both orally and intraperitoneally. Oral administration data demonstrated that intestinal absorption of L-theanine was greater than that of D-theanine, while i.p. data showed equal plasma uptake of both isomers. This suggested a possible competitive binding effect with respect to gut absorption. Additionally, it was found that regardless of administration method, the presence of the other enantiomer always decreased the overall theanine plasma concentration. This indicated that D- and L-theanine exhibit competitive binding with respect to urinary reabsorption as well. The large quantities of D-theanine detected in the urine suggested that D-theanine was eliminated with minimal metabolism, while L-theanine was preferentially reabsorbed and metabolized to ethylamine. Clearly, the metabolic fate of racemic theanine and its individual enantiomers was quite different, placing into doubt the utility of the commercial theanine products.
Nelson, S D; Nelson, R E; Cannon, G W; Lawrence, P; Battistone, M J; Grotzke, M; Rosenblum, Y; LaFleur, J
2014-12-01
This is a cost-effectiveness analysis of training rural providers to identify and treat osteoporosis. Results showed a slight cost savings, an increase in life years, an increase in treatment rates, and a decrease in fracture incidence. However, the results were sensitive to small differences in effectiveness, being cost-effective in 70% of simulations during probabilistic sensitivity analysis. We evaluated the cost-effectiveness of training rural providers to identify and treat veterans at risk for fragility fractures, relative to referring these patients to an urban medical center for specialist care. The model evaluated the impact of training on patient life years, quality-adjusted life years (QALYs), treatment rates, fracture incidence, and costs from the perspective of the Department of Veterans Affairs. We constructed a Markov microsimulation model to compare costs and outcomes of a hypothetical cohort of veterans seen by rural providers. Parameter estimates were derived from previously published studies, and we conducted one-way and probabilistic sensitivity analyses on the parameter inputs. Base-case analysis showed that training resulted in no additional costs and an extra 0.083 life years (0.054 QALYs). Our model projected that, as a result of training, more patients with osteoporosis would receive treatment (81.3% vs. 12.2%), and all patients would have a lower incidence of fractures per 1,000 patient-years (hip, 1.628 vs. 1.913; clinical vertebral, 0.566 vs. 1.037) when seen by a trained provider compared with an untrained provider. Results remained consistent in one-way sensitivity analysis; in probabilistic sensitivity analyses, training rural providers was cost-effective (less than $50,000/QALY) in 70% of the simulations. Training rural providers to identify and treat veterans at risk for fragility fractures has the potential to be cost-effective, but the results are sensitive to small differences in effectiveness.
It appears that provider education alone is not enough to make a significant difference in fragility fracture rates among veterans.
Median-Based Incremental Cost-Effectiveness Ratios with Censored Data
Bang, Heejung; Zhao, Hongwei
2016-01-01
Cost-effectiveness is an essential part of treatment evaluation, in addition to effectiveness. In the cost-effectiveness analysis, a measure called the incremental cost-effectiveness ratio (ICER) is widely utilized, and the mean cost and the mean (quality-adjusted) life years have served as norms to summarize cost and effectiveness for a study population. Recently, the median-based ICER was proposed for complementary or sensitivity analysis purposes. In this paper, we extend this method when some data are censored. PMID:26010599
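The core quantity is simple to compute once costs and effectiveness are summarized; the following is a minimal sketch with invented numbers, using means (not the median-based or censoring-adjusted estimators the paper develops):

```python
def icer(cost_new, cost_std, eff_new, eff_std):
    """Incremental cost-effectiveness ratio (ICER):
    extra cost per extra unit of effectiveness (e.g., per QALY gained)."""
    return (cost_new - cost_std) / (eff_new - eff_std)

# Hypothetical example: the new treatment costs $12,000 more and adds 0.15 QALYs.
print(icer(30000, 18000, 1.40, 1.25))  # ~80000, i.e., $80,000 per QALY gained
```

A decision maker then compares this ratio against a willingness-to-pay threshold (commonly $50,000 to $100,000 per QALY); the median-based and censored-data variants change how the cost and effectiveness summaries are estimated, not this final ratio.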
Byrne, Barry; Stack, Edwina; Gilmartin, Niamh; O'Kennedy, Richard
2009-01-01
Antibody-based sensors permit the rapid and sensitive analysis of a range of pathogens and associated toxins. A critical assessment of the implementation of such formats is provided, with reference to their principles, problems and potential for ‘on-site’ analysis. Particular emphasis is placed on the detection of foodborne bacterial pathogens, such as Escherichia coli and Listeria monocytogenes, and additional examples relating to the monitoring of fungal pathogens, viruses, mycotoxins, marine toxins and parasites are also provided. PMID:22408533
D’Souza, Malcolm J.; Shuman, Kevin E.; Carter, Shannon E.; Kevill, Dennis N.
2008-01-01
Specific rates of solvolysis at 25 °C for p-nitrophenyl chloroformate (1) are analyzed using the extended (two-term) Grunwald-Winstein equation. For 39 solvents, the sensitivities obtained towards changes in solvent nucleophilicity (l = 1.68 ± 0.06) and solvent ionizing power (m = 0.46 ± 0.04) are similar to those previously observed for phenyl chloroformate (2) and p-methoxyphenyl chloroformate (3). The observations, incorporating new kinetic data in several fluoroalcohol-containing mixtures, are rationalized in terms of the reaction being sensitive to substituent effects, with the addition (association) step of an addition-elimination (association-dissociation) pathway being rate-determining. The l/m ratios obtained for 1, 2, and 3 are also compared with the previously published l/m ratios for benzyl chloroformate (4) and p-nitrobenzyl chloroformate (5). PMID:19330071
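The extended Grunwald-Winstein equation has the form log(k/k0) = l·N_T + m·Y_Cl + c. A small sketch using the sensitivities reported for substrate 1; note the solvent (N_T, Y_Cl) values below are hypothetical placeholders for illustration, not the paper's 39-solvent data set:

```python
def log_k_ratio(l, m, n_t, y_cl, c=0.0):
    """Extended (two-term) Grunwald-Winstein equation:
    log10(k/k0) = l*N_T + m*Y_Cl + c."""
    return l * n_t + m * y_cl + c

# Sensitivities reported for p-nitrophenyl chloroformate (1):
l, m = 1.68, 0.46
print(round(l / m, 2))  # 3.65 -- the l/m ratio compared across substrates

# Hypothetical (N_T, Y_Cl) pairs for two solvents, illustration only:
print(log_k_ratio(l, m, 0.37, -2.50))   # alcohol-like solvent
print(log_k_ratio(l, m, -1.20, 4.50))   # highly ionizing fluoroalcohol-like mixture
```

A large l/m ratio (here 3.65) signals dominant sensitivity to solvent nucleophilicity, consistent with a rate-determining addition (association) step.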
Sensitivity of Hyperdense Basilar Artery Sign on Non-Enhanced Computed Tomography.
Ernst, Marielle; Romero, Javier M; Buhk, Jan-Hendrik; Cheng, Bastian; Herrmann, Jochen; Fiehler, Jens; Groth, Michael
2015-01-01
The hyperdense basilar artery sign (HBAS) is an indicator of vessel occlusion on non-contrast-enhanced computed tomography (NECT) in acute stroke patients. Since basilar artery occlusion (BAO) is associated with high mortality and morbidity, its early detection is of great clinical value. We sought to analyze the influence of density measurement, as well as a normalized Hounsfield unit/hematocrit (HU/Hct) ratio, on the detection of BAO on NECT in patients with suspected BAO. A total of 102 patients with clinically suspected BAO were examined with NECT followed immediately by multidetector computed tomography angiography. Two observers independently analyzed the images for the presence or absence of the HBAS on NECT and performed HU measurements in the basilar artery. Receiver operating characteristic curve analysis was performed to determine the optimal density threshold for BAO using attenuation measurements or the HU/Hct ratio. Sensitivity of visual detection of the HBAS on NECT was relatively low at 81% (95% CI, 54-95%), while specificity was high at 91% (95% CI, 82-96%). The highest sensitivity was achieved by combining visual assessment with additional quantitative attenuation measurements, applying a cut-off value of 46.5 HU, which gave 94% sensitivity and 81% specificity for BAO. A HU/Hct ratio >1.32 yielded a sensitivity of 88% (95% CI, 60-98%) and specificity of 84% (95% CI, 74-90%). In patients with clinically suspected acute BAO, the combination of visual assessment and additional attenuation measurement with a cut-off value of 46.5 HU is a reliable approach with high sensitivity for the detection of BAO on NECT.
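Applying a density cut-off like the 46.5 HU threshold reduces to counting true/false positives and negatives against the angiographic reference standard. A minimal sketch with invented HU measurements (not the study's data):

```python
def sens_spec(values, truth, cutoff):
    """Classify as occlusion when the measurement >= cutoff;
    return (sensitivity, specificity) against a reference truth."""
    tp = sum(1 for v, t in zip(values, truth) if v >= cutoff and t)
    fn = sum(1 for v, t in zip(values, truth) if v < cutoff and t)
    tn = sum(1 for v, t in zip(values, truth) if v < cutoff and not t)
    fp = sum(1 for v, t in zip(values, truth) if v >= cutoff and not t)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical basilar-artery HU values; True = CTA-confirmed occlusion.
hu   = [52.0, 48.3, 45.1, 60.2, 44.0, 43.2, 47.0, 41.5]
occl = [True, True, False, True, False, False, False, False]
print(sens_spec(hu, occl, 46.5))  # (1.0, 0.8)
```

Sweeping the cutoff over all observed values and plotting sensitivity against (1 - specificity) produces the ROC curve from which the study's optimal 46.5 HU threshold was chosen.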
Sensitivity Analysis in Engineering
NASA Technical Reports Server (NTRS)
Adelman, Howard M. (Compiler); Haftka, Raphael T. (Compiler)
1987-01-01
The symposium proceedings presented focused primarily on sensitivity analysis of structural response. However, the first session, entitled, General and Multidisciplinary Sensitivity, focused on areas such as physics, chemistry, controls, and aerodynamics. The other four sessions were concerned with the sensitivity of structural systems modeled by finite elements. Session 2 dealt with Static Sensitivity Analysis and Applications; Session 3 with Eigenproblem Sensitivity Methods; Session 4 with Transient Sensitivity Analysis; and Session 5 with Shape Sensitivity Analysis.
Bermudo, R; Abia, D; Mozos, A; García-Cruz, E; Alcaraz, A; Ortiz, Á R; Thomson, T M; Fernández, P L
2011-01-01
Introduction: Currently, final diagnosis of prostate cancer (PCa) is based on histopathological analysis of needle biopsies, but this process often bears uncertainties due to small sample size, tumour focality and pathologist's subjective assessment. Methods: Prostate cancer diagnostic signatures were generated by applying linear discriminant analysis to microarray and real-time RT–PCR (qRT–PCR) data from normal and tumoural prostate tissue samples. Additionally, after removal of biopsy tissues, material washed off from transrectal biopsy needles was used for molecular profiling and discriminant analysis. Results: Linear discriminant analysis applied to microarray data for a set of 318 genes differentially expressed between non-tumoural and tumoural prostate samples produced 26 gene signatures, which classified the 84 samples used with 100% accuracy. To identify signatures potentially useful for the diagnosis of prostate biopsies, surplus material washed off from routine biopsy needles from 53 patients was used to generate qRT–PCR data for a subset of 11 genes. This analysis identified a six-gene signature that correctly assigned the biopsies as benign or tumoural in 92.6% of the cases, with 88.8% sensitivity and 96.1% specificity. Conclusion: Surplus material from prostate needle biopsies can be used for minimal-size gene signature analysis for sensitive and accurate discrimination between non-tumoural and tumoural prostates, without interference with current diagnostic procedures. This approach could be a useful adjunct to current procedures in PCa diagnosis. PMID:22009027
Smith, P A; Son, P S; Callaghan, P M; Jederberg, W W; Kuhlmann, K; Still, K R
1996-07-17
Components of colophony (rosin) resin acids are sensitizers through dermal and pulmonary exposure to heated and unheated material. Significant work in the literature identifies specific resin acids and their oxidation products as sensitizers. Pulmonary exposure to colophony sensitizers has been estimated indirectly through formaldehyde exposure. To assess pulmonary sensitization from airborne resin acids, direct measurement is desired, as the degree to which aldehyde exposure correlates with that of resin acids during colophony heating is undefined. Any analytical method proposed should be applicable to a range of compounds and should also identify specific compounds present in a breathing zone sample. This work adapts OSHA Sampling and Analytical Method 58, which is designed to provide airborne concentration data for coal tar pitch volatile solids by air filtration through a glass fiber filter, solvent extraction of the filter, and gravimetric analysis of the non-volatile extract residue. In addition to data regarding total soluble material captured, a portion of the extract may be subjected to compound-specific analysis. Levels of soluble solids found during personal breathing zone sampling during electronics soldering in a Naval Aviation Depot ranged from below the "reliable quantitation limit" reported in the method to 7.98 mg/m3. Colophony-spiked filters analyzed in accordance with the method (modified) produced a limit of detection for total solvent-soluble colophony solids of 10 micrograms/filter. High performance liquid chromatography was used to identify abietic acid present in a breathing zone sample.
Fang, Y G; Chen, N N; Cheng, Y B; Sun, S J; Li, H X; Sun, F; Xiang, Y
2015-12-01
Urinary neutrophil gelatinase-associated lipocalin (uNGAL) is relatively specific in lupus nephritis (LN) patients. However, its diagnostic value has not been evaluated. The aim of this review was to determine the value of uNGAL for diagnosing LN and estimating its activity. A comprehensive search was performed on the PubMed, EMBASE, Web of Knowledge, and Cochrane electronic databases through December 2014. Meta-analysis of sensitivity and specificity was performed with a random-effects model. Additionally, summary receiver operating characteristic (SROC) curves and area under the curve (AUC) values were calculated. Fourteen studies were selected for this review. With respect to diagnosing LN, the pooled sensitivity and specificity were 73.6% (95% confidence interval (CI), 61.9-83.3) and 78.1% (95% CI, 69.0-85.6), respectively. The SROC-AUC value was 0.8632. Regarding estimation of LN activity, the pooled sensitivity and specificity were 66.2% (95% CI, 60.4-71.7) and 62.1% (95% CI, 57.9-66.3), respectively. The SROC-AUC value was 0.7583. In predicting renal flares, the pooled sensitivity and specificity were 77.5% (95% CI, 68.1-85.1) and 65.3% (95% CI, 60.0-70.3), respectively. The SROC-AUC value was 0.7756. In conclusion, this meta-analysis indicates that uNGAL has fair sensitivity and specificity in diagnosing LN, estimating LN activity and predicting renal flares, suggesting that uNGAL is a potential biomarker for diagnosing LN and monitoring its activity. © The Author(s) 2015.
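Pooled sensitivity figures like these come from random-effects pooling of per-study proportions. A simplified univariate DerSimonian-Laird sketch on the logit scale, with invented study counts (full diagnostic meta-analyses typically use a bivariate model for sensitivity and specificity jointly):

```python
import math

def pooled_logit_dl(events, totals):
    """Random-effects (DerSimonian-Laird) pooling of proportions on the
    logit scale. Illustrative sketch only."""
    y, v = [], []
    for e, n in zip(events, totals):
        p = (e + 0.5) / (n + 1.0)                # continuity-corrected proportion
        y.append(math.log(p / (1 - p)))          # logit transform
        v.append(1 / (e + 0.5) + 1 / (n - e + 0.5))  # approximate variance
    w = [1 / vi for vi in v]                     # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))  # heterogeneity
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)      # between-study variance
    wstar = [1 / (vi + tau2) for vi in v]        # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(wstar, y)) / sum(wstar)
    return 1 / (1 + math.exp(-mu))               # back-transform to a proportion

# Hypothetical per-study true-positive counts / diseased totals:
print(round(pooled_logit_dl([30, 45, 18, 60], [40, 60, 30, 80]), 3))
```

The same routine applied to true-negative counts among non-diseased subjects yields the pooled specificity.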
Bertran, E A; Berlie, H D; Taylor, A; Divine, G; Jaber, L A
2017-02-01
To examine differences in the performance of HbA1c for diagnosing diabetes in Arabs compared with Europeans. The PubMed, Embase and Cochrane library databases were searched for records published between 1998 and 2015. Estimates of sensitivity, specificity and log diagnostic odds ratios for an HbA1c cut-point of 48 mmol/mol (6.5%) were compared between Arabs and Europeans, using a bivariate linear mixed-model approach. For studies reporting multiple cut-points, population-specific summary receiver operating characteristic (SROC) curves were constructed. In addition, sensitivity, specificity and the Youden index were estimated for strata defined by HbA1c cut-point and population type. Database searches yielded 1912 unique records; 618 full-text articles were reviewed. Fourteen studies met the inclusion criteria; hand-searching yielded three additional eligible studies. Three Arab (N = 2880) and 16 European populations (N = 49 127) were included in the analysis. Summary sensitivity and specificity for an HbA1c cut-point of 48 mmol/mol (6.5%) in both populations were 42% (33-51%) and 97% (95-98%), respectively. There was no difference in the area under the SROC curves between Arab and European populations (0.844 vs. 0.847; P = 0.867), suggesting no difference in HbA1c diagnostic accuracy between the populations. Multiple cut-point summary estimates stratified by population suggest that Arabs have lower sensitivity and higher specificity at an HbA1c cut-point of 44 mmol/mol (6.2%) compared with European populations. Estimates also suggest similar test performance at cut-points of 44 mmol/mol (6.2%) and 48 mmol/mol (6.5%) for Arabs. Given the low sensitivity of HbA1c in the high-risk Arab American population, we recommend a combination of glucose-based and HbA1c testing to ensure an accurate and timely diagnosis of diabetes. © 2016 Diabetes UK.
Kondo, Takashi; Kobayashi, Daisuke; Mochizuki, Maki; Asanuma, Kouichi; Takahashi, Satoshi
2017-01-01
Background: Recently developed reagents for the highly sensitive measurement of cardiac troponin I are useful for early diagnosis of acute coronary syndrome. However, differences in measured values between these new reagents and previously used reagents have not been well studied. In this study, we aimed to compare values between ARCHITECT High-Sensitive Troponin I ST (newly developed reagents) and ARCHITECT Troponin I ST and STACIA CLEIA cardiac troponin I (two previously developed reagent kits). Methods: Gel-filtration high-performance liquid chromatography was used to analyse the causes of differences in measured values. Results: The measured values differed between the ARCHITECT High-Sensitive Troponin I ST and STACIA CLEIA cardiac troponin I reagents (r = 0.82). Cross-reactivity tests using plasma with added skeletal-muscle troponin I showed higher reactivity (2.17-3.03%) for the STACIA CLEIA cardiac troponin I reagents than for the ARCHITECT High-Sensitive Troponin I ST reagents (less than 0.014%). In addition, analysis of three representative samples using gel-filtration high-performance liquid chromatography revealed reagent-specific differences in reactivity against each cardiac troponin I complex; this could explain the differences in values observed for some of the samples. Conclusion: The newly developed ARCHITECT High-Sensitive Troponin I ST reagents were not affected by the presence of skeletal-muscle troponin I in the blood and may be useful for routine examinations.
Role of Reward Sensitivity and Processing in Major Depressive and Bipolar Spectrum Disorders
Alloy, Lauren B.; Olino, Thomas; Freed, Rachel D.; Nusslock, Robin
2016-01-01
Since Costello’s (1972) seminal Behavior Therapy article on loss of reinforcers or reinforcer effectiveness in depression, the role of reward sensitivity and processing in both depression and bipolar disorder has become a central area of investigation. In this article, we review the evidence for a model of reward sensitivity in mood disorders, with unipolar depression characterized by reward hyposensitivity and bipolar disorders by reward hypersensitivity. We address whether aberrant reward sensitivity and processing are correlates of, mood-independent traits of, vulnerabilities for, and/or predictors of the course of depression and bipolar spectrum disorders, covering evidence from self-report, behavioral, neurophysiological, and neural levels of analysis. We conclude that substantial evidence documents that blunted reward sensitivity and processing are involved in unipolar depression and heightened reward sensitivity and processing are characteristic of hypomania/mania. We further conclude that aberrant reward sensitivity has a trait component, but more research is needed to clearly demonstrate that reward hyposensitivity and hypersensitivity are vulnerabilities for depression and bipolar disorder, respectively. Moreover, additional research is needed to determine whether bipolar depression is similar to unipolar depression and characterized by reward hyposensitivity, or whether like bipolar hypomania/mania, it involves reward hypersensitivity. PMID:27816074
Ciura, Viesha A; Brouwers, H Bart; Pizzolato, Raffaella; Ortiz, Claudia J; Rosand, Jonathan; Goldstein, Joshua N; Greenberg, Steven M; Pomerantz, Stuart R; Gonzalez, R Gilberto; Romero, Javier M
2014-11-01
The computed tomography angiography (CTA) spot sign is a validated biomarker for poor outcome and hematoma expansion in intracerebral hemorrhage. The spot sign has proven to be a dynamic entity, with multimodal imaging providing additional value. We investigated whether the addition of a 90-second delayed CTA acquisition would capture additional intracerebral hemorrhage patients with the spot sign and increase its sensitivity. We prospectively enrolled consecutive intracerebral hemorrhage patients undergoing first-pass and 90-second delayed CTA over 18 months at a single academic center. Univariate and multivariate logistic regression were performed to assess clinical and neuroimaging covariates for relationships with hematoma expansion and mortality. Sensitivity of the spot sign for hematoma expansion on first-pass CTA was 55%, which increased to 64% if the spot sign was present on either CTA acquisition. In multivariate analysis, spot sign presence was associated with significant hematoma expansion: odds ratio, 17.7 (95% confidence interval, 3.7-84.2; P=0.0004), 8.3 (95% confidence interval, 2.0-33.4; P=0.004), and 12.0 (95% confidence interval, 2.9-50.5; P=0.0008) if present on first-pass, delayed, or either CTA acquisition, respectively. Spot sign presence on either acquisition was also significantly associated with mortality. We demonstrate improved sensitivity for predicting hematoma expansion and poor outcome by adding a 90-second delayed CTA, which may enhance selection of patients who may benefit from hemostatic therapy. © 2014 American Heart Association, Inc.
Cost Effectiveness of Field Trauma Triage among Injured Adults Served by Emergency Medical Services
Newgard, Craig D; Yang, Zhuo; Nishijima, Daniel; McConnell, K John; Trent, Stacy; Holmes, James F; Daya, Mohamud; Mann, N Clay; Hsia, Renee Y; Rea, Tom; Wang, N Ewen; Staudenmayer, Kristan; Delgado, M Kit
2016-01-01
Background: The American College of Surgeons Committee on Trauma sets national targets for the accuracy of field trauma triage at ≥95% sensitivity and ≥65% specificity, yet the cost-effectiveness of realizing these goals is unknown. We evaluated the cost-effectiveness of current field trauma triage practices compared with triage strategies consistent with the national targets. Study Design: This was a cost-effectiveness analysis using data from 79,937 injured adults transported by 48 emergency medical services (EMS) agencies to 105 trauma and non-trauma hospitals in 6 regions of the Western U.S. from 2006 through 2008. Incremental differences in survival, quality-adjusted life years (QALYs), costs, and the incremental cost-effectiveness ratio (ICER; cost per QALY gained) were estimated for each triage strategy over 1-year and lifetime horizons using a decision-analytic Markov model. We considered an ICER of less than $100,000 per QALY gained to be cost-effective. Results: For these 6 regions, a high-sensitivity triage strategy consistent with national trauma policy (sensitivity 98.6%, specificity 17.1%) would cost $1,317,333 per QALY gained, while current triage practices (sensitivity 87.2%, specificity 64.0%) cost $88,000 per QALY gained compared with a moderate-sensitivity strategy (sensitivity 71.2%, specificity 66.5%). Refining EMS transport patterns by triage status improved cost-effectiveness. At the trauma-system level, a high-sensitivity triage strategy would save 3.7 additional lives per year at a 1-year cost of $8.78 million, while a moderate-sensitivity approach would cost 5.2 additional lives and save $781,616 each year. Conclusions: A high-sensitivity approach to field triage consistent with national trauma policy is not cost-effective. The most cost-effective approach to field triage appears closely tied to triage specificity and adherence to triage-based EMS transport practices. PMID:27178369
Using demography and movement behavior to predict range expansion of the southern sea otter.
Tinker, M.T.; Doak, D.F.; Estes, J.A.
2008-01-01
In addition to forecasting population growth, basic demographic data combined with movement data provide a means for predicting rates of range expansion. Quantitative models of range expansion have rarely been applied to large vertebrates, although such tools could be useful for restoration and management of many threatened but recovering populations. Using the southern sea otter (Enhydra lutris nereis) as a case study, we utilized integro-difference equations in combination with a stage-structured projection matrix that incorporated spatial variation in dispersal and demography to make forecasts of population recovery and range recolonization. In addition to these basic predictions, we emphasize how to make these modeling predictions useful in a management context through the inclusion of parameter uncertainty and sensitivity analysis. Our models resulted in hind-cast (1989–2003) predictions of net population growth and range expansion that closely matched observed patterns. We next made projections of future range expansion and population growth, incorporating uncertainty in all model parameters, and explored the sensitivity of model predictions to variation in spatially explicit survival and dispersal rates. The predicted rate of southward range expansion (median = 5.2 km/yr) was sensitive to both dispersal and survival rates; elasticity analysis indicated that changes in adult survival would have the greatest potential effect on the rate of range expansion, while perturbation analysis showed that variation in subadult dispersal contributed most to variance in model predictions. Variation in survival and dispersal of females at the south end of the range contributed most of the variance in predicted southward range expansion. Our approach provides guidance for the acquisition of further data and a means of forecasting the consequence of specific management actions. Similar methods could aid in the management of other recovering populations.
Prevalence of and risk factors for latex sensitization in patients with spina bifida.
Bernardini, R; Novembre, E; Lombardi, E; Mezzetti, P; Cianferoni, A; Danti, A D; Mercurella, A; Vierucci, A
1998-11-01
We determined the prevalence of and risk factors for latex sensitization in patients with spina bifida. A total of 59 consecutive subjects 2 to 40 years old with spina bifida answered a questionnaire and underwent a latex skin prick test and determination of serum IgE specific for latex by RAST CAP radioimmunoassay. We also assessed total serum IgE and skin prick test responses to common airborne and food allergens. In addition, skin prick plus prick tests were done with fresh foods, including kiwi, pear, orange, almond, pineapple, apple, tomato and banana. Latex sensitization was present in 15 patients (25%) according to the presence of IgE specific to latex, as detected by skin prick test in 9 and/or RAST CAP in 13. Five latex-sensitized patients (33.3%) had clinical manifestations, such as urticaria, conjunctivitis, angioedema, rhinitis and bronchial asthma, while using a latex glove or inflating a latex balloon. Atopy was present in 21 patients (35.6%). In 14 patients (23%), 1 or more skin tests were positive for fresh foods using the prick plus prick technique. Tomato, kiwi and pear were the most common skin-test-positive foods. Univariate analysis revealed that a history of 5 or more operations, atopy, and positive prick plus prick test results for pear and kiwi were significantly associated with latex sensitization. Multivariate analysis demonstrated that only atopy and a history of 5 or more operations were significantly and independently associated with latex sensitization. A quarter of the patients with spina bifida were sensitized to latex. Atopy and an elevated number of operations were significant and independent predictors of latex sensitization in these cases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cacuci, Dan G.; Favorite, Jeffrey A.
2018-04-06
This work presents an application of Cacuci’s Second-Order Adjoint Sensitivity Analysis Methodology (2nd-ASAM) to the simplified Boltzmann equation that models the transport of uncollided particles through a medium to compute efficiently and exactly all of the first- and second-order derivatives (sensitivities) of a detector’s response with respect to the system’s isotopic number densities, microscopic cross sections, source emission rates, and detector response function. The off-the-shelf PARTISN multigroup discrete ordinates code is employed to solve the equations underlying the 2nd-ASAM. The accuracy of the results produced using PARTISN is verified by using the results of three test configurations: (1) a homogeneous sphere, for which the response is the exactly known total uncollided leakage, (2) a multiregion two-dimensional (r-z) cylinder, and (3) a two-region sphere for which the response is a reaction rate. For the homogeneous sphere, results for the total leakage as well as for the respective first- and second-order sensitivities are in excellent agreement with the exact benchmark values. For the nonanalytic problems, the results obtained by applying the 2nd-ASAM to compute sensitivities are in excellent agreement with central-difference estimates. The efficiency of the 2nd-ASAM is underscored by the fact that, for the cylinder, only 12 adjoint PARTISN computations were required by the 2nd-ASAM to compute all of the benchmark’s 18 first-order sensitivities and 224 second-order sensitivities, in contrast to the 877 PARTISN calculations needed to compute the respective sensitivities using central finite differences, and this number does not include the additional calculations that were required to find appropriate values of the perturbations to use for the central differences.
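The central-difference estimates used for verification follow the standard two-point formula df/dx ≈ [f(x+h) - f(x-h)] / (2h). A toy sketch on a one-parameter uncollided-leakage response (simple exponential attenuation, not the PARTISN model), comparing the numerical estimate against the exact derivative:

```python
import math

def leakage(sigma_t, r, source=1.0):
    """Uncollided leakage through a shield of thickness r with total
    cross section sigma_t: response = source * exp(-sigma_t * r)."""
    return source * math.exp(-sigma_t * r)

def central_diff(f, x, h):
    """Two-point central-difference estimate of df/dx; O(h^2) accurate,
    requiring two function evaluations per parameter."""
    return (f(x + h) - f(x - h)) / (2 * h)

sigma, r = 0.5, 3.0
analytic = -r * leakage(sigma, r)                       # exact sensitivity d(leakage)/d(sigma_t)
numeric = central_diff(lambda s: leakage(s, r), sigma, 1e-4)
print(abs(numeric - analytic))                          # small discretization error
```

The two-evaluations-per-parameter cost is what makes finite differences expensive at scale (877 runs above), whereas the adjoint approach amortizes the work across all parameters (12 runs); the sketch also shows why the step size h must be tuned, since too large a step inflates the O(h²) error while too small a step amplifies round-off.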
Lötsch, Jörn; Geisslinger, Gerd; Heinemann, Sarah; Lerch, Florian; Oertel, Bruno G.; Ultsch, Alfred
2018-01-01
The comprehensive assessment of pain-related human phenotypes requires combinations of nociceptive measures that produce complex high-dimensional data, posing challenges to bioinformatic analysis. In this study, we assessed established experimental models of heat hyperalgesia of the skin, consisting of local ultraviolet-B (UV-B) irradiation or capsaicin application, in 82 healthy subjects using a variety of noxious stimuli. We extended the original heat stimulation by applying cold and mechanical stimuli and assessing the hypersensitization effects with a clinically established quantitative sensory testing (QST) battery (German Research Network on Neuropathic Pain). This study provided a 246 × 10 data matrix (82 subjects assessed at baseline, following UV-B application, and following capsaicin application) with respect to 10 QST parameters, which we analyzed using machine-learning techniques. We observed statistically significant effects of the hypersensitization treatments in 9 different QST parameters. Supervised machine-learning analysis, implemented as random forests followed by ABC analysis, pointed to heat pain thresholds as the most relevantly affected QST parameter. However, decision tree analysis indicated that UV-B additionally modulated sensitivity to cold. Unsupervised machine-learning techniques, implemented as emergent self-organizing maps, hinted at subgroups responding to topical application of capsaicin. The distinction among subgroups was based on sensitivity to pressure pain, which could be attributed to sex differences, with women being more sensitive than men. Thus, while UV-B and capsaicin share a major component of heat pain sensitization, they differ in their effects on QST parameter patterns in healthy subjects, suggesting a lack of redundancy between these models. PMID:28700537
Single-tube analysis of DNA methylation with silica superparamagnetic beads.
Bailey, Vasudev J; Zhang, Yi; Keeley, Brian P; Yin, Chao; Pelosky, Kristen L; Brock, Malcolm; Baylin, Stephen B; Herman, James G; Wang, Tza-Huei
2010-06-01
DNA promoter methylation is a signature for the silencing of tumor suppressor genes. Most widely used methods to detect DNA methylation involve 3 separate, independent processes: DNA extraction, bisulfite conversion, and methylation detection via a PCR method, such as methylation-specific PCR (MSP). This method includes many disconnected steps with associated losses of material, potentially reducing the analytical sensitivity required for analysis of challenging clinical samples. Methylation on beads (MOB) is a new technique that integrates DNA extraction, bisulfite conversion, and PCR in a single tube via the use of silica superparamagnetic beads (SSBs) as a common DNA carrier for facilitating cell debris removal and buffer exchange throughout the entire process. In addition, PCR buffer is used to directly elute bisulfite-treated DNA from SSBs for subsequent target amplifications. The diagnostic sensitivity of MOB was evaluated by methylation analysis of the CDKN2A [cyclin-dependent kinase inhibitor 2A (melanoma, p16, inhibits CDK4); also known as p16(INK4a)] promoter in serum DNA of lung cancer patients and compared with that of conventional methods. Methylation analysis consisting of DNA extraction followed by bisulfite conversion and MSP was successfully carried out within 9 h in a single tube. The median pre-PCR DNA yield was 6.61-fold higher with the MOB technique than with conventional techniques. Furthermore, MOB increased the diagnostic sensitivity in our analysis of the CDKN2A promoter in patient serum by successfully detecting methylation in 74% of cancer patients, vs the 45% detection rate obtained with conventional techniques. The MOB technique successfully combined 3 processes into a single tube, thereby allowing ease in handling and an increased detection throughput. The increased pre-PCR yield in MOB allowed efficient, diagnostically sensitive methylation detection.
NASA Astrophysics Data System (ADS)
Zamani, P.; Borzouei, M.
2016-12-01
This paper addresses the sensitivity of the efficiency classification under variable returns to scale (VRS) technology, with the aim of enhancing the credibility of data envelopment analysis (DEA) results in practical applications when an additional decision making unit (DMU) needs to be added to the set under consideration. It also develops a structured approach that assists practitioners in selecting an appropriate variation range for the inputs and outputs of the additional DMU so that this DMU is efficient and the efficiency classification of the VRS technology remains unchanged. This stability region is specified simply through the defining hyperplanes of the production possibility set of the VRS technology and the corresponding halfspaces. Furthermore, this study determines a stability region for the additional DMU within which, in addition to the efficiency classification, the efficiency score of a specific inefficient DMU is preserved; using a simulation method, it also provides a region in which some specific efficient DMUs become inefficient.
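The VRS efficiency classification discussed above is computed, for each DMU, from a small linear program (the input-oriented BCC model). The sketch below is a minimal illustration using SciPy's `linprog` on a toy one-input, one-output data set; the helper name `bcc_input_efficiency` and the data are hypothetical, not from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def bcc_input_efficiency(X, Y, k):
    """Input-oriented BCC (VRS) efficiency of DMU k.
    X: (n, m) inputs, Y: (n, s) outputs.  Decision vars: [theta, lam_1..lam_n]."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(n + 1)
    c[0] = 1.0                       # minimize theta
    A_ub, b_ub = [], []
    for i in range(m):               # sum_j lam_j x_ij - theta * x_ik <= 0
        A_ub.append(np.concatenate(([-X[k, i]], X[:, i])))
        b_ub.append(0.0)
    for r in range(s):               # sum_j lam_j y_rj >= y_rk
        A_ub.append(np.concatenate(([0.0], -Y[:, r])))
        b_ub.append(-Y[k, r])
    A_eq = [np.concatenate(([0.0], np.ones(n)))]   # VRS convexity: sum lam = 1
    b_eq = [1.0]
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
                  A_eq=np.array(A_eq), b_eq=b_eq,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

# Toy data: DMU 1 lies below the A-C frontier segment, so it is VRS-inefficient
X = np.array([[2.0], [5.0], [8.0]])
Y = np.array([[2.0], [3.0], [5.0]])
scores = [bcc_input_efficiency(X, Y, k) for k in range(3)]
print([round(v, 3) for v in scores])
```

A DMU with a score of 1 lies on the VRS frontier; here the middle DMU can radially contract its input to 80% while staying in the production possibility set.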
On Learning Cluster Coefficient of Private Networks
Wang, Yue; Wu, Xintao; Zhu, Jun; Xiang, Yang
2013-01-01
Enabling accurate analysis of social network data while preserving differential privacy has been challenging, since graph features such as the clustering coefficient or modularity often have high sensitivity, unlike traditional aggregate functions (e.g., count and sum) on tabular data. In this paper, we treat a graph statistic as a function f and develop a divide and conquer approach to enforce differential privacy. The basic procedure of this approach is to first decompose the target computation f into several less complex unit computations f1, …, fm connected by basic mathematical operations (e.g., addition, subtraction, multiplication, division), then perturb the output of each fi with Laplace noise derived from its own sensitivity value and the distributed privacy threshold εi, and finally combine those perturbed fi as the perturbed output of computation f. We examine how various operations affect the accuracy of complex computations. When unit computations have large global sensitivity values, we enforce differential privacy by calibrating noise based on the smooth sensitivity rather than the global sensitivity. By doing this, we achieve the strict differential privacy guarantee with noise of smaller magnitude. We illustrate our approach using the clustering coefficient, a popular statistic in social network analysis. Empirical evaluations on five real social networks and various synthetic graphs generated from three random graph models show that the developed divide and conquer approach outperforms the direct approach. PMID:24429843
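The divide-and-conquer procedure can be sketched in a few lines: each unit computation is perturbed with Laplace noise whose scale is its own sensitivity divided by its share of the privacy budget, and the noisy parts are then recombined. The numeric values and sensitivities below are illustrative placeholders, not values derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_perturb(value, sensitivity, eps):
    """Laplace mechanism: noise scale = sensitivity / eps."""
    return value + rng.laplace(scale=sensitivity / eps)

# Toy decomposition f = f1 / f2 (e.g. a triangle count over a count of
# connected triples).  The global sensitivities here are illustrative
# placeholders, not the paper's derived values.
f1, s1 = 120.0, 3.0        # unit computation 1 and its sensitivity
f2, s2 = 400.0, 2.0        # unit computation 2 and its sensitivity
eps_total = 1.0
eps1 = eps2 = eps_total / 2  # distribute the privacy budget over the units

# Perturb each unit independently, then combine via the division operation
noisy = laplace_perturb(f1, s1, eps1) / laplace_perturb(f2, s2, eps2)
true = f1 / f2
print(f"true={true:.3f}  perturbed={noisy:.3f}")
```

By sequential composition, releasing both noisy units together satisfies eps_total-differential privacy, and any post-processing (the division) keeps that guarantee.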
Luo, Chuan; Li, Zhaofu; Li, Hengpeng; Chen, Xiaomin
2015-01-01
The application of hydrological and water quality models is an efficient approach to better understand the processes of environmental deterioration. This study evaluated the ability of the Annualized Agricultural Non-Point Source (AnnAGNPS) model to predict runoff, total nitrogen (TN) and total phosphorus (TP) loading in a typical small watershed of a hilly region near Taihu Lake, China. Runoff was calibrated and validated at both annual and monthly scales, and parameter sensitivity analysis was performed for TN and TP before the two water quality components were calibrated. The results showed that the model satisfactorily simulated runoff at annual and monthly scales, during both the calibration and validation processes. Additionally, the parameter sensitivity analysis showed that TN output was most sensitive to the parameters Fertilizer rate, Fertilizer organic, Canopy cover and Fertilizer inorganic. For TP, the parameters Residue mass ratio, Fertilizer rate, Fertilizer inorganic and Canopy cover were the most sensitive. Calibration was performed on the basis of these sensitive parameters. TN loading produced satisfactory results for both the calibration and validation processes, whereas the performance for TP loading was slightly poorer. The simulation results showed that AnnAGNPS has the potential to be used as a valuable tool for the planning and management of watersheds. PMID:26364642
Diagnosis of Middle Atmosphere Climate Sensitivity by the Climate Feedback Response Analysis Method
NASA Technical Reports Server (NTRS)
Zhu, Xun; Yee, Jeng-Hwa; Cai, Ming; Swartz, William H.; Coy, Lawrence; Aquila, Valentina; Talaat, Elsayed R.
2014-01-01
We present a new method to diagnose the middle atmosphere climate sensitivity by extending the Climate Feedback-Response Analysis Method (CFRAM) for the coupled atmosphere-surface system to the middle atmosphere. The Middle atmosphere CFRAM (MCFRAM) is built on the atmospheric energy equation per unit mass with radiative heating and cooling rates as its major thermal energy sources. MCFRAM preserves CFRAM's unique additive property, whereby the sum of all partial temperature changes due to variations in external forcing and feedback processes equals the observed temperature change. In addition, MCFRAM establishes a physical relationship of radiative damping between the energy perturbations associated with various feedback processes and the temperature perturbations associated with thermal responses. MCFRAM is applied to both measurements and model output fields to diagnose the middle atmosphere climate sensitivity. It is found that the largest component of the middle atmosphere temperature response to the 11-year solar cycle (solar maximum vs. solar minimum) comes directly from the partial temperature change due to the variation of the input solar flux. Increasing CO2 always cools the middle atmosphere with time, whereas the partial temperature change due to O3 variation can be either positive or negative. The partial temperature changes due to different feedbacks show distinctly different spatial patterns. The thermally driven, globally averaged partial temperature change due to all radiative processes is approximately equal to the observed temperature change, reaching 0.5 K near 70 km from solar maximum to solar minimum.
Seligman, D A; Pullinger, A G
2000-01-01
Confusion about the relationship of occlusion to temporomandibular disorders (TMD) persists. This study attempted to identify occlusal and attrition factors plus age that would characterize asymptomatic normal female subjects. A total of 124 female patients with intracapsular TMD were compared with 47 asymptomatic female controls for associations to 9 occlusal factors, 3 attrition severity measures, and age using classification tree, multiple stepwise logistic regression, and univariate analyses. Models were tested for accuracy (sensitivity and specificity) and total contribution to the variance. The classification tree model had 4 terminal nodes that used only anterior attrition and age. "Normals" were mainly characterized by low attrition levels, whereas patients had higher attrition and tended to be younger. The tree model was only moderately useful (sensitivity 63%, specificity 94%) in predicting normals. The logistic regression model incorporated unilateral posterior crossbite and mediotrusive attrition severity in addition to the 2 factors in the tree, but was slightly less accurate than the tree (sensitivity 51%, specificity 90%). When only occlusal factors were considered in the analysis, normals were additionally characterized by a lack of anterior open bite, smaller overjet, and smaller RCP-ICP slides. The log likelihood accounted for was similar for both the tree (pseudo R(2) = 29.38%; mean deviance = 0.95) and the multiple logistic regression (Cox Snell R(2) = 30.3%, mean deviance = 0.84) models. The occlusal and attrition factors studied were only moderately useful in differentiating normals from TMD patients.
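The sensitivity and specificity figures reported for the tree and regression models come from a confusion matrix over the predicted versus actual group labels. A minimal sketch of that computation, using hypothetical labels rather than the study's data (1 = asymptomatic "normal", 0 = TMD patient, matching the study's goal of predicting normals):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical predictions for 8 subjects (not the study's data)
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0, 0, 1]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(f"sensitivity={sens:.2f} specificity={spec:.2f}")
```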
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan; Bittker, David A.
1994-01-01
LSENS, the Lewis General Chemical Kinetics Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 2 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 2 describes the code, how to modify it, and its usage, including preparation of the problem data file required to execute LSENS. Code usage is illustrated by several example problems, which further explain preparation of the problem data file and show how to obtain desired accuracy in the computed results. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions. Part 1 (NASA RP-1328) derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved by LSENS. Part 3 (NASA RP-1330) explains the kinetics and kinetics-plus-sensitivity-analysis problems supplied with LSENS and presents sample results.
Radiolysis Model Sensitivity Analysis for a Used Fuel Storage Canister
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wittman, Richard S.
2013-09-20
This report fulfills the M3 milestone (M3FT-13PN0810027) to report on a radiolysis computer model analysis that estimates the generation of radiolytic products for a storage canister. The analysis considers radiolysis outside storage canister walls and within the canister fill gas over a possible 300-year lifetime. Previous work relied on estimates based directly on a water radiolysis G-value. This work also includes that effect with the addition of coupled kinetics for 111 reactions for 40 gas species to account for radiolytic-induced chemistry, which includes water recombination and reactions with air.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-11
... responsible for making sure that your comment does not include any sensitive health information, like medical records or other individually identifiable health information. In addition, do not include any ``[t]rade... overnight service. Visit the Commission Web site at http://www.ftc.gov to read this Notice and the news...
Eom, Han Young; Park, So-Young; Kim, Min Kyung; Suh, Joon Hyuk; Yeom, Hyesun; Min, Jung Won; Kim, Unyong; Lee, Jeongmi; Youm, Jeong-Rok; Han, Sang Beom
2010-06-25
Saikosaponins are triterpene saponins derived from the roots of Bupleurum falcatum L. (Umbelliferae), which has been traditionally used to treat fever, inflammation, liver diseases, and nephritis. It is difficult to analyze saikosaponins using HPLC-UV due to the lack of chromophores. Therefore, evaporative light scattering detection (ELSD) is used as a valuable alternative to UV detection. More recently, a charged aerosol detection (CAD) method has been developed to improve the sensitivity and reproducibility of ELSD. In this study, we compared CAD and ELSD methods in the simultaneous analysis of 10 saikosaponins, including saikosaponins-A, -B(1), -B(2), -B(3), -B(4), -C, -D, -G, -H and -I. A mixture of the 10 saikosaponins was injected into the Ascentis Express C18 column (100 mm × 4.6 mm, 2.7 μm) with gradient elution and detection with CAD and ELSD by splitting. We examined various factors that could affect the sensitivity of the detectors, including various concentrations of additives, pH and flow rate of the mobile phase, purity of the nitrogen gas and the CAD range. The sensitivity was determined based on the signal-to-noise ratio. The best sensitivity for CAD was achieved with 0.1 mM ammonium acetate at pH 4.0 in the mobile phase, a flow rate of 1.0 mL/min, and the CAD range set at 100 pA, whereas that for ELSD was achieved with 0.01% acetic acid in the mobile phase and a flow rate of 0.8 mL/min. The purity of the nitrogen gas had only minor effects on the sensitivities of both detectors. Finally, the sensitivity of CAD was two to six times better than that of ELSD. Taken together, these results suggest that CAD provides a more sensitive analysis of the 10 saikosaponins than does ELSD. Copyright 2010 Elsevier B.V. All rights reserved.
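Since sensitivity here was judged from the signal-to-noise ratio, a small sketch of one common chromatographic S/N convention (peak height above the baseline mean, divided by the peak-to-peak baseline noise) may help. The synthetic trace, slice positions, and noise level below are assumptions for illustration, not the authors' data:

```python
import numpy as np

def signal_to_noise(trace, peak_slice, baseline_slice):
    """S/N as peak height above the baseline mean divided by the
    peak-to-peak baseline noise (one common chromatographic convention)."""
    baseline = trace[baseline_slice]
    noise = baseline.max() - baseline.min()
    height = trace[peak_slice].max() - baseline.mean()
    return height / noise

# Synthetic chromatogram: noisy flat baseline plus a Gaussian peak at t = 6
t = np.linspace(0, 10, 1000)
rng = np.random.default_rng(1)
trace = 0.05 * rng.standard_normal(t.size) + 4.0 * np.exp(-((t - 6.0) / 0.1) ** 2)
snr = signal_to_noise(trace, slice(550, 650), slice(0, 400))
print(f"S/N = {snr:.1f}")
```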
Song, Xinxin; Wu, Yanjie; Wu, Lin; Hu, Yufang; Li, Wenrou; Guo, Zhiyong; Su, Xiurong; Jiang, Xiaohua
2017-01-01
A Christmas-tree-derived immunosensor based on a gold label silver stain (GLSS) technique was fabricated for highly sensitive analysis of Vibrio parahaemolyticus (VP). In this strategy, capture VP antibody (cAb) was immobilized on a solid substrate; the VPs were then sequentially tagged with a signal probe by incubating the assay with detection VP antibody (dAb)-conjugated, gold nanoparticle (AuNP)-labeled graphite-like carbon nitride (g-C3N4). Finally, the attached signal probe yielded a visible signal upon silver metal deposition, followed by grey-value acquisition with a homebrew Matlab 6.0 routine. In addition, the overall design of the biosensor was built on abundant AuNPs and two-dimensional g-C3N4, affording a bulb-decorated Christmas-tree model. Moreover, under optimized conditions, the detection limit of the proposed biosensor is as low as 10² CFU (colony-forming units) mL⁻¹, an improvement of two orders of magnitude compared with the traditional immune-gold method. Additionally, the developed visible immunosensor was also successfully applied to the analysis of complicated samples.
Analytical performance of the various acquisition modes in Orbitrap MS and MS/MS.
Kaufmann, Anton
2018-04-30
Quadrupole Orbitrap instruments (Q Orbitrap) permit high-resolution mass spectrometry (HRMS)-based full scan acquisitions and have a number of acquisition modes where the quadrupole isolates a particular mass range prior to a possible fragmentation and HRMS-based acquisition. Selecting the proper acquisition mode(s) is essential if trace analytes are to be quantified in complex matrix extracts. Depending on the particular requirements, such as sensitivity, selectivity of detection, linear dynamic range, and speed of analysis, different acquisition modes may have to be chosen. This is particularly important in the field of multi-residue analysis (e.g., pesticides or veterinary drugs in food samples) where a large number of analytes within a complex matrix have to be detected and reliably quantified. Meeting the specific detection and quantification performance criteria for every targeted compound may be challenging. It is the aim of this paper to describe the strengths and the limitations of the currently available Q Orbitrap acquisition modes. In addition, the incorporation of targeted acquisitions between full scan experiments is discussed. This approach is intended to integrate compounds that require an additional degree of sensitivity or selectivity into multi-residue methods. This article is protected by copyright. All rights reserved.
NASA Astrophysics Data System (ADS)
Skeie, R. B.; Berntsen, T.; Aldrin, M.; Holden, M.; Myhre, G.
2012-04-01
A key question in climate science is to quantify the sensitivity of the climate system to perturbations in the radiative forcing (RF). This sensitivity is often represented by the equilibrium climate sensitivity, but this quantity is poorly constrained, with significant probabilities for high values. In this work the equilibrium climate sensitivity (ECS) is estimated based on the observed near-surface temperature change from the instrumental record, changes in ocean heat content and detailed RF time series. RF time series from pre-industrial times to 2010 for all main anthropogenic and natural forcing mechanisms are estimated, and the cloud lifetime effect and the semi-direct effect, which are not RF mechanisms in a strict sense, are included in the analysis. The RF time series are linked to the observations of ocean heat content and temperature change through an energy balance model and a stochastic model, using a Bayesian approach to estimate the ECS from the data. The posterior mean of the ECS is 1.9 °C, with a 90% credible interval (C.I.) ranging from 1.2 to 2.9 °C, which is tighter than previously published estimates. Observational data up to and including year 2010 are used in this study. This is at least ten additional years compared to the majority of previously published studies that have used the instrumental record in attempts to constrain the ECS. We show that the additional 10 years of data, and especially 10 years of additional ocean heat content data, have significantly narrowed the probability density function of the ECS. If only data up to and including year 2000 are used in the analysis, the 90% C.I. is 1.4 to 10.6 °C, with a pronounced heavy tail in line with previous estimates of ECS constrained by observations in the 20th century. The transient climate response (TCR) is also estimated in this study. Using observational data up to and including year 2010 gives a 90% C.I. of 1.0 to 2.1 °C, while the 90% C.I. is significantly broader, ranging from 1.1 to 3.4 °C, if only data up to and including year 2000 are used.
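The energy balance link between forcing and temperature can be illustrated with a one-box model, C dT/dt = F(t) − λT, for which ECS = F_2x/λ. This is a minimal sketch, not the paper's coupled energy-balance/stochastic model; the heat capacity is illustrative and λ is chosen so the sketch reproduces the reported posterior mean ECS of 1.9 °C.

```python
import numpy as np

F2X = 3.7          # W m^-2, forcing for a CO2 doubling (standard value)
LAM = 3.7 / 1.9    # W m^-2 K^-1, chosen so that ECS = F2X / LAM = 1.9 K
C = 8.0            # W yr m^-2 K^-1, illustrative effective heat capacity

def integrate(forcing, dt=1.0):
    """Forward-Euler integration of C dT/dt = F - LAM * T (dt in years)."""
    T, out = 0.0, []
    for F in forcing:
        T += dt * (F - LAM * T) / C
        out.append(T)
    return np.array(out)

# Step forcing of one CO2 doubling held for 2000 years -> equilibrium
T = integrate(np.full(2000, F2X))
ecs = F2X / LAM
print(f"ECS = {ecs:.2f} K, T after 2000 yr = {T[-1]:.2f} K")
```

With the ~4-year relaxation time C/λ used here, the box reaches its equilibrium F_2x/λ long before the end of the run, so the final temperature matches the ECS.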
Significance of dual polarized long wavelength radar for terrain analysis
NASA Technical Reports Server (NTRS)
Macdonald, H. C.; Waite, W. P.
1978-01-01
Long wavelength systems with improved penetration capability have been considered to have the potential for minimizing the vegetation contribution and enhancing the surface return variations. L-band imagery of the Arkansas geologic test site provides confirmatory evidence of this effect. However, the increased wavelength increases the sensitivity to larger scale structure at relatively small incidence angles. The regularity of agricultural and urban scenes provides large components in the low frequency-large scale portion of the roughness spectrum that are highly sensitive to orientation. The addition of a cross polarized channel is shown to enable the interpreter to distinguish vegetation and orientational perturbations in the surface return.
Thermo-elastic optical coherence tomography.
Wang, Tianshi; Pfeiffer, Tom; Wu, Min; Wieser, Wolfgang; Amenta, Gaetano; Draxinger, Wolfgang; van der Steen, Antonius F W; Huber, Robert; Soest, Gijs van
2017-09-01
The absorption of nanosecond laser pulses induces rapid thermo-elastic deformation in tissue. A sub-micrometer scale displacement occurs within a few microseconds after the pulse arrival. In this Letter, we investigate the laser-induced thermo-elastic deformation using a 1.5 MHz phase-sensitive optical coherence tomography (OCT) system. A displacement image can be reconstructed, which enables a new modality of phase-sensitive OCT, called thermo-elastic OCT. An analysis of the results shows that the optical absorption is a dominating factor for the displacement. Thermo-elastic OCT is capable of visualizing inclusions that do not appear on the structural OCT image, providing additional tissue type information.
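In phase-sensitive OCT, the sub-micrometer axial displacement is recovered from the interferometric phase shift via d = Δφ·λ0/(4πn) for a double-pass geometry. A minimal sketch of that conversion, assuming a 1310 nm center wavelength and a typical tissue refractive index of 1.38 (neither value is taken from the Letter):

```python
import numpy as np

def phase_to_displacement(dphi, wavelength, n=1.38):
    """Axial displacement from an OCT phase shift (double-pass geometry):
    d = dphi * lambda0 / (4 * pi * n).  n = 1.38 is a typical tissue
    refractive index, an assumption rather than a value from the Letter."""
    return dphi * wavelength / (4 * np.pi * n)

# A pi/2 phase shift at a 1310 nm center wavelength
d = phase_to_displacement(np.pi / 2, 1310e-9)
print(f"displacement = {d * 1e9:.1f} nm")
```

A quarter-cycle phase shift thus maps to a displacement of roughly a hundred nanometers, consistent with the sub-micrometer scale described in the abstract.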
Error analysis and prevention of cosmic ion-induced soft errors in static CMOS RAMs
NASA Astrophysics Data System (ADS)
Diehl, S. E.; Ochoa, A., Jr.; Dressendorfer, P. V.; Koga, P.; Kolasinski, W. A.
1982-12-01
Cosmic ray interactions with memory cells are known to cause temporary, random, bit errors in some designs. The sensitivity of polysilicon gate CMOS static RAM designs to logic upset by impinging ions has been studied using computer simulations and experimental heavy ion bombardment. Results of the simulations are confirmed by experimental upset cross-section data. Analytical models have been extended to determine and evaluate design modifications which reduce memory cell sensitivity to cosmic ions. A simple design modification, the addition of decoupling resistance in the feedback path, is shown to produce static RAMs immune to cosmic ray-induced bit errors.
Guo, Cheng; Li, Xiaofen; Wang, Rong; Yu, Jiekai; Ye, Minfeng; Mao, Lingna; Zhang, Suzhan; Zheng, Shu
2016-01-01
Oxidative DNA damage plays crucial roles in the pathogenesis of numerous diseases including cancer. 8-hydroxy-2′-deoxyguanosine (8-OHdG) is the most representative product of oxidative modifications of DNA, and urinary 8-OHdG is potentially the best non-invasive biomarker of oxidative damage to DNA. Herein, we developed a sensitive, specific and accurate method for quantification of 8-OHdG in human urine. The urine samples were pretreated using off-line solid-phase extraction (SPE), followed by ultrahigh performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) analysis. By the use of acetic acid as an additive to the mobile phase, we improved the UPLC-MS/MS detection of 8-OHdG by 2.7−5.3 times. Using the developed strategy, we measured the contents of 8-OHdG in urine samples from 142 healthy volunteers and 84 patients with colorectal cancer (CRC). We observed increased levels of urinary 8-OHdG in patients with CRC and patients with tumor metastasis, compared to healthy controls and patients without tumor metastasis, respectively. Additionally, logistic regression analysis and receiver operator characteristic (ROC) curve analysis were performed. Our findings implicate that oxidative stress plays important roles in the development of CRC and the marked increase of urinary 8-OHdG may serve as a potential liquid biomarker for the risk estimation, early warning and detection of CRC. PMID:27585556
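The ROC analysis mentioned above reduces to estimating the area under the curve (AUC), which equals the Mann-Whitney probability that a randomly chosen patient scores above a randomly chosen control. A minimal sketch with hypothetical 8-OHdG values, not the study's data:

```python
import numpy as np

def roc_auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen case scores higher than a randomly chosen control
    (ties count as half)."""
    pos = np.asarray(scores_pos)[:, None]
    neg = np.asarray(scores_neg)[None, :]
    wins = (pos > neg).sum() + 0.5 * (pos == neg).sum()
    return wins / (pos.size * neg.size)

# Hypothetical urinary 8-OHdG levels (arbitrary units), not study data
crc      = [5.1, 6.3, 4.8, 7.2, 5.9, 6.8]   # patients
controls = [3.2, 4.1, 5.0, 3.8, 4.5, 2.9]   # healthy volunteers
auc = roc_auc(crc, controls)
print(f"AUC = {auc:.3f}")
```

An AUC near 1 indicates that the biomarker separates the two groups well; 0.5 indicates no discrimination.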
Xu, Li; Jiang, Yong; Qiu, Rong
2018-01-01
In the present study, the co-pyrolysis behavior of rape straw, waste tire and their various blends was investigated. TG-FTIR indicated that co-pyrolysis proceeds as a four-step reaction, with H2O, CH, OH, CO2 and CO groups as the main products evolved during the process. Additionally, using BBD-based experimental results, best-fit multiple regression models with high R²-pred values (94.10% for mass loss and 95.37% for reaction heat), which correlate the explanatory variables with the responses, were presented. The derived models were analyzed by ANOVA at the 95% confidence interval; the F-test, lack-of-fit test and normal probability plots of the residuals implied that the models describe the experimental data well. Finally, the model uncertainties as well as the interactive effects of these parameters were studied, and the total-, first- and second-order sensitivity indices of the operating factors were obtained using Sobol' variance decomposition. To the authors' knowledge, this is the first time global parameter sensitivity analysis has been performed in the (co-)pyrolysis literature. Copyright © 2017 Elsevier Ltd. All rights reserved.
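First-order Sobol' indices of the kind reported here can be estimated with the Saltelli pick-freeze scheme: evaluate the model on two independent sample matrices A and B and on hybrids AB_i that take column i from B. The estimator below is standard, but the additive toy model (with exact indices 0.2 and 0.8) and the sample size are illustrative choices, not the paper's setup.

```python
import numpy as np

def sobol_first_order(f, d, n=20000, rng=None):
    """First-order Sobol' indices via the Saltelli pick-freeze estimator
    S_i = E[f(B) * (f(AB_i) - f(A))] / Var(f), inputs ~ U(0, 1)^d.
    f maps an (n, d) sample array to n outputs."""
    rng = rng or np.random.default_rng(42)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]           # freeze all columns except i
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

# Additive toy model y = x1 + 2*x2: exact first-order indices are 0.2 and 0.8
model = lambda X: X[:, 0] + 2.0 * X[:, 1]
S = sobol_first_order(model, d=2)
print(np.round(S, 2))
```

For a purely additive model the first-order indices sum to one; interaction effects in a real pyrolysis model would show up as higher-order terms.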
NASA Astrophysics Data System (ADS)
Kamiński, M.; Supeł, Ł.
2016-02-01
It is widely known that lateral-torsional buckling of a member under bending, and the warping restraints of its cross-sections, are crucial in steel structures for estimating their safety and durability. Although engineering codes for steel and aluminum structures support the designer with additional analytical expressions that depend even on the boundary conditions and internal force diagrams, one may alternatively apply the traditional Finite Element or Finite Difference Methods (FEM, FDM) to determine the so-called critical moment representing this phenomenon. The principal purpose of this work is to compare three different ways of determining the critical moment, also in the context of structural sensitivity analysis with respect to the structural element length. Sensitivity gradients are determined here using both an analytical approach and the central finite difference scheme, and are contrasted for the analytical, FEM and FDM approaches. The computational study is provided for the entire family of steel I- and H-beams available to practitioners in this area, and is a basis for further stochastic reliability analysis as well as durability prediction including possible corrosion progress.
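The central finite difference sensitivity gradient can be checked against the analytical derivative of the classical closed-form critical moment for a simply supported beam under uniform moment, Mcr = (π/L)·sqrt(E·Iz·G·It + (π/L)²·E²·Iz·Iw). A minimal sketch; the section constants are IPE 300-like illustrative values, not taken from the paper:

```python
import numpy as np

E, G = 210e9, 81e9                       # steel moduli [Pa]
Iz, It, Iw = 6.04e-6, 2.06e-7, 1.25e-7   # minor-axis inertia [m^4], torsion
                                         # constant [m^4], warping constant [m^6]

def m_cr(L):
    """Elastic critical moment for lateral-torsional buckling of a simply
    supported beam under uniform moment (classical closed-form solution)."""
    return (np.pi / L) * np.sqrt(E * Iz * G * It + (np.pi / L) ** 2 * E ** 2 * Iz * Iw)

def dmcr_dL_central(L, h=1e-4):
    """Sensitivity gradient dMcr/dL by the central finite difference scheme."""
    return (m_cr(L + h) - m_cr(L - h)) / (2 * h)

L = 6.0
g_fd = dmcr_dL_central(L)
# analytical gradient, for cross-checking the finite difference scheme
a = E * Iz * G * It
b = np.pi ** 2 * E ** 2 * Iz * Iw
g_an = (-np.pi / L**2) * np.sqrt(a + b / L**2) \
       - (np.pi / L) * (b / L**3) / np.sqrt(a + b / L**2)
print(f"FD: {g_fd:.6e}  analytic: {g_an:.6e}")
```

The O(h²) central difference agrees with the analytical gradient to many digits here, which is the kind of cross-check the paper performs across beam families.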
Schmitt, Michael; Heib, Florian
2013-10-07
Drop shape analysis is one of the most important and frequently used methods to characterise surfaces in the scientific and industrial communities. An especially large number of studies, which use contact angle measurements to analyse surfaces, are characterised by incorrect or misdirected conclusions such as the determination of surface energies from poorly performed contact angle determinations. In particular, the characterisation of surfaces, which leads to correlations between the contact angle and other effects, must be critically validated for some publications. A large number of works exist concerning the theoretical and thermodynamic aspects of two- and tri-phase boundaries. The linkage between theory and experiment is generally performed by an axisymmetric drop shape analysis, that is, simulations of the theoretical drop profiles by numerical integration fitted to a number of points of the drop meniscus (approximately 20). These methods work very well for axisymmetric profiles such as those obtained by pendant drop measurements, but in the case of a sessile drop on real surfaces, additional unknown and misunderstood effects on the dependence of the surface must be considered. We present a special experimental and practical investigation as another way to transition from experiment to theory. This procedure was developed to be especially sensitive to small variations in the dependence of the dynamic contact angle on the surface; as a result, this procedure will allow the properties of the surface to be monitored with a higher precision and sensitivity. In this context, water drops on a (111) silicon wafer are dynamically measured by video recording and by inclining the surface, which results in a sequence of non-axisymmetric drops. The drop profiles are analysed by commercial software and by the developed and presented high-precision drop shape analysis.
In addition to the enhanced sensitivity for contact angle determination, this analysis technique, in combination with innovative fit algorithms and data presentations, can result in enhanced reproducibility and comparability of the contact angle measurements in terms of the material characterisation in a comprehensible way.
Boscaini, Camile; Pellanda, Lucia Campos
2015-01-01
Studies have shown associations of birth weight with increased concentrations of high sensitivity C-reactive protein. This study assessed the relationship between birth weight, anthropometric and metabolic parameters during childhood, and high sensitivity C-reactive protein. A total of 612 Brazilian school children aged 5-13 years were included in the study. High sensitivity C-reactive protein was measured by particle-enhanced immunonephelometry. Nutritional status was assessed by body mass index, waist circumference, and skinfolds. Total cholesterol and fractions, triglycerides, and glucose were measured by enzymatic methods. Insulin sensitivity was determined by the homeostasis model assessment method. Statistical analysis included the chi-square test, the General Linear Model, and the General Linear Model for the Gamma Distribution. Body mass index, waist circumference, and skinfolds were directly associated with birth weight (P < 0.001, P = 0.001, and P = 0.015, respectively). Large for gestational age children showed higher high sensitivity C-reactive protein levels (P < 0.001) than small for gestational age children. High birth weight is associated with higher levels of high sensitivity C-reactive protein, body mass index, waist circumference, and skinfolds. Being large for gestational age was associated with altered high sensitivity C-reactive protein and posed an additional risk factor for atherosclerosis in these school children, independent of current nutritional status.
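The homeostasis model assessment of insulin sensitivity mentioned above is, in its common insulin-resistance form, the standard formula HOMA-IR = fasting glucose [mmol/L] × fasting insulin [µU/mL] / 22.5. A minimal sketch with hypothetical fasting values, not data from the study:

```python
def homa_ir(glucose_mmol_l, insulin_uU_ml):
    """Homeostasis model assessment of insulin resistance:
    HOMA-IR = fasting glucose [mmol/L] * fasting insulin [uU/mL] / 22.5."""
    return glucose_mmol_l * insulin_uU_ml / 22.5

# Hypothetical fasting values (5.0 mmol/L glucose, 9.0 uU/mL insulin)
val = homa_ir(5.0, 9.0)
print(round(val, 2))
```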
Digital Correlation Microwave Polarimetry: Analysis and Demonstration
NASA Technical Reports Server (NTRS)
Piepmeier, J. R.; Gasiewski, A. J.; Krebs, Carolyn A. (Technical Monitor)
2000-01-01
The design, analysis, and demonstration of a digital-correlation microwave polarimeter for use in Earth remote sensing are presented. We begin with an analysis of three-level digital correlation and develop the correlator transfer function and radiometric sensitivity. A fifth-order polynomial regression is derived for inverting the digital correlation coefficient into the analog statistic. In addition, the effects of quantizer threshold asymmetry and hysteresis are discussed. A two-look unpolarized calibration scheme is developed for identifying correlation offsets. The developed theory and calibration method are verified using a 10.7 GHz and a 37.0 GHz polarimeter. The polarimeters are based upon 1-GS/s three-level digital correlators and measure the first three Stokes parameters. Through experiment, the radiometric sensitivity is shown to approach the theoretical value derived earlier in the paper, and the two-look unpolarized calibration method compares successfully with results from a polarimetric scheme. Finally, sample data from an aircraft experiment demonstrate that the polarimeter is highly useful for ocean wind-vector measurement.
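The three-level digital correlation analysed above can be illustrated with a toy simulation: quantize two correlated Gaussian streams to {-1, 0, +1} and form the normalized count-based correlation. This is a hedged stand-in only; the threshold `v`, the normalization, and the sample size are illustrative, and the paper's fifth-order inversion polynomial is not reproduced:

```python
import math
import random

def quantize3(x, v):
    """Three-level quantizer with symmetric thresholds +/- v."""
    return 1 if x > v else (-1 if x < -v else 0)

def digital_corr(rho, n=200_000, v=0.6, seed=1):
    """Empirical three-level digital correlation of two unit-variance
    Gaussian streams whose analog correlation is rho."""
    rng = random.Random(seed)
    num = sx = sy = 0
    for _ in range(n):
        a = rng.gauss(0.0, 1.0)
        b = rho * a + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        qa, qb = quantize3(a, v), quantize3(b, v)
        num += qa * qb
        sx += qa * qa
        sy += qb * qb
    return num / math.sqrt(sx * sy)
```

The digital coefficient tracks the analog correlation monotonically but compresses it, which is why an inversion step (such as the fifth-order regression in the paper) is needed to recover the analog statistic.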
Field, Christopher R.; Lubrano, Adam; Woytowitz, Morgan; Giordano, Braden C.; Rose-Pehrsson, Susan L.
2014-01-01
The direct liquid deposition of solution standards onto sorbent-filled thermal desorption tubes is used for the quantitative analysis of trace explosive vapor samples. The direct liquid deposition method yields a higher fidelity between the analysis of vapor samples and the analysis of solution standards than using separate injection methods for vapors and solutions, i.e., samples collected on vapor collection tubes and standards prepared in solution vials. Additionally, the method can account for instrumentation losses, which makes it ideal for minimizing variability and quantitative trace chemical detection. Gas chromatography with an electron capture detector is an instrumentation configuration sensitive to nitro-energetics, such as TNT and RDX, due to their relatively high electron affinity. However, vapor quantitation of these compounds is difficult without viable vapor standards. Thus, we eliminate the requirement for vapor standards by combining the sensitivity of the instrumentation with a direct liquid deposition protocol to analyze trace explosive vapor samples. PMID:25145416
Samsudin, Mohd Dinie Muhaimin; Mat Don, Mashitah
2015-01-01
Oil palm trunk (OPT) sap was utilized for growth and bioethanol production by Saccharomyces cerevisiae, with palm oil mill effluent (POME) added as a nutrient supplier. A maximum yield (YP/S) of 0.464 g bioethanol/g glucose present in the OPT sap-POME-based media was attained. However, OPT sap and POME are heterogeneous in their properties, and fermentation performance might change if the process is repeated. The contribution of parametric uncertainty to bioethanol fermentation performance was therefore assessed using Monte Carlo simulation (stochastic variables) to determine probability distributions arising from fluctuation and variation of the kinetic model parameters. Results showed that, over the 100,000 samples tested, the yield (YP/S) ranged from 0.423 to 0.501 g/g. A sensitivity analysis was also performed to evaluate the impact of each kinetic parameter on fermentation performance. Bioethanol fermentation was found to depend strongly on the growth of the tested yeast. Copyright © 2014 Elsevier Ltd. All rights reserved.
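The Monte Carlo step described above, sampling uncertain kinetic parameters and propagating them to a yield distribution, can be sketched generically. The saturation-type yield model and the parameter distributions below are invented stand-ins, not the kinetic model fitted in the study:

```python
import random
import statistics

def yield_model(y_max, k, s0=60.0):
    """Stand-in saturation model for bioethanol yield (illustrative only;
    s0 is an assumed initial glucose level in g/L)."""
    return y_max * s0 / (k + s0)

def monte_carlo_yield(n=10_000, seed=42):
    """Sample uncertain parameters and propagate them through the model,
    returning the mean and spread of the resulting yield distribution."""
    rng = random.Random(seed)
    samples = [yield_model(rng.gauss(0.50, 0.02), rng.gauss(5.0, 0.5))
               for _ in range(n)]
    return statistics.mean(samples), statistics.stdev(samples)
```

The spread of the sampled yields plays the same role as the 0.423-0.501 g/g range reported above: it quantifies how parameter uncertainty propagates to the performance metric.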
Hagopian, Louis P.; Rooker, Griffin W.; Zarcone, Jennifer R.; Bonner, Andrew C.; Arevalo, Alexander R.
2017-01-01
Hagopian, Rooker, and Zarcone (2015) evaluated a model for subtyping automatically reinforced self-injurious behavior (SIB) based on its sensitivity to changes in functional analysis conditions and the presence of self-restraint. The current study tested the generality of the model by applying it to all datasets of automatically reinforced SIB published from 1982 to 2015. We identified 49 datasets that included sufficient data to permit subtyping. Similar to the original study, Subtype-1 SIB was generally amenable to treatment using reinforcement alone, whereas Subtype-2 SIB was not. Conclusions could not be drawn about Subtype-3 SIB due to the small number of datasets. Nevertheless, the findings support the generality of the model and suggest that sensitivity of SIB to disruption by alternative reinforcement is an important dimension of automatically reinforced SIB. Findings also suggest that automatically reinforced SIB should no longer be considered a single category and that additional research is needed to better understand and treat Subtype-2 SIB. PMID:28032344
García-Isla, Guadalupe; Olivares, Andy Luis; Silva, Etelvino; Nuñez-Garcia, Marta; Butakoff, Constantine; Sanchez-Quintana, Damian; G Morales, Hernán; Freixa, Xavier; Noailly, Jérôme; De Potter, Tom; Camara, Oscar
2018-05-08
The left atrial appendage (LAA) is a complex and heterogeneous protruding structure of the left atrium (LA). In atrial fibrillation patients, it is the location where 90% of the thrombi are formed. However, the role of the LAA in thrombus formation is not fully known yet. The main goal of this work is to perform a sensitivity analysis to identify the most relevant LA and LAA morphological parameters in atrial blood flow dynamics. Simulations were run on synthetic ellipsoidal left atria models where different parameters were individually studied: pulmonary veins and mitral valve dimensions; LAA shape; and LA volume. Our computational analysis confirmed the relation between large LAA ostia, low blood flow velocities and thrombus formation. Additionally, we found that pulmonary vein configuration exerted a critical influence on LAA blood flow patterns. These findings contribute to a better understanding of the LAA and to support clinical decisions for atrial fibrillation patients. Copyright © 2018 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Kaewunruen, Sakdirat; Remennikov, Alex M.
2006-11-01
The vibration of in situ concrete sleepers in a railway track structure is a major factor causing cracking of prestressed concrete sleepers and excessive railway track maintenance cost. Not only does the ballast interact with the sleepers, but the rail pads also affect their free vibration characteristics. This paper presents a sensitivity analysis of the free vibration behavior of an in situ railway concrete sleeper (standard gauge sleeper), incorporating sleeper/ballast interaction, subject to variations in rail pad properties. In the finite element analysis, Timoshenko-beam and spring elements were used to model the in situ railway concrete sleeper. The model highlights the influence of rail pad parameters on the free vibration characteristics of in situ sleepers. In addition, information on the first five flexural vibration modes indicates the dynamic performance of railway track when using different types of rail pads, which plays a vital role in the cracking deterioration of concrete sleepers.
Study of effects of fuel properties in turbine-powered business aircraft
NASA Technical Reports Server (NTRS)
Powell, F. D.; Biegen, R. J.; Weitz, P. G., Jr.; Duke, A. M.
1984-01-01
Increased interest in research and technology concerning aviation turbine fuels and their properties was prompted by recent changes in the supply and demand situation for these fuels. The most obvious change is the rapid increase in fuel price. For commercial airplanes, fuel costs now approach 50 percent of direct operating costs. In addition, there have been occasional local supply disruptions and gradual shifts in the delivered values of certain fuel properties. Dwindling petroleum reserves and the politically sensitive nature of the major world suppliers make the continuation of these trends likely. A summary of the principal findings and conclusions is presented. Much of the material, especially the tables and graphs, is considered in greater detail later. The economic analysis and examination of operational considerations are described. Because some of the assumptions on which the economic analysis is founded are not easily verified, the sensitivity of the analysis to alternatives for these assumptions is examined. The data base on which the analyses are founded is defined in a set of appendices.
Cheng, Ji; Gao, Jinbo; Shuai, Xiaoming; Wang, Guobin; Tao, Kaixiong
2016-06-28
Bariatric surgery has emerged as a competitive strategy for obese patients. However, its comparative efficacy against non-surgical treatments remains ill-defined, especially among non-severely obese populations. We therefore performed a systematic review and meta-analysis as an addition to the current literature. Literature was retrieved from the PubMed, Web of Science, EMBASE and Cochrane Library databases. Randomized trials comparing surgical with non-surgical therapies for obesity were included. A revised Jadad scale and a risk-of-bias summary were employed for methodological assessment. Subgroup analysis, sensitivity analysis and publication bias assessment were performed, respectively, to find the source of heterogeneity, test the stability of the outcomes and detect potential publication bias. Twenty-five randomized trials were eligible, comprising 1194 participants in total. The groups were well comparable on baseline parameters (P > 0.05). The pooled results for the primary endpoints (weight loss and diabetic remission) revealed a significant advantage for surgical patients over those receiving non-surgical treatments (P < 0.05). Furthermore, except for certain cardiovascular indicators, bariatric surgery was superior to the conventional arms on the secondary metabolic parameters (P < 0.05). Additionally, sensitivity analysis confirmed the pooled outcomes to be stable. Although Egger's test (P < 0.01) and Begg's test (P < 0.05) reported the presence of publication bias among the included studies, the trim-and-fill method verified that the pooled outcomes remained stable. Bariatric surgery is a better therapeutic option for weight loss, irrespective of follow-up duration, surgical technique and obesity level.
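The pooling step behind results like these is typically an inverse-variance weighted average of the per-trial estimates. A minimal fixed-effect sketch (the authors' actual software and any random-effects refinements are not reproduced here):

```python
import math

def pool_fixed_effect(estimates, std_errors):
    """Fixed-effect inverse-variance meta-analysis: each trial is weighted
    by 1/SE^2; returns the pooled estimate and its standard error."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se
```

Sensitivity analysis in this setting typically repeats the pooling while dropping one trial at a time and checks that the pooled estimate does not move materially.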
Grisham, Rachel N.; Sylvester, Brooke E.; Won, Helen; McDermott, Gregory; DeLair, Deborah; Ramirez, Ricardo; Yao, Zhan; Shen, Ronglai; Dao, Fanny; Bogomolniy, Faina; Makker, Vicky; Sala, Evis; Soumerai, Tara E.; Hyman, David M.; Socci, Nicholas D.; Viale, Agnes; Gershenson, David M.; Farley, John; Levine, Douglas A.; Rosen, Neal; Berger, Michael F.; Spriggs, David R.; Aghajanian, Carol A.; Solit, David B.; Iyer, Gopa
2015-01-01
Purpose No effective systemic therapy exists for patients with metastatic low-grade serous (LGS) ovarian cancers. BRAF and KRAS mutations are common in serous borderline (SB) and LGS ovarian cancers, and MEK inhibition has been shown to induce tumor regression in a minority of patients; however, no correlation has been observed between mutation status and clinical response. With the goal of identifying biomarkers of sensitivity to MEK inhibitor treatment, we performed an outlier analysis of a patient who experienced a complete, durable, and ongoing (> 5 years) response to selumetinib, a non-ATP competitive MEK inhibitor. Patients and Methods Next-generation sequencing was used to analyze this patient's tumor as well as an additional 28 SB/LGS tumors. Functional characterization of an identified novel alteration of interest was performed. Results Analysis of the extraordinary responder's tumor identified a 15-nucleotide deletion in the negative regulatory helix of the MAP2K1 gene encoding for MEK1. Functional characterization demonstrated that this mutant induced extracellular signal-regulated kinase pathway activation, promoted anchorage-independent growth and tumor formation in mice, and retained sensitivity to selumetinib. Analysis of additional LGS/SB tumors identified mutations predicted to induce extracellular signal-regulated kinase pathway activation in 82% (23 of 28), including two patients with BRAF fusions, one of whom achieved an ongoing complete response to MEK inhibitor–based combination therapy. Conclusion Alterations affecting the mitogen-activated protein kinase pathway are present in the majority of patients with LGS ovarian cancer. Next-generation sequencing analysis revealed deletions and fusions that are not detected by older sequencing approaches. 
These findings, coupled with the observation that a subset of patients with recurrent LGS ovarian cancer experienced dramatic and durable responses to MEK inhibitor therapy, support additional clinical studies of MEK inhibitors in this disease. PMID:26324360
Bignardi, Chiara; Cavazza, Antonella; Laganà, Carmen; Salvadeo, Paola; Corradini, Claudio
2018-01-01
The interest towards "substances of emerging concerns" referred to objects intended to come into contact with food is recently growing. Such substances can be found in traces in simulants and in food products put in contact with plastic materials. In this context, it is important to set up analytical systems characterized by high sensitivity and to improve detection parameters to enhance signals. This work was aimed at optimizing a method based on UHPLC coupled to high resolution mass spectrometry to quantify the most common plastic additives, and able to detect the presence of polymers degradation products and coloring agents migrating from plastic re-usable containers. The optimization of mass spectrometric parameter settings for quantitative analysis of additives has been achieved by a chemometric approach, using a full factorial and d-optimal experimental designs, allowing to evaluate possible interactions between the investigated parameters. Results showed that the optimized method was characterized by improved features in terms of sensitivity respect to existing methods and was successfully applied to the analysis of a complex model food system such as chocolate put in contact with 14 polycarbonate tableware samples. A new procedure for sample pre-treatment was carried out and validated, showing high reliability. Results reported, for the first time, the presence of several molecules migrating to chocolate, in particular belonging to plastic additives, such Cyasorb UV5411, Tinuvin 234, Uvitex OB, and oligomers, whose amount was found to be correlated to age and degree of damage of the containers. Copyright © 2017 John Wiley & Sons, Ltd.
Robust and Sensitive Analysis of Mouse Knockout Phenotypes
Karp, Natasha A.; Melvin, David; Mott, Richard F.
2012-01-01
A significant challenge of in-vivo studies is the identification of phenotypes with a method that is robust and reliable. The challenge arises from practical issues that lead to experimental designs which are not ideal. Breeding issues, particularly in the presence of fertility or fecundity problems, frequently lead to data being collected in multiple batches. This problem is acute in high throughput phenotyping programs. In addition, in a high throughput environment operational issues lead to controls not being measured on the same day as knockouts. We highlight how application of traditional methods, such as a Student’s t-test or a 2-way ANOVA, in these situations gives flawed results and should be avoided. We explore the use of mixed models using worked examples from the Sanger Mouse Genome Project, focusing on Dual-Energy X-Ray Absorptiometry data for the analysis of mouse knockout data, and compare to a reference range approach. We show that mixed model analysis is more sensitive and less prone to artefacts, allowing the discovery of subtle quantitative phenotypes essential for correlating a gene’s function to human disease. We demonstrate how a mixed model approach has the additional advantage of being able to include covariates, such as body weight, to separate the effect of genotype from these covariates. This is a particular issue in knockout studies, where body weight is a common phenotype; accounting for it will enhance the precision of assigning phenotypes and the subsequent selection of lines for secondary phenotyping. The use of mixed models with in-vivo studies has value not only in improving the quality and sensitivity of the data analysis but also ethically, as a method suitable for small batches which reduces the breeding burden of a colony. This will reduce the use of animals, increase throughput, and decrease cost whilst improving the quality and depth of knowledge gained. PMID:23300663
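The batch problem described above is easy to reproduce in simulation: when controls and knockouts are measured in different batches, batch-to-batch drift inflates the spread of the naive group difference far beyond what measurement noise alone predicts, which is what a mixed model with a random batch intercept corrects. All numbers below are hypothetical:

```python
import random
import statistics

def one_study(rng, n_batches=20, n_per=5, batch_sd=1.0, noise_sd=0.2):
    """One phenotyping 'study' with NO true genotype effect: controls run
    in the first half of the batches, knockouts in the second half, and
    every batch carries a shared random shift (day/operator drift)."""
    control, knockout = [], []
    for b in range(n_batches):
        shift = rng.gauss(0.0, batch_sd)
        group = control if b < n_batches // 2 else knockout
        group.extend(10.0 + shift + rng.gauss(0.0, noise_sd)
                     for _ in range(n_per))
    return statistics.mean(knockout) - statistics.mean(control)

def spread_of_null_differences(n_rep=300, seed=7):
    """Empirical spread of the naive knockout-minus-control difference
    across replicate null studies."""
    rng = random.Random(seed)
    return statistics.stdev(one_study(rng) for _ in range(n_rep))
```

With the settings above, noise alone would give a spread of about 0.04, yet the batch drift pushes the null differences an order of magnitude wider, so a naive t-test would declare spurious genotype effects at a high rate.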
Application of uncertainty and sensitivity analysis to the air quality SHERPA modelling tool
NASA Astrophysics Data System (ADS)
Pisoni, E.; Albrecht, D.; Mara, T. A.; Rosati, R.; Tarantola, S.; Thunis, P.
2018-06-01
Air quality has significantly improved in Europe over the past few decades. Nonetheless, high concentrations are still measured, mainly in specific regions or cities. This dimensional shift, from EU-wide to hot-spot exceedances, calls for a novel approach to regional air quality management (to complement existing EU-wide policies). The SHERPA (Screening for High Emission Reduction Potentials on Air quality) modelling tool was developed in this context. It provides an additional tool to be used in support of regional/local decision makers responsible for the design of air quality plans. It is therefore important to evaluate the quality of the SHERPA model and its behaviour in the face of various kinds of uncertainty. Uncertainty and sensitivity analysis techniques can be used for this purpose. They both reveal the links between assumptions and forecasts, help in model simplification, and may highlight unexpected relationships between inputs and outputs. Thus, a policy-steered SHERPA module - predicting air quality improvement linked to emission reduction scenarios - was evaluated by means of (1) uncertainty analysis (UA) to quantify uncertainty in the model output, and (2) sensitivity analysis (SA) to identify the most influential input sources of this uncertainty. The results of this study provide relevant information about the key variables driving the SHERPA output uncertainty, and advise policy-makers and modellers where to place their efforts for an improved decision-making process.
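The SA step, identifying the most influential input sources of output uncertainty, can be illustrated with a brute-force first-order variance-based index on a toy model. This is a generic double-loop estimator with an invented model, not SHERPA's actual SA method:

```python
import random
import statistics

def first_order_index(model, which, n_outer=300, n_inner=300, seed=0):
    """Double-loop estimate of the first-order variance-based index
    S_i = Var(E[Y | X_i]) / Var(Y) for a 2-input model with U(0,1) inputs.
    'which' selects the input held fixed in the outer loop (0 or 1)."""
    rng = random.Random(seed)
    cond_means, all_y = [], []
    for _ in range(n_outer):
        fixed = rng.random()
        ys = []
        for _ in range(n_inner):
            other = rng.random()
            x = (fixed, other) if which == 0 else (other, fixed)
            ys.append(model(*x))
        cond_means.append(statistics.mean(ys))
        all_y.extend(ys)
    return statistics.variance(cond_means) / statistics.variance(all_y)

def toy_model(x1, x2):
    # Invented linear response; x2 contributes 4x the variance of x1.
    return x1 + 2.0 * x2
```

For `toy_model` the exact indices are 0.2 and 0.8 (variances 1/12 and 4/12 out of a total 5/12), so the estimator should rank x2 as the dominant input, which is exactly the kind of ranking an SA of SHERPA aims to deliver.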
Comparison of the efficiency control of mycotoxins by some optical immune biosensors
NASA Astrophysics Data System (ADS)
Slyshyk, N. F.; Starodub, N. F.
2013-11-01
The efficiency of patulin control was compared for optical biosensors based on surface plasmon resonance (SPR) and on nanoporous silicon (nPS). In the latter case, the intensity of the immune reaction was registered by measuring the level of chemiluminescence (ChL) or the photocurrent of the nPS. The sensitivity of mycotoxin determination with the first type of immune biosensor was 0.05-10 mg/L. Approximately the same sensitivity, as well as overall analysis time, was demonstrated by the immune biosensor based on nPS. Nevertheless, the latter type of biosensor was technically simpler and the analysis was cheaper. The nPS-based immune biosensor is therefore recommended for wide screening applications, and the SPR-based one for additional control or verification of preliminary results. In this article, special attention was given to the conditions of sample preparation for analysis, in particular mycotoxin extraction from potato and some juices. Moreover, the efficiency of the above-mentioned immune biosensors was compared with that of the traditional ELISA method of mycotoxin determination. From the investigation and discussion of the obtained data, it was concluded that both types of immune biosensor can meet modern practical demands with respect to the sensitivity, rapidity, simplicity and cheapness of analysis.
Finite Element Model Calibration Approach for Ares I-X
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Buehrle, Ralph D.; Templeton, Justin D.; Gaspar, James L.; Lazor, Daniel R.; Parks, Russell A.; Bartolotta, Paul A.
2010-01-01
Ares I-X is a pathfinder vehicle concept under development by NASA to demonstrate a new class of launch vehicles. Although this vehicle is essentially a shell of what the Ares I vehicle will be, efforts are underway to model and calibrate the analytical models before its maiden flight. Work reported in this document will summarize the model calibration approach used including uncertainty quantification of vehicle responses and the use of non-conventional boundary conditions during component testing. Since finite element modeling is the primary modeling tool, the calibration process uses these models, often developed by different groups, to assess model deficiencies and to update parameters to reconcile test with predictions. Data for two major component tests and the flight vehicle are presented along with the calibration results. For calibration, sensitivity analysis is conducted using Analysis of Variance (ANOVA). To reduce the computational burden associated with ANOVA calculations, response surface models are used in lieu of computationally intensive finite element solutions. From the sensitivity studies, parameter importance is assessed as a function of frequency. In addition, the work presents an approach to evaluate the probability that a parameter set exists to reconcile test with analysis. Comparisons of pretest predictions of frequency response uncertainty bounds with measured data, results from the variance-based sensitivity analysis, and results from component test models with calibrated boundary stiffness models are all presented.
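The use of response surface models in lieu of computationally intensive finite element solutions can be sketched with the simplest case: fit a quadratic surrogate to a handful of expensive-model runs by least squares, then interrogate the surrogate for the ANOVA sampling. One input variable only, pure-stdlib normal equations; an illustration of the idea, not the calibration team's actual surrogate:

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y ~ c0 + c1*x + c2*x^2 via the 3x3
    normal equations, solved by Gaussian elimination with pivoting."""
    S = [sum(x ** k for x in xs) for k in range(5)]       # power sums
    A = [[S[i + j] for j in range(3)] for i in range(3)]  # normal matrix
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(3)]
    for col in range(3):                                  # elimination
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0, 0.0, 0.0]                              # back-substitute
    for i in (2, 1, 0):
        coeffs[i] = (b[i] - sum(A[i][j] * coeffs[j]
                                for j in range(i + 1, 3))) / A[i][i]
    return coeffs  # surrogate: y ~ c0 + c1*x + c2*x**2
```

Once fitted from a few finite element runs, the polynomial is cheap enough to evaluate thousands of times inside the variance-based sensitivity calculations.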
Ghanegolmohammadi, Farzan; Yoshida, Mitsunori; Ohnuki, Shinsuke; Sukegawa, Yuko; Okada, Hiroki; Obara, Keisuke; Kihara, Akio; Suzuki, Kuninori; Kojima, Tetsuya; Yachie, Nozomu; Hirata, Dai; Ohya, Yoshikazu
2017-01-01
We investigated the global landscape of Ca2+ homeostasis in budding yeast based on high-dimensional chemical-genetic interaction profiles. The morphological responses of 62 Ca2+-sensitive (cls) mutants were quantitatively analyzed with the image processing program CalMorph after exposure to a high concentration of Ca2+. After a generalized linear model was applied, an analysis of covariance model was used to detect significant Ca2+–cls interactions. We found that high-dimensional, morphological Ca2+–cls interactions were mixed with positive (86%) and negative (14%) chemical-genetic interactions, whereas one-dimensional fitness Ca2+–cls interactions were all negative in principle. Clustering analysis with the interaction profiles revealed nine distinct gene groups, six of which were functionally associated. In addition, characterization of Ca2+–cls interactions revealed that morphology-based negative interactions are unique signatures of sensitized cellular processes and pathways. Principal component analysis was used to discriminate between suppression and enhancement of the Ca2+-sensitive phenotypes triggered by inactivation of calcineurin, a Ca2+-dependent phosphatase. Finally, similarity of the interaction profiles was used to reveal a connected network among the Ca2+ homeostasis units acting in different cellular compartments. Our analyses of high-dimensional chemical-genetic interaction profiles provide novel insights into the intracellular network of yeast Ca2+ homeostasis. PMID:28566553
Cost-Effectiveness Analysis of Three Leprosy Case Detection Methods in Northern Nigeria
Ezenduka, Charles; Post, Erik; John, Steven; Suraj, Abdulkarim; Namadi, Abdulahi; Onwujekwe, Obinna
2012-01-01
Background Despite several leprosy control measures in Nigeria, the proportions of child cases and of disability grade 2 cases remain high, while new cases have not significantly reduced, suggesting continuous spread of the disease. Hence, there is a need to review detection methods to enhance identification of early cases for effective control and prevention of permanent disability. This study evaluated the cost-effectiveness of three leprosy case detection methods in Northern Nigeria to identify the most cost-effective approach for detection of leprosy. Methods A cross-sectional study was carried out to evaluate the additional benefits of using several case detection methods in addition to routine practice in two north-eastern states of Nigeria. Primary and secondary data were collected from routine practice records and the Nigerian Tuberculosis and Leprosy Control Programme of 2009. The methods evaluated were Rapid Village Survey (RVS), Household Contact Examination (HCE) and the Traditional Healers incentive method (TH). Effectiveness was measured as the number of new leprosy cases detected, and cost-effectiveness was expressed as cost per case detected. Costs were measured from both providers' and patients' perspectives. Additional costs and effects of each method were estimated by comparing each method against routine practice and expressed as an incremental cost-effectiveness ratio (ICER). All costs were converted to U.S. dollars at the 2010 exchange rate. Univariate sensitivity analysis was used to evaluate uncertainties around the ICER. Results The ICER for HCE was $142 per additional case detected at all contact levels, and it was the most cost-effective method. At an ICER of $194 per additional case detected, the TH method detected more cases at a lower cost than the RVS, which was not cost-effective at $313 per additional case detected.
Sensitivity analysis showed that varying the proportion of shared costs and subsistent wage for valuing unpaid time did not significantly change the results. Conclusion Complementing routine practice with household contact examination is the most cost-effective approach to identify new leprosy cases and we recommend that, depending on acceptability and feasibility, this intervention is introduced for improved case detection in Northern Nigeria. PMID:23029580
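The ICER used throughout the study is a simple ratio of cost and effect increments against the comparator (routine practice). A minimal sketch with invented numbers:

```python
def icer(cost_new, effect_new, cost_base, effect_base):
    """Incremental cost-effectiveness ratio: extra cost per additional
    unit of effect (here, per additional leprosy case detected)."""
    d_effect = effect_new - effect_base
    if d_effect <= 0:
        raise ValueError("new method detects no additional cases")
    return (cost_new - cost_base) / d_effect
```

Univariate sensitivity analysis, as in the study, re-evaluates this ratio while varying one cost assumption at a time (e.g., the share of joint costs or the wage used to value unpaid time).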
Space station integrated wall design and penetration damage control
NASA Technical Reports Server (NTRS)
Coronado, A. R.; Gibbins, M. N.; Wright, M. A.; Stern, P. H.
1987-01-01
The analysis code BUMPER executes a numerical solution to the problem of calculating the probability of no penetration (PNP) of a spacecraft subject to man-made orbital debris or meteoroid impact. The code was developed on a DEC VAX 11/780 computer running the Virtual Memory System (VMS) operating system, and is written in FORTRAN 77 with no VAX extensions. To help illustrate the steps involved, a single sample analysis is performed. The example used is the space station reference configuration. The finite element model (FEM) of this configuration is relatively complex but demonstrates many BUMPER features. The computer tools and guidelines are described for constructing a FEM for the space station under consideration. The methods used to analyze the sensitivity of PNP to variations in design are described. Ways are suggested for developing contour plots of the sensitivity study data. Additional BUMPER analysis examples are provided, including FEMs, command inputs, and data outputs. The mathematical theory used as the basis for the code is described, and the data flow within the analysis is illustrated.
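Under the usual Poisson assumption for meteoroid/debris impacts, the probability of no penetration over the finite elements reduces to an exponential of the expected number of penetrating hits. This is a generic sketch of that relation, not BUMPER's algorithm (which also handles geometry, shadowing, and ballistic limit equations):

```python
import math

def pnp(elements, years):
    """Probability of no penetration under a Poisson impact model:
    PNP = exp(-sum_i(area_i * flux_i) * t), where each element is an
    (area in m^2, penetrating flux in impacts/m^2/year) pair."""
    expected_hits = sum(area * flux for area, flux in elements) * years
    return math.exp(-expected_hits)
```

A design-sensitivity study in this framing amounts to recomputing PNP while perturbing the per-element penetrating fluxes (e.g., via wall thickness changes that shift the ballistic limit).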
Palmer, N. D.; Langefeld, C. D.; Ziegler, J. T.; Hsu, F.; Haffner, S. M.; Fingerlin, T.; Norris, J. M.; Chen, Y. I.; Rich, S. S.; Haritunians, T.; Taylor, K. D.; Bergman, R. N.; Rotter, J. I.; Bowden, D. W.
2009-01-01
Aims/Hypothesis: The majority of type 2 diabetes genome-wide association studies (GWAS) to date have been performed in European-derived populations and have identified few variants that mediate their effect through insulin resistance. The aim of this study was to evaluate two quantitative, directly assessed measures of insulin resistance (SI and DI) in Hispanic Americans using an agnostic, high-density SNP scan and to validate these findings in additional samples. Methods: A two-stage GWAS was performed in IRAS-FS Hispanic-American samples. In Stage 1, 317K single nucleotide polymorphisms (SNPs) were assessed in 229 DNA samples. SNPs with evidence of association with glucose homeostasis and adiposity traits were then genotyped in the entire set of Hispanic-American samples (n = 1190). This report focuses on the glucose homeostasis traits: insulin sensitivity index (SI) and disposition index (DI). Results: Although evidence of association did not reach genome-wide significance (P = 5×10−7), in the combined analysis SNPs had admixture-adjusted P_ADD = 0.00010-0.0020, with 8-41% differences in genotypic means for SI and DI. Conclusions/Interpretation: Several candidate loci have been identified that are nominally associated with SI and/or DI in Hispanic Americans. Replication of these findings in independent cohorts and additional focused analysis of these loci are warranted. PMID:19902172
Quantitative and sensitive analysis of CN molecules using laser induced low pressure He plasma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pardede, Marincan; Hedwig, Rinda; Abdulmadjid, Syahrun Nur
2015-03-21
We report the results of an experimental study on CN 388.3 nm and C I 247.8 nm emission characteristics using 40 mJ laser irradiation with He and N₂ ambient gases. The results obtained with N₂ ambient gas show an undesirable interference effect between the native CN emission and the emission of CN molecules arising from the recombination of native C ablated from the sample with the N dissociated from the ambient gas. This problem is overcome by the use of He ambient gas at a low pressure of 2 kPa, which also offers the additional advantages of cleaner and stronger emission lines. The result of applying this favorable experimental condition to emission spectrochemical measurement of milk samples having various protein concentrations is shown to yield a close to linear calibration curve with a near-zero extrapolated intercept. Additionally, a low detection limit of 5 μg/g is found in this experiment, making it potentially applicable for quantitative and sensitive CN analysis. The viability of laser induced breakdown spectroscopy with low pressure He gas is also demonstrated by the result of its application to spectrochemical analysis of fossil samples. Furthermore, with the use of CO₂ ambient gas at 600 Pa mimicking the Mars atmosphere, this technique also shows promising applications to exploration on Mars.
Jebaseelan, D Davidson; Jebaraj, C; Yoganandan, Narayan; Rajasekaran, S; Kanna, Rishi M
2012-05-01
The objective of the study was to determine the sensitivity of material properties of the juvenile spine to its external and internal responses using a finite element model under compression, and flexion-extension bending moments. The methodology included exercising the 8-year-old juvenile lumbar spine using parametric procedures. The model included the vertebral centrum, growth plates, laminae, pedicles, transverse processes and spinous processes; disc annulus and nucleus; and various ligaments. The sensitivity analysis was conducted by varying the modulus of elasticity for various components. The first simulation was done using mean material properties. Additional simulations were done for each component corresponding to low and high material property variations. External displacement/rotation and internal stress-strain responses were determined under compression and flexion-extension bending. Results indicated that, under compression, disc properties were more sensitive than bone properties, implying an elevated role of the disc under this mode. Under flexion-extension moments, ligament properties were more dominant than the other components, suggesting that various ligaments of the juvenile spine play a key role in modulating bending behaviors. Changes in the growth plate stress associated with ligament properties explained the importance of the growth plate in the pediatric spine with potential implications in progressive deformities.
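The one-at-a-time parametric procedure described above (simulate with mean properties, then repeat with each component's modulus set to its low and high values and compare responses) can be sketched as follows; the surrogate response function and all property values are illustrative stand-ins, not the paper's finite element model or data:

```python
# One-at-a-time sensitivity sweep, with a toy surrogate in place of an FE solve.
# All moduli and the response form are hypothetical, for illustration only.

def toy_response(disc_E, bone_E, ligament_E):
    """Stand-in for an FE simulation returning a scalar displacement (mm)."""
    # Compression-like behavior: a softer disc yields a larger displacement.
    return 100.0 / disc_E + 10.0 / bone_E + 1.0 / ligament_E

# Mean moduli (MPa) with +/- 25% low/high variations, purely illustrative.
params = {"disc_E": 8.0, "bone_E": 120.0, "ligament_E": 20.0}
variation = 0.25

baseline = toy_response(**params)
sensitivity = {}
for name in params:
    lo = dict(params); lo[name] = params[name] * (1 - variation)
    hi = dict(params); hi[name] = params[name] * (1 + variation)
    # Normalized effect: response change over the low-to-high sweep.
    sensitivity[name] = abs(toy_response(**hi) - toy_response(**lo)) / baseline

ranked = sorted(sensitivity, key=sensitivity.get, reverse=True)
print(ranked)  # most to least influential component for this surrogate
```

With this surrogate the disc modulus dominates the compression response, mirroring the qualitative finding reported above; the real ranking, of course, comes from the FE simulations.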
Wang, Qian; Subramanian, Palaniappan; Schechter, Alex; Teblum, Eti; Yemini, Reut; Nessim, Gilbert Daniel; Vasilescu, Alina; Li, Musen; Boukherroub, Rabah; Szunerits, Sabine
2016-04-20
The number of patients suffering from inflammatory bowel disease (IBD) is increasing worldwide. The development of noninvasive tests that are rapid, sensitive, specific, and simple would prevent patient discomfort and delays in diagnosis, and would allow follow-up of the status of the disease. Herein, we demonstrate the utility of vertically aligned nitrogen-doped carbon nanotube (VA-NCNT) electrodes for the sensitive electrochemical detection of lysozyme in serum, a protein that is up-regulated in IBD. To achieve selective lysozyme detection, biotinylated lysozyme aptamers were covalently immobilized onto the VA-NCNTs. Detection of lysozyme in serum was achieved by measuring the decrease in the peak current of the Fe(CN)6(3-/4-) redox couple by differential pulse voltammetry upon addition of the analyte. We achieved a detection limit as low as 100 fM with a linear range up to 7 pM, in line with the demands for the determination of lysozyme levels in patients suffering from IBD. We attained the sensitive detection of biomarkers in clinical samples of healthy patients and individuals suffering from IBD and compared the results to a classical turbidimetric assay. The results clearly indicate that the newly developed sensor allows for a reliable and efficient analysis of lysozyme in serum.
System implications of aperture-shade design for the SIRTF Observatory
NASA Technical Reports Server (NTRS)
Lee, J. H.; Brooks, W. F.; Maa, S.
1987-01-01
The 1-m-aperture Space Infrared Telescope Facility (SIRTF) will operate with a sensitivity limited only by the zodiacal background. This sensitivity requirement places severe restrictions on the amount of stray light which can reach the focal plane from off-axis sources such as the sun or earth limb. In addition, radiation from these sources can degrade the lifetime of the telescope and instrument cryogenic system, which is now planned for two years before the first servicing. Since the aperture of the telescope represents a break in the telescope insulation system and is effectively the first element in the optical train, the aperture shade is a key system component. The mass, length, and temperature of the shade should be minimized to reduce system cost while maximizing the telescope lifetime and stray light performance. The independent geometric parameters that characterize an asymmetrical shade for a 600 km, 28 deg orbit were identified, and the system sensitivity to the three most important shade parameters was explored. Despite the higher heat loads compared to previously studied polar orbit missions, the analysis determined that passive radiators of a reasonable size are sufficient to meet the system requirements. An optimized design for the SIRTF mission, based on the sensitivity analysis, is proposed.
Sensitivity Analysis Applied to Atomic Data Used for X-ray Spectrum Synthesis
NASA Technical Reports Server (NTRS)
Kallman, Tim
2006-01-01
A great deal of work has been devoted to the accumulation of accurate quantities describing atomic processes for use in analysis of astrophysical spectra. But in many situations of interest the interpretation of an observed quantity, such as a line flux, depends on the results of a modeling or spectrum synthesis code. The results of such a code depend in turn on many atomic rates or cross sections, and the sensitivity of the observable quantity to the various rates and cross sections may be non-linear and, if so, cannot easily be derived analytically. In such cases the most practical approach to understanding the sensitivity of observables to atomic cross sections is to perform numerical experiments, by calculating models with various rates perturbed by random (but known) factors. In addition, it is useful to compare the results of such experiments with some sample observations, in order to focus attention on the rates which are of the greatest relevance to real observations. In this paper I will present some attempts to carry out this program, focussing on two sample datasets taken with the Chandra HETG. I will discuss the sensitivity of synthetic spectra to atomic data affecting ionization balance, temperature, and line opacity or emissivity, and discuss the implications for the ultimate goal of inferring astrophysical parameters.
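The numerical-experiment approach described here (perturb each rate by a known random factor, recompute the observable, and inspect the resulting change) can be sketched in a few lines. The two-level ionization-balance model below is a hypothetical stand-in for a full spectrum-synthesis code, and all rates are invented for illustration:

```python
# Perturbation experiment sketch: vary one "atomic rate" at a time by a known
# random factor and record the fractional change in a toy observable.
import random

def line_flux(ionization_rate, recombination_rate, emissivity):
    """Toy observable: line flux from a two-level ionization balance."""
    ion_fraction = ionization_rate / (ionization_rate + recombination_rate)
    return ion_fraction * emissivity

base = {"ionization_rate": 2.0, "recombination_rate": 1.0, "emissivity": 5.0}
rng = random.Random(42)

f0 = line_flux(**base)
for name in base:
    factor = 1.0 + rng.uniform(-0.3, 0.3)      # known random perturbation
    trial = dict(base); trial[name] = base[name] * factor
    df = (line_flux(**trial) - f0) / f0
    print(f"{name}: factor={factor:.3f}, flux change={df:+.3f}")
```

Note that the flux responds linearly to the emissivity but non-linearly to the two rates, which is exactly the situation where such numerical experiments are more practical than analytic derivatives.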
Financial analysis of technology acquisition using fractionated lasers as a model.
Jutkowitz, Eric; Carniol, Paul J; Carniol, Alan R
2010-08-01
Ablative fractional lasers are among the most advanced and costly devices on the market. Yet, there is a dearth of published literature on the cost and potential return on investment (ROI) of such devices. The objective of this study was to provide a methodological framework for physicians to evaluate ROI. To facilitate this analysis, we conducted a case study on the potential ROI of eight ablative fractional lasers. In the base case analysis, a 5-year lease and a 3-year lease were assumed as the purchase options with a $0 down payment and 3-month payment deferral. In addition to lease payments, service contracts, labor cost, and disposables were included in the total cost estimate. Revenue was estimated as price per procedure multiplied by total number of procedures in a year. Sensitivity analyses were performed to account for variability in model assumptions. Based on the assumptions of the model, all lasers had higher ROI under the 5-year lease agreement compared with that for the 3-year lease agreement. When comparing results between lasers, those with lower operating and purchase cost delivered a higher ROI. Sensitivity analysis indicates the model is most sensitive to purchase method. If physicians opt to purchase the device rather than lease, they can significantly enhance ROI. ROI analysis is an important tool for physicians who are considering making an expensive device acquisition. However, physicians should not rely solely on ROI and must also consider the clinical benefits of a laser. (c) Thieme Medical Publishers.
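The cost and revenue structure described above (revenue as price per procedure times annual procedure count; cost as lease payments plus service contract, labor, and disposables) can be sketched as a small ROI calculation. All dollar figures below are hypothetical, not the case study's data:

```python
# Annual ROI sketch following the model structure in the abstract.
# Every number here is invented for illustration.

def annual_roi(price_per_procedure, procedures_per_year,
               annual_lease, service_contract, labor, disposables_per_procedure):
    revenue = price_per_procedure * procedures_per_year
    cost = (annual_lease + service_contract + labor
            + disposables_per_procedure * procedures_per_year)
    return (revenue - cost) / cost

# A 5-year lease spreads the purchase over more payments than a 3-year lease,
# lowering the annual lease cost; as the paper found, this raises ROI.
roi_5yr = annual_roi(1000, 200, annual_lease=30000, service_contract=8000,
                     labor=40000, disposables_per_procedure=25)
roi_3yr = annual_roi(1000, 200, annual_lease=50000, service_contract=8000,
                     labor=40000, disposables_per_procedure=25)
print(f"5-year lease ROI: {roi_5yr:.1%}, 3-year lease ROI: {roi_3yr:.1%}")
```

A sensitivity analysis in this framing is simply re-running `annual_roi` over a grid of assumptions (lease terms, procedure volumes) and observing which input moves the result most.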
A model-based analysis of decision making under risk in obsessive-compulsive and hoarding disorders.
Aranovich, Gabriel J; Cavagnaro, Daniel R; Pitt, Mark A; Myung, Jay I; Mathews, Carol A
2017-07-01
Attitudes towards risk are highly consequential in clinical disorders thought to be prone to "risky behavior", such as substance dependence, as well as those commonly associated with excessive risk aversion, such as obsessive-compulsive disorder (OCD) and hoarding disorder (HD). Moreover, it has recently been suggested that attitudes towards risk may serve as a behavioral biomarker for OCD. We investigated the risk preferences of participants with OCD and HD using a novel adaptive task and a quantitative model from behavioral economics that decomposes risk preferences into outcome sensitivity and probability sensitivity. Contrary to expectation, compared to healthy controls, participants with OCD and HD exhibited less outcome sensitivity, implying less risk aversion in the standard economic framework. In addition, risk attitudes were strongly correlated with depression, hoarding, and compulsion scores, while compulsion (hoarding) scores were associated with more (less) "rational" risk preferences. These results demonstrate how fundamental attitudes towards risk relate to specific psychopathology and thereby contribute to our understanding of the cognitive manifestations of mental disorders. In addition, our findings indicate that the conclusion made in recent work that decision making under risk is unaltered in OCD is premature. Copyright © 2017 Elsevier Ltd. All rights reserved.
Da Costa, Caitlyn; Reynolds, James C; Whitmarsh, Samuel; Lynch, Tom; Creaser, Colin S
2013-01-01
RATIONALE: Chemical additives are incorporated into commercial lubricant oils to modify the physical and chemical properties of the lubricant. The quantitative analysis of additives in oil-based lubricants deposited on a surface without extraction of the sample from the surface presents a challenge. The potential of desorption electrospray ionization mass spectrometry (DESI-MS) for the quantitative surface analysis of an oil additive in a complex oil lubricant matrix without sample extraction has been evaluated. METHODS: The quantitative surface analysis of the antioxidant additive octyl (4-hydroxy-3,5-di-tert-butylphenyl)propionate in an oil lubricant matrix was carried out by DESI-MS in the presence of 2-(pentyloxy)ethyl 3-(3,5-di-tert-butyl-4-hydroxyphenyl)propionate as an internal standard. A quadrupole/time-of-flight mass spectrometer fitted with an in-house modified ion source enabling non-proximal DESI-MS was used for the analyses. RESULTS: An eight-point calibration curve ranging from 1 to 80 µg/spot of octyl (4-hydroxy-3,5-di-tert-butylphenyl)propionate in an oil lubricant matrix and in the presence of the internal standard was used to determine the quantitative response of the DESI-MS method. The sensitivity and repeatability of the technique were assessed by conducting replicate analyses at each concentration. The limit of detection was determined to be 11 ng/mm² additive on spot with relative standard deviations in the range 3–14%. CONCLUSIONS: The application of DESI-MS to the direct, quantitative surface analysis of a commercial lubricant additive in a native oil lubricant matrix is demonstrated. © 2013 The Authors. Rapid Communications in Mass Spectrometry published by John Wiley & Sons, Ltd. PMID:24097398
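An internal-standard calibration of the kind described above (analyte/internal-standard response ratio regressed against amount per spot over an eight-point curve) can be sketched as follows. The data points are synthetic, and the 3.3·σ/slope detection-limit estimate is a common convention rather than necessarily the procedure used in the paper:

```python
# Internal-standard calibration sketch with a least-squares fit and an
# LOD estimate of 3.3*sigma/slope. All response data are synthetic.
import statistics

amounts = [1, 5, 10, 20, 40, 60, 70, 80]      # ug additive per spot (8 points)
ratios  = [0.021, 0.098, 0.205, 0.410, 0.790, 1.230, 1.410, 1.640]  # analyte/IS

n = len(amounts)
mx, my = statistics.mean(amounts), statistics.mean(ratios)
slope = sum((x - mx) * (y - my) for x, y in zip(amounts, ratios)) / \
        sum((x - mx) ** 2 for x in amounts)
intercept = my - slope * mx

# Residual standard deviation of the fit (n - 2 degrees of freedom).
residuals = [y - (slope * x + intercept) for x, y in zip(amounts, ratios)]
sigma = (sum(r * r for r in residuals) / (n - 2)) ** 0.5

lod = 3.3 * sigma / slope
print(f"slope={slope:.4f} ratio per ug, LOD ~ {lod:.2f} ug/spot")
```

Dividing the analyte response by the internal-standard response before fitting is what compensates for the spot-to-spot variability in desorption efficiency that makes raw DESI-MS signals non-quantitative.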
Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics.
Arampatzis, Georgios; Katsoulakis, Markos A; Rey-Bellet, Luc
2016-03-14
We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.
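A minimal sketch of a centered likelihood-ratio (score-function) sensitivity estimator follows, using a Poisson toy model in place of the stochastic dynamics treated in the paper; the score of Poisson(λ) with respect to λ is (x/λ − 1), and centering the observable by its sample mean is the variance-reduction step that "centered" refers to:

```python
# Centered likelihood-ratio estimator of d/d(lam) E[f(X)] for X ~ Poisson(lam).
import random, math

def centered_lr_sensitivity(f, lam, n, rng):
    def poisson(lam):
        # Knuth's inversion sampler (the stdlib has no Poisson generator).
        l, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= l:
                return k
            k += 1
    xs = [poisson(lam) for _ in range(n)]
    fs = [f(x) for x in xs]
    fbar = sum(fs) / n
    scores = [x / lam - 1.0 for x in xs]      # score function of Poisson(lam)
    # Centered LR estimator: sample covariance of f(X) with the score.
    return sum((fi - fbar) * s for fi, s in zip(fs, scores)) / n

rng = random.Random(7)
lam = 3.0
est = centered_lr_sensitivity(lambda x: x * x, lam, 200_000, rng)
print(est)  # analytic value: d/d(lam) of (lam + lam**2) = 1 + 2*lam = 7
```

Because the score has mean zero, subtracting `fbar` leaves the estimator (asymptotically) unbiased while removing the large constant component of the variance, which is what keeps the variance bounded over long time horizons in the paper's setting.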
Marzulli, F; Maguire, H C
1982-02-01
Several guinea-pig predictive test methods were evaluated by comparison of results with those obtained with human predictive tests, using ten compounds that have been used in cosmetics. The method involves the statistical analysis of the frequency with which guinea-pig tests agree with the findings of tests in humans. In addition, the frequencies of false positive and false negative predictive findings are considered and statistically analysed. The results clearly demonstrate the superiority of adjuvant tests (complete Freund's adjuvant) in determining skin sensitizers and the overall superiority of the guinea-pig maximization test in providing results similar to those obtained by human testing. A procedure is suggested for utilizing adjuvant and non-adjuvant test methods for characterizing compounds as of weak, moderate or strong sensitizing potential.
Tiffin, Nicki; Meintjes, Ayton; Ramesar, Rajkumar; Bajic, Vladimir B.; Rayner, Brian
2010-01-01
Multiple factors underlie susceptibility to essential hypertension, including a significant genetic and ethnic component, and environmental effects. Blood pressure response of hypertensive individuals to salt is heterogeneous, but salt sensitivity appears more prevalent in people of indigenous African origin. The underlying genetics of salt-sensitive hypertension, however, are poorly understood. In this study, computational methods including text- and data-mining have been used to select and prioritize candidate aetiological genes for salt-sensitive hypertension. Additionally, we have compared allele frequencies and copy number variation for single nucleotide polymorphisms in candidate genes between indigenous Southern African and Caucasian populations, with the aim of identifying candidate genes with significant variability between the population groups: identifying genetic variability between population groups can exploit ethnic differences in disease prevalence to aid with prioritisation of good candidate genes. Our top-ranking candidate genes include parathyroid hormone precursor (PTH) and type-1 angiotensin II receptor (AGTR1). We propose that the candidate genes identified in this study warrant further investigation as potential aetiological genes for salt-sensitive hypertension. PMID:20886000
Kiseleva, Irina; Larionova, Natalie; Fedorova, Ekaterina; Bazhenova, Ekaterina; Dubrovina, Irina; Isakova-Sivak, Irina; Rudenko, Larisa
2014-01-01
Live attenuated influenza vaccines (LAIV) represent reassortant viruses with hemagglutinin (HA) and neuraminidase (NA) gene segments inherited from circulating wild-type (WT) parental influenza viruses recommended for inclusion into the seasonal vaccine formulation, and the 6 internal protein-encoding gene segments from cold-adapted attenuated master donor viruses (genome composition 6:2). In this study, we describe the obstacles in developing LAIV strains while taking into account the phenotypic peculiarities of WT viruses used for reassortment. Genomic composition analysis of 849 seasonal LAIV reassortants revealed that over 80% of reassortants based on inhibitor-resistant WT viruses inherited WT NA, compared to 26% of LAIV reassortants based on inhibitor-sensitive WT viruses. In addition, the highest percentage of LAIV genotype reassortants was achieved when WT parental viruses were resistant to non-specific serum inhibitors. We demonstrate that NA may play a role in influenza virus sensitivity to non-specific serum inhibitors. Replacing NA of inhibitor-sensitive WT virus with the NA of inhibitor-resistant master donor virus significantly decreased the sensitivity of the resulting reassortant virus to serum heat-stable inhibitors. PMID:25132869
Characterization of Metal Powders Used for Additive Manufacturing.
Slotwinski, J A; Garboczi, E J; Stutzman, P E; Ferraris, C F; Watson, S S; Peltz, M A
2014-01-01
Additive manufacturing (AM) techniques can produce complex, high-value metal parts, with potential applications as critical parts, such as those found in aerospace components. The production of AM parts with consistent and predictable properties requires input materials (e.g., metal powders) with known and repeatable characteristics, which in turn requires standardized measurement methods for powder properties. First, based on our previous work, we assess the applicability of current standardized methods for powder characterization for metal AM powders. Then we present the results of systematic studies carried out on two different powder materials used for additive manufacturing: stainless steel and cobalt-chrome. The characterization of these powders is important in NIST efforts to develop appropriate measurements and standards for additive materials and to document the properties of powders used in a NIST-led additive manufacturing material round robin. An extensive array of characterization techniques was applied to these two powders, in both virgin and recycled states. The physical techniques included laser diffraction particle size analysis, X-ray computed tomography for size and shape analysis, and optical and scanning electron microscopy. Techniques sensitive to structure and chemistry, including X-ray diffraction, energy dispersive analytical X-ray analysis using the X-rays generated during scanning electron microscopy, and X-ray photoelectron spectroscopy were also employed. The results of these analyses show how virgin powder changes after being exposed to and recycled from one or more Direct Metal Laser Sintering (DMLS) additive manufacturing build cycles. In addition, these findings can give insight into the actual additive manufacturing process.
Savage, M K; Reed, D J
1994-11-15
Treatment of isolated mitochondria with calcium and inorganic phosphate induces inner membrane permeability that is thought to be mediated through a non-selective, calcium-dependent pore. The inner membrane permeability results in the rapid efflux of small matrix solutes such as glutathione and calcium, loss of coupled functions, and large amplitude swelling. We have identified conditions of permeability transition without large amplitude swelling, a parameter often used to assess inner membrane permeability. The addition of either oligomycin, antimycin, or sulfide to incubation buffer containing calcium and inorganic phosphate abolished large-amplitude swelling of mitochondria but did not prevent inner membrane permeability as demonstrated by the release of mitochondrial glutathione and calcium. The release of both glutathione and calcium was inhibited by the addition of cyclosporin A, a potent inhibitor of permeability transition. Transmission electron microscopy analysis, combined with the glutathione and calcium release data, indicate that permeability transition can be observed in the absence of large-amplitude swelling. Permeability transition occurring both with and without large-amplitude swelling was accompanied by a collapse of the membrane potential. We conclude that cyclosporin A-sensitive permeability transition can occur without obvious morphological changes such as large-amplitude swelling. Monitoring the cyclosporin A-sensitive release of concentrated endogenous matrix solutes, such as GSH, may be a sensitive and useful indicator of permeability transition.
Fujito, Yuka; Hayakawa, Yoshihiro; Izumi, Yoshihiro; Bamba, Takeshi
2017-07-28
Supercritical fluid chromatography/mass spectrometry (SFC/MS) has great potential in high-throughput and the simultaneous analysis of a wide variety of compounds, and it has been widely used in recent years. The use of MS for detection provides the advantages of high sensitivity and high selectivity. However, the sensitivity of MS detection depends on the chromatographic conditions and MS parameters. Thus, optimization of MS parameters corresponding to the SFC condition is mandatory for maximizing performance when connecting SFC to MS. The aim of this study was to reveal a way to decide the optimum composition of the mobile phase and the flow rate of the make-up solvent for MS detection in a wide range of compounds. Additionally, we also showed the basic concept for determination of the optimum values of the MS parameters focusing on the MS detection sensitivity in SFC/MS analysis. To verify the versatility of these findings, a total of 441 pesticides covering a wide range of polarity (logPow from -4.21 to 7.70) and pKa (acidic, neutral, and basic) were analyzed. In this study, a new SFC-MS interface was used, which can transfer the entire volume of eluate into the MS by directly coupling the SFC with the MS. This enabled us to compare the sensitivity and optimum MS parameters for MS detection between LC/MS and SFC/MS for the same sample volume introduced into the MS. As a result, it was found that the optimum values of some MS parameters were completely different from those of LC/MS, and that SFC/MS-specific optimization of the analytical conditions is required. Lastly, we evaluated the sensitivity of SFC/MS using fully optimized analytical conditions. As a result, we confirmed that SFC/MS showed much higher sensitivity than LC/MS when the analytical conditions were fully optimized for SFC/MS; the high sensitivity also increased the number of compounds that could be detected with good repeatability in real sample analysis.
This result indicates that SFC/MS has potential for practical use in the multiresidue analysis of a wide range of compounds that requires high sensitivity. Copyright © 2017 Elsevier B.V. All rights reserved.
Mahan, Alison E; Tedesco, Jacquelynne; Dionne, Kendall; Baruah, Kavitha; Cheng, Hao D; De Jager, Philip L; Barouch, Dan H; Suscovich, Todd; Ackerman, Margaret; Crispin, Max; Alter, Galit
2015-02-01
The N-glycan of the IgG constant region (Fc) plays a central role in tuning and directing multiple antibody functions in vivo, including antibody-dependent cellular cytotoxicity, complement deposition, and the regulation of inflammation, among others. However, traditional methods of N-glycan analysis, including HPLC and mass spectrometry, are technically challenging and ill suited to handle the large numbers of low concentration samples analyzed in clinical or animal studies of the N-glycosylation of polyclonal IgG. Here we describe a capillary electrophoresis-based technique to analyze plasma-derived polyclonal IgG-glycosylation quickly and accurately in a cost-effective, sensitive manner that is well suited for high-throughput analyses. Additionally, because a significant fraction of polyclonal IgG is glycosylated on both Fc and Fab domains, we developed an approach to separate and analyze domain-specific glycosylation in polyclonal human, rhesus and mouse IgGs. Overall, this protocol allows for the rapid, accurate, and sensitive analysis of Fc-specific IgG glycosylation, which is critical for population-level studies of how antibody glycosylation may vary in response to vaccination or infection, and across disease states ranging from autoimmunity to cancer in both clinical and animal studies. Copyright © 2014 Elsevier B.V. All rights reserved.
Aiba, Toshiki; Saito, Toshiyuki; Hayashi, Akiko; Sato, Shinji; Yunokawa, Harunobu; Maruyama, Toru; Fujibuchi, Wataru; Kurita, Hisaka; Tohyama, Chiharu; Ohsako, Seiichiroh
2017-03-09
It has been pointed out that environmental factors or chemicals can cause diseases that are developmental in origin. To detect abnormal epigenetic alterations in DNA methylation, convenient and cost-effective methods are required for such research, in which multiple samples are processed simultaneously. We here present methylated site display (MSD), a unique technique for the preparation of DNA libraries. By combining it with amplified fragment length polymorphism (AFLP) analysis, we developed a new method, MSD-AFLP. Methylated site display libraries consist of only DNAs derived from DNA fragments that are CpG methylated at the 5' end in the original genomic DNA sample. To test the effectiveness of this method, CpG methylation levels in liver, kidney, and hippocampal tissues of mice were compared to examine if MSD-AFLP can detect subtle differences in the levels of tissue-specific differentially methylated CpGs. As a result, many CpG sites suspected to be tissue-specific differentially methylated were detected. Nucleotide sequences adjacent to these methyl-CpG sites were identified and we determined the methylation level by methylation-sensitive restriction endonuclease (MSRE)-PCR analysis to confirm the accuracy of AFLP analysis. The differences in methylation level among tissues were almost identical between these methods. By MSD-AFLP analysis, we detected many CpGs showing less than 5% statistically significant tissue-specific difference and less than 10% degree of variability. Additionally, MSD-AFLP analysis could be used to identify CpG methylation sites in other organisms including humans. MSD-AFLP analysis can potentially be used to measure slight changes in CpG methylation level. Given the precision, sensitivity, and throughput of MSD-AFLP analysis, this method will be advantageous in a variety of epigenetics-based research.
Basin-scale geothermal model calibration: experience from the Perth Basin, Australia
NASA Astrophysics Data System (ADS)
Wellmann, Florian; Reid, Lynn
2014-05-01
The calibration of large-scale geothermal models for entire sedimentary basins is challenging as direct measurements of rock properties and subsurface temperatures are commonly scarce and the basal boundary conditions poorly constrained. Instead of the often applied "trial-and-error" manual model calibration, we examine here if we can gain additional insight into parameter sensitivities and model uncertainty with a model analysis and calibration study. Our geothermal model is based on a high-resolution full 3-D geological model, covering an area of more than 100,000 square kilometers and extending to a depth of 55 kilometers. The model contains all major faults (>80) and geological units (13) for the entire basin. This geological model is discretised into a rectilinear mesh with a lateral resolution of 500 × 500 m, and a variable resolution at depth. The highest resolution of 25 m is applied to a depth range of 1000-3000 m where most temperature measurements are available. The entire discretised model consists of approximately 50 million cells. The top thermal boundary condition is derived from surface temperature measurements on land and ocean floor. The base of the model extends below the Moho, and we apply the heat flux over the Moho as a basal heat flux boundary condition. Rock properties (thermal conductivity, porosity, and heat production) have been compiled from several existing data sets. The conductive geothermal forward simulation is performed with SHEMAT, and we then use the stand-alone capabilities of iTOUGH2 for sensitivity analysis and model calibration. Simulated temperatures are compared to 130 quality weighted bottom hole temperature measurements. The sensitivity analysis provided a clear insight into the most sensitive parameters and parameter correlations. 
This proved to be of value as strong correlations, for example between basal heat flux and heat production in deep geological units, can significantly influence the model calibration procedure. The calibration resulted in a better determination of subsurface temperatures and, in addition, provided an insight into model quality. Furthermore, a detailed analysis of the measurements used for calibration highlighted potential outliers and limitations of the model assumptions. Extending the previously existing large-scale geothermal simulation with iTOUGH2 provided us with valuable insight into the sensitive parameters and data in the model, which would clearly not be possible with a simple trial-and-error calibration method. Using the gained knowledge, future work will include more detailed studies on the influence of advection and convection.
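The quality-weighted comparison of simulated and measured temperatures described above amounts to a weighted least-squares objective. A minimal sketch, with invented temperatures and weights (not data from the Perth Basin study):

```python
# Hypothetical sketch of a quality-weighted misfit objective, as used when
# calibrating simulated temperatures against bottom-hole measurements.
# All data values below are illustrative.

def weighted_misfit(t_sim, t_obs, weights):
    """Quality-weighted sum of squared residuals."""
    return sum(w * (s - o) ** 2 for s, o, w in zip(t_sim, t_obs, weights))

t_obs = [85.0, 120.0, 140.0]      # measured bottom-hole temperatures, deg C
t_sim = [88.0, 118.0, 145.0]      # simulated temperatures at the same depths
weights = [1.0, 0.5, 0.25]        # quality weights (lower = less reliable)

print(weighted_misfit(t_sim, t_obs, weights))  # 9 + 2 + 6.25 = 17.25
```

A calibration routine would then adjust model parameters (basal heat flux, conductivities) to minimize this objective.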
Sensitivity of Hyperdense Basilar Artery Sign on Non-Enhanced Computed Tomography
Ernst, Marielle; Romero, Javier M.; Buhk, Jan-Hendrik; Cheng, Bastian; Herrmann, Jochen; Fiehler, Jens; Groth, Michael
2015-01-01
Purpose The hyperdense basilar artery sign (HBAS) is an indicator of vessel occlusion on non-contrast-enhanced computed tomography (NECT) in acute stroke patients. Since basilar artery occlusion (BAO) is associated with high mortality and morbidity, its early detection is of great clinical value. We sought to analyze the influence of density measurements as well as a normalized Hounsfield unit/hematocrit (HU/Hct) ratio on the detection of BAO on NECT in patients with suspected BAO. Materials and Methods 102 patients with clinically suspected BAO were examined with NECT followed immediately by multidetector computed tomography angiography. Two observers independently analyzed the images regarding the presence or absence of HBAS on NECT and performed HU measurements in the basilar artery. Receiver operating characteristic curve analysis was performed to determine the optimal density threshold for BAO using attenuation measurements or the HU/Hct ratio. Results Sensitivity of visual detection of the HBAS on NECT was relatively low at 81% (95%-CI, 54–95%), while specificity was high at 91% (95%-CI, 82–96%). The highest sensitivity was achieved by the combination of visual assessment and additional quantitative attenuation measurements applying a cut-off value of 46.5 HU, with 94% sensitivity and 81% specificity for BAO. A HU/Hct ratio >1.32 revealed a sensitivity of 88% (95%-CI, 60–98%) and a specificity of 84% (95%-CI, 74–90%). Conclusion In patients with clinically suspected acute BAO, the combination of visual assessment and additional attenuation measurement with a cut-off value of 46.5 HU is a reliable approach with high sensitivity in the detection of BAO on NECT. PMID:26479718
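The reported cut-off behaviour can be illustrated with a small sketch that computes sensitivity and specificity at a given HU threshold. The attenuation values below are invented for illustration; only the 46.5 HU cut-off comes from the abstract:

```python
# Minimal sketch of evaluating a density cut-off against a reference
# standard, in the spirit of the ROC analysis described above.

def sens_spec_at_cutoff(hu_occluded, hu_patent, cutoff):
    """Sensitivity and specificity of the rule 'HU > cutoff implies BAO'."""
    tp = sum(1 for v in hu_occluded if v > cutoff)   # occlusion, above cut-off
    fn = len(hu_occluded) - tp
    tn = sum(1 for v in hu_patent if v <= cutoff)    # no occlusion, at/below cut-off
    fp = len(hu_patent) - tn
    return tp / (tp + fn), tn / (tn + fp)

hu_occluded = [52.0, 49.5, 47.0, 44.0]   # patients with confirmed BAO (invented)
hu_patent   = [38.0, 42.5, 45.0, 48.0]   # patients without occlusion (invented)

sens, spec = sens_spec_at_cutoff(hu_occluded, hu_patent, 46.5)
print(sens, spec)  # 0.75 0.75
```

An ROC analysis simply repeats this calculation over all candidate cut-offs and picks the best trade-off.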
Zhang, Wenming; Zhu, Sha; Bai, Yunping; Xi, Ning; Wang, Shaoyang; Bian, Yang; Li, Xiaowei; Zhang, Yucang
2015-05-20
Temperature/pH dual-sensitive reed hemicellulose-based hydrogels have been prepared through glow discharge electrolysis plasma (GDEP). The effect of different discharge voltages on the temperature and pH response performance of the reed hemicellulose-based hydrogels was investigated, and the formation mechanism and deswelling behaviors of the hydrogels were also discussed. In addition, Fourier transform infrared spectroscopy (FT-IR), differential scanning calorimetry (DSC) and scanning electron microscopy (SEM) were adopted to characterize the structure, phase transformation behaviors and microstructure of the hydrogels. The results showed that all reed hemicellulose-based hydrogels exhibited dual sensitivity to temperature and pH, their phase transition temperatures were all approximately 33 °C, and the deswelling kinetics followed the first-order model. Furthermore, the hydrogel prepared at a discharge voltage of 600 V (TPRH-3) was more sensitive to temperature and pH and had a higher deswelling ratio. Copyright © 2015 Elsevier Ltd. All rights reserved.
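A first-order deswelling model of the kind mentioned above can be fitted by log-linear least squares. A minimal sketch on synthetic data (generated with k = 0.1 min^-1, not measurements from the hydrogel study):

```python
import math

# Sketch of fitting a first-order deswelling model W(t) = W0 * exp(-k t)
# by log-linear least squares. The data points are synthetic.

def fit_first_order(times, weights):
    """Return (W0, k) for W(t) = W0 * exp(-k t) via a log-linear fit."""
    n = len(times)
    y = [math.log(w) for w in weights]
    t_mean = sum(times) / n
    y_mean = sum(y) / n
    slope = (sum((t - t_mean) * (v - y_mean) for t, v in zip(times, y))
             / sum((t - t_mean) ** 2 for t in times))
    return math.exp(y_mean - slope * t_mean), -slope

times = [0.0, 10.0, 20.0, 30.0]                       # minutes (illustrative)
weights = [1.0 * math.exp(-0.1 * t) for t in times]   # exact first-order data
w0, k = fit_first_order(times, weights)
print(round(w0, 3), round(k, 3))  # 1.0 0.1
```

With real deswelling data, systematic curvature in the log-linear plot would indicate a departure from first-order kinetics.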
Results and lessons learned from MODIS polarization sensitivity characterization
NASA Astrophysics Data System (ADS)
Sun, J.; Xiong, X.; Wang, X.; Qiu, S.; Xiong, S.; Waluschka, E.
2006-08-01
In addition to radiometric, spatial, and spectral calibration requirements, MODIS design specifications include polarization sensitivity requirements of less than 2% for all Reflective Solar Bands (RSB) except for the band centered at 412 nm. To the best of our knowledge, MODIS was the first imaging radiometer that went through comprehensive system level (end-to-end) polarization characterization. MODIS polarization sensitivity was measured pre-launch at a number of sensor view angles using a laboratory Polarization Source Assembly (PSA) that consists of a rotatable source, a polarizer (Ahrens prism design), and a collimator. This paper describes MODIS polarization characterization approaches used by MODIS Characterization Support Team (MCST) at NASA/GSFC and addresses issues and concerns in the measurements. Results (polarization factor and phase angle) using different analyzing methods are discussed. Also included in this paper is a polarization characterization comparison between Terra and Aqua MODIS. Our previous and recent analysis of MODIS RSB polarization sensitivity could provide useful information for future Earth-observing sensor design, development, and characterization.
Geier, Johannes; Lessmann, Holger; Hillen, Uwe; Skudlik, Christoph; Jappe, Uta
2016-02-01
Epoxy resin systems (ERSs), consisting of resins, reactive diluents, and hardeners, are indispensable in many branches of industry. In order to develop less sensitizing ERS formulations, knowledge of the sensitizing properties of single components is mandatory. To analyse the frequency of sensitization in the patients concerned, as one integral part of a research project on the sensitizing potency of epoxy resin compounds (FP-0324). A retrospective analysis of data from the Information Network of Departments of Dermatology (IVDK), 2002-2011, and a comparison of reaction frequencies with (surrogate) exposure data, were performed. Almost half of the patients sensitized to epoxy resin were additionally sensitized to reactive diluents or hardeners. Among the reactive diluents, 1,6-hexanediol diglycidyl ether was the most frequent allergen, followed by 1,4-butanediol diglycidyl ether, phenyl glycidyl ether, and p-tert-butylphenyl glycidyl ether. Among the hardeners, m-xylylene diamine (MXDA) and isophorone diamine (IPDA) were the most frequent allergens. According to the calculated exposure-related frequency of sensitization, MXDA seems to be a far more important sensitizer than IPDA. Up to 60% of the patients sensitized to hardeners and 15-20% of those sensitized to reactive diluents do not react to epoxy resin. In cases of suspected contact allergy to an ERS, a complete epoxy resin series must be patch tested from the start. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Garway-Heath, David F; Quartilho, Ana; Prah, Philip; Crabb, David P; Cheng, Qian; Zhu, Haogang
2017-08-01
To evaluate the ability of various visual field (VF) analysis methods to discriminate treatment groups in glaucoma clinical trials and establish the value of time-domain optical coherence tomography (TD OCT) imaging as an additional outcome. VFs and retinal nerve fibre layer thickness (RNFLT) measurements (acquired by TD OCT) from 373 glaucoma patients in the UK Glaucoma Treatment Study (UKGTS) at up to 11 scheduled visits over a 2 year interval formed the cohort to assess the sensitivity of progression analysis methods. Specificity was assessed in 78 glaucoma patients with up to 11 repeated VF and OCT RNFLT measurements over a 3 month interval. Growth curve models assessed the difference in VF and RNFLT rate of change between treatment groups. Incident progression was identified by 3 VF-based methods: Guided Progression Analysis (GPA), 'ANSWERS' and 'PoPLR', and one based on VFs and RNFLT: 'sANSWERS'. Sensitivity, specificity and discrimination between treatment groups were evaluated. The rate of VF change was significantly faster in the placebo, compared to active treatment, group (-0.29 vs +0.03 dB/year, P <.001); the rate of RNFLT change was not different (-1.7 vs -1.1 dB/year, P =.14). After 18 months and at 95% specificity, the sensitivity of ANSWERS and PoPLR was similar (35%); sANSWERS achieved a sensitivity of 70%. GPA, ANSWERS and PoPLR discriminated treatment groups with similar statistical significance; sANSWERS did not discriminate treatment groups. Although the VF progression-detection method including VF and RNFLT measurements is more sensitive, it does not improve discrimination between treatment arms.
Suh, Chong Hyun; Yun, Seong Jong; Jin, Wook; Lee, Sun Hwa; Park, So Young; Ryu, Chang-Woo
2018-07-01
To assess the sensitivity and specificity of quantitative assessment of the apparent diffusion coefficient (ADC) for differentiating benign and malignant vertebral bone marrow lesions (BMLs) and compression fractures (CFs). An electronic literature search of MEDLINE and EMBASE was conducted. Bivariate modelling and hierarchical summary receiver operating characteristic modelling were performed to evaluate the diagnostic performance of ADC for differentiating vertebral BMLs. Subgroup analysis was performed for differentiating benign and malignant vertebral CFs. Meta-regression analyses according to subject, study and diffusion-weighted imaging (DWI) characteristics were performed. Twelve eligible studies (748 lesions, 661 patients) were included. The ADC exhibited a pooled sensitivity of 0.89 (95% confidence interval [CI] 0.80-0.94) and a pooled specificity of 0.87 (95% CI 0.78-0.93) for differentiating benign and malignant vertebral BMLs. In addition, the pooled sensitivity and specificity for differentiating benign and malignant CFs were 0.92 (95% CI 0.82-0.97) and 0.91 (95% CI 0.87-0.94), respectively. In the meta-regression analysis, the DWI slice thickness was a significant factor affecting heterogeneity (p < 0.01); thinner slice thickness (< 5 mm) showed higher specificity (95%) than thicker slice thickness (81%). Quantitative assessment of ADC is a useful diagnostic tool for differentiating benign and malignant vertebral BMLs and CFs. • Quantitative assessment of ADC is useful in differentiating vertebral BMLs. • Quantitative ADC assessment for BMLs had sensitivity of 89%, specificity of 87%. • Quantitative ADC assessment for CFs had sensitivity of 92%, specificity of 91%. • The specificity is highest (95%) with thinner (< 5 mm) DWI slice thickness.
NASA Astrophysics Data System (ADS)
Ojeda, David; Le Rolle, Virginie; Tse Ve Koon, Kevin; Thebault, Christophe; Donal, Erwan; Hernández, Alfredo I.
2013-11-01
In this paper, lumped-parameter models of the cardiovascular system, the cardiac electrical conduction system and a pacemaker are coupled to generate mitral flow profiles for different atrio-ventricular delay (AVD) configurations, in the context of cardiac resynchronization therapy (CRT). First, we perform a local sensitivity analysis of left ventricular and left atrial parameters on mitral flow characteristics, namely E and A wave amplitude, mitral flow duration, and mitral flow time integral. Additionally, a global sensitivity analysis over all model parameters is presented to screen for the most relevant parameters that affect the same mitral flow characteristics. Results provide insight into the influence of the left ventricle and atrium on mitral flow profiles. This information will be useful for future parameter estimation of the model that could reproduce the mitral flow profiles and cardiovascular hemodynamics of patients undergoing AVD optimization during CRT.
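A local sensitivity analysis of the kind described above can be sketched as one-at-a-time central finite differences on a model's parameters. The model below is a toy function standing in for the cardiovascular model; all names and values are illustrative:

```python
# Sketch of a one-at-a-time local sensitivity analysis by central finite
# differences. Normalized sensitivities (elasticities) are dimensionless,
# so they can be compared across parameters with different units.

def local_sensitivity(model, params, rel_step=0.01):
    """Normalized local sensitivities d(output)/d(p) * p / output."""
    base = model(params)
    sens = {}
    for name, value in params.items():
        h = rel_step * value
        up = dict(params, **{name: value + h})
        down = dict(params, **{name: value - h})
        deriv = (model(up) - model(down)) / (2 * h)
        sens[name] = deriv * value / base
    return sens

# Toy "model": output proportional to a^2 * b, so the normalized
# sensitivities should come out close to 2 (for a) and 1 (for b).
def toy(p):
    return p["a"] ** 2 * p["b"]

s = local_sensitivity(toy, {"a": 3.0, "b": 2.0})
print(s)
```

For the global screening step, the same model evaluations would instead be driven by a space-filling design over all parameters at once.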
NASA Astrophysics Data System (ADS)
Muhlen, Luis S. W.; Najafi, Behzad; Rinaldi, Fabio; Marchesi, Renzo
2014-04-01
Solar troughs are amongst the most commonly used technologies for collecting solar thermal energy and any attempt to increase the performance of these systems is welcomed. In the present study a parabolic solar trough is simulated using a one dimensional finite element model in which the energy balances for the fluid, the absorber and the envelope in each element are performed. The developed model is then validated using the available experimental data . A sensitivity analysis is performed in the next step in order to study the effect of changing the type of the working fluid and the corresponding Reynolds number on the overall performance of the system. The potential improvement due to the addition of a shield on the upper half of the annulus and enhancing the convection coefficient of the heat transfer fluid is also studied.
Temperature-independent fiber-Bragg-grating-based atmospheric pressure sensor
NASA Astrophysics Data System (ADS)
Zhang, Zhiguo; Shen, Chunyan; Li, Luming
2018-03-01
Atmospheric pressure is an important means of altitude measurement for modern aircraft; moreover, it is an indispensable parameter in meteorological telemetry systems. People are increasingly concerned about the weather, and accurate, convenient atmospheric pressure measurements can provide strong support for meteorological analysis. However, the electronic atmospheric pressure sensors currently in use suffer from several shortcomings. After analysis and discussion, we propose an innovative structural design that combines a vacuum membrane box with a temperature-independent strain sensor based on an equal-strength cantilever beam structure and fiber Bragg grating (FBG) sensors. We provide experimental verification that the atmospheric pressure sensor has a simple structure, requires no external power supply, offers automatic temperature compensation, and achieves high sensitivity. The sensor system has good sensitivity, up to 100 nm/MPa, and good repeatability. In addition, the device exhibits the desired hysteresis characteristics.
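The quoted sensitivity of 100 nm/MPa implies a simple linear calibration between pressure and FBG wavelength shift. A trivial sketch; the linearity assumption and the zero-pressure reference wavelength are illustrative, not from the paper:

```python
# Linear calibration sketch for an FBG pressure sensor. Only the
# 100 nm/MPa sensitivity comes from the abstract; the reference
# wavelength is an assumed typical Bragg wavelength.

SENSITIVITY_NM_PER_MPA = 100.0   # from the abstract
LAMBDA0_NM = 1550.0              # assumed zero-pressure Bragg wavelength

def pressure_from_shift(wavelength_nm):
    """Invert the linear calibration: pressure in MPa from measured wavelength."""
    return (wavelength_nm - LAMBDA0_NM) / SENSITIVITY_NM_PER_MPA

print(pressure_from_shift(1550.5))  # 0.005 MPa (= 5 kPa)
```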
Economic assessments of small-scale drinking-water interventions in pursuit of MDG target 7C.
Cameron, John; Jagals, Paul; Hunter, Paul R; Pedley, Steve; Pond, Katherine
2011-12-01
This paper uses an applied rural case study of a safer water intervention in South Africa to illustrate how three levels of economic assessment can be used to understand the impact of the intervention on people's well-being. It is set in the context of Millennium Development Goal 7 which sets a target (7C) for safe drinking-water provision and the challenges of reaching people in remote rural areas with relatively small-scale schemes. The assessment moves from cost efficiency to cost effectiveness to a full social cost-benefit analysis (SCBA) with an associated sensitivity test. In addition to demonstrating techniques of analysis, the paper brings out many of the challenges in understanding how safer drinking-water impacts on people's livelihoods. The SCBA shows the case study intervention is justified economically, though the sensitivity test suggests 'downside' vulnerability. Copyright © 2011 Elsevier B.V. All rights reserved.
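A social cost-benefit analysis with a discount-rate sensitivity test can be sketched as a net-present-value calculation repeated over a range of rates. The cash flows below are invented, not figures from the South African case study:

```python
# Hedged sketch of an NPV calculation with a discount-rate sensitivity
# test, in the spirit of the SCBA described above. All values illustrative.

def npv(rate, cashflows):
    """Net present value; cashflows[0] occurs at time zero."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Year 0: capital cost of the intervention; years 1-5: net social benefits.
cashflows = [-100.0] + [30.0] * 5

for rate in (0.03, 0.08, 0.12):  # sensitivity test over discount rates
    print(rate, round(npv(rate, cashflows), 2))
```

If the NPV stays positive even at the highest plausible discount rate, the intervention is economically robust; a sign flip at higher rates is the 'downside' vulnerability the abstract mentions.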
NASA Astrophysics Data System (ADS)
Miller, D. O.; Brune, W. H.
2017-12-01
Accurate estimates of secondary organic aerosol (SOA) from atmospheric models are a major research challenge due to the complexity of the chemical and physical processes involved in SOA formation and continuous aging. The primary uncertainties of SOA models include those associated with the formation of gas-phase products, the conversion between gas phase and particle phase, the aging mechanisms of SOA, and other processes related to heterogeneous and particle-phase reactions. To address this challenge, we use a modular modeling framework that combines both simple and near-explicit gas-phase reactions and a two-dimensional volatility basis set (2D-VBS) to simulate the formation and evolution of SOA. Global sensitivity analysis is used to assess the relative importance of the model input parameters. In addition, the model is compared to measurements from the Focused Isoprene eXperiment at the California Institute of Technology (FIXCIT).
Lu, Chi-Yu; Wu, Hsin-Lung; Chen, Su-Hwei; Kou, Hwang-Shang; Wu, Shou-Mei
2002-01-02
A highly sensitive high-performance liquid chromatography (HPLC) method is described for the simultaneous determination of some important saturated and unsaturated fatty acids in milk, including lauric (dodecanoic), myristic (tetradecanoic), palmitic (hexadecanoic), stearic (octadecanoic), palmitoleic (hexadecenoic), oleic (octadecenoic), and linoleic acids (octadecadienoic acids). The fatty acids were fluorogenically derivatized with 2-(2-naphthoxy)ethyl 2-(piperidino)ethanesulfonate (NOEPES) as their naphthoxyethyl derivatives. The resulting derivatives were separated by isocratic HPLC and monitored with a fluorometric detector (λex = 235 nm, λem = 350 nm). The fatty acids in milk were extracted with toluene, and the extract with the fatty acids was directly derivatized with NOEPES without solvent replacement. Determination of long-chain free fatty acids in milk is feasible by a standard addition method. A small amount of milk product, 10 microL, is sufficient for the analysis.
NASA Technical Reports Server (NTRS)
Wheeler, D. R.
1978-01-01
The principles of ESCA (electron spectroscopy for chemical analysis) are described by comparison with other spectroscopic techniques. The advantages and disadvantages of ESCA as compared to other surface sensitive analytical techniques are evaluated. The use of ESCA is illustrated by actual applications to oxidation of steel and Rene 41, the chemistry of lubricant additives on steel, and the composition of sputter deposited hard coatings. Finally, a bibliography of material that is useful for further study of ESCA is presented and commented upon.
PHOTOTROPISM OF GERMINATING MYCELIA OF SOME PARASITIC FUNGI
uredinales on young wheat plants; Distribution and significance of the phototropism of germinating mycelia -- confirmation of older data, examination of...eight additional uredinales, probable meaning of negative phototropism for the occurrence of infection; Analysis of the stimulus physiology of the...reaction -- the minimum effective illumination intensity, the effective spectral region, inversion of the phototropic reaction in liquid paraffin, the negative light-growth reaction, the light-sensitive zone.
Updated Estimates of the Average Financial Return on Master's Degree Programs in the United States
ERIC Educational Resources Information Center
Gándara, Denisa; Toutkoushian, Robert K.
2017-01-01
In this study, we provide updated estimates of the private and social financial return on enrolling in a master's degree program in the United States. In addition to returns for all fields of study, we show estimated returns to enrolling in master's degree programs in business and education, specifically. We also conduct a sensitivity analysis to…
Determination of EGFR and KRAS mutational status in Greek non-small-cell lung cancer patients
PAPADOPOULOU, EIRINI; TSOULOS, NIKOLAOS; TSIRIGOTI, ANGELIKI; APESSOS, ANGELA; AGIANNITOPOULOS, KONSTANTINOS; METAXA-MARIATOU, VASILIKI; ZAROGOULIDIS, KONSTANTINOS; ZAROGOULIDIS, PAVLOS; KASARAKIS, DIMITRIOS; KAKOLYRIS, STYLIANOS; DAHABREH, JUBRAIL; VLASTOS, FOTIS; ZOUBLIOS, CHARALAMPOS; RAPTI, AGGELIKI; PAPAGEORGIOU, NIKI GEORGATOU; VELDEKIS, DIMITRIOS; GAGA, MINA; ARAVANTINOS, GERASIMOS; KARAVASILIS, VASILEIOS; KARAGIANNIDIS, NAPOLEON; NASIOULAS, GEORGE
2015-01-01
It has been reported that certain patients with non-small-cell lung cancer (NSCLC) that harbor activating somatic mutations within the tyrosine kinase domain of the epidermal growth factor receptor (EGFR) gene may be effectively treated using targeted therapy. The use of EGFR inhibitors in patient therapy has been demonstrated to improve response and survival rates; therefore, it was suggested that clinical screening for EGFR mutations should be performed for all patients. Numerous clinicopathological factors have been associated with EGFR and Kirsten-rat sarcoma oncogene homolog (KRAS) mutational status including gender, smoking history and histology. In addition, it was reported that EGFR mutation frequency in NSCLC patients was ethnicity-dependent, with an incidence rate of ~30% in Asian populations and ~15% in Caucasian populations. However, limited data has been reported on intra-ethnic differences throughout Europe. The present study aimed to investigate the frequency and spectrum of EGFR mutations in 1,472 Greek NSCLC patients. In addition, KRAS mutation analysis was performed in patients with known smoking history in order to determine the correlation of type and mutation frequency with smoking. High-resolution melting curve (HRM) analysis followed by Sanger sequencing was used to identify mutations in exons 18–21 of the EGFR gene and in exon 2 of the KRAS gene. A sensitive next-generation sequencing (NGS) technology was also employed to classify samples with equivocal results. The use of sensitive mutation detection techniques in a large study population of Greek NSCLC patients in routine diagnostic practice revealed an overall EGFR mutation frequency of 15.83%. This mutation frequency was comparable to that previously reported in other European populations. Of note, there was a 99.8% concordance between the HRM method and Sanger sequencing. NGS was found to be the most sensitive method. 
In addition, female non-smokers demonstrated a high prevalence of EGFR mutations. Furthermore, KRAS mutation analysis in patients with a known smoking history revealed no difference in mutation frequency according to smoking status; however, a different mutation spectrum was observed. PMID:26622815
What we miss if standard panel is used for skin prick testing?
Cavkaytar, Ozlem; Buyuktiryaki, Betul; Sag, Erdal; Soyer, Ozge; Sekerel, Bulent E
2015-09-01
Although standard skin prick test (SPT) panels are crucial for the routine investigation of sensitization in daily clinical practice, they have limitations in terms of missing allergens. To determine the sensitization rates (SRs) to an additional panel of allergens and their relative contributions in allergic diseases. SPTs with a battery of aeroallergens [tree pollen (A. glutinosa, C. arizonica, J. communis, T. platyphyllos, R. pseudoacacia), weed pollen (R. acetosa, U. dioica, A. artemisifolia), smut mix, yeast mix, storage mites (SM) (B. tropicalis, L. destructor, T. putrescentiae, A. siro), and mouse and budgerigar epithelia] were performed on 318 participants (6-18 years) who had previously been identified as sensitized to at least one of the aeroallergens found in the standard battery. Forty percent of participants were sensitized to at least one additional aeroallergen. The three most frequent sensitizations were to B. tropicalis (11.3%), R. pseudoacacia (9.7%) and L. destructor (8.2%). The SR for tree pollen increased from 6.9% to 19.8%, for mites from 26.3% to 31.6%, and for moulds from 5.3% to 9.4% with the addition of the respective group of other allergens to the battery. Furthermore, higher rates of additional tree pollen sensitization were found among patients with "only AR" (21%) compared to patients with "only asthma" (4.6%, p = 0.006); conversely, higher rates of SM sensitization were found among patients with "only asthma" (20%) compared to patients with "only AR" (3.2%, p = 0.003). CONCLUSIONS: Though some sensitizations may occur due to cross-reactivity, almost 40% of sensitized children were also co-sensitized to the additional allergens tested. Physicians should consider further steps when a negative or inconsistent result is achieved through a standard skin test panel.
Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urrego-Blanco, Jorge Rolando; Urban, Nathan Mark; Hunke, Elizabeth Clare
Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. Lastly, it is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.
Improved Sensitivity for Molecular Detection of Bacterial and Candida Infections in Blood
Bacconi, Andrea; Richmond, Gregory S.; Baroldi, Michelle A.; Laffler, Thomas G.; Blyn, Lawrence B.; Carolan, Heather E.; Frinder, Mark R.; Toleno, Donna M.; Metzgar, David; Gutierrez, Jose R.; Massire, Christian; Rounds, Megan; Kennel, Natalie J.; Rothman, Richard E.; Peterson, Stephen; Carroll, Karen C.; Wakefield, Teresa; Ecker, David J.
2014-01-01
The rapid identification of bacteria and fungi directly from the blood of patients with suspected bloodstream infections aids in diagnosis and guides treatment decisions. The development of an automated, rapid, and sensitive molecular technology capable of detecting the diverse agents of such infections at low titers has been challenging, due in part to the high background of genomic DNA in blood. PCR followed by electrospray ionization mass spectrometry (PCR/ESI-MS) allows for the rapid and accurate identification of microorganisms but with a sensitivity of about 50% compared to that of culture when using 1-ml whole-blood specimens. Here, we describe a new integrated specimen preparation technology that substantially improves the sensitivity of PCR/ESI-MS analysis. An efficient lysis method and automated DNA purification system were designed for processing 5 ml of whole blood. In addition, PCR amplification formulations were optimized to tolerate high levels of human DNA. An analysis of 331 specimens collected from patients with suspected bloodstream infections resulted in 35 PCR/ESI-MS-positive specimens (10.6%) compared to 18 positive by culture (5.4%). PCR/ESI-MS was 83% sensitive and 94% specific compared to culture. Replicate PCR/ESI-MS testing from a second aliquot of the PCR/ESI-MS-positive/culture-negative specimens corroborated the initial findings in most cases, resulting in increased sensitivity (91%) and specificity (99%) when confirmed detections were considered true positives. The integrated solution described here has the potential to provide rapid detection and identification of organisms responsible for bloodstream infections. PMID:24951806
Paykin, Gabriel; O'Reilly, Gerard; Ackland, Helen M; Mitra, Biswadev
2017-05-01
The National Emergency X-Radiography Utilization Study (NEXUS) criteria are used to assess the need for imaging to evaluate cervical spine integrity after injury. The aim of this study was to assess the sensitivity of the NEXUS criteria in older blunt trauma patients. Patients aged 65 years or older presenting between 1st July 2010 and 30th June 2014 and diagnosed with cervical spine fractures were identified from the institutional trauma registry. Clinical examination findings were extracted from electronic medical records. Data on the NEXUS criteria were collected and the sensitivity of the rule to exclude a fracture was calculated. Over the study period 231,018 patients presented to The Alfred Emergency & Trauma Centre, of whom 14,340 met the institutional trauma registry inclusion criteria and 4035 were aged ≥65 years. Among these, 468 patients were diagnosed with cervical spine fractures, of whom 21 were determined to be NEXUS negative. The NEXUS criteria performed with a sensitivity of 94.8% [95% CI: 92.1%-96.7%] on complete case analysis in older blunt trauma patients. One-way sensitivity analysis resulted in a maximum sensitivity limit of 95.5% [95% CI: 93.2%-97.2%]. Compared with the general adult blunt trauma population, the NEXUS criteria are less sensitive in excluding cervical spine fractures in older blunt trauma patients. We therefore suggest that liberal imaging be considered for older patients regardless of history or examination findings and that the addition of an age criterion to the NEXUS criteria be investigated in future studies. Copyright © 2017 Elsevier Ltd. All rights reserved.
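The sensitivity estimate with its 95% confidence interval can be sketched as follows, using a Wilson score interval. The counts (447 fractures flagged, 21 missed) are derived from the abstract's totals; the published 94.8% figure used a complete-case analysis, so the result here differs slightly:

```python
import math

# Sketch of a sensitivity calculation with a Wilson score confidence
# interval. Counts derived from the abstract's totals (468 fractures,
# 21 NEXUS negative); not the exact complete-case figures.

def sensitivity_with_ci(tp, fn, z=1.96):
    """Sensitivity TP/(TP+FN) with a 95% Wilson score interval."""
    n = tp + fn
    p = tp / n
    centre = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = (z / (1 + z * z / n)) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return p, centre - half, centre + half

p, lo, hi = sensitivity_with_ci(447, 21)
print(round(p, 3), round(lo, 3), round(hi, 3))
```

The Wilson interval is preferred over the normal approximation here because sensitivity is close to 1, where the simpler interval can exceed 100%.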
Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model
Urrego-Blanco, Jorge Rolando; Urban, Nathan Mark; Hunke, Elizabeth Clare; ...
2016-04-01
Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. Lastly, it is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.
Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model
NASA Astrophysics Data System (ADS)
Urrego-Blanco, Jorge R.; Urban, Nathan M.; Hunke, Elizabeth C.; Turner, Adrian K.; Jeffery, Nicole
2016-04-01
Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. It is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.
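The variance-based first-order Sobol' indices described above can be estimated with a pick-freeze (Saltelli-style) scheme. This sketch uses plain Monte Carlo sampling rather than Sobol' sequences, and a toy additive model standing in for CICE:

```python
import random

# Pure-Python sketch of first-order Sobol' sensitivity indices via the
# pick-freeze estimator. Plain Monte Carlo is used here instead of the
# quasi-random Sobol' sequences of the study.

def sobol_first_order(f, dim, n, rng):
    """Saltelli-style estimates of first-order Sobol' indices of f on [0,1]^dim."""
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    fa = [f(x) for x in A]
    fb = [f(x) for x in B]
    mean = sum(fa + fb) / (2 * n)
    var = sum((v - mean) ** 2 for v in fa + fb) / (2 * n)
    indices = []
    for i in range(dim):
        # A with column i swapped in from B ("freeze" all but coordinate i).
        AB = [A[k][:i] + [B[k][i]] + A[k][i + 1:] for k in range(n)]
        fab = [f(x) for x in AB]
        v_i = sum(fb[k] * (fab[k] - fa[k]) for k in range(n)) / n
        indices.append(v_i / var)
    return indices

# Toy additive model with variance shares 16:4:1, so the exact first-order
# indices are 16/21, 4/21 and 1/21.
def f(x):
    return 4 * x[0] + 2 * x[1] + x[2]

s = sobol_first_order(f, 3, 20000, random.Random(0))
print([round(v, 2) for v in s])
```

In practice the expensive model is replaced by a fast emulator, as in the study, so that the tens of thousands of required evaluations become affordable.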
Sensitivity of a phase-sensitive optical time-domain reflectometer with a semiconductor laser source
NASA Astrophysics Data System (ADS)
Alekseev, A. E.; Tezadov, Ya A.; Potapov, V. T.
2018-06-01
In the present paper we perform, for the first time, an analysis of the average sensitivity of a coherent phase-sensitive optical time-domain reflectometer (phase-OTDR) with a semiconductor laser source to external actions. The sensitivity of this OTDR can be defined in a conventional manner via the average SNR at its output, which in turn is defined by the average useful signal power and the average intensity noise power in the OTDR spatial channels in the bandwidth defined by the OTDR sampling frequency. The average intensity noise power was considered in detail in a previous paper. In the current paper we examine the average useful signal power at the output of a phase-OTDR. The analysis of the average useful signal power of a phase-OTDR is based on the study of a fiber scattered-light interferometer (FSLI), which is treated as a constituent part of a phase-OTDR. In the analysis, one of the conventional phase-OTDR schemes with a rectangular dual-pulse probe signal is considered. The FSLI that corresponds to this OTDR scheme has two scattering fiber segments with an additional time delay introduced between the backscattered fields. The average useful signal power and the resulting average SNR at the output of this FSLI are determined by the degree of coherence of the semiconductor laser source, the length of the scattering fiber segments, and the additional time delay between the scattering fiber segments. The average useful signal power of the corresponding phase-OTDR is determined by analogous parameters: the source coherence, the durations of the parts constituting the dual pulse, and the time interval that separates these parts. An expression for the average useful signal power of a phase-OTDR is theoretically derived and experimentally verified. Based on the derived average useful signal power and the average intensity noise power obtained in the previous paper, the average SNR of a phase-OTDR is defined.
By setting the average SNR to 1, the minimum detectable external-action amplitude in a given spectral band is determined for our particular phase-OTDR setup. We also derive a simple relation for the average useful signal power and the average SNR under the assumption that the laser source coherence is high. The results of the paper can serve as the basis for further development of the concept of phase-OTDR sensitivity.
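The SNR = 1 detectability criterion used in this abstract can be illustrated numerically. The sketch below is a toy model only: the quadratic dependence of signal power on action amplitude and all numerical values are assumptions for illustration, not parameters of the actual phase-OTDR setup.

```python
import numpy as np

def snr_db(p_signal, p_noise):
    # SNR convention of the paper: ratio of average useful signal power to
    # average intensity-noise power in the sampling-rate-limited bandwidth.
    return 10 * np.log10(p_signal / p_noise)

p_noise = 1e-9                                 # W, assumed in-band noise power
amplitudes = np.array([0.1, 0.5, 1.0, 2.0])    # external action, arb. units
p_signal = 1e-9 * amplitudes**2                # assumed quadratic response

snrs = snr_db(p_signal, p_noise)
detectable = amplitudes[p_signal / p_noise >= 1]   # SNR >= 1 (i.e. >= 0 dB)

print(np.round(snrs, 1), detectable.min())     # smallest detectable amplitude
```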
Treatment strategies for pelvic organ prolapse: a cost-effectiveness analysis.
Hullfish, Kathie L; Trowbridge, Elisa R; Stukenborg, George J
2011-05-01
To compare the relative cost effectiveness of treatment decision alternatives for post-hysterectomy pelvic organ prolapse (POP). A Markov decision analysis model was used to assess and compare the relative cost effectiveness of expectant management, use of a pessary, and surgery for obtaining months of quality-adjusted life over 1 year. Sensitivity analysis was conducted to determine whether the results depended on specific estimates of patient utilities for pessary use, probabilities for complications and other events, and estimated costs. Only two treatment alternatives were found to be efficient choices: initial pessary use and vaginal reconstructive surgery (VRS). Pessary use (including patients that eventually transitioned to surgery) achieved 10.4 quality-adjusted months, at a cost of $10,000 per patient, while VRS obtained 11.4 quality-adjusted months, at $15,000 per patient. Sensitivity analysis demonstrated that these baseline results depended on several key estimates in the model. This analysis indicates that pessary use and VRS are the most cost-effective treatment alternatives for treating post-hysterectomy vaginal prolapse. Additional research is needed to standardize POP outcomes and complications, so that healthcare providers can best utilize cost information in balancing the risks and benefits of their treatment decisions.
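The efficiency comparison in this abstract reduces to an incremental cost-effectiveness ratio (ICER) between the two non-dominated strategies. The sketch below uses only the point estimates quoted above (10.4 quality-adjusted months at $10,000 for pessary; 11.4 at $15,000 for VRS); the full Markov model and its transition probabilities are not reproduced.

```python
# Point estimates from the abstract; the Markov structure behind them is omitted.
strategies = {
    "pessary": {"cost": 10_000, "qalm": 10.4},  # qalm: quality-adjusted months
    "vrs":     {"cost": 15_000, "qalm": 11.4},
}

def icer(ref, alt):
    """Incremental cost per additional quality-adjusted month of `alt` vs `ref`."""
    dc = strategies[alt]["cost"] - strategies[ref]["cost"]
    de = strategies[alt]["qalm"] - strategies[ref]["qalm"]
    return dc / de

print(icer("pessary", "vrs"))  # → 5000.0 dollars per quality-adjusted month
```

A decision maker would accept VRS over pessary only if willing to pay at least this much per quality-adjusted month gained.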
Shi, Kan; Chen, Gong; Pistolozzi, Marco; Xia, Fenggeng; Wu, Zhenqiang
2016-09-01
Monascus pigments, a mixture of azaphilones mainly composed of red, orange and yellow pigments, are usually prepared in aqueous ethanol and analysed by ultraviolet-visible (UV-Vis) spectroscopy. The pH of aqueous ethanol used during sample preparation and analysis has never been considered a key parameter to control; however, this study shows that the UV-Vis spectra and colour characteristics of the six major pigments are strongly influenced by the pH of the solvent employed. In addition, the increase of solvent pH results in a remarkable increase of the amination reaction of orange pigments with amino compounds, and at higher pH (≥ 6.0) a significant amount of orange pigment derivatives rapidly form. The consequent impact of these pH-sensitive properties on pigment analysis is further discussed. Based on the presented results, we propose that the sample preparation and analysis of Monascus pigments should be uniformly performed at low pH (≤ 2.5) to avoid variations of UV-Vis spectra and the creation of artefacts due to the occurrence of amination reactions, and ensure an accurate analysis that truly reflects pigment characteristics in the samples.
NASA Technical Reports Server (NTRS)
Kirshen, N.; Mill, T.
1973-01-01
The effect of formulation components and the addition of fire retardants on the impact sensitivity of Viton B fluoroelastomer in liquid oxygen was studied with the objective of developing a procedure for reliably reducing this sensitivity. Component evaluation, carried out on more than 40 combinations of components and cure cycles, showed that almost all the standard formulation agents, including carbon, MgO, Diak-3, and PbO2, will sensitize the Viton stock either singly or in combinations, some combinations being much more sensitive than others. Cure and postcure treatments usually reduced the sensitivity of a given formulation, often dramatically, but no formulated Viton was as insensitive as the pure Viton B stock. Coating formulated Viton with a thin layer of pure Viton gave some indication of reduced sensitivity, but additional tests are needed. It is concluded that sensitivity in formulated Viton arises from a variety of sources, some physical and some chemical in origin. Elemental analyses for all the formulated Vitons are reported as are the results of a literature search on the subject of LOX impact sensitivity.
Margin and sensitivity methods for security analysis of electric power systems
NASA Astrophysics Data System (ADS)
Greene, Scott L.
Reliable operation of large scale electric power networks requires that system voltages and currents stay within design limits. Operation beyond those limits can lead to equipment failures and blackouts. Security margins measure the amount by which system loads or power transfers can change before a security violation, such as an overloaded transmission line, is encountered. This thesis shows how to efficiently compute security margins defined by limiting events and instabilities, and the sensitivity of those margins with respect to assumptions, system parameters, operating policy, and transactions. Security margins to voltage collapse blackouts, oscillatory instability, generator limits, voltage constraints and line overloads are considered. The usefulness of computing the sensitivities of these margins with respect to interarea transfers, loading parameters, generator dispatch, transmission line parameters, and VAR support is established for networks as large as 1500 buses. The sensitivity formulas presented apply to a range of power system models. Conventional sensitivity formulas such as line distribution factors, outage distribution factors, participation factors and penalty factors are shown to be special cases of the general sensitivity formulas derived in this thesis. The sensitivity formulas readily accommodate sparse matrix techniques. Margin sensitivity methods are shown to work effectively for avoiding voltage collapse blackouts caused by either saddle node bifurcation of equilibria or immediate instability due to generator reactive power limits. Extremely fast contingency analysis for voltage collapse can be implemented with margin sensitivity based rankings. Interarea transfer can be limited by voltage limits, line limits, or voltage stability. The sensitivity formulas presented in this thesis apply to security margins defined by any limit criteria. 
A method to compute transfer margins by directly locating intermediate events reduces the total number of loadflow iterations required by each margin computation and provides sensitivity information at minimal additional cost. Estimates of the effect of simultaneous transfers on the transfer margins agree well with the exact computations for a network model derived from a portion of the U.S. grid. The accuracy of the estimates over a useful range of conditions and the ease of obtaining them suggest that the sensitivity computations will be of practical value.
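The notion of a security margin and its parameter sensitivity can be illustrated on the smallest possible example. The sketch below is not the thesis' general formulation: it assumes a lossless two-bus system, where the maximum deliverable power is E*V/X, and checks the analytic sensitivity of the margin with respect to line reactance against a finite difference.

```python
# Two-bus illustration: margin to collapse from a base transfer P0 is
# M = E*V/X - P0, so dM/dX = -E*V/X**2. All values are illustrative.
E, V, X, P0 = 1.05, 1.0, 0.5, 1.2

margin = E * V / X - P0
dM_dX = -E * V / X**2               # analytic margin sensitivity

h = 1e-6                            # finite-difference cross-check
fd = ((E * V / (X + h) - P0) - margin) / h
print(round(margin, 3), round(dM_dX, 3), round(fd, 3))
```

In the thesis the same idea is carried out for general network models, where the sensitivity formulas avoid recomputing margins for every candidate parameter change.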
El Allaki, Farouk; Harrington, Noel; Howden, Krista
2016-11-01
The objectives of this study were (1) to estimate the annual sensitivity of Canada's bTB surveillance system and its three system components (slaughter surveillance, export testing and disease investigation) using a scenario tree modelling approach, and (2) to identify key model parameters that influence the estimates of the surveillance system sensitivity (SSSe). To achieve these objectives, we designed stochastic scenario tree models for the three surveillance system components included in the analysis. Demographic data, slaughter data, export testing data, and disease investigation data from 2009 to 2013 were extracted for input into the scenario trees. Sensitivity analysis was conducted to identify the parameters most influential on SSSe estimates. The median annual SSSe estimates generated from the study were very high, ranging from 0.95 (95% probability interval [PI]: 0.88-0.98) to 0.97 (95% PI: 0.93-0.99). Median annual sensitivity estimates for the slaughter surveillance component ranged from 0.95 (95% PI: 0.88-0.98) to 0.97 (95% PI: 0.93-0.99). This shows slaughter surveillance to be the major contributor to overall surveillance system sensitivity, with a high probability of detecting M. bovis infection if present at a prevalence of 0.00028% or greater during the study period. The export testing and disease investigation components had extremely low component sensitivity estimates: the maximum median sensitivity estimates were 0.02 (95% PI: 0.014-0.023) and 0.0061 (95% PI: 0.0056-0.0066), respectively. The three most influential input parameters on the model's output (SSSe) were the probability of a granuloma being detected at slaughter inspection, the probability of a granuloma being present in older animals (≥12 months of age), and the probability of a granuloma sample being submitted to the laboratory.
Additional studies are required to reduce the levels of uncertainty and variability associated with these three parameters influencing the surveillance system sensitivity. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.
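The stochastic scenario-tree idea behind these estimates can be sketched compactly: detection of one infected animal is the product of branch probabilities along the tree, parameter uncertainty is expressed as distributions, and component sensitivity follows at the herd level. All branch structure, distributions, and values below are illustrative assumptions, not the study's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative branches for a slaughter-surveillance-like component:
# animal slaughtered -> granuloma present -> granuloma detected -> sample
# submitted. Uncertainty in each branch is encoded as a beta distribution.
n_iter = 100_000
p_slaughtered = rng.beta(20, 80, n_iter)
p_granuloma   = rng.beta(70, 30, n_iter)
p_detected    = rng.beta(50, 50, n_iter)
p_submitted   = rng.beta(90, 10, n_iter)

p_unit = p_slaughtered * p_granuloma * p_detected * p_submitted  # per animal

n_infected = 10                               # assumed animals in infected herd
comp_se = 1 - (1 - p_unit) ** n_infected      # herd-level component sensitivity

print(np.round(np.median(comp_se), 3))        # median with a probability interval
```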
Dearman, Rebecca J; Betts, Catherine J; Farr, Craig; McLaughlin, James; Berdasco, Nancy; Wiench, Karin; Kimber, Ian
2007-10-01
There are currently available no systematic experimental data on the skin sensitizing properties of acrylates that are of relevance in occupational settings. Limited information from previous guinea-pig tests or from the local lymph node assay (LLNA) is available; however, these data are incomplete and somewhat contradictory. For those reasons, we have examined in the LLNA 4 acrylates: butyl acrylate (BA), ethyl acrylate (EA), methyl acrylate (MA), and ethylhexyl acrylate (EHA). The LLNA data indicated that all 4 compounds have some potential to cause skin sensitization. In addition, the relative potencies of these acrylates were measured by derivation from LLNA dose-response analyses of EC3 values (the effective concentration of chemical required to induce a threefold increase in proliferation of draining lymph node cells compared with control values). On the basis of 1 scheme for the categorization of skin sensitization potency, BA, EA, and MA were each classified as weak sensitizers. Using the same scheme, EHA was considered a moderate sensitizer. However, it must be emphasized that the EC3 value for this chemical of 9.7% is on the borderline between moderate (<10%) and weak (>10%) categories. Thus, the judicious view is that all 4 chemicals possess relatively weak skin sensitizing potential.
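The EC3 values referred to above are conventionally obtained by linear interpolation between the two test concentrations whose stimulation indices bracket SI = 3. The sketch below implements that interpolation on invented dose-response data, not the acrylate measurements from the study.

```python
def ec3(concs, si):
    """Interpolate the concentration at which the stimulation index crosses 3.

    EC3 = c_low + (3 - SI_low) / (SI_high - SI_low) * (c_high - c_low)
    """
    for (c1, s1), (c2, s2) in zip(zip(concs, si), zip(concs[1:], si[1:])):
        if s1 < 3 <= s2:
            return c1 + (3 - s1) / (s2 - s1) * (c2 - c1)
    return None  # SI = 3 never crossed in the tested range

# Illustrative data only: SI crosses 3 between the 5% and 10% concentrations.
print(ec3([2.5, 5, 10, 25], [1.2, 2.0, 3.5, 6.1]))  # → 8.33...%
```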
Chrysin and silibinin sensitize human glioblastoma cells for arsenic trioxide.
Gülden, Michael; Appel, Daniel; Syska, Malin; Uecker, Stephanie; Wages, Franziska; Seibert, Hasso
2017-07-01
Arsenic trioxide (ATO) is highly efficient in treating acute promyelocytic leukemia. Other malignancies, however, are often less sensitive. Searching for compounds sensitizing arsenic resistant tumours for ATO the plant polyphenols, chrysin and silibinin, and the ATP binding cassette (ABC) transporter inhibitor MK-571, respectively, were investigated in human glioblastoma A-172 cells. The sensitivity of A-172 cells to ATO was characterized by a median cytotoxic concentration of 6 μM ATO. Subcytotoxic concentrations of chrysin, silibinin and MK-571, respectively, remarkably increased the sensitivity of the cells to ATO by factors of 4-6. Isobolographic analysis revealed synergistic interaction of the polyphenols and MK-571, respectively, with ATO. Sensitization by chrysin was associated with depletion of cellular glutathione and increased accumulation of arsenic. In contrast, silibinin and also MK-571 increased the accumulation of arsenic more strongly but without affecting the glutathione level. The increase of arsenic accumulation could be attributed to a decreased rate of arsenic export and, additionally, in the case of silibinin and MK-571, to an increasing amount of irreversibly accumulated arsenic. Direct interaction with ABC transporters stimulating export of glutathione and inhibiting export of arsenic, respectively, are discussed as likely mechanisms of the sensitizing activity of chrysin and silibinin. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Johnston, John D.; Parrish, Keith; Howard, Joseph M.; Mosier, Gary E.; McGinnis, Mark; Bluth, Marcel; Kim, Kevin; Ha, Hong Q.
2004-01-01
This is a continuation of a series of papers on modeling activities for JWST. The structural-thermal-optical, often referred to as "STOP", analysis process is used to predict the effect of thermal distortion on optical performance. The benchmark STOP analysis for JWST assesses the effect of an observatory slew on wavefront error. The paper begins with an overview of multi-disciplinary engineering analysis, or integrated modeling, which is a critical element of the JWST mission. The STOP analysis process is then described. This process consists of the following steps: thermal analysis, structural analysis, and optical analysis. Temperatures predicted using geometric and thermal math models are mapped to the structural finite element model in order to predict thermally-induced deformations. Motions and deformations at optical surfaces are input to optical models, and optical performance is predicted using either an optical ray trace or WFE estimation techniques based on prior ray traces or first-order optics. Following the discussion of the analysis process, results are presented based on models representing the design at the time of the System Requirements Review. In addition to baseline performance predictions, sensitivity studies are performed to assess modeling uncertainties. Of particular interest is the sensitivity of optical performance to uncertainties in temperature predictions and variations in metal properties. The paper concludes with a discussion of modeling uncertainty as it pertains to STOP analysis.
Harrington, Rachel; Lee, Edward; Yang, Hongbo; Wei, Jin; Messali, Andrew; Azie, Nkechi; Wu, Eric Q; Spalding, James
2017-01-01
Invasive aspergillosis (IA) is associated with a significant clinical and economic burden. The phase III SECURE trial demonstrated non-inferiority in clinical efficacy between isavuconazole and voriconazole. No studies have evaluated the cost-effectiveness of isavuconazole compared to voriconazole. The objective of this study was to evaluate the costs and cost-effectiveness of isavuconazole vs. voriconazole for the first-line treatment of IA from the US hospital perspective. An economic model was developed to assess the costs and cost-effectiveness of isavuconazole vs. voriconazole in hospitalized patients with IA. The time horizon was the duration of hospitalization. Length of stay for the initial admission, incidence of readmission, clinical response, overall survival rates, and experience of adverse events (AEs) came from the SECURE trial. Unit costs were from the literature. Total costs per patient were estimated, composed of drug costs, costs of AEs, and costs of hospitalizations. Incremental costs per death avoided and per additional clinical responders were reported. Deterministic and probabilistic sensitivity analyses (DSA and PSA) were conducted. Base case analysis showed that isavuconazole was associated with a $7418 lower total cost per patient than voriconazole. In both incremental costs per death avoided and incremental costs per additional clinical responder, isavuconazole dominated voriconazole. Results were robust in sensitivity analysis. Isavuconazole was cost saving and dominant vs. voriconazole in most DSA. In PSA, isavuconazole was cost saving in 80.2% of the simulations and cost-effective in 82.0% of the simulations at the $50,000 willingness to pay threshold per additional outcome. Isavuconazole is a cost-effective option for the treatment of IA among hospitalized patients. Astellas Pharma Global Development, Inc.
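The probabilistic sensitivity analysis (PSA) summarized above can be sketched as a Monte Carlo draw over incremental cost and incremental effect, counting the fraction of simulations that are cost saving and, via net monetary benefit, cost-effective at the willingness-to-pay threshold. The distributions below are illustrative assumptions anchored only to the $7418 point estimate quoted in the abstract.

```python
import numpy as np

rng = np.random.default_rng(7)

n = 10_000
d_cost = rng.normal(-7418, 4000, n)     # isavuconazole minus voriconazole, $ (assumed spread)
d_effect = rng.normal(0.05, 0.04, n)    # additional responders per patient (assumed)
wtp = 50_000                            # willingness-to-pay threshold from the abstract

cost_saving = np.mean(d_cost < 0)
nmb = wtp * d_effect - d_cost           # net monetary benefit per patient
cost_effective = np.mean(nmb > 0)

print(round(cost_saving, 2), round(cost_effective, 2))
```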
Sensitivity Analysis of Multidisciplinary Rotorcraft Simulations
NASA Technical Reports Server (NTRS)
Wang, Li; Diskin, Boris; Biedron, Robert T.; Nielsen, Eric J.; Bauchau, Olivier A.
2017-01-01
A multidisciplinary sensitivity analysis of rotorcraft simulations involving tightly coupled high-fidelity computational fluid dynamics and comprehensive analysis solvers is presented and evaluated. An unstructured sensitivity-enabled Navier-Stokes solver, FUN3D, and a nonlinear flexible multibody dynamics solver, DYMORE, are coupled to predict the aerodynamic loads and structural responses of helicopter rotor blades. A discretely-consistent adjoint-based sensitivity analysis available in FUN3D provides sensitivities arising from unsteady turbulent flows and unstructured dynamic overset meshes, while a complex-variable approach is used to compute DYMORE structural sensitivities with respect to aerodynamic loads. The multidisciplinary sensitivity analysis is conducted through integrating the sensitivity components from each discipline of the coupled system. Numerical results verify accuracy of the FUN3D/DYMORE system by conducting simulations for a benchmark rotorcraft test model and comparing solutions with established analyses and experimental data. Complex-variable implementation of sensitivity analysis of DYMORE and the coupled FUN3D/DYMORE system is verified by comparing with real-valued analysis and sensitivities. Correctness of adjoint formulations for FUN3D/DYMORE interfaces is verified by comparing adjoint-based and complex-variable sensitivities. Finally, sensitivities of the lift and drag functions obtained by complex-variable FUN3D/DYMORE simulations are compared with sensitivities computed by the multidisciplinary sensitivity analysis, which couples adjoint-based flow and grid sensitivities of FUN3D and FUN3D/DYMORE interfaces with complex-variable sensitivities of DYMORE structural responses.
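The complex-variable approach used for the DYMORE structural sensitivities is the complex-step derivative: perturb the input along the imaginary axis and read the derivative from the imaginary part of the output. Because there is no subtractive cancellation, the step can be made tiny. The function below is a stand-in for a structural response, not anything from FUN3D or DYMORE.

```python
import numpy as np

def complex_step(f, x, h=1e-30):
    # d/dx f(x) ≈ Im[f(x + i*h)] / h, exact to machine precision for
    # analytic f, with no finite-difference cancellation error.
    return f(x + 1j * h).imag / h

f = lambda x: np.exp(x) * np.sin(x)          # hypothetical response function
x0 = 0.7
exact = np.exp(x0) * (np.sin(x0) + np.cos(x0))

print(abs(complex_step(f, x0) - exact))      # agrees to machine precision
```

This is also why the paper can use complex-variable results to verify the adjoint sensitivities: both should agree to many digits, unlike ordinary finite differences.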
Yang, Zhen; Zhi, Shaotao; Feng, Zhu; Lei, Chong; Zhou, Yong
2018-01-01
A sensitive and innovative assay system based on a micro-MEMS-fluxgate sensor and immunomagnetic beads-labels was developed for the rapid analysis of C-reactive proteins (CRP). The fluxgate sensor presented in this study was fabricated through standard micro-electro-mechanical system technology. A multi-loop magnetic core made of Fe-based amorphous ribbon was employed as the sensing element, and 3-D solenoid copper coils were used to control the sensing core. Antibody-conjugated immunomagnetic microbeads were strategically utilized as signal tags to label the CRP via the specific conjugation of CRP to polyclonal CRP antibodies. Separate Au film substrates were applied as immunoplatforms to immobilize CRP-beads labels through classical sandwich assays. Detection and quantification of the CRP at different concentrations were implemented by detecting the stray field of CRP labeled magnetic beads using the newly-developed micro-fluxgate sensor. The resulting system exhibited the required sensitivity, stability, reproducibility, and selectivity. A detection limit as low as 0.002 μg/mL CRP with a linearity range from 0.002 μg/mL to 10 μg/mL was achieved, and this suggested that the proposed biosystem possesses high sensitivity. In addition to the extremely low detection limit, the proposed method can be easily manipulated and possesses a quick response time. The response time of our sensor was less than 5 s, and the entire detection period for CRP analysis can be completed in less than 30 min using the current method. Given the detection performance and other advantages such as miniaturization, excellent stability and specificity, the proposed biosensor can be considered as a potential candidate for the rapid analysis of CRP, especially for point-of-care platforms. PMID:29601593
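Linearity and detection limit claims of this kind are commonly checked with a linear calibration fit and a 3-sigma criterion on the blank. The sketch below uses synthetic numbers, not the fluxgate sensor's data; the blank standard deviation is an assumption.

```python
import numpy as np

# Synthetic calibration points: sensor response vs CRP concentration.
conc = np.array([0.01, 0.1, 1.0, 5.0, 10.0])      # ug/mL
resp = np.array([0.12, 1.05, 10.2, 49.8, 101.0])  # sensor output, arb. units

slope, intercept = np.polyfit(conc, resp, 1)       # linear calibration fit
sigma_blank = 0.02                                 # assumed blank std dev
lod = 3 * sigma_blank / slope                      # 3-sigma detection limit

print(round(slope, 2), f"{lod:.1e}")
```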
Louthrenoo, Worawit; Jatuworapruk, Kanon; Lhakum, Panomkorn; Pattamapaspong, Nuttaya
2017-05-01
To evaluate the sensitivity and specificity of the 2015 American College of Rheumatology/European League Against Rheumatism (ACR/EULAR) gout classification criteria in Thai patients presenting with acute arthritis in a real-life setting. Data were analyzed on consecutive patients presenting with arthritis of less than 2 weeks duration. Sensitivity and specificity were calculated by using the presence of monosodium urate (MSU) crystals in the synovial fluid or tissue aspirate as gold standard for gout diagnosis. Subgroup analysis was performed in patients with early disease (≤2 years), established disease (>2 years), and those without tophus. Additional analysis also was performed in non-tophaceous gout patients, and patients with acute calcium pyrophosphate dihydrate crystal arthritis were used as controls. One hundred and nine gout and 74 non-gout patients participated in this study. Full ACR/EULAR classification criteria had sensitivity and specificity of 90.2 and 90.0%, respectively; and 90.2 and 85.0%, respectively, when synovial fluid microscopy was excluded. Clinical-only criteria yielded sensitivity and specificity of 79.8 and 87.8%, respectively. The criteria performed well among patients with early and non-tophaceous disease, but had lower specificity in patients with established disease. The variation of serum uric acid level was a major limitation of the classification criteria. The ACR/EULAR classification criteria had high sensitivity and specificity in Thai patients presenting with acute arthritis, even when clinical criteria alone were used.
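Sensitivity and specificity against the MSU-crystal gold standard reduce to a 2x2 confusion table. The counts below are illustrative, chosen only to land near the 90.2%/90.0% figures quoted for the full criteria; they are not the study's actual table.

```python
def sens_spec(tp, fn, tn, fp):
    # Sensitivity = TP/(TP+FN) among gold-standard positives;
    # specificity = TN/(TN+FP) among gold-standard negatives.
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts: 109 criteria-positive gout patients, 70 non-gout.
se, sp = sens_spec(tp=98, fn=11, tn=63, fp=7)
print(round(se, 3), round(sp, 3))  # → 0.899 0.9
```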
Schlattmann, Peter; Verba, Maryna; Dewey, Marc; Walther, Mario
2015-01-01
Bivariate linear and generalized linear random effects are frequently used to perform a diagnostic meta-analysis. The objective of this article was to apply a finite mixture model of bivariate normal distributions that can be used for the construction of componentwise summary receiver operating characteristic (sROC) curves. Bivariate linear random effects and a bivariate finite mixture model are used. The latter model is developed as an extension of a univariate finite mixture model. Two examples, computed tomography (CT) angiography for ruling out coronary artery disease and procalcitonin as a diagnostic marker for sepsis, are used to estimate mean sensitivity and mean specificity and to construct sROC curves. The suggested approach of a bivariate finite mixture model identifies two latent classes of diagnostic accuracy for the CT angiography example. Both classes show high sensitivity but mainly two different levels of specificity. For the procalcitonin example, this approach identifies three latent classes of diagnostic accuracy. Here, sensitivities and specificities are quite different as such that sensitivity increases with decreasing specificity. Additionally, the model is used to construct componentwise sROC curves and to classify individual studies. The proposed method offers an alternative approach to model between-study heterogeneity in a diagnostic meta-analysis. Furthermore, it is possible to construct sROC curves even if a positive correlation between sensitivity and specificity is present. Copyright © 2015 Elsevier Inc. All rights reserved.
A Versatile PDMS/Paper Hybrid Microfluidic Platform for Sensitive Infectious Disease Diagnosis
2015-01-01
Bacterial meningitis is a serious health concern worldwide. Given that meningitis can be fatal and many meningitis cases occurred in high-poverty areas, a simple, low-cost, highly sensitive method is in great need for immediate and early diagnosis of meningitis. Herein, we report a versatile and cost-effective polydimethylsiloxane (PDMS)/paper hybrid microfluidic device integrated with loop-mediated isothermal amplification (LAMP) for the rapid, sensitive, and instrument-free detection of the main meningitis-causing bacteria, Neisseria meningitidis (N. meningitidis). The introduction of paper into the microfluidic device for LAMP reactions enables stable test results over a much longer period of time than a paper-free microfluidic system. This hybrid system also offers versatile functions, by providing not only on-site qualitative diagnostic analysis (i.e., a yes or no answer), but also confirmatory testing and quantitative analysis in laboratory settings. The limit of detection of N. meningitidis is about 3 copies per LAMP zone within 45 min, close to single-bacterium detection sensitivity. In addition, we have achieved simple pathogenic microorganism detection without a laborious sample preparation process and without the use of centrifuges. This low-cost hybrid microfluidic system provides a simple and highly sensitive approach for fast instrument-free diagnosis of N. meningitidis in resource-limited settings. This versatile PDMS/paper microfluidic platform has great potential for the point of care (POC) diagnosis of a wide range of infectious diseases, especially for developing nations. PMID:25019330
Evaluation and construction of diagnostic criteria for inclusion body myositis
Mammen, Andrew L.; Amato, Anthony A.; Weiss, Michael D.; Needham, Merrilee
2014-01-01
Objective: To use patient data to evaluate and construct diagnostic criteria for inclusion body myositis (IBM), a progressive disease of skeletal muscle. Methods: The literature was reviewed to identify all previously proposed IBM diagnostic criteria. These criteria were applied through medical records review to 200 patients diagnosed as having IBM and 171 patients diagnosed as having a muscle disease other than IBM by neuromuscular specialists at 2 institutions, and to a validating set of 66 additional patients with IBM from 2 other institutions. Machine learning techniques were used for unbiased construction of diagnostic criteria. Results: Twenty-four previously proposed IBM diagnostic categories were identified. Twelve categories all performed with high (≥97%) specificity but varied substantially in their sensitivities (11%–84%). The best performing category was European Neuromuscular Centre 2013 probable (sensitivity of 84%). Specialized pathologic features and newly introduced strength criteria (comparative knee extension/hip flexion strength) performed poorly. Unbiased data-directed analysis of 20 features in 371 patients resulted in construction of higher-performing data-derived diagnostic criteria (90% sensitivity and 96% specificity). Conclusions: Published expert consensus–derived IBM diagnostic categories have uniformly high specificity but wide-ranging sensitivities. High-performing IBM diagnostic category criteria can be developed directly from principled unbiased analysis of patient data. Classification of evidence: This study provides Class II evidence that published expert consensus–derived IBM diagnostic categories accurately distinguish IBM from other muscle disease with high specificity but wide-ranging sensitivities. PMID:24975859
Evaluation of peak-picking algorithms for protein mass spectrometry.
Bauer, Chris; Cramer, Rainer; Schuchhardt, Johannes
2011-01-01
Peak picking is an early key step in MS data analysis. We compare three commonly used approaches to peak picking and discuss their merits by means of statistical analysis. Methods investigated encompass signal-to-noise ratio, continuous wavelet transform, and a correlation-based approach using a Gaussian template. Functionality of the three methods is illustrated and discussed in a practical context using a mass spectral data set created with MALDI-TOF technology. Sensitivity and specificity are investigated using a manually defined reference set of peaks. As an additional criterion, the robustness of the three methods is assessed by a perturbation analysis and illustrated using ROC curves.
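The first of the three compared approaches, signal-to-noise-ratio peak picking, can be sketched in a few lines: estimate the noise floor robustly (here via the median absolute deviation) and keep local maxima above a multiple of it. The data below are synthetic Gaussian peaks, not a MALDI-TOF spectrum, and the thresholds are illustrative.

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 2000)
signal = 5 * np.exp(-(x - 3) ** 2 / 0.01) + 8 * np.exp(-(x - 7) ** 2 / 0.02)
y = signal + rng.normal(0, 0.3, x.size)              # peaks plus baseline noise

noise = 1.4826 * np.median(np.abs(y - np.median(y)))  # MAD -> sigma estimate
peaks, _ = find_peaks(y, height=5 * noise, distance=50)

print(np.round(x[peaks], 1))  # peaks recovered near x = 3 and x = 7
```

The CWT and template-correlation methods compared in the paper replace the fixed threshold with matched filtering across scales, which is what buys robustness on noisy spectra.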
Unger, Jakob; Schuster, Maria; Hecker, Dietmar J; Schick, Bernhard; Lohscheller, Jörg
2016-01-01
This work presents a computer-based approach to analyze the two-dimensional vocal fold dynamics of endoscopic high-speed videos, and constitutes an extension and generalization of a previously proposed wavelet-based procedure. While most approaches aim to analyze sustained phonation conditions, the proposed method allows for a clinically adequate analysis of both dynamic and sustained phonation paradigms. The analysis procedure is based on a spatio-temporal visualization technique, the phonovibrogram, which facilitates the documentation of the visible laryngeal dynamics. From the phonovibrogram, a low-dimensional set of features is computed using a principal component analysis strategy that quantifies the type of vibration patterns, irregularity, lateral symmetry and synchronicity as a function of time. Two different test bench data sets are used to validate the approach: (I) 150 healthy and pathologic subjects examined during sustained phonation; (II) 20 healthy and pathologic subjects that were examined twice: during sustained phonation and during a glissando from a low to a higher fundamental frequency. In order to assess the discriminative power of the extracted features, a Support Vector Machine is trained to distinguish between physiologic and pathologic vibrations. The results for sustained phonation sequences are compared to the previous approach. Finally, the classification performance of the stationary analysis procedure is compared to the transient analysis of the glissando maneuver. For the first test bench the proposed procedure outperformed the previous approach (proposed feature set: accuracy: 91.3%, sensitivity: 80%, specificity: 97%; previous approach: accuracy: 89.3%, sensitivity: 76%, specificity: 96%).
Comparing the classification performance of the second test bench further corroborates that analyzing transient paradigms provides clear additional diagnostic value (glissando maneuver: accuracy: 90%, sensitivity: 100%, specificity: 80%, sustained phonation: accuracy: 75%, sensitivity: 80%, specificity: 70%). The incorporation of parameters describing the temporal evolvement of vocal fold vibration clearly improves the automatic identification of pathologic vibration patterns. Furthermore, incorporating a dynamic phonation paradigm provides additional valuable information about the underlying laryngeal dynamics that cannot be derived from sustained conditions. The proposed generalized approach provides a better overall classification performance than the previous approach, and hence constitutes a new advantageous tool for an improved clinical diagnosis of voice disorders. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Baker, John; Thorpe, Ira
2012-01-01
Thoroughly studied classic space-based gravitational-wave mission concepts such as the Laser Interferometer Space Antenna (LISA) are based on laser-interferometry techniques. Ongoing developments in atom-interferometry techniques have spurred recently proposed alternative mission concepts. These different approaches can be understood on a common footing. We present a comparative analysis of how each type of instrument responds to some of the noise sources that may limit gravitational-wave mission concepts. Sensitivity to laser frequency instability is essentially the same for either approach. Spacecraft acceleration reference stability sensitivities are different, allowing smaller spacecraft separations in the atom-interferometry approach, but acceleration noise requirements are nonetheless similar. Each approach has distinct additional measurement noise issues.
Pricing policy for declining demand using item preservation technology.
Khedlekar, Uttam Kumar; Shukla, Diwakar; Namdeo, Anubhav
2016-01-01
We have designed an inventory model for seasonal products in which deterioration can be controlled by investment in item preservation technology. Demand for the product is considered price sensitive and decreases linearly. This study has shown that the profit is a concave function of the selling price, the replenishment time and the preservation cost parameter. We simultaneously determined the optimal selling price of the product, the replenishment cycle and the cost of item preservation technology. Additionally, this study has shown that there exist an optimal selling price and an optimal preservation investment that maximize the profit for every business set-up. Finally, the model is illustrated by numerical examples and a sensitivity analysis of the optimal solution with respect to major parameters.
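For the simplest case of a linearly decreasing, price-sensitive demand D(p) = a - b*p and unit cost c, the per-period profit (p - c)(a - b*p) is concave in p and maximized at p* = (a + b*c)/(2b). This is a stripped-down sketch of the concavity argument only; the paper's full model (replenishment cycle, deterioration, preservation investment) is not reproduced, and the parameter values are invented:

```python
def optimal_price(a, b, c):
    """Vertex of the concave profit (p - c)*(a - b*p): p* = (a + b*c) / (2*b)."""
    return (a + b * c) / (2 * b)

def profit(p, a, b, c):
    """Single-period profit under linear price-sensitive demand a - b*p."""
    return (p - c) * (a - b * p)

# Hypothetical parameters: base demand a=100 units, slope b=2 units/$, cost c=$10.
p_star = optimal_price(100, 2, 10)
print(p_star)  # 30.0
```

Evaluating the profit at neighboring prices (e.g. p = 29 or 31 gives 798 versus 800 at p* = 30) confirms the concave shape the abstract relies on.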
Expendable vs reusable propulsion systems cost sensitivity
NASA Technical Reports Server (NTRS)
Hamaker, Joseph W.; Dodd, Glenn R.
1989-01-01
One of the key trade studies that must be considered when studying any new space transportation hardware is whether to go reusable or expendable. An analysis is presented here for such a trade relative to a proposed Liquid Rocket Booster (LRB) being studied at MSFC. The assumptions, or inputs to the trade, were developed and integrated into a model that compares the life-cycle costs of both a reusable LRB and an expendable LRB. Sensitivities were run by varying the input variables to see their effect on total cost. In addition, a Monte Carlo simulation was run to determine the amount of cost risk that may be involved in a decision to reuse or expend.
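A Monte Carlo cost-risk analysis of this kind draws each uncertain cost input from a distribution and summarizes the resulting life-cycle-cost spread. The sketch below uses triangular distributions and entirely illustrative cost figures (none of the numbers are MSFC data):

```python
import random

def lcc_risk(n_trials=10000, seed=42):
    """Monte Carlo sketch of reusable-booster life-cycle cost risk.
    Returns (median, 90th-percentile) total cost in $B over the program.
    All cost ranges are illustrative assumptions, not study inputs."""
    random.seed(seed)
    costs = []
    for _ in range(n_trials):
        dev = random.triangular(2.0, 4.0, 3.0)        # development cost, $B
        unit = random.triangular(0.05, 0.15, 0.08)    # production cost per booster, $B
        refurb = random.triangular(0.01, 0.05, 0.02)  # refurbishment per flight, $B
        flights = 100                                 # assumed program flight count
        reuses = 10                                   # assumed flights per booster
        boosters = flights // reuses
        costs.append(dev + boosters * unit + flights * refurb)
    costs.sort()
    return costs[int(0.5 * n_trials)], costs[int(0.9 * n_trials)]

median, p90 = lcc_risk()
print(median < p90)  # True
```

The spread between the median and the 90th percentile is one simple way to express the "cost risk" the abstract refers to.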
NASA Astrophysics Data System (ADS)
Rivers, Thane D.
1992-06-01
An Automated Scanning Monochromator was developed using an Acton Research Corporation (ARC) monochromator, an Ealing photomultiplier tube, and a Macintosh PC in conjunction with LabVIEW software. The LabVIEW Virtual Instrument written to operate the ARC monochromator is a mouse-driven, user-friendly program developed for automated spectral data measurements. Resolution and sensitivity of the Automated Scanning Monochromator System were determined experimentally. The automated monochromator was then used for spectral measurements of a platinum lamp. Additionally, the reflectivity curve for a BaSO4-coated screen has been measured. Reflectivity measurements indicate a large discrepancy with expected results; further analysis of the reflectivity experiment is required for conclusive results.
Bernstein, Joshua G.W.; Mehraei, Golbarg; Shamma, Shihab; Gallun, Frederick J.; Theodoroff, Sarah M.; Leek, Marjorie R.
2014-01-01
Background A model that can accurately predict speech intelligibility for a given hearing-impaired (HI) listener would be an important tool for hearing-aid fitting or hearing-aid algorithm development. Existing speech-intelligibility models do not incorporate variability in suprathreshold deficits that are not well predicted by classical audiometric measures. One possible approach to the incorporation of such deficits is to base intelligibility predictions on sensitivity to simultaneously spectrally and temporally modulated signals. Purpose The likelihood of success of this approach was evaluated by comparing estimates of spectrotemporal modulation (STM) sensitivity to speech intelligibility and to psychoacoustic estimates of frequency selectivity and temporal fine-structure (TFS) sensitivity across a group of HI listeners. Research Design The minimum modulation depth required to detect STM applied to an 86 dB SPL four-octave noise carrier was measured for combinations of temporal modulation rate (4, 12, or 32 Hz) and spectral modulation density (0.5, 1, 2, or 4 cycles/octave). STM sensitivity estimates for individual HI listeners were compared to estimates of frequency selectivity (measured using the notched-noise method at 500, 1000, 2000, and 4000 Hz), TFS processing ability (2 Hz frequency-modulation detection thresholds for 500, 1000, 2000, and 4000 Hz carriers) and sentence intelligibility in noise (at a 0 dB signal-to-noise ratio) that were measured for the same listeners in a separate study. Study Sample Eight normal-hearing (NH) listeners and 12 listeners with a diagnosis of bilateral sensorineural hearing loss participated. Data Collection and Analysis STM sensitivity was compared between NH and HI listener groups using a repeated-measures analysis of variance.
A stepwise regression analysis compared STM sensitivity for individual HI listeners to audiometric thresholds, age, and measures of frequency selectivity and TFS processing ability. A second stepwise regression analysis compared speech intelligibility to STM sensitivity and the audiogram-based Speech Intelligibility Index. Results STM detection thresholds were elevated for the HI listeners, but only for low rates and high densities. STM sensitivity for individual HI listeners was well predicted by a combination of estimates of frequency selectivity at 4000 Hz and TFS sensitivity at 500 Hz but was unrelated to audiometric thresholds. STM sensitivity accounted for an additional 40% of the variance in speech intelligibility beyond the 40% accounted for by the audibility-based Speech Intelligibility Index. Conclusions Impaired STM sensitivity likely results from a combination of a reduced ability to resolve spectral peaks and a reduced ability to use TFS information to follow spectral-peak movements. Combining STM sensitivity estimates with audiometric threshold measures for individual HI listeners provided a more accurate prediction of speech intelligibility than audiometric measures alone. These results suggest a significant likelihood of success for an STM-based model of speech intelligibility for HI listeners. PMID:23636210
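A forward stepwise regression of the kind described adds, at each step, the candidate predictor that explains the most remaining variance. A minimal single-step sketch on invented toy data (the variable names "stm" and "audiogram" and all values are illustrative, not the study's measurements):

```python
def r_squared(x, y):
    """R^2 of a simple least-squares fit y ~ a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    return (sxy * sxy) / (sxx * syy)

def forward_step(candidates, y):
    """One step of forward stepwise selection: pick the predictor whose
    simple regression against y has the highest R^2."""
    return max(candidates, key=lambda name: r_squared(candidates[name], y))

# Toy data: the 'stm' predictor tracks intelligibility closely, 'audiogram' does not.
y = [1.0, 2.1, 2.9, 4.2, 5.0]
candidates = {"stm": [1, 2, 3, 4, 5], "audiogram": [2, 2, 3, 2, 3]}
print(forward_step(candidates, y))  # stm
```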
Role of Co-Sensitizers in Dye-Sensitized Solar Cells.
Krishna, Narra Vamsi; Krishna, Jonnadula Venkata Suman; Mrinalini, Madoori; Prasanthkumar, Seelam; Giribabu, Lingamallu
2017-12-08
Co-sensitization is a popular route towards improved efficiency and stability of dye-sensitized solar cells (DSSCs). In this context, the power conversion efficiency (PCE) values of DSSCs incorporating Ru- and porphyrin-based dyes can be improved from 8-11 % to 11-14 % by the addition of additives, co-adsorbents, and co-sensitizers that reduce aggregation and charge recombination in the device. Among these three types of supporting materials, co-sensitizers play the major role in enhancing the performance and stability of DSSCs, which is required for commercialization. In this Minireview, we highlight the role co-sensitizers play in improving the photovoltaic performance of devices containing Ru- and porphyrin-based sensitizers. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Araki, Kenichiro; Shirabe, Ken; Watanabe, Akira; Kubo, Norio; Sasaki, Shigeru; Suzuki, Hideki; Asao, Takayuki; Kuwano, Hiroyuki
2017-01-01
Although single-incision laparoscopic cholecystectomy is now widely performed in patients with cholecystitis, some cases require an additional port to complete the procedure. In this study, we focused on risk factors for requiring an additional port in this surgery. We performed single-incision cholecystectomy in 75 patients with acute cholecystitis or after cholecystitis between 2010 and 2014 at Gunma University Hospital. Surgical indications followed the TG13 guidelines. Our standard procedure for single-incision cholecystectomy routinely uses two needlescopic devices. We used logistic regression analysis to identify the risk factors associated with use of an additional full-size port (5 or 10 mm). Surgical outcomes were acceptable, with no biliary injury. Nine patients (12.0%) required an additional port, and one patient (1.3%) required conversion to open cholecystectomy because of severe adhesions around the cystic duct and common bile duct. In multivariate analysis, high C-reactive protein (CRP) values (>7.0 mg/dl) during cholecystitis attacks were significantly correlated with the need for an additional port (P = 0.009), with a sensitivity of 55.6%, specificity of 98.5%, and accuracy of 93.3%. This study indicated that the severe inflammation indicated by high CRP values during cholecystitis attacks predicts the need for an additional port. J. Med. Invest. 64: 245-249, August, 2017.
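The sensitivity/specificity/accuracy of a dichotomized predictor such as "CRP > 7.0 mg/dl" can be computed directly from the labeled values. The sketch below uses invented CRP values and outcomes (so its metrics do not match the study's 55.6%/98.5%/93.3%); it only illustrates the calculation:

```python
def cutoff_metrics(values, needed_port, cutoff):
    """Sensitivity, specificity, and accuracy of a 'value > cutoff' rule."""
    tp = sum(1 for v, y in zip(values, needed_port) if v > cutoff and y)
    fn = sum(1 for v, y in zip(values, needed_port) if v <= cutoff and y)
    tn = sum(1 for v, y in zip(values, needed_port) if v <= cutoff and not y)
    fp = sum(1 for v, y in zip(values, needed_port) if v > cutoff and not y)
    return tp / (tp + fn), tn / (tn + fp), (tp + tn) / len(values)

# Hypothetical CRP values (mg/dl) and outcomes (True = additional port needed).
crp = [1.2, 3.5, 8.1, 9.4, 2.0, 7.5, 0.8, 12.3]
needed = [False, False, True, True, False, False, False, True]
sens, spec, acc = cutoff_metrics(crp, needed, cutoff=7.0)
print(sens, spec, acc)  # 1.0 0.8 0.875
```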
NASA Astrophysics Data System (ADS)
Yue, Xiao-li; Wang, Zhao-qing; Li, Chao-rui; Yang, Zheng-yin
2018-03-01
In this paper, a simple naphthalene-based derivative (HL) has been designed and synthesized as an Al3+-selective fluorescent chemosensor based on the PET mechanism. HL exhibited high selectivity and sensitivity towards Al3+ over other commonly coexisting metal ions in ethanol, with a detection limit of 2.72 nM. The 1:1 binding stoichiometry of the complex (HL-Al3+) was determined from the Job's plot based on fluorescence titrations and the ESI-MS spectrum data. Moreover, the binding site of HL with Al3+ was confirmed by the 1H NMR titration experiment. The binding constant (Ka) of the complex (HL-Al3+) was calculated to be 5.06 × 10^4 M^-1 according to the Benesi-Hildebrand equation. In addition, the recognition of Al3+ by HL was chemically reversible upon the addition of Na2EDTA. Importantly, HL could directly and rapidly detect aluminum ions on filter paper without resorting to additional instrumental analysis.
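For 1:1 binding, the Benesi-Hildebrand treatment plots 1/(F - F0) against 1/[M]; the fit is linear, and Ka is the intercept divided by the slope. A minimal sketch on synthetic titration data (the concentrations, F0, and Finf below are invented, not the paper's data; the synthetic Ka of 5 × 10^4 M^-1 merely echoes the order of magnitude reported above):

```python
def benesi_hildebrand_ka(conc, fluor, f0):
    """Ka for 1:1 binding from a least-squares line through
    1/(F - F0) versus 1/[M]; Ka = intercept / slope."""
    xs = [1.0 / c for c in conc]
    ys = [1.0 / (f - f0) for f in fluor]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    return intercept / slope

# Synthetic titration generated from a 1:1 isotherm with Ka = 5e4 M^-1:
ka_true, f0, finf = 5e4, 10.0, 110.0
conc = [1e-5, 2e-5, 5e-5, 1e-4]
fluor = [f0 + (finf - f0) * ka_true * c / (1 + ka_true * c) for c in conc]
print(round(benesi_hildebrand_ka(conc, fluor, f0)))  # 50000
```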
Klapötke, Thomas M; Stierstorfer, Jörg
2008-08-07
The highly energetic compound 1,3,5-triaminoguanidinium dinitramide (1) was prepared in high yield (82%) by a new synthesis, the reaction of potassium dinitramide with triaminoguanidinium perchlorate. The heat of formation was calculated in an extensive computational study (CBS-4M). With this, the detonation parameters of compound 1 were computed using the EXPLO5 software: D = 8796 m s(-1), p = 299 kbar. In addition, a full characterization of the chemical properties (single-crystal X-ray diffraction, IR and Raman spectroscopy, multinuclear NMR spectroscopy, mass spectrometry and elemental analysis) as well as of the energetic characteristics (differential scanning calorimetry, thermal safety calorimetry, and impact, friction and electrostatic tests) is given in this work. Due to the high impact (2 J) and friction (24 N) sensitivities, several attempts to reduce these sensitivities were made by the addition of wax. The performance of 1 was tested in a "Koenen" steel sleeve test, resulting in a critical diameter of > or =10 mm.
Yoshioka, Toshiaki; Nagatomi, Yasushi; Harayama, Koichi; Bamba, Takeshi
2018-07-01
Polycyclic aromatic hydrocarbons (PAHs) are carcinogenic substances that are mainly generated in food during heating; therefore, the European Union (EU) has regulated the amounts of benzo[a]pyrene and PAH4 in various types of food. In addition, the Scientific Committee on Food of the EU and the Joint Food and Agriculture Organization/World Health Organization Expert Committee on Food Additives have recommended that 16 PAHs should be monitored. Since the raw materials of coffee beverages and dark beer are roasted during manufacture, monitoring these 16 PAHs in such products is of great importance. Meanwhile, supercritical fluid chromatography (SFC) is a separation method that has garnered attention in recent years as a complement to liquid and gas chromatography. We therefore developed a rapid, high-sensitivity analytical method for the above-mentioned 16 PAHs in coffee beverages and dark beer involving supercritical fluid chromatography/atmospheric pressure chemical ionization-mass spectrometry (SFC/APCI-MS) and simple sample preparation. In this study, we developed a novel analytical technique that increased the sensitivity of MS detection by varying the back-pressure in SFC depending on the elution of the PAHs. In addition, analysis of commercially available coffee and dark beer samples in Japan suggested that the risk of these products containing the 16 PAHs may be low. Copyright © 2018 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Sheikholeslami, R.; Hosseini, N.; Razavi, S.
2016-12-01
Modern earth and environmental models are usually characterized by a large parameter space and high computational cost. These two features prevent effective implementation of sampling-based analyses such as sensitivity and uncertainty analysis, which require running these computationally expensive models many times to adequately explore the parameter/problem space. Therefore, developing efficient sampling techniques that scale with the size of the problem, the computational budget, and users' needs is essential. In this presentation, we propose an efficient sequential sampling strategy, called Progressive Latin Hypercube Sampling (PLHS), which provides increasingly improved coverage of the parameter space while satisfying pre-defined requirements. The original Latin hypercube sampling (LHS) approach generates the entire sample set in one stage; in contrast, PLHS generates a series of smaller sub-sets (also called 'slices') such that: (1) each sub-set is a Latin hypercube and achieves maximum stratification in any one-dimensional projection; (2) the progressive addition of sub-sets remains a Latin hypercube; and thus (3) the entire sample set is a Latin hypercube. Therefore, it has the capability to preserve the intended sampling properties throughout the sampling procedure. PLHS is deemed advantageous over the existing methods, particularly because it nearly avoids over- or under-sampling. Through different case studies, we show that PLHS has multiple advantages over one-stage sampling approaches, including improved convergence and stability of the analysis results with fewer model runs. In addition, PLHS can help to minimize the total simulation time by only running the simulations necessary to achieve the desired level of quality (e.g., accuracy and convergence rate).
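The defining Latin hypercube property, one sample per stratum in every one-dimensional projection, can be shown in a few lines. The sketch below implements plain one-stage LHS and verifies that property; it does not reproduce the slicing logic of PLHS itself, only the invariant that PLHS preserves as slices accumulate:

```python
import random

def latin_hypercube(n, dims, seed=0):
    """Plain one-stage LHS: n points in [0,1)^dims with exactly one point
    per stratum [k/n, (k+1)/n) in every 1-D projection."""
    rng = random.Random(seed)
    perms = []
    for _ in range(dims):
        p = list(range(n))
        rng.shuffle(p)        # which stratum each point occupies in this dimension
        perms.append(p)
    # Jitter each point uniformly within its assigned stratum.
    return [[(perms[d][i] + rng.random()) / n for d in range(dims)]
            for i in range(n)]

pts = latin_hypercube(8, 2)
# Each dimension's 8 strata contain exactly one point:
for d in range(2):
    strata = sorted(int(p[d] * 8) for p in pts)
    print(strata == list(range(8)))  # True
```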
Shtessel, Maria; Lobell, Elizabeth; Hudes, Golda; Rosenstreich, David; de Vos, Gabriele
2017-01-01
Background: Allergists commonly perform intradermal skin testing (IDST) after negative skin-prick testing (SPT) to comprehensively diagnose environmental allergic sensitization. However, with the availability of modern methods to detect serum-specific immunoglobulin E (ssIgE), it is unclear if ssIgE testing could substitute for IDST. Objective: To determine the efficacy of ssIgE testing and IDST when added to SPT in diagnosing environmental allergic sensitizations. Methods: SPT, IDST, and ssIgE testing to nine common environmental allergens were analyzed in 75 patients with oculonasal symptoms who presented to our allergy clinics in the Bronx, New York, between January 2014 and May 2015. Results: A total of 651 SPT and 499 ssIgE tests were independently performed and revealed 162 (25%) and 127 (25%) sensitizations, respectively. When SPT results were negative, IDST results revealed 108 of 452 additional sensitizations (24%). In contrast, when SPT results were negative, ssIgE test results only revealed 9% additional sensitizations. When both SPT and IDST results were negative, ssIgE testing only detected 3% of additional sensitizations, and ssIgE levels were typically low in these cases (median, 1.25 kU/L; range, 0.357–4.47 kU/L). When both SPT and ssIgE test results were negative, IDST results detected 15% additional sensitizations. Conclusion: IDST detected more additional environmental sensitizations compared with ssIgE testing. IDST, therefore, may be useful when the SPT and/or ssIgE testing results were negative, but the exposure history indicated relevant allergic sensitization. Serology added only a little more information if both SPT and IDST results were negative but may be useful in combination with SPT if IDST cannot be performed. PMID:28583228
Construction of MoS2/Si nanowire array heterojunction for ultrahigh-sensitivity gas sensor
NASA Astrophysics Data System (ADS)
Wu, Di; Lou, Zhenhua; Wang, Yuange; Xu, Tingting; Shi, Zhifeng; Xu, Junmin; Tian, Yongtao; Li, Xinjian
2017-10-01
Few-layer MoS2 thin films were synthesized by a two-step thermal decomposition process. In addition, MoS2/Si nanowire array (SiNWA) heterojunctions exhibiting excellent gas sensing properties were constructed and investigated. Further analysis reveals that such MoS2/SiNWA heterojunction devices are highly sensitive to nitric oxide (NO) gas under reverse voltages at room temperature (RT). The gas sensor demonstrated a minimum detection limit of 10 ppb, which represents the lowest value obtained for MoS2-based sensors, as well as an ultrahigh response of 3518% (50 ppm NO, ∼50% RH), with good repeatability and selectivity of the MoS2/SiNWA heterojunction. The sensing mechanisms were also discussed. The performance of the MoS2/SiNWA heterojunction gas sensors is superior to previous results, revealing that they have great potential in applications relating to highly sensitive gas sensors.
Johnson, Mitchell E; Landers, James P
2004-11-01
Laser-induced fluorescence is an extremely sensitive method for detection in chemical separations. In addition, it is well suited to detection in small volumes, and as such is widely used for capillary electrophoresis and microchip-based separations. This review explores the detailed instrumental conditions required for sub-zeptomole, sub-picomolar detection limits. The key to achieving the best sensitivity is to use an excitation and emission volume that is matched to the separation system and that, simultaneously, keeps scattering and luminescence background to a minimum. We discuss how this is accomplished with confocal detection, 90-degree on-capillary detection, and sheath-flow detection. It is shown that each of these methods has its advantages and disadvantages, but that all can be used to produce extremely sensitive detectors for capillary- or microchip-based separations. Analysis of these capabilities allows prediction of the optimal means of achieving ultrasensitive detection on microchips.
Acute vestibular syndrome: clinical head impulse test versus video head impulse test.
Celebisoy, Nese
2018-03-05
The HINTS battery, comprising the head impulse test (HIT), nystagmus assessment, and the test of skew, is the critical bedside examination for differentiating acute unilateral peripheral vestibulopathy from posterior circulation stroke (PCS) in acute vestibular syndrome (AVS). The component of the battery with the highest sensitivity has been reported to be the horizontal HIT, whereas skew deviation is described as the most specific but insensitive sign of PCS. Video-oculography-based HIT (vHIT) may add further power in making this differentiation. If vHIT is undertaken, then both gain and gain asymmetry should be taken into account, as anterior inferior cerebellar artery (AICA) strokes are at risk of being misclassified on the basis of VOR gain alone. Further refinement of video technology, increased operator proficiency, and incorporation of saccade analysis will increase the sensitivity of vHIT for PCS diagnosis. For the time being, clinical examination seems adequate for frontline diagnostic evaluation of AVS.
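One commonly used way to express the gain asymmetry the abstract recommends weighing alongside raw gain is a percent asymmetry index between the two sides. The function and the gain values below are illustrative, not drawn from the abstract:

```python
def vor_gain_asymmetry(gain_left, gain_right):
    """Percent asymmetry between left and right VOR gains:
    100 * |gL - gR| / (gL + gR). An illustrative index, not a clinical cutoff."""
    return 100.0 * abs(gain_left - gain_right) / (gain_left + gain_right)

# Hypothetical gains: both sides near-normal in absolute terms,
# yet clearly asymmetric, the situation where gain alone can mislead.
print(round(vor_gain_asymmetry(0.95, 0.75), 1))  # 11.8
```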
Analysis of image formation in optical coherence elastography using a multiphysics approach
Chin, Lixin; Curatolo, Andrea; Kennedy, Brendan F.; Doyle, Barry J.; Munro, Peter R. T.; McLaughlin, Robert A.; Sampson, David D.
2014-01-01
Image formation in optical coherence elastography (OCE) results from a combination of two processes: the mechanical deformation imparted to the sample and the detection of the resulting displacement using optical coherence tomography (OCT). We present a multiphysics model of these processes, validated by simulating strain elastograms acquired using phase-sensitive compression OCE and demonstrating close correspondence with experimental results. Using the model, we present evidence that the approximation commonly used to infer sample displacement in phase-sensitive OCE is invalidated for smaller deformations than previously considered, significantly affecting the measurement precision, as quantified by the displacement sensitivity and the elastogram signal-to-noise ratio. We show that the precision of OCE is affected not only by OCT shot noise, as is usually considered, but additionally by phase decorrelation due to the sample deformation. This multiphysics model provides a general framework that could be used to compare and contrast different OCE techniques. PMID:25401007
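The displacement approximation in question is the standard phase-sensitive OCT relation d = lambda0 * dphi / (4 * pi * n), linking axial displacement to the measured interferometric phase shift. A minimal sketch (the wavelength and refractive index below are typical assumed values, not parameters from this paper):

```python
import math

def displacement_from_phase(dphi_rad, wavelength_m, n_refr=1.4):
    """Standard phase-sensitive OCT displacement estimate:
    d = lambda0 * dphi / (4 * pi * n). The abstract's point is that this
    approximation can break down for smaller deformations than assumed."""
    return wavelength_m * dphi_rad / (4.0 * math.pi * n_refr)

# A pi/2 phase shift at an assumed 1300 nm center wavelength, tissue index 1.4:
d = displacement_from_phase(math.pi / 2, 1300e-9)
print(round(d * 1e9, 1))  # 116.1 (nanometres)
```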
Colorado River basin sensitivity to disturbance impacts
NASA Astrophysics Data System (ADS)
Bennett, K. E.; Urrego-Blanco, J. R.; Jonko, A. K.; Vano, J. A.; Newman, A. J.; Bohn, T. J.; Middleton, R. S.
2017-12-01
The Colorado River basin is an important river for the food-energy-water nexus in the United States and is projected to change under future scenarios of increased CO2 emissions and warming. Streamflow estimates that consider the climate impacts of this warming are often provided using modeling tools which rely on uncertain inputs. To fully understand the impacts on streamflow, sensitivity analysis can help determine how models respond under changing disturbances such as climate and vegetation. In this study, we conduct a global sensitivity analysis with a space-filling Latin hypercube sampling of the model parameter space and statistical emulation of the Variable Infiltration Capacity (VIC) hydrologic model to relate changes in runoff, evapotranspiration, snow water equivalent and soil moisture to model parameters in VIC. Additionally, we examine sensitivities of basin-wide model simulations using an approach that incorporates changes in temperature, precipitation and vegetation to consider impact responses for snow-dominated headwater catchments, low-elevation arid basins, and the upper and lower river basins. We find that for the Colorado River basin, snow-dominated regions are more sensitive to uncertainties. Newly identified parameter sensitivities include runoff/evapotranspiration sensitivity to albedo, while changes in snow water equivalent are sensitive to canopy fraction and Leaf Area Index (LAI). Basin-wide streamflow sensitivities to precipitation, temperature and vegetation vary seasonally and between sub-basins, with the largest sensitivities for smaller, snow-driven headwater systems where forests are dense. For a major headwater basin, the streamflow impact of 1 °C of warming equaled that of a 30% loss of forest cover, while a 10% precipitation loss equaled a 90% forest cover decline.
Scenarios utilizing multiple disturbances led to unexpected results, where changes could either magnify or diminish extremes such as low and peak flows and streamflow timing, depending on the strength and direction of the forcing. These results indicate the importance of understanding model sensitivities under disturbance impacts in order to manage these shifts, plan for future water resource changes, and determine how the impacts will affect the sustainability and adaptability of food-energy-water systems.
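The basic building block of such a sensitivity screen is an elementary effect: perturb one input, rerun the model, and record the change in output. The sketch below is a much-simplified one-at-a-time stand-in for the global, emulator-based analysis described above; the toy model and all coefficients are invented, not VIC:

```python
def elementary_effects(model, base, deltas):
    """One-at-a-time sensitivity sketch: perturb each parameter by its delta
    and record the resulting change in model output."""
    y0 = model(base)
    effects = {}
    for name, delta in deltas.items():
        perturbed = dict(base)
        perturbed[name] += delta
        effects[name] = model(perturbed) - y0
    return effects

# Toy "runoff" model (illustrative coefficients only): strongly sensitive
# to precipitation, mildly and negatively sensitive to temperature.
def toy_runoff(p):
    return 0.5 * p["precip"] - 5.0 * p["temp"]

eff = elementary_effects(toy_runoff,
                         {"precip": 500.0, "temp": 10.0},
                         {"precip": 50.0, "temp": 1.0})
print(eff["precip"], eff["temp"])  # 25.0 -5.0
```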
Discrimination Enhancement with Transient Feature Analysis of a Graphene Chemical Sensor.
Nallon, Eric C; Schnee, Vincent P; Bright, Collin J; Polcha, Michael P; Li, Qiliang
2016-01-19
A graphene chemical sensor is subjected to a set of structurally and chemically similar hydrocarbon compounds consisting of toluene, o-xylene, p-xylene, and mesitylene. The fractional change in resistance of the sensor upon exposure to these compounds exhibits a similar response magnitude among compounds, whereas large variation is observed within repetitions for each compound, causing a response overlap. Therefore, traditional features that depend on the maximum response change will cause confusion during further discrimination and classification analysis. More robust features that are less sensitive to concentration, sampling, and drift variability would provide higher-quality information. In this work, we have explored the advantage of using transient-based exponential fitting coefficients to enhance the discrimination of similar compounds. The advantage of such feature analysis for discriminating each compound is evaluated using principal component analysis (PCA). In addition, machine-learning-based classification algorithms were used to compare the prediction accuracies when using fitting coefficients as features. The additional features greatly enhanced the discrimination between compounds in PCA and also improved the prediction accuracy by 34% when using linear discriminant analysis.
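Extracting transient fitting coefficients typically means fitting an exponential to each response curve and using the fitted amplitude and rate, rather than the raw peak, as features. A minimal sketch using a log-linear fit of y = A*exp(-k*t) on synthetic data (the abstract does not specify its exact fitting form, so this single-exponential model and the values below are assumptions):

```python
import math

def exp_fit(t, y):
    """Log-linear least-squares fit of y = A*exp(-k*t); returns (A, k).
    The fitted (A, k) pair serves as a transient feature vector."""
    logs = [math.log(v) for v in y]
    n = len(t)
    mx, my = sum(t) / n, sum(logs) / n
    slope = sum((x - mx) * (v - my) for x, v in zip(t, logs)) / \
            sum((x - mx) ** 2 for x in t)
    return math.exp(my - slope * mx), -slope

# Synthetic transient with amplitude A=0.08 (fractional dR/R) and k=0.5 s^-1:
t = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [0.08 * math.exp(-0.5 * ti) for ti in t]
A, k = exp_fit(t, y)
print(round(A, 3), round(k, 3))  # 0.08 0.5
```

Because k characterizes the response kinetics rather than its magnitude, it is the kind of feature that is less sensitive to concentration and drift, which is the motivation stated above.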
2007-01-01
multi-disciplinary optimization with uncertainty. Robust optimization and sensitivity analysis is usually used when an optimization model has...formulation is introduced in Section 2.3. We briefly discuss several definitions used in the sensitivity analysis in Section 2.4. Following in...2.5. 2.4 SENSITIVITY ANALYSIS In this section, we discuss several definitions used in Chapter 5 for Multi-Objective Sensitivity Analysis . Inner
Palbociclib in hormone receptor positive advanced breast cancer: A cost-utility analysis.
Raphael, J; Helou, J; Pritchard, K I; Naimark, D M
2017-11-01
The addition of palbociclib to letrozole improves progression-free survival in the first-line treatment of hormone receptor positive advanced breast cancer (ABC). This study assesses the cost-utility of palbociclib from the Canadian healthcare payer perspective. A probabilistic discrete event simulation (DES) model was developed and parameterised with data from the PALOMA 1 and 2 trials and other sources. The incremental cost per quality-adjusted life-month (QALM) gained for palbociclib was calculated. A time horizon of 15 years was used in the base case, with costs and effectiveness discounted at 5% annually. Time-to-progression and time-to-death were derived from a Weibull and an exponential distribution, respectively. Expected costs were based on Ontario fees and other sources. Probabilistic sensitivity analyses were conducted to account for parameter uncertainty. Compared to letrozole, the addition of palbociclib provided an additional 14.7 QALM at an incremental cost of $161,508. The resulting incremental cost-effectiveness ratio was $10,999/QALM gained. Assuming a willingness-to-pay (WTP) of $4167/QALM, the probability that palbociclib would be cost-effective was 0%. Cost-effectiveness acceptability curves derived from a probabilistic sensitivity analysis showed that at a WTP of $11,000/QALM gained, the probability that palbociclib would be cost-effective was 50%. The addition of palbociclib to letrozole is therefore unlikely to be cost-effective for the treatment of ABC from a Canadian healthcare perspective at its current price. While ABC patients derive a meaningful clinical benefit from palbociclib, consideration should be given to increasing the WTP threshold and reducing the drug price to render this strategy more affordable. Copyright © 2017 Elsevier Ltd. All rights reserved.
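The incremental cost-effectiveness ratio (ICER) is simply the incremental cost divided by the incremental effectiveness. Applying it to the rounded figures quoted above gives roughly $10,987/QALM, close to the published $10,999/QALM; the small gap reflects rounding of the inputs, not a different formula:

```python
def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per unit of
    effectiveness gained (here, $ per QALM)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Incremental figures reported above: $161,508 for 14.7 QALM gained.
print(round(icer(161508, 0, 14.7, 0)))  # 10987
```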
Effects of iron on optical properties of dissolved organic matter.
Poulin, Brett A; Ryan, Joseph N; Aiken, George R
2014-09-02
Iron is a source of interference in the spectroscopic analysis of dissolved organic matter (DOM); however, its effects on commonly employed ultraviolet and visible (UV-vis) light absorption and fluorescence measurements are poorly defined. Here, we describe the effects of iron(II) and iron(III) on the UV-vis absorption and fluorescence of solutions containing two DOM fractions and two surface water samples. In each case, regardless of DOM composition, UV-vis absorption increased linearly with increasing iron(III). Correction factors were derived using iron(III) absorption coefficients determined at wavelengths commonly used to characterize DOM. Iron(III) addition increased specific UV absorbances (SUVA) and decreased the absorption ratios (E2:E3) and spectral slope ratios (SR) of the DOM samples. Both iron(II) and iron(III) quenched DOM fluorescence at pH 6.7. The degree and region of fluorescence quenching varied with the iron:DOC concentration ratio, DOM composition, and pH. Regions of the fluorescence spectra associated with greater DOM conjugation were more susceptible to iron quenching, and DOM fluorescence indices were sensitive to the presence of both forms of iron. Analyses of the excitation-emission matrices using 7- and 13-component parallel factor analysis (PARAFAC) models showed low PARAFAC sensitivity to iron addition.
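Because the iron(III) contribution to absorbance is linear, the correction described above amounts to subtracting a Beer-Lambert-style term at the wavelength of interest. A minimal sketch (the absorption coefficient and concentrations below are illustrative placeholders, not the coefficients derived in the paper):

```python
def correct_absorbance(a_measured, fe3_molar, eps_fe):
    """Subtract the linear iron(III) contribution from a DOM absorbance reading,
    given an iron(III) absorption coefficient at the same wavelength
    (assumes a 1 cm path length)."""
    return a_measured - eps_fe * fe3_molar

# Hypothetical values: A(254 nm) = 0.310, [Fe(III)] = 2e-5 M,
# eps_Fe(254 nm) = 1500 M^-1 cm^-1 (illustrative, not from the study).
print(round(correct_absorbance(0.310, 2e-5, 1500.0), 3))  # 0.28
```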
Vicarious Social Touch Biases Gazing at Faces and Facial Emotions.
Schirmer, Annett; Ng, Tabitha; Ebstein, Richard P
2018-02-01
Research has suggested that interpersonal touch promotes social processing and other-concern, and that women may respond to it more sensitively than men. In this study, we asked whether this phenomenon would extend to third-party observers who experience touch vicariously. In an eye-tracking experiment, participants (N = 64, 32 men and 32 women) viewed prime and target images with the intention of remembering them. Primes comprised line drawings of dyadic interactions with and without touch. Targets comprised two faces shown side-by-side, with one being neutral and the other being happy or sad. Analysis of prime fixations revealed that faces in touch interactions attracted longer gazing than faces in no-touch interactions. In addition, touch enhanced gazing at the area of touch in women but not men. Analysis of target fixations revealed that touch priming increased looking at both faces immediately after target onset, and subsequently, at the emotional face in the pair. Sex differences in target processing were nonsignificant. Together, the present results imply that vicarious touch biases visual attention to faces and promotes emotion sensitivity. In addition, they suggest that, compared with men, women are more aware of tactile exchanges in their environment. As such, vicarious touch appears to share important qualities with actual physical touch. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Al Okab, Riyad Ahmed
2013-02-01
Green analytical methods using Cisapride (CPE) as a green analytical reagent were investigated in this work. Rapid, simple, and sensitive spectrophotometric methods for the determination of bromate in water samples, bread, and flour additives were developed. The proposed methods are based on the oxidative coupling between phenoxazine and Cisapride in the presence of bromate to form a red colored product with λmax at 520 nm. Phenoxazine, Cisapride, and their reaction products were found to be environmentally friendly under the optimum experimental conditions. The method obeys Beer's law in the concentration range 0.11-4.00 µg ml(-1), with a molar absorptivity of 1.41 × 10(4) L mol(-1) cm(-1). All variables were optimized, and the presented reaction sequences were applied to the analysis of bromate in water, bread, and flour additive samples. The performance of these methods was evaluated in terms of Student's t-test and the variance ratio F-test to establish the significance of the proposed methods relative to the reference method. The combination of pharmaceutical drug reagents at low concentrations creates some unique green chemical analyses.
NASA Astrophysics Data System (ADS)
Yu, Maolin; Du, R.
2005-08-01
Sheet metal stamping is one of the most commonly used manufacturing processes, and hence much research has been carried out for economic gain. Searching through the literature, however, one finds that many problems remain unsolved. For example, it is well known that for the same press, the same workpiece material, and the same set of dies, product quality may vary owing to a number of factors, such as inhomogeneity of the workpiece material, loading error, and lubrication. At present, few methods can predict this quality variation, let alone identify what contributes to it. As a result, trial-and-error is still needed on the shop floor, causing additional cost and time delay. This paper introduces a new approach to predict product quality variation and identify the sensitive design/process parameters. The new approach is based on a combination of inverse Finite Element Modeling (FEM) and Monte Carlo simulation (more specifically, the Latin Hypercube Sampling (LHS) approach). With acceptable accuracy, the inverse FEM (also called one-step FEM) requires much less computation than the usual incremental FEM and hence can be used to predict quality variations under various conditions. LHS is a statistical sampling method through which sensitivity analysis can be carried out. The result of the sensitivity analysis has clear physical meaning and can be used to optimize the die design and/or the process design. Two simulation examples are presented: drawing a rectangular box and drawing a two-step rectangular box.
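The LHS idea, one draw per equal-probability stratum per input with the strata pairings shuffled, can be sketched in a few lines. The parameter names and bounds below are illustrative stand-ins, not the paper's actual stamping inputs:

```python
import random

def latin_hypercube(n, bounds, seed=0):
    """Latin Hypercube Sampling: for each input dimension, draw exactly one
    point from each of n equal-probability strata, then shuffle the strata
    pairings so the inputs are sampled independently of one another."""
    rng = random.Random(seed)
    columns = []
    for lo, hi in bounds:
        # one uniform draw inside each of the n equal-width strata
        pts = [lo + (hi - lo) * (i + rng.random()) / n for i in range(n)]
        rng.shuffle(pts)
        columns.append(pts)
    # zip the shuffled columns back into n sample points
    return [tuple(col[i] for col in columns) for i in range(n)]

# Example: 10 samples over two hypothetical stamping inputs, e.g. blank-holder
# force (kN) and a friction coefficient -- names are illustrative only.
design = latin_hypercube(10, [(100.0, 300.0), (0.05, 0.15)])
```

Each simulated quality outcome can then be regressed against the design columns to rank parameter sensitivities.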
Mapping anhedonia onto reinforcement learning: a behavioural meta-analysis
2013-01-01
Background Depression is characterised partly by blunted reactions to reward. However, tasks probing this deficiency have not distinguished insensitivity to reward from insensitivity to the prediction errors for reward that determine learning and are putatively reported by the phasic activity of dopamine neurons. We attempted to disentangle these factors with respect to anhedonia in the context of stress, Major Depressive Disorder (MDD), Bipolar Disorder (BPD) and a dopaminergic challenge. Methods Six behavioural datasets involving 392 experimental sessions were subjected to a model-based, Bayesian meta-analysis. Participants across all six studies performed a probabilistic reward task that used an asymmetric reinforcement schedule to assess reward learning. Healthy controls were tested under baseline conditions, stress or after receiving the dopamine D2 agonist pramipexole. In addition, participants with current or past MDD or BPD were evaluated. Reinforcement learning models isolated the contributions of variation in reward sensitivity and learning rate. Results MDD and anhedonia reduced reward sensitivity more than they affected the learning rate, while a low dose of the dopamine D2 agonist pramipexole showed the opposite pattern. Stress led to a pattern consistent with a mixed effect on reward sensitivity and learning rate. Conclusion Reward-related learning reflected at least two partially separable contributions. The first related to phasic prediction error signalling, and was preferentially modulated by a low dose of the dopamine agonist pramipexole. The second related directly to reward sensitivity, and was preferentially reduced in MDD and anhedonia. Stress altered both components. Collectively, these findings highlight the contribution of model-based reinforcement learning meta-analysis for dissecting anhedonic behavior. PMID:23782813
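The dissociation the meta-analysis isolates can be illustrated with a bare Rescorla-Wagner update (a didactic sketch, not the authors' fitted model): reward sensitivity scales the asymptote of the learned value, while the learning rate controls only how quickly that asymptote is approached.

```python
def expected_value(rewards, reward_sensitivity, learning_rate):
    """Rescorla-Wagner update: V += alpha * (rho * r - V).
    rho (reward sensitivity) sets the asymptote the value estimate approaches;
    alpha (learning rate) sets how quickly it gets there."""
    v = 0.0
    for r in rewards:
        v += learning_rate * (reward_sensitivity * r - v)
    return v

rich = [1] * 200  # a long run of rewarded trials
# Blunted reward sensitivity (MDD-like pattern) lowers the asymptote even
# after extensive learning; a lower learning rate mainly slows acquisition.
print(expected_value(rich, 1.0, 0.3), expected_value(rich, 0.5, 0.3))
```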
Parameter sensitivity analysis of a 1-D cold region lake model for land-surface schemes
NASA Astrophysics Data System (ADS)
Guerrero, José-Luis; Pernica, Patricia; Wheater, Howard; Mackay, Murray; Spence, Chris
2017-12-01
Lakes might be sentinels of climate change, but the uncertainty in their main feedback to the atmosphere - heat-exchange fluxes - is often not considered within climate models. Additionally, these fluxes are seldom measured, hindering critical evaluation of model output. Analysis of the Canadian Small Lake Model (CSLM), a one-dimensional integral lake model, was performed to assess its ability to reproduce diurnal and seasonal variations in heat fluxes and the sensitivity of simulated fluxes to changes in model parameters, i.e., turbulent transport parameters and the light extinction coefficient (Kd). A C++ open-source software package, Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), was used to perform sensitivity analysis (SA) and identify the parameters that dominate model behavior. The generalized likelihood uncertainty estimation (GLUE) was applied to quantify the fluxes' uncertainty, comparing daily-averaged eddy-covariance observations to the output of CSLM. Seven qualitative and two quantitative SA methods were tested, and the posterior likelihoods of the modeled parameters, obtained from the GLUE analysis, were used to determine the dominant parameters and the uncertainty in the modeled fluxes. Despite the ubiquity of the equifinality issue - different parameter-value combinations yielding equivalent results - the answer to the question was unequivocal: Kd, a measure of how much light penetrates the lake, dominates sensible and latent heat fluxes, and the uncertainty in their estimates is strongly related to the accuracy with which Kd is determined. This is important since accurate and continuous measurements of Kd could reduce modeling uncertainty.
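The GLUE machinery, zero weight for non-behavioural runs and likelihood-weighted quantiles for the flux uncertainty bounds, reduces to a few lines. The inverse-error likelihood and threshold used here are illustrative choices; GLUE itself leaves the likelihood measure to the analyst.

```python
def glue_weights(errors, threshold):
    """GLUE-style likelihood weights: runs with error above the behavioural
    threshold get zero weight; the rest are weighted by inverse error and
    normalized to sum to one. (Inverse error is one of many valid choices.)"""
    likes = [1.0 / e if e <= threshold else 0.0 for e in errors]
    total = sum(likes)
    return [l / total for l in likes]

def weighted_quantile(values, weights, q):
    """Quantile of the weighted output ensemble, e.g. q=0.05 and q=0.95
    for the uncertainty bounds on a simulated heat flux."""
    cum = 0.0
    for v, w in sorted(zip(values, weights)):
        cum += w
        if cum >= q:
            return v
    return max(values)

# Three hypothetical model runs: the third exceeds the behavioural threshold.
w = glue_weights([1.0, 2.0, 10.0], threshold=5.0)
```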
Xu, Li-Qian; Yang, Yun-Mei; Tong, Hong; Xu, Chang-Fu
2018-04-01
Although cardiac troponin is the cornerstone of the diagnosis of acute myocardial infarction (AMI), its accuracy is still suboptimal in the early hours after chest pain onset. Due to its small size, heart-type fatty acid-binding protein (H-FABP) has been reported to be accurate in the diagnosis of AMI; however, this remains undetermined. The aim was to investigate the diagnostic performance of H-FABP alone and in conjunction with high-sensitivity troponin (hs-Tn) within 6 hours of symptom onset. Furthermore, accuracy in a 0h/3h algorithm was also assessed. The Medline and EMBASE databases were searched; sensitivity, specificity, and area under the ROC curve (AUC) were used as measures of diagnostic accuracy. Data were pooled using bivariate modelling, and threshold effect and publication bias analyses were applied to assess heterogeneity. Twenty-two studies comprising 6602 patients were included; the pooled sensitivity, specificity, and AUC of H-FABP within 6 hours were 0.75 (0.68-0.81), 0.81 (0.75-0.86), and 0.85 (0.82-0.88), respectively. Similar sensitivity (0.76, 0.69-0.82), specificity (0.80, 0.71-0.87), and AUC (0.85, 0.82-0.88) of H-FABP were observed in the 4185 (63%) patients evaluated in the 0h/3h algorithm. The additional use of H-FABP improved the sensitivity of hs-Tn alone but worsened its specificity (all p<0.001), and resulted in no improvement of the AUC (p>0.99). There was no threshold effect (p=0.18) or publication bias (p=0.31) in this study. H-FABP has modest accuracy for the early diagnosis of AMI within 3 and 6 hours of symptom onset. The incremental value of H-FABP when added to hs-Tn seemed much smaller and was of uncertain clinical significance in patients with suspected AMI. Routine use of H-FABP in early presentation does not seem warranted. Copyright © 2017 Australian and New Zealand Society of Cardiac and Thoracic Surgeons (ANZSCTS) and the Cardiac Society of Australia and New Zealand (CSANZ). Published by Elsevier B.V. All rights reserved.
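Why adding a second marker raises sensitivity but lowers specificity can be seen from an "either-positive" decision rule under an idealized conditional-independence assumption. The figures below are illustrative, not trial-derived:

```python
def combine_or(sens_a, spec_a, sens_b, spec_b):
    """'Positive if either test is positive' rule, assuming the two markers
    are conditionally independent given disease status (an idealization).
    Under this rule sensitivity can only rise and specificity can only fall."""
    sens = 1.0 - (1.0 - sens_a) * (1.0 - sens_b)  # miss only if both miss
    spec = spec_a * spec_b                         # correct only if both negative
    return sens, spec

# Illustrative figures for a troponin-like and an H-FABP-like marker:
sens, spec = combine_or(0.85, 0.92, 0.75, 0.81)
```

The combined rule trades specificity for sensitivity, which is the qualitative pattern the meta-analysis reports for hs-Tn plus H-FABP.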
Characterization of Metal Powders Used for Additive Manufacturing
Slotwinski, JA; Garboczi, EJ; Stutzman, PE; Ferraris, CF; Watson, SS; Peltz, MA
2014-01-01
Additive manufacturing (AM) techniques can produce complex, high-value metal parts, with potential applications as critical parts such as those found in aerospace components. The production of AM parts with consistent and predictable properties requires input materials (e.g., metal powders) with known and repeatable characteristics, which in turn requires standardized measurement methods for powder properties. First, based on our previous work, we assess the applicability of current standardized methods of powder characterization to metal AM powders. Then we present the results of systematic studies carried out on two different powder materials used for additive manufacturing: stainless steel and cobalt-chrome. The characterization of these powders is important in NIST efforts to develop appropriate measurements and standards for additive materials and to document the properties of powders used in a NIST-led additive manufacturing material round robin. An extensive array of characterization techniques was applied to these two powders, in both virgin and recycled states. The physical techniques included laser diffraction particle size analysis, X-ray computed tomography for size and shape analysis, and optical and scanning electron microscopy. Techniques sensitive to structure and chemistry were also employed, including X-ray diffraction, energy-dispersive X-ray analysis using the X-rays generated during scanning electron microscopy, and X-ray photoelectron spectroscopy. The results of these analyses show how virgin powder changes after being exposed to and recycled from one or more Direct Metal Laser Sintering (DMLS) additive manufacturing build cycles. In addition, these findings can give insight into the additive manufacturing process itself. PMID:26601040
NASA Astrophysics Data System (ADS)
Muneepeerakul, Chitsomanus; Huffaker, Ray; Munoz-Carpena, Rafael
2016-04-01
Weather index insurance promises financial resilience to farmers struck by harsh weather conditions, with swift compensation at an affordable premium thanks to its minimal adverse selection and moral hazard. Despite these advantages, the very nature of indexing introduces "production basis risk": the selected weather indexes and their thresholds may not correspond to actual damages. To reduce basis risk without additional data-collection cost, we propose the use of rain intensity and frequency as indexes, as they could offer better protection at a lower premium by avoiding the basis risk-strike trade-off inherent in the total rainfall index. We present empirical evidence and modeling results showing that even under similar cumulative rainfall and temperature conditions, yields can differ significantly, especially for drought-sensitive crops. We further show that deriving the trigger level and payoff function from a regression between historical yield and total rainfall data may pose significant basis risk owing to their non-unique relationship in the insured range of rainfall. Lastly, we discuss the design of index insurance in terms of contract specifications based on the results from a global sensitivity analysis.
Cohesive detachment of an elastic pillar from a dissimilar substrate
NASA Astrophysics Data System (ADS)
Fleck, N. A.; Khaderi, S. N.; McMeeking, R. M.; Arzt, E.
The adhesion of micron-scale surfaces due to intermolecular interactions is a subject of intense interest spanning electronics, biomechanics and the application of soft materials to engineering devices. The degree of adhesion is sensitive to the diameter of micro-pillars in addition to the degree of elastic mismatch between pillar and substrate. Adhesion-strength-controlled detachment of an elastic circular cylinder from a dissimilar substrate is predicted using a Dugdale-type of analysis, with a cohesive zone of uniform tensile strength emanating from the interface corner. Detachment initiates when the opening of the cohesive zone attains a critical value, giving way to crack formation. When the cohesive zone size at crack initiation is small compared to the pillar diameter, the initiation of detachment can be expressed in terms of a critical value Hc of the corner stress intensity. The estimated pull-off force is somewhat sensitive to the choice of stick/slip boundary condition used on the cohesive zone, especially when the substrate material is much stiffer than the pillar material. The analysis can be used to predict the sensitivity of detachment force to the size of pillar and to the degree of elastic mismatch between pillar and substrate.
Fundamental study of flow field generated by rotorcraft blades using wide-field shadowgraph
NASA Technical Reports Server (NTRS)
Parthasarathy, S. P.; Cho, Y. I.; Back, L. H.
1985-01-01
The vortex trajectory and vortex wake generated by helicopter rotors are visualized using a wide-field shadowgraph technique. Use of a retro-reflective Scotchlite screen makes it possible to investigate the flow field generated by full-scale rotors. Tip vortex trajectories are visible in shadowgraphs for a range of tip Mach number of 0.38 to 0.60. The effect of the angle of attack is substantial. At an angle of attack greater than 8 degrees, the visibility of the vortex core is significant even at relatively low tip Mach numbers. The theoretical analysis of the sensitivity is carried out for a rotating blade. This analysis demonstrates that the sensitivity decreases with increasing dimensionless core radius and increases with increasing tip Mach number. The threshold value of the sensitivity is found to be 0.0015, below which the vortex core is not visible and above which it is visible. The effect of the optical path length is also discussed. Based on this investigation, it is concluded that the application of this wide-field shadowgraph technique to a large wind tunnel test should be feasible. In addition, two simultaneous shadowgraph views would allow three-dimensional reconstruction of vortex trajectories.
Flexible nanopillar-based electrochemical sensors for genetic detection of foodborne pathogens
NASA Astrophysics Data System (ADS)
Park, Yoo Min; Lim, Sun Young; Jeong, Soon Woo; Song, Younseong; Bae, Nam Ho; Hong, Seok Bok; Choi, Bong Gill; Lee, Seok Jae; Lee, Kyoung G.
2018-06-01
Flexible and highly ordered nanopillar-arrayed electrodes have attracted great interest for many electrochemical applications, especially biosensors, because of their unique mechanical and topological properties. Herein, we report an advanced method to fabricate highly ordered nanopillar electrodes produced by soft-/photo-lithography and metal evaporation. The highly ordered nanopillar array exhibited superior electrochemical and mechanical properties, with ample surface area available to react with electrolytes, enabling sensitive analysis. As-prepared gold and silver electrodes on nanopillar arrays exhibit strong and stable electrochemical performance for detecting the amplified gene from the foodborne pathogen Escherichia coli O157:H7. Additionally, the lightweight, flexible, and USB-connectable nanopillar-based electrochemical sensor platform improves connectivity, portability, and sensitivity. Moreover, we successfully confirmed the performance of genetic analysis using real food, a specially designed intercalator, and amplified genes from foodborne pathogens with high reproducibility (6% standard deviation) and sensitivity (10 × 1.01 CFU) within 25 s based on the square wave voltammetry principle. This study confirmed that the excellent mechanical and chemical characteristics of nanopillar electrodes provide considerable electrochemical activity for application as a genetic biosensor platform in the field of point-of-care testing (POCT).
Rigatti, Fabiane; Tizotti, Maísa Kraulich; Hörner, Rosmari; Domingues, Vanessa Oliveira; Martini, Rosiéli; Mayer, Letícia Eichstaedt; Khun, Fábio Teixeira; de França, Chirles Araújo; da Costa, Mateus Matiuzzi
2010-01-01
This study aimed to characterize the prevalence and susceptibility profile of oxacillin-resistant coagulase-negative Staphylococcus strains isolated from blood cultures in a teaching hospital located in Santa Maria, RS. In addition, different methodologies for the phenotypic characterization of mecA-mediated oxacillin resistance were compared with the genotypic reference test. After identification (MicroScan - Siemens), the isolates were tested for antimicrobial susceptibility using disk diffusion and automation (MicroScan - Siemens). The presence of the mecA gene was identified by the polymerase chain reaction. The most common species was Staphylococcus epidermidis (n=40, 67%). The mecA gene was detected in 54 (90%) strains, while analysis of the susceptibility profiles revealed a high rate of resistance to multiple classes of antimicrobial drugs. However, all isolates were uniformly sensitive to vancomycin and tigecycline. The cefoxitin disk was the phenotypic method that best correlated with the gold standard. Analysis of the clinical significance of CoNS isolated from blood cultures and the precise detection of oxacillin resistance are decisive factors for the correct choice of antibiotic therapy. Although vancomycin constitutes the standard treatment in most Brazilian hospitals, a reduction in its use is recommended.
NASA Astrophysics Data System (ADS)
Wiesauer, Karin; Pircher, Michael; Goetzinger, Erich; Hitzenberger, Christoph K.; Engelke, Rainer; Ahrens, Gisela; Pfeiffer, Karl; Ostrzinski, Ute; Gruetzner, Gabi; Oster, Reinhold; Stifter, David
2006-02-01
Optical coherence tomography (OCT) is a contactless and non-invasive technique applied almost exclusively for biomedical imaging of tissues. In addition to the internal structure, strains within the sample can be mapped when OCT is performed in a polarization-sensitive (PS) way. In this work, we demonstrate the benefits of PS-OCT imaging for non-biological applications. We have developed the OCT technique beyond the state of the art: based on transversal ultra-high-resolution (UHR-)OCT, where an axial resolution below 2 μm within materials is obtained using a femtosecond laser as the light source, we have modified the setup for polarization-sensitive measurements (transversal UHR-PS-OCT). We perform structural analysis and strain mapping for different types of samples: for a highly strained elastomer specimen we demonstrate the necessity of UHR imaging. Furthermore, we investigate epoxy waveguide structures, photoresist moulds for the fabrication of micro-electromechanical systems (MEMS), and the glass-fibre composite outer shell of helicopter rotor blades in which cracks are present. For these examples, transversal scanning UHR-PS-OCT is shown to provide important information about the structural properties and the strain distribution within the samples.
Local sensitivity analysis for inverse problems solved by singular value decomposition
Hill, M.C.; Nolan, B.T.
2010-01-01
Local sensitivity analysis provides computationally frugal ways to evaluate models commonly used for resource management, risk assessment, and so on. This includes diagnosing inverse model convergence problems caused by parameter insensitivity and(or) parameter interdependence (correlation), understanding what aspects of the model and data contribute to measures of uncertainty, and identifying new data likely to reduce model uncertainty. Here, we consider sensitivity statistics relevant to models in which the process model parameters are transformed using singular value decomposition (SVD) to create SVD parameters for model calibration. The statistics considered include the PEST identifiability statistic, and combined use of the process-model parameter statistics composite scaled sensitivities and parameter correlation coefficients (CSS and PCC). The statistics are complementary in that the identifiability statistic integrates the effects of parameter sensitivity and interdependence, while CSS and PCC provide individual measures of sensitivity and interdependence. PCC quantifies correlations between pairs or larger sets of parameters; when a set of parameters is intercorrelated, the absolute value of PCC is close to 1.00 for all pairs in the set. The number of singular vectors to include in the calculation of the identifiability statistic is somewhat subjective and influences the statistic. To demonstrate the statistics, we use the USDA's Root Zone Water Quality Model to simulate nitrogen fate and transport in the unsaturated zone of the Merced River Basin, CA. There are 16 log-transformed process-model parameters, including water content at field capacity (WFC) and bulk density (BD) for each of five soil layers. Calibration data consisted of 1,670 observations comprising soil moisture, soil water tension, aqueous nitrate and bromide concentrations, soil nitrate concentration, and organic matter content.
All 16 of the SVD parameters could be estimated by regression based on the range of singular values. Identifiability statistic results varied based on the number of SVD parameters included. Identifiability statistics calculated for four SVD parameters indicate the same three most important process-model parameters as CSS/PCC (WFC1, WFC2, and BD2), but the order differed. Additionally, the identifiability statistic showed that BD1 was almost as dominant as WFC1. The CSS/PCC analysis showed that this results from its high correlation with WFC1 (-0.94), and not from its individual sensitivity. Such distinctions, combined with analysis of how high correlations and(or) sensitivities result from the constructed model, can produce important insights into, for example, the use of sensitivity analysis to design monitoring networks. In conclusion, the statistics considered identified similar important parameters. They differ because (1) CSS/PCC can be more awkward to use because sensitivity and interdependence are considered separately, and (2) identifiability requires consideration of how many SVD parameters to include. A continuing challenge is to understand how these computationally efficient methods compare with computationally demanding global methods like Markov-chain Monte Carlo, given common nonlinear processes and often even more nonlinear models.
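The complementary roles of CSS and PCC can be sketched for a two-parameter problem. This is a didactic illustration only: the model's 16 parameters and observation weighting are omitted, and the correlation is taken from the normal-equations matrix for just two columns of a scaled Jacobian.

```python
def css(jacobian):
    """Composite scaled sensitivity of each parameter: the root-mean-square
    of its column of (already scaled) sensitivities in the Jacobian."""
    n, m = len(jacobian), len(jacobian[0])
    return [(sum(row[j] ** 2 for row in jacobian) / n) ** 0.5 for j in range(m)]

def pcc_two(jacobian):
    """Parameter correlation coefficient for a two-parameter problem, read off
    the parameter covariance matrix proportional to (J^T J)^-1. For exactly
    proportional columns (J^T J singular) the expression returns the limiting
    correlation of -1."""
    a = sum(r[0] * r[0] for r in jacobian)
    b = sum(r[0] * r[1] for r in jacobian)
    c = sum(r[1] * r[1] for r in jacobian)
    return -b / (a * c) ** 0.5

# Proportional Jacobian columns: the second parameter is twice as sensitive
# (CSS) but completely interdependent with the first (|PCC| = 1), the very
# situation the identifiability statistic folds into one number.
j = [[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]]
```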
Highly sensitive catalytic spectrophotometric determination of ruthenium
NASA Astrophysics Data System (ADS)
Naik, Radhey M.; Srivastava, Abhishek; Prasad, Surendra
2008-01-01
A new and highly sensitive catalytic kinetic method (CKM) for the determination of ruthenium(III) has been established, based on its catalytic effect on the oxidation of L-phenylalanine (L-Pheala) by KMnO4 in highly alkaline medium. The reaction was followed spectrophotometrically by measuring the decrease in absorbance at 526 nm. The proposed CKM uses the fixed-time procedure under optimum reaction conditions and relies on the linear relationship between the change in absorbance (ΔAt) and the added amount of Ru(III) in the range 0.101-2.526 ng ml(-1). Under the optimum conditions, the sensitivity of the proposed method, i.e. the limit of detection corresponding to a 5 min fixed time, is 0.08 ng ml(-1), and it decreases with increased time of analysis. The method features good accuracy and reproducibility for ruthenium(III) determination. Ruthenium(III) has also been determined in the presence of several interfering and non-interfering cations, anions, and polyaminocarboxylates; no foreign ion interfered in the determination of ruthenium(III) up to a 20-fold excess. In addition to the analysis of standard solutions, the method was successfully applied to the quantitative determination of ruthenium(III) in drinking water samples. The method is highly sensitive, selective, and very stable. A review of recently published catalytic spectrophotometric methods for the determination of ruthenium(III) is also presented for comparison.
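The fixed-time procedure reduces to an ordinary linear calibration of ΔA against the added analyte, with the detection limit conventionally taken as 3σ of the blank over the slope. A sketch with hypothetical calibration data (the concentrations and absorbances below are illustrative, not the paper's):

```python
def fit_line(x, y):
    """Ordinary least squares for a fixed-time calibration: delta-A vs. analyte."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

def detection_limit(sd_blank, slope, k=3.0):
    """LOD as k * sigma(blank) / slope; k = 3 is the common convention."""
    return k * sd_blank / slope

# Hypothetical calibration: ng/ml standards vs. measured delta-A at 526 nm.
conc = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
delta_a = [0.001, 0.051, 0.100, 0.152, 0.199, 0.250]
slope, intercept = fit_line(conc, delta_a)
```

A longer fixed time raises the slope (more catalyzed reaction per unit analyte), which is why the LOD improves with increased time of analysis.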
Schijven, J F; Mülschlegel, J H C; Hassanizadeh, S M; Teunis, P F M; de Roda Husman, A M
2006-09-01
Protection zones of shallow unconfined aquifers in The Netherlands were calculated that allow protection against virus contamination to the level that the infection risk of 10(-4) per person per year is not exceeded with a 95% certainty. An uncertainty and a sensitivity analysis of the calculated protection zones were included. It was concluded that protection zones of 1 to 2 years travel time (206-418 m) are needed (6 to 12 times the currently applied travel time of 60 days). This will lead to enlargement of protection zones, encompassing 110 unconfined groundwater well systems that produce 3 x 10(8) m3 y(-1) of drinking water (38% of total Dutch production from groundwater). A smaller protection zone is possible if it can be shown that an aquifer has properties that lead to greater reduction of virus contamination, like more attachment. Deeper aquifers beneath aquitards of at least 2 years of vertical travel time are adequately protected because vertical flow in the aquitards is only 0.7 m per year. The most sensitive parameters are virus attachment and inactivation. The next most sensitive parameters are grain size of the sand, abstraction rate of groundwater, virus concentrations in raw sewage and consumption of unboiled drinking water. Research is recommended on additional protection by attachment and under unsaturated conditions.
Maebe, Kevin; Meeus, Ivan; De Riek, Jan; Smagghe, Guy
2015-01-01
Bumblebees such as Bombus terrestris are essential pollinators in natural and managed ecosystems. In addition, this species is intensively used in agriculture for its pollination services, for instance in tomato and pepper greenhouses. Here we performed a quantitative trait loci (QTL) analysis on B. terrestris using 136 microsatellite DNA markers to identify genes linked with 20 traits including light sensitivity, body size and mass, and eye and hind leg measures. By composite interval mapping (IM), we found 83 and 34 suggestive QTLs for 19 of the 20 traits at the linkage group wide significance levels of p = 0.05 and 0.01, respectively. Furthermore, we also found five significant QTLs at the genome wide significant level of p = 0.05. Individual QTLs accounted for 7.5-53.3% of the phenotypic variation. For 15 traits, at least one QTL was confirmed with multiple QTL model mapping. Multivariate principal components analysis confirmed 11 univariate suggestive QTLs but revealed three suggestive QTLs not identified by the individual traits. We also identified several candidate genes linked with light sensitivity, in particular the Phosrestin-1-like gene is a primary candidate for its phototransduction function. In conclusion, we believe that the suggestive and significant QTLs, and markers identified here, can be of use in marker-assisted breeding to improve selection towards light sensitive bumblebees, and thus also the pollination service of bumblebees.
Kim, Chang Sup; Seo, Jeong Hyun; Cha, Hyung Joon
2012-08-07
The development of analytical tools is important for understanding the infection mechanisms of pathogenic bacteria or viruses. In the present work, a functional carbohydrate microarray combined with a fluorescence immunoassay was developed to analyze the interactions of Vibrio cholerae toxin (ctx) proteins and GM1-related carbohydrates. Ctx proteins were loaded onto the surface-immobilized GM1 pentasaccharide and six related carbohydrates, and their binding affinities were detected immunologically. The analysis of the ctx-carbohydrate interactions revealed that the intrinsic selectivity of ctx was GM1 pentasaccharide ≫ GM2 tetrasaccharide > asialo GM1 tetrasaccharide ≥ GM3 trisaccharide, indicating that a two-finger grip formation and the terminal monosaccharides play important roles in the ctx-GM1 interaction. In addition, whole cholera toxin (ctxAB(5)) had a stricter substrate specificity and a stronger binding affinity than the cholera toxin B subunit (ctxB) alone. On the basis of the quantitative analysis, the carbohydrate microarray detected the ctxAB(5)-GM1 interaction with a limit of detection (LOD) of 2 ng mL(-1) (23 pM), which is comparable to other reported high-sensitivity assay tools. In addition, the carbohydrate microarray successfully detected the actual toxin directly secreted from V. cholerae, without cross-reactivity to other bacteria. Collectively, these results demonstrate that the functional carbohydrate microarray is suitable for analyzing toxin protein-carbohydrate interactions and can be applied as a biosensor for toxin detection.
Pasikanti, Kishore Kumar; Esuvaranathan, Kesavan; Hong, Yanjun; Ho, Paul C; Mahendran, Ratha; Raman Nee Mani, Lata; Chiong, Edmund; Chan, Eric Chun Yong
2013-09-06
Cystoscopy is the gold standard for clinical diagnosis of human bladder cancer (BC). As cystoscopy is expensive and invasive, it compromises patients' compliance toward surveillance screening and challenges the detection of recurrent BC. Therefore, the development of a noninvasive method for the diagnosis and surveillance of BC and the elucidation of BC progression become pertinent. In this study, urine samples from 38 BC patients and 61 non-BC controls were subjected to urinary metabotyping using two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC-TOFMS). Subsequent to data preprocessing and chemometric analysis, the orthogonal partial least-squares discriminant analysis (OPLS-DA; R²X = 0.278, R²Y = 0.904, cumulative Q²Y = 0.398) model was validated using permutation tests and receiver operating characteristic (ROC) analysis. Marker metabolites were further screened from the OPLS-DA model using statistical tests. GC×GC-TOFMS urinary metabotyping demonstrated 100% specificity and 71% sensitivity in detecting BC, while 100% specificity and 46% sensitivity were observed via cytology. In addition, the model revealed 46 metabolites that characterize human BC. Among the perturbed metabolic pathways, our clinical finding on the alteration of the tryptophan-quinolinic metabolic axis in BC suggested the potential roles of kynurenine in the malignancy and therapy of BC. In conclusion, global urinary metabotyping holds potential for the noninvasive diagnosis and surveillance of BC in clinics. In addition, perturbed metabolic pathways gleaned from urinary metabotyping shed new light on the biology of human BC.
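The reported diagnostic performance figures follow directly from the confusion-matrix definitions. A minimal sketch, using counts implied by the cohort sizes (38 BC patients, 61 controls); the split 27/11 is an assumption back-derived from the reported 71% sensitivity, not quoted from the study:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# 27 of 38 BC patients detected, all 61 controls correctly classified
sens, spec = sens_spec(tp=27, fn=11, tn=61, fp=0)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```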
Colon Capsule Endoscopy for the Detection of Colorectal Polyps: An Economic Analysis
Palimaka, Stefan; Blackhouse, Gord; Goeree, Ron
2015-01-01
Background Colorectal cancer is a leading cause of mortality and morbidity in Ontario. Most cases of colorectal cancer are preventable through early diagnosis and the removal of precancerous polyps. Colon capsule endoscopy is a non-invasive test for detecting colorectal polyps. Objectives The objectives of this analysis were to evaluate the cost-effectiveness and the impact on the Ontario health budget of implementing colon capsule endoscopy for detecting advanced colorectal polyps among adult patients who have been referred for computed tomographic (CT) colonography. Methods We performed an original cost-effectiveness analysis to assess the additional cost of CT colonography and colon capsule endoscopy resulting from misdiagnoses. We generated diagnostic accuracy data from a clinical evidence-based analysis (reported separately), and we developed a deterministic Markov model to estimate the additional long-term costs and life-years lost due to false-negative results. We then also performed a budget impact analysis using data from Ontario administrative sources. One-year costs were estimated for CT colonography and colon capsule endoscopy (replacing all CT colonography procedures, and replacing only those CT colonography procedures in patients with an incomplete colonoscopy within the previous year). We conducted this analysis from the payer perspective. Results Using the point estimates of diagnostic accuracy from the head-to-head study between colon capsule endoscopy and CT colonography, we found the additional cost of false-positive results for colon capsule endoscopy to be $0.41 per patient, while additional false-negatives for the CT colonography arm generated an added cost of $116 per patient, with 0.0096 life-years lost per patient due to cancer. This results in an additional cost of $26,750 per life-year gained for colon capsule endoscopy compared with CT colonography. 
The total 1-year cost to replace all CT colonography procedures with colon capsule endoscopy in Ontario is about $2.72 million; replacing only those CT colonography procedures in patients with an incomplete colonoscopy in the previous year would cost about $740,600 in the first year. Limitations The difference in accuracy between colon capsule endoscopy and CT colonography was not statistically significant for the detection of advanced adenomas (≥ 10 mm in diameter), according to the head-to-head clinical study from which the diagnostic accuracy was taken. This leads to uncertainty in the economic analysis, with results highly sensitive to changes in diagnostic accuracy. Conclusions The cost-effectiveness of colon capsule endoscopy for use in patients referred for CT colonography is $26,750 per life-year, assuming an increased sensitivity of colon capsule endoscopy. Replacement of CT colonography with colon capsule endoscopy is associated with moderate costs to the health care system. PMID:26366240
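The cost-effectiveness figure above is an incremental cost-effectiveness ratio (ICER): the net extra cost per patient divided by the life-years gained per patient. A minimal sketch; the per-patient net cost used below (about $256.80) is back-derived from the reported ICER and life-years, not quoted from the report:

```python
def icer(delta_cost, delta_effect):
    """Incremental cost-effectiveness ratio: extra cost per unit of effect gained."""
    return delta_cost / delta_effect

delta_life_years = 0.0096                 # life-years gained per patient (from abstract)
delta_cost = 26750 * delta_life_years     # implied net extra cost per patient: ~$256.80
print(f"ICER = ${icer(delta_cost, delta_life_years):,.0f} per life-year gained")
```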
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holland, Troy; Bhat, Sham; Marcy, Peter
2017-08-25
Oxy-fired coal combustion is a promising potential carbon capture technology. Predictive computational fluid dynamics (CFD) simulations are valuable tools in evaluating and deploying oxyfuel and other carbon capture technologies, either as retrofit technologies or for new construction. However, accurate predictive combustor simulations require physically realistic submodels with low computational requirements. A recent sensitivity analysis of a detailed char conversion model (Char Conversion Kinetics, CCK) found thermal annealing to be an extremely sensitive submodel. In the present work, further analysis of the previous annealing model revealed significant disagreement with numerous datasets from experiments performed after that annealing model was developed. The annealing model was accordingly extended to reflect the experimentally observed reactivity loss due to thermal annealing for a variety of coals under diverse char preparation conditions. The model extension was informed by a Bayesian calibration analysis. In addition, since oxyfuel conditions include extraordinarily high levels of CO2, a first-ever model of CO2 reactivity loss due to annealing is presented.
NUMERICAL FLOW AND TRANSPORT SIMULATIONS SUPPORTING THE SALTSTONE FACILITY PERFORMANCE ASSESSMENT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flach, G.
2009-02-28
The Saltstone Disposal Facility (SDF) Performance Assessment (PA) is being revised to incorporate requirements of Section 3116 of the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005 (NDAA), along with updated data and understanding of vault performance since the 1992 PA (Cook and Fowler 1992) and related Special Analyses. A hybrid approach was chosen for modeling contaminant transport from vaults and future disposal cells to exposure points. A higher-resolution, largely deterministic analysis is performed on a best-estimate Base Case scenario using the PORFLOW numerical analysis code, and a few additional sensitivity cases are simulated to examine alternative scenarios and parameter settings. Stochastic analysis is performed on a simpler representation of the SDF system using the GoldSim code to estimate uncertainty and sensitivity about the Base Case. This report describes the development of the PORFLOW models supporting the SDF PA and presents sample results to illustrate model behaviors and define impacts relative to key facility performance objectives. The SDF PA document, when issued, should be consulted for a comprehensive presentation of results.
Acevedo, Alejandro; Conejeros, Raúl; Aroca, Germán
2017-01-01
The yeast Scheffersomyces stipitis naturally produces ethanol from xylose; however, reaching high ethanol yields strongly depends on aeration conditions. It has been reported that changes in the availability of NAD(H/+) cofactors can improve fermentation in some microorganisms. In this work, genome-scale metabolic modeling and phenotypic phase plane analysis were used to characterize the metabolic response over a range of uptake rates. Sensitivity analysis was used to assess the effect of ARC on ethanol production, indicating that ethanol production can be improved by modifying ARC through inhibition of the respiratory chain. This was shown experimentally in batch culture using rotenone as an inhibitor of the mitochondrial NADH dehydrogenase complex I (CINADH), increasing ethanol yield by 18%. Furthermore, trajectories for uptake rates, specific productivity, and specific growth rate were determined by modeling the batch culture, to calculate the ARC associated with the addition of the CINADH inhibitor. Results showed that the increment in ethanol production via respiratory inhibition is due to an excess in ARC, which generates an increase in ethanol production. Thus, improvement in ethanol production could be predicted from a change in ARC. PMID:28658270
Wijnker, J J; Tjeerdsma-van Bokhoven, J L M; Veldhuizen, E J A
2009-01-01
Certain phosphates have been identified as suitable additives for the improvement of the microbial and mechanical properties of processed natural sausage casings. When mixed with NaCl (sodium chloride) and used under specific treatment and storage conditions, these phosphates are found to prevent the spread of foot-and-mouth disease and classical swine fever via treated casings. The commercially available Quantichrom™ phosphate assay kit has been evaluated as to whether it can serve as a reliable and low-tech method for routine analysis of casings treated with phosphate. The outcome of this study indicates that this particular assay kit has sufficient sensitivity to qualitatively determine the presence of phosphate in treated casings without interference of naturally occurring phosphate in salt used for brines in which casings are preserved.
A new similarity index for nonlinear signal analysis based on local extrema patterns
NASA Astrophysics Data System (ADS)
Niknazar, Hamid; Motie Nasrabadi, Ali; Shamsollahi, Mohammad Bagher
2018-02-01
Common similarity measures for time-domain signals, such as cross-correlation and Symbolic Aggregate approximation (SAX), are not appropriate for nonlinear signal analysis because of the high sensitivity of nonlinear systems to initial points. A similarity measure for nonlinear signal analysis must therefore be invariant to initial points and quantify similarity by considering the main dynamics of the signals. The statistical behavior of local extrema (SBLE) method was previously proposed to address this problem. The SBLE similarity index uses quantized amplitudes of local extrema to quantify the dynamical similarity of signals by considering patterns of sequential local extrema. By adding time information of local extrema and fuzzifying the quantized values, this work proposes a new similarity index for nonlinear and long-term signal analysis that extends the SBLE method. These new features provide more information about the signals, and the fuzzification reduces noise sensitivity. A number of practical tests were performed to demonstrate the ability of the method in nonlinear signal clustering and classification on synthetic data. In addition, epileptic seizure detection based on electroencephalography (EEG) signal processing was performed using the proposed similarity index, demonstrating the method's potential as a real-world application tool.
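The core of an SBLE-style index can be sketched as: extract local extrema, quantize their amplitudes, histogram short sequential patterns, and compare the histograms. The sketch below is a simplified illustration under assumed choices (fixed quantization levels, pattern length 2, total-variation similarity), not the authors' exact formulation, and it omits the proposed time information and fuzzification:

```python
import numpy as np

def local_extrema(sig):
    """Indices of local maxima and minima of a 1-D signal (sign changes of the derivative)."""
    d = np.diff(sig)
    return np.where(np.sign(d[:-1]) != np.sign(d[1:]))[0] + 1

def extrema_pattern_hist(sig, n_levels=4, word=2):
    """Normalized histogram of sequences of `word` quantized extrema amplitudes."""
    ext = sig[local_extrema(sig)]
    cuts = np.linspace(ext.min(), ext.max(), n_levels + 1)[1:-1]
    q = np.digitize(ext, cuts)                    # amplitude levels 0..n_levels-1
    hist = np.zeros(n_levels ** word)
    for i in range(len(q) - word + 1):
        code = 0
        for j in range(word):
            code = code * n_levels + q[i + j]     # encode the pattern as one integer
        hist[code] += 1
    return hist / hist.sum()

def similarity(a, b):
    """1 minus total-variation distance between pattern distributions (1 = identical)."""
    ha, hb = extrema_pattern_hist(a), extrema_pattern_hist(b)
    return 1.0 - 0.5 * np.abs(ha - hb).sum()

# Two noisy sinusoids with a phase shift: same dynamics, different initial points
t = np.linspace(0, 20 * np.pi, 2000)
s1 = np.sin(t) + 0.1 * np.random.default_rng(0).standard_normal(t.size)
s2 = np.sin(t + 1.0) + 0.1 * np.random.default_rng(1).standard_normal(t.size)
print(f"similarity = {similarity(s1, s2):.2f}")
```

Because only the pattern distribution is compared, the phase shift between the two signals does not matter, which is the invariance property the abstract calls for.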
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steill, Jeffrey D.; Huang, Haifeng; Hoops, Alexandra A.
This report summarizes our development of spectroscopic chemical analysis techniques and spectral modeling for trace-gas measurements of highly regulated, low-concentration species present in flue gas emissions from utility coal boilers, such as HCl under conditions of high humidity. Detailed spectral modeling of HCl and other important combustion and atmospheric species such as H2O, CO2, N2O, NO2, SO2, and CH4 demonstrates that IR-laser spectroscopy is a sensitive multi-component analysis strategy. Experimental measurements from techniques based on IR laser spectroscopy are presented that demonstrate sub-ppm sensitivity levels to these species. Photoacoustic infrared spectroscopy is used to detect and quantify HCl at ppm levels with extremely high signal-to-noise ratio, even under conditions of high relative humidity. Additionally, cavity ring-down IR spectroscopy is used to achieve extremely high sensitivity to combustion trace gases in this spectral region; ppm-level CH4 detection is one demonstrated example. The importance of spectral resolution to the sensitivity of a trace-gas measurement is examined by spectral modeling in the mid- and near-IR, and efforts to improve measurement resolution through novel instrument development are described. While previous project reports focused on the benefits and complexities of the dual-etalon cavity ring-down infrared spectrometer, here we describe the steps taken to implement this unique and potentially revolutionary instrument. This report also illustrates and critiques the general strategy of IR-laser photodetection of trace gases, leading to the conclusion that mid-IR laser spectroscopy techniques provide a promising basis for further instrument development and implementation that will enable cost-effective, sensitive detection of multiple key contaminant species simultaneously.
Kortink, Elise D; Weeda, Wouter D; Crowley, Michael J; Gunther Moor, Bregtje; van der Molen, Melle J W
2018-06-01
Monitoring social threat is essential for maintaining healthy social relationships, and recent studies suggest a neural alarm system that governs our response to social rejection. Frontal-midline theta (4-8 Hz) oscillatory power might act as a neural correlate of this system by being sensitive to unexpected social rejection. Here, we examined whether frontal-midline theta is modulated by individual differences in personality constructs sensitive to social disconnection. In addition, we examined the sensitivity of feedback-related brain potentials (i.e., the feedback-related negativity and P3) to social feedback. Sixty-five undergraduate female participants (mean age = 19.69 years) participated in the Social Judgment Paradigm, a fictitious peer-evaluation task in which participants provided expectancies about being liked/disliked by peer strangers. Thereafter, they received feedback signaling social acceptance/rejection. A community structure analysis was employed to delineate personality profiles in our data. Results provided evidence of two subgroups: one group scored high on attachment-related anxiety and fear of negative evaluation, whereas the other group scored high on attachment-related avoidance and low on fear of negative evaluation. In both groups, unexpected rejection feedback yielded a significant increase in theta power. The feedback-related negativity was sensitive to unexpected feedback, regardless of valence, and was largest for unexpected rejection feedback. The feedback-related P3 was significantly enhanced in response to expected social acceptance feedback. Together, these findings confirm the sensitivity of frontal midline theta oscillations to the processing of social threat, and suggest that this alleged neural alarm system behaves similarly in individuals that differ in personality constructs relevant to social evaluation.
Racial and ethnic differences in experimental pain sensitivity: systematic review and meta-analysis.
Kim, Hee Jun; Yang, Gee Su; Greenspan, Joel D; Downton, Katherine D; Griffith, Kathleen A; Renn, Cynthia L; Johantgen, Meg; Dorsey, Susan G
2017-02-01
Our objective was to describe the racial and ethnic differences in experimental pain sensitivity. Four databases (PubMed, EMBASE, the Cochrane Central Register of Controlled Trials, and PsycINFO) were searched for studies examining racial/ethnic differences in experimental pain sensitivity. Thermal-heat, cold-pressor, pressure, ischemic, mechanical cutaneous, electrical, and chemical experimental pain modalities were assessed. Risk of bias was assessed using the Agency for Healthcare Research and Quality guideline. Meta-analysis was used to calculate standardized mean differences (SMDs) by pain sensitivity measures. Studies comparing African Americans (AAs) and non-Hispanic whites (NHWs) were included for meta-analyses because of high heterogeneity in other racial/ethnic group comparisons. Statistical heterogeneity was assessed by subgroup analyses by sex, sample size, sample characteristics, and pain modalities. A total of 41 studies met the review criteria. Overall, AAs, Asians, and Hispanics had higher pain sensitivity compared with NHWs, particularly lower pain tolerance, higher pain ratings, and greater temporal summation of pain. Meta-analyses revealed that AAs had lower pain tolerance (SMD: -0.90, 95% confidence intervals [CIs]: -1.10 to -0.70) and higher pain ratings (SMD: 0.50, 95% CI: 0.30-0.69) but no significant differences in pain threshold (SMD: -0.06, 95% CI: -0.23 to 0.10) compared with NHWs. Estimates did not vary by pain modalities, nor by other demographic factors; however, SMDs were significantly different based on the sample size. Racial/ethnic differences in experimental pain sensitivity were more pronounced with suprathreshold than with threshold stimuli, which is important in clinical pain treatment. Additional studies examining mechanisms to explain such differences in pain tolerance and pain ratings are needed.
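The pooling step of such a meta-analysis can be sketched as inverse-variance weighting of per-study standardized mean differences. The per-study means, SDs, and sample sizes below are invented for illustration, and a simple fixed-effect pooling is used rather than the random-effects model typically applied:

```python
import math

# Hypothetical per-study data for pain tolerance (seconds):
# (mean_AA, sd_AA, n_AA, mean_NHW, sd_NHW, n_NHW); illustration only.
studies = [
    (35.0, 12.0, 40, 48.0, 14.0, 42),
    (30.0, 10.0, 25, 41.0, 11.0, 27),
    (38.0, 15.0, 60, 50.0, 16.0, 58),
]

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Standardized mean difference using the pooled SD."""
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / sp

def d_variance(d, n1, n2):
    """Approximate sampling variance of Cohen's d."""
    return (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))

# Inverse-variance (fixed-effect) pooling
ds, ws = [], []
for m1, s1, n1, m2, s2, n2 in studies:
    d = cohens_d(m1, s1, n1, m2, s2, n2)
    ds.append(d)
    ws.append(1.0 / d_variance(d, n1, n2))

pooled = sum(w * d for w, d in zip(ws, ds)) / sum(ws)
se = math.sqrt(1.0 / sum(ws))
print(f"pooled SMD = {pooled:.2f}, 95% CI = ({pooled - 1.96*se:.2f}, {pooled + 1.96*se:.2f})")
```

A negative pooled SMD here corresponds to lower pain tolerance in the first group, matching the sign convention of the review's tolerance result.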
Lötsch, Jörn; Geisslinger, Gerd; Heinemann, Sarah; Lerch, Florian; Oertel, Bruno G; Ultsch, Alfred
2017-08-16
The comprehensive assessment of pain-related human phenotypes requires combinations of nociceptive measures that produce complex high-dimensional data, posing challenges to bioinformatic analysis. In this study, we assessed established experimental models of heat hyperalgesia of the skin, consisting of local ultraviolet-B (UV-B) irradiation or capsaicin application, in 82 healthy subjects using a variety of noxious stimuli. We extended the original heat stimulation by applying cold and mechanical stimuli and assessing the hypersensitization effects with a clinically established quantitative sensory testing (QST) battery (German Research Network on Neuropathic Pain). This study provided a 246 × 10-sized data matrix (82 subjects assessed at baseline, following UV-B application, and following capsaicin application) with respect to 10 QST parameters, which we analyzed using machine-learning techniques. We observed statistically significant effects of the hypersensitization treatments in 9 different QST parameters. Supervised machine-learning analysis, implemented as random forests followed by ABC analysis, pointed to heat pain thresholds as the most relevantly affected QST parameter. However, decision tree analysis indicated that UV-B additionally modulated sensitivity to cold. Unsupervised machine-learning techniques, implemented as emergent self-organizing maps, hinted at subgroups responding to topical application of capsaicin. The distinction among subgroups was based on sensitivity to pressure pain, which could be attributed to sex differences, with women being more sensitive than men.
Thus, while UV-B and capsaicin share a major component of heat pain sensitization, they differ in their effects on QST parameter patterns in healthy subjects, suggesting a lack of redundancy between these models.
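The ABC step after the random forest ranks feature importances by cumulative contribution. The sketch below uses a simplified Pareto-style split with fixed 80%/95% cut points and invented importance values; the computed ABC analysis used in the study derives its set limits from the data rather than from fixed thresholds:

```python
def abc_classes(values, a_cut=0.80, b_cut=0.95):
    """Assign each item to class A, B, or C by ranked cumulative contribution."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    total = float(sum(values))
    classes, cum = {}, 0.0
    for i in order:
        # Classify by the cumulative share *before* this item,
        # so the single largest item is always class A.
        classes[i] = "A" if cum < a_cut else ("B" if cum < b_cut else "C")
        cum += values[i] / total
    return classes

# Invented importance scores for six hypothetical QST parameters
importances = [0.45, 0.25, 0.12, 0.08, 0.05, 0.05]
print(abc_classes(importances))
```

Class A items (here the first three parameters) are the "most relevant" set that an analysis like the one above would report.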
Wang, Yuan; Bao, Shan; Du, Wenjun; Ye, Zhirui; Sayer, James R
2017-11-17
This article investigated and compared frequency domain and time domain characteristics of drivers' behaviors before and after the start of distracted driving. Data from an existing naturalistic driving study were used. Fast Fourier transform (FFT) was applied for the frequency domain analysis to explore drivers' behavior pattern changes between nondistracted (prestarting of visual-manual task) and distracted (poststarting of visual-manual task) driving periods. Average relative spectral power in a low frequency range (0-0.5 Hz) and the standard deviation in a 10-s time window of vehicle control variables (i.e., lane offset, yaw rate, and acceleration) were calculated and further compared. Sensitivity analyses were also applied to examine the reliability of the time and frequency domain analyses. Results of the mixed model analyses from the time and frequency domain analyses all showed significant degradation in lateral control performance after engaging in visual-manual tasks while driving. Results of the sensitivity analyses suggested that the frequency domain analysis was less sensitive to the frequency bandwidth, whereas the time domain analysis was more sensitive to the time intervals selected for variation calculations. Different time interval selections can result in significantly different standard deviation values, whereas average spectral power analysis of yaw rate in both low and high frequency bandwidths showed consistent results, with higher variation values observed during distracted driving than during nondistracted driving. This study suggests that driver state detection needs to consider behavior changes during the prestarting periods, instead of focusing only on periods with a physical presence of distraction, such as cell phone use. Lateral control measures can be a better indicator for distraction detection than longitudinal controls.
In addition, frequency domain analyses proved to be a more robust and consistent method in assessing driving performance compared to time domain analyses.
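The frequency-domain measure described above, average relative spectral power in a low-frequency band, can be sketched as the share of FFT power below 0.5 Hz. The signals below are simulated stand-ins for vehicle control traces (e.g., yaw rate), not the study's data:

```python
import numpy as np

def relative_low_freq_power(x, fs, f_lo=0.0, f_hi=0.5):
    """Fraction of total spectral power in [f_lo, f_hi] Hz for a signal sampled at fs."""
    x = np.asarray(x, float) - np.mean(x)         # remove DC offset
    spec = np.abs(np.fft.rfft(x)) ** 2            # one-sided power spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return spec[band].sum() / spec.sum()

# 60 s of simulated traces at 10 Hz: a slow 0.2 Hz drift vs. faster 2 Hz variation
fs = 10.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
slow = np.sin(2 * np.pi * 0.2 * t) + 0.1 * rng.standard_normal(t.size)
fast = np.sin(2 * np.pi * 2.0 * t) + 0.1 * rng.standard_normal(t.size)
print(relative_low_freq_power(slow, fs), relative_low_freq_power(fast, fs))
```

A trace dominated by slow drift concentrates its power in the 0-0.5 Hz band, which is the kind of shift the study associates with degraded lateral control.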
Using Dynamic Sensitivity Analysis to Assess Testability
NASA Technical Reports Server (NTRS)
Voas, Jeffrey; Morell, Larry; Miller, Keith
1990-01-01
This paper discusses sensitivity analysis and its relationship to random black box testing. Sensitivity analysis estimates the impact that a programming fault at a particular location would have on the program's input/output behavior. Locations that are relatively "insensitive" to faults can render random black box testing unlikely to uncover programming faults. Therefore, sensitivity analysis gives new insight when interpreting random black box testing results. Although sensitivity analysis is computationally intensive, it requires no oracle and no human intervention.
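One way to estimate the sensitivity of a location is a mutation-style experiment: seed a fault at that location and measure how often random inputs produce a changed output. The toy program and fault below are invented, and this sketches only the fault-propagation estimate, not the paper's full analysis:

```python
import random

def program(x):
    y = 2 * x                      # location L: correct computation
    return y - 1 if y > 10 else 0

def mutant(x):
    y = 3 * x                      # location L with a seeded fault
    return y - 1 if y > 10 else 0

# Estimated sensitivity of location L: the fraction of random inputs
# on which the seeded fault alters the program's output.
random.seed(0)
trials = 10_000
differing = sum(program(x) != mutant(x)
                for x in (random.randint(-20, 20) for _ in range(trials)))
print(f"estimated sensitivity at L: {differing / trials:.3f}")
```

A location whose estimate stays near zero is "insensitive" in the paper's sense: random black box testing is unlikely to expose a fault there even with many test cases.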
Birchler, J. A.; Bhadra, U.; Rabinow, L.; Linsk, R.; Nguyen-Huynh, A. T.
1994-01-01
A locus is described in Drosophila melanogaster that modifies the expression of the white eye color gene. This trans-acting modifier reduces the expression of the white gene in the eye, but elevates the expression in other adult tissues. Because of the eye phenotype in which the expression of white is lessened but not eliminated, the newly described locus is called the Weakener of white (Wow). Northern analysis reveals that Wow can exert an inverse or direct modifying effect depending upon the developmental stage. Two related genes, brown and scarlet, that are coordinately expressed with white, are also affected by Wow. In addition, Wow modulates the steady state RNA level of the retrotransposon, copia. When tested with a white promoter-Alcohol dehydrogenase reporter, Wow confers the modifying effect to the reporter, suggesting a requirement of the white regulatory sequences for mediating the response. In addition to being a dosage sensitive regulator of white, brown, scarlet and copia, Wow acts as a suppressor of position effect variegation. There are many dosage sensitive suppressors of position effect variegation and many dosage-sensitive modifiers of gene expression. The Wow mutations provide evidence for an overlap between the two types of modifiers. PMID:7982560
ZHAO, Bin; BASTON, David S.; KHAN, Elaine; SORRENTINO, Claudio; DENISON, Michael S.
2011-01-01
Reporter genes produce a protein product in transfected cells that can be easily measured in intact or lysed cells and they have been extensively used in numerous basic and applied research applications. Over the past 10 years, reporter gene assays have been widely accepted and used for analysis of 2,3,7,8-tetrachlorodibenzo-p-dioxin and related dioxin-like compounds in various types of matrices, such as biological, environmental, food and feed samples, given that high-resolution instrumental analysis techniques are impractical for large-scale screening analysis. The most sensitive cell-based reporter gene bioassay systems developed are the mechanism-based CALUX (Chemically Activated Luciferase Expression) and CAFLUX (Chemically Activated Fluorescent Expression) bioassays, which utilize recombinant cell lines containing stably transfected dioxin (AhR)-responsive firefly luciferase or enhanced green fluorescent protein (EGFP) reporter genes, respectively. While the current CALUX and CAFLUX bioassays are very sensitive, increasing their lower limit of sensitivity, magnitude of response and dynamic range for chemical detection would significantly increase their utility, particularly for those samples that contain low levels of dioxin-like HAHs (i.e., serum). In this study, we report that the addition of modulators of cell signaling pathways or modification of cell culture conditions results in significant improvement in the magnitude and overall responsiveness of the existing CALUX and CAFLUX cell bioassays. PMID:21394221
Ma, Xiaoye; Chen, Yong; Cole, Stephen R; Chu, Haitao
2016-12-01
To account for between-study heterogeneity in meta-analysis of diagnostic accuracy studies, bivariate random effects models have been recommended to jointly model the sensitivities and specificities. As study design and population vary, the definition of disease status or severity could differ across studies. Consequently, sensitivity and specificity may be correlated with disease prevalence. To account for this dependence, a trivariate random effects model has been proposed. However, that approach can only include cohort studies with information for estimating study-specific disease prevalence. In addition, some diagnostic accuracy studies verify only a subset of samples with the reference test. It is known that ignoring unverified subjects may lead to partial verification bias in the estimation of prevalence, sensitivities, and specificities in a single study; however, the impact of this bias on a meta-analysis has not been investigated. In this paper, we propose a novel hybrid Bayesian hierarchical model that combines cohort and case-control studies while correcting partial verification bias. We investigate the performance of the proposed methods through a set of simulation studies. Two case studies are presented, on assessing the diagnostic accuracy of gadolinium-enhanced magnetic resonance imaging in detecting lymph node metastases and of adrenal fluorine-18 fluorodeoxyglucose positron emission tomography in characterizing adrenal masses. PMID:24862512
Out, Astrid A; van Minderhout, Ivonne J H M; van der Stoep, Nienke; van Bommel, Lysette S R; Kluijt, Irma; Aalfs, Cora; Voorendt, Marsha; Vossen, Rolf H A M; Nielsen, Maartje; Vasen, Hans F A; Morreau, Hans; Devilee, Peter; Tops, Carli M J; Hes, Frederik J
2015-06-01
Familial adenomatous polyposis is most frequently caused by pathogenic variants in either the APC gene or the MUTYH gene. The detection rate of pathogenic variants depends on the severity of the phenotype and sensitivity of the screening method, including sensitivity for mosaic variants. For 171 patients with multiple colorectal polyps without previously detectable pathogenic variant, APC was reanalyzed in leukocyte DNA by one uniform technique: high-resolution melting (HRM) analysis. Serial dilution of heterozygous DNA resulted in a lowest detectable allelic fraction of 6% for the majority of variants. HRM analysis and subsequent sequencing detected pathogenic fully heterozygous APC variants in 10 (6%) of the patients and pathogenic mosaic variants in 2 (1%). All these variants were previously missed by various conventional scanning methods. In parallel, HRM APC scanning was applied to DNA isolated from polyp tissue of two additional patients with apparently sporadic polyposis and without detectable pathogenic APC variant in leukocyte DNA. In both patients a pathogenic mosaic APC variant was present in multiple polyps. The detection of pathogenic APC variants in 7% of the patients, including mosaics, illustrates the usefulness of a complete APC gene reanalysis of previously tested patients, by a supplementary scanning method. HRM is a sensitive and fast pre-screening method for reliable detection of heterozygous and mosaic variants, which can be applied to leukocyte and polyp derived DNA.
Li, Shuangming; Wan, Ying; Fan, Chunhai; Su, Yan
2017-03-22
Love wave sensors have been widely used for sensing applications. In this work, we introduce the theoretical analysis of the monolayer and double-layer waveguide Love wave sensors. The velocity, particle displacement and energy distribution of Love waves were analyzed. Using the variations of the energy repartition, the sensitivity coefficients of Love wave sensors were calculated. To achieve a higher sensitivity coefficient, a thin gold layer was added as the second waveguide on top of the silicon dioxide (SiO₂) waveguide-based, 36 degree-rotated, Y-cut, X-propagating lithium tantalate (36° YX LiTaO₃) Love wave sensor. The Love wave velocity was significantly reduced by the added gold layer, and the flow of wave energy into the waveguide layer from the substrate was enhanced. By using the double-layer structure, almost a 72-fold enhancement in the sensitivity coefficient was achieved compared to the monolayer structure. Additionally, the thickness of the SiO₂ layer was also reduced with the application of the gold layer, resulting in easier device fabrication. This study allows for the possibility of designing and realizing robust Love wave sensors with high sensitivity and a low limit of detection.
Behavioral profiles of feline breeds in Japan.
Takeuchi, Yukari; Mori, Yuji
2009-08-01
To clarify the behavioral profiles of 9 feline purebreds, 2 Persian subbreeds and the Japanese domestic cat, a questionnaire survey was distributed to 67 small-animal veterinarians. We found significant differences among breeds in all behavioral traits examined except for "inappropriate elimination". In addition, sexual differences were observed in certain behaviors, including "aggression toward cats", "general activity", "novelty-seeking", and "excitability". These behaviors were more common in males than females, whereas "nervousness" and "inappropriate elimination" were rated higher in females. When all breeds were categorized into four groups on the basis of a cluster analysis using the scores of two behavioral trait factors called "aggressiveness/sensitivity" and "vivaciousness", the group including Abyssinian, Russian Blue, Somali, Siamese, and Chinchilla breeds showed high aggressiveness/sensitivity and low vivaciousness. In contrast, the group including the American Shorthair and Japanese domestic cat displayed low aggressiveness/sensitivity and high vivaciousness, and the Himalayan and Persian group showed mild aggressiveness/sensitivity and very low vivaciousness. Finally, the group containing Maine Coon, Ragdoll, and Scottish Fold breeds displayed very low aggressiveness/sensitivity and low vivaciousness. The present results demonstrate that some feline behavioral traits vary by breed and/or sex.
miR-25 modulates NSCLC cell radio-sensitivity through directly inhibiting BTG2 expression
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Zhiwei, E-mail: carlhe@126.com; Liu, Yi, E-mail: cassieliu@126.com; Xiao, Bing, E-mail: rockg714@aliyun.com
2015-02-13
A large proportion of NSCLC patients are insensitive to radiotherapy, but the exact mechanism is still unclear. This study explored the role of miR-25 in regulating the sensitivity of NSCLC cells to ionizing radiation (IR) and its downstream targets. Based on measurements in tumor samples from NSCLC patients, this study found that miR-25 expression is upregulated in both NSCLC and radio-resistant NSCLC patients compared with healthy and radio-sensitive controls. In addition, BTG2 expression was found to be negatively correlated with miR-25 expression in both tissues and cells. By applying a luciferase reporter assay, we verified two putative binding sites between miR-25 and BTG2. Therefore, BTG2 is a direct target of miR-25 in NSCLC. By applying loss- and gain-of-function analysis in NSCLC cell lines, we demonstrated that the miR-25-BTG2 axis directly regulates BTG2 expression and affects the radiotherapy sensitivity of NSCLC cells. - Highlights: • miR-25 is upregulated, while BTG2 is downregulated, in radioresistant NSCLC patients. • miR-25 modulates sensitivity to radiation-induced apoptosis. • miR-25 directly targets BTG2 and suppresses its expression. • miR-25 modulates sensitivity to radiotherapy through inhibiting BTG2 expression.
Study of Multimission Modular Spacecraft (MMS) propulsion requirements
NASA Technical Reports Server (NTRS)
Fischer, N. H.; Tischer, A. E.
1977-01-01
The cost effectiveness of various propulsion technologies for shuttle-launched multimission modular spacecraft (MMS) missions was determined with special attention to the potential role of ion propulsion. The primary criterion chosen for comparison for the different types of propulsion technologies was the total propulsion related cost, including the Shuttle charges, propulsion module costs, upper stage costs, and propulsion module development. In addition to the cost comparison, other criteria such as reliability, risk, and STS compatibility are examined. Topics covered include MMS mission models, propulsion technology definition, trajectory/performance analysis, cost assessment, program evaluation, sensitivity analysis, and conclusions and recommendations.
Dynamic analysis of gas-core reactor system
NASA Technical Reports Server (NTRS)
Turner, K. H., Jr.
1973-01-01
A heat transfer analysis was incorporated into a previously developed model CODYN to obtain a model of open-cycle gaseous core reactor dynamics which can predict the heat flux at the cavity wall. The resulting model was used to study the sensitivity of the model to the value of the reactivity coefficients and to determine the system response for twenty specified perturbations. In addition, the model was used to study the effectiveness of several control systems in controlling the reactor. It was concluded that control drums located in the moderator region capable of inserting reactivity quickly provided the best control.
Jiang, Nan; Tamayol, Ali; Ruiz-Esparza, Guillermo U.; Zhang, Yu Shrike; Medina-Pando, Sofía; Gupta, Aditi; Wolffsohn, James S.; Butt, Haider; Khademhosseini, Ali
2017-01-01
The analysis of tear constituents at point-of-care settings has a potential for early diagnosis of ocular disorders such as dry eye disease, low-cost screening, and surveillance of at-risk subjects. However, current minimally-invasive rapid tear analysis systems for point-of-care settings have been limited to assessment of osmolarity or inflammatory markers and cannot differentiate between dry eye subclassifications. Here, we demonstrate a portable microfluidic system that allows quantitative analysis of electrolytes in the tear fluid that is suited for point-of-care settings. The microfluidic system consists of a capillary tube for sample collection, a reservoir for sample dilution, and a paper-based microfluidic device for electrolyte analysis. The sensing regions are functionalized with fluorescent crown ethers, o-acetanisidide, and seminaphtorhodafluor that are sensitive to mono- and divalent electrolytes, and their fluorescence outputs are measured with a smartphone readout device. The measured sensitivity values of Na+, K+, Ca2+ ions and pH in artificial tear fluid were matched with the known ion concentrations within the physiological range. The microfluidic system was tested with samples having different ionic concentrations, demonstrating the feasibility for the detection of early-stage dry eye, differential diagnosis of dry eye sub-types, and their severity staging. PMID:28207920
Systems engineering and integration: Cost estimation and benefits analysis
NASA Technical Reports Server (NTRS)
Dean, ED; Fridge, Ernie; Hamaker, Joe
1990-01-01
Space Transportation Avionics hardware and software cost has traditionally been estimated in Phase A and B using cost techniques which predict cost as a function of various cost predictive variables such as weight, lines of code, functions to be performed, quantities of test hardware, quantities of flight hardware, design and development heritage, complexity, etc. The output of such analyses has been life cycle costs, economic benefits and related data. The major objectives of Cost Estimation and Benefits analysis are twofold: (1) to play a role in the evaluation of potential new space transportation avionics technologies, and (2) to benefit from emerging technological innovations. Both aspects of cost estimation and technology are discussed here. The role of cost analysis in the evaluation of potential technologies should be one of offering additional quantitative and qualitative information to aid decision-making. The cost analysis process needs to be fully integrated into the design process in such a way that cost trades, optimizations and sensitivities are understood. Current hardware cost models tend to primarily use weights, functional specifications, quantities, design heritage and complexity as metrics to predict cost. Software models mostly use functionality, volume of code, heritage and complexity as cost descriptive variables. Basic research needs to be initiated to develop metrics more responsive to the trades which are required for future launch vehicle avionics systems. These would include cost estimating capabilities that are sensitive to technological innovations such as improved materials and fabrication processes, computer aided design and manufacturing, self checkout and many others. In addition to basic cost estimating improvements, the process must be sensitive to the fact that no cost estimate can be quoted without also quoting a confidence associated with the estimate.
In order to achieve this, better cost risk evaluation techniques are needed as well as improved usage of risk data by decision-makers. More and better ways to display and communicate cost and cost risk to management are required.
Pasta, D J; Taylor, J L; Henning, J M
1999-01-01
Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, the results can be much more powerful and useful than deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.
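The bootstrap-based probabilistic sensitivity analysis described above can be sketched as follows: patient-level cost/effect pairs are resampled with replacement, and the incremental net monetary benefit is recomputed on each replicate. The data, willingness-to-pay threshold, and replicate count below are hypothetical placeholders, not figures from the H. pylori model.

```python
import random
import statistics

random.seed(1)

# Hypothetical patient-level (cost, QALY) pairs for two strategies; in a real
# analysis these would come from the trial underlying the decision model.
n = 150
arm_a = [(random.gauss(1200, 300), random.gauss(0.70, 0.10)) for _ in range(n)]  # comparator
arm_b = [(random.gauss(1500, 300), random.gauss(0.76, 0.10)) for _ in range(n)]  # new therapy

def bootstrap_nmb(wtp=50_000, n_boot=2000):
    """Bootstrap the incremental net monetary benefit at willingness-to-pay
    wtp, resampling patients (cost and effect together) with replacement."""
    nmbs = []
    for _ in range(n_boot):
        a = [random.choice(arm_a) for _ in range(n)]
        b = [random.choice(arm_b) for _ in range(n)]
        d_cost = statistics.mean(c for c, _ in b) - statistics.mean(c for c, _ in a)
        d_eff = statistics.mean(e for _, e in b) - statistics.mean(e for _, e in a)
        nmbs.append(wtp * d_eff - d_cost)
    p_ce = sum(v > 0 for v in nmbs) / n_boot  # probability cost-effective at wtp
    return statistics.mean(nmbs), p_ce

mean_nmb, p_ce = bootstrap_nmb()
print(f"mean incremental NMB ${mean_nmb:,.0f}; P(cost-effective) = {p_ce:.2f}")
```

Resampling cost and effect jointly per patient preserves their correlation, which is the advantage the authors cite for the bootstrap over assumed theoretical distributions.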
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shahnam, Mehrdad; Gel, Aytekin; Subramaniyan, Arun K.
Adequate assessment of the uncertainties in modeling and simulation is becoming an integral part of simulation-based engineering design. The goal of this study is to demonstrate the application of non-intrusive Bayesian uncertainty quantification (UQ) methodology in multiphase (gas-solid) flows with experimental and simulation data, as part of our research efforts to determine the most suited approach for UQ of a bench-scale fluidized bed gasifier. UQ analysis was first performed on the available experimental data. Global sensitivity analysis performed as part of the UQ analysis shows that among the three operating factors, steam-to-oxygen ratio has the most influence on syngas composition in the bench-scale gasifier experiments. An analysis for forward propagation of uncertainties was performed, and results show that an increase in steam-to-oxygen ratio leads to an increase in H2 mole fraction and a decrease in CO mole fraction. These findings are in agreement with the ANOVA analysis performed in the reference experimental study. Another contribution, in addition to the UQ analysis, is an optimization-based approach to identifying the next best set of experimental samples, should the possibility of additional experiments arise. Hence, the surrogate models constructed as part of the UQ analysis are employed to improve the information gain and make incremental recommendations. In the second step, a series of simulations was carried out with the open-source computational fluid dynamics software MFiX to reproduce the experimental conditions, where three operating factors, i.e., coal flow rate, coal particle diameter, and steam-to-oxygen ratio, were systematically varied to understand their effect on the syngas composition. Bayesian UQ analysis was performed on the numerical results.
As part of the Bayesian UQ analysis, a global sensitivity analysis was performed on the simulation results, which shows that the predicted syngas composition is strongly affected not only by the steam-to-oxygen ratio (which was observed in experiments as well) but also by variation in the coal flow rate and particle diameter (which was not observed in experiments). The carbon monoxide mole fraction is underpredicted at lower steam-to-oxygen ratios and overpredicted at higher steam-to-oxygen ratios. The opposite trend is observed for the carbon dioxide mole fraction. These discrepancies are attributed either to excessive segregation of the phases, which leads to fuel-rich or fuel-lean regions, or to the selection of reaction models, since different reaction models and kinetics can lead to different syngas compositions throughout the gasifier. To improve the quality of the numerical models, the effects that uncertainties in the reaction models for gasification, char oxidation, carbon monoxide oxidation, and water-gas shift have on the syngas composition were investigated at different grid resolutions, along with the bed temperature. The global sensitivity analysis showed that, among the reaction models employed for water-gas shift, gasification, and char oxidation, the choice of water-gas shift reaction model has the greatest influence on syngas composition, with the gasification reaction model second. Syngas composition also shows a small sensitivity to the bed temperature. The hydrodynamic behavior of the bed did not change beyond a grid spacing of 18 times the particle diameter. However, the syngas concentration continued to be affected by the grid resolution down to 9 times the particle diameter. This is due to better resolution of the phasic interface between the gas and solid phases, which leads to stronger heterogeneous reactions.
This report is a compilation of three manuscripts published in peer-reviewed journals for the series of studies mentioned above.
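A global sensitivity analysis of the kind described in this study can be illustrated with a variance-based (Sobol) estimator on a toy response surface. The model function below is a hypothetical stand-in for the gasifier surrogate, with the first factor deliberately made dominant; it is not the MFiX model or its surrogate.

```python
import random

random.seed(2)

# Toy stand-in for a gasifier response surface (hypothetical, not the MFiX
# surrogate): output depends strongly on x1 (steam/O2-ratio analogue) and
# weakly on x2 (coal-rate analogue) and x3 (particle-diameter analogue).
def model(x1, x2, x3):
    return 0.25 + 0.10 * x1 + 0.02 * x1 * x2 + 0.005 * x3

def sobol_first_order(n=100_000, k=3):
    """First-order Sobol indices via the Saltelli (2010) pick-and-freeze
    estimator: S_i = E[f(B) * (f(AB_i) - f(A))] / Var(f(A))."""
    A = [[random.random() for _ in range(k)] for _ in range(n)]
    B = [[random.random() for _ in range(k)] for _ in range(n)]
    yA = [model(*x) for x in A]
    yB = [model(*x) for x in B]
    mean = sum(yA) / n
    var = sum((y - mean) ** 2 for y in yA) / n
    S = []
    for i in range(k):
        # AB_i: matrix A with its i-th column taken from B
        yABi = [model(*[b[j] if j == i else a[j] for j in range(k)])
                for a, b in zip(A, B)]
        S.append(sum(yb * (yab - ya)
                     for yb, yab, ya in zip(yB, yABi, yA)) / n / var)
    return S

S = sobol_first_order()
print("first-order indices:", [round(s, 2) for s in S])
```

A first-order index near 1 for a factor means that factor alone explains nearly all of the output variance, which is how the study ranked the water-gas shift model and the steam-to-oxygen ratio as dominant.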
Design analysis of an MPI human functional brain scanner
Mason, Erica E.; Cooley, Clarissa Z.; Cauley, Stephen F.; Griswold, Mark A.; Conolly, Steven M.; Wald, Lawrence L.
2017-01-01
MPI’s high sensitivity makes it a promising modality for imaging brain function. Functional contrast is proposed based on blood SPION concentration changes due to Cerebral Blood Volume (CBV) increases during activation, a mechanism utilized in fMRI studies. MPI offers the potential for a direct and more sensitive measure of SPION concentration, and thus CBV, than fMRI. As such, fMPI could surpass fMRI in sensitivity, enhancing the scientific and clinical value of functional imaging. As human-sized MPI systems have not been attempted, we assess the technical challenges of scaling MPI from rodent to human brain. We use a full-system MPI simulator to test arbitrary hardware designs and encoding practices, and we examine tradeoffs imposed by constraints that arise when scaling to human size as well as safety constraints (PNS and central nervous system stimulation) not considered in animal scanners, thereby estimating spatial resolutions and sensitivities achievable with current technology. Using a projection FFL MPI system, we examine coil hardware options and their implications for sensitivity and spatial resolution. We estimate that an fMPI brain scanner is feasible, although with reduced sensitivity (20×) and spatial resolution (5×) compared to existing rodent systems. Nonetheless, it retains sufficient sensitivity and spatial resolution to make it an attractive future instrument for studying the human brain; additional technical innovations can result in further improvements. PMID:28752130
Applying causal mediation analysis to personality disorder research.
Walters, Glenn D
2018-01-01
This article is designed to address fundamental issues in the application of causal mediation analysis to research on personality disorders. Causal mediation analysis is used to identify mechanisms of effect by testing variables as putative links between the independent and dependent variables. As such, it would appear to have relevance to personality disorder research. It is argued that proper implementation of causal mediation analysis requires that investigators take several factors into account. These factors are discussed under 5 headings: variable selection, model specification, significance evaluation, effect size estimation, and sensitivity testing. First, care must be taken when selecting the independent, dependent, mediator, and control variables for a mediation analysis. Some variables make better mediators than others and all variables should be based on reasonably reliable indicators. Second, the mediation model needs to be properly specified. This requires that the data for the analysis be prospectively or historically ordered and possess proper causal direction. Third, it is imperative that the significance of the identified pathways be established, preferably with a nonparametric bootstrap resampling approach. Fourth, effect size estimates should be computed or competing pathways compared. Finally, investigators employing the mediation method are advised to perform a sensitivity analysis. Additional topics covered in this article include parallel and serial multiple mediation designs, moderation, and the relationship between mediation and moderation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
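The bootstrap resampling approach recommended above for testing an indirect effect can be sketched as follows: the indirect effect is estimated as the product of the X-to-M slope and the M-to-Y slope adjusted for X (via the Frisch-Waugh residual trick), and a percentile confidence interval is obtained by resampling whole cases. The data-generating values are hypothetical.

```python
import random
import statistics

random.seed(3)

# Simulated data with a known mediation structure (hypothetical variables):
# X -> M (a-path 0.5), M -> Y (b-path 0.4), plus a direct X -> Y path (0.2).
n = 500
X = [random.gauss(0, 1) for _ in range(n)]
M = [0.5 * x + random.gauss(0, 1) for x in X]
Y = [0.4 * m + 0.2 * x + random.gauss(0, 1) for x, m in zip(X, M)]

def slope(xs, ys):
    """Least-squares slope of ys regressed on xs."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

def residuals(xs, ys):
    """Residuals of ys after removing its linear dependence on xs."""
    b, mx, my = slope(xs, ys), statistics.mean(xs), statistics.mean(ys)
    return [y - my - b * (x - mx) for x, y in zip(xs, ys)]

def indirect_effect(X, M, Y):
    a = slope(X, M)                               # a-path: X -> M
    b = slope(residuals(X, M), residuals(X, Y))   # b-path: M -> Y adjusting for X
    return a * b

est = indirect_effect(X, M, Y)

# Percentile bootstrap CI, resampling whole cases with replacement
boots = []
for _ in range(1000):
    idx = [random.randrange(n) for _ in range(n)]
    boots.append(indirect_effect([X[i] for i in idx],
                                 [M[i] for i in idx],
                                 [Y[i] for i in idx]))
boots.sort()
lo, hi = boots[25], boots[974]
print(f"indirect effect {est:.3f}, 95% bootstrap CI ({lo:.3f}, {hi:.3f})")
```

A CI excluding zero is the nonparametric significance evidence the article recommends, without assuming the product a*b is normally distributed.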
NASA Astrophysics Data System (ADS)
Penn, Jay P.
1996-03-01
It is generally believed by those skilled in launch system design that Single-Stage-To-Orbit (SSTO) designs are more technically challenging, more performance sensitive, and yield larger lift-off weights than do Two-Stage-To-Orbit (TSTO) designs offering similar payload delivery capability. Without additional insight into the other considerations which drive the development, recurring costs, operability, and reliability of a launch fleet, an analyst may easily conclude that the higher-performing, less sensitive TSTO designs yield a better solution to achieving low-cost payload delivery. This limited insight could justify an argument to eliminate the X-33 SSTO technology/demonstration development effort and proceed directly to less risky TSTO designs. Insight into real-world design considerations of launch vehicles makes the choice of SSTO vs. TSTO much less clear. The presentation addresses a more comprehensive evaluation of the general class of SSTO and TSTO concepts, including pure SSTO, augmented SSTO, Siamese Twin, and pure TSTO designs. The assessment considers vehicle performance and scaling relationships which characterize real vehicle designs. It also addresses technology requirements, operations and supportability, cost implications, and sensitivities. Results of the assessment indicate that the trade space between various SSTO and TSTO design approaches is complex and not yet fully understood. The results of the X-33 technology demonstrators, as well as additional parametric analysis, are required to better define the relative performance and costs of the various design approaches. The results also indicate that, with modern technologies and today's better understanding of vehicle design considerations, the perception that SSTOs are dramatically heavier and more sensitive than TSTO designs is more myth than reality.
Application of a fluorometric microplate algal toxicity assay for riverine periphytic algal species.
Nagai, Takashi; Taya, Kiyoshi; Annoh, Hirochica; Ishihara, Satoru
2013-08-01
Although riverine periphytic algae attached to riverbed gravel are dominant species in flowing rivers, toxicity data for them are limited because of the difficulty of cell culture and assays. Moreover, it is well known that sensitivity to pesticides differs markedly among species, and therefore toxicity data for multiple species need to be obtained efficiently. In this study, we investigated the use of a fluorometric microplate toxicity assay for testing periphytic algal species. We selected five candidate test algal species: Desmodesmus subspicatus, Achnanthidium minutissimum, Navicula pelliculosa, Nitzschia palea, and Pseudanabaena galeata. The selected species are dominant in rivers, cover a wide range of taxa, and represent the actual species composition. Other additional species were also used to compare the sensitivity and suitability of the microplate assay. A 96-well microplate was used as a test chamber, and algal growth was measured by in-vivo fluorescence. Assay conditions using the microplate and fluorometric measurement were established, and sensitivities to 3,5-dichlorophenol, a reference substance, were assayed. The 50 percent effect concentrations (EC50s) obtained by the fluorometric microplate assay were consistent with those obtained by the conventional Erlenmeyer flask assay conducted in this study. Moreover, the EC50 values for 3,5-dichlorophenol were within the confidence intervals reported in the literature. These results support the validity of our microplate assay. Species sensitivity distribution (SSD) analysis was conducted using the EC50s of the five species. The SSD was found to be similar to the SSD obtained using the additionally tested species, suggesting that the SSD based on the five species largely represents algal sensitivity. Our results provide a useful and efficient method for higher-tier probabilistic ecological risk assessment of pesticides. Copyright © 2013 Elsevier Inc. All rights reserved.
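The SSD step can be sketched as a log-normal fit to the species EC50s, from which the hazardous concentration for 5% of species (HC5) follows. The EC50 values below are hypothetical placeholders, not the study's results.

```python
import math
import statistics

# Hypothetical EC50 values (mg/L) for the five test species; the study's
# actual EC50s are not reproduced here.
ec50 = {
    "Desmodesmus subspicatus": 12.0,
    "Achnanthidium minutissimum": 6.5,
    "Navicula pelliculosa": 9.8,
    "Nitzschia palea": 4.2,
    "Pseudanabaena galeata": 15.1,
}

# Species sensitivity distribution: fit a log-normal to log10(EC50) and take
# its 5th percentile as HC5, the concentration hazardous to 5% of species.
logs = [math.log10(v) for v in ec50.values()]
mu, sd = statistics.mean(logs), statistics.stdev(logs)
z05 = -1.6449  # 5th percentile of the standard normal distribution
hc5 = 10 ** (mu + z05 * sd)
print(f"HC5 = {hc5:.2f} mg/L")
```

With only five species, the HC5 estimate carries wide uncertainty; in practice a confidence limit on HC5 (e.g., by bootstrap) would accompany the point estimate.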
NASA Astrophysics Data System (ADS)
Wang, Evelyn H.; Appulage, Dananjaya Kalu; McAllister, Erin A.; Schug, Kevin A.
2017-09-01
Recently, direct intact protein quantitation using triple quadrupole mass spectrometry (QqQ-MS) and multiple reaction monitoring (MRM) was demonstrated (J. Am. Soc. Mass Spectrom. 27, 886-896 (2016)). Even though QqQ-MS is known to provide extraordinary detection sensitivity for quantitative analysis, we found that intact proteins exhibited less than 5% ion transmission from the first quadrupole to the third quadrupole mass analyzer in the presence of zero collision energy (ZCE). With the goal of enhancing intact protein quantitation sensitivity, ion scattering effects, proton transfer effects, and mass filter resolution widths were examined for their contributions to the lost signal. Protein standards myoglobin and ubiquitin, along with small molecules reserpine and vancomycin, were analyzed together with various collision-induced dissociation (CID) gases (N2, He, and Ar) at different gas pressures. Mass resolution settings played a significant role in reducing ion transmission signal. By narrowing the mass resolution window by 0.35 m/z on each side, roughly 75%-90% of the ion signal was lost. The multiply charged proteins experienced additional proton transfer effects, corresponding to a 10-fold signal reduction. A study of increased method sensitivity was also conducted with various MRM summation techniques. Although the degree of enhancement was analyte-dependent, an up to 17-fold increase in sensitivity was observed for ubiquitin using a summation of multiple MRM transitions. Biological matrices, human urine and equine plasma, were spiked with proteins to demonstrate the specificity of the method. This study provides additional insight into optimizing the use and sensitivity of QqQ-MS for intact protein quantification.
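The gain from summing multiple MRM transitions can be illustrated with synthetic traces: co-adding N channels that share the analyte signal but carry independent noise improves signal-to-noise by roughly the square root of N. The values are simulated, not instrument data, and as the study notes, real transition-to-transition gains are analyte-dependent.

```python
import random
import statistics

random.seed(4)

# Synthetic illustration (not instrument data): each MRM transition trace is
# the same analyte signal plus independent unit-variance noise. Summing N
# such traces improves SNR by about sqrt(N).
def snr_gain(n_transitions, n_points=20_000, signal=1.0):
    """Empirical SNR of a summed trace relative to a single transition."""
    single = [signal + random.gauss(0, 1) for _ in range(n_points)]
    summed = [n_transitions * signal
              + sum(random.gauss(0, 1) for _ in range(n_transitions))
              for _ in range(n_points)]
    snr_single = statistics.mean(single) / statistics.stdev(single)
    snr_summed = statistics.mean(summed) / statistics.stdev(summed)
    return snr_summed / snr_single

gain = snr_gain(4)
print(f"SNR gain from summing 4 transitions: {gain:.2f} (sqrt(4) = 2)")
```

Real gains can exceed this idealized sqrt(N) when weaker transitions contribute disproportionate signal, or fall short when their noise is correlated.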
Analysis of long-term ionizing radiation effects in bipolar transistors
NASA Technical Reports Server (NTRS)
Stanley, A. G.; Martin, K. E.
1978-01-01
The ionizing radiation effects of electrons on bipolar transistors have been analyzed using the data base from the Voyager project. The data were subjected to statistical analysis, leading to a quantitative characterization of the product and to data on confidence limits which will be useful for circuit design purposes. These newly-developed methods may form the basis for a radiation hardness assurance system. In addition, an attempt was made to identify the causes of the large variations in the sensitivity observed on different product lines. This included a limited construction analysis and a determination of significant design and processes variables, as well as suggested remedies for improving the tolerance of the devices to radiation.
NASA Astrophysics Data System (ADS)
Camilo, Ana E. F.; Grégio, André; Santos, Rafael D. C.
2016-05-01
Malware detection may be accomplished through the analysis of their infection behavior. To do so, dynamic analysis systems run malware samples and extract their operating system activities and network traffic. This traffic may represent malware accessing external systems, either to steal sensitive data from victims or to fetch other malicious artifacts (configuration files, additional modules, commands). In this work, we propose the use of visualization as a tool to identify compromised systems based on correlating malware communications in the form of graphs and finding isomorphisms between them. We produced graphs from over 6 thousand distinct network traffic files captured during malware execution and analyzed the existing relationships among malware samples and IP addresses.
Advances in Mid-Infrared Spectroscopy for Chemical Analysis
NASA Astrophysics Data System (ADS)
Haas, Julian; Mizaikoff, Boris
2016-06-01
Infrared spectroscopy in the 3-20 μm spectral window has evolved from a routine laboratory technique into a state-of-the-art spectroscopy and sensing tool by benefitting from recent progress in increasingly sophisticated spectra acquisition techniques and advanced materials for generating, guiding, and detecting mid-infrared (MIR) radiation. Today, MIR spectroscopy provides molecular information with trace to ultratrace sensitivity, fast data acquisition rates, and high spectral resolution catering to demanding applications in bioanalytics, for example, and to improved routine analysis. In addition to advances in miniaturized device technology without sacrificing analytical performance, selected innovative applications for MIR spectroscopy ranging from process analysis to biotechnology and medical diagnostics are highlighted in this review.
Ferko, Nicole; Ferrante, Giuseppe; Hasegawa, James T; Schikorr, Tanya; Soleas, Ireena M; Hernandez, John B; Sabaté, Manel; Kaiser, Christoph; Brugaletta, Salvatore; de la Torre Hernandez, Jose Maria; Galatius, Soeren; Cequier, Angel; Eberli, Franz; de Belder, Adam; Serruys, Patrick W; Valgimigli, Marco
2017-05-01
Second-generation drug eluting stents (DES) may reduce costs and improve clinical outcomes compared to first-generation DES with improved cost-effectiveness when compared to bare metal stents (BMS). We aimed to conduct an economic evaluation of a cobalt-chromium everolimus eluting stent (Co-Cr EES) compared with BMS in percutaneous coronary intervention (PCI). To conduct a cost-effectiveness analysis (CEA) of a cobalt-chromium everolimus eluting stent (Co-Cr EES) versus BMS in PCI. A Markov state transition model with a 2-year time horizon was applied from a US Medicare setting with patients undergoing PCI with Co-Cr EES or BMS. Baseline characteristics, treatment effects, and safety measures were taken from a patient level meta-analysis of 5 RCTs (n = 4,896). The base-case analysis evaluated stent-related outcomes; a secondary analysis considered the broader set of outcomes reported in the meta-analysis. The base-case and secondary analyses reported an additional 0.018 and 0.013 quality-adjusted life years (QALYs) and cost savings of $236 and $288, respectively with Co-Cr EES versus BMS. Results were robust to sensitivity analyses and were most sensitive to the price of clopidogrel. In the probabilistic sensitivity analysis, Co-Cr EES was associated with a greater than 99% chance of being cost saving or cost effective (at a cost per QALY threshold of $50,000) versus BMS. Using data from a recent patient level meta-analysis and contemporary cost data, this analysis found that PCI with Co-Cr EES is more effective and less costly than PCI with BMS. © 2016 The Authors. Catheterization and Cardiovascular Interventions Published by Wiley Periodicals, Inc. © 2016 The Authors. Catheterization and Cardiovascular Interventions Published by Wiley Periodicals, Inc.
Study of multiple hologram recording in lithium niobate
NASA Technical Reports Server (NTRS)
Gaylord, T. K.; Callen, W. R.
1976-01-01
The results of a number of theoretical and experimental studies relating to multiple hologram recording in lithium niobate are reported. The analysis of holographic gratings stored in lithium niobate has been extended to cover a more realistic range of physical situations. A new successful dynamic (feedback) theory for describing recording, nondestructive reading, erasure, enhancement, and angular sensitivity has been developed. In addition, the possible architectures of mass data storage systems have been studied.
Bilingualism delays age at onset of dementia, independent of education and immigration status.
Mortimer, James A
2014-05-27
Editors' Note: Mortimer argues that important confounding variables may have biased the conclusion by Alladi et al. on the role of bilingualism in delaying the onset of dementia. Following Mortimer’s comments, Alladi et al. conducted additional analysis of their data to support their conclusion. The attitude of "close enough" is not appropriate when determining brain death. Stadlan comments and supports Frank’s call for action regarding this sensitive issue.
Sensitivity Analysis of the Seakeeping Behavior of Trimaran Ships
2003-12-01
Architects and Marine Engineers; 1967. 827 p. [18] Lloyd ARJM. Seakeeping: Ship Behavior in Rough Weather. West Yorkshire: Ellis Horwood Ltd; 1989...INCAT Australia Pty Ltd. This design features side hulls with a very low freeboard at their bows and a definite, above-water center bow. Additional...composite ship, uses an Air Cushion Catamaran (ACC) design, which is an advanced variant of SES technology. Most recently, a co-operative design team that
Host Genes and Resistance/Sensitivity to Military Priority Pathogens
2012-06-01
must be performed before fruitful linkage analysis can be performed with each of these parameters. We have also begun to measure the concentrations of...resistant to this pathogen). • Using parental strain we have identified at least eleven additional phenotypes that will allow fruitful linkage... affect severity of oviduct infection (Figure 2). 3.3 BXD strains colony for DoD select agents We maintained more than 400 cages for DoD
Microanalysis of plant cell wall polysaccharides.
Obel, Nicolai; Erben, Veronika; Schwarz, Tatjana; Kühnel, Stefan; Fodor, Andrea; Pauly, Markus
2009-09-01
Oligosaccharide Mass Profiling (OLIMP) allows a fast and sensitive assessment of cell wall polymer structure when coupled with Matrix Assisted Laser Desorption Ionisation Time Of Flight Mass Spectrometry (MALDI-TOF MS). The short time required for sample preparation and analysis makes possible the study of a wide range of plant organs, revealing a high degree of heterogeneity in the substitution pattern of wall polymers such as the cross-linking glycan xyloglucan and the pectic polysaccharide homogalacturonan. The high sensitivity of MALDI-TOF allows the use of small amounts of samples, thus making it possible to investigate the wall structure of single cell types when material is collected by such methods as laser micro-dissection. As an example, the analysis of the xyloglucan structure in the leaf cell types outer epidermis layer, entire epidermis cell layer, palisade mesophyll cells, and vascular bundles were investigated. OLIMP is amenable to in situ wall analysis, where wall polymers are analyzed on unprepared plant tissue itself without first isolating cell walls. In addition, OLIMP enables analysis of wall polymers in Golgi-enriched fractions, the location of nascent matrix polysaccharide biosynthesis, enabling separation of the processes of wall biosynthesis versus post-deposition apoplastic metabolism. These new tools will make possible a semi-quantitative analysis of the cell wall at an unprecedented level.
NASA Technical Reports Server (NTRS)
Otterson, D. A.; Seng, G. T.
1984-01-01
A new high-performance liquid chromatographic (HPLC) method for group-type analysis of middistillate fuels is described. It uses a refractive index detector and standards that are prepared by reacting a portion of the fuel sample with sulfuric acid. A complete analysis of a middistillate fuel for saturates and aromatics (including the preparation of the standard) requires about 15 min if standards for several fuels are prepared simultaneously. From model fuel studies, the method was found to be accurate to within 0.4 vol% saturates or aromatics, and provides a precision of ±0.4 vol%. Olefin determinations require an additional 15 min of analysis time. However, this determination is needed only for those fuels displaying a significant olefin response at 200 nm (obtained routinely during the saturates/aromatics analysis procedure). The olefin determination uses the responses of the olefins and the corresponding saturates, as well as the average value of their refractive index sensitivity ratios (1.1). Studies indicated that, although the relative error in the olefin results could reach 10 percent by using this average sensitivity ratio, it was 5 percent for the fuels used in this study. Olefin concentrations as low as 0.1 vol% have been determined using this method.
de Ruiter, C. M.; van der Veer, C.; Leeflang, M. M. G.; Deborggraeve, S.; Lucas, C.
2014-01-01
Molecular methods have been proposed as highly sensitive tools for the detection of Leishmania parasites in visceral leishmaniasis (VL) patients. Here, we evaluate the diagnostic accuracy of these tools in a meta-analysis of the published literature. The selection criteria were original studies that evaluate the sensitivities and specificities of molecular tests for diagnosis of VL, adequate classification of study participants, and the absolute numbers of true positives and negatives derivable from the data presented. Forty studies met the selection criteria, including PCR, real-time PCR, nucleic acid sequence-based amplification (NASBA), and loop-mediated isothermal amplification (LAMP). The sensitivities of the individual studies ranged from 29 to 100%, and the specificities ranged from 25 to 100%. The pooled sensitivity of PCR in whole blood was 93.1% (95% confidence interval [CI], 90.0 to 95.2), and the specificity was 95.6% (95% CI, 87.0 to 98.6). The specificity was significantly lower in consecutive studies, at 63.3% (95% CI, 53.9 to 71.8), due either to true-positive patients not being identified by parasitological methods or to the number of asymptomatic carriers in areas of endemicity. PCR for patients with HIV-VL coinfection showed high diagnostic accuracy in buffy coat and bone marrow, ranging from 93.1 to 96.9%. Molecular tools are highly sensitive assays for Leishmania detection and may contribute as an additional test in the algorithm, together with a clear clinical case definition. We observed wide variety in reference standards and study designs and now recommend consecutively designed studies. PMID:24829226
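The pooled sensitivities and specificities above are derived from per-study 2×2 tables of test results against the reference standard. As a reminder of the underlying arithmetic, a minimal sketch with made-up counts (not data from any study in the review):

```python
# Hypothetical 2x2 table for one diagnostic accuracy study (illustrative only)
tp, fn = 93, 7    # diseased patients: test positive / test negative
tn, fp = 88, 12   # non-diseased participants: test negative / test positive

sensitivity = tp / (tp + fn)   # proportion of true cases the test detects
specificity = tn / (tn + fp)   # proportion of non-cases the test rules out

print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
```

A meta-analysis then pools such per-study estimates (typically with a bivariate random-effects model) to obtain the summary values and confidence intervals quoted above.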
NASA Astrophysics Data System (ADS)
Jiang, Shan; Wang, Fang; Shen, Luming; Liao, Guiping; Wang, Lin
2017-03-01
Spectrum technology has been widely used in crop non-destructive testing diagnosis for crop information acquisition. Since spectra cover a wide range of bands, it is of critical importance to extract the sensitive bands. In this paper, we propose a methodology to extract the sensitive spectrum bands of rapeseed using multiscale multifractal detrended fluctuation analysis. Our obtained sensitive bands are relatively robust in the range of 534 nm-574 nm. Further, by using the multifractal parameter (Hurst exponent) of the extracted sensitive bands, we propose a prediction model to forecast the soil and plant analyzer development (SPAD) values, often used as a parameter to indicate the chlorophyll content, and an identification model to distinguish the different planting patterns. Three vegetation indices (VIs) based on previous work are used for comparison. Three evaluation indicators, namely, the root mean square error, the correlation coefficient, and the relative error employed in the SPAD value prediction model all demonstrate that our Hurst exponent has the best performance. Four rapeseed compound planting factors, namely, seeding method, planting density, fertilizer type, and weed control method are considered in the identification model. The Youden indices calculated by the random decision forest method and the K-nearest neighbor method show that our Hurst exponent is superior to the other three VIs, and to their combination, for the factor of seeding method. In addition, there is no significant difference among the five features for the other three planting factors. This interesting finding suggests that the choice between transplanting and direct seeding makes a substantial difference in the growth of rapeseed.
Carey, Robert J; DePalma, Gail; Damianopoulos, Ernest
2003-07-01
An animal's response to novelty has been suggested to be a predictor of its response to drugs of abuse. However, the possible relationship between an individual's behavioral response to novelty and its subsequent behavioral response to cocaine has not been subjected to a detailed correlational analysis. The aim was to use a repeated cocaine treatment protocol to induce cocaine sensitization and conditioned cocaine locomotor stimulant effects, and to assess the relationship of these effects to pre-cocaine locomotor behavior in a novel environment. In two separate experiments, rats were given a 20-min test in a novel open-field environment. Subsequently, the rats were given a series of additional tests in conjunction with either saline or cocaine (10 mg/kg) treatments to induce cocaine sensitization and conditioned effects. The repeated cocaine treatments induced cocaine behavioral sensitization and conditioned effects. Correlational analyses showed that the initial 20-min novel environment test proved to be a strong predictor of an animal's subsequent saline activity level but did not predict the rats' acute and sensitized behavioral response to cocaine. When change in activity was used as the dependent variable, initial activity level was reliably negatively correlated with activity changes on cocaine tests as well as cocaine conditioning tests. The negative correlation between initial activity in a novel environment and the change in activity induced by cocaine indicates that low responders to environmental novelty tend to have the strongest response to cocaine. These results appear consistent with the classic initial-value and response-rate-dependent analyses of stimulant drug effects.
Analysis of electrical tomography sensitive field based on multi-terminal network and electric field
NASA Astrophysics Data System (ADS)
He, Yongbo; Su, Xingguo; Xu, Meng; Wang, Huaxiang
2010-08-01
Electrical tomography (ET) aims at the non-intrusive study of the conductivity/permittivity distribution of the field of interest via boundary voltage/current measurements. The sensor is usually regarded as an electric field, and the finite element method (FEM) is commonly used to calculate the sensitivity matrix and to optimize the sensor architecture. However, since only lumped circuit parameters can be measured by the data-acquisition electronics, it is meaningful to treat the sensor as a multi-terminal network. Two types of multi-terminal network, with common-node and common-loop topologies, are introduced. Obtaining more independent measurements and producing a more uniform current distribution are the two main ways to minimize the inherent ill-posedness. By exploring the relationships of the network matrices, a general formula is proposed for the first time to calculate the number of independent measurements. Additionally, the sensitivity distribution is analyzed with FEM. As a result, the quasi-opposite mode, an optimal single-source excitation mode that has the advantages of a more uniform sensitivity distribution and more independent measurements, is proposed.
Fluorescence quencher improves SCANSYSTEM for rapid bacterial detection.
Schmidt, M; Hourfar, M K; Wahl, A; Nicol, S-B; Montag, T; Roth, W K; Seifried, E
2006-05-01
The optimized SCANSYSTEM could detect contaminated platelet products within 24 h. However, the system's sensitivity was reduced by a high fluorescence background even in sterile samples, which made well-trained staff necessary for confirmation of microscope results. A new protocol for the optimized SCANSYSTEM with the addition of a fluorescence quencher was evaluated. Pooled platelet concentrates contaminated with five transfusion-relevant bacterial strains were tested in a blind study. In conjunction with new analysis software, the new quenching dye was able to significantly reduce unspecific background fluorescence. Sensitivity was best for Bacillus cereus and Escherichia coli (3 CFU/ml). The application of a fluorescence quencher enables automated discrimination of positive and negative test results in 60% of all analysed samples.
Aerodynamic parameter studies and sensitivity analysis for rotor blades in axial flight
NASA Technical Reports Server (NTRS)
Chiu, Y. Danny; Peters, David A.
1991-01-01
The analytical capability is offered for aerodynamic parametric studies and sensitivity analyses of rotary wings in axial flight by using a 3-D undistorted wake model in curved lifting-line theory. The governing equations are solved by both the Multhopp interpolation technique and the vortex lattice method. The singularity from the bound vortices is eliminated through Hadamard's finite-part concept. Good numerical agreement between both analytical methods and finite difference methods is found. Parametric studies were made to assess the effects of several shape variables on aerodynamic loads. It is found, e.g., that a rotor blade with out-of-plane and in-plane curvature can theoretically increase lift in the inboard and outboard regions, respectively, without introducing additional induced drag.
Influence of surface oxides on hydrogen-sensitive Pd:GaN Schottky diodes
NASA Astrophysics Data System (ADS)
Weidemann, O.; Hermann, M.; Steinhoff, G.; Wingbrant, H.; Lloyd Spetz, A.; Stutzmann, M.; Eickhoff, M.
2003-07-01
The hydrogen response of Pd:GaN Schottky diodes, prepared by in situ and ex situ deposition of catalytic Pd Schottky contacts on Si-doped GaN layers is compared. Ex situ fabricated devices show a sensitivity towards molecular hydrogen, which is about 50 times higher than for in situ deposited diodes. From the analysis of these results, we conclude that adsorption sites for atomic hydrogen in Pd:GaN sensors are provided by an oxidic intermediate layer. In addition, in situ deposited Pd Schottky contacts reveal lower barrier heights and drastically higher reverse currents. We suggest that the passivation of the GaN surface before ex situ deposition of Pd also results in quenching of leakage paths caused by structural defects.
Recent Advances in Clinical Glycoproteomics of Immunoglobulins (Igs).
Plomp, Rosina; Bondt, Albert; de Haan, Noortje; Rombouts, Yoann; Wuhrer, Manfred
2016-07-01
Antibody glycosylation analysis has seen methodological progress resulting in new findings with regard to antibody glycan structure and function in recent years. For example, antigen-specific IgG glycosylation analysis is now applicable for clinical samples because of the increased sensitivity of measurements, and this has led to new insights in the relationship between IgG glycosylation and various diseases. Furthermore, many new methods have been developed for the purification and analysis of IgG Fc glycopeptides, notably multiple reaction monitoring for high-throughput quantitative glycosylation analysis. In addition, new protocols for IgG Fab glycosylation analysis were established revealing autoimmune disease-associated changes. Functional analysis has shown that glycosylation of IgA and IgE is involved in transport across the intestinal epithelium and receptor binding, respectively. © 2016 by The American Society for Biochemistry and Molecular Biology, Inc.
Hackett, Daniel A.; Baker, Michael K.
2016-01-01
The purpose of this study was to examine the effect of regular exercise training on insulin sensitivity in adults with type 2 diabetes mellitus (T2DM) using the pooled data available from randomised controlled trials. In addition, we sought to determine whether short-term periods of physical inactivity diminish the exercise-induced improvement in insulin sensitivity. Eligible trials included exercise interventions that involved ≥3 exercise sessions, and reported a dynamic measurement of insulin sensitivity. There was a significant pooled effect size (ES) for the effect of exercise on insulin sensitivity (ES, –0.588; 95% confidence interval [CI], –0.816 to –0.359; P<0.001). Of the 14 studies included for meta-analyses, nine studies reported the time of data collection from the last exercise bout. There was a significant improvement in insulin sensitivity in favour of exercise versus control between 48 and 72 hours after exercise (ES, –0.702; 95% CI, –1.392 to –0.012; P=0.046); and this persisted when insulin sensitivity was measured more than 72 hours after the last exercise session (ES, –0.890; 95% CI, –1.675 to –0.105; P=0.026). Regular exercise has a significant benefit on insulin sensitivity in adults with T2DM and this may persist beyond 72 hours after the last exercise session. PMID:27535644
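The pooled effect sizes above are reported with 95% confidence intervals. Under the usual normal approximation, the standard error and z-statistic implied by such an interval can be recovered directly, which is a useful consistency check on reported results (a generic sketch using the pooled ES quoted above, not the authors' computation):

```python
# Reported pooled effect: ES = -0.588, 95% CI [-0.816, -0.359]
es, lo, hi = -0.588, -0.816, -0.359

# Under a normal approximation, the CI half-width equals 1.96 * SE
se = (hi - lo) / (2 * 1.96)
z = es / se   # Wald z-statistic; |z| > 3.29 corresponds to P < 0.001

print(f"SE = {se:.4f}, z = {z:.2f}")
```

The recovered z of about -5.0 is consistent with the reported P<0.001.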
Influence of Functional Groups on the Viscosity of Organic Aerosol.
Rothfuss, Nicholas E; Petters, Markus D
2017-01-03
Organic aerosols can exist in highly viscous or glassy phase states. A viscosity database for organic compounds with atmospherically relevant functional groups is compiled and analyzed to quantify the influence of number and location of functional groups on viscosity. For weakly functionalized compounds the trend in viscosity sensitivity to functional group addition is carboxylic acid (COOH) ≈ hydroxyl (OH) > nitrate (ONO2) > carbonyl (CO) ≈ ester (COO) > methylene (CH2). Sensitivities to group addition increase with greater levels of prior functionalization and decreasing temperature. For carboxylic acids a sharp increase in sensitivity is likely present already at the second addition at room temperature. Ring structures increase viscosity relative to linear structures. Sensitivities are correlated with analogously derived sensitivities of vapor pressure reduction. This may be exploited in the future to predict viscosity in numerical models by piggybacking on schemes that track the evolution of organic aerosol volatility with age.
Kane, Elisabeth J; Braunstein, Kara; Ollendick, Thomas H.; Muris, Peter
2014-01-01
The relations of fear to anxiety sensitivity, control beliefs, and maternal overprotection were examined in 126 7- to 13-year-old clinically referred children with specific phobias. Results indicated that anxiety sensitivity and control beliefs were significant predictors of children’s fear levels, accounting for approximately 48% of the total variance. Unexpectedly, age, gender, and maternal overprotection did not emerge as significant predictors of fear in the overall sample. In subsequent analyses, anxiety sensitivity was found to be a consistent, significant predictor for both girls and boys, for both younger and older children, and for children with and without an additional anxiety disorder diagnosis. Control beliefs were only a significant predictor for girls, younger children, and children with an additional anxiety diagnosis. Maternal overprotection was not a significant predictor for any group. Children with an additional anxiety disorder diagnosis had higher levels of fear, anxiety sensitivity, and maternal overprotection, as well as lower levels of control beliefs than the non-additional anxiety disorder subgroup. Future directions and clinical implications are explored. PMID:26273182
NASA Astrophysics Data System (ADS)
Waliczek, Mateusz; Kijewska, Monika; Rudowska, Magdalena; Setner, Bartosz; Stefanowicz, Piotr; Szewczuk, Zbigniew
2016-11-01
Mass spectrometric analysis of trace amounts of peptides may be problematic due to insufficient ionization efficiency, resulting in limited sensitivity. One possible way to overcome this problem is the application of ionization enhancers. Herein we developed new ionization markers based on 2,4,6-triphenylpyridinium and 2,4,6-trimethylpyridinium salts. The use of an inexpensive and commercially available pyrylium salt allows selective derivatization of primary amino groups, especially those that are sterically unhindered, such as the ɛ-amino group of lysine. In MS/MS experiments, the 2,4,6-triphenylpyridinium-modified peptides generate an abundant protonated 2,4,6-triphenylpyridinium ion. This fragment is a promising reporter ion for multiple reaction monitoring (MRM) analysis. In addition, the fixed positive charge of the pyridinium group enhances the ionization efficiency. Other advantages of the proposed ionization enhancers are the simplicity of peptide derivatization and the possibility of convenient incorporation of isotopic labels into derivatized peptides.
NASA Astrophysics Data System (ADS)
Yun, Wanying; Lu, Zhenzhou; Jiang, Xian
2018-06-01
To efficiently execute variance-based global sensitivity analysis, the law of total variance over successive non-overlapping intervals is first proved, and an efficient space-partition sampling-based approach is then proposed in this paper. By partitioning the sample points of the output into different subsets according to the different inputs, the proposed approach can efficiently evaluate all the main effects concurrently from one group of sample points. In addition, there is no need to optimize the partition scheme in the proposed approach. The maximum length of the subintervals is decreased by increasing the number of sample points of the model input variables, which guarantees the convergence condition of the space-partition approach. Furthermore, a new interpretation of the idea of partitioning is illuminated from the perspective of the variance ratio function. Finally, three test examples and one engineering application are employed to demonstrate the accuracy, efficiency and robustness of the proposed approach.
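The core idea of estimating all main effects from a single sample by partitioning can be sketched as follows. This is a generic equal-width binned estimator of the first-order index Var(E[Y|Xi])/Var(Y) on a toy additive model, not the authors' exact scheme; the model and all parameter choices are illustrative:

```python
import numpy as np

def main_effect(x, y, bins=50):
    """Estimate the first-order sensitivity index Var(E[Y|X]) / Var(Y)
    by partitioning the samples of one input x into equal-width bins."""
    edges = np.linspace(x.min(), x.max(), bins + 1)
    idx = np.clip(np.digitize(x, edges) - 1, 0, bins - 1)
    y_mean = y.mean()
    var_cond = 0.0
    for b in range(bins):
        mask = idx == b
        if mask.any():
            # weight each bin by its sample fraction
            var_cond += mask.mean() * (y[mask].mean() - y_mean) ** 2
    return var_cond / y.var()

rng = np.random.default_rng(0)
x1, x2 = rng.uniform(size=(2, 100_000))
y = x1 + 0.5 * x2   # additive model with analytic indices S1 = 0.8, S2 = 0.2

print(main_effect(x1, y), main_effect(x2, y))
```

Note that the same sample {y} is reused for both indices; only the partitioning (by x1 or by x2) changes, which is what makes the one-sample evaluation of all main effects possible.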
Zou, Ling; Guo, Qian; Xu, Yi; Yang, Biao; Jiao, Zhuqing; Xiang, Jianbo
2016-04-29
Functional magnetic resonance imaging (fMRI) is an important tool in neuroscience for assessing connectivity and interactions between distant areas of the brain. To find and characterize the coherent patterns of brain activity as a means of identifying brain systems for a cognitive reappraisal of emotion task, both density-based k-means clustering and independent component analysis (ICA) methods can be applied to characterize the interactions between brain regions involved in cognitive reappraisal of emotion. Our results reveal that, compared with the ICA method, the density-based k-means clustering method provides higher clustering sensitivity. In addition, it is more sensitive to relatively weak functional connections. Thus, the study concludes that, in the process of receiving emotional stimuli, the most clearly activated areas are mainly distributed in the frontal lobe, cingulum and near the hypothalamus. Furthermore, the density-based k-means clustering method provides a more reliable basis for follow-up studies of brain functional connectivity.
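For reference, the base algorithm being compared is k-means clustering. A minimal sketch of plain Lloyd's iteration on toy 2-D data follows; the density-based initialization used in the study is not reproduced here, and the data are synthetic stand-ins rather than fMRI signals:

```python
import numpy as np

def kmeans(points, k, iters=100, seed=0):
    """Plain Lloyd's k-means: alternate nearest-centre assignment
    and centroid update until the centres stop moving."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest centre (squared Euclidean distance)
        labels = np.argmin(((points[:, None] - centers) ** 2).sum(-1), axis=1)
        new = np.array([points[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# two well-separated toy clusters standing in for "activation" patterns
rng = np.random.default_rng(1)
a = rng.normal([0.0, 0.0], 0.3, size=(50, 2))
b = rng.normal([5.0, 5.0], 0.3, size=(50, 2))
labels, centers = kmeans(np.vstack([a, b]), k=2)
```

On well-separated data like this, the two recovered clusters coincide with the two generating groups (up to label permutation).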
Laser-Induced Breakdown Spectroscopy of Cinematographic Film
NASA Astrophysics Data System (ADS)
Oujja, M.; Abrusci, C.; Gaspard, S.; Rebollar, E.; Amo, A. del; Catalina, F.; Castillejo, M.
Laser-induced breakdown spectroscopy (LIBS) was used to characterize the composition of black-and-white, silver-gelatine photographic films. LIB spectra of samples and reference gelatine (of various gel strengths, Bloom values 225 and 75 and crosslinking degrees) were acquired in vacuum by excitation at 266 nm. The elemental composition of the gelatine used in the upper protective layer and in the underlying emulsion is revealed by the stratigraphic analysis carried out by delivering successive pulses on the same spot of the sample. Silver (Ag) lines from the light-sensitive silver halide salts are accompanied by iron, lead and chrome lines. Fe and Pb are constituents of film developers and Cr is included in the hardening agent. The results demonstrate the analytical capacity of LIBS for study and classification of different gelatine types and the sensitivity of the technique to minor changes in gelatine composition. In addition LIBS analysis allows extracting important information on the chemicals used as developers and hardeners of archival cinematographic films.
A cost-benefit analysis on the specialization in departments of obstetrics and gynecology in Japan.
Shen, Junyi; Fukui, On; Hashimoto, Hiroyuki; Nakashima, Takako; Kimura, Tadashi; Morishige, Kenichiro; Saijo, Tatsuyoshi
2012-03-27
In April 2008, specialization of the departments of obstetrics and gynecology was implemented in the Sennan area of Osaka prefecture in Japan, with the aim of solving problems in the regional provision of obstetric services. Under this specialization, the departments of obstetrics and gynecology in two city hospitals were combined into one medical center, with one hospital in charge of the department of gynecology and the other operating the department of obstetrics. In this paper, we implement a cost-benefit analysis to evaluate the validity of this specialization. The benefit-cost ratio is estimated at 1.367 under a basic scenario, indicating that the specialization can generate a net benefit. In addition, in consideration of various kinds of future uncertainty, a number of sensitivity analyses are conducted. The results of these sensitivity analyses suggest that the specialization is valid in the sense that all the estimated benefit-cost ratios are above 1.0 in any case.
Schminke, G; Seubert, A
2000-02-01
An established method for the determination of the disinfection by-product bromate is ion chromatography (IC). This paper presents a comparison of three IC methods based on either conductivity detection (IC-CD), a post-column reaction (IC-PCR-VIS), or on-line coupling with inductively coupled plasma mass spectrometry (IC-ICP-MS). The main characteristics of the methods, such as method detection limits (MDL), time of analysis and sample pretreatment, are compared, and their applicability for routine analysis is critically discussed. The most sensitive and rugged method is IC-ICP-MS, followed by IC-PCR-VIS. The photometric detection is subject to a minor interference in real-world samples, presumably caused by carbonate. The lowest sensitivity is shown by the IC-CD method, which is also the slowest of the methods compared and, in addition, requires sample pretreatment. The highest amount of information is delivered by IC-PCR-VIS, which allows the simultaneous determination of the seven standard anions and bromate.
Ciccimaro, Eugene; Ranasinghe, Asoka; D'Arienzo, Celia; Xu, Carrie; Onorato, Joelle; Drexler, Dieter M; Josephs, Jonathan L; Poss, Michael; Olah, Timothy
2014-12-02
Due to observed collision induced dissociation (CID) fragmentation inefficiency, developing sensitive liquid chromatography tandem mass spectrometry (LC-MS/MS) assays for CID resistant compounds is especially challenging. As an alternative to traditional LC-MS/MS, we present here a methodology that preserves the intact analyte ion for quantification by selectively filtering ions while reducing chemical noise. Utilizing a quadrupole-Orbitrap MS, the target ion is selectively isolated while interfering matrix components undergo MS/MS fragmentation by CID, allowing noise-free detection of the analyte's surviving molecular ion. In this manner, CID affords additional selectivity during high resolution accurate mass analysis by elimination of isobaric interferences, a fundamentally different concept than the traditional approach of monitoring a target analyte's unique fragment following CID. This survivor-selected ion monitoring (survivor-SIM) approach has allowed sensitive and specific detection of disulfide-rich cyclic peptides extracted from plasma.
NASA Astrophysics Data System (ADS)
Watanabe, Kenichi; Minniti, Triestino; Kockelmann, Winfried; Dalgliesh, Robert; Burca, Genoveva; Tremsin, Anton S.
2017-07-01
The uncertainties and the stability of a neutron-sensitive MCP/Timepix detector operating in event-timing mode for quantitative image analysis at a pulsed neutron source were investigated. The dominant contribution to the uncertainty arises from the counting statistics. The contribution of the overlap correction to the uncertainty was concluded to be negligible, based on error propagation, even when the pixel occupation probability is more than 50%. Additionally, we have taken the multiple-counting effect into account in the treatment of the counting statistics. Furthermore, the detection efficiency of this detector system changes under relatively high neutron fluxes due to ageing effects of the microchannel plates. Since this efficiency change is position-dependent, it induces a memory image. The memory effect can be significantly reduced with correction procedures using rate equations describing the permanent gain degradation and the scrubbing effect on the inner surfaces of the MCP pores.
An analysis of parameter sensitivities of preference-inspired co-evolutionary algorithms
NASA Astrophysics Data System (ADS)
Wang, Rui; Mansor, Maszatul M.; Purshouse, Robin C.; Fleming, Peter J.
2015-10-01
Many-objective optimisation problems remain challenging for many state-of-the-art multi-objective evolutionary algorithms. Preference-inspired co-evolutionary algorithms (PICEAs) which co-evolve the usual population of candidate solutions with a family of decision-maker preferences during the search have been demonstrated to be effective on such problems. However, it is unknown whether PICEAs are robust with respect to the parameter settings. This study aims to address this question. First, a global sensitivity analysis method - the Sobol' variance decomposition method - is employed to determine the relative importance of the parameters controlling the performance of PICEAs. Experimental results show that the performance of PICEAs is controlled for the most part by the number of function evaluations. Next, we investigate the effect of key parameters identified from the Sobol' test and the genetic operators employed in PICEAs. Experimental results show improved performance of the PICEAs as more preferences are co-evolved. Additionally, some suggestions for genetic operator settings are provided for non-expert users.
Comas, Mercè; Arrospide, Arantzazu; Mar, Javier; Sala, Maria; Vilaprinyó, Ester; Hernández, Cristina; Cots, Francesc; Martínez, Juan; Castells, Xavier
2014-01-01
To assess the budgetary impact of switching from screen-film mammography to full-field digital mammography in a population-based breast cancer screening program. A discrete-event simulation model was built to reproduce the breast cancer screening process (biennial mammographic screening of women aged 50 to 69 years) combined with the natural history of breast cancer. The simulation started with 100,000 women and, during a 20-year simulation horizon, new women were dynamically entered according to the aging of the Spanish population. Data on screening were obtained from Spanish breast cancer screening programs. Data on the natural history of breast cancer were based on US data adapted to our population. A budget impact analysis comparing digital with screen-film screening mammography was performed in a sample of 2,000 simulation runs. A sensitivity analysis was performed for crucial screening-related parameters. Distinct scenarios for recall and detection rates were compared. Statistically significant savings were found for overall costs, treatment costs and the costs of additional tests in the long term. The overall cost saving was 1,115,857€ (95%CI from 932,147 to 1,299,567) in the 10th year and 2,866,124€ (95%CI from 2,492,610 to 3,239,638) in the 20th year, representing 4.5% and 8.1% of the overall cost associated with screen-film mammography. The sensitivity analysis showed net savings in the long term. Switching to digital mammography in a population-based breast cancer screening program saves long-term budget expense, in addition to providing technical advantages. Our results were consistent across distinct scenarios representing the different results obtained in European breast cancer screening programs.
Zhan, Mei; Zheng, Hanrui; Xu, Ting; Yang, Yu; Li, Qiu
2017-08-01
Malignant pleural mesothelioma (MPM) is a rare malignancy, and pemetrexed/cisplatin (PC) is the gold-standard first-line regimen. This study evaluated the cost-effectiveness of the addition of bevacizumab to PC (with maintenance bevacizumab) for unresectable MPM based on a phase III trial that showed a survival benefit compared with chemotherapy alone. To estimate the incremental cost-effectiveness ratio (ICER) of the incorporation of bevacizumab, a Markov model based on the MAPS trial, including the disease states of progression-free survival, progressive disease and death, was used. Total costs were calculated from a Chinese payer perspective, and health outcomes were converted into quality-adjusted life years (QALYs). Model robustness was explored in sensitivity analyses. The addition of bevacizumab to PC was estimated to increase the cost by $81,446.69, with a gain of 0.112 QALYs, resulting in an ICER of $727,202.59 per QALY. In both one-way sensitivity and probabilistic sensitivity analyses, the ICER exceeded the commonly accepted willingness-to-pay threshold of 3 times the gross domestic product per capita of China ($23,970.00 per QALY). The cost of bevacizumab had the most important impact on the ICER. The combination of bevacizumab with PC chemotherapy is not a cost-effective treatment option for MPM in China. Given its positive clinical value and the extremely low incidence of MPM, an appropriate price discount, assistance programs and medical insurance should be considered to make bevacizumab more affordable for this rare patient population. Copyright © 2017 Elsevier B.V. All rights reserved.
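The reported ICER is simply the incremental cost divided by the incremental QALYs; the sketch below reproduces the abstract's arithmetic (the cost, QALY gain, and willingness-to-pay threshold are the figures stated above):

```python
# Incremental cost-effectiveness ratio from the trial-based Markov model;
# input values are taken directly from the abstract.
delta_cost = 81_446.69     # added cost of bevacizumab + PC (USD)
delta_qaly = 0.112         # QALYs gained

icer = delta_cost / delta_qaly
wtp_threshold = 23_970.00  # stated threshold: 3x GDP per capita, per QALY

print(f"ICER: ${icer:,.2f} per QALY")
print("cost-effective" if icer <= wtp_threshold else "not cost-effective")
```

Dividing the two abstract figures recovers the reported ICER of roughly $727,202.59 per QALY, far above the stated threshold.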
Alves, Vinicius M.; Muratov, Eugene; Fourches, Denis; Strickland, Judy; Kleinstreuer, Nicole; Andrade, Carolina H.; Tropsha, Alexander
2015-01-01
Skin permeability is widely considered to be mechanistically implicated in chemically-induced skin sensitization. Although many chemicals have been identified as skin sensitizers, there have been very few reports analyzing the relationships between molecular structure and skin permeability of sensitizers and non-sensitizers. The goals of this study were to: (i) compile, curate, and integrate the largest publicly available dataset of chemicals studied for their skin permeability; (ii) develop and rigorously validate QSAR models to predict skin permeability; and (iii) explore the complex relationships between skin sensitization and skin permeability. Based on the largest publicly available dataset compiled in this study, we found no overall correlation between skin permeability and skin sensitization. In addition, the cross-species correlation coefficient between human and rodent permeability data was found to be as low as R² = 0.44. Human skin permeability models based on the random forest method have been developed and validated using an OECD-compliant QSAR modeling workflow. Their external accuracy was high (Q²ext = 0.73 for 63% of external compounds inside the applicability domain). The extended analysis using both experimentally-measured and QSAR-imputed data still confirmed the absence of any overall concordance between skin permeability and skin sensitization. This observation suggests that chemical modifications that affect skin permeability should not be presumed a priori to modulate the sensitization potential of chemicals. The models reported herein, as well as those developed in the companion paper on skin sensitization, suggest that it may be possible to rationally design compounds with the desired high skin permeability but low sensitization potential. PMID:25560673
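A cross-species R² like the 0.44 reported above is the squared Pearson correlation between paired permeability measurements; a minimal sketch (the paired human/rodent log Kp values below are hypothetical, not the study's data):

```python
def r_squared(xs, ys):
    """Squared Pearson correlation between paired observations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov * cov / (vx * vy)

# Hypothetical paired human vs. rodent log Kp values for the same chemicals
human  = [-1.2, -2.5, -3.1, -0.8, -2.0, -1.6]
rodent = [-1.0, -1.9, -2.2, -1.5, -1.1, -2.3]
print(f"cross-species R^2 = {r_squared(human, rodent):.2f}")
```

A low R² between species, as here, is the quantitative basis for caution when extrapolating rodent permeability to humans.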
NASA Astrophysics Data System (ADS)
Newman, James Charles, III
1997-10-01
The first two steps in the development of an integrated multidisciplinary design optimization procedure capable of analyzing the nonlinear fluid flow about geometrically complex aeroelastic configurations have been accomplished in the present work. For the first step, a three-dimensional unstructured grid approach to aerodynamic shape sensitivity analysis and design optimization has been developed. The advantage of unstructured grids, when compared with a structured-grid approach, is their inherent ability to discretize irregularly shaped domains with greater efficiency and less effort. Hence, this approach is ideally suited for geometrically complex configurations of practical interest. In this work the time-dependent, nonlinear Euler equations are solved using an upwind, cell-centered, finite-volume scheme. The discrete, linearized systems which result from this scheme are solved iteratively by a preconditioned conjugate-gradient-like algorithm known as GMRES for the two-dimensional cases and a Gauss-Seidel algorithm for the three-dimensional; at steady-state, similar procedures are used to solve the accompanying linear aerodynamic sensitivity equations in incremental iterative form. As shown, this particular form of the sensitivity equation makes large-scale gradient-based aerodynamic optimization possible by taking advantage of memory efficient methods to construct exact Jacobian matrix-vector products. Various surface parameterization techniques have been employed in the current study to control the shape of the design surface. Once this surface has been deformed, the interior volume of the unstructured grid is adapted by considering the mesh as a system of interconnected tension springs. Grid sensitivities are obtained by differentiating the surface parameterization and the grid adaptation algorithms with ADIFOR, an advanced automatic-differentiation software tool. 
To demonstrate the ability of this procedure to analyze and design complex configurations of practical interest, the sensitivity analysis and shape optimization have been performed for several two- and three-dimensional cases. In two dimensions, an initially symmetric NACA-0012 airfoil and a high-lift multielement airfoil were examined. For the three-dimensional configurations, an initially rectangular wing with uniform NACA-0012 cross-sections was optimized; in addition, a complete Boeing 747-200 aircraft was studied. Furthermore, the current study also examines the effect of inconsistency in the order of spatial accuracy between the nonlinear fluid and linear shape sensitivity equations. The second step was to develop a computationally efficient, high-fidelity, integrated static aeroelastic analysis procedure. To accomplish this, a structural analysis code was coupled with the aforementioned unstructured grid aerodynamic analysis solver. The use of an unstructured grid scheme for the aerodynamic analysis enhances the interaction compatibility with the wing structure. The structural analysis utilizes finite elements to model the wing so that accurate structural deflections may be obtained. In the current work, parameters have been introduced to control the interaction of the computational fluid dynamics and structural analyses; these control parameters permit extremely efficient static aeroelastic computations. To demonstrate and evaluate this procedure, static aeroelastic analysis results for a flexible wing in low subsonic, high subsonic (subcritical), transonic (supercritical), and supersonic flow conditions are presented.
Ely, D. Matthew
2006-01-01
Recharge is a vital component of the ground-water budget, and methods for estimating it range from extremely complex to relatively simple. The most commonly used techniques, however, are limited by the scale of application. One approach to estimating ground-water recharge uses process-based models that compute distributed water budgets on a watershed scale. These models should be evaluated to determine which model parameters are the dominant controls on simulated ground-water recharge. Seven existing watershed models from different humid regions of the United States were chosen to analyze the sensitivity of simulated recharge to model parameters. Parameter sensitivities were determined using a nonlinear regression computer program to generate a suite of diagnostic statistics. The statistics identify the model parameters that have the greatest effect on simulated ground-water recharge and allow the hydrologic system responses to those parameters to be compared and contrasted. Simulated recharge in the Lost River and Big Creek watersheds in Washington State was sensitive to small changes in air temperature. The Hamden watershed model in west-central Minnesota was developed to investigate the relations that wetlands and other landscape features have with runoff processes. Excess soil moisture in the Hamden watershed simulation was preferentially routed to wetlands, instead of to the ground-water system, resulting in little sensitivity of recharge to any parameters. Simulated recharge in the North Fork Pheasant Branch watershed, Wisconsin, demonstrated the greatest sensitivity to parameters related to evapotranspiration. Three watersheds were simulated as part of the Model Parameter Estimation Experiment (MOPEX).
Parameter sensitivities for the MOPEX watersheds, Amite River, Louisiana and Mississippi, English River, Iowa, and South Branch Potomac River, West Virginia, were similar: recharge was most sensitive to small changes in air temperature and a user-defined flow routing parameter. Although the primary objective of this study was to identify, by geographic region, the importance of the parameter values to the simulation of ground-water recharge, the secondary objectives proved valuable for future modeling efforts. A rigorous sensitivity analysis can (1) make the calibration process more efficient, (2) guide additional data collection, (3) identify model limitations, and (4) explain simulated results.
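One-at-a-time scaled sensitivities of the kind such regression-based analyses report can be sketched as follows (the "recharge" model and its coefficients are a hypothetical toy, not any of the watershed models above):

```python
def recharge(params):
    """Toy stand-in for a watershed model's simulated recharge (mm/yr).
    Coefficients are illustrative only."""
    return 0.3 * params["precip"] - 8.0 * params["temp"] + 20.0 * params["routing"]

def scaled_sensitivity(model, base, name, rel_step=0.01):
    """Dimensionless sensitivity (dR/R) / (dp/p) by forward difference:
    the fractional change in output per fractional change in one parameter."""
    perturbed = dict(base)
    perturbed[name] = base[name] * (1 + rel_step)
    r0, r1 = model(base), model(perturbed)
    return (r1 - r0) / r0 / rel_step

base = {"precip": 1000.0, "temp": 10.0, "routing": 2.0}
for p in base:
    print(p, round(scaled_sensitivity(recharge, base, p), 3))
```

Ranking parameters by the magnitude of these dimensionless sensitivities is what identifies the dominant controls on simulated recharge.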
Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques
NASA Astrophysics Data System (ADS)
Mai, J.; Tolson, B.
2017-12-01
The increasing complexity and runtime of environmental models lead to the current situation in which calibrating all model parameters, or estimating all of their uncertainties, is often computationally infeasible. Hence, techniques that determine the sensitivity of model parameters are used to identify the most important parameters. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While examining the convergence of calibration and uncertainty methods is state-of-the-art, the convergence of the sensitivity methods is usually not checked. If it is checked at all, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indices. Bootstrapping, however, can itself become computationally expensive for large model outputs and a high number of bootstraps. We therefore present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indices without performing any additional model run. The technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards; the latter case enables the checking of already processed sensitivity indices. To demonstrate that the convergence test is independent of the SA method, we applied it to two widely used global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991) and the variance-based Sobol' method (Sobol' 1993). The new convergence test is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) for which the true indices of the two aforementioned methods are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA.
The results show that the new frugal method is able to test the convergence, and therefore the reliability, of SA results in an efficient way. The appealing feature of this new technique is that it requires no further model evaluations, and it therefore enables the checking of already processed sensitivity results. This is one step towards reliable, transferable, published sensitivity results.
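For context, a first-order Sobol' index of the kind whose convergence MVA checks can be estimated with a pick-freeze Monte Carlo scheme. The sketch below is a generic Saltelli/Jansen-style estimator on a toy test function, not the MVA method itself:

```python
import random

def sobol_first_order(f, dim, n=20_000, seed=1):
    """Monte Carlo pick-freeze estimate of first-order Sobol' indices for f
    on the unit hypercube: S_i = E[f(B) (f(A_B^i) - f(A))] / Var(f)."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    fA = [f(a) for a in A]
    fB = [f(b) for b in B]
    mean = sum(fA) / n
    var = sum((y - mean) ** 2 for y in fA) / n
    indices = []
    for i in range(dim):
        # A_B^i: rows of A with column i swapped in from B
        fABi = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        v_i = sum(fb * (fab - fa) for fb, fab, fa in zip(fB, fABi, fA)) / n
        indices.append(v_i / var)
    return indices

# Toy model: output depends strongly on x0, weakly on x1
g = lambda x: 4.0 * x[0] + 0.5 * x[1]
s = sobol_first_order(g, dim=2)
print([round(v, 2) for v in s])
```

Repeating this with growing sample budgets and watching the indices stabilize is the conventional (and costly) convergence check that MVA is designed to avoid.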
Takeyoshi, Masahiro; Sawaki, Masakuni; Yamasaki, Kanji; Kimber, Ian
2003-09-30
The murine local lymph node assay (LLNA) is used for the identification of chemicals that have the potential to cause skin sensitization. However, it requires specific facilities and handling procedures to accommodate a radioisotopic (RI) endpoint. We have developed a non-radioisotopic (non-RI) endpoint for the LLNA, based on BrdU incorporation, to avoid the use of RI. Although this alternative method appears viable in principle, it is somewhat less sensitive than the standard assay. In this study, we report investigations into the use of statistical analysis to improve the sensitivity of a non-RI LLNA procedure with alpha-hexylcinnamic aldehyde (HCA) in two separate experiments. The alternative non-RI method required HCA concentrations of greater than 25% to elicit a positive response based on the criterion for classification as a skin sensitizer in the standard LLNA. Nevertheless, dose responses to HCA in the alternative method were consistent in both experiments, and we examined whether an endpoint based upon the statistical significance of induced changes in LNC turnover, rather than an SI of 3 or greater, might provide additional sensitivity. The results reported here demonstrate that, with HCA at least, significant responses were recorded in each of two experiments following exposure of mice to 25% HCA. These data suggest that this approach may be more satisfactory, at least when BrdU incorporation is measured. Taken together, the data reported here suggest that a modified LLNA in which BrdU is used in place of radioisotope incorporation shows some promise, but that in its present form, even with the use of a statistical endpoint, it lacks some of the sensitivity of the standard method. The challenge is to develop strategies for further refinement of this approach.
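The contrast between the SI >= 3 classification criterion and a statistical endpoint can be sketched as follows (the per-mouse BrdU values, group sizes, and effect are hypothetical illustrations only):

```python
import math
import statistics

def stimulation_index(treated, control):
    """SI: mean proliferative response of the treated group over the
    vehicle-control group mean."""
    return statistics.fmean(treated) / statistics.fmean(control)

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.fmean(a) - statistics.fmean(b)) / math.sqrt(va / len(a) + vb / len(b))

# Hypothetical BrdU incorporation values (arbitrary units) per mouse
control = [1.0, 1.2, 0.9, 1.1]
hca_25 = [2.6, 3.1, 2.8, 3.3]

si = stimulation_index(hca_25, control)
print(f"SI = {si:.2f} -> {'positive' if si >= 3 else 'negative'} by SI>=3 criterion")
print(f"Welch t = {welch_t(hca_25, control):.2f}")
```

In this toy case the SI falls just below 3 while the group difference is statistically large, which is exactly the situation in which a statistical endpoint can rescue sensitivity lost by the fixed SI threshold.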
Connolly, Keith P; Schwartzberg, Randy S; Reuss, Bryan; Crumbie, David; Homan, Brad M
2013-02-20
Magnetic resonance imaging (MRI) has been reported to be highly accurate at academic institutions in the identification of superior labral tears; however, many Type-II superior labral anterior-posterior (SLAP) lesions encountered during arthroscopy have not been previously diagnosed with noncontrast images. This study evaluated the accuracy of diagnosing Type-II SLAP lesions in a community setting with use of noncontrast MRI and analyzed the effect that radiologist training and the scanner type or magnet strength had on sensitivity and specificity. One hundred and forty-four patients requiring repair of an arthroscopically confirmed Type-II SLAP lesion who had a noncontrast MRI examination performed within twelve months before the procedure were included in the sensitivity analysis. An additional 100 patients with arthroscopically confirmed, normal superior labral anatomy were identified for specificity analysis. The transcribed interpretations of the images by the radiologists were used to document the diagnosis of a SLAP lesion and were compared with the operative report. The magnet strength, type of MRI system (open or closed), and whether the radiologist had completed a musculoskeletal fellowship were also recorded. Noncontrast MRI identified SLAP lesions in fifty-four of 144 shoulders, yielding an overall sensitivity of 38% (95% confidence interval [CI] = 30%, 46%). Specificity was 94% (95% CI = 87%, 98%), with six SLAP lesions diagnosed in 100 shoulders that did not contain the lesion. Musculoskeletal fellowship-trained radiologists performed with higher sensitivity than those who had not completed the fellowship (46% versus 19%; p = 0.009). Our results demonstrate a low sensitivity and high specificity in the diagnosis of Type-II SLAP lesions with noncontrast MRI in this community setting. Musculoskeletal fellowship-trained radiologists had significantly higher sensitivities in accurately diagnosing the lesion than did radiologists without such training.
Noncontrast MRI is not a reliable diagnostic tool for Type-II SLAP lesions in a community setting.
Lee, Adria D; Cassiday, Pamela K; Pawloski, Lucia C; Tatti, Kathleen M; Martin, Monte D; Briere, Elizabeth C; Tondella, M Lucia; Martin, Stacey W
2018-01-01
The appropriate use of clinically accurate diagnostic tests is essential for the detection of pertussis, a poorly controlled vaccine-preventable disease. The purpose of this study was to estimate the sensitivity and specificity of different diagnostic criteria including culture, multi-target polymerase chain reaction (PCR), anti-pertussis toxin IgG (IgG-PT) serology, and the use of a clinical case definition. An additional objective was to describe the optimal timing of specimen collection for the various tests. Clinical specimens were collected from patients with cough illness at seven locations across the United States between 2007 and 2011. Nasopharyngeal and blood specimens were collected from each patient during the enrollment visit. Patients who had been coughing for ≤ 2 weeks were asked to return in 2-4 weeks for collection of a second, convalescent blood specimen. Sensitivity and specificity of each diagnostic test were estimated using three methods: pertussis culture as the "gold standard," composite reference standard (CRS) analysis, and latent class analysis (LCA). Overall, 868 patients were enrolled and 13.6% were B. pertussis positive by at least one diagnostic test. In a sample of 545 participants with non-missing data on all four diagnostic criteria, culture was 64.0% sensitive, PCR was 90.6% sensitive, and both were 100% specific by LCA. CRS and LCA methods increased the sensitivity estimates for convalescent serology and the clinical case definition over the culture-based estimates. Culture and PCR were most sensitive when performed during the first two weeks of cough; serology was optimally sensitive after the second week of cough. Timing of specimen collection in relation to onset of illness should be considered when ordering diagnostic tests for pertussis. Consideration should be given to including IgG-PT serology as a confirmatory test in the Council of State and Territorial Epidemiologists (CSTE) case definition for pertussis.
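Against a gold standard, sensitivity and specificity reduce to ratios of 2x2 confusion counts; a minimal sketch (the counts below are hypothetical, not the study's data):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from 2x2 confusion counts against a
    reference ('gold standard') classification."""
    sensitivity = tp / (tp + fn)   # true positives / all reference-positives
    specificity = tn / (tn + fp)   # true negatives / all reference-negatives
    return sensitivity, specificity

# Hypothetical counts for a PCR assay judged against a composite reference
sens, spec = sens_spec(tp=58, fn=6, tn=470, fp=11)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")
```

CRS and LCA refine this picture by replacing the single imperfect gold standard with a combined or latent reference, which is why they raise the sensitivity estimates of tests that culture alone undercounts.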
Youn, Jung-Ho; Drake, Steven K.; Weingarten, Rebecca A.; Frank, Karen M.; Dekker, John P.
2015-01-01
Rapid detection of blaKPC-containing organisms can significantly impact infection control and clinical practices, as well as therapeutic choices. Current molecular and phenotypic methods to detect these organisms, however, require additional testing beyond routine organism identification. In this study, we evaluated the clinical performance of matrix-assisted laser desorption ionization–time of flight mass spectrometry (MALDI-TOF MS) to detect pKpQIL_p019 (p019)—an ∼11,109-Da protein associated with certain blaKPC-containing plasmids that was previously shown to successfully track a clonal outbreak of blaKPC-pKpQIL-Klebsiella pneumoniae in a proof-of-principle study (A. F. Lau, H. Wang, R. A. Weingarten, S. K. Drake, A. F. Suffredini, M. K. Garfield, Y. Chen, M. Gucek, J. H. Youn, F. Stock, H. Tso, J. DeLeo, J. J. Cimino, K. M. Frank, and J. P. Dekker, J Clin Microbiol 52:2804–2812, 2014, http://dx.doi.org/10.1128/JCM.00694-14). PCR for the p019 gene was used as the reference method. Here, blind analysis of 140 characterized Enterobacteriaceae isolates using two protein extraction methods (plate extraction and tube extraction) and two peak detection methods (manual and automated) showed sensitivities and specificities ranging from 96% to 100% and from 95% to 100%, respectively (2,520 spectra analyzed). Feasible laboratory implementation methods (plate extraction and automated analysis) demonstrated 96% sensitivity and 99% specificity. All p019-positive isolates (n = 26) contained blaKPC and were carbapenem resistant. Retrospective analysis of an additional 720 clinical Enterobacteriaceae spectra found an ∼11,109-Da signal in nine spectra (1.3%), including seven from p019-containing, carbapenem-resistant isolates (positive predictive value [PPV], 78%). Instrument tuning had a significant effect on assay sensitivity, highlighting important factors that must be considered as MALDI-TOF MS moves into applications beyond microbial identification. 
Using a large blind clinical data set, we have shown that spectra acquired for routine organism identification can also be analyzed automatically in real time at high throughput, at no additional expense to the laboratory, to enable rapid detection of potentially blaKPC-containing carbapenem-resistant isolates, providing early and clinically actionable results. PMID:26338858
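Automated flagging of a marker peak such as the ~11,109-Da p019 signal can be sketched as a tolerance search over a peak list (the spectrum, mass tolerance, and relative-intensity threshold below are all hypothetical, not the study's detection criteria):

```python
def has_marker_peak(spectrum, target_mz, tol_da=5.0, min_intensity=0.05):
    """Return True if any peak lies within tol_da of target_mz and exceeds
    min_intensity relative to the base peak. Thresholds are illustrative."""
    base = max(i for _, i in spectrum)
    return any(
        abs(mz - target_mz) <= tol_da and i / base >= min_intensity
        for mz, i in spectrum
    )

# Hypothetical (m/z, intensity) pairs from a routine identification spectrum
spectrum = [(4365.0, 900.0), (6255.0, 1200.0), (11109.4, 150.0), (13883.0, 400.0)]
print(has_marker_peak(spectrum, target_mz=11109.0))  # p019 marker mass
```

Running such a check over spectra already acquired for organism identification is what makes the screening free of additional laboratory cost.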
Electrodermal Activity Is Sensitive to Cognitive Stress under Water.
Posada-Quintero, Hugo F; Florian, John P; Orjuela-Cañón, Alvaro D; Chon, Ki H
2017-01-01
When divers are at depth in water, the high pressure and low temperature alone can cause severe stress, challenging the human physiological control systems. The addition of cognitive stress, for example during a military mission, exacerbates the challenge. In these conditions, humans are more susceptible to autonomic imbalance. Reliable tools for the assessment of the autonomic nervous system (ANS) could serve as indicators of the relative degree of stress a diver is experiencing, which could reveal heightened risk during a mission. Electrodermal activity (EDA), a measure of the changes in conductance at the skin surface due to sweat production, is considered a promising alternative for the non-invasive assessment of sympathetic control of the ANS. EDA is sensitive to stress of many kinds. Therefore, as a first step, we tested the sensitivity of EDA, in the time and frequency domains, specifically to cognitive stress during water immersion (albeit with the measurement finger kept dry for safety). Data from 14 volunteer subjects in the experiment were analyzed. After a 4-min adjustment and baseline period following immersion in water, subjects underwent the Stroop task, which is known to induce cognitive stress. The time-domain indices of EDA, skin conductance level (SCL) and non-specific skin conductance responses (NS.SCRs), did not change during cognitive stress, compared to baseline measurements. Frequency-domain indices of EDA, EDASymp (based on power spectral analysis) and TVSymp (based on time-frequency analysis), did significantly change during cognitive stress. This leads to the conclusion that EDA, assessed by spectral analysis, is sensitive to cognitive stress in water-immersed subjects, and can potentially be used to detect cognitive stress in divers.
Gandhoke, Gurpreet S; Pease, Matthew; Smith, Kenneth J; Sekula, Raymond F
2017-09-01
To perform a cost-minimization study comparing the supraorbital and endoscopic endonasal (EEA) approach with or without craniotomy for the resection of olfactory groove meningiomas (OGMs). We built a decision tree using probabilities of gross total resection (GTR) and cerebrospinal fluid (CSF) leak rates with the supraorbital approach versus EEA with and without additional craniotomy. The cost (not charge or reimbursement) at each "stem" of this decision tree for both surgical options was obtained from our hospital's finance department. After a base case calculation, we applied plausible ranges to all parameters and carried out multiple 1-way sensitivity analyses. Probabilistic sensitivity analyses confirmed our results. The probabilities of GTR (0.8) and CSF leak (0.2) for the supraorbital craniotomy were obtained from our series of 5 patients who underwent a supraorbital approach for the resection of an OGM. The mean tumor volume was 54.6 cm³ (range, 17-94.2 cm³). Literature-reported rates of GTR (0.6) and CSF leak (0.3) with EEA were applied to our economic analysis. Supraorbital craniotomy was the preferred strategy, with an expected value of $29,423, compared with an EEA cost of $83,838. On multiple 1-way sensitivity analyses, supraorbital craniotomy remained the preferred strategy, with a minimum cost savings of $46,000 and a maximum savings of $64,000. Probabilistic sensitivity analysis found the lowest cost difference between the 2 surgical options to be $37,431. Compared with EEA, supraorbital craniotomy provides substantial cost savings in the treatment of OGMs. Given the potential differences in effectiveness between approaches, a cost-effectiveness analysis should be undertaken. Copyright © 2017 Elsevier Inc. All rights reserved.
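The expected value of each branch of such a decision tree is the probability-weighted sum of its stem costs. A sketch using the abstract's probabilities but purely illustrative dollar figures (the real stem costs came from the hospital's finance department and are not given in the abstract):

```python
def expected_cost(p_gtr, p_leak, cost_surgery, cost_leak_repair, cost_reoperation):
    """Expected cost of one surgical strategy: the base surgery cost, plus
    CSF-leak repair with probability p_leak, plus reoperation for residual
    tumour with probability (1 - p_gtr). Dollar figures are hypothetical."""
    return (cost_surgery
            + p_leak * cost_leak_repair
            + (1 - p_gtr) * cost_reoperation)

# Probabilities from the abstract; all costs illustrative only
supraorbital = expected_cost(p_gtr=0.8, p_leak=0.2, cost_surgery=20_000,
                             cost_leak_repair=12_000, cost_reoperation=35_000)
eea = expected_cost(p_gtr=0.6, p_leak=0.3, cost_surgery=45_000,
                    cost_leak_repair=12_000, cost_reoperation=35_000)
print(f"supraorbital ${supraorbital:,.0f} vs EEA ${eea:,.0f}")
```

One-way sensitivity analysis then repeats this calculation while sweeping one probability or cost at a time across its plausible range.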
Kim, DaeHee; Rhodes, Jeffrey A; Hashim, Jeffrey A; Rickabaugh, Lawrence; Brams, David M; Pinkus, Edward; Dou, Yamin
2018-06-07
A highly specific preoperative localizing test is required to select patients for minimally invasive parathyroidectomy (MIP) in lieu of traditional four-gland exploration. We hypothesized that Tc-99m sestamibi scan interpretation incorporating numerical measurements of the degree of asymmetrical activity from bilateral thyroid beds can be useful in localizing a single adenoma for MIP. We devised a quantitative interpretation method for Tc-99m sestamibi scans based on numerically graded asymmetrical activity on the early phase. The numerical ratio value of each scan was obtained by dividing the number of counts from symmetrically drawn regions of interest (ROI) over bilateral thyroid beds. The final pathology and clinical outcomes of 109 patients were used to perform receiver operating characteristic (ROC) curve analysis. ROC analysis yielded an area under the curve (AUC) of 0.71 (P = 0.0032), validating this method as a diagnostic tool. The optimal cut-off point for the ratio value, maximizing combined sensitivity and specificity, gave a sensitivity of 67.9% (56.5-77.2%, 95% CI) and a specificity of 75.0% (52.8-91.8%, 95% CI). An additional, higher cut-off with greater specificity at minimal cost to sensitivity was also selected, yielding a sensitivity of 28.6% (18.8-38.6%, 95% CI) and a specificity of 90.0% (69.6-98.8%, 95% CI). Our results demonstrated that the greater the asymmetry in early-phase activity, the more successfully a single parathyroid adenoma is localized on sestamibi scans. Using the early-phase Tc-99m sestamibi scan only, we were able to select patients for minimally invasive parathyroidectomy with 90% specificity. © 2018 The Royal Australian and New Zealand College of Radiologists.
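Choosing the cut-off with maximal combined sensitivity and specificity is equivalent to maximizing Youden's J statistic; a sketch over hypothetical ROI count ratios (not the study's measurements):

```python
def best_cutoff(positives, negatives):
    """Scan candidate cutoffs and return (cutoff, J) maximizing Youden's J
    = sensitivity + specificity - 1. Higher score predicts positive."""
    best = (None, -1.0)
    for c in sorted(set(positives + negatives)):
        sens = sum(p >= c for p in positives) / len(positives)
        spec = sum(n < c for n in negatives) / len(negatives)
        j = sens + spec - 1
        if j > best[1]:
            best = (c, j)
    return best

# Hypothetical ROI count ratios: adenoma side vs. contralateral thyroid bed
adenoma = [1.9, 2.4, 1.6, 2.8, 1.3, 2.1]
no_adenoma = [1.0, 1.2, 0.9, 1.4, 1.1]
cutoff, j = best_cutoff(adenoma, no_adenoma)
print(f"optimal ratio cutoff {cutoff} (Youden J = {j:.2f})")
```

A second, higher cut-off chosen for specificity, as in the abstract, simply trades points further along the same ROC curve.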
Duvivier, Wilco F; van Beek, Teris A; Nielen, Michel W F
2016-11-15
Recently, several direct and/or ambient mass spectrometry (MS) approaches have been suggested for drugs of abuse imaging in hair. The use of mass spectrometers with insufficient selectivity could result in false-positive measurements due to isobaric interferences. Different mass analyzers have been evaluated regarding their selectivity and sensitivity for the detection of Δ9-tetrahydrocannabinol (THC) from intact hair samples using direct analysis in real time (DART) ionization. Four different mass analyzers, namely (1) an orbitrap, (2) a quadrupole orbitrap, (3) a triple quadrupole, and (4) a quadrupole time-of-flight (QTOF), were evaluated. Selectivity and sensitivity were assessed by analyzing secondary THC standard dilutions on stainless steel mesh screens and blank hair samples, and by the analysis of authentic cannabis user hair samples. Additionally, separation of isobaric ions by use of travelling wave ion mobility (TWIM) was investigated. The use of a triple quadrupole instrument resulted in the highest sensitivity; however, transitions used for multiple reaction monitoring were only found to be specific when using high mass resolution product ion measurements. A mass resolution of at least 30,000 FWHM at m/z 315 was necessary to avoid overlap of THC with isobaric ions originating from the hair matrix. Even though selectivity was enhanced by use of TWIM, the QTOF instrument in resolution mode could not indisputably differentiate THC from endogenous isobaric ions in drug user hair samples. Only the high resolution of the (quadrupole) orbitrap instruments and the QTOF instrument in high-resolution mode distinguished THC in hair samples from endogenous isobaric interferences. As expected, enhanced selectivity compromises sensitivity and THC was only detectable in hair from heavy users. Copyright © 2016 John Wiley & Sons, Ltd.
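The resolution requirement can be checked directly from R = m/Δm at full width at half maximum. In the sketch below the THC [M+H]+ mass is the accepted monoisotopic value; the interfering ion's mass is hypothetical:

```python
def fwhm_width(mz, resolution):
    """Peak full width at half maximum implied by R = m / (delta m)."""
    return mz / resolution

thc_mz = 315.2319           # monoisotopic [M+H]+ of THC (C21H31O2+)
interferent_mz = 315.2531   # hypothetical isobaric matrix ion

width = fwhm_width(315.0, 30_000)   # ~0.0105 Da at m/z 315
separation = abs(interferent_mz - thc_mz)
print(f"FWHM {width:.4f} Da, separation {separation:.4f} Da")
print("resolved" if separation > width else "overlapping")
```

At 30,000 FWHM the peak width is about 0.0105 Da, so only interferences further than that from the THC mass are resolved, which is why lower-resolution analyzers could not exclude the matrix ions.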
ASPASIA: A toolkit for evaluating the effects of biological interventions on SBML model behaviour.
Evans, Stephanie; Alden, Kieran; Cucurull-Sanchez, Lourdes; Larminie, Christopher; Coles, Mark C; Kullberg, Marika C; Timmis, Jon
2017-02-01
A calibrated computational model reflects behaviours that are expected or observed in a complex system, providing a baseline upon which sensitivity analysis techniques can be used to analyse pathways that may impact model responses. However, calibration of a model where a behaviour depends on an intervention introduced after a defined time point is difficult, as model responses may be dependent on the conditions at the time the intervention is applied. We present ASPASIA (Automated Simulation Parameter Alteration and SensItivity Analysis), a cross-platform, open-source Java toolkit that addresses a key deficiency in software tools for understanding the impact an intervention has on system behaviour for models specified in Systems Biology Markup Language (SBML). ASPASIA can generate and modify models using SBML solver output as an initial parameter set, allowing interventions to be applied once a steady state has been reached. Additionally, multiple SBML models can be generated where a subset of parameter values are perturbed using local and global sensitivity analysis techniques, revealing the model's sensitivity to the intervention. To illustrate the capabilities of ASPASIA, we demonstrate how this tool has generated novel hypotheses regarding the mechanisms by which Th17-cell plasticity may be controlled in vivo. By using ASPASIA in conjunction with an SBML model of Th17-cell polarisation, we predict that promotion of the Th1-associated transcription factor T-bet, rather than inhibition of the Th17-associated transcription factor RORγt, is sufficient to drive switching of Th17 cells towards an IFN-γ-producing phenotype. Our approach can be applied to all SBML-encoded models to predict the effect that intervention strategies have on system behaviour. ASPASIA, released under the Artistic License (2.0), can be downloaded from http://www.york.ac.uk/ycil/software.
Systematic review and meta-analysis of psychomotor effects of mobile phone electromagnetic fields.
Valentini, Elia; Ferrara, Michele; Presaghi, Fabio; De Gennaro, Luigi; Gennaro, Luigi De; Curcio, Giuseppe
2010-10-01
Over the past 10 years there has been increasing concern about the possible behavioural effects of mobile phone use. This systematic review and meta-analysis focuses on studies published since 1999 on the human cognitive and performance effects of mobile phone-related electromagnetic fields (EMF). PubMed, Biomed, Medline, Biological Sciences, PsychInfo, PsycARTICLES, Environmental Sciences and Pollution Management, Neurosciences Abstracts and Web of Science professional databases were searched and 24 studies selected for meta-analysis. Each study had to have at least one psychomotor measurement result as a main outcome. Data were analysed using standardised mean difference (SMD) as the effect size measure. Results Only three tasks (2-back, 3-back and simple reaction time (SRT)) displayed significant heterogeneity, but after studies with extreme SMD were excluded using sensitivity analysis, the statistical significance disappeared (χ²(7) = 1.63, p = 0.20; χ²(6) = 1.00, p = 0.32; χ²(10) = 14.04, p = 0.17, respectively). Following sensitivity analysis, the effect of sponsorship and publication bias were assessed. Meta-regression indicated a significant effect (b = 0.12, p < 0.05) only for the 2-back task with mixed funding (industry and public/charity). Funnel plot inspection revealed a significant publication bias only for two cognitive tasks: SRT (Begg's rank correlation r = 0.443; Egger's test b = -0.652) and the subtraction task (Egger's test b = -0.687). Mobile phone-like EMF do not seem to induce cognitive and psychomotor effects. Nonetheless, the existence of sponsorship and publication biases should encourage WHO intervention to develop official research standards and guidelines. In addition, future research should address critical and neglected issues such as investigation of repeated, intensive and chronic exposures, especially in highly sensitive populations such as children.
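The SMD effect-size measure used above is, in its simplest form, Cohen's d with a pooled standard deviation; a sketch (the group means, SDs, and sizes are hypothetical, not taken from any included study):

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardised mean difference with pooled standard deviation."""
    pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled

# Hypothetical simple-reaction-time means (ms) for EMF vs. sham exposure
d = cohens_d(mean1=255.0, sd1=30.0, n1=20, mean2=250.0, sd2=28.0, n2=20)
print(f"SMD (Cohen's d) = {d:.3f}")
```

Expressing every study's outcome on this unit-free scale is what allows reaction-time tasks with different instruments and units to be pooled in one meta-analysis.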
Allahdina, Ali M; Stetson, Paul F; Vitale, Susan; Wong, Wai T; Chew, Emily Y; Ferris, Fredrick L; Sieving, Paul A; Cukras, Catherine
2018-04-01
As optical coherence tomography (OCT) minimum intensity (MI) analysis provides a quantitative assessment of changes in the outer nuclear layer (ONL), we evaluated the ability of OCT-MI analysis to detect hydroxychloroquine toxicity. Fifty-seven predominantly female participants (91.2% female; mean age, 55.7 ± 10.4 years; mean time on hydroxychloroquine, 15.0 ± 7.5 years) were enrolled in a case-control study and categorized into affected (i.e., with toxicity, n = 19) and unaffected (n = 38) groups using objective multifocal electroretinographic (mfERG) criteria. Spectral-domain OCT scans of the macula were analyzed and OCT-MI values quantitated for each subfield of the Early Treatment Diabetic Retinopathy Study (ETDRS) grid. A two-sample U-test and a cross-validation approach were used to assess the sensitivity and specificity of toxicity detection according to OCT-MI criteria. The medians of the OCT-MI values in all nine of the ETDRS subfields were significantly elevated in the affected group relative to the unaffected group (P < 0.005 for all comparisons), with the largest difference found for the inner inferior subfield (P < 0.0001). The receiver operating characteristic analysis of median MI values of the inner inferior subfields showed high sensitivity and high specificity in the detection of toxicity with area under the curve = 0.99. Retinal changes secondary to hydroxychloroquine toxicity result in increased OCT reflectivity in the ONL that can be detected and quantitated using OCT-MI analysis. Analysis of OCT-MI values demonstrates high sensitivity and specificity for detecting the presence of hydroxychloroquine toxicity in this cohort and may contribute additionally to current screening practices.
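The receiver operating characteristic analysis used above can be illustrated via the rank-sum identity: the AUC equals the probability that a randomly chosen affected value exceeds a randomly chosen unaffected one. A minimal sketch with invented OCT-MI medians (not the study's data):

```python
def roc_auc(affected, unaffected):
    """AUC via the Mann-Whitney identity: the fraction of (affected,
    unaffected) pairs where the affected value is larger (ties count half)."""
    wins = 0.0
    for a in affected:
        for u in unaffected:
            if a > u:
                wins += 1.0
            elif a == u:
                wins += 0.5
    return wins / (len(affected) * len(unaffected))

# Invented OCT-MI median values (arbitrary units) for illustration
affected = [62, 65, 70, 71, 75]
unaffected = [50, 52, 55, 58, 66]
auc = roc_auc(affected, unaffected)
```

An AUC of 0.99, as reported for the inner inferior subfield, would mean the two groups' values are almost completely separated.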
Francis, Tittu; Washington, Travis; Srivastava, Karan; Moutzouros, Vasilios; Makhni, Eric C; Hakeos, William
2017-11-01
Tension band wiring (TBW) and locked plating are common treatment options for Mayo IIA olecranon fractures. Clinical trials have shown excellent functional outcomes with both techniques. Although TBW implants are significantly less expensive than a locked olecranon plate, TBW often requires an additional operation for implant removal. To choose the most cost-effective treatment strategy, surgeons must understand how implant costs and return to the operating room influence the most cost-effective strategy. This cost-effective analysis study explored the optimal treatment strategies by using decision analysis tools. An expected-value decision tree was constructed to estimate costs based on the 2 implant choices. Values for critical variables, such as implant removal rate, were obtained from the literature. A Monte Carlo simulation consisting of 100,000 trials was used to incorporate variability in medical costs and implant removal rates. Sensitivity analysis and strategy tables were used to show how different variables influence the most cost-effective strategy. TBW was the most cost-effective strategy, with a cost savings of approximately $1300. TBW was also the dominant strategy by being the most cost-effective solution in 63% of the Monte Carlo trials. Sensitivity analysis identified implant costs for plate fixation and surgical costs for implant removal as the most sensitive parameters influencing the cost-effective strategy. Strategy tables showed the most cost-effective solution as 2 parameters vary simultaneously. TBW is the most cost-effective strategy in treating Mayo IIA olecranon fractures despite a higher rate of return to the operating room. Copyright © 2017 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
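An expected-value decision tree with Monte Carlo sampling, as used above, can be sketched as follows. Every cost range and removal rate here is an assumed stand-in, not an input from the study; the sketch only shows the mechanics of comparing two strategies across sampled parameter draws:

```python
import random

def simulate_costs(n_trials=100_000, seed=7):
    """Monte Carlo comparison of tension band wiring (TBW) vs. locked
    plating. All cost ranges (USD) and removal rates are illustrative
    assumptions, not the study's inputs."""
    random.seed(seed)
    tbw_wins = 0
    for _ in range(n_trials):
        tbw_implant = random.uniform(50, 200)         # assumed implant cost
        plate_implant = random.uniform(1000, 2500)    # assumed implant cost
        removal_surgery = random.uniform(1500, 3000)  # assumed surgery cost
        tbw_removal_rate = random.uniform(0.4, 0.8)   # assumed probability
        plate_removal_rate = random.uniform(0.1, 0.3)
        # Expected cost = implant cost + P(removal) * removal surgery cost
        tbw_cost = tbw_implant + tbw_removal_rate * removal_surgery
        plate_cost = plate_implant + plate_removal_rate * removal_surgery
        if tbw_cost < plate_cost:
            tbw_wins += 1
    return tbw_wins / n_trials

tbw_share = simulate_costs()  # fraction of trials where TBW is cheaper
```

The study's "dominant in 63% of trials" figure is exactly this kind of share, computed from its own cost distributions.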
Wu, Bin; Dong, Baijun; Xu, Yuejuan; Zhang, Qiang; Shen, Jinfang; Chen, Huafeng; Xue, Wei
2012-01-01
Background To estimate, from the perspective of the Chinese healthcare system, the economic outcomes of five different first-line strategies among patients with metastatic renal cell carcinoma (mRCC). Methods and Findings A decision-analytic model was developed to simulate the lifetime disease course associated with renal cell carcinoma. The health and economic outcomes of five first-line strategies (interferon-alfa, interleukin-2, interleukin-2 plus interferon-alfa, sunitinib and bevacizumab plus interferon-alfa) were estimated and assessed by indirect comparison. The clinical and utility data were taken from published studies. The cost data were estimated from local charge data and current Chinese practices. Sensitivity analyses were used to explore the impact of uncertainty regarding the results. The impact of the sunitinib patient assistant program (SPAP) was evaluated via scenario analysis. The base-case analysis showed that the sunitinib strategy yielded the maximum health benefits: 2.71 life years and 1.40 quality-adjusted life-years (QALY). The marginal cost-effectiveness (cost per additional QALY) gained via the sunitinib strategy compared with the conventional strategy was $220,384 (without SPAP, interleukin-2 plus interferon-alfa and bevacizumab plus interferon-alfa were dominated) and $16,993 (with SPAP, interferon-alfa, interleukin-2 plus interferon-alfa and bevacizumab plus interferon-alfa were dominated). In general, the results were sensitive to the hazard ratio of progression-free survival. The probabilistic sensitivity analysis demonstrated that the sunitinib strategy with SPAP was the most cost-effective approach when the willingness-to-pay threshold was over $16,000. Conclusions Our analysis suggests that traditional cytokine therapy is the cost-effective option in the Chinese healthcare setting. In some relatively developed regions, sunitinib with SPAP may be a favorable cost-effective alternative for mRCC. PMID:22412884
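The marginal (incremental) cost-effectiveness ratio and the willingness-to-pay comparison used in this analysis reduce to two small formulas. The incremental cost and QALY figures below are invented to be loosely consistent with the abstract's $16,993-per-QALY with-SPAP result, not taken from the model:

```python
def icer(d_cost, d_effect):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    if d_effect == 0:
        raise ValueError("no incremental effect; ICER undefined")
    return d_cost / d_effect

def cost_effective(d_cost, d_effect, wtp):
    """A strategy is cost-effective at a willingness-to-pay threshold when
    its net monetary benefit is positive: NMB = wtp * d_effect - d_cost."""
    return wtp * d_effect - d_cost > 0

# Hypothetical: a 0.5-QALY gain at $8,496.50 extra cost -> $16,993/QALY
ratio = icer(8496.5, 0.5)
ok_at_16k = cost_effective(8496.5, 0.5, 16000)
ok_at_20k = cost_effective(8496.5, 0.5, 20000)
```

This mirrors the abstract's finding that sunitinib with SPAP becomes preferred once the threshold exceeds roughly $16,000 per QALY.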
Aydin, Ümit; Vorwerk, Johannes; Küpper, Philipp; Heers, Marcel; Kugel, Harald; Galka, Andreas; Hamid, Laith; Wellmer, Jörg; Kellinghaus, Christoph; Rampp, Stefan; Wolters, Carsten Hermann
2014-01-01
To increase the reliability of non-invasive determination of the irritative zone in presurgical epilepsy diagnosis, we introduce here a new experimental and methodological source analysis pipeline that combines the complementary information in EEG and MEG, and apply it to data from a patient suffering from refractory focal epilepsy. Skull conductivity parameters in a six-compartment finite element head model with brain anisotropy, constructed from individual MRI data, are estimated in a calibration procedure using somatosensory evoked potential (SEP) and field (SEF) data. These data are measured in a single run before acquisition of further runs of spontaneous epileptic activity. Our results show that even for single interictal spikes, volume conduction effects dominate over noise and need to be taken into account for accurate source analysis. While cerebrospinal fluid and brain anisotropy influence both modalities, only EEG is sensitive to skull conductivity, and conductivity calibration significantly reduces the difference in depth localization in particular between the two modalities, emphasizing its importance for combined EEG and MEG source analysis. On the other hand, localization differences due to the distinct sensitivity profiles of EEG and MEG persist. In case of a moderate error in skull conductivity, combined source analysis results can still profit from the different sensitivity profiles of EEG and MEG to accurately determine the location, orientation and strength of the underlying sources. However, significant errors in skull modeling are reflected in EEG reconstruction errors and could reduce the goodness of fit to combined datasets. For combined EEG and MEG source analysis, we therefore recommend calibrating skull conductivity using additionally acquired SEP/SEF data. PMID:24671208
Kepha, Stella; Kihara, Jimmy H.; Njenga, Sammy M.; Pullan, Rachel L.; Brooker, Simon J.
2014-01-01
Objectives This study evaluates the diagnostic accuracy and cost-effectiveness of the Kato-Katz and Mini-FLOTAC methods for detection of soil-transmitted helminths (STH) in a post-treatment setting in western Kenya. A cost analysis also explores the cost implications of collecting samples during school surveys when compared to household surveys. Methods Stool samples were collected from children (n = 652) attending 18 schools in Bungoma County and diagnosed by the Kato-Katz and Mini-FLOTAC coprological methods. Sensitivity and additional diagnostic performance measures were analyzed using Bayesian latent class modeling. Financial and economic costs were calculated for all survey and diagnostic activities, and cost per child tested, cost per case detected and cost per STH infection correctly classified were estimated. A sensitivity analysis was conducted to assess the impact of various survey parameters on cost estimates. Results Both diagnostic methods exhibited comparable sensitivity for detection of any STH species over single and consecutive day sampling: 52.0% for single day Kato-Katz; 49.1% for single-day Mini-FLOTAC; 76.9% for consecutive day Kato-Katz; and 74.1% for consecutive day Mini-FLOTAC. Diagnostic performance did not differ significantly between methods for the different STH species. Use of Kato-Katz with school-based sampling was the lowest cost scenario for cost per child tested ($10.14) and cost per case correctly classified ($12.84). Cost per case detected was lowest for Kato-Katz used in community-based sampling ($128.24). Sensitivity analysis revealed the cost of case detection for any STH decreased non-linearly as prevalence rates increased and was influenced by the number of samples collected. Conclusions The Kato-Katz method was comparable in diagnostic sensitivity to the Mini-FLOTAC method, but afforded greater cost-effectiveness. Future work is required to evaluate the cost-effectiveness of STH surveillance in different settings. 
PMID:24810593
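The per-unit cost metrics compared in this study (cost per child tested, per case detected, per correct classification) are simple ratios of total survey cost to the relevant counts. A minimal sketch; the total cost and the detected/correct counts below are invented so that cost per child comes out near the reported $10.14, and are not the study's data:

```python
def cost_metrics(total_cost, n_tested, n_detected, n_correct):
    """Per-unit cost metrics used to compare diagnostic survey scenarios."""
    return {
        "per_child_tested": total_cost / n_tested,
        "per_case_detected": total_cost / n_detected,
        "per_correct_classification": total_cost / n_correct,
    }

# Hypothetical school-survey figures for 652 children
m = cost_metrics(total_cost=6611.28, n_tested=652, n_detected=50, n_correct=515)
```

As the abstract's sensitivity analysis notes, cost per case detected falls non-linearly as prevalence rises, because the denominator grows while survey costs are largely fixed.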
Analysis of painted arts by energy sensitive radiographic techniques with the Pixel Detector Timepix
NASA Astrophysics Data System (ADS)
Zemlicka, J.; Jakubek, J.; Kroupa, M.; Hradil, D.; Hradilova, J.; Mislerova, H.
2011-01-01
Non-invasive techniques utilizing X-ray radiation offer a significant advantage in scientific investigations of paintings and other cultural artefacts such as statues. In addition, there is great demand for a mobile analytical and real-time imaging device, because many works of fine art cannot be transported. The highly sensitive hybrid semiconductor pixel detector Timepix is capable of detecting and resolving subtle and low-contrast differences in the inner composition of a wide variety of objects. Moreover, it is able to map the surface distribution of the contained elements. Several transmission and emission techniques are presented which have been proposed and tested for the analysis of painted artworks. This study focuses on the novel techniques of X-ray transmission radiography (conventional and energy sensitive) and X-ray induced fluorescence imaging (XRF), which can be realised at the table-top scale with the state-of-the-art pixel detector Timepix. Transmission radiography analyses the changes in X-ray beam intensity caused by the specific attenuation of different components in the sample. The conventional approach uses all energies from the source spectrum to create the image, while the energy sensitive alternative creates images in given energy intervals, enabling the identification and separation of materials. The XRF setup is based on the detection of characteristic radiation induced by X-ray photons through a pinhole-collimator geometry. The XRF method is extremely sensitive to material composition, but it produces only surface maps of the elemental distribution. For the purposes of the analysis, several sets of painted layers were prepared in a restoration laboratory; their composition corresponds to that of real historical paintings from the 19th century. An overview of the current status of our methods is given with respect to instrumentation and application in the field of cultural heritage.
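The physics behind the energy-sensitive transmission imaging described above is Beer-Lambert attenuation, where the transmitted fraction depends on an energy-dependent mass attenuation coefficient. A hedged sketch; the coefficients, density, and layer thickness below are invented for illustration, not measured pigment values:

```python
import math

def transmitted_fraction(mu_cm2_per_g, density_g_cm3, thickness_cm):
    """Beer-Lambert attenuation: I/I0 = exp(-mu * rho * t). Energy-sensitive
    radiography exploits the energy dependence of mu to separate materials."""
    return math.exp(-mu_cm2_per_g * density_g_cm3 * thickness_cm)

# Hypothetical mass attenuation coefficients (cm^2/g) of a heavy-element
# paint layer at a low and a high photon-energy window -- invented values.
mu_low, mu_high = 60.0, 8.0
density, thickness = 6.0, 0.005   # g/cm^3, cm (assumed)

t_low = transmitted_fraction(mu_low, density, thickness)
t_high = transmitted_fraction(mu_high, density, thickness)
contrast_ratio = t_high / t_low   # different energy windows change contrast
```

Imaging in separate energy intervals and comparing such ratios pixel by pixel is what lets the method distinguish materials that look identical in a conventional, full-spectrum radiograph.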
NASA Astrophysics Data System (ADS)
Ryken, A.; Gochis, D.; Carroll, R. W. H.; Bearup, L. A.; Williams, K. H.; Maxwell, R. M.
2017-12-01
The hydrology of high-elevation, mountainous regions is poorly represented in Earth System Models (ESMs). In addition to regulating downstream water delivery, these ecosystems play an important role in the storage and land-atmosphere exchange of carbon and water. Water balances are sensitive to the amount of water stored in the snowpack (SWE) and the amount of water leaving the system as evapotranspiration, two pieces of the hydrologic cycle that are difficult to observe and model in heterogeneous mountainous regions due to spatially variant weather patterns. In an effort to resolve this hydrologic gap in ESMs, this study seeks to better understand the interactions between groundwater, carbon flux, and the lower atmosphere in high-altitude environments through the integration of field observations and model simulations. We compare model simulations to field observations to elucidate process performance, combined with a sensitivity analysis to better understand parameter uncertainty. Observations from a meteorological station in the East River Basin are used to force an integrated single-column hydrologic model, ParFlow-CLM. This met station is co-located with an eddy covariance tower, which, along with snow surveys, is used to constrain the water, carbon, and energy fluxes in the coupled land-atmosphere model and increase our understanding of high-altitude headwaters. Preliminary results suggest the model compares well with the eddy covariance tower and field observations, shown through the correct magnitude and timing of peak SWE along with similar magnitudes and diurnal patterns of heat and water fluxes. Initial sensitivity analysis results show that an increase in air temperature leads to a decrease in peak SWE and an increase in latent heat, revealing the model's sensitivity to air temperature. Further sensitivity analysis will help us better understand parameter uncertainty. By obtaining more accurate, higher-resolution meteorological data and applying it to a coupled hydrologic model, this study can lead to better representation of mountainous environments in ESMs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hertz, P.R.
Fluorescence spectroscopy is a highly sensitive and selective tool for the analysis of complex systems. In order to investigate the efficacy of several steady state and dynamic techniques for the analysis of complex systems, this work focuses on two types of complex, multicomponent samples: petrolatums and coal liquids. It is shown in these studies that dynamic, fluorescence lifetime-based measurements provide enhanced discrimination between complex petrolatum samples. Additionally, improved quantitative analysis of multicomponent systems is demonstrated via incorporation of organized media in coal liquid samples. This research provides the first systematic studies of (1) multifrequency phase-resolved fluorescence spectroscopy for dynamic fluorescence spectral fingerprinting of complex samples, and (2) the incorporation of bile salt micellar media to improve accuracy and sensitivity for characterization of complex systems. In the petroleum studies, phase-resolved fluorescence spectroscopy is used to combine spectral and lifetime information through the measurement of phase-resolved fluorescence intensity. The intensity is collected as a function of excitation and emission wavelengths, angular modulation frequency, and detector phase angle. This multidimensional information enhances the ability to distinguish between complex samples with similar spectral characteristics. Examination of the eigenvalues and eigenvectors from factor analysis of phase-resolved and steady state excitation-emission matrices, using chemometric methods of data analysis, confirms that phase-resolved fluorescence techniques offer improved discrimination between complex samples as compared with conventional steady state methods.
Shao, Ying-Hui; Gu, Gao-Feng; Jiang, Zhi-Qiang; Zhou, Wei-Xing; Sornette, Didier
2012-01-01
Notwithstanding the significant efforts to develop estimators of long-range correlations (LRC) and to compare their performance, no clear consensus exists on what is the best method and under which conditions. In addition, synthetic tests suggest that the performance of LRC estimators varies when using different generators of LRC time series. Here, we compare the performances of four estimators [Fluctuation Analysis (FA), Detrended Fluctuation Analysis (DFA), Backward Detrending Moving Average (BDMA), and Centred Detrending Moving Average (CDMA)]. We use three different generators [Fractional Gaussian Noises, and two ways of generating Fractional Brownian Motions]. We find that CDMA has the best performance and DFA is only slightly worse in some situations, while FA performs the worst. In addition, CDMA and DFA are less sensitive to the scaling range than FA. Hence, CDMA and DFA remain “The Methods of Choice” in determining the Hurst index of time series. PMID:23150785
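Detrended Fluctuation Analysis, one of the four estimators compared above, fits the scaling F(s) ~ s^α of window-wise detrended fluctuations; for white noise the exponent is near 0.5. A bare-bones sketch (non-overlapping windows, first-order detrending only; production DFA implementations handle more):

```python
import numpy as np

def dfa_hurst(x, scales=(8, 16, 32, 64, 128)):
    """DFA-1: slope of log F(s) vs. log s, where F(s) is the RMS of the
    linearly detrended integrated series over windows of size s."""
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())          # integrated (profile) series
    flucts = []
    for s in scales:
        n_seg = len(profile) // s
        rms = []
        for i in range(n_seg):
            seg = profile[i * s:(i + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)       # linear detrend per window
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
        flucts.append(np.mean(rms))
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

rng = np.random.default_rng(0)
alpha_wn = dfa_hurst(rng.standard_normal(4096))   # white noise: alpha ~ 0.5
```

Sensitivity to the chosen scaling range, which the abstract highlights, corresponds here to how the fitted slope shifts when `scales` is changed.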
Hughes, I
1998-09-24
The direct analysis of selected components from combinatorial libraries by sensitive methods such as mass spectrometry is potentially more efficient than deconvolution and tagging strategies since additional steps of resynthesis or introduction of molecular tags are avoided. A substituent selection procedure is described that eliminates the mass degeneracy commonly observed in libraries prepared by "split-and-mix" methods, without recourse to high-resolution mass measurements. A set of simple rules guides the choice of substituents such that all components of the library have unique nominal masses. Additional rules extend the scope by ensuring that characteristic isotopic mass patterns distinguish isobaric components. The method is applicable to libraries having from two to four varying substituent groups and can encode from a few hundred to several thousand components. No restrictions are imposed on the manner in which the "self-coded" library is synthesized or screened.
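The core requirement of such a "self-coded" library, that every combination of substituents yields a unique nominal mass, is easy to check combinatorially. A sketch with invented substituent masses (the actual selection rules in the paper are more elaborate, covering isotope patterns as well):

```python
from itertools import product

def is_self_coded(substituent_sets, core_mass=0):
    """True when every combination of substituent masses gives a unique
    nominal mass, so each library component is identifiable directly by MS."""
    masses = [core_mass + sum(combo) for combo in product(*substituent_sets)]
    return len(masses) == len(set(masses))

# Hypothetical nominal substituent masses (Da) at two variable positions.
degenerate = [[15, 29, 43], [57, 71, 85]]   # shared 14-Da spacing collides
coded = [[15, 29, 43], [57, 58, 60]]        # spacings avoid collisions

bad = is_self_coded(degenerate)
good = is_self_coded(coded)
```

The degenerate example fails because, e.g., 15 + 71 and 29 + 57 both sum to 86 Da, the mass degeneracy typical of homologous series in split-and-mix libraries.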
NASA Astrophysics Data System (ADS)
Alamgir, Malik; Khuhawar, Muhammad Yar; Memon, Saima Q.; Hayat, Amir; Zounr, Rizwan Ali
2015-01-01
A sensitive and simple spectrofluorimetric method has been developed for the analysis of famotidine in pharmaceutical preparations and biological fluids after derivatization with benzoin. The reaction was carried out in alkaline medium, with measurement of fluorescence intensity at 446 nm and an excitation wavelength of 286 nm. Linear calibration was obtained over 0.5-15 μg/ml with a coefficient of determination (r2) of 0.997. The factors affecting the fluorescence intensity were optimized. Pharmaceutical additives and amino acids did not interfere in the determination. The mean percentage recovery (n = 4), calculated by standard addition from pharmaceutical preparations, was 94.8-98.2% with relative standard deviation (RSD) 1.56-3.34%, and recovery from deproteinized spiked serum and urine of healthy volunteers was 98.6-98.9% and 98.0-98.4% with RSD 0.34-0.84% and 0.29-0.87%, respectively.
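The calibration and recovery figures reported above come from a least-squares line with r² and a found/added ratio. A minimal sketch; the intensity readings and the spiked-recovery numbers are invented to fall in the abstract's ranges:

```python
def linear_fit(x, y):
    """Least-squares line y = a*x + b with coefficient of determination r^2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1 - ss_res / ss_tot

# Hypothetical fluorescence intensities over the 0.5-15 ug/mL range
conc = [0.5, 2, 5, 10, 15]
intensity = [12, 45, 118, 230, 348]
slope, intercept, r2 = linear_fit(conc, intensity)

# Percentage recovery by standard addition: found / added * 100
recovery = (4.74 / 5.0) * 100  # e.g. 4.74 ug/mL found for a 5 ug/mL spike
```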
Gu, Binghe; Meldrum, Brian; McCabe, Terry; Phillips, Scott
2012-01-01
A theoretical treatment was developed and validated that relates analyte concentration and mass sensitivities to injection volume, retention factor, particle diameter, column length, column inner diameter and detection wavelength in liquid chromatography, and sample volume and extracted volume in solid-phase extraction (SPE). The principles were applied to improve sensitivity for trace analysis of clopyralid in drinking water. It was demonstrated that a concentration limit of detection of 0.02 ppb (μg/L) for clopyralid could be achieved with the use of simple UV detection and 100 mL of a spiked drinking water sample. This enabled reliable quantitation of clopyralid at the targeted 0.1 ppb level. Using a buffered solution as the elution solvent (potassium acetate buffer, pH 4.5, containing 10% of methanol) in the SPE procedures was found superior to using 100% methanol, as it provided better extraction recovery (70-90%) and precision (5% for a concentration at 0.1 ppb level). In addition, the eluted sample was in a weaker solvent than the mobile phase, permitting the direct injection of the extracted sample, which enabled a faster cycle time of the overall analysis. Excluding the preparation of calibration standards, the analysis of a single sample, including acidification, extraction, elution and LC run, could be completed in 1 h. The method was used successfully for the determination of clopyralid in over 200 clopyralid monoethanolamine-fortified drinking water samples, which were treated with various water treatment resins. Copyright © 2012 WILEY‐VCH Verlag GmbH & Co. KGaA, Weinheim.
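The route from a 100 mL sample to a 0.02 ppb concentration LOD is preconcentration: SPE raises the analyte concentration by roughly the sample-to-eluate volume ratio, scaled by recovery. A hedged sketch loosely consistent with the abstract's figures; the 1.6 ppb instrumental LOD and 1 mL eluate volume are assumptions for illustration:

```python
def enrichment_factor(sample_volume_ml, eluate_volume_ml, recovery=1.0):
    """Preconcentration achieved by SPE: rise in analyte concentration from
    sample to eluate, scaled by the fractional extraction recovery."""
    return recovery * sample_volume_ml / eluate_volume_ml

def concentration_lod(instrument_lod_ppb, factor):
    """Method LOD referred back to the original sample after enrichment."""
    return instrument_lod_ppb / factor

# Hypothetical: 100 mL of water extracted into 1 mL at 80% recovery
ef = enrichment_factor(100, 1, recovery=0.8)
lod = concentration_lod(1.6, ef)
```

With an 80-fold enrichment, an instrument that can only see 1.6 ppb in the injected extract reaches 0.02 ppb in the original drinking water, comfortably below the 0.1 ppb quantitation target.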
Simulation-based optimization framework for reuse of agricultural drainage water in irrigation.
Allam, A; Tawfik, A; Yoshimura, C; Fleifle, A
2016-05-01
A simulation-based optimization framework for agricultural drainage water (ADW) reuse has been developed through the integration of a water quality model (QUAL2Kw) and a genetic algorithm. This framework was applied to the Gharbia drain in the Nile Delta, Egypt, in summer and winter 2012. First, the water quantity and quality of the drain was simulated using the QUAL2Kw model. Second, uncertainty analysis and sensitivity analysis based on Monte Carlo simulation were performed to assess QUAL2Kw's performance and to identify the most critical variables for determination of water quality, respectively. Finally, a genetic algorithm was applied to maximize the total reuse quantity from seven reuse locations with the condition not to violate the standards for using mixed water in irrigation. The water quality simulations showed that organic matter concentrations are critical management variables in the Gharbia drain. The uncertainty analysis showed the reliability of QUAL2Kw to simulate water quality and quantity along the drain. Furthermore, the sensitivity analysis showed that the 5-day biochemical oxygen demand, chemical oxygen demand, total dissolved solids, total nitrogen and total phosphorous are highly sensitive to point source flow and quality. Additionally, the optimization results revealed that the reuse quantities of ADW can reach 36.3% and 40.4% of the available ADW in the drain during summer and winter, respectively. These quantities meet 30.8% and 29.1% of the drainage basin requirements for fresh irrigation water in the respective seasons. Copyright © 2016 Elsevier Ltd. All rights reserved.
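The genetic-algorithm step above, maximizing total reuse subject to irrigation-quality standards, can be sketched with a toy constrained GA. Everything here is an assumption for illustration: seven reuse sites, a per-site capacity, and a crude linear surrogate for water quality standing in for the QUAL2Kw simulation:

```python
import random

def fitness(x, limit):
    """Total reuse across sites, heavily penalized when the (toy) mixed-water
    quality surrogate exceeds the irrigation standard."""
    total = sum(x)
    quality = 0.1 * total              # stand-in for simulated BOD/COD load
    return total - (1000.0 if quality > limit else 0.0)

def genetic_algorithm(n_sites=7, capacity=10.0, limit=5.0,
                      pop_size=40, generations=60, seed=3):
    random.seed(seed)
    pop = [[random.uniform(0, capacity) for _ in range(n_sites)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda x: fitness(x, limit), reverse=True)
        survivors = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_sites)
            child = a[:cut] + b[cut:]            # one-point crossover
            i = random.randrange(n_sites)        # Gaussian mutation, clipped
            child[i] = min(capacity, max(0.0, child[i] + random.gauss(0, 1)))
            children.append(child)
        pop = survivors + children
    best = max(pop, key=lambda x: fitness(x, limit))
    return best, fitness(best, limit)

best, score = genetic_algorithm()
```

With this surrogate the quality constraint caps total reuse at 50 units, so the GA should push the feasible total toward that ceiling, mirroring how the real framework maximizes reuse quantity without violating the mixed-water standard.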
NASA Astrophysics Data System (ADS)
Caton, R. G.; Colman, J. J.; Parris, R. T.; Nickish, L.; Bullock, G.
2017-12-01
The Air Force Research Laboratory, in collaboration with NorthWest Research Associates, is developing advanced software capabilities for high fidelity simulations of high frequency (HF) sky wave propagation and performance analysis of HF systems. Based on the HiCIRF (High-frequency Channel Impulse Response Function) platform [Nickisch et al., doi:10.1029/2011RS004928], the new Air Force Coverage Analysis Program (AFCAP) provides the modular capabilities necessary for a comprehensive sensitivity study of the large number of variables which define simulations of HF propagation modes. In this paper, we report on an initial exercise of AFCAP to analyze the sensitivities of the tool to various environmental and geophysical parameters. Through examination of the channel scattering function and amplitude-range-Doppler output on two-way propagation paths with injected target signals, we compare simulated returns over a range of geophysical conditions as well as varying definitions for environmental noise, meteor clutter, and sea state models for Bragg backscatter. We also investigate the impact of including clutter effects due to field-aligned backscatter from small-scale ionization structures at varied levels of severity, as defined by the climatological WideBand Model (WBMOD). In the absence of additional user-provided information, AFCAP relies on the International Reference Ionosphere (IRI) model to define the ionospheric state for use in 2D ray tracing algorithms. Because the AFCAP architecture includes the option to insert a user-defined gridded ionospheric representation, we compare output from the tool using the IRI with ionospheric definitions from assimilative models such as GPSII (GPS Ionospheric Inversion).
Tracking Matrix Effects in the Analysis of DNA Adducts of Polycyclic Aromatic Hydrocarbons
Klaene, Joshua J.; Flarakos, Caroline; Glick, James; Barret, Jennifer T.; Zarbl, Helmut; Vouros, Paul
2015-01-01
LC-MS using electrospray ionization is currently the method of choice in bio-organic analysis, covering a wide range of applications in a broad spectrum of biological media. The technique is noted for its high sensitivity, but one major limitation which hinders achievement of its optimal sensitivity is signal suppression due to matrix interferences introduced by co-extracted compounds during the sample preparation procedure. The analysis of DNA adducts of common environmental carcinogens is particularly sensitive to such matrix effects, as sample preparation is a multistep process which involves “contamination” of the sample through the addition of enzymes and other reagents for digestion of the DNA in order to isolate the analyte(s). This problem is further exacerbated by the need to reach low levels of quantitation (LOQ at the ppb level) while also working with limited (2-5 μg) quantities of sample. We report here on the systematic investigation of ion signal suppression contributed by each individual step involved in the sample preparation associated with the analysis of DNA adducts of polycyclic aromatic hydrocarbons (PAHs), using as model analyte dG-BaP, the deoxyguanosine adduct of benzo[a]pyrene (BaP). The individual contribution of each of these matrix sources to analyte signal was systematically addressed, as were any interactive effects. The information was used to develop a validated analytical protocol for the target biomarker at levels typically encountered in vivo using as little as 2 μg of DNA, and applied to a dose-response study using a metabolically competent cell line. PMID:26607319
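Matrix effects of the kind tracked above are conventionally quantified by comparing calibration slopes (or responses) in matrix vs. neat solvent. A minimal sketch of those standard ratios; all slope values below are invented:

```python
def matrix_effect_pct(slope_matrix, slope_solvent):
    """Matrix effect as a percentage of the neat-solvent response:
    100% = no suppression, below 100% = ion suppression."""
    return 100.0 * slope_matrix / slope_solvent

def process_efficiency(slope_spiked_before, slope_solvent):
    """Overall process efficiency: extraction recovery and matrix effect
    combined (analyte spiked before sample preparation)."""
    return 100.0 * slope_spiked_before / slope_solvent

me = matrix_effect_pct(7200.0, 9000.0)   # hypothetical calibration slopes
pe = process_efficiency(6300.0, 9000.0)
recovery = 100.0 * pe / me               # since PE = RE * ME / 100
```

Measuring ME after each preparation step (enzyme addition, digestion, cleanup) is exactly how the per-step suppression contributions described in the abstract can be isolated.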
NASA Astrophysics Data System (ADS)
La Vigna, Francesco; Hill, Mary C.; Rossetto, Rudy; Mazza, Roberto
2016-09-01
With respect to model parameterization and sensitivity analysis, this work uses a practical example to suggest that methods that start with simple models and use computationally frugal model analysis methods remain valuable in any toolbox of model development methods. In this work, groundwater model calibration starts with a simple parameterization that evolves into a moderately complex model. The model is developed for a water management study of the Tivoli-Guidonia basin (Rome, Italy), where surface mining has been conducted in conjunction with substantial dewatering. The approach to model development used in this work employs repeated analysis using sensitivity and inverse methods, including use of a new observation-stacked parameter importance graph. The methods are highly parallelizable and require few model runs, which makes the repeated analyses and attendant insights possible. The success of a model development design can be measured by the insights attained and by demonstrated model accuracy relevant to predictions. Two example insights were obtained: (1) a long-held belief that, except for a few distinct fractures, the travertine is homogeneous was found to be inadequate, and (2) the dewatering pumping rate is more critical to model accuracy than expected. The latter insight motivated additional data collection and improved pumpage estimates. Validation tests using three other recharge and pumpage conditions suggest good accuracy for the predictions considered. The model was used to evaluate management scenarios and showed that similar dewatering results could be achieved using 20% less pumped water, but would require installing newly positioned wells and cooperation between mine owners.
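The frugal, repeated sensitivity analysis this abstract describes can be sketched with one-at-a-time finite differences and composite scaled sensitivities (one extra model run per parameter). The toy model, parameter names, and weights below are hypothetical illustrations, not the Tivoli-Guidonia model:

```python
import numpy as np

def scaled_sensitivities(model, params, obs_weights, rel_step=0.01):
    """One-at-a-time dimensionless scaled sensitivities: frugal because
    only one extra model run per parameter is needed.
    model(params) -> simulated observation vector;
    obs_weights ~ 1 / observation standard deviation."""
    base = np.asarray(model(params))
    sens = {}
    for name, value in params.items():
        perturbed = dict(params)
        perturbed[name] = value * (1.0 + rel_step)
        dy = (np.asarray(model(perturbed)) - base) / (value * rel_step)
        # dimensionless scaled sensitivity: (dy/db) * b * weight
        dss = dy * value * np.asarray(obs_weights)
        # composite scaled sensitivity summarizes importance per parameter
        sens[name] = float(np.sqrt(np.mean(dss ** 2)))
    return sens

# Hypothetical two-parameter "model": heads at three wells from a toy
# drawdown response (illustrative only).
def toy_model(p):
    K, Q = p["K"], p["Q"]
    return [100 - Q / K, 100 - 0.5 * Q / K, 100 - 0.25 * Q / K]

css = scaled_sensitivities(toy_model, {"K": 10.0, "Q": 50.0}, [1.0, 1.0, 1.0])
```

Parameters with small composite scaled sensitivities are candidates for fixing at prior values, which is how a simple parameterization can grow into a moderately complex one only where the data support it.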
Sensitivity analysis of a wing aeroelastic response
NASA Technical Reports Server (NTRS)
Kapania, Rakesh K.; Eldred, Lloyd B.; Barthelemy, Jean-Francois M.
1991-01-01
A variation of Sobieski's Global Sensitivity Equations (GSE) approach is implemented to obtain the sensitivity of the static aeroelastic response of a three-dimensional wing model. The formulation is quite general and accepts any aerodynamics and structural analysis capability. An interface code is written to convert one analysis's output to the other's input, and vice versa. Local sensitivity derivatives are calculated by either analytic methods or finite difference techniques. A program to combine the local sensitivities, such as the sensitivity of the stiffness matrix or the aerodynamic kernel matrix, into global sensitivity derivatives is developed. The aerodynamic analysis package FAST, using lifting-surface theory, and a structural package, ELAPS, implementing Giles' equivalent plate model, are used.
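The GSE idea of assembling local partial derivatives into total (global) derivatives of a coupled system can be illustrated for a generic two-discipline couple y1 = f1(x, y2), y2 = f2(x, y1); the linear toy couple below is an assumption for illustration, not the FAST/ELAPS wing model:

```python
import numpy as np

def gse_total_derivatives(df1_dx, df1_dy2, df2_dx, df2_dy1):
    """Sobieski-style Global Sensitivity Equations for a two-discipline
    coupled system.  The local partials (here scalars) are assumed
    available from each discipline; solving the GSE linear system
    returns the total derivatives [dy1/dx, dy2/dx] of the converged
    coupled solution."""
    A = np.array([[1.0, -df1_dy2],
                  [-df2_dy1, 1.0]])
    b = np.array([df1_dx, df2_dx])
    return np.linalg.solve(A, b)

# Toy linear couple: y1 = 2x + 0.5*y2, y2 = x + 0.25*y1 (illustrative).
# Analytic totals: dy1/dx = 20/7, dy2/dx = 12/7.
dy = gse_total_derivatives(df1_dx=2.0, df1_dy2=0.5, df2_dx=1.0, df2_dy1=0.25)
```

In the aeroelastic case the scalars become matrices (e.g. derivatives of the stiffness matrix and aerodynamic kernel matrix), but the structure of the linear system is the same.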
Jain, Siddharth; Kilgore, Meredith; Edwards, Rodney K; Owen, John
2016-07-01
Preterm birth (PTB) is a significant cause of neonatal morbidity and mortality. Studies have shown that vaginal progesterone therapy for women diagnosed with shortened cervical length can reduce the risk of PTB. However, published cost-effectiveness analyses of vaginal progesterone for short cervix have not considered an appropriate range of clinically important parameters. To evaluate the cost-effectiveness of universal cervical length screening in women without a history of spontaneous PTB, assuming that all women with shortened cervical length receive progesterone to reduce the likelihood of PTB. A decision analysis model was developed to compare universal screening and no-screening strategies. The primary outcome was the incremental cost-effectiveness ratio of the two strategies, defined as the estimated patient cost per quality-adjusted life-year (QALY) realized by the children. One-way sensitivity analyses were performed by varying progesterone efficacy to prevent PTB. A probabilistic sensitivity analysis was performed to address uncertainties in model parameter estimates. In our base-case analysis, assuming that progesterone reduces the likelihood of PTB by 11%, the incremental cost-effectiveness ratio for screening was $158,000/QALY. Sensitivity analyses show that these results are highly sensitive to the presumed efficacy of progesterone to prevent PTB. In a one-way sensitivity analysis, screening results in cost savings if progesterone can reduce PTB by 36%. Additionally, for screening to be cost-effective at a willingness-to-pay threshold of $60,000/QALY in three clinical scenarios, progesterone therapy would have to reduce PTB by 60%, 34%, and 93%, respectively. Screening is never cost-saving in the worst-case scenario or when serial ultrasounds are employed, but could be cost-saving with a two-day hospitalization only if progesterone were 64% effective.
Cervical length screening and treatment with progesterone is not a dominant, cost-effective strategy unless progesterone is more effective than available data for US women suggest. Until future trials demonstrate greater progesterone efficacy, and effectiveness studies confirm a benefit from screening and treatment, the cost-effectiveness of universal cervical length screening in the United States remains questionable. Copyright © 2016 Elsevier Inc. All rights reserved.
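The incremental cost-effectiveness ratio that drives this abstract's conclusions is simple arithmetic; a minimal sketch with hypothetical placeholder numbers (not the study's actual cost or QALY estimates):

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio:
    (cost difference) / (QALY difference)."""
    d_effect = qaly_new - qaly_old
    if d_effect <= 0:
        raise ValueError("new strategy is not more effective")
    return (cost_new - cost_old) / d_effect

# A strategy is cost-effective at willingness-to-pay WTP if icer <= WTP,
# and cost-saving ("dominant") if its cost difference is negative.
# Illustrative per-patient placeholders only:
ratio = icer(cost_new=1_200, cost_old=1_000, qaly_new=0.0020, qaly_old=0.0010)
```

One-way sensitivity analysis, as used in the study, amounts to recomputing this ratio while sweeping one input (here, the PTB reduction that feeds into the QALY difference) over its plausible range.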
2016-10-01
lymphoid and cancer cells from freshly dissociated tumors in cases where enough tumor is available, allowing analysis by flow cytometry and mRNA...agent, 5-aza-cytidine (AZA) potently stimulates tumor immune attraction of T-cells to the tumor microenvironment. This augmented by addition of a...demethylation, histone deacetylases, immune checkpoint therapy, viral defense, immune cell attraction
Microemulsion Electrokinetic Chromatography.
Buchberger, Wolfgang
2016-01-01
Microemulsion electrokinetic chromatography (MEEKC) is a special mode of capillary electrophoresis employing a microemulsion as carrier electrolyte. Analytes may partition between the aqueous phase of the microemulsion and its oil droplets which act as a pseudostationary phase. The technique is well suited for the separation of neutral species, in which case charged oil droplets (obtained by addition of an anionic or cationic surfactant) are present. A single set of separation parameters may be sufficient for separation of a wide range of analytes belonging to quite different chemical classes. Fine-tuning of resolution and analysis time may be achieved by addition of organic solvents, by changes in the nature of the surfactants (and cosurfactants) used to stabilize the microemulsion, or by various additives that may undergo some additional interactions with the analytes. Besides the separation of neutral analytes (which may be the most important application area of MEEKC), it can also be employed for cationic and/or anionic species. In this chapter, MEEKC conditions are summarized that have proven their reliability for routine analysis. Furthermore, the mechanisms encountered in MEEKC allow an efficient on-capillary preconcentration of analytes, so that the problem of poor concentration sensitivity of ultraviolet absorbance detection is circumvented.
Geographic and Environmental Sources of Variation in Lake Bacterial Community Composition†
Yannarell, Anthony C.; Triplett, Eric W.
2005-01-01
This study used a genetic fingerprinting technique (automated ribosomal intergenic spacer analysis [ARISA]) to characterize microbial communities from a culture-independent perspective and to identify those environmental factors that influence the diversity of bacterial assemblages in Wisconsin lakes. The relationships between bacterial community composition and 11 environmental variables for a suite of 30 lakes from northern and southern Wisconsin were explored by canonical correspondence analysis (CCA). In addition, the study assessed the influences of ARISA fragment detection threshold (sensitivity) and the quantitative, semiquantitative, and binary (presence-absence) use of ARISA data. It was determined that the sensitivity of ARISA was influential only when presence-absence-transformed data were used. The outcomes of analyses depended somewhat on the data transformation applied to ARISA data, but there were some features common to all of the CCA models. These commonalities indicated that differences in bacterial communities were best explained by regional (i.e., northern versus southern Wisconsin lakes) and landscape level (i.e., seepage lakes versus drainage lakes) factors. ARISA profiles from May samples were consistently different from those collected in other months. In addition, communities varied along gradients of pH and water clarity (Secchi depth) both within and among regions. The results demonstrate that environmental, temporal, regional, and landscape level features interact to determine the makeup of bacterial assemblages in northern temperate lakes. PMID:15640192
Strategies and Approaches to TPS Design
NASA Technical Reports Server (NTRS)
Kolodziej, Paul
2005-01-01
Thermal protection systems (TPS) insulate planetary probes and Earth re-entry vehicles from the aerothermal heating experienced during hypersonic deceleration to the planet's surface. The systems are typically designed with some additional capability to compensate both for variations in the TPS material and for uncertainties in the heating environment. This additional capability, or robustness, also provides a surge capability for operating under abnormally severe conditions for a short period of time, and for unexpected events, such as meteoroid impact damage, that would detract from the nominal performance. Strategies and approaches to developing robust designs must also minimize mass, because an extra kilogram of TPS displaces one kilogram of payload. Because aircraft structures must be optimized for minimum mass, reliability-based design approaches that minimize mass already exist for mechanical components. Adapting these existing approaches to TPS component design takes advantage of the extensive work, knowledge, and experience from nearly fifty years of reliability-based design of mechanical components. A Non-Dimensional Load Interference (NDLI) method for calculating the thermal reliability of TPS components is presented in this lecture and applied to several examples. A sensitivity analysis from an existing numerical simulation of a carbon phenolic TPS provides insight into the effects of the various design parameters, and is used to demonstrate how sensitivity analysis may be used with NDLI to develop reliability-based designs of TPS components.
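The load-interference reliability calculation underlying NDLI-style methods can be sketched for normally distributed load and strength: the component survives when strength exceeds load. The numbers below are illustrative assumptions, not TPS design values, and the lecture's method works with non-dimensionalized variables, but the core computation is the same:

```python
import math

def interference_reliability(mu_load, sd_load, mu_strength, sd_strength):
    """Classical load-strength interference for independent normal
    load L and strength S: R = P(S > L) = Phi(z), where
    z = (mu_S - mu_L) / sqrt(sd_S^2 + sd_L^2)."""
    z = (mu_strength - mu_load) / math.hypot(sd_load, sd_strength)
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Example: heating load of 100 units (sd 10) against a TPS thermal
# capability of 150 units (sd 15); both distributions are made up.
R = interference_reliability(100.0, 10.0, 150.0, 15.0)
```

Robustness trades against mass here: raising mean strength (thicker TPS) increases R but displaces payload, which is why sensitivity analysis of the design parameters feeding mu and sd is so useful.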
NASA Astrophysics Data System (ADS)
Huan, Tao; Troyer, Dean A.; Li, Liang
2016-08-01
We report a method of metabolomic profiling of intact tissue based on molecular preservation by extraction and fixation (mPREF) and high-performance chemical isotope labeling (CIL) liquid chromatography mass spectrometry (LC-MS). mPREF extracts metabolites by aqueous methanol from tissue biopsies without altering tissue architecture, so conventional histology can be performed on the same tissue. In a proof-of-principle study, we applied dansylation LC-MS to profile the amine/phenol submetabolome of prostate needle biopsies from 25 patient samples derived from 16 subjects. 2900 metabolites were consistently detected in more than 50% of the samples. This unprecedented coverage allowed us to identify significant metabolites for differentiating tumor and normal tissues. The panel of significant metabolites was refined using 36 additional samples from 18 subjects. Receiver operating characteristic (ROC) analysis showed an area under the curve (AUC) of 0.896, with a sensitivity of 84.6% and a specificity of 83.3% using 7 metabolites. A blind study of 24 additional validation samples gave a specificity of 90.9% at the same sensitivity of 84.6%. The mPREF extraction can be readily implemented into the existing clinical workflow. Our method of combining mPREF with CIL LC-MS offers a powerful and convenient means of performing histopathology and discovering or detecting metabolite biomarkers in the same tissue biopsy.
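The sensitivity, specificity, and ROC AUC figures reported in abstracts like this one can be computed from first principles; the scores, labels, and threshold below are made-up numbers, not the study's metabolite data:

```python
def sens_spec(scores, labels, threshold):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP), calling a
    sample positive when its score is at or above the threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= threshold)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < threshold)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

def auc(scores, labels):
    """ROC AUC via the Mann-Whitney formulation: the probability that a
    random positive outscores a random negative (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]  # hypothetical panel scores
labels = [1,   1,   0,   1,   0,   1,   0,   0]    # 1 = tumor, 0 = normal
se, sp = sens_spec(scores, labels, threshold=0.5)
```

The blind-validation step the authors describe corresponds to recomputing these metrics on samples that were never used to pick the threshold or the metabolite panel.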
Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation
NASA Astrophysics Data System (ADS)
Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten
2015-04-01
Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. Little guidance is available for these two steps in environmental modelling, however. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing level of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the value of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the value of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
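The bootstrap-based convergence criterion the abstract describes can be sketched as follows: resample the existing model runs with replacement, recompute the sensitivity index, and monitor the confidence-interval width as the sample grows. The squared-correlation index below is a simple stand-in assumption, not the EE/RSA/variance-based estimators the study actually tested:

```python
import numpy as np

rng = np.random.default_rng(0)

def sensitivity_index(x, y):
    # Stand-in index: squared linear correlation between one input
    # factor and the output (0 = insensitive, 1 = fully explanatory).
    return float(np.corrcoef(x, y)[0, 1] ** 2)

def bootstrap_ci_width(x, y, n_boot=500, alpha=0.05):
    """Width of the bootstrap (1-alpha) confidence interval of the
    sensitivity index; no extra model runs are needed, only resampling."""
    n = len(x)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        stats.append(sensitivity_index(x[idx], y[idx]))
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return hi - lo

# Toy model y = 3*x1 + 0.1*x2 + noise: x1 is the influential factor,
# and a larger sample should tighten the interval (i.e., converge).
x1, x2 = rng.uniform(size=2000), rng.uniform(size=2000)
y = 3 * x1 + 0.1 * x2 + rng.normal(scale=0.1, size=2000)
width_small = bootstrap_ci_width(x1[:100], y[:100])
width_large = bootstrap_ci_width(x1, y)
```

Ranking convergence can be checked the same way, by asking whether the ordering of the bootstrapped indices across parameters stops changing, which may stabilize well before the index values themselves do.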
Kawai, Takayuki; Koino, Hiroshi; Sueyoshi, Kenji; Kitagawa, Fumihiko; Otsuka, Koji
2012-07-13
To improve the sensitivity in chiral analysis by capillary electrophoresis without loss of optical resolution, application of large-volume sample stacking with an electroosmotic flow pump (LVSEP) was investigated. Effects of the addition of cyclodextrin (CD) into a running solution on the LVSEP preconcentration were theoretically studied, where the preconcentration efficiency and effective separation length would be slightly increased if the effective electrophoretic velocity (v(ep,eff,BGS)) of the analytes was decreased by interacting with CD. In LVSEP-CD-modified capillary zone electrophoresis (CDCZE) and LVSEP-CD electrokinetic chromatography with reduced v(ep,eff,BGS), up to 1000-fold sensitivity increases were achieved with almost no loss of resolution. In LVSEP-CD-modified micellar electrokinetic chromatography of amino acids with increased v(ep,eff,BGS), a 1300-fold sensitivity increase was achieved without much loss of resolution, indicating the versatile applicability of LVSEP to many separation modes. An enantio-excess (EE) assay was also carried out in LVSEP-CDCZE, resulting in successful analyses of up to 99.6% EE. Finally, we analyzed ibuprofen in urine by desalting with a C₁₈ solid-phase extraction column. As a typical result, 250 ppb ibuprofen was well concentrated and optically resolved with 84.0-86.6% recovery in LVSEP-CDCZE, indicating the applicability of LVSEP to real samples containing a large amount of unnecessary background salts. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Loughman, R.; Flittner, D.; Herman, B.; Bhartia, P.; Hilsenrath, E.; McPeters, R.; Rault, D.
2002-01-01
The SOLSE (Shuttle Ozone Limb Sounding Experiment) and LORE (Limb Ozone Retrieval Experiment) instruments are scheduled for reflight on Space Shuttle flight STS-107 in July 2002. In addition, the SAGE III (Stratospheric Aerosol and Gas Experiment) instrument will begin to make limb scattering measurements during Spring 2002. The optimal estimation technique is used to analyze visible and ultraviolet limb-scattered radiances and produce a retrieved ozone profile. The algorithm used to analyze data from the initial flight of the SOLSE/LORE instruments (on Space Shuttle flight STS-87 in November 1997) forms the basis of the current algorithms, with expansion to take advantage of the increased multispectral information provided by SOLSE/LORE-2 and SAGE III. We also present a detailed sensitivity analysis for these ozone retrieval algorithms. The primary source of ozone retrieval error is tangent height misregistration (i.e., instrument pointing error), which is relevant throughout the altitude range of interest and can produce retrieval errors on the order of 10-20 percent for a tangent height registration error of 0.5 km at the tangent point. Other significant sources of error are sensitivity to stratospheric aerosol and sensitivity to error in the a priori ozone estimate (assuming an instrument signal-to-noise ratio of 200). These can produce errors up to 10 percent for the ozone retrieval at altitudes less than 20 km, but produce little error above that level.
Watanabe, Yoshiyuki; Kim, Hyun Soo; Castoro, Ryan J; Chung, Woonbok; Estecio, Marcos R H; Kondo, Kimie; Guo, Yi; Ahmed, Saira S; Toyota, Minoru; Itoh, Fumio; Suk, Ki Tae; Cho, Mee-Yon; Shen, Lanlan; Jelinek, Jaroslav; Issa, Jean-Pierre J
2009-06-01
Aberrant DNA methylation is an early and frequent process in gastric carcinogenesis and could be useful for detection of gastric neoplasia. We hypothesized that methylation analysis of DNA recovered from gastric washes could be used to detect gastric cancer. We studied 51 candidate genes in 7 gastric cancer cell lines and 24 samples (training set) and identified 6 for further studies. We examined the methylation status of these genes in a test set consisting of 131 gastric neoplasias at various stages. Finally, we validated the 6 candidate genes in a different population of 40 primary gastric cancer samples and 113 nonneoplastic gastric mucosa samples. Six genes (MINT25, RORA, GDNF, ADAM23, PRDM5, MLF1) showed frequent differential methylation between gastric cancer and normal mucosa in the training, test, and validation sets. GDNF and MINT25 were the most sensitive molecular markers of early stage gastric cancer, whereas PRDM5 and MLF1 were markers of a field defect. There was a close correlation (r = 0.5-0.9, P = .03-.001) between methylation levels in tumor biopsy and gastric washes. MINT25 methylation had the best sensitivity (90%), specificity (96%), and area under the receiver operating characteristic curve (0.961) in terms of tumor detection in gastric washes. These findings suggest MINT25 is a sensitive and specific marker for screening in gastric cancer. Additionally, we have developed a new method for gastric cancer detection by DNA methylation in gastric washes.
Sensitive Carbohydrate Detection using Surface Enhanced Raman Tagging
Vangala, Karthikeshwar; Yanney, Michael; Hsiao, Cheng-Te; Wu, Wells W.; Shen, Rong-Fong; Zou, Sige; Sygula, Andrzej; Zhang, Dongmao
2010-01-01
Glycomic analysis is an increasingly important field in biological and biomedical research, as glycosylation is one of the most important protein post-translational modifications. We have developed a new technique to detect carbohydrates using surface enhanced Raman spectroscopy (SERS) by designing and applying a Rhodamine B derivative as the SERS tag. Using a reductive amination reaction, the Rhodamine-based tag (RT) was successfully conjugated to three model carbohydrates (glucose, lactose and glucuronic acid). SERS detection limits obtained with a 632 nm HeNe laser were ~1 nM in concentration for all the RT-carbohydrate conjugates and ~10 fmol in total sample consumption. The dynamic range of the SERS method is about 4 orders of magnitude, spanning from 1 nM to 5 µM. Ratiometric SERS quantification using isotope-substituted SERS internal references also allows comparative quantification of carbohydrates labeled with RT and deuterium/hydrogen-substituted RT tags, respectively. In addition to enhancing the SERS detection of the tagged carbohydrates, the Rhodamine tagging facilitates fluorescence and mass spectrometric detection of carbohydrates. Current fluorescence sensitivity for RT-carbohydrates is ~3 nM in concentration, while the mass spectrometry (MS) sensitivity is about 1 fmol, achieved with a linear ion trap electrospray ionization (ESI) MS instrument. Potential applications that take advantage of the high SERS, fluorescence, and MS sensitivity of this SERS tagging strategy are discussed for practical glycomic analysis, where carbohydrates may be quantified with fluorescence and SERS techniques and then identified with ESI-MS techniques. PMID:21082777
Alkylation sensitivity screens reveal a conserved cross-species functionome
Svilar, David; Dyavaiah, Madhu; Brown, Ashley R.; Tang, Jiang-bo; Li, Jianfeng; McDonald, Peter R.; Shun, Tong Ying; Braganza, Andrea; Wang, Xiao-hong; Maniar, Salony; St Croix, Claudette M.; Lazo, John S.; Pollack, Ian F.; Begley, Thomas J.; Sobol, Robert W.
2013-01-01
To identify genes that contribute to chemotherapy resistance in glioblastoma, we conducted a synthetic lethal screen in a chemotherapy-resistant glioblastoma-derived cell line with the clinical alkylator temozolomide (TMZ) and an siRNA library tailored towards "druggable" targets. Select DNA repair genes in the screen were validated independently, confirming the DNA glycosylases UNG and MYH as well as MPG to be involved in the response to high-dose TMZ. The involvement of UNG and MYH is likely the result of a TMZ-induced burst of reactive oxygen species. We then compared the human TMZ sensitizing genes identified in our screen with those previously identified from alkylator screens conducted in E. coli and S. cerevisiae. The conserved biological processes across all three species compose an Alkylation Functionome that includes many novel proteins not previously thought to impact alkylator resistance. This high-throughput screen, validation and cross-species analysis was then followed by a mechanistic analysis of two essential nodes: base excision repair (BER) DNA glycosylases (UNG, human and mag1, S. cerevisiae) and protein modification systems, including UBE3B and ICMT in human cells or pby1, lip22, stp22 and aim22 in S. cerevisiae. The conserved processes of BER and protein modification were dual targeted and yielded additive sensitization to alkylators in S. cerevisiae. In contrast, dual targeting of BER and protein modification genes in human cells did not increase sensitivity, suggesting an epistatic relationship. Importantly, these studies provide potential new targets to overcome alkylating agent resistance. PMID:23038810
Study of aircraft in intraurban transportation systems, volume 1
NASA Technical Reports Server (NTRS)
Stout, E. G.; Kesling, P. H.; Matteson, H. C.; Sherwood, D. E.; Tuck, W. R., Jr.; Vaughn, L. A.
1971-01-01
An analysis of an effective short-range, high-density commuter transportation system for intraurban use is presented. The seven-county Detroit, Michigan, metropolitan area was chosen as the scenario for the analysis. The study consisted of an analysis and forecast of the Detroit market through 1985, a parametric analysis of appropriate short-haul aircraft concepts and associated ground systems, and a preliminary overall economic analysis of a simplified total system designed to evaluate the candidate vehicles and select the most promising VTOL and STOL aircraft. Data are also included on the impact of advanced technology on the system, the sensitivity of mission performance to changes in aircraft characteristics and system operations, and identification of key problem areas that may be improved by additional research. The approach, logic, and computer models used are adaptable to other intraurban or interurban areas.
Lv, Jungang; Feng, Jimin; Zhang, Wen; Shi, Rongguang; Liu, Yong; Wang, Zhaohong; Zhao, Meng
2013-01-01
Pressure-sensitive tape is often used to bind explosive devices. It can become important trace evidence in many cases. Three types of calcium carbonate (heavy, light, and active CaCO₃), which were widely used as additives in pressure-sensitive tape substrate, were analyzed with Fourier transform infrared spectroscopy (FTIR) in this study. A Spectrum GX 2000 system with a diamond anvil cell and a deuterated triglycine sulfate detector was employed for IR observation. Background was subtracted for every measurement, and triplicate tests were performed. Differences in positions of main peaks and the corresponding functional groups were investigated. Heavy CaCO₃ could be identified from the two absorptions near 873 and 855 cm⁻¹, while light CaCO₃ only has one peak near 873 cm⁻¹ because of the low content of aragonite. Active CaCO₃ could be identified from the absorptions in the 2800-2900 cm⁻¹ region because of the existence of organic compounds. Tiny but indicative changes in the 878-853 cm⁻¹ region were found in the spectra of CaCO₃ with different content of aragonite and calcite. CaCO₃ in pressure-sensitive tape, which cannot be differentiated by scanning electron microscope/energy dispersive X-ray spectrometer and thermal analysis, can be easily identified using FTIR. The findings were successfully applied to three specific explosive cases and would be helpful in finding the possible source of explosive devices in future cases. © 2012 American Academy of Forensic Sciences.
Riboli, Danilo Flávio Moraes; Lyra, João César; Silva, Eliane Pessoa; Valadão, Luisa Leite; Bentlin, Maria Regina; Corrente, José Eduardo; Rugolo, Ligia Maria Suppo de Souza; da Cunha, Maria de Lourdes Ribeiro de Souza
2014-05-22
Catheter-related bloodstream infections (CR-BSIs) have become the most common cause of healthcare-associated bloodstream infections in neonatal intensive care units (ICUs). Microbiological evidence implicating catheters as the source of bloodstream infection is necessary to establish the diagnosis of CR-BSIs. Semi-quantitative culture is used to determine the presence of microorganisms on the external catheter surface, whereas quantitative culture also isolates microorganisms present inside the catheter. The main objective of this study was to determine the sensitivity and specificity of these two techniques for the diagnosis of CR-BSIs in newborns from a neonatal ICU. In addition, PFGE was used for similarity analysis of the microorganisms isolated from catheters and blood cultures. Semi-quantitative and quantitative methods were used for the culture of catheter tips obtained from newborns. Strains isolated from catheter tips and blood cultures which exhibited the same antimicrobial susceptibility profile were included in the study as positive cases of CR-BSI. PFGE of the microorganisms isolated from catheters and blood cultures was performed for similarity analysis and detection of clones in the ICU. A total of 584 catheter tips from 399 patients seen between November 2005 and June 2012 were analyzed. Twenty-nine cases of CR-BSI were confirmed. Coagulase-negative staphylococci (CoNS) were the most frequently isolated microorganisms, including S. epidermidis as the most prevalent species (65.5%), followed by S. haemolyticus (10.3%), yeasts (10.3%), K. pneumoniae (6.9%), S. aureus (3.4%), and E. coli (3.4%). The sensitivity of the semi-quantitative and quantitative techniques was 72.7% and 59.3%, respectively, and specificity was 95.7% and 94.4%. The diagnosis of CR-BSIs based on PFGE analysis of similarity between strains isolated from catheter tips and blood cultures showed 82.6% sensitivity and 100% specificity. 
The semi-quantitative culture method showed higher sensitivity and specificity for the diagnosis of CR-BSIs in newborns when compared to the quantitative technique. In addition, this method is easier to perform and shows better agreement with the gold standard, and should therefore be recommended for routine clinical laboratory use. PFGE may contribute to the control of CR-BSIs by identifying clusters of microorganisms in neonatal ICUs, providing a means of determining potential cross-infection between patients.
Genet, Hélène; He, Yujie; Lyu, Zhou; McGuire, A David; Zhuang, Qianlai; Clein, Joy; D'Amore, David; Bennett, Alec; Breen, Amy; Biles, Frances; Euskirchen, Eugénie S; Johnson, Kristofer; Kurkowski, Tom; Kushch Schroder, Svetlana; Pastick, Neal; Rupp, T Scott; Wylie, Bruce; Zhang, Yujin; Zhou, Xiaoping; Zhu, Zhiliang
2018-01-01
It is important to understand how upland ecosystems of Alaska, which are estimated to occupy 84% of the state (i.e., 1,237,774 km²), are influencing and will influence state-wide carbon (C) dynamics in the face of ongoing climate change. We coupled fire disturbance and biogeochemical models to assess the relative effects of changing atmospheric carbon dioxide (CO₂), climate, logging and fire regimes on the historical and future C balance of upland ecosystems for the four main Landscape Conservation Cooperatives (LCCs) of Alaska. At the end of the historical period (1950-2009) of our analysis, we estimate that upland ecosystems of Alaska store ~50 Pg C (with ~90% of the C in soils), and gained 3.26 Tg C/yr. Three of the LCCs had gains in total ecosystem C storage, while the Northwest Boreal LCC lost C (-6.01 Tg C/yr) because of increases in fire activity. Carbon exports from logging affected only the North Pacific LCC and represented less than 1% of the state's net primary production (NPP). The analysis for the future time period (2010-2099) consisted of six simulations driven by climate outputs from two climate models for three emission scenarios. Across the climate scenarios, total ecosystem C storage increased between 19.5 and 66.3 Tg C/yr, which represents a 3.4% to 11.7% increase in Alaska uplands' storage. We conducted additional simulations to attribute these responses to environmental changes. This analysis showed that atmospheric CO₂ fertilization was the main driver of ecosystem C balance. By comparing future simulations with constant and with increasing atmospheric CO₂, we estimated that the sensitivity of NPP was 4.8% per 100 ppmv, but NPP becomes less sensitive to CO₂ increase throughout the 21st century.
Overall, our analyses suggest that the decreasing CO₂ sensitivity of NPP and the increasing sensitivity of heterotrophic respiration to air temperature, in addition to the increase in C loss from wildfires, weaken the C sink from upland ecosystems of Alaska and will ultimately lead to a source of CO₂ to the atmosphere beyond 2100. Therefore, we conclude that the increasing regional C sink we estimate for the 21st century will most likely be transitional. © 2017 by the Ecological Society of America.
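Applying the reported NPP sensitivity (~4.8% per 100 ppmv CO₂) is straightforward arithmetic if taken linearly; since the abstract notes that NPP becomes less sensitive as CO₂ rises, the linear form below is an upper-bound sketch, not the coupled models' actual response:

```python
def npp_change_percent(delta_co2_ppmv, sensitivity_per_100ppmv=4.8):
    """Linear extrapolation of the reported NPP sensitivity:
    percent change in NPP for a given CO2 increase in ppmv.
    Overstates the response for large increases, since the true
    sensitivity declines through the 21st century."""
    return sensitivity_per_100ppmv * (delta_co2_ppmv / 100.0)

# e.g. a hypothetical 200 ppmv rise implies roughly a 9.6% NPP increase
# under the linear assumption.
change = npp_change_percent(200.0)
```

This kind of back-of-the-envelope attribution is what makes the declining CO₂ fertilization effect, set against rising respiration and fire losses, point toward a transitional rather than permanent C sink.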