NASA Technical Reports Server (NTRS)
Hornberger, G. M.; Rastetter, E. B.
1982-01-01
A literature review of the use of sensitivity analyses in modelling nonlinear, ill-defined systems, such as ecological interactions, is presented. Discussions of previous work and a proposed scheme for generalized sensitivity analysis applicable to ill-defined systems are included. This scheme considers classes of mathematical models, problem-defining behavior, analysis procedures (especially the use of Monte-Carlo methods), sensitivity ranking of parameters, and extension to control system design.
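A minimal sketch of this style of generalized (regionalized) sensitivity analysis, assuming a hypothetical two-parameter model and an arbitrary behavior band; Monte-Carlo runs are classified as behavior/non-behavior and parameters are ranked by the Kolmogorov-Smirnov separation between the two classes:

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)

    def model(k1, k2):
        # Hypothetical nonlinear stand-in for an ill-defined ecological model.
        return k1 * np.exp(-k2)

    # Monte-Carlo sampling of the parameter space.
    n = 10_000
    k1 = rng.uniform(0.0, 2.0, n)
    k2 = rng.uniform(0.0, 5.0, n)
    y = model(k1, k2)

    # Problem-defining behavior: runs whose output falls in an acceptable band.
    behavior = (y > 0.5) & (y < 1.5)

    # Rank parameters by how strongly the behavior split separates their
    # marginal distributions (larger KS statistic = more sensitive).
    for name, k in (("k1", k1), ("k2", k2)):
        d, _ = ks_2samp(k[behavior], k[~behavior])
        print(f"{name}: KS separation = {d:.3f}")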
USDA-ARS's Scientific Manuscript database
For several decades, optimization and sensitivity/uncertainty analysis of environmental models has been the subject of extensive research. Although much progress has been made and sophisticated methods developed, the growing complexity of environmental models to represent real-world systems makes it...
DOT National Transportation Integrated Search
2017-02-08
The study re-evaluates distress prediction models using the Mechanistic-Empirical Pavement Design Guide (MEPDG) and expands the sensitivity analysis to a wide range of pavement structures and soils. In addition, an extensive validation analysis of th...
Coupled Aerodynamic and Structural Sensitivity Analysis of a High-Speed Civil Transport
NASA Technical Reports Server (NTRS)
Mason, B. H.; Walsh, J. L.
2001-01-01
An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity, finite-element structural analysis and computational fluid dynamics aerodynamic analysis. In a previous study, a multidisciplinary analysis system for a high-speed civil transport was formulated to integrate a set of existing discipline analysis codes, some of them computationally intensive. This paper is an extension of the previous study, in which the sensitivity analysis for the coupled aerodynamic and structural analysis problem is formulated and implemented. Uncoupled stress sensitivities computed with a constant load vector in a commercial finite element analysis code are compared to coupled aeroelastic sensitivities computed by finite differences. The computational expense of these sensitivity calculation methods is discussed.
NASA Technical Reports Server (NTRS)
Bittker, David A.
1996-01-01
A generalized version of the NASA Lewis general kinetics code, LSENS, is described. The new code allows the use of global reactions as well as molecular processes in a chemical mechanism. The code also incorporates the capability of performing sensitivity analysis calculations for a perfectly stirred reactor rapidly and conveniently at the same time that the main kinetics calculations are being done. The GLSENS code has been extensively tested and has been found to be accurate and efficient. Nine example problems are presented and complete user instructions are given for the new capabilities. This report is to be used in conjunction with the documentation for the original LSENS code.
Jebaseelan, D Davidson; Jebaraj, C; Yoganandan, Narayan; Rajasekaran, S; Kanna, Rishi M
2012-05-01
The objective of the study was to determine the sensitivity of material properties of the juvenile spine to its external and internal responses using a finite element model under compression, and flexion-extension bending moments. The methodology included exercising the 8-year-old juvenile lumbar spine using parametric procedures. The model included the vertebral centrum, growth plates, laminae, pedicles, transverse processes and spinous processes; disc annulus and nucleus; and various ligaments. The sensitivity analysis was conducted by varying the modulus of elasticity for various components. The first simulation was done using mean material properties. Additional simulations were done for each component corresponding to low and high material property variations. External displacement/rotation and internal stress-strain responses were determined under compression and flexion-extension bending. Results indicated that, under compression, disc properties were more sensitive than bone properties, implying an elevated role of the disc under this mode. Under flexion-extension moments, ligament properties were more dominant than the other components, suggesting that various ligaments of the juvenile spine play a key role in modulating bending behaviors. Changes in the growth plate stress associated with ligament properties explained the importance of the growth plate in the pediatric spine with potential implications in progressive deformities.
NASA Astrophysics Data System (ADS)
Park, Jihoon; Yang, Guang; Satija, Addy; Scheidt, Céline; Caers, Jef
2016-12-01
Sensitivity analysis plays an important role in geoscientific computer experiments, whether for forecasting, data assimilation or model calibration. In this paper we focus on an extension of a method of regionalized sensitivity analysis (RSA) to applications typical in the Earth Sciences. Such applications involve the building of large complex spatial models, the application of computationally extensive forward modeling codes and the integration of heterogeneous sources of model uncertainty. The aim of this paper is to be practical: (1) provide a Matlab code, (2) provide novel visualization methods to aid users in getting a better understanding of the sensitivity, (3) provide a method based on kernel principal component analysis (KPCA) and self-organizing maps (SOM) to account for spatial uncertainty typical in Earth Science applications, and (4) provide an illustration on a real field case where the above-mentioned complexities present themselves. We present methods that extend the original RSA method in several ways. First, we present the calculation of conditional effects, defined as the sensitivity of a parameter given a level of another parameter. Second, we show how this conditional effect can be used to choose nominal values or ranges to fix insensitive parameters, aiming to minimally affect uncertainty in the response. Third, we develop a method based on KPCA and SOM to assign a rank to spatial models in order to calculate the sensitivity on spatial variability in the models. A large oil/gas reservoir case is used as illustration of these ideas.
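A minimal sketch of the conditional-effect idea, assuming hypothetical parameters a and b and the usual RSA behavior/non-behavior split (the paper's KPCA/SOM treatment of spatial uncertainty is not reproduced here):

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(1)
    n = 20_000
    a = rng.uniform(0, 1, n)    # hypothetical model parameter
    b = rng.uniform(0, 1, n)    # hypothetical conditioning parameter
    # Response depends on a only when b is high, plus noise.
    y = a * (b > 0.5) + 0.1 * rng.standard_normal(n)
    behavior = y > 0.25

    # Conditional effect of a given a level (tercile) of b.
    for lo, hi in [(0.0, 1/3), (1/3, 2/3), (2/3, 1.0)]:
        m = (b >= lo) & (b < hi)
        d, _ = ks_2samp(a[m & behavior], a[m & ~behavior])
        print(f"sensitivity of a | b in [{lo:.2f}, {hi:.2f}): KS = {d:.3f}")

The sensitivity of a is negligible in the low-b tercile and strong in the high-b tercile, exactly the kind of interaction that a single unconditional index would average away.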
NASA Technical Reports Server (NTRS)
Winters, J. M.; Stark, L.
1984-01-01
Original results for a newly developed eighth-order nonlinear limb antagonistic muscle model of elbow flexion and extension are presented. A wide variety of sensitivity analysis techniques are used and a systematic protocol is established that shows how the different methods can be used efficiently to complement one another for maximum insight into model sensitivity. It is explicitly shown how the sensitivity of output behaviors to model parameters is a function of the controller input sequence, i.e., of the movement task. When the task is changed (for instance, from an input sequence that results in the usual fast movement task to a slower movement that may also involve external loading, etc.) the set of parameters with high sensitivity will in general also change. Such task-specific use of sensitivity analysis techniques identifies the set of parameters most important for a given task, and even suggests task-specific model reduction possibilities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
Extension of the Thoracic Spine Sign: A New Sonographic Marker of Pleural Effusion.
Dickman, Eitan; Terentiev, Victoria; Likourezos, Antonios; Derman, Anna; Haines, Lawrence
2015-09-01
Dyspnea is a common emergency department (ED) condition, which may be caused by pleural effusion and other thoracic diseases. We present data on a new sonographic marker, the extension of the thoracic spine sign, for diagnosis of pleural effusion. In this prospective study, we enrolled a convenience sample of undifferentiated patients who underwent computed tomography (CT) of the abdomen or chest, which was performed as part of their emergency department evaluations. Patients underwent chest sonography to assess the utility of the extension of the thoracic spine sign for diagnosing pleural effusion. The point-of-care sonographic examinations were performed and interpreted by emergency physicians who were blinded to information in the medical records. Sonographic results were compared to radiologists' interpretations of the CT results, which were considered the criterion standard. Forty-one patients were enrolled, accounting for 82 hemithoraces. Seven hemithoraces were excluded from the analysis due to various limitations, leaving 75 hemithoraces for the final analysis. The median time for completion of the sonographic examination was 3 minutes. The sensitivity and specificity for extension of the thoracic spine were 73.7% (95% confidence interval [CI], 48.6%-89.9%) and 92.9% (95% CI, 81.9%-97.7%), respectively. Overall, there were 5 hemithoraces with false-negative results when using the extension sign. Of those 5 cases, 4 were found to have trace pleural effusions on CT. When trace pleural effusions were excluded in a subgroup analysis, the sensitivity and specificity of extension of the thoracic spine were 92.9% (95% CI, 64.2%-99.6%) and 92.9% (95% CI, 81.9%-97.7%). We found the extension of the thoracic spine sign to be an excellent diagnostic tool for clinically relevant pleural effusion. © 2015 by the American Institute of Ultrasound in Medicine.
Grid sensitivity for aerodynamic optimization and flow analysis
NASA Technical Reports Server (NTRS)
Sadrehaghighi, I.; Tiwari, S. N.
1993-01-01
After reviewing relevant literature, it is apparent that one aspect of aerodynamic sensitivity analysis, namely grid sensitivity, has not been investigated extensively. The grid sensitivity algorithms in most of these studies are based on structural design models. Such models, although sufficient for preliminary or conceptual design, are not acceptable for detailed design analysis. Careless grid sensitivity evaluations would introduce gradient errors into the sensitivity module, thereby infecting the overall optimization process. Development of an efficient and reliable grid sensitivity module with special emphasis on aerodynamic applications appears essential. The organization of this study is as follows. The physical and geometric representations of a typical model are derived in chapter 2. The grid generation algorithm and boundary grid distribution are developed in chapter 3. Chapter 4 discusses the theoretical formulation and aerodynamic sensitivity equation. The method of solution is provided in chapter 5. The results are presented and discussed in chapter 6. Finally, some concluding remarks are provided in chapter 7.
NASA Astrophysics Data System (ADS)
Wang, Qiqi; Rigas, Georgios; Esclapez, Lucas; Magri, Luca; Blonigan, Patrick
2016-11-01
Bluff body flows are of fundamental importance to many engineering applications involving massive flow separation and in particular the transport industry. Coherent flow structures emanating in the wake of three-dimensional bluff bodies, such as cars, trucks and lorries, are directly linked to increased aerodynamic drag, noise and structural fatigue. For low-Reynolds-number laminar and transitional regimes, hydrodynamic stability theory has aided the understanding and prediction of the unstable dynamics. In the same framework, sensitivity analysis provides the means for efficient and optimal control, provided the unstable modes can be accurately predicted. However, these methodologies are limited to laminar regimes where only a few unstable modes manifest. Here we extend the stability analysis to low-dimensional chaotic regimes by computing the Lyapunov covariant vectors and their associated Lyapunov exponents. We compare them to eigenvectors and eigenvalues computed in traditional hydrodynamic stability analysis. Computing Lyapunov covariant vectors and Lyapunov exponents also enables the extension of sensitivity analysis to chaotic flows via the shadowing method. We compare the computed shadowing sensitivities to traditional sensitivity analysis. These Lyapunov based methodologies do not rely on mean flow assumptions, and are mathematically rigorous for calculating sensitivities of fully unsteady flow simulations.
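A minimal sketch of the Lyapunov-exponent computation referred to above, using tangent-linear propagation with repeated QR re-orthonormalization (the classical Benettin procedure) on the Lorenz system as a stand-in chaotic flow; forward Euler keeps the sketch short at some cost in accuracy:

    import numpy as np

    def lorenz(x, sigma=10.0, rho=28.0, beta=8/3):
        return np.array([sigma * (x[1] - x[0]),
                         x[0] * (rho - x[2]) - x[1],
                         x[0] * x[1] - beta * x[2]])

    def jac(x, sigma=10.0, rho=28.0, beta=8/3):
        return np.array([[-sigma, sigma, 0.0],
                         [rho - x[2], -1.0, -x[0]],
                         [x[1], x[0], -beta]])

    dt, n_steps = 0.01, 50_000
    x = np.array([1.0, 1.0, 1.0])
    Q = np.eye(3)                       # evolving tangent-space basis
    lyap = np.zeros(3)
    for _ in range(n_steps):
        x = x + dt * lorenz(x)          # forward Euler for brevity
        Q = Q + dt * (jac(x) @ Q)       # tangent-linear propagation
        Q, R = np.linalg.qr(Q)          # re-orthonormalize; growth sits in R
        lyap += np.log(np.abs(np.diag(R)))

    print(lyap / (n_steps * dt))        # roughly (0.9, 0.0, -14.6) for Lorenz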
Amplitude analysis of four-body decays using a massively-parallel fitting framework
NASA Astrophysics Data System (ADS)
Hasse, C.; Albrecht, J.; Alves, A. A., Jr.; d'Argent, P.; Evans, T. D.; Rademacker, J.; Sokoloff, M. D.
2017-10-01
The GooFit Framework is designed to perform maximum-likelihood fits for arbitrary functions on various parallel back ends, for example a GPU. We present an extension to GooFit which adds the functionality to perform time-dependent amplitude analyses of pseudoscalar mesons decaying into four pseudoscalar final states. Benchmarks of this functionality show a significant performance increase when utilizing a GPU compared to a CPU. Furthermore, this extension is employed to study the sensitivity to the D0-D̄0 mixing parameters x and y in a time-dependent amplitude analysis of the decay D0 → K+π-π+π-. Studying a sample of 50 000 events and setting the central values to the world average of x = (0.49 ± 0.15)% and y = (0.61 ± 0.08)%, the statistical sensitivities of x and y are determined to be σ(x) = 0.019% and σ(y) = 0.019%.
Control of Wheel/Rail Noise and Vibration
DOT National Transportation Integrated Search
1982-04-01
An analytical model of the generation of wheel/rail noise has been developed and validated through an extensive series of field tests carried out at the Transportation Test Center using the State of the Art Car. A sensitivity analysis has been perfor...
Extensions and applications of a second-order landsurface parameterization
NASA Technical Reports Server (NTRS)
Andreou, S. A.; Eagleson, P. S.
1983-01-01
Extensions and applications of a second-order land surface parameterization, proposed by Andreou and Eagleson, are developed. Procedures for evaluating the near-surface storage depth used in one-cell land surface parameterizations are suggested and tested by using the model. Sensitivity analysis with respect to the key soil parameters is performed. A case study involving comparison with an "exact" numerical model and another simplified parameterization, under very dry climatic conditions and for two different soil types, is also incorporated.
Program Helps To Determine Chemical-Reaction Mechanisms
NASA Technical Reports Server (NTRS)
Bittker, D. A.; Radhakrishnan, K.
1995-01-01
General Chemical Kinetics and Sensitivity Analysis (LSENS) computer code developed for use in solving complex, homogeneous, gas-phase, chemical-kinetics problems. Provides for efficient and accurate chemical-kinetics computations and provides for sensitivity analysis for variety of problems, including problems involving nonisothermal conditions. Incorporates mathematical models for static system, steady one-dimensional inviscid flow, reaction behind incident shock wave (with boundary-layer correction), and perfectly stirred reactor. Computations of equilibrium properties performed for following assigned states: enthalpy and pressure, temperature and pressure, internal energy and volume, and temperature and volume. Written in FORTRAN 77 with the exception of NAMELIST extensions used for input.
A new framework for comprehensive, robust, and efficient global sensitivity analysis: 1. Theory
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin V.
2016-01-01
Computer simulation models are continually growing in complexity with increasingly more factors to be identified. Sensitivity Analysis (SA) provides an essential means for understanding the role and importance of these factors in producing model responses. However, conventional approaches to SA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we present a new and general sensitivity analysis framework (called VARS), based on an analogy to "variogram analysis," that provides an intuitive and comprehensive characterization of sensitivity across the full spectrum of scales in the factor space. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices can be computed as by-products of the VARS framework. Synthetic functions that resemble actual model response surfaces are used to illustrate the concepts, and show VARS to be as much as two orders of magnitude more computationally efficient than the state-of-the-art Sobol approach. In a companion paper, we propose a practical implementation strategy, and demonstrate the effectiveness, efficiency, and reliability (robustness) of the VARS framework on real-data case studies.
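The central object in a variogram-based framework of this kind is the directional variogram of the response surface, gamma_i(h) = 0.5 E[(y(x + h e_i) - y(x))^2], whose behavior across scales h carries the sensitivity information. A brute-force sketch on a hypothetical two-factor function (the paper's efficient sampling and index construction are not reproduced):

    import numpy as np

    rng = np.random.default_rng(2)

    def f(x):
        # Hypothetical response surface on [0, 1]^2.
        return np.sin(6 * x[..., 0]) + 0.3 * x[..., 1] ** 2

    def gamma(i, h, n=50_000):
        # gamma_i(h) = 0.5 * E[(f(x + h e_i) - f(x))^2], x ~ U[0,1]^2.
        x = rng.uniform(0.0, 1.0, (n, 2))
        x[:, i] = rng.uniform(0.0, 1.0 - h, n)   # keep x + h*e_i inside [0, 1]
        xp = x.copy()
        xp[:, i] += h
        return 0.5 * np.mean((f(xp) - f(x)) ** 2)

    for i in range(2):
        print(f"x{i+1}:", ["%.4f" % gamma(i, h) for h in (0.05, 0.1, 0.3)])

The first factor's variogram is much larger at every lag, reflecting its stronger, higher-frequency influence; how gamma grows with h is the multi-scale information that single-number indices compress away.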
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin; Haghnegahdar, Amin
2016-04-01
Global sensitivity analysis (GSA) is a systems theoretic approach to characterizing the overall (average) sensitivity of one or more model responses across the factor space, by attributing the variability of those responses to different controlling (but uncertain) factors (e.g., model parameters, forcings, and boundary and initial conditions). GSA can be very helpful to improve the credibility and utility of Earth and Environmental System Models (EESMs), as these models are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. However, conventional approaches to GSA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we identify several important sensitivity-related characteristics of response surfaces that must be considered when investigating and interpreting the "global sensitivity" of a model response (e.g., a metric of model performance) to its parameters/factors. Accordingly, we present a new and general sensitivity and uncertainty analysis framework, Variogram Analysis of Response Surfaces (VARS), based on an analogy to "variogram analysis", that characterizes a comprehensive spectrum of information on sensitivity. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices are contained within the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
Olivieri, Alejandro C
2005-08-01
Sensitivity and selectivity are important figures of merit in multiway analysis, regularly employed for comparison of the analytical performance of methods and for experimental design and planning. They are especially interesting in the second-order advantage scenario, where the latter property allows for the analysis of samples with a complex background, permitting analyte determination even in the presence of unsuspected interferences. Since no general theory exists for estimating the multiway sensitivity, Monte Carlo numerical calculations have been developed for estimating variance inflation factors, as a convenient way of assessing both sensitivity and selectivity parameters for the popular parallel factor (PARAFAC) analysis and also for related multiway techniques. When the second-order advantage is achieved, the existing expressions derived from net analyte signal theory are only able to adequately cover cases where a single analyte is calibrated using second-order instrumental data. However, they fail for certain multianalyte cases, or when third-order data are employed, calling for an extension of net analyte theory. The results have strong implications in the planning of multiway analytical experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.
JPL Contamination Control Engineering
NASA Technical Reports Server (NTRS)
Blakkolb, Brian
2013-01-01
JPL has extensive expertise fielding contamination-sensitive missions, in house and with NASA, industry, and academic partners: development and implementation of performance-driven cleanliness requirements for a wide range of missions and payloads (UV-Vis-IR: GALEX, Dawn, Juno, WFPC-II, AIRS, TES, et al.), as well as propulsion, thermal control, and robotic sample acquisition systems. Contamination control engineering spans the mission life cycle: system and payload requirements derivation, analysis, and contamination control implementation plans; hardware design, risk trades, and requirements verification and validation; assembly, integration, and test planning and implementation; launch site operations and launch vehicle/payload integration; and flight operations. Personnel on staff have expertise with space materials development and flight experiments. JPL has the capabilities and expertise to successfully address contamination issues presented by space and habitable environments, extensive experience fielding and managing contamination-sensitive missions, and an excellent working relationship with the aerospace contamination control engineering community.
SCALE Continuous-Energy Eigenvalue Sensitivity Coefficient Calculations
Perfetti, Christopher M.; Rearden, Bradley T.; Martin, William R.
2016-02-25
Sensitivity coefficients describe the fractional change in a system response that is induced by changes to system parameters and nuclear data. The Tools for Sensitivity and UNcertainty Analysis Methodology Implementation (TSUNAMI) code within the SCALE code system makes use of eigenvalue sensitivity coefficients for an extensive number of criticality safety applications, including quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different critical systems, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved fidelity and the desire to extend TSUNAMI analysis to advanced applications has motivated the development of a methodology for calculating sensitivity coefficients in continuous-energy (CE) Monte Carlo applications. The Contributon-Linked eigenvalue sensitivity/Uncertainty estimation via Tracklength importance CHaracterization (CLUTCH) and Iterated Fission Probability (IFP) eigenvalue sensitivity methods were recently implemented in the CE-KENO framework of the SCALE code system to enable TSUNAMI-3D to perform eigenvalue sensitivity calculations using continuous-energy Monte Carlo methods. This work provides a detailed description of the theory behind the CLUTCH method and describes in detail its implementation. This work explores the improvements in eigenvalue sensitivity coefficient accuracy that can be gained through the use of continuous-energy sensitivity methods and also compares several sensitivity methods in terms of computational efficiency and memory requirements.
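The adjoint-weighting at the heart of such eigenvalue sensitivity methods can be illustrated on a plain matrix eigenproblem: by first-order perturbation theory, for A phi = k phi the dominant eigenvalue satisfies dk/dA_mn = adj_m phi_n / (adj . phi), where adj is the left (adjoint) eigenvector. A sketch with hypothetical numbers, not a Monte Carlo implementation:

    import numpy as np

    # Hypothetical 2x2 "two-group" multiplication matrix; not real nuclear data.
    A = np.array([[1.10, 0.40],
                  [0.05, 0.80]])

    w, V = np.linalg.eig(A)
    wl, U = np.linalg.eig(A.T)          # left eigenvectors play the adjoint role
    i, j = np.argmax(w.real), np.argmax(wl.real)
    k, phi, adj = w[i].real, V[:, i].real, U[:, j].real

    # First-order perturbation theory: dk/dA_mn = adj_m * phi_n / (adj . phi).
    S = np.outer(adj, phi) / (adj @ phi)

    # Verify one entry against a direct finite-difference recomputation.
    dA = 1e-6
    Ap = A.copy(); Ap[0, 1] += dA
    k_fd = np.max(np.linalg.eigvals(Ap).real)
    print(S[0, 1], (k_fd - k) / dA)     # the two numbers should agree closely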
Dai, Haiming; Ding, Husheng; Meng, X. Wei; Peterson, Kevin L.; Schneider, Paula A.; Karp, Judith E.; Kaufmann, Scott H.
2015-01-01
Mitochondrial outer membrane permeabilization (MOMP), a key step in the intrinsic apoptotic pathway, is incompletely understood. Current models emphasize the role of BH3-only BCL2 family members in BAX and BAK activation. Here we demonstrate concentration-dependent BAK autoactivation under cell-free conditions and provide evidence that this autoactivation plays a key role in regulating the intrinsic apoptotic pathway in intact cells. In particular, we show that up to 80% of BAK (but not BAX) in lymphohematopoietic cell lines is oligomerized and bound to anti-apoptotic BCL2 family members in the absence of exogenous death stimuli. The extent of this constitutive BAK oligomerization is diminished by BAK knockdown and unaffected by BIM or PUMA down-regulation. Further analysis indicates that sensitivity of cells to BH3 mimetics reflects the identity of the anti-apoptotic proteins to which BAK is constitutively bound, with extensive BCLXL•BAK complexes predicting navitoclax sensitivity, and extensive MCL1•BAK complexes predicting A1210477 sensitivity. Moreover, high BAK expression correlates with sensitivity of clinical acute myelogenous leukemia to chemotherapy, whereas low BAK levels correlate with resistance and relapse. Collectively, these results inform current understanding of MOMP and provide new insight into the ability of BH3 mimetics to induce apoptosis without directly activating BAX or BAK. PMID:26494789
NASA Astrophysics Data System (ADS)
Kazmi, K. R.; Khan, F. A.
2008-01-01
In this paper, using the proximal-point mapping technique of P-η-accretive mappings and the property of the fixed-point set of set-valued contractive mappings, we study the behavior and sensitivity analysis of the solution set of a parametric generalized implicit quasi-variational-like inclusion involving a P-η-accretive mapping in a real uniformly smooth Banach space. Further, under suitable conditions, we discuss the Lipschitz continuity of the solution set with respect to the parameter. The technique and results presented in this paper can be viewed as an extension of the techniques and corresponding results given in [R.P. Agarwal, Y.-J. Cho, N.-J. Huang, Sensitivity analysis for strongly nonlinear quasi-variational inclusions, Appl. Math. Lett. 13 (2002) 19-24; S. Dafermos, Sensitivity analysis in variational inequalities, Math. Oper. Res. 13 (1988) 421-434; X.-P. Ding, Sensitivity analysis for generalized nonlinear implicit quasi-variational inclusions, Appl. Math. Lett. 17 (2) (2004) 225-235; X.-P. Ding, Parametric completely generalized mixed implicit quasi-variational inclusions involving h-maximal monotone mappings, J. Comput. Appl. Math. 182 (2) (2005) 252-269; X.-P. Ding, C.L. Luo, On parametric generalized quasi-variational inequalities, J. Optim. Theory Appl. 100 (1999) 195-205; Z. Liu, L. Debnath, S.M. Kang, J.S. Ume, Sensitivity analysis for parametric completely generalized nonlinear implicit quasi-variational inclusions, J. Math. Anal. Appl. 277 (1) (2003) 142-154; R.N. Mukherjee, H.L. Verma, Sensitivity analysis of generalized variational inequalities, J. Math. Anal. Appl. 167 (1992) 299-304; M.A. Noor, Sensitivity analysis framework for general quasi-variational inclusions, Comput. Math. Appl. 44 (2002) 1175-1181; M.A. Noor, Sensitivity analysis for quasi-variational inclusions, J. Math. Anal. Appl. 236 (1999) 290-299; J.Y. Park, J.U. Jeong, Parametric generalized mixed variational inequalities, Appl. Math. Lett. 17 (2004) 43-48].
FEAST: sensitive local alignment with multiple rates of evolution.
Hudek, Alexander K; Brown, Daniel G
2011-01-01
We present a pairwise local aligner, FEAST, which uses two new techniques: a sensitive extension algorithm for identifying homologous subsequences, and a descriptive probabilistic alignment model. We also present a new procedure for training alignment parameters and apply it to the human and mouse genomes, producing a better parameter set for these sequences. Our extension algorithm identifies homologous subsequences by considering all evolutionary histories. It has higher maximum sensitivity than Viterbi extensions, and better balances specificity. We model alignments with several submodels, each with unique statistical properties, describing strongly similar and weakly similar regions of homologous DNA. Training parameters using two submodels produces superior alignments, even when we align with only the parameters from the weaker submodel. Our extension algorithm combined with our new parameter set achieves sensitivity 0.59 on synthetic tests. In contrast, LASTZ with default settings achieves sensitivity 0.35 with the same false positive rate. Using the weak submodel as parameters for LASTZ increases its sensitivity to 0.59 with high error. FEAST is available at http://monod.uwaterloo.ca/feast/.
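FEAST's extension step scores all evolutionary histories rather than following a single best path; for contrast, the simpler greedy ungapped X-drop extension used by BLAST-style aligners fits in a few lines (an illustrative baseline only, not FEAST's algorithm):

    def xdrop_extend(s, t, i, j, x_drop=10, match=1, mismatch=-2):
        # Greedy ungapped extension to the right of a seed at (i, j); stops
        # when the running score drops x_drop below the best score seen.
        score = best = best_len = k = 0
        while i + k < len(s) and j + k < len(t):
            score += match if s[i + k] == t[j + k] else mismatch
            if score > best:
                best, best_len = score, k + 1
            if best - score > x_drop:
                break
            k += 1
        return best, best_len

    # Extends across the shared 7-base prefix, then gives up on mismatches.
    print(xdrop_extend("ACGTACGTTT", "ACGTACGAAA", 0, 0))   # (7, 7)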
Sequence analysis of a bitter taste receptor gene repertoires in different ruminant species
USDA-ARS?s Scientific Manuscript database
Bitter taste has been extensively studied in mammalian species and is associated with sensitivity to toxins and with food choices that avoid dangerous substances in the diet. At the molecular level, bitter compounds are sensed by bitter taste receptor proteins (T2R) present at the surface of taste r...
Dresch, Jacqueline M; Liu, Xiaozhou; Arnosti, David N; Ay, Ahmet
2010-10-24
Quantitative models of gene expression generate parameter values that can shed light on biological features such as transcription factor activity, cooperativity, and local effects of repressors. An important element in such investigations is sensitivity analysis, which determines how strongly a model's output reacts to variations in parameter values. Parameters of low sensitivity may not be accurately estimated, leading to unwarranted conclusions. Low sensitivity may reflect the nature of the biological data, or it may be a result of the model structure. Here, we focus on the analysis of thermodynamic models, which have been used extensively to analyze gene transcription. Extracted parameter values have been interpreted biologically, but until now little attention has been given to parameter sensitivity in this context. We apply local and global sensitivity analyses to two recent transcriptional models to determine the sensitivity of individual parameters. We show that in one case, values for repressor efficiencies are very sensitive, while values for protein cooperativities are not, and provide insights on why these differential sensitivities stem from both biological effects and the structure of the applied models. In a second case, we demonstrate that parameters that were thought to prove the system's dependence on activator-activator cooperativity are relatively insensitive. We show that there are numerous parameter sets that do not satisfy the relationships proffered as the optimal solutions, indicating that structural differences between the two types of transcriptional enhancers analyzed may not be as simple as altered activator cooperativity. Our results emphasize the need for sensitivity analysis to examine model construction and forms of biological data used for modeling transcriptional processes, in order to determine the significance of estimated parameter values for thermodynamic models. Knowledge of parameter sensitivities can provide the necessary context to determine how modeling results should be interpreted in biological systems.
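A minimal local-sensitivity sketch on a toy thermodynamic (fractional-occupancy) model of transcription; the parameters (activator binding weight qA, repressor binding weight qR, cooperativity w, repressor efficiency r) and their values are hypothetical, and normalized sensitivities are estimated by central differences:

    import numpy as np

    def expression(p, A=1.0, R=1.0):
        # Toy thermodynamic model: Boltzmann-weighted bound states.
        qA, qR, w, r = p
        Z_on = qA * A + w * (qA * A) ** 2   # activator-bound (cooperative) states
        Z_off = 1.0 + qR * R * r            # empty + effectively repressed states
        return Z_on / (Z_on + Z_off)

    p0 = np.array([0.5, 0.8, 2.0, 3.0])
    names = ["qA", "qR", "w (cooperativity)", "r (repressor efficiency)"]

    # Normalized local sensitivity: (p/y) * dy/dp by central differences.
    y0 = expression(p0)
    for i, name in enumerate(names):
        h = 1e-5 * p0[i]
        pp, pm = p0.copy(), p0.copy()
        pp[i] += h; pm[i] -= h
        s = p0[i] * (expression(pp) - expression(pm)) / (2 * h) / y0
        print(f"{name}: {s:+.3f}")

Parameters with near-zero normalized sensitivity at the fitted point are exactly the ones whose estimated values should not be over-interpreted.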
Revisiting inconsistency in large pharmacogenomic studies
Safikhani, Zhaleh; Smirnov, Petr; Freeman, Mark; El-Hachem, Nehme; She, Adrian; Rene, Quevedo; Goldenberg, Anna; Birkbak, Nicolai J.; Hatzis, Christos; Shi, Leming; Beck, Andrew H.; Aerts, Hugo J.W.L.; Quackenbush, John; Haibe-Kains, Benjamin
2017-01-01
In 2013, we published a comparative analysis of mutation and gene expression profiles and drug sensitivity measurements for 15 drugs characterized in the 471 cancer cell lines screened in the Genomics of Drug Sensitivity in Cancer (GDSC) and Cancer Cell Line Encyclopedia (CCLE). While we found good concordance in gene expression profiles, there was substantial inconsistency in the drug responses reported by the GDSC and CCLE projects. We received extensive feedback on the comparisons that we performed. This feedback, along with the release of new data, prompted us to revisit our initial analysis. We present a new analysis using these expanded data, where we address the most significant suggestions for improvements on our published analysis — that targeted therapies and broad cytotoxic drugs should have been treated differently in assessing consistency, that consistency of both molecular profiles and drug sensitivity measurements should be compared across cell lines, and that the software analysis tools provided should have been easier to run, particularly as the GDSC and CCLE released additional data. Our re-analysis supports our previous finding that gene expression data are significantly more consistent than drug sensitivity measurements. Using new statistics to assess data consistency allowed identification of two broad effect drugs and three targeted drugs with moderate to good consistency in drug sensitivity data between GDSC and CCLE. For three other targeted drugs, there were not enough sensitive cell lines to assess the consistency of the pharmacological profiles. We found evidence of inconsistencies in pharmacological phenotypes for the remaining eight drugs. Overall, our findings suggest that the drug sensitivity data in GDSC and CCLE continue to present challenges for robust biomarker discovery. This re-analysis provides additional support for the argument that experimental standardization and validation of pharmacogenomic response will be necessary to advance the broad use of large pharmacogenomic screens. PMID:28928933
Hasan, Nazim; Gopal, Judy; Wu, Hui-Fen
2011-11-01
Biofilm studies have extensive significance since their results can provide insights into the behavior of bacteria on material surfaces when exposed to natural water. This is the first attempt of using matrix-assisted laser desorption/ionization-mass spectrometry (MALDI-MS) for detecting the polysaccharides formed in a complex biofilm consisting of a mixed consortium of marine microbes. MALDI-MS has been applied to directly analyze exopolysaccharides (EPS) in the biofilm formed on aluminum surfaces exposed to seawater. The optimal conditions for MALDI-MS applied to EPS analysis of biofilm have been described. In addition, microbiologically influenced corrosion of aluminum exposed to sea water by a marine fungus was also observed and the fungus identity established using MALDI-MS analysis of EPS. Rapid, sensitive and direct MALDI-MS analysis on biofilm would dramatically speed up and provide new insights into biofilm studies due to its excellent advantages such as simplicity, high sensitivity, high selectivity and high speed. This study introduces a novel, fast, sensitive and selective platform for biofilm study from natural water without the need of tedious culturing steps or complicated sample pretreatment procedures. Copyright © 2011 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Razavi, S.; Gupta, H. V.
2015-12-01
Earth and environmental systems models (EESMs) are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. Complexity and dimensionality are manifested by introducing many different factors in EESMs (i.e., model parameters, forcings, boundary conditions, etc.) to be identified. Sensitivity Analysis (SA) provides an essential means for characterizing the role and importance of such factors in producing the model responses. However, conventional approaches to SA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we present a new and general sensitivity analysis framework (called VARS), based on an analogy to 'variogram analysis', that provides an intuitive and comprehensive characterization of sensitivity across the full spectrum of scales in the factor space. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are limiting cases of VARS, and that their SA indices can be computed as by-products of the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity across the full range of scales in the factor space, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
Diagnosing the impact of alternative calibration strategies on coupled hydrologic models
NASA Astrophysics Data System (ADS)
Smith, T. J.; Perera, C.; Corrigan, C.
2017-12-01
Hydrologic models represent a significant tool for understanding, predicting, and responding to the impacts of water on society and society on water resources and, as such, are used extensively in water resources planning and management. Given this important role, the validity and fidelity of hydrologic models are imperative. While extensive attention has been paid to improving hydrologic models through better process representation, better parameter estimation, and better uncertainty quantification, significant challenges remain. In this study, we explore a number of competing model calibration scenarios for simple, coupled snowmelt-runoff models to better understand the sensitivity/variability of parameterizations and its impact on model performance, robustness, fidelity, and transferability. Our analysis highlights the sensitivity of coupled snowmelt-runoff model parameterizations to alterations in calibration approach, underscores the concept of information content in hydrologic modeling, and provides insight into potential strategies for improving model robustness/fidelity.
Heidt, Sebastiaan; Haasnoot, Geert W; Claas, Frans H J
2018-05-24
Highly sensitized patients awaiting a renal transplant have a low chance of receiving an organ offer. Defining acceptable antigens and using this information for allocation purposes can vastly enhance transplantation of this subgroup of patients, which is the essence of the Eurotransplant Acceptable Mismatch program. Acceptable antigens can be determined by extensive laboratory testing, as well as on the basis of human leukocyte antigen (HLA) epitope analyses. Within the Acceptable Mismatch program, there is no effect of HLA mismatches on long-term graft survival. Furthermore, patients transplanted through the Acceptable Mismatch program have similar long-term graft survival to nonsensitized patients transplanted through regular allocation. Although HLA epitope analysis is already being used for defining acceptable HLA antigens for highly sensitized patients in the Acceptable Mismatch program, increasing knowledge on HLA antibody-epitope interactions will pave the way toward the definition of acceptable epitopes for highly sensitized patients in the future. Allocation based on acceptable antigens can facilitate transplantation of highly sensitized patients with excellent long-term graft survival.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunke, Elizabeth Clare; Urrego Blanco, Jorge Rolando; Urban, Nathan Mark
Coupled climate models have a large number of input parameters that can affect output uncertainty. We conducted a sensitivity analysis of sea ice properties and Arctic-related climate variables to 5 parameters in the HiLAT climate model: air-ocean turbulent exchange parameter (C), conversion of water vapor to clouds (cldfrc_rhminl) and of ice crystals to snow (micro_mg_dcs), snow thermal conductivity (ksno), and maximum snow grain size (rsnw_mlt). We used an elementary effect (EE) approach to rank their importance for output uncertainty. EE is an extension of one-at-a-time sensitivity analyses, but it is more efficient in sampling multi-dimensional parameter spaces. We looked for emerging relationships among climate variables across the model ensemble, and used causal discovery algorithms to establish potential pathways for those relationships.
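A minimal sketch of elementary effect screening on a cheap analytic stand-in with five scaled inputs (the HiLAT parameters above are labels only, not modeled here); factors are ranked by the mean absolute effect mu* and the spread sigma, which flags nonlinearity or interaction:

    import numpy as np

    rng = np.random.default_rng(3)

    def model(x):
        # Analytic stand-in: x0 and x1 interact, x2 is nonlinear,
        # x3 is weakly linear, x4 is inert.
        return x[0] * x[1] + np.sin(3 * x[2]) + 0.1 * x[3] + 0.0 * x[4]

    def elementary_effects(model, dim, r=50, delta=0.1):
        ee = [[] for _ in range(dim)]
        for _ in range(r):
            x = rng.uniform(0.0, 1.0 - delta, dim)  # trajectory start
            y = model(x)
            for i in rng.permutation(dim):          # one-at-a-time moves
                xp = x.copy(); xp[i] += delta
                yp = model(xp)
                ee[i].append((yp - y) / delta)
                x, y = xp, yp
        return [(np.mean(np.abs(e)), np.std(e)) for e in ee]

    for i, (mu_star, sigma) in enumerate(elementary_effects(model, 5)):
        print(f"x{i}: mu* = {mu_star:.3f}, sigma = {sigma:.3f}")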
Li, Siying; Koch, Gary G; Preisser, John S; Lam, Diana; Sanchez-Kam, Matilde
2017-01-01
Dichotomous endpoints in clinical trials have only two possible outcomes, either directly or via categorization of an ordinal or continuous observation. It is common to have missing data for one or more visits during a multi-visit study. This paper presents a closed-form method for sensitivity analysis of a randomized multi-visit clinical trial that possibly has missing not at random (MNAR) dichotomous data. Counts of missing data are redistributed to the favorable and unfavorable outcomes mathematically to address possibly informative missing data. Adjusted proportion estimates and their closed-form covariance matrix estimates are provided. Treatment comparisons over time are addressed with Mantel-Haenszel adjustment for a stratification factor and/or randomization-based adjustment for baseline covariables. The application of such sensitivity analyses is illustrated with an example. An appendix outlines an extension of the methodology to ordinal endpoints.
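A minimal numeric sketch of the redistribution idea on hypothetical single-visit counts: missing outcomes are reassigned to favorable/unfavorable in analyst-chosen proportions and the treatment comparison is recomputed across those choices (the paper's closed-form covariance estimates and stratification adjustments are not reproduced):

    # Hypothetical counts per arm: (favorable, unfavorable, missing).
    arms = {"treatment": (70, 20, 10), "control": (55, 35, 10)}

    def adjusted_proportion(fav, unf, mis, pi_fav):
        # Redistribute missing counts: a fraction pi_fav is counted as
        # favorable, the rest as unfavorable (0 and 1 bound the MNAR range).
        return (fav + pi_fav * mis) / (fav + unf + mis)

    # Vary the assumed missing-data mechanism separately by arm; the last
    # setting is the worst case for the treatment arm.
    for pi_t, pi_c in [(1.0, 1.0), (0.5, 0.5), (0.0, 1.0)]:
        pt = adjusted_proportion(*arms["treatment"], pi_t)
        pc = adjusted_proportion(*arms["control"], pi_c)
        print(f"pi_t={pi_t}, pi_c={pi_c}: difference = {pt - pc:+.3f}")

If the treatment effect keeps its sign even at the unfavorable extreme, the conclusion is robust to informatively missing data.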
Douglas, P; Tyrrel, S F; Kinnersley, R P; Whelan, M; Longhurst, P J; Walsh, K; Pollard, S J T; Drew, G H
2016-12-15
Bioaerosols are released in elevated quantities from composting facilities and are associated with negative health effects, although dose-response relationships are not well understood, and require improved exposure classification. Dispersion modelling has great potential to improve exposure classification, but has not yet been extensively used or validated in this context. We present a sensitivity analysis of the ADMS dispersion model specific to input parameter ranges relevant to bioaerosol emissions from open windrow composting. This analysis provides an aid for model calibration by prioritising parameter adjustment and targeting independent parameter estimation. Results showed that predicted exposure was most sensitive to the wet and dry deposition modules and the majority of parameters relating to emission source characteristics, including pollutant emission velocity, source geometry and source height. This research improves understanding of the accuracy of model input data required to provide more reliable exposure predictions. Copyright © 2016. Published by Elsevier Ltd.
1991-09-30
...parameters and ambient flow/oscillating flow combinations using a VAX-3520 and NASA supercomputers. Extensive sensitivity analysis has been performed...
The Advantages of Hybrid 4DEnVar in the Context of the Forecast Sensitivity to Initial Conditions
NASA Astrophysics Data System (ADS)
Song, Hyo-Jong; Shin, Seoleun; Ha, Ji-Hyun; Lim, Sujeong
2017-11-01
Hybrid four-dimensional ensemble variational data assimilation (hybrid 4DEnVar) is a prospective successor to three-dimensional variational data assimilation (3DVar) in operational weather prediction centers currently developing a new weather prediction model and those that do not operate adjoint models. In experiments using real observations, hybrid 4DEnVar improved Northern Hemisphere (NH; 20°N-90°N) 500 hPa geopotential height forecasts up to 5 days in a NH summer month compared to 3DVar, with statistical significance. This result is verified against ERA-Interim through a Monte Carlo test. By a regression analysis, the sensitivity of 5 day forecast is associated with the quality of the initial condition. The increased analysis skill for midtropospheric midlatitude temperature and subtropical moisture has the most apparent effect on forecast skill in the NH including a typhoon prediction case. Through attributing the analysis improvements by hybrid 4DEnVar separately to the ensemble background error covariance (BEC), its four-dimensional (4-D) extension, and climatological BEC, it is revealed that the ensemble BEC contributes to the subtropical moisture analysis, whereas the 4-D extension does to the midtropospheric midlatitude temperature. This result implies that hourly wind-mass correlation in 6 h analysis window is required to extract the potential of hybrid 4DEnVar for the midlatitude temperature analysis to the maximum. However, the temporal ensemble correlation, in hourly time scale, between moisture and another variable is invalid so that it could not work for improving the hybrid 4DEnVar analysis.
SCALE 6.2 Continuous-Energy TSUNAMI-3D Capabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perfetti, Christopher M; Rearden, Bradley T
2015-01-01
The TSUNAMI (Tools for Sensitivity and UNcertainty Analysis Methodology Implementation) capabilities within the SCALE code system make use of sensitivity coefficients for an extensive number of criticality safety applications, such as quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different systems, quantifying computational biases, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved ease of use and fidelity and the desire to extend TSUNAMI analysis to advanced applications have motivated the development of a SCALE 6.2 module for calculating sensitivity coefficients using three-dimensional (3D) continuous-energy (CE) Monte Carlo methods: CE TSUNAMI-3D. This paper provides an overview of the theory, implementation, and capabilities of the CE TSUNAMI-3D sensitivity analysis methods. CE TSUNAMI contains two methods for calculating sensitivity coefficients in eigenvalue sensitivity applications: (1) the Iterated Fission Probability (IFP) method and (2) the Contributon-Linked eigenvalue sensitivity/Uncertainty estimation via Track length importance CHaracterization (CLUTCH) method. This work also presents the GEneralized Adjoint Response in Monte Carlo method (GEAR-MC), a first-of-its-kind approach for calculating adjoint-weighted, generalized response sensitivity coefficients, such as flux responses or reaction rate ratios, in CE Monte Carlo applications. The accuracy and efficiency of the CE TSUNAMI-3D eigenvalue sensitivity methods are assessed from a user perspective in a companion publication, and the accuracy and features of the CE TSUNAMI-3D GEAR-MC methods are detailed in this paper.
Laurence Lin; J.R. Webster
2012-01-01
The constant nutrient addition technique has been used extensively to measure nutrient uptake in streams. However, this technique is impractical for large streams, and the pulse nutrient addition (PNA) has been suggested as an alternative. We developed a computer model to simulate Monod kinetics nutrient uptake in large rivers and used this model to evaluate the...
Extension of latin hypercube samples with correlated variables.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hora, Stephen Curtis; Helton, Jon Craig; Sallaberry, Cedric J. PhD.
2006-11-01
A procedure for extending the size of a Latin hypercube sample (LHS) with rank correlated variables is described and illustrated. The extension procedure starts with an LHS of size m and associated rank correlation matrix C and constructs a new LHS of size 2m that contains the elements of the original LHS and has a rank correlation matrix that is close to the original rank correlation matrix C. The procedure is intended for use in conjunction with uncertainty and sensitivity analysis of computationally demanding models in which it is important to make efficient use of a necessarily limited number of model evaluations.
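A minimal sketch of the doubling step, assuming uniform marginals on [0, 1]: each original cell splits into two half-cells, the new points fill exactly the half-cells the old points left empty (preserving the Latin property at size 2m), and a simple rank-copying step stands in for the paper's more careful correlation-restoring pairing:

    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(4)

    def extend_lhs_1d(u):
        # Double a 1-D LHS: each old cell of width 1/m splits in two; sample
        # the half-cell that each original point left empty.
        m = len(u)
        occupied = np.floor(u * 2 * m).astype(int)
        empty = np.setdiff1d(np.arange(2 * m), occupied)
        return (empty + rng.uniform(size=m)) / (2 * m)

    m, d = 8, 2
    base = np.column_stack([(rng.permutation(m) + rng.uniform(size=m)) / m
                            for _ in range(d)])
    new = np.column_stack([extend_lhs_1d(base[:, j]) for j in range(d)])

    # Pair new coordinates so their ranks copy the base sample's rank pattern,
    # keeping the extended sample's rank correlation near the original's.
    ranks = np.argsort(np.argsort(base, axis=0), axis=0)
    new = np.take_along_axis(np.sort(new, axis=0), ranks, axis=0)
    lhs_2m = np.vstack([base, new])

    print("rank corr, size m:  %.3f" % spearmanr(base)[0])
    print("rank corr, size 2m: %.3f" % spearmanr(lhs_2m)[0])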
Jia, Yongliang; Leung, Siu-wai; Lee, Ming-Yuen; Cui, Guozhen; Huang, Xiaohui; Pan, Fongha
2013-01-01
Objective. The randomized controlled trials (RCTs) on Guanxinning injection (GXN) in treating angina pectoris were published only in Chinese and have not been systematically reviewed. This study aims to provide a PRISMA-compliant and internationally accessible systematic review to evaluate the efficacy of GXN in treating angina pectoris. Methods. The RCTs were included according to prespecified eligibility criteria. Meta-analysis was performed to evaluate the symptomatic (SYMPTOMS) and electrocardiographic (ECG) improvements after treatment. Odds ratios (ORs) were used to measure effect sizes. Subgroup analysis, sensitivity analysis, and metaregression were conducted to evaluate the robustness of the results. Results. Sixty-five RCTs published between 2002 and 2012 with 6064 participants were included. Overall ORs comparing GXN with other drugs were 3.32 (95% CI: [2.72, 4.04]) in SYMPTOMS and 2.59 (95% CI: [2.14, 3.15]) in ECG. Subgroup analysis, sensitivity analysis, and metaregression found no statistically significant dependence of overall ORs upon specific study characteristics. Conclusion. This meta-analysis of eligible RCTs provides evidence that GXN is effective in treating angina pectoris. This evidence warrants further RCTs of higher quality, longer follow-up periods, larger sample sizes, and multicentres/multicountries for more extensive subgroup, sensitivity, and metaregression analyses. PMID:23634167
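A minimal sketch of the fixed-effect (inverse-variance) pooling behind such overall odds ratios, on hypothetical 2x2 trial counts (the review's data, and any random-effects or subgroup machinery, are not reproduced):

    import numpy as np

    # Hypothetical per-trial counts: (events_treat, n_treat, events_ctrl, n_ctrl).
    trials = [(45, 60, 30, 60), (38, 50, 25, 50), (70, 90, 52, 88)]

    log_or, w = [], []
    for et, nt, ec, nc in trials:
        a, b, c, d = et, nt - et, ec, nc - ec
        log_or.append(np.log(a * d / (b * c)))
        w.append(1.0 / (1/a + 1/b + 1/c + 1/d))   # inverse-variance (Woolf) weight

    pooled = np.average(log_or, weights=w)
    se = 1.0 / np.sqrt(np.sum(w))
    lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
    print(f"pooled OR = {np.exp(pooled):.2f} "
          f"(95% CI: {np.exp(lo):.2f} to {np.exp(hi):.2f})")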
Balmant, Wellington; Sugai-Guérios, Maura Harumi; Coradin, Juliana Hey; Krieger, Nadia; Furigo Junior, Agenor; Mitchell, David Alexander
2015-01-01
Current models that describe the extension of fungal hyphae and development of a mycelium either do not describe the role of vesicles in hyphal extension or do not correctly describe the experimentally observed profile for distribution of vesicles along the hypha. The present work uses the n-tanks-in-series approach to develop a model for hyphal extension that describes the intracellular transport of nutrient to a sub-apical zone where vesicles are formed and then transported to the tip, where tip extension occurs. The model was calibrated using experimental data from the literature for the extension of reproductive aerial hyphae of three different fungi, and was able to describe different profiles involving acceleration and deceleration of the extension rate. A sensitivity analysis showed that the supply of nutrient to the sub-apical vesicle-producing zone is a key factor influencing the rate of extension of the hypha. Although this model was used to describe the extension of a single reproductive aerial hypha, the use of the n-tanks-in-series approach to representing the hypha means that the model has the flexibility to be extended to describe the growth of other types of hyphae and the branching of hyphae to form a complete mycelium.
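A minimal sketch of an n-tanks-in-series model of this general shape, with hypothetical rate constants: nutrient moves tank to tank toward the apex, a sub-apical zone converts it to vesicles, and tip extension consumes the apical vesicle pool:

    import numpy as np
    from scipy.integrate import solve_ivp, trapezoid

    # Tanks and rate constants (all values hypothetical).
    n, k_t, k_v, k_e = 10, 1.0, 0.5, 0.2

    def rhs(t, y):
        s, v = y[:n], y[n]                  # nutrient in tanks 1..n, vesicle pool
        ds = np.empty(n)
        ds[0] = 1.0 - k_t * s[0]            # constant nutrient supply at the base
        ds[1:] = k_t * (s[:-1] - s[1:])     # tank-to-tank transport toward the tip
        ds[-1] -= k_v * s[-1]               # sub-apical conversion to vesicles
        dv = k_v * s[-1] - k_e * v          # vesicles consumed by tip extension
        return np.append(ds, dv)

    sol = solve_ivp(rhs, (0.0, 100.0), np.zeros(n + 1), max_step=0.5)
    length = k_e * trapezoid(sol.y[n], sol.t)   # extension = integral of tip flux
    print(f"hyphal length gained by t = 100: {length:.2f} (arbitrary units)")

Because nutrient must traverse the whole chain before vesicles can form, the extension rate accelerates from zero, consistent with the model's finding that supply to the sub-apical vesicle-producing zone is the key factor.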
Barratt, M D; Langowski, J J
1999-01-01
The DEREK knowledge-based computer system contains a subset of approximately 50 rules describing chemical substructures (toxophores) responsible for skin sensitization. This rulebase, based originally on Unilever historical in-house guinea pig maximization test data, has been subject to extensive validation and is undergoing refinement as the next stage of its development. As part of an ongoing program of validation and testing, the predictive ability of the sensitization rule set has been assessed by processing the structures of the 84 chemical substances in the list of contact allergens issued by the BgVV (German Federal Institute for Health Protection of Consumers). This list of chemicals is important because the biological data for each of the chemicals have been carefully scrutinized and peer reviewed, a key consideration in an area of toxicology in which much unreliable and potentially misleading data have been published. The existing DEREK rulebase for skin sensitization identified toxophores for skin sensitization in the structures of 71 out of the 84 chemicals (85%). The exercise highlighted areas of chemistry where further development of the rulebase was required, either by extension of the scope of existing rules or by generation of new rules where a sound mechanistic rationale for the biological activity could be established. Chemicals likely to be acting as photoallergens were identified, and new rules for photoallergenicity have subsequently been written. At the end of the exercise, the refined rulebase was able to identify toxophores for skin sensitization for 82 of the 84 chemicals in the BgVV list.
Tosi, L L; Detsky, A S; Roye, D P; Morden, M L
1987-01-01
Using a decision analysis model, we estimated the savings that might be derived from a mass prenatal screening program aimed at detecting open neural tube defects (NTDs) in low-risk pregnancies. Our baseline analysis showed that screening v. no screening could be expected to save approximately $8 per pregnancy given a cost of $7.50 for the maternal serum alpha-fetoprotein (MSAFP) test and a cost of $42,507 for hospital and rehabilitation services for the first 10 years of life for a child with spina bifida. When a more liberal estimate of the costs of caring for such a child was used, the savings with the screening program were more substantial. We performed extensive sensitivity analyses, which showed that the savings were somewhat sensitive to the cost of the MSAFP test and highly sensitive to the specificity (but not the sensitivity) of the test. A screening program for NTDs in low-risk pregnancies may result in substantial savings in direct health care costs if the screening protocol is followed rigorously and efficiently. PMID:2433011
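A decision-analytic comparison of this kind reduces to comparing expected costs per pregnancy across the two strategies. The sketch below reproduces only the arithmetic skeleton; the prevalence and test sensitivity are assumed placeholders (the published model also includes follow-up diagnostics and other branches), while the two costs come from the abstract.

```python
# Expected direct cost per pregnancy, screening vs. no screening.
COST_TEST = 7.50        # MSAFP test cost (from the abstract)
COST_CARE = 42507.0     # 10-year care cost for a child with spina bifida
prevalence = 1 / 1000   # assumed open-NTD birth prevalence (illustrative)
sensitivity = 0.80      # assumed fraction of affected pregnancies detected

# No screening: every affected pregnancy incurs the care cost.
cost_no_screen = prevalence * COST_CARE

# Screening: test everyone; detected cases avert the care cost.
cost_screen = COST_TEST + prevalence * (1 - sensitivity) * COST_CARE

print(f"expected saving per pregnancy: ${cost_no_screen - cost_screen:.2f}")
```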
NASA Technical Reports Server (NTRS)
Kofal, Allen E.
1987-01-01
The purpose of this extension to the OTV Concept Definition and Systems Analysis Study was to improve the definition of the OTV Program that will be most beneficial to the nation in the 1995 to 2010 timeframe. The implications of the defined mission and defined launch vehicle are investigated. The key mission requirements identified for the Space Transportation Architecture Study (STAS) were established and reflect a need for early capability and more ambitious capability growth. The key technical objectives and related issues addressed are summarized. The analyses of selected areas, including aerobrake design, proximity operations, and the balance of EVA and IVA operations used in support of the OTV at the space base, were enhanced. Sensitivity studies were conducted to establish how the OTV program should be tailored to meet changing circumstances.
Liu, Yuanchao; Liu, Ming; Wang, Xin
2015-01-01
The objective of text clustering is to divide document collections into clusters based on the similarity between documents. In this paper, an extension-based feature modeling approach towards semantically sensitive text clustering is proposed along with the corresponding feature space construction and similarity computation method. By combining the similarity in traditional feature space and that in extension space, the adverse effects of the complexity and diversity of natural language can be addressed and clustering semantic sensitivity can be improved correspondingly. The generated clusters can be organized using different granularities. The experimental evaluations on well-known clustering algorithms and datasets have verified the effectiveness of our approach. PMID:25794172
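The core computational idea, combining similarity in the traditional term space with similarity in the extension space, can be illustrated as below; the linear blend and the weight alpha are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def combined_similarity(doc_a, doc_b, alpha=0.5):
    """Blend similarity in the traditional term space with similarity in
    the extension (semantic feature) space; alpha is an assumed weight."""
    s_term = cosine(doc_a["term_vec"], doc_b["term_vec"])
    s_ext = cosine(doc_a["ext_vec"], doc_b["ext_vec"])
    return alpha * s_term + (1 - alpha) * s_ext

a = {"term_vec": np.array([1.0, 0.0, 2.0]), "ext_vec": np.array([0.5, 1.0])}
b = {"term_vec": np.array([0.0, 1.0, 2.0]), "ext_vec": np.array([0.4, 0.9])}
print(combined_similarity(a, b))
```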
[Insulinoma of the pancreas: analysis of a clinical series of 30 cases].
Andronesi, D; Andronesi, A; Tonea, A; Andrei, S; Herlea, V; Lupescu, I; Ionescu-Târgovişte, C; Coculescu, M; Fica, S; Ionescu, M; Gheorghe, C; Popescu, I
2009-01-01
Insulinoma is the most frequent neuroendocrine pancreatic tumor and the main cause of hypoglycemia due to endogenous hyperinsulinism. We performed an analysis of a clinical series in order to study the clinical and biological spectrum of presentation, the preoperative imaging diagnosis, and the results of the surgical approach. Between 1986 and 2009, 30 patients with symptoms suggesting an insulinoma were hospitalized in our department. Preoperative localization of the insulinoma was possible in 16 patients; the most sensitive imaging methods were endoscopic ultrasound and magnetic resonance imaging. Intraoperative ultrasound was performed in 16 patients, and its sensitivity in detecting insulinomas was 93%; the combination of intraoperative ultrasound and manual exploration of the pancreas by the surgeon reached 100% sensitivity. Before intraoperative ultrasound was used, tumor excision was predominantly done by extensive pancreatic resection; after it became available in our centre, more conservative procedures (enucleo-resection) were chosen. In one patient the resection was done by laparoscopy, and in one patient by robotic surgery. The tumor was smaller than 2 cm in most patients; 2 patients had nesidioblastosis and 2 had multiple insulinomas; all 28 patients with insulinomas proved to have benign tumors on histological specimens. Following surgery, the symptoms disappeared in all patients. The most common complication following extensive pancreatic resection was acute pancreatitis, while pancreatic fistula occurred more frequently after enucleation. Because of their small size, the preoperative diagnosis of insulinomas is usually difficult, endoscopic ultrasound being the most sensitive method. Intraoperative ultrasound is essential for insulinoma localization and for choosing the optimal type of excision. Enucleation is the resection method of choice whenever technically possible. In benign insulinomas the prognosis is excellent, surgical resection being curative in all cases.
Bayesian sensitivity analysis of bifurcating nonlinear models
NASA Astrophysics Data System (ADS)
Becker, W.; Worden, K.; Rowson, J.
2013-01-01
Sensitivity analysis allows one to investigate how changes in input parameters to a system affect the output. When computational expense is a concern, metamodels such as Gaussian processes can offer considerable computational savings over Monte Carlo methods, albeit at the expense of introducing a data modelling problem. In particular, Gaussian processes assume a smooth, non-bifurcating response surface. This work highlights a recent extension to Gaussian processes which uses a decision tree to partition the input space into homogeneous regions, and then fits separate Gaussian processes to each region. In this way, bifurcations can be modelled at region boundaries and different regions can have different covariance properties. To test this method, both the treed and standard methods were applied to the bifurcating response of a Duffing oscillator and a bifurcating FE model of a heart valve. It was found that the treed Gaussian process provides a practical way of performing uncertainty and sensitivity analysis on large, potentially-bifurcating models, which cannot be dealt with by using a single GP, although an open problem remains how to manage bifurcation boundaries that are not parallel to coordinate axes.
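A simplified, non-Bayesian analogue of the treed Gaussian process can be sketched with off-the-shelf tools: a shallow decision tree partitions the input space at the discontinuity, and an independent GP is fitted in each region. This is only a two-stage approximation of the treed GP used in the paper, with a synthetic bifurcating response.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.gaussian_process import GaussianProcessRegressor

# Bifurcating response: a step between two smooth branches.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 1))
y = np.where(X[:, 0] < 0, np.sin(3 * X[:, 0]), 2 + np.cos(3 * X[:, 0]))
y = y + 0.05 * rng.standard_normal(200)

# Stage 1: a shallow tree partitions the input space at the discontinuity.
tree = DecisionTreeRegressor(max_depth=2, min_samples_leaf=20).fit(X, y)
leaves = tree.apply(X)

# Stage 2: fit an independent GP inside each homogeneous region.
gps = {leaf: GaussianProcessRegressor().fit(X[leaves == leaf], y[leaves == leaf])
       for leaf in np.unique(leaves)}

def predict(X_new):
    leaf_ids = tree.apply(X_new)
    return np.array([gps[l].predict(x.reshape(1, -1))[0]
                     for l, x in zip(leaf_ids, X_new)])

print(predict(np.array([[-0.5], [0.5]])))
```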
BEATBOX v1.0: Background Error Analysis Testbed with Box Models
NASA Astrophysics Data System (ADS)
Knote, Christoph; Barré, Jérôme; Eckl, Max
2018-02-01
The Background Error Analysis Testbed (BEATBOX) is a new data assimilation framework for box models. Based on the BOX Model eXtension (BOXMOX) to the Kinetic Pre-Processor (KPP), this framework allows users to conduct performance evaluations of data assimilation experiments, sensitivity analyses, and detailed chemical scheme diagnostics from an observing system simulation experiment (OSSE) point of view. The BEATBOX framework incorporates an observation simulator and a data assimilation system with the possibility of choosing ensemble, adjoint, or combined sensitivities. A user-friendly, Python-based interface allows for the tuning of many parameters for atmospheric chemistry and data assimilation research as well as for educational purposes, for example, observation error, model covariances, ensemble size, perturbation distribution in the initial conditions, and so on. In this work, the testbed is described and two case studies are presented to illustrate the design of a typical OSSE experiment, data assimilation experiments, a sensitivity analysis, and a method for diagnosing model errors. BEATBOX is released as an open source tool for the atmospheric chemistry and data assimilation communities.
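The OSSE logic that BEATBOX automates can be illustrated generically, without reference to BEATBOX's actual interface: generate a "truth" run from a toy box model, simulate noisy observations of it, and assimilate them into a deliberately biased forecast. The model, error variances, and scalar optimal-interpolation update below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def box_model(c0, k, hours):
    """Toy first-order chemical decay in a well-mixed box."""
    t = np.arange(hours)
    return c0 * np.exp(-k * t)

truth = box_model(c0=50.0, k=0.10, hours=24)          # "nature" run
obs = truth + rng.normal(0.0, 1.0, truth.size)        # simulated observations
forecast = box_model(c0=40.0, k=0.12, hours=24)       # biased model run

# Simplest possible assimilation: scalar optimal-interpolation update
# with assumed background and observation error variances.
var_b, var_o = 4.0, 1.0
gain = var_b / (var_b + var_o)
analysis = forecast + gain * (obs - forecast)

print("forecast RMSE:", np.sqrt(np.mean((forecast - truth) ** 2)))
print("analysis RMSE:", np.sqrt(np.mean((analysis - truth) ** 2)))
```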
Domain decomposition for aerodynamic and aeroacoustic analyses, and optimization
NASA Technical Reports Server (NTRS)
Baysal, Oktay
1995-01-01
The overarching theme was domain decomposition, which was intended to improve the numerical solution technique for the partial differential equations at hand; in the present study, those that governed either the fluid flow, the aeroacoustic wave propagation, or the sensitivity analysis for a gradient-based optimization. The role of the domain decomposition extended beyond the original impetus of discretizing geometrically complex regions or writing modular software for distributed-hardware computers. It induced function-space decompositions and operator decompositions that offered the valuable property of near independence of operator evaluation tasks. The objectives gravitated about the extensions and implementations of methodologies either previously developed or concurrently being developed: (1) aerodynamic sensitivity analysis with domain decomposition (SADD); (2) computational aeroacoustics of cavities; and (3) dynamic, multibody computational fluid dynamics using unstructured meshes.
Additional EIPC Study Analysis: Interim Report on High Priority Topics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hadley, Stanton W
Between 2010 and 2012 the Eastern Interconnection Planning Collaborative (EIPC) conducted a major long-term resource and transmission study of the Eastern Interconnection (EI). With guidance from a Stakeholder Steering Committee (SSC) that included representatives from the Eastern Interconnection States Planning Council (EISPC) among others, the project was conducted in two phases. Phase 1 involved a long-term capacity expansion analysis that involved creation of eight major futures plus 72 sensitivities. Three scenarios were selected for more extensive transmission-focused evaluation in Phase 2. Five power flow analyses, nine production cost model runs (including six sensitivities), and three capital cost estimations were developed during this second phase. The results from Phase 1 and 2 provided a wealth of data that could be examined further to address energy-related questions. A list of 13 topics was developed for further analysis; this paper discusses the first five.
NASA Technical Reports Server (NTRS)
Martin, Carl J., Jr.
1996-01-01
This report describes a structural optimization procedure developed for use with the Engineering Analysis Language (EAL) finite element analysis system. The procedure is written primarily in the EAL command language. Three external processors, written in FORTRAN, generate equivalent stiffnesses and evaluate stress and local buckling constraints for the sections. Several built-up structural sections were coded into the design procedures. These structural sections were selected for use in aircraft design, but are suitable for other applications. Sensitivity calculations use the semi-analytic method, and an extensive effort has been made to increase the execution speed and reduce the storage requirements. An approximate sensitivity update method is also included, which can significantly reduce computational time. The optimization is performed by an implementation of the MINOS V5.4 linear programming routine in a sequential linear programming procedure.
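The semi-analytic method mentioned here combines a finite-difference derivative of the stiffness matrix with the analytic sensitivity relation for the response. A minimal sketch, assuming a design-independent load vector and a hypothetical two-spring stiffness:

```python
import numpy as np

def semi_analytic_sensitivity(K_fun, f, x, dx=1e-6):
    """du/dx for K(x) u = f: differentiate K by finite differences,
    then use the analytic relation du/dx = K^{-1}(-dK/dx u)."""
    K = K_fun(x)
    u = np.linalg.solve(K, f)
    dK = (K_fun(x + dx) - K_fun(x - dx)) / (2 * dx)   # numeric part
    return np.linalg.solve(K, -dK @ u)                # analytic part

# Two-spring example: stiffness scales with the design variable x.
K_fun = lambda x: x * np.array([[2.0, -1.0], [-1.0, 1.0]])
f = np.array([0.0, 1.0])
print(semi_analytic_sensitivity(K_fun, f, x=1.0))
```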
ERIC Educational Resources Information Center
Adedokun, Omolola A.
2018-01-01
This article provides an illustrative description of the pre-post difference index (PPDI), a simple, nontechnical yet robust tool for examining the instructional sensitivity of assessment items. Extension educators often design pretest-posttest instruments to assess the impact of their curricula on participants' knowledge and understanding of the…
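As a concrete illustration, the PPDI for an item is simply the post-test proportion correct minus the pre-test proportion correct; the response data below are hypothetical.

```python
import numpy as np

# Rows = participants, columns = items; 1 = correct, 0 = incorrect.
pre = np.array([[0, 0, 1], [0, 1, 0], [0, 0, 1], [1, 0, 0]])
post = np.array([[1, 0, 1], [1, 1, 1], [1, 0, 1], [1, 1, 0]])

# PPDI per item: post-test percent correct minus pre-test percent correct.
ppdi = post.mean(axis=0) - pre.mean(axis=0)
for i, v in enumerate(ppdi, start=1):
    print(f"item {i}: PPDI = {v:+.2f}")   # larger values = more sensitive item
```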
Spatial analysis of extension fracture systems: A process modeling approach
Ferguson, C.C.
1985-01-01
Little consensus exists on how best to analyze natural fracture spacings and their sequences. Field measurements and analyses published in geotechnical literature imply fracture processes radically different from those assumed by theoretical structural geologists. The approach adopted in this paper recognizes that disruption of rock layers by layer-parallel extension results in two spacing distributions, one representing layer-fragment lengths and another separation distances between fragments. These two distributions and their sequences reflect the mechanics and history of fracture and separation. Such distributions and sequences, represented by a 2 × n matrix of lengths L, can be analyzed using a method that is history sensitive and which also yields a scalar estimate of bulk extension, e(L). The method is illustrated by a series of Monte Carlo experiments representing a variety of fracture-and-separation processes, each with distinct implications for extension history. Resulting distributions of e(L) are process-specific, suggesting that the inverse problem of deducing fracture-and-separation history from final structure may be tractable. © 1985 Plenum Publishing Corporation.
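The flavor of such Monte Carlo experiments can be sketched as follows; the fragment and gap distributions, and the specific definition of the bulk-extension estimate e(L) as opened length over original length, are assumptions for illustration rather than the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(2)

def fracture_and_separate(n_frags=50):
    """One realization: break a layer into fragments, then open gaps."""
    fragments = rng.lognormal(mean=0.0, sigma=0.5, size=n_frags)
    gaps = rng.exponential(scale=0.2, size=n_frags - 1)
    return fragments, gaps

def bulk_extension(fragments, gaps):
    # One plausible scalar measure: opened length over original length.
    return gaps.sum() / fragments.sum()

e = [bulk_extension(*fracture_and_separate()) for _ in range(5000)]
print(f"e(L): mean = {np.mean(e):.3f}, std = {np.std(e):.3f}")
```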
DAKOTA Design Analysis Kit for Optimization and Terascale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Dalbey, Keith R.; Eldred, Michael S.
2010-02-24
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes (computational models) and iterative analysis methods. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and analysis of computational models on high performance computers. A user provides a set of DAKOTA commands in an input file and launches DAKOTA. DAKOTA invokes instances of the computational models, collects their results, and performs systems analyses. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, polynomial chaos, stochastic collocation, and epistemic methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as hybrid optimization, surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. Services for parallel computing, simulation interfacing, approximation modeling, fault tolerance, restart, and graphics are also included.
Analysis of world terror networks from the reduced Google matrix of Wikipedia
NASA Astrophysics Data System (ADS)
El Zant, Samer; Frahm, Klaus M.; Jaffrès-Runser, Katia; Shepelyansky, Dima L.
2018-01-01
We apply the reduced Google matrix method to analyze interactions between 95 terrorist groups and determine their relationships and influence on 64 world countries. This is done on the basis of the Google matrix of the English Wikipedia (2017) composed of 5 416 537 articles which accumulate a great part of global human knowledge. The reduced Google matrix takes into account the direct and hidden links between a selection of 159 nodes (articles) appearing due to all paths of a random surfer moving over the whole network. As a result we obtain the network structure of terrorist groups and their relations with selected countries, including hidden indirect links. Using the sensitivity of PageRank to a weight variation of specific links we determine the geopolitical sensitivity and influence of specific terrorist groups on world countries. The world maps of the sensitivity of various countries to the influence of specific terrorist groups are obtained. We argue that this approach can find useful application for more extensive and detailed database analysis.
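The underlying idea of probing PageRank sensitivity to a specific link weight can be illustrated on a toy network by numerical perturbation; the paper itself works with the reduced Google matrix of the full Wikipedia network, which this sketch does not reproduce.

```python
import numpy as np

def pagerank(A, alpha=0.85, iters=200):
    """Power iteration on the Google matrix built from adjacency A,
    where A[i, j] = 1 if node j links to node i."""
    n = A.shape[0]
    cols = A.sum(axis=0)
    S = np.where(cols > 0, A / np.where(cols == 0, 1, cols), 1.0 / n)
    G = alpha * S + (1 - alpha) / n
    p = np.full(n, 1.0 / n)
    for _ in range(iters):
        p = G @ p
    return p / p.sum()

A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

# Sensitivity of the PageRank vector to a small change in one link weight:
p0 = pagerank(A)
eps = 1e-3
A[1, 0] += eps                       # perturb the weight of link 0 -> 1
dp = (pagerank(A) - p0) / eps        # finite-difference sensitivity
print(dp)
```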
Hiasat, Jamila G; Saleh, Alaa; Al-Hussaini, Maysa; Al Nawaiseh, Ibrahim; Mehyar, Mustafa; Qandeel, Monther; Mohammad, Mona; Deebajah, Rasha; Sultan, Iyad; Jaradat, Imad; Mansour, Asem; Yousef, Yacoub A
2018-06-01
To evaluate the predictive value of magnetic resonance imaging in retinoblastoma for the likelihood of high-risk pathologic features. A retrospective study of 64 eyes enucleated from 60 retinoblastoma patients. Contrast-enhanced magnetic resonance imaging was performed before enucleation. Main outcome measures included demographics, laterality, accuracy, sensitivity, and specificity of magnetic resonance imaging in detecting high-risk pathologic features. Optic nerve invasion and choroidal invasion were seen microscopically in 34 (53%) and 28 (44%) eyes, respectively, while they were detected on magnetic resonance imaging in 22 (34%) and 15 (23%) eyes, respectively. The accuracy of magnetic resonance imaging in detecting prelaminar invasion was 77% (sensitivity 89%, specificity 98%), 56% for laminar invasion (sensitivity 27%, specificity 94%), 84% for postlaminar invasion (sensitivity 42%, specificity 98%), and 100% for optic cut edge invasion (sensitivity 100%, specificity 100%). The accuracy of magnetic resonance imaging in detecting focal choroidal invasion was 48% (sensitivity 33%, specificity 97%), and 84% for massive choroidal invasion (sensitivity 53%, specificity 98%), and the accuracy in detecting extrascleral extension was 96% (sensitivity 67%, specificity 98%). Magnetic resonance imaging should not be the only method to stratify patients at high risk from those who are not, even though it can predict with high accuracy extensive postlaminar optic nerve invasion, massive choroidal invasion, and extrascleral tumor extension.
STARLSE -- Starlink Extensions to the VAX Language Sensitive Editor
NASA Astrophysics Data System (ADS)
Warren-Smith, R. F.
STARLSE is a "Starlink Sensitive" editor based on the VAX Language Sensitive Editor (LSE). It exploits the extensibility of LSE to provide additional features which assist in the writing of portable Fortran 77 software with a standard Starlink style. STARLSE is intended mainly for use by those writing ADAM applications and subroutine libraries for distribution as part of the Starlink Software Collection, although it may also be suitable for other software projects. It is designed to integrate with the SST (Simple Software Tools) package.
Improved test of Lorentz invariance in electrodynamics
NASA Astrophysics Data System (ADS)
Wolf, Peter; Bize, Sébastien; Clairon, André; Santarelli, Giorgio; Tobar, Michael E.; Luiten, André N.
2004-09-01
We report new results of a test of Lorentz invariance based on the comparison of a cryogenic sapphire microwave resonator and a hydrogen maser. The experimental results are shown together with an extensive analysis of systematic effects. Previously, this experiment has set the most stringent constraint on Kennedy-Thorndike-type violations of Lorentz invariance. In this work we present new data and interpret our results in the general Lorentz-violating extension of the standard model of particle physics (SME). Within the photon sector of the SME, our experiment is sensitive to seven SME parameters. We marginally improve present limits on four of these, and by a factor of seven to ten on the other three.
ERIC Educational Resources Information Center
Young, Hannah; Fenwick, Maggi; Lambe, Loretto; Hogg, James
2011-01-01
The importance of storytelling in social, cultural and educational contexts is well established and documented. The extension of storytelling to people with profound intellectual and multiple disabilities (PIMD) has in recent years been undertaken with an emphasis on the value of sensory experience and the context storytelling provides for social…
Vickers, Andrew J; Cronin, Angel M; Elkin, Elena B; Gonen, Mithat
2008-01-01
Background Decision curve analysis is a novel method for evaluating diagnostic tests, prediction models and molecular markers. It combines the mathematical simplicity of accuracy measures, such as sensitivity and specificity, with the clinical applicability of decision analytic approaches. Most critically, decision curve analysis can be applied directly to a data set, and does not require the sort of external data on costs, benefits and preferences typically required by traditional decision analytic techniques. Methods In this paper we present several extensions to decision curve analysis including correction for overfit, confidence intervals, application to censored data (including competing risk) and calculation of decision curves directly from predicted probabilities. All of these extensions are based on straightforward methods that have previously been described in the literature for application to analogous statistical techniques. Results Simulation studies showed that repeated 10-fold crossvalidation provided the best method for correcting a decision curve for overfit. The method for applying decision curves to censored data had little bias and coverage was excellent; for competing risk, decision curves were appropriately affected by the incidence of the competing risk and the association between the competing risk and the predictor of interest. Calculation of decision curves directly from predicted probabilities led to a smoothing of the decision curve. Conclusion Decision curve analysis can be easily extended to many of the applications common to performance measures for prediction models. Software to implement decision curve analysis is provided. PMID:19036144
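The central quantity of decision curve analysis is the net benefit at a threshold probability pt, NB = TP/n - (FP/n) * pt/(1 - pt), which the sketch below evaluates for a toy model against the treat-all strategy; the outcomes and predicted probabilities are simulated placeholders.

```python
import numpy as np

def net_benefit(y, p_hat, pt):
    """Net benefit at threshold probability pt (Vickers & Elkin)."""
    n = len(y)
    pred_pos = p_hat >= pt
    tp = np.sum(pred_pos & (y == 1))
    fp = np.sum(pred_pos & (y == 0))
    return tp / n - fp / n * pt / (1 - pt)

rng = np.random.default_rng(3)
y = rng.integers(0, 2, 500)                                # toy outcomes
p_hat = np.clip(y * 0.3 + rng.uniform(0, 0.7, 500), 0, 1)  # toy model output

for pt in (0.1, 0.2, 0.3):
    nb_model = net_benefit(y, p_hat, pt)
    nb_all = net_benefit(y, np.ones(500), pt)              # "treat all"
    print(f"pt={pt:.1f}: model={nb_model:.3f}, treat-all={nb_all:.3f}")
```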
Defining the consequences of genetic variation on a proteome-wide scale
Chick, Joel M.; Munger, Steven C.; Simecek, Petr; Huttlin, Edward L.; Choi, Kwangbom; Gatti, Daniel M.; Raghupathy, Narayanan; Svenson, Karen L.; Churchill, Gary A.; Gygi, Steven P.
2016-01-01
Genetic variation modulates protein expression through both transcriptional and post-transcriptional mechanisms. To characterize the consequences of natural genetic diversity on the proteome, here we combine a multiplexed, mass spectrometry-based method for protein quantification with an emerging outbred mouse model containing extensive genetic variation from eight inbred founder strains. By measuring genome-wide transcript and protein expression in livers from 192 Diversity Outbred mice, we identify 2,866 protein quantitative trait loci (pQTL) with twice as many local as distant genetic variants. These data support distinct transcriptional and post-transcriptional models underlying the observed pQTL effects. Using a sensitive approach to mediation analysis, we often identified a second protein or transcript as the causal mediator of distant pQTL. Our analysis reveals an extensive network of direct protein–protein interactions. Finally, we show that local genotype can provide accurate predictions of protein abundance in an independent cohort of Collaborative Cross mice. PMID:27309819
An analysis of the extension of a ZnO piezoelectric semiconductor nanofiber under an axial force
NASA Astrophysics Data System (ADS)
Zhang, Chunli; Wang, Xiaoyuan; Chen, Weiqiu; Yang, Jiashi
2017-02-01
This paper presents a theoretical analysis of the axial extension of an n-type ZnO piezoelectric semiconductor nanofiber under an axial force. The phenomenological theory of piezoelectric semiconductors, consisting of Newton's second law of motion, the charge equation of electrostatics, and the conservation of charge, was used. The equations were linearized for small axial force and hence small electron concentration perturbation, and were reduced to one-dimensional equations for thin fibers. Simple and analytical expressions for the electromechanical fields and electron concentration in the fiber were obtained. The fields are either totally or partially described by hyperbolic functions, which are relatively large near the ends of the fiber and change rapidly there. The behavior of the fields is sensitive to the initial electron concentration and the applied axial force. For higher initial electron concentrations the fields are larger near the ends and change more rapidly there.
NASA Astrophysics Data System (ADS)
Li, Yi; Xu, Yan Long
2018-05-01
When the dependence of the function on the uncertain variables is non-monotonic over the interval, the interval of the function obtained by the classic interval extension based on the first-order Taylor series will exhibit significant errors. In order to reduce these errors, an improved format of the interval extension with the first-order Taylor series is developed here, which takes the monotonicity of the function into account. Two typical mathematical examples are given to illustrate this methodology. The vibration of a beam with lumped masses is studied to demonstrate the usefulness of this method in a practical application; the only input data required are the function value at the central point of the interval, the sensitivity, and the deviation of the function. The results of the above examples show that the interval of the function given by the method developed in this paper is more accurate than the one obtained by the classic method.
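The error the paper targets can be demonstrated directly: when the interval spans a turning point of the function, the classic first-order extension both misses and overshoots the true range. The sketch below contrasts the classic extension with a densely sampled reference range; it does not reproduce the paper's improved format itself.

```python
import numpy as np

def classic_taylor_interval(f, df, xc, delta):
    """First-order interval extension about the centre point xc."""
    r = abs(df(xc)) * delta
    return f(xc) - r, f(xc) + r

def sampled_range(f, xc, delta, n=10001):
    """Reference 'true' range of f on [xc-delta, xc+delta] by dense sampling."""
    y = f(np.linspace(xc - delta, xc + delta, n))
    return y.min(), y.max()

f, df = np.sin, np.cos
# Monotonic case: the classic extension is close to the true range.
print(classic_taylor_interval(f, df, 0.5, 0.3), sampled_range(f, 0.5, 0.3))
# Non-monotonic case (interval spans the maximum at pi/2): the classic
# extension misestimates both bounds, the error the paper targets.
print(classic_taylor_interval(f, df, 1.5, 0.5), sampled_range(f, 1.5, 0.5))
```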
Balmant, Wellington; Sugai-Guérios, Maura Harumi; Coradin, Juliana Hey; Krieger, Nadia; Furigo Junior, Agenor; Mitchell, David Alexander
2015-01-01
Current models that describe the extension of fungal hyphae and development of a mycelium either do not describe the role of vesicles in hyphal extension or do not correctly describe the experimentally observed profile for distribution of vesicles along the hypha. The present work uses the n-tanks-in-series approach to develop a model for hyphal extension that describes the intracellular transport of nutrient to a sub-apical zone where vesicles are formed and then transported to the tip, where tip extension occurs. The model was calibrated using experimental data from the literature for the extension of reproductive aerial hyphae of three different fungi, and was able to describe different profiles involving acceleration and deceleration of the extension rate. A sensitivity analysis showed that the supply of nutrient to the sub-apical vesicle-producing zone is a key factor influencing the rate of extension of the hypha. Although this model was used to describe the extension of a single reproductive aerial hypha, the use of the n-tanks-in-series approach to representing the hypha means that the model has the flexibility to be extended to describe the growth of other types of hyphae and the branching of hyphae to form a complete mycelium. PMID:25785863
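A minimal sketch of the n-tanks-in-series idea is given below: nutrient moves tank-to-tank toward a sub-apical zone that produces vesicles, and the tip extends in proportion to vesicle consumption. The number of tanks and all rate constants are hypothetical placeholders, not the calibrated values from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

N = 5          # number of tanks along the hypha
k_t = 1.0      # nutrient transport rate between tanks
k_v = 0.8      # vesicle production rate in the sub-apical tank
k_e = 0.5      # tip extension rate per unit vesicle

def rhs(t, y):
    s, v = y[:N], y[N]                        # nutrient per tank, vesicles
    ds = np.empty(N)
    ds[0] = 1.0 - k_t * s[0]                  # constant supply at the base
    ds[1:] = k_t * (s[:-1] - s[1:])           # transport along the tanks
    ds[-1] -= k_v * s[-1]                     # consumed making vesicles
    dv = k_v * s[-1] - k_e * v                # sub-apical vesicle pool
    dL = k_e * v                              # tip extension rate
    return np.concatenate([ds, [dv, dL]])

sol = solve_ivp(rhs, (0, 30), np.zeros(N + 2))
print("hyphal length L(t=30) =", sol.y[-1, -1])
```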
On the sensitivity of annual streamflow to air temperature
Milly, Paul C.D.; Kam, Jonghun; Dunne, Krista A.
2018-01-01
Although interannual streamflow variability is primarily a result of precipitation variability, temperature also plays a role. The relative weakness of the temperature effect at the annual time scale hinders understanding, but may belie substantial importance on climatic time scales. Here we develop and evaluate a simple theory relating variations of streamflow and evapotranspiration (E) to those of precipitation (P) and temperature. The theory is based on extensions of the Budyko water-balance hypothesis, the Priestley-Taylor theory for potential evapotranspiration (Ep), and a linear model of interannual basin storage. The theory implies that temperature affects streamflow by modifying evapotranspiration through a Clausius-Clapeyron-like relation and through the sensitivity of net radiation to temperature. We apply and test (1) a previously introduced "strong" extension of the Budyko hypothesis, which requires that the function linking temporal variations of the evapotranspiration ratio (E/P) and the index of dryness (Ep/P) at an annual time scale is identical to that linking interbasin variations of the corresponding long-term means, and (2) a "weak" extension, which requires only that the annual evapotranspiration ratio depends uniquely on the annual index of dryness, and that the form of that dependence need not be known a priori nor be identical across basins. In application of the weak extension, the readily observed sensitivity of streamflow to precipitation contains crucial information about the sensitivity to potential evapotranspiration and, thence, to temperature. Implementation of the strong extension is problematic, whereas the weak extension appears to capture essential controls of the temperature effect efficiently.
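Under the weak extension, once a basin's E/P-versus-Ep/P relation is known, the temperature sensitivity of streamflow follows from perturbing Ep. The sketch below uses one classical Budyko curve as a stand-in for the basin-specific relation and an assumed fractional Ep increase per kelvin; both are illustrative choices, not the paper's fitted quantities.

```python
import numpy as np

def budyko_E_over_P(phi):
    """A classical Budyko curve, used here only as a stand-in for the
    basin-specific function that the weak extension lets data determine."""
    return np.sqrt(phi * np.tanh(1.0 / phi) * (1.0 - np.exp(-phi)))

def streamflow(P, Ep):
    return P * (1.0 - budyko_E_over_P(Ep / P))    # Q = P - E

P, Ep = 1000.0, 800.0            # mm/yr, illustrative basin
gamma = 0.02                     # assumed fractional Ep increase per kelvin

dT = 1.0
dQ = streamflow(P, Ep * (1 + gamma * dT)) - streamflow(P, Ep)
print(f"dQ/dT ~= {dQ:.1f} mm/yr per K")
```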
Gradient-Based Aerodynamic Shape Optimization Using ADI Method for Large-Scale Problems
NASA Technical Reports Server (NTRS)
Pandya, Mohagna J.; Baysal, Oktay
1997-01-01
A gradient-based shape optimization methodology, intended for practical three-dimensional aerodynamic applications, has been developed. It is based on quasi-analytical sensitivities. The flow analysis is rendered by a fully implicit, finite volume formulation of the Euler equations. The aerodynamic sensitivity equation is solved using the alternating-direction-implicit (ADI) algorithm for memory efficiency. A flexible wing geometry model, based on surface parameterization and planform schedules, is utilized. The present methodology and its components have been tested via several comparisons. Initially, the flow analysis for a wing is compared with those obtained using an unfactored, preconditioned conjugate gradient approach (PCG), and an extensively validated CFD code. Then, the sensitivities computed with the present method have been compared with those obtained using the finite-difference and the PCG approaches. Effects of grid refinement and convergence tolerance on the analysis and shape optimization have been explored. Finally, the new procedure has been demonstrated in the design of a cranked arrow wing at Mach 2.4. Despite the expected increase in the computational time, the results indicate that shape optimization problems, which require large numbers of grid points, can be resolved with a gradient-based approach.
Testing Relativity with Electrodynamics
NASA Astrophysics Data System (ADS)
Bailey, Quentin; Kostelecky, Alan
2004-04-01
Lorentz and CPT violation is a promising candidate signal for Planck-scale physics. Low-energy effects of Lorentz and CPT violation are described by the general theoretical framework called the Standard-Model Extension (SME). This talk focuses on Lorentz-violating effects arising in the classical electrodynamics limit of the SME. Analysis of the theory shows that suitable experiments could improve by several orders of magnitude certain sensitivities achieved in modern Michelson-Morley and Kennedy-Thorndike tests.
Tests of Lorentz Symmetry with Electrodynamics
NASA Astrophysics Data System (ADS)
Bailey, Quentin; Kostelecky, Alan
2004-05-01
Lorentz and CPT violation is a promising candidate signal for Planck-scale physics. Low-energy effects of Lorentz and CPT violation are described by the general theoretical framework called the Standard-Model Extension (SME). This talk focuses on Lorentz-violating effects arising in the limit of classical electrodynamics. Analysis of the theory shows that suitable experiments could improve by several orders of magnitude on the sensitivities achieved in modern Michelson-Morley and Kennedy-Thorndike tests.
NASA Astrophysics Data System (ADS)
Liu, Bin; Harman, Michelle; Giattina, Susanne; Stamper, Debra L.; Demakis, Charles; Chilek, Mark; Raby, Stephanie; Brezinski, Mark E.
2006-06-01
Assessing tissue birefringence with the imaging modality polarization-sensitive optical coherence tomography (PS-OCT) could improve the characterization of in vivo tissue pathology. Among the birefringent components, collagen may provide invaluable clinical information because of its alteration in disorders ranging from myocardial infarction to arthritis. But the features required of a clinical imaging modality in these areas usually include the ability to assess the parameter of interest rapidly and without extensive data analysis, characteristics that single-detector PS-OCT demonstrates. Beyond detecting organized collagen, which has been previously demonstrated and confirmed with the appropriate histological techniques, additional information can potentially be gained with PS-OCT, including collagen type, form versus intrinsic birefringence, the collagen angle, and the presence of multiple birefringent materials. In part I, we apply the simple but powerful fast Fourier transform (FFT) to both PS-OCT mathematical modeling and in vitro bovine meniscus for improved PS-OCT data analysis. The FFT analysis yields, in a rapid, straightforward, and easily interpreted manner, information on the presence of multiple birefringent materials, distinguishing the true anatomical structure from patterns in the image resulting from alterations in the polarization state and identifying the tissue/phantom optical axes. Therefore the use of FFT analysis of PS-OCT data provides information on tissue composition beyond identifying the presence of organized collagen, in real time and directly from the image, without extensive mathematical manipulation or data analysis. In part II, Helistat phantoms (collagen type I) are analyzed with the ultimate goal of improved tissue characterization. This study, along with the data in part I, advances the insights gained from PS-OCT images beyond simply determining the presence or absence of birefringence.
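The essence of the FFT approach in part I can be illustrated on synthetic data: two birefringent layers produce two banding periodicities in the depth signal, which appear as two distinct spectral peaks. The retardation rates and noise level below are hypothetical.

```python
import numpy as np

# Simulated depth profile of a PS-OCT intensity signal containing two
# birefringent components with different (hypothetical) banding frequencies.
z = np.linspace(0, 1.0, 2048)                 # depth, arbitrary units
signal = np.cos(2 * np.pi * 12 * z) + 0.6 * np.cos(2 * np.pi * 30 * z)
signal += 0.2 * np.random.default_rng(4).standard_normal(z.size)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(z.size, d=z[1] - z[0])

# Two distinct peaks reveal two birefringent components directly,
# without fitting the full polarization model.
peaks = freqs[np.argsort(spectrum)[-2:]]
print("dominant banding frequencies:", np.sort(peaks))
```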
Extension lifetime for dye-sensitized solar cells through multiple dye adsorption/desorption process
NASA Astrophysics Data System (ADS)
Chiang, Yi-Fang; Chen, Ruei-Tang; Shen, Po-Shen; Chen, Peter; Guo, Tzung-Fang
2013-03-01
In this study, we propose a novel concept for extending the lifetime of dye-sensitized solar cells (DSCs) and reducing the costs of re-conditioning DSCs by recycling the FTO/TiO2 substrates. The photovoltaic performances of DSCs using substrates with various cycles of dye-uptake and rinse-off history are tested. The results show that dye adsorption and Voc are significantly increased under the multiple dye adsorption/desorption process, resulting in improved power conversion efficiency. Moreover, the dyeing kinetics is faster after multiple recycling processes, which is favorable for industrial application. With surface analysis and charge transport characteristics, we also demonstrate the optimal functionality of the TiO2/dye interface for the improved Voc and efficiency. The results confirm that the improved performances are due to increased dye loading and dense packing of dye molecules. Our results are beneficial for understanding the extension of DSC lifetime after long-term operation in the application of DSC modules. This approach may also be applied in the replacement of newly synthesized photosensitizers in the active cells.
Borotikar, Bhushan S.; Sheehan, Frances T.
2017-01-01
Objectives To establish an in vivo, normative patellofemoral cartilage contact mechanics database acquired during voluntary muscle control using a novel dynamic magnetic resonance (MR) imaging-based computational methodology and validate the contact mechanics sensitivity to the known sub-millimeter methodological inaccuracies. Design Dynamic cine phase-contrast and multi-plane cine images were acquired while female subjects (n=20, sample of convenience) performed an open kinetic chain (knee flexion-extension) exercise inside a 3-Tesla MR scanner. Static cartilage models were created from high resolution three-dimensional static MR data and accurately placed in their dynamic pose at each time frame based on the cine-PC data. Cartilage contact parameters were calculated based on the surface overlap. Statistical analysis was performed using paired t-test and a one-sample repeated measures ANOVA. The sensitivity of the contact parameters to the known errors in the patellofemoral kinematics was determined. Results Peak mean patellofemoral contact area was 228.7±173.6mm2 at 40° knee angle. During extension, contact centroid and peak strain locations tracked medially on the femoral and patellar cartilage and were not significantly different from each other. At 30°, 35°, and 40° of knee extension, contact area was significantly different. Contact area and centroid locations were insensitive to rotational and translational perturbations. Conclusion This study is a first step towards unfolding the biomechanical pathways to anterior patellofemoral pain and OA using dynamic, in vivo, and accurate methodologies. The database provides crucial data for future studies and for validation of, or as an input to, computational models. PMID:24012620
A New Method to Measure Crack Extension in Nuclear Graphite Based on Digital Image Correlation
Lai, Shigang; Shi, Li; Fok, Alex; ...
2017-01-01
Graphite components, used as moderators, reflectors, and core-support structures in a High-Temperature Gas-Cooled Reactor, play an important role in the safety of the reactor. Specifically, they provide channels for the fuel elements, control rods, and coolant flow. Fracture is the main failure mode for graphite, and breaching of the above channels by crack extension will seriously threaten the safety of a reactor. In this paper, a new method based on digital image correlation (DIC) is introduced for measuring crack extension in brittle materials. Cross-correlation of the displacements measured by DIC with a step function was employed to identify the advancing crack tip in a graphite beam specimen under three-point bending. The load-crack extension curve, which is required for analyzing the R-curve and tension softening behaviors, was obtained for this material. Furthermore, a sensitivity analysis of the threshold value employed for the cross-correlation parameter in the crack identification process was conducted. Finally, the results were verified using the finite element method.
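The crack-tip identification step can be sketched as follows: cross-correlate the mean-removed DIC displacement profile with a zero-mean step kernel, and take the peak of the correlation as the location of the displacement jump. The profile, jump magnitude, and noise level are simulated placeholders.

```python
import numpy as np

# Displacement jump across a crack: DIC gives a noisy displacement profile
# along a line crossing the crack path; cross-correlating it with a step
# function locates the discontinuity (the crack position along the line).
rng = np.random.default_rng(5)
n = 400
true_crack = 250
u = np.where(np.arange(n) >= true_crack, 0.02, 0.0)   # mm, opening jump
u += 0.002 * rng.standard_normal(n)                   # measurement noise

step = np.ones(n)
step[: n // 2] = -1.0           # zero-mean step kernel centred at n // 2

# With this centred kernel and mode="same", the peak index of the
# correlation directly estimates the location of the jump.
score = np.correlate(u - u.mean(), step, mode="same")
print("estimated crack position:", int(np.argmax(score)))
```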
Additional EIPC Study Analysis. Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hadley, Stanton W; Gotham, Douglas J.; Luciani, Ralph L.
Between 2010 and 2012 the Eastern Interconnection Planning Collaborative (EIPC) conducted a major long-term resource and transmission study of the Eastern Interconnection (EI). With guidance from a Stakeholder Steering Committee (SSC) that included representatives from the Eastern Interconnection States Planning Council (EISPC) among others, the project was conducted in two phases. Phase 1 involved a long-term capacity expansion analysis that involved creation of eight major futures plus 72 sensitivities. Three scenarios were selected for more extensive transmission-focused evaluation in Phase 2. Five power flow analyses, nine production cost model runs (including six sensitivities), and three capital cost estimations were developed during this second phase. The results from Phase 1 and 2 provided a wealth of data that could be examined further to address energy-related questions. A list of 14 topics was developed for further analysis. This paper brings together the earlier interim reports of the first 13 topics plus one additional topic into a single final report.
Reanalysis, compatibility and correlation in analysis of modified antenna structures
NASA Technical Reports Server (NTRS)
Levy, R.
1989-01-01
A simple computational procedure is synthesized to process changes in the microwave-antenna pathlength-error measure when there are changes in the antenna structure model. The procedure employs structural modification reanalysis methods combined with new extensions of correlation analysis to provide the revised rms pathlength error. Mainframe finite-element-method processing of the structure model is required only for the initial unmodified structure, and elementary postprocessor computations develop and deal with the effects of the changes. Several illustrative computational examples are included. The procedure adapts readily to processing spectra of changes for parameter studies or sensitivity analyses.
NASA Technical Reports Server (NTRS)
Baxa, E. G., Jr.
1974-01-01
A theoretical formulation of differential and composite OMEGA error is presented to establish hypotheses about the functional relationships between various parameters and OMEGA navigational errors. Computer software developed to provide for extensive statistical analysis of the phase data is described. Results from the regression analysis used to conduct parameter sensitivity studies on differential OMEGA error tend to validate the theoretically based hypothesis concerning the relationship between uncorrected differential OMEGA error and receiver separation range and azimuth. Limited results of measurement of receiver repeatability error and line of position measurement error are also presented.
Chen, Jun; Zhou, Xueqing; Ma, Yingjun; Lin, Xiulian; Dai, Zong; Zou, Xiaoyong
2016-01-01
The sensitive and specific analysis of microRNAs (miRNAs) without using a thermal cycler instrument is significant and would greatly facilitate biological research and disease diagnostics. Although exponential amplification reaction (EXPAR) is the most attractive strategy for the isothermal analysis of miRNAs, its intrinsic limitations of detection efficiency and inevitable non-specific amplification critically restrict its use in analytical sensitivity and specificity. Here, we present a novel asymmetric EXPAR based on a new biotin/toehold featured template. A biotin tag was used to reduce the melting temperature of the primer/template duplex at the 5′ terminus of the template, and a toehold exchange structure acted as a filter to suppress the non-specific trigger of EXPAR. The asymmetric EXPAR exhibited great improvements in amplification efficiency and specificity as well as a dramatic extension of dynamic range. The limit of detection for the let-7a analysis was decreased to 6.02 copies (0.01 zmol), and the dynamic range was extended to 10 orders of magnitude. The strategy enabled the sensitive and accurate analysis of let-7a miRNA in human cancer tissues with clearly better precision than both standard EXPAR and RT-qPCR. Asymmetric EXPAR is expected to have an important impact on the development of simple and rapid molecular diagnostic applications for short oligonucleotides. PMID:27257058
Shape optimization of three-dimensional stamped and solid automotive components
NASA Technical Reports Server (NTRS)
Botkin, M. E.; Yang, R.-J.; Bennett, J. A.
1987-01-01
The shape optimization of realistic, 3-D automotive components is discussed. The integration of the major parts of the total process: modeling, mesh generation, finite element and sensitivity analysis, and optimization are stressed. Stamped components and solid components are treated separately. For stamped parts a highly automated capability was developed. The problem description is based upon a parameterized boundary design element concept for the definition of the geometry. Automatic triangulation and adaptive mesh refinement are used to provide an automated analysis capability which requires only boundary data and takes into account sensitivity of the solution accuracy to boundary shape. For solid components a general extension of the 2-D boundary design element concept has not been achieved. In this case, the parameterized surface shape is provided using a generic modeling concept based upon isoparametric mapping patches which also serves as the mesh generator. Emphasis is placed upon the coupling of optimization with a commercially available finite element program. To do this it is necessary to modularize the program architecture and obtain shape design sensitivities using the material derivative approach so that only boundary solution data is needed.
NASA Astrophysics Data System (ADS)
Yahya, W. N. W.; Zaini, S. S.; Ismail, M. A.; Majid, T. A.; Deraman, S. N. C.; Abdullah, J.
2018-04-01
Damage due to wind-related disasters is increasing due to global climate change. Many studies have been conducted to study the wind effect surrounding low-rise buildings using wind tunnel tests or numerical simulations. The use of numerical simulation is relatively cheap but requires very good command in handling the software, acquiring the correct input parameters, and obtaining the optimum grid or mesh. However, before a study can be conducted, a grid sensitivity test must be performed to determine a suitable cell count for the final model, so that an accurate result is obtained with less computing time. This study demonstrates the numerical procedures for conducting a grid sensitivity analysis using five models with different grid schemes. The pressure coefficients (CP) were observed along the wall and roof profile and compared between the models. The results showed that the medium grid scheme can be used in place of a finer grid scheme and still produce high-accuracy results, as the difference in the CP values was found to be insignificant.
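In practice such a grid sensitivity test boils down to comparing CP at matched monitoring points across successively refined meshes and accepting the coarsest mesh whose change from the next finer one is below a tolerance. The CP values and tolerance below are illustrative only.

```python
import numpy as np

# Grid-independence check on surface pressure coefficients from
# successively refined meshes at matched monitoring points.
cp = {
    "coarse": np.array([0.98, 0.41, -0.62, -1.10]),   # illustrative CP
    "medium": np.array([1.00, 0.43, -0.65, -1.15]),
    "fine":   np.array([1.00, 0.44, -0.65, -1.16]),
}

tol = 0.02
for a, b in [("coarse", "medium"), ("medium", "fine")]:
    diff = np.max(np.abs(cp[a] - cp[b]))
    print(f"{a} vs {b}: max |dCP| = {diff:.3f}",
          "-> grid-independent" if diff < tol else "-> refine further")
```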
Lee, Ho-Won; Muniyappa, Ranganath; Yan, Xu; Yue, Lilly Q.; Linden, Ellen H.; Chen, Hui; Hansen, Barbara C.
2011-01-01
The euglycemic glucose clamp is the reference method for assessing insulin sensitivity in humans and animals. However, clamps are ill-suited for large studies because of extensive requirements for cost, time, labor, and technical expertise. Simple surrogate indexes of insulin sensitivity/resistance including quantitative insulin-sensitivity check index (QUICKI) and homeostasis model assessment (HOMA) have been developed and validated in humans. However, validation studies of QUICKI and HOMA in both rats and mice suggest that differences in metabolic physiology between rodents and humans limit their value in rodents. Rhesus monkeys are a species more similar to humans than rodents. Therefore, in the present study, we evaluated data from 199 glucose clamp studies obtained from a large cohort of 86 monkeys with a broad range of insulin sensitivity. Data were used to evaluate simple surrogate indexes of insulin sensitivity/resistance (QUICKI, HOMA, Log HOMA, 1/HOMA, and 1/Fasting insulin) with respect to linear regression, predictive accuracy using a calibration model, and diagnostic performance using receiver operating characteristic. Most surrogates had modest linear correlations with SIClamp (r ≈ 0.4–0.64) with comparable correlation coefficients. Predictive accuracy determined by calibration model analysis demonstrated better predictive accuracy of QUICKI than HOMA and Log HOMA. Receiver operating characteristic analysis showed equivalent sensitivity and specificity of most surrogate indexes to detect insulin resistance. Thus, unlike in rodents but similar to humans, surrogate indexes of insulin sensitivity/resistance including QUICKI and log HOMA may be reasonable to use in large studies of rhesus monkeys where it may be impractical to conduct glucose clamp studies. PMID:21209021
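For reference, the surrogate indexes evaluated here are simple closed-form functions of fasting glucose and insulin; a sketch with illustrative fasting values (glucose in mg/dL, insulin in µU/mL):

```python
import math

def quicki(g0_mg_dl, i0_uU_ml):
    """QUICKI = 1 / (log10 fasting glucose + log10 fasting insulin)."""
    return 1.0 / (math.log10(g0_mg_dl) + math.log10(i0_uU_ml))

def homa_ir(g0_mg_dl, i0_uU_ml):
    """HOMA-IR with glucose in mg/dL and insulin in uU/mL."""
    return g0_mg_dl * i0_uU_ml / 405.0

g0, i0 = 90.0, 12.0          # illustrative fasting values
print(f"QUICKI   = {quicki(g0, i0):.3f}")
print(f"HOMA-IR  = {homa_ir(g0, i0):.2f}")
print(f"log HOMA = {math.log10(homa_ir(g0, i0)):.3f}")
print(f"1/HOMA   = {1/homa_ir(g0, i0):.3f}")
```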
An optimal search filter for retrieving systematic reviews and meta-analyses
2012-01-01
Background Health-evidence.ca is an online registry of systematic reviews evaluating the effectiveness of public health interventions. Extensive searching of bibliographic databases is required to keep the registry up to date. However, search filters have been developed to assist in searching the extensive amount of published literature indexed. Search filters can be designed to find literature related to a certain subject (i.e. content-specific filter) or particular study designs (i.e. methodological filter). The objective of this paper is to describe the development and validation of the health-evidence.ca Systematic Review search filter and to compare its performance to other available systematic review filters. Methods This analysis of search filters was conducted in MEDLINE, EMBASE, and CINAHL. The performance of thirty-one search filters in total was assessed. A validation data set of 219 articles indexed between January 2004 and December 2005 was used to evaluate performance on sensitivity, specificity, precision and the number needed to read for each filter. Results Nineteen of 31 search filters were effective in retrieving a high level of relevant articles (sensitivity scores greater than 85%). The majority achieved a high degree of sensitivity at the expense of precision and yielded large result sets. The main advantage of the health-evidence.ca Systematic Review search filter in comparison to the other filters was that it maintained the same level of sensitivity while reducing the number of articles that needed to be screened. Conclusions The health-evidence.ca Systematic Review search filter is a useful tool for identifying published systematic reviews, with further screening to identify those evaluating the effectiveness of public health interventions. The filter that narrows the focus saves considerable time and resources during updates of this online resource, without sacrificing sensitivity. PMID:22512835
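The four performance measures reported in such filter validations are straightforward to compute from retrieval counts; the counts below are hypothetical, not those of the health-evidence.ca study.

```python
def filter_performance(tp, fp, tn, fn):
    """Standard retrieval metrics for a search filter validation set."""
    sensitivity = tp / (tp + fn)        # relevant articles retrieved
    specificity = tn / (tn + fp)        # irrelevant articles excluded
    precision = tp / (tp + fp)          # retrieved articles that are relevant
    nnr = 1.0 / precision               # number needed to read per relevant hit
    return sensitivity, specificity, precision, nnr

# Hypothetical counts for one filter against a validation data set.
sens, spec, prec, nnr = filter_performance(tp=190, fp=540, tn=4200, fn=29)
print(f"sensitivity={sens:.2%} specificity={spec:.2%} "
      f"precision={prec:.2%} NNR={nnr:.1f}")
```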
Space station integrated wall design and penetration damage control
NASA Technical Reports Server (NTRS)
Coronado, A. R.; Gibbins, M. N.; Wright, M. A.; Stern, P. H.
1987-01-01
The analysis code BUMPER executes a numerical solution to the problem of calculating the probability of no penetration (PNP) of a spacecraft subject to man-made orbital debris or meteoroid impact. The code, written in FORTRAN 77 with no VAX extensions, was developed on a DEC VAX 11/780 computer running the Virtual Memory System (VMS) operating system. To help illustrate the steps involved, a single sample analysis is performed. The example used is the space station reference configuration. The finite element model (FEM) of this configuration is relatively complex but demonstrates many BUMPER features. The computer tools and guidelines are described for constructing a FEM for the space station under consideration. The methods used to analyze the sensitivity of PNP to variations in design are described. Ways are suggested for developing contour plots of the sensitivity study data. Additional BUMPER analysis examples are provided, including FEMs, command inputs, and data outputs. The mathematical theory used as the basis for the code is described, illustrating the data flow within the analysis.
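At its core, a PNP calculation of this kind treats penetrating impacts as a Poisson process, so PNP = exp(-N) with N the expected number of penetrating impacts; BUMPER sums such terms over the finite elements of the wall model, which the single-term sketch below (with illustrative numbers) collapses to one exposed area.

```python
import math

def pnp(flux, area, years, p_pen):
    """Probability of no penetration under a Poisson impact model:
    expected penetrating impacts N = flux * area * time * P(penetration)."""
    n_expected = flux * area * years * p_pen
    return math.exp(-n_expected)

# Illustrative numbers only: debris flux (impacts/m^2/yr), exposed wall
# area (m^2), mission duration (yr), and per-impact penetration probability.
print(f"PNP = {pnp(flux=1e-5, area=500.0, years=10.0, p_pen=0.05):.4f}")
```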
Simultaneous Aerodynamic and Structural Design Optimization (SASDO) for a 3-D Wing
NASA Technical Reports Server (NTRS)
Gumbert, Clyde R.; Hou, Gene J.-W.; Newman, Perry A.
2001-01-01
The formulation and implementation of an optimization method called Simultaneous Aerodynamic and Structural Design Optimization (SASDO) is shown as an extension of the Simultaneous Aerodynamic Analysis and Design Optimization (SAADO) method. It is extended by the inclusion of structure element sizing parameters as design variables and Finite Element Method (FEM) analysis responses as constraints. The method aims to reduce the computational expense incurred in performing shape and sizing optimization using state-of-the-art Computational Fluid Dynamics (CFD) flow analysis, FEM structural analysis, and sensitivity analysis tools. SASDO is applied to a simple, isolated, 3-D wing in inviscid flow. Results show that the method finds the same local optimum as a conventional optimization method, with some reduction in the computational cost and without significant modifications to the analysis tools.
van Dijk, R; van Assen, M; Vliegenthart, R; de Bock, G H; van der Harst, P; Oudkerk, M
2017-11-27
Stress cardiovascular magnetic resonance (CMR) perfusion imaging is a promising modality for the evaluation of coronary artery disease (CAD) due to high spatial resolution and absence of radiation. Semi-quantitative and quantitative analysis of CMR perfusion are based on signal-intensity curves produced during the first pass of gadolinium contrast. Multiple semi-quantitative and quantitative parameters have been introduced. Diagnostic performance of these parameters varies extensively among studies and standardized protocols are lacking. This study aims to determine the diagnostic accuracy of semi-quantitative and quantitative CMR perfusion parameters, compared to multiple reference standards. PubMed, Web of Science, and Embase were systematically searched using predefined criteria (3272 articles). A check for duplicates was performed (1967 articles). Eligibility and relevance of the articles were determined by two reviewers using predefined criteria. The primary data extraction was performed independently by two researchers with the use of a predefined template. Differences in extracted data were resolved by discussion between the two researchers. The quality of the included studies was assessed using the 'Quality Assessment of Diagnostic Accuracy Studies Tool' (QUADAS-2). True positives, false positives, true negatives, and false negatives were extracted or calculated from the articles. The principal summary measures used to assess diagnostic accuracy were sensitivity, specificity, and area under the receiver operating characteristic curve (AUC). Data were pooled according to analysis territory, reference standard, and perfusion parameter. Twenty-two articles were eligible based on the predefined study eligibility criteria. The pooled diagnostic accuracy for segment-, territory-, and patient-based analyses showed good diagnostic performance with sensitivity of 0.88, 0.82, and 0.83, specificity of 0.72, 0.83, and 0.76, and AUC of 0.90, 0.84, and 0.87, respectively. In per-territory analysis, our results show similar diagnostic accuracy comparing anatomical (AUC 0.86 (0.83-0.89)) and functional reference standards (AUC 0.88 (0.84-0.90)). Only the per-territory analysis sensitivity did not show significant heterogeneity. None of the groups showed signs of publication bias. The clinical value of semi-quantitative and quantitative CMR perfusion analysis remains uncertain due to extensive inter-study heterogeneity and large differences in CMR perfusion acquisition protocols, reference standards, and methods of assessment of myocardial perfusion parameters. For widespread implementation, standardization of CMR perfusion techniques is essential. CRD42016040176.
Spatial recurrence analysis: A sensitive and fast detection tool in digital mammography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prado, T. L.; Galuzio, P. P.; Lopes, S. R.
Efficient diagnostics of breast cancer requires fast digital mammographic image processing. Many breast lesions, both benign and malignant, are barely visible to the untrained eye and require accurate and reliable methods of image processing. We propose a new method of digital mammographic image analysis that meets both needs. It uses the concept of spatial recurrence as the basis of a spatial recurrence quantification analysis, which is the spatial extension of the well-known time recurrence analysis. The recurrence-based quantifiers are able to evidence breast lesions as well as the best standard image processing methods available, but with better control over the spurious fragments in the image.
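A minimal sketch of the underlying idea, assuming small grayscale tiles, a Euclidean distance threshold eps, and the recurrence-rate quantifier (the published quantifiers are more elaborate):

```python
import numpy as np

def spatial_recurrence_rate(image, eps, win=3):
    """Toy spatial recurrence quantifier for a small grayscale tile."""
    h, w = image.shape
    # Each win-by-win patch becomes a state vector, the spatial analogue
    # of the embedded states used in time recurrence analysis.
    patches = np.array([image[i:i + win, j:j + win].ravel()
                        for i in range(h - win + 1)
                        for j in range(w - win + 1)])
    # Recurrence matrix: pairs of patches closer than eps are recurrent.
    dists = np.linalg.norm(patches[:, None, :] - patches[None, :, :], axis=-1)
    return (dists < eps).mean()   # recurrence rate

tile = np.random.rand(16, 16)     # stand-in for a mammogram region
print(spatial_recurrence_rate(tile, eps=0.5))
```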
Wright, M J; Bishop, D T; Jackson, R C; Abernethy, B
2011-08-18
Badminton players of varying skill levels viewed normal and point-light video clips of opponents striking the shuttle towards the viewer; their task was to predict in which quadrant of the court the shuttle would land. In a whole-brain fMRI analysis we identified bilateral cortical networks sensitive to the anticipation task relative to control stimuli. This network is more extensive and localised than previously reported. Voxel clusters responding more strongly in experts than novices were associated with all task-sensitive areas, whereas voxels responding more strongly in novices were found outside these areas. Task-sensitive areas for normal and point-light video were very similar, whereas early visual areas responded differentially, indicating the primacy of kinematic information for sport-related anticipation. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Nikolopoulou, Eleni; Lorusso, Massimo; Micelli Ferrari, Luisa; Cicinelli, Maria Vittoria; Bandello, Francesco; Querques, Giuseppe; Micelli Ferrari, Tommaso
2018-01-01
Optical coherence tomography angiography (OCTA) could be a valid tool to detect choroidal neovascularization (CNV) in neovascular age-related macular degeneration (nAMD), allowing the analysis of the type, the morphology, and the extension of CNV in most of the cases. To determine the sensitivity and specificity of OCTA in detecting CNV secondary to nAMD, compared to fluorescein angiography (FA) and indocyanine green angiography (ICGA). Prospective observational study. Patients with suspected nAMD were recruited between May and December 2016. Patients underwent FA, ICGA, spectral domain OCT, and OCTA (AngioVue, Optovue, Inc.). Sensitivity and specificity of FA, with or without ICGA, were assessed and compared with OCTA. Seventy eyes of 70 consecutive patients were included: 32 eyes (45.7%) with type I CNV, 8 eyes (11.4%) with type II CNV, 4 eyes (5.7%) with type III CNV, 6 eyes (8.6%) with mixed type I and type II CNV, and 20 eyes (28.6%) with no CNV. Sensitivity of OCTA was 88% and specificity was 90%. Concordance between FA/ICGA and OCTA was very good (0.91; range 0.81-1.00). OCTA showed high sensitivity and specificity for detection of CNV. Concordance between OCTA and gold-standard dye-based techniques was excellent. OCTA may represent a first-line noninvasive method for the diagnosis of nAMD.
Use of SCALE Continuous-Energy Monte Carlo Tools for Eigenvalue Sensitivity Coefficient Calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perfetti, Christopher M; Rearden, Bradley T
2013-01-01
The TSUNAMI code within the SCALE code system makes use of eigenvalue sensitivity coefficients for an extensive number of criticality safety applications, such as quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different critical systems, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved fidelity and the desire to extend TSUNAMI analysis to advanced applications has motivated the development of a methodology for calculating sensitivity coefficients in continuous-energy (CE) Monte Carlo applications. The CLUTCH and Iterated Fission Probability (IFP) eigenvalue sensitivity methods were recently implemented in the CE KENO framework to generate the capability for TSUNAMI-3D to perform eigenvalue sensitivity calculations in continuous-energy applications. This work explores the improvements in accuracy that can be gained in eigenvalue and eigenvalue sensitivity calculations through the use of the SCALE CE KENO and CE TSUNAMI continuous-energy Monte Carlo tools as compared to multigroup tools. The CE KENO and CE TSUNAMI tools were used to analyze two difficult models of critical benchmarks, and produced eigenvalue and eigenvalue sensitivity coefficient results that showed a marked improvement in accuracy. The CLUTCH sensitivity method in particular excelled in terms of efficiency and computational memory requirements.
Functional optical coherence tomography: principles and progress
NASA Astrophysics Data System (ADS)
Kim, Jina; Brown, William; Maher, Jason R.; Levinson, Howard; Wax, Adam
2015-05-01
In the past decade, several functional extensions of optical coherence tomography (OCT) have emerged, and this review highlights key advances in instrumentation, theoretical analysis, signal processing and clinical application of these extensions. We review five principal extensions: Doppler OCT (DOCT), polarization-sensitive OCT (PS-OCT), optical coherence elastography (OCE), spectroscopic OCT (SOCT), and molecular imaging OCT. The former three have been further developed with studies in both ex vivo and in vivo human tissues. This review emphasizes the newer techniques of SOCT and molecular imaging OCT, which show excellent potential for clinical application but have yet to be well reviewed in the literature. SOCT elucidates tissue characteristics, such as oxygenation and carcinogenesis, by detecting wavelength-dependent absorption and scattering of light in tissues. While SOCT measures endogenous biochemical distributions, molecular imaging OCT detects exogenous molecular contrast agents. These newer advances in functional OCT broaden the potential clinical application of OCT by providing novel ways to understand tissue activity that cannot be accomplished by other current imaging methodologies.
Waste-to-energy: A review of life cycle assessment and its extension methods.
Zhou, Zhaozhi; Tang, Yuanjun; Chi, Yong; Ni, Mingjiang; Buekens, Alfons
2018-01-01
This article proposes a comprehensive review of evaluation tools based on life cycle thinking, as applied to waste-to-energy. Habitually, life cycle assessment is adopted to assess the environmental burdens associated with waste-to-energy initiatives. Based on this framework, several extension methods have been developed to focus on specific aspects: exergetic life cycle assessment for reducing resource depletion, life cycle costing for evaluating its economic burden, and social life cycle assessment for recording its social impacts. Additionally, the environment-energy-economy model integrates both life cycle assessment and life cycle costing methods and simultaneously judges these three features for sustainable waste-to-energy conversion. Life cycle assessment of waste-to-energy is well developed, with concrete data inventories and sensitivity analysis, although data and model uncertainty are unavoidable. Compared with life cycle assessment, only a few evaluations of waste-to-energy techniques have been conducted using the extension methods, and their methodology and application need further development. Finally, this article succinctly summarises some recommendations for further research.
Wei, Binnian; McGuffey, James E; Blount, Benjamin C; Wang, Lanqing
2016-01-01
Maternal exposure to marijuana during the lactation period, either active or passive, has prompted concerns about transmission of cannabinoids to breastfed infants and possible subsequent adverse health consequences. Assessing these health risks requires a sensitive analytical approach that is able to quantitatively measure trace-level cannabinoids in breast milk. Here, we describe a saponification-solid phase extraction approach combined with ultra-high-pressure liquid chromatography-tandem mass spectrometry for simultaneously quantifying Δ9-tetrahydrocannabinol (THC), cannabidiol (CBD), and cannabinol (CBN) in breast milk. We demonstrate for the first time that constraints on sensitivity can be overcome by utilizing alkaline saponification of the milk samples. After extensively optimizing the saponification procedure, the validated method exhibited limits of detection of 13, 4, and 66 pg/mL for THC, CBN, and CBD, respectively. Notably, the sensitivity achieved was significantly improved; for instance, the limit of detection for THC is at least 100-fold lower than those previously reported in the literature. This is essential for monitoring cannabinoids in breast milk resulting from passive or nonrecent active maternal exposure. Furthermore, we simultaneously acquired multiple reaction monitoring transitions for 12C- and 13C-analyte isotopes. This combined analysis largely facilitated data acquisition by reducing the repetitive analysis rate for samples exceeding the linear limits of the 12C-analytes. In addition to high sensitivity and broad quantitation range, this method delivers excellent accuracy (relative error within ±10%), precision (relative standard deviation <10%), and efficient analysis. In future studies, we expect this method to play a critical role in assessing infant exposure to cannabinoids through breastfeeding.
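As background, a limit of detection of this kind is often estimated from a calibration curve with the ICH-style 3.3·σ/slope rule; a sketch with made-up calibration points (not the paper's data):

```python
import numpy as np

# Hypothetical calibration: spiked THC concentration (pg/mL) vs. peak-area ratio.
conc = np.array([10., 25., 50., 100., 250., 500.])
response = np.array([0.021, 0.049, 0.102, 0.198, 0.507, 1.012])

slope, intercept = np.polyfit(conc, response, 1)
residuals = response - (slope * conc + intercept)
sigma = residuals.std(ddof=2)      # residual standard deviation of the fit

print(f"LOD ~ {3.3 * sigma / slope:.1f} pg/mL")
print(f"LOQ ~ {10.0 * sigma / slope:.1f} pg/mL")
```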
Root gravitropism in maize and Arabidopsis
NASA Technical Reports Server (NTRS)
Evans, Michael L.
1993-01-01
Research during the period 1 March 1992 to 30 November 1993 focused on improvements in a video digitizer system designed to automate the recording of surface extension in plants responding to gravistimulation. The improvements included modification of software to allow detailed analysis of localized extension patterns in roots of Arabidopsis. We used the system to analyze the role of the postmitotic isodiametric growth zone (a region between the meristem and the elongation zone) in the response of maize roots to auxin, calcium, touch and gravity. We also used the system to analyze short-term auxin and gravitropic responses in mutants of Arabidopsis with reduced auxin sensitivity. In a related project, we studied the relationship between growth rate and surface electrical currents in roots by examining the effects of gravity and thigmostimulation on surface potentials in maize roots.
Genetic Variation in Taste Sensitivity to Sugars in Drosophila melanogaster.
Uchizono, Shun; Tanimura, Teiichi
2017-05-01
Taste sensitivity plays a major role in controlling feeding behavior, and alterations in feeding habit induced by changes in taste sensitivity can drive speciation. We investigated variability in taste preferences in wild-derived inbred lines from the Drosophila melanogaster Genetic Reference Panel. Preferences for different sugars, which are essential nutrients for fruit flies, were assessed using two-choice preference tests that paired glucose with fructose, sucrose, or trehalose. The two-choice tests revealed that individual lines have differential and widely variable sugar preferences, and that sugar taste sensitivity is polygenic in the inbred population tested. We focused on 2 strains that exhibited opposing preferences for glucose and fructose, and performed proboscis extension reflex tests and electrophysiological recordings on taste sensilla upon exposure to fructose and glucose. The results indicated that taste sensitivity to fructose is dimorphic between the 2 lines. Genetic analysis showed that high sensitivity to fructose is autosomal dominant over low sensitivity, and that multiple loci on chromosomes 2 and 3 influence sensitivity. Further genetic complementation tests for fructose sensitivity on putative gustatory receptor (Gr) genes for sugars suggested that the Gr64a-Gr64f locus, not the fructose receptor gene Gr43a, might contribute to the dimorphic sensitivity to fructose between the 2 lines. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Measurement of the Muon Production Depths at the Pierre Auger Observatory
Collica, Laura
2016-09-08
The muon content of extensive air showers is an observable sensitive to the primary composition and to the hadronic interaction properties. The Pierre Auger Observatory uses water-Cherenkov detectors to measure particle densities at the ground and therefore is sensitive to the muon content of air showers. We present here a method which allows us to estimate the muon production depths by exploiting the measurement of the muon arrival times at the ground recorded with the Surface Detector of the Pierre Auger Observatory. The analysis is performed in a large range of zenith angles, thanks to the capability of estimating and subtracting the electromagnetic component, and for energies between 10^19.2 and 10^20 eV.
Steinka-Fry, Katarzyna T; Tanner-Smith, Emily E; Dakof, Gayle A; Henderson, Craig
2017-04-01
This systematic review and meta-analysis synthesized findings from studies examining culturally sensitive substance use treatment for racial/ethnic minority youth. An extensive literature search located eight eligible studies using experimental or quasi-experimental designs. The meta-analysis quantitatively synthesized findings comparing seven culturally sensitive treatment conditions to seven alternative conditions on samples composed of at least 90% racial/ethnic minority youth. The results from the meta-analysis indicated that culturally sensitive treatments were associated with significantly larger reductions in post-treatment substance use levels relative to their comparison conditions (g=0.37, 95% CI [0.12, 0.62], k=7, total number of participants=723). The average time between pretest and posttest was 21 weeks (SD=11.79). There was a statistically significant amount of heterogeneity across the seven studies (Q=26.5, p=0.00, τ²=0.08, I²=77.4%). Differential effects were not statistically significant when the contrasts were active generic counterparts of the treatment conditions (direct "bona fide" comparisons; g=-0.08, 95% CI [-0.51, 0.35]) or 'treatment as usual' conditions (g=0.39, 95% CI [-0.14, 0.91]). Strong conclusions from the review were hindered by the small number of available studies for synthesis, variability in comparison conditions across studies, and lack of diversity in the adolescent clients served in the studies. Nonetheless, this review suggests that culturally sensitive treatments offer promise as an effective way to address substance use among racial/ethnic minority youth. Copyright © 2017 Elsevier Inc. All rights reserved.
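The pooled effect and heterogeneity statistics reported above (g, CI, Q, τ², I²) have the form produced by a standard DerSimonian-Laird random-effects computation; a sketch with hypothetical per-study effects, not the studies synthesized here:

```python
import numpy as np

def dersimonian_laird(g, se):
    """Random-effects pooling of standardized mean differences."""
    w = 1.0 / se**2                            # fixed-effect weights
    g_fixed = np.sum(w * g) / np.sum(w)
    Q = np.sum(w * (g - g_fixed)**2)           # Cochran's Q
    df = len(g) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / c)              # between-study variance
    I2 = max(0.0, 100.0 * (Q - df) / Q)        # % heterogeneity
    w_re = 1.0 / (se**2 + tau2)                # random-effects weights
    g_re = np.sum(w_re * g) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    return g_re, (g_re - 1.96 * se_re, g_re + 1.96 * se_re), Q, tau2, I2

g = np.array([0.10, 0.55, 0.32, 0.71, -0.05, 0.48, 0.40])   # invented
se = np.array([0.18, 0.20, 0.15, 0.25, 0.22, 0.19, 0.21])
print(dersimonian_laird(g, se))
```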
Hydrologic sensitivity of headwater catchments to climate and landscape variability
NASA Astrophysics Data System (ADS)
Kelleher, Christa; Wagener, Thorsten; McGlynn, Brian; Nippgen, Fabian; Jencso, Kelsey
2013-04-01
Headwater streams cumulatively represent an extensive portion of the United States stream network, yet remain largely unmonitored and unmapped. As such, we have limited understanding of how these systems will respond to change, knowledge that is important for preserving these unique ecosystems, the services they provide, and the biodiversity they support. We compare responses across five adjacent headwater catchments located in Tenderfoot Creek Experimental Forest in Montana, USA, to understand how local differences may affect the sensitivity of headwaters to change. We utilize global, variance-based sensitivity analysis to understand which aspects of the physical system (e.g., vegetation, topography, geology) control the variability in hydrologic behavior across these basins, and how this varies as a function of time (and therefore climate). Basin fluxes and storages, including evapotranspiration, snow water equivalent and melt, soil moisture and streamflow, are simulated using the Distributed Hydrology-Vegetation-Soil Model (DHSVM). Sensitivity analysis is applied to quantify the importance of different physical parameters to the spatial and temporal variability of different water balance components, allowing us to map similarities and differences in these controls through space and time. Our results show how catchment influences on fluxes vary across seasons (thus providing insight into transferability of knowledge in time), and how they vary across catchments with different physical characteristics (providing insight into transferability in space).
Walmsley, Christopher W; McCurry, Matthew R; Clausen, Phillip D; McHenry, Colin R
2013-01-01
Finite element analysis (FEA) is a computational technique of growing popularity in the field of comparative biomechanics, and is an easily accessible platform for form-function analyses of biological structures. However, its rapid evolution in recent years from a novel approach to common practice demands some scrutiny in regards to the validity of results and the appropriateness of assumptions inherent in setting up simulations. Both validation and sensitivity analyses remain unexplored in many comparative analyses, and assumptions considered to be 'reasonable' are often assumed to have little influence on the results and their interpretation. Here we report an extensive sensitivity analysis where high resolution finite element (FE) models of mandibles from seven species of crocodile were analysed under loads typical for comparative analysis: biting, shaking, and twisting. Simulations explored the effect on both the absolute response and the interspecies pattern of results to variations in commonly used input parameters. Our sensitivity analysis focuses on assumptions relating to the selection of material properties (heterogeneous or homogeneous), scaling (standardising volume, surface area, or length), tooth position (front, mid, or back tooth engagement), and linear load case (type of loading for each feeding type). Our findings show that in a comparative context, FE models are far less sensitive to the selection of material property values and scaling to either volume or surface area than they are to those assumptions relating to the functional aspects of the simulation, such as tooth position and linear load case. Results show a complex interaction between simulation assumptions, depending on the combination of assumptions and the overall shape of each specimen. Keeping assumptions consistent between models in an analysis does not ensure that results can be generalised beyond the specific set of assumptions used. Logically, different comparative datasets would also be sensitive to identical simulation assumptions; hence, modelling assumptions should undergo rigorous selection. The accuracy of input data is paramount, and simulations should focus on taking biological context into account. Ideally, validation of simulations should be addressed; however, where validation is impossible or unfeasible, sensitivity analyses should be performed to identify which assumptions have the greatest influence upon the results.
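The factorial structure of such an analysis is straightforward to script; a sketch in which run_fe_model is a hypothetical stand-in for the actual FE solve:

```python
import itertools

def run_fe_model(material, scaling, tooth, load):
    """Hypothetical stand-in for an FE solve; in a real analysis this
    would return, e.g., peak von Mises stress for one specimen."""
    return hash((material, scaling, tooth, load)) % 100  # dummy response

materials = ["homogeneous", "heterogeneous"]
scalings = ["volume", "surface_area", "length"]
teeth = ["front", "mid", "back"]
loads = ["biting", "shaking", "twisting"]

results = {combo: run_fe_model(*combo)
           for combo in itertools.product(materials, scalings, teeth, loads)}
# Comparing responses across combinations shows which assumptions dominate
# the pattern of results; here, tooth position and load case.
```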
Gender-Sensitive Approaches to Extension Programme Design
ERIC Educational Resources Information Center
Jafry, Tahseen; Sulaiman, V. Rasheed
2013-01-01
Purpose: Though women are engaged in farming and play a major role in almost every agricultural operation, they continue to receive very limited extension support. While several interventions have been made to address this "gender" bias in extension delivery, there continues to be a shortfall between the kind of support that is provided…
Li, Guimin; Li, Wangfeng; Liu, Lixia
2012-01-01
Real-time PCR has engendered wide acceptance for quantitation of hepatitis B virus (HBV) DNA in the blood due to its improved rapidity, sensitivity, reproducibility, and reduced contamination. Here we describe a cost-effective and highly sensitive HBV real-time quantitative assay based on the light upon extension real-time PCR platform and a simple and reliable HBV DNA preparation method using silica-coated magnetic beads.
2011-01-01
... promised to provide an extensive, definitive review critically assessing our current understanding of DZ structure and chemistry, and providing a documented…
Polymerase chain displacement reaction.
Harris, Claire L; Sanchez-Vargas, Irma J; Olson, Ken E; Alphey, Luke; Fu, Guoliang
2013-02-01
Quantitative PCR assays are now the standard method for viral diagnostics. These assays must be specific, as well as sensitive, to detect the potentially low starting copy number of viral genomic material. We describe a new technique, polymerase chain displacement reaction (PCDR), which uses multiple nested primers in a rapid, capped, one-tube reaction that increases the sensitivity of normal quantitative PCR (qPCR) assays. Sensitivity was increased by approximately 10-fold in a proof-of-principle test on dengue virus sequence. In PCDR, when extension occurs from the outer primer, it displaces the extension strand produced from the inner primer by utilizing a polymerase that has strand displacement activity. This allows a greater than 2-fold increase of amplification product for each amplification cycle and therefore increased sensitivity and speed over conventional PCR. Increased sensitivity in PCDR would be useful in nucleic acid detection for viral diagnostics.
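A toy calculation of why a per-cycle amplification factor above 2 increases speed; the factor of 3 for a single nested outer/inner primer pair is an idealised assumption, not a measured value:

```python
# Conventional qPCR at best doubles product each cycle; PCDR's strand
# displacement yields more than one new strand per template per cycle.
start_copies, threshold = 10, 1e10   # arbitrary start / detection threshold

for factor, label in [(2.0, "qPCR"), (3.0, "PCDR (idealised)")]:
    copies, cycles = float(start_copies), 0
    while copies < threshold:
        copies *= factor
        cycles += 1
    print(f"{label}: {cycles} cycles to threshold")
```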
Radomyski, Artur; Giubilato, Elisa; Ciffroy, Philippe; Critto, Andrea; Brochot, Céline; Marcomini, Antonio
2016-11-01
The study is focused on applying uncertainty and sensitivity analysis to support the application and evaluation of large exposure models where a significant number of parameters and complex exposure scenarios might be involved. The recently developed MERLIN-Expo exposure modelling tool was applied to probabilistically assess the ecological and human exposure to PCB 126 and 2,3,7,8-TCDD in the Venice lagoon (Italy). The 'Phytoplankton', 'Aquatic Invertebrate', 'Fish', 'Human intake' and PBPK models available in MERLIN-Expo library were integrated to create a specific food web to dynamically simulate bioaccumulation in various aquatic species and in the human body over individual lifetimes from 1932 until 1998. MERLIN-Expo is a high tier exposure modelling tool allowing propagation of uncertainty on the model predictions through Monte Carlo simulation. Uncertainty in model output can be further apportioned between parameters by applying built-in sensitivity analysis tools. In this study, uncertainty has been extensively addressed in the distribution functions to describe the data input and the effect on model results by applying sensitivity analysis techniques (screening Morris method, regression analysis, and variance-based method EFAST). In the exposure scenario developed for the Lagoon of Venice, the concentrations of 2,3,7,8-TCDD and PCB 126 in human blood turned out to be mainly influenced by a combination of parameters (half-lives of the chemicals, body weight variability, lipid fraction, food assimilation efficiency), physiological processes (uptake/elimination rates), environmental exposure concentrations (sediment, water, food) and eating behaviours (amount of food eaten). In conclusion, this case study demonstrated feasibility of MERLIN-Expo to be successfully employed in integrated, high tier exposure assessment. Copyright © 2016 Elsevier B.V. All rights reserved.
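A minimal sketch of the Morris screening step described above, assuming the SALib package and a toy surrogate in place of the MERLIN-Expo models; parameter names and ranges are invented:

```python
import numpy as np
from SALib.sample.morris import sample
from SALib.analyze.morris import analyze

problem = {
    "num_vars": 3,
    "names": ["half_life", "body_weight", "intake_rate"],
    "bounds": [[10.0, 100.0], [50.0, 100.0], [0.1, 2.0]],
}

def blood_concentration(x):
    """Toy surrogate for a PBPK output (not the MERLIN-Expo model)."""
    half_life, body_weight, intake = x
    return intake * half_life / body_weight

X = sample(problem, N=100, num_levels=4)           # Morris trajectories
Y = np.apply_along_axis(blood_concentration, 1, X)
Si = analyze(problem, X, Y, num_levels=4)
print(dict(zip(problem["names"], Si["mu_star"])))  # screening ranking
```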
Nano-textured high sensitivity ion sensitive field effect transistors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hajmirzaheydarali, M.; Sadeghipari, M.; Akbari, M.
2016-02-07
Nano-textured gate engineered ion sensitive field effect transistors (ISFETs), suitable for high sensitivity pH sensors, have been realized. Utilizing a mask-less deep reactive ion etching results in ultra-fine poly-Si features on the gate of ISFET devices, where spacing of the order of 10 nm and less is achieved. Incorporation of these nano-sized features on the gate is responsible for high sensitivities up to 400 mV/pH, in contrast to conventional planar structures. The fabrication process for this transistor is inexpensive, and it is fully compatible with the standard complementary metal oxide semiconductor fabrication procedure. A theoretical modeling has also been presented to predict the extension of the diffuse layer into the electrolyte solution for highly featured structures and to correlate this extension with the high sensitivity of the device. The observed ultra-fine features by means of scanning electron microscopy and transmission electron microscopy tools corroborate the theoretical prediction.
Marrero-Alemán, Gabriel; Saavedra Santana, Pedro; Liuti, Federica; Hernández, Noelia; López-Jiménez, Esmeralda; Borrego, Leopoldo
Sensitivity to methylchloroisothiazolinone (MCI)/methylisothiazolinone (MI) has increased rapidly over recent years. This increase is mainly related to the extensive use of high concentrations of MI in cosmetic products, although a growing number of cases of occupational allergic contact dermatitis are caused by MCI/MI. The aim of this study was to examine the association between the increase in MCI/MI sensitization and the work performed by the patients in our area. A retrospective study was undertaken of the records of a total of 1179 patients who had undergone contact skin patch tests for MCI/MI from January 2005 to December 2015. A multivariate logistic regression analysis was performed to identify the factors independently associated with sensitivity to MCI/MI. A constant increase in MCI/MI sensitization was observed over the observation period. The only work associated with a significant increase in the prevalence of MCI/MI sensitization was cleaning, with 38.5% of the cleaning professionals with MCI/MI sensitization consulting for cosmetics-related dermatitis. Occupational sensitization to MCI/MI in cleaning professionals is worryingly increasing. This, in turn, could possibly account for many cases of cosmetics-associated contact dermatitis. Our findings suggest that a review of the regulations with regard to isothiazolinone concentrations in industrial and household detergents is necessary.
A conceptualisation framework for building consensus on environmental sensitivity.
González Del Campo, Ainhoa
2017-09-15
Examination of the intrinsic attributes of a system that render it more or less sensitive to potential stressors provides further insight into the baseline environment. In impact assessment, sensitivity of environmental receptors can be conceptualised on the basis of their: a) quality status according to statutory indicators and associated thresholds or targets; b) statutory protection; or c) inherent risk. Where none of these considerations are pertinent, subjective value judgments can be applied to determine sensitivity. This pragmatic conceptual framework formed the basis of a stakeholder consultation process for harmonising degrees of sensitivity of a number of environmental criteria. Harmonisation was sought to facilitate their comparative and combined analysis. Overall, full or wide agreement was reached on relative sensitivity values for the large majority of the reviewed criteria. Consensus was easier to reach on some themes (e.g. biodiversity, water and cultural heritage) than others (e.g. population and soils). As anticipated, existing statutory measures shaped the outcomes but, ultimately, knowledge-based values prevailed. The agreed relative sensitivities warrant extensive consultation but the conceptual framework provides a basis for increasing stakeholder consensus and objectivity of baseline assessments. This, in turn, can contribute to improving the evidence-base for characterising the significance of potential impacts. Copyright © 2017 Elsevier Ltd. All rights reserved.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-15
... Information Collection Activity Under OMB Review: Sensitive Security Information Threat Assessments AGENCY... Transportation Security Administration (TSA) has forwarded the Information Collection Request (ICR), Office of... of a party seeking access to sensitive security information (SSI) in a civil proceeding in Federal...
Holland, Troy; Bhat, Sham; Marcy, Peter; ...
2017-08-25
Oxy-fired coal combustion is a promising potential carbon capture technology. Predictive computational fluid dynamics (CFD) simulations are valuable tools in evaluating and deploying oxyfuel and other carbon capture technologies, either as retrofit technologies or for new construction. However, accurate predictive combustor simulations require physically realistic submodels with low computational requirements. A recent sensitivity analysis of a detailed char conversion model (Char Conversion Kinetics (CCK)) found thermal annealing to be an extremely sensitive submodel. In the present work, further analysis of the previous annealing model revealed significant disagreement with numerous datasets from experiments performed after that annealing model was developed. The annealing model was accordingly extended to reflect experimentally observed reactivity loss due to the thermal annealing of a variety of coals under diverse char preparation conditions. The model extension was informed by a Bayesian calibration analysis. In addition, since oxyfuel conditions include extraordinarily high levels of CO₂, the development of a first-ever CO₂ reactivity loss model due to annealing is presented.
Extension of the ADjoint Approach to a Laminar Navier-Stokes Solver
NASA Astrophysics Data System (ADS)
Paige, Cody
The use of adjoint methods is common in computational fluid dynamics to reduce the cost of the sensitivity analysis in an optimization cycle. The forward mode ADjoint is a combination of an adjoint sensitivity analysis method with a forward mode automatic differentiation (AD) and is a modification of the reverse mode ADjoint method proposed by Mader et al.[1]. A colouring acceleration technique is presented to reduce the computational cost increase associated with forward mode AD. The forward mode AD facilitates the implementation of the laminar Navier-Stokes (NS) equations. The forward mode ADjoint method is applied to a three-dimensional computational fluid dynamics solver. The resulting Euler and viscous ADjoint sensitivities are compared to the reverse mode Euler ADjoint derivatives and a complex-step method to demonstrate the reduced computational cost and accuracy. Both comparisons demonstrate the benefits of the colouring method and the practicality of using a forward mode AD. [1] Mader, C.A., Martins, J.R.R.A., Alonso, J.J., and van der Weide, E. (2008) ADjoint: An approach for the rapid development of discrete adjoint solvers. AIAA Journal, 46(4):863-873. doi:10.2514/1.29123.
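The core of forward-mode AD is propagating a value and its directional derivative together; a minimal dual-number sketch (illustrative only, not the ADjoint implementation):

```python
import math

class Dual:
    """Forward-mode AD value: carries f(x) and df/dx together."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)  # product rule
    __rmul__ = __mul__

def sin(x):
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

x = Dual(1.0, 1.0)            # seed dx/dx = 1
f = x * sin(x) + 2 * x        # f(x) = x sin x + 2x
print(f.val, f.dot)           # exact derivative, no finite-difference noise
```

In the same spirit, the colouring technique mentioned above amortises the per-direction cost of forward mode by propagating several structurally independent seed directions in one pass.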
Computer-aided communication satellite system analysis and optimization
NASA Technical Reports Server (NTRS)
Stagl, T. W.; Morgan, N. H.; Morley, R. E.; Singh, J. P.
1973-01-01
The capabilities and limitations of the various published computer programs for fixed/broadcast communication satellite system synthesis and optimization are discussed. A Satellite Telecommunication Analysis and Modeling Program (STAMP) for costing and sensitivity analysis work in the application of communication satellites to educational development is given. The modifications made to STAMP include: extension of the six-beam capability to eight; addition of generation of multiple beams from a single reflector system with an array of feeds; improved system costing to reflect the time value of money and growth in the earth terminal population with time, and to account for various measures of system reliability; inclusion of a model for scintillation at microwave frequencies in the communication link loss model; and an updated technological environment.
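The time-value-of-money modification amounts to discounting each year's expenditure to present value; a sketch with invented figures (not STAMP's actual cost model):

```python
def present_value(costs, rate):
    """Discount a stream of yearly costs back to year 0."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(costs))

# Invented: the earth terminal population grows, so recurring costs grow too.
terminals = [1000 * 1.15 ** t for t in range(8)]       # 15%/yr growth
costs = [50e6] + [2000 * n for n in terminals]         # launch + yearly O&M
print(f"PV of system cost: ${present_value(costs, rate=0.07):,.0f}")
```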
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bufoni, André Luiz, E-mail: bufoni@facc.ufrj.br; Oliveira, Luciano Basto; Rosa, Luiz Pinguelli
Highlights: • Projects are not financially attractive without registration as CDMs. • WM benchmarks and indicators are converging and reducing in variance. • A sensitivity analysis reveals that revenue has more of an effect on the financial results. • Results indicate that an extensive database would reduce WM project risk and capital costs. • Disclosure standards would make information more comparable worldwide. - Abstract: This study illustrates the financial analyses for demonstration and assessment of additionality presented in the project design documents (PDD) and enclosed documents of the 431 large Clean Development Mechanism (CDM) projects classified as the 'waste handling and disposal sector' (13) over the past ten years (2004-2014). The expected certified emissions reductions (CER) of these projects total 63.54 million metric tons of CO₂eq, where eight countries account for 311 projects and 43.36 million metric tons. All of the projects declare themselves 'not financially attractive' without CER, with an estimated sum of negative results of approximately half a billion US$. The results indicate that WM benchmarks and indicators are converging and reducing in variance, and the sensitivity analysis reveals that revenues have a greater effect on the financial results. This work concludes that an extensive financial database with simple standards for disclosure would greatly diminish statement problems and make information more comparable, reducing the risk and capital costs of WM projects.
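At bottom, the additionality demonstration is a net-present-value comparison with and without CER revenue; a sketch with invented figures, including the one-at-a-time sensitivity showing revenue dominating the result:

```python
def npv(cashflows, rate=0.10):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def project_npv(capex, opex, revenue, cer=0.0, years=10):
    return npv([-capex] + [revenue + cer - opex] * years)

base = dict(capex=10e6, opex=1.5e6, revenue=2.0e6)   # invented figures
print("without CER:", round(project_npv(**base) / 1e6, 2), "M$")
print("with CER:   ", round(project_npv(**base, cer=1.5e6) / 1e6, 2), "M$")

# One-at-a-time sensitivity: perturb revenue and opex by +/-10%.
for key in ("revenue", "opex"):
    for f in (0.9, 1.1):
        scen = {**base, key: base[key] * f}
        print(key, f, round(project_npv(**scen, cer=1.5e6) / 1e6, 2), "M$")
```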
Measuring the speed resolution of extensive air showers at the Southern Pierre Auger Observatory
NASA Astrophysics Data System (ADS)
Gesterling, Kathleen; Sarazin, Fred
2009-10-01
Ultra-high energy cosmic rays induce extensive air showers (EASs) in Earth's atmosphere which are assumed to propagate at the speed of light. The fluorescence detector (FD) at the Southern Pierre Auger Observatory detects the light signal from the EAS and directly measures the energy of the cosmic ray. When two or more FD sites observe an event, the geometry of the shower can be calculated independently of the velocity at which it is traveling. It is then possible to fit the time profile recorded in the FD using the shower speed as a free parameter. The analysis of a collection of stereo events allowed us to determine the speed resolution with which we can measure EASs, with sensitivity to subluminal components. Knowing the speed resolution, we can look for objects propagating significantly below the speed of light.
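A sketch of the speed fit in simplified one-dimensional geometry (the real reconstruction uses the full shower-detector geometry; numbers are invented):

```python
import numpy as np
from scipy.optimize import curve_fit

C = 299_792_458.0   # m/s

# Hypothetical FD data: path length along the shower axis (m) and recorded
# pixel arrival times (s), generated here with beta = 1 plus timing noise.
dist = np.linspace(0.0, 12e3, 25)
t_obs = dist / C + np.random.normal(0.0, 5e-9, dist.size)

def time_profile(d, t0, beta):
    return t0 + d / (beta * C)   # shower speed beta*c as a free parameter

(t0, beta), cov = curve_fit(time_profile, dist, t_obs, p0=[0.0, 1.0])
print(f"beta = {beta:.4f} +/- {np.sqrt(cov[1, 1]):.4f}")
```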
Schlattmann, Peter; Verba, Maryna; Dewey, Marc; Walther, Mario
2015-01-01
Bivariate linear and generalized linear random effects models are frequently used to perform a diagnostic meta-analysis. The objective of this article was to apply a finite mixture model of bivariate normal distributions that can be used for the construction of componentwise summary receiver operating characteristic (sROC) curves. Bivariate linear random effects and a bivariate finite mixture model are used. The latter model is developed as an extension of a univariate finite mixture model. Two examples, computed tomography (CT) angiography for ruling out coronary artery disease and procalcitonin as a diagnostic marker for sepsis, are used to estimate mean sensitivity and mean specificity and to construct sROC curves. The suggested approach of a bivariate finite mixture model identifies two latent classes of diagnostic accuracy for the CT angiography example. Both classes show high sensitivity but mainly two different levels of specificity. For the procalcitonin example, this approach identifies three latent classes of diagnostic accuracy. Here, sensitivities and specificities differ such that sensitivity increases with decreasing specificity. Additionally, the model is used to construct componentwise sROC curves and to classify individual studies. The proposed method offers an alternative approach to model between-study heterogeneity in a diagnostic meta-analysis. Furthermore, it is possible to construct sROC curves even if a positive correlation between sensitivity and specificity is present. Copyright © 2015 Elsevier Inc. All rights reserved.
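The latent-class idea can be illustrated with an off-the-shelf EM fit of a two-component bivariate normal mixture on logit-transformed (sensitivity, specificity) pairs; this is not the authors' estimation procedure, and the study values are invented:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def logit(p):
    return np.log(p / (1 - p))

sens = np.array([0.95, 0.93, 0.96, 0.80, 0.82, 0.78, 0.90])  # invented
spec = np.array([0.60, 0.65, 0.55, 0.90, 0.88, 0.92, 0.70])
X = np.column_stack([logit(sens), logit(spec)])

# Each mixture component is one latent class of diagnostic accuracy.
gm = GaussianMixture(n_components=2, covariance_type="full",
                     random_state=0).fit(X)
for k, m in enumerate(gm.means_):
    s, c = 1.0 / (1.0 + np.exp(-m))          # back-transform the means
    print(f"class {k}: sensitivity={s:.2f}, specificity={c:.2f}")
print(gm.predict(X))                          # classify individual studies
```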
Evaluation and construction of diagnostic criteria for inclusion body myositis
Mammen, Andrew L.; Amato, Anthony A.; Weiss, Michael D.; Needham, Merrilee
2014-01-01
Objective: To use patient data to evaluate and construct diagnostic criteria for inclusion body myositis (IBM), a progressive disease of skeletal muscle. Methods: The literature was reviewed to identify all previously proposed IBM diagnostic criteria. These criteria were applied through medical records review to 200 patients diagnosed as having IBM and 171 patients diagnosed as having a muscle disease other than IBM by neuromuscular specialists at 2 institutions, and to a validating set of 66 additional patients with IBM from 2 other institutions. Machine learning techniques were used for unbiased construction of diagnostic criteria. Results: Twenty-four previously proposed IBM diagnostic categories were identified. Twelve categories all performed with high (≥97%) specificity but varied substantially in their sensitivities (11%–84%). The best performing category was European Neuromuscular Centre 2013 probable (sensitivity of 84%). Specialized pathologic features and newly introduced strength criteria (comparative knee extension/hip flexion strength) performed poorly. Unbiased data-directed analysis of 20 features in 371 patients resulted in construction of higher-performing data-derived diagnostic criteria (90% sensitivity and 96% specificity). Conclusions: Published expert consensus–derived IBM diagnostic categories have uniformly high specificity but wide-ranging sensitivities. High-performing IBM diagnostic category criteria can be developed directly from principled unbiased analysis of patient data. Classification of evidence: This study provides Class II evidence that published expert consensus–derived IBM diagnostic categories accurately distinguish IBM from other muscle disease with high specificity but wide-ranging sensitivities. PMID:24975859
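Unbiased, data-directed construction of criteria can be illustrated with a shallow decision tree; the features, thresholds, and labels below are invented stand-ins for the study's 20 features:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 371
# Invented features loosely echoing IBM criteria: finger flexor weakness
# grade, knee-extension/hip-flexion strength ratio, age at onset.
X = np.column_stack([rng.integers(0, 11, n),
                     rng.uniform(0.3, 2.0, n),
                     rng.uniform(30.0, 80.0, n)])
y = ((X[:, 0] > 5) & (X[:, 1] < 1.0)).astype(int)   # synthetic labels

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["finger_flexor_weakness",
                                       "knee_ext_hip_flex_ratio",
                                       "onset_age"]))
```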
A study of the stress wave factor technique for nondestructive evaluation of composite materials
NASA Technical Reports Server (NTRS)
Sarrafzadeh-Khoee, A.; Kiernan, M. T.; Duke, J. C., Jr.; Henneke, E. G., II
1986-01-01
The acousto-ultrasonic method of nondestructive evaluation is an extremely sensitive means of assessing material response. Efforts continue to complete the understanding of this method. In order to achieve the full sensitivity of the technique, extreme care must be taken in its performance. This report provides an update of the efforts to advance the understanding of this method and to increase its application to the nondestructive evaluation of composite materials. Included are descriptions of a novel optical system that is capable of measuring in-plane and out-of-plane displacements, an IBM PC-based data acquisition system, an extensive data analysis software package, the azimuthal variation of acousto-ultrasonic behavior in graphite/epoxy laminates, and preliminary examination of processing variation in graphite-aluminum tubes.
Ferrone, Cristina R; Ting, David T; Shahid, Mohammed; Konstantinidis, Ioannis T; Sabbatino, Francesco; Goyal, Lipika; Rice-Stitt, Travis; Mubeen, Ayesha; Arora, Kshitij; Bardeesey, Nabeel; Miura, John; Gamblin, T Clark; Zhu, Andrew X; Borger, Darrell; Lillemoe, Keith D; Rivera, Miguel N; Deshpande, Vikram
2016-01-01
Intrahepatic cholangiocarcinoma (ICC) often is a diagnosis determined by exclusion. Distinguishing ICC from other metastatic adenocarcinomas based on histopathologic or immunohistochemical analysis often is difficult and requires an extensive workup. This study aimed to determine whether albumin, whose expression is restricted to the liver, has potential as a biomarker for ICC using a novel and highly sensitive RNA in situ hybridization (ISH) platform. Modified branched DNA probes were developed for albumin RNA ISH. The study evaluated 467 patient samples of primary and metastatic lesions. Of the 467 samples evaluated, 83 were ICCs, 42 were hepatocellular carcinomas (HCCs), and 332 were nonhepatic carcinomas including tumors arising from the perihilar region and bile duct, pancreas, stomach, esophagus, colon, breast, ovary, endometrium, kidney, and urinary bladder. Albumin RNA ISH was highly sensitive for cancers of liver origin, staining positive in 82 (99 %) of 83 ICCs and in 42 HCCs (100 %). Perihilar and distal bile duct carcinomas as well as carcinomas arising at other sites tested negative for albumin. Notably, 6 (22 %) of 27 intrahepatic tumors previously diagnosed as carcinomas of undetermined origin tested positive for albumin. Albumin RNA ISH is a sensitive and highly specific diagnostic tool for distinguishing ICC from metastatic adenocarcinoma to the liver or carcinoma of unknown origin. Albumin RNA ISH could replace the extensive diagnostic workup, leading to timely confirmation of the ICC diagnosis. Additionally, the assay could serve as a guide to distinguish ICC from perihilar adenocarcinoma.
Infant Auditory Sensitivity to Pure Tones and Frequency-Modulated Tones
ERIC Educational Resources Information Center
Leibold, Lori J.; Werner, Lynne A.
2007-01-01
It has been suggested that infants respond preferentially to infant-directed speech because their auditory sensitivity to sounds with extensive frequency modulation (FM) is better than their sensitivity to less modulated sounds. In this experiment, auditory thresholds for FM tones and for unmodulated, or pure, tones in a background of noise were…
Tapered GRIN fiber microsensor.
Beltrán-Mejía, Felipe; Biazoli, Claudecir R; Cordeiro, Cristiano M B
2014-12-15
The sensitivity of an optical fiber microsensor based on inter-modal interference can be considerably improved by tapering a short extension of the multimode fiber. In the case of graded-index fibers with a parabolic refractive index profile, a meridional ray exhibits a sinusoidal path. When these fibers are tapered, the period of the propagated beam decreases down-taper and increases up-taper. We take advantage of this modulation, along with the enhanced overlap between the evanescent field and the external medium, to substantially increase the sensitivity of these devices by tuning the sensor's maximum sensitivity wavelength. Moreover, the extension of this device is reduced by one order of magnitude, making it more propitious for reduced-space applications. Numerical and experimental results demonstrate the success and feasibility of this approach.
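A numeric sketch of the tapered-GRIN ray picture under the paraxial approximation r'' = -g(z)² r with g = sqrt(2Δ)/a(z); all parameter values and the linear taper profile are assumed for illustration:

```python
import numpy as np

delta = 0.01          # relative index difference (assumed)
a0 = 25e-6            # untapered core radius in m (assumed)

def core_radius(z, L=2e-3, ratio=0.5):
    """Linear down-taper from a0 to ratio*a0 over length L (toy profile)."""
    return a0 * (1.0 - (1.0 - ratio) * np.clip(z / L, 0.0, 1.0))

dz = 1e-7
r, rp = 5e-6, 0.0     # launch offset and slope
path = []
for z in np.arange(0.0, 2e-3, dz):
    g = np.sqrt(2.0 * delta) / core_radius(z)
    rp += -(g ** 2) * r * dz     # semi-implicit Euler step of r'' = -g^2 r
    r += rp * dz
    path.append(r)
# The oscillation of `path` speeds up along the taper: the sinusoidal ray
# period decreases down-taper, as described above.
```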
NASA Technical Reports Server (NTRS)
Turflinger, T.; Schmeichel, W.; Krieg, J.; Titus, J.; Campbell, A.; Reeves, M.; Marshall, P.; Hardage, Donna (Technical Monitor)
2004-01-01
This effort is a detailed analysis of existing microelectronics and photonics test bed satellite data from one experiment, the bipolar test board, looking to improve our understanding of the enhanced low dose rate sensitivity (ELDRS) phenomenon. Over the past several years, extensive total dose irradiations of bipolar devices have demonstrated that many of these devices exhibited ELDRS. In sensitive bipolar transistors, ELDRS produced enhanced degradation of base current, resulting in enhanced gain degradation at dose rates <0.1 rd(Si)/s compared to similar transistors irradiated at dose rates >1 rd(Si)/s. This Technical Publication provides updated information about the test devices, the in-flight experiment, and both flight-and ground-based observations. Flight data are presented for the past 5 yr of the mission. These data are compared to ground-based data taken on devices from the same date code lots. Information about temperature fluctuations, power shutdowns, and other variables encountered during the space flight are documented.
ZHAO, Bin; BASTON, David S.; KHAN, Elaine; SORRENTINO, Claudio; DENISON, Michael S.
2011-01-01
Reporter genes produce a protein product in transfected cells that can be easily measured in intact or lysed cells and they have been extensively used in numerous basic and applied research applications. Over the past 10 years, reporter gene assays have been widely accepted and used for analysis of 2,3,7,8-tetrachlorodibenzo-p-dioxin and related dioxin-like compounds in various types of matrices, such as biological, environmental, food and feed samples, given that high-resolution instrumental analysis techniques are impractical for large-scale screening analysis. The most sensitive cell-based reporter gene bioassay systems developed are the mechanism-based CALUX (Chemically Activated Luciferase Expression) and CAFLUX (Chemically Activated Fluorescent Expression) bioassays, which utilize recombinant cell lines containing stably transfected dioxin (AhR)-responsive firefly luciferase or enhanced green fluorescent protein (EGFP) reporter genes, respectively. While the current CALUX and CAFLUX bioassays are very sensitive, increasing their lower limit of sensitivity, magnitude of response and dynamic range for chemical detection would significantly increase their utility, particularly for those samples that contain low levels of dioxin-like HAHs (i.e., serum). In this study, we report that the addition of modulators of cell signaling pathways or modification of cell culture conditions results in significant improvement in the magnitude and overall responsiveness of the existing CALUX and CAFLUX cell bioassays. PMID:21394221
1989-01-01
Compressor Rear Frame (CRF) which exhibits extensive cracking of the forward flange. The 1988 Actuarial Function data shows CRF cracking as the number 2... Creep-Rupture Properties of Waspaloy Sheet to Sharp-Edged Notches in the Temperature Range of 1000°F-1400°F. Journal of Basic Engineering, Trans. ASME ... Dependence of the Notch Sensitivity of Waspaloy at 1000°F-1400°F on the Gamma Prime Phase, Journal of Basic Engineering, Trans. ASME (in print at time of
NASA Technical Reports Server (NTRS)
Woodcock, Gordon
1997-01-01
This study is an extension of a previous effort by the Principal Investigator to develop baseline data to support comparative analysis of Highly Reusable Space Transportation (HRST) concepts. The analyses presented herein develop baseline databases for two two-stage-to-orbit (TSTO) concepts: (1) assisted horizontal take-off all rocket (assisted HTOHL); and (2) assisted vertical take-off rocket based combined cycle (RBCC). The study objectives were to: (1) provide configuration definitions and illustrations for assisted HTOHL and assisted RBCC; (2) develop a rationalization approach and compare these concepts with the HRST reference; and (3) analyze TSTO configurations which try to maintain SSTO benefits while reducing inert weight sensitivity.
Greater-than-bulk melting temperatures explained: Gallium melts Gangnam style
NASA Astrophysics Data System (ADS)
Gaston, Nicola; Steenbergen, Krista
2014-03-01
The experimental discovery of superheating in gallium clusters contradicted the clear and well-demonstrated paradigm that the melting temperature of a particle should decrease with its size. However, the extremely sensitive dependence of melting temperature on size also goes to the heart of cluster science, and the interplay between the effects of electronic and geometric structure. We have performed extensive first-principles molecular dynamics calculations, incorporating parallel tempering for an efficient exploration of configurational phase space. This is necessary due to the complicated energy landscape of gallium. In the nanoparticles, melting is preceded by transitions between phases. A structural feature, referred to here as the Gangnam motif, is found to increase with the latent heat and appears throughout the observed phase changes of this curious metal. We will present our detailed analysis of the solid-state isomers, performed using extensive statistical sampling of the trajectory data for the assignment of cluster structures to known phases of gallium. Finally, we explain the greater-than-bulk melting through analysis of the factors that stabilise the liquid structures.
NASA Astrophysics Data System (ADS)
Gaïor, R.; Al Samarai, I.; Berat, C.; Blanco Otano, M.; David, J.; Deligny, O.; Lebbolo, H.; Lecoz, S.; Letessier-Selvon, A.; Lhenry-Yvon, I.; Mariş, I. C.; Montanet, F.; Repain, P.; Salamida, F.; Settimo, M.; Stassi, P.; Stutz, A.
2018-04-01
We present the GIGAS (Gigahertz Identification of Giant Air Shower) microwave radio sensor arrays of the EASIER project (Extensive Air Shower Identification with Electron Radiometers), deployed at the site of the Pierre Auger cosmic ray observatory. The aim of these novel arrays is to probe the intensity of the molecular bremsstrahlung radiation expected from the development of the extensive air showers produced by the interaction of ultra high energy cosmic rays in the atmosphere. In the designed setup, the sensors are embedded within the surface detector array of the Pierre Auger observatory allowing us to use the particle signals at ground level to trigger the radio system. A series of seven, then 61 sensors have been deployed in the C-band, followed by a new series of 14 higher sensitivity ones in the C-band and the L-band. The design, the operation, the calibration and the sensitivity to extensive air showers of these arrays are described in this paper.
Analysis and characterization of heparin impurities.
Beni, Szabolcs; Limtiaco, John F K; Larive, Cynthia K
2011-01-01
This review discusses recent developments in analytical methods available for the sensitive separation, detection and structural characterization of heparin contaminants. The adulteration of raw heparin with oversulfated chondroitin sulfate (OSCS) in 2007-2008 spawned a global crisis resulting in extensive revisions to the pharmacopeia monographs on heparin and prompting the FDA to recommend the development of additional physicochemical methods for the analysis of heparin purity. The analytical chemistry community quickly responded to this challenge, developing a wide variety of innovative approaches, several of which are reported in this special issue. This review provides an overview of methods of heparin isolation and digestion, discusses known heparin contaminants, including OSCS, and summarizes recent publications on heparin impurity analysis using sensors, near-IR, Raman, and NMR spectroscopy, as well as electrophoretic and chromatographic separations.
Inter-laboratory comparison of the in vivo comet assay including three image analysis systems.
Plappert-Helbig, Ulla; Guérard, Melanie
2015-12-01
To compare the extent of potential inter-laboratory variability and the influence of different comet image analysis systems, in vivo comet experiments were conducted using the genotoxicants ethyl methanesulfonate and methyl methanesulfonate. Tissue samples from the same animals were processed and analyzed, including independent slide evaluation by image analysis, in two laboratories with extensive experience in performing the comet assay. The analysis revealed low inter-laboratory experimental variability. Neither the use of different image analysis systems, nor the staining procedure of DNA (propidium iodide vs. SYBR® Gold), considerably impacted the results or sensitivity of the assay. In addition, relatively high stability of the staining intensity of propidium iodide-stained slides was found in slides that were refrigerated for over 3 months. In conclusion, following a thoroughly defined protocol and standardized routine procedures ensures that the comet assay is robust and generates comparable results between different laboratories. © 2015 Wiley Periodicals, Inc.
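For orientation, the image-analysis systems compared here reduce each comet to summary metrics such as percent tail DNA and tail moment; a simplified one-dimensional sketch (real systems work on full 2-D comet images):

```python
import numpy as np

def comet_metrics(profile, head_end):
    """Toy tail metrics from a background-subtracted intensity profile
    along the comet axis; head_end marks where the head region stops."""
    x = np.arange(len(profile))
    total = profile.sum()
    tail = profile[head_end:]
    pct_tail_dna = 100.0 * tail.sum() / total
    head_c = np.average(x[:head_end], weights=profile[:head_end])
    tail_c = np.average(x[head_end:], weights=tail)
    tail_moment = (tail.sum() / total) * (tail_c - head_c)
    return pct_tail_dna, tail_moment

profile = np.array([5, 40, 90, 70, 20, 12, 9, 6, 4, 2], dtype=float)
print(comet_metrics(profile, head_end=5))
```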
Extension of the Rejection Sensitivity Construct to the Interpersonal Functioning of Gay Men
ERIC Educational Resources Information Center
Pachankis, John E.; Goldfried, Marvin R.; Ramrattan, Melissa E.
2008-01-01
On the basis of recent evidence suggesting that gay men are particularly likely to fear interpersonal rejection, the authors set out to extend the "rejection sensitivity" construct to the mental health concerns of gay men. After establishing a reliable and valid measure of the gay-related rejection sensitivity construct, the authors use this to…
Bacillus anthracis lethal toxin induces TNF-α–independent hypoxia-mediated toxicity in mice
Moayeri, Mahtab; Haines, Diana; Young, Howard A.; Leppla, Stephen H.
2003-01-01
Bacillus anthracis lethal toxin (LT) is the major virulence factor of anthrax and reproduces most of the laboratory manifestations of the disease in animals. We studied LT toxicity in BALB/cJ and C57BL/6J mice. BALB/cJ mice became terminally ill earlier and with higher frequency than C57BL/6J mice. Timed histopathological analysis identified bone marrow, spleen, and liver as major affected organs in both mouse strains. LT induced extensive hypoxia. Crisis was due to extensive liver necrosis accompanied by pleural edema. There was no evidence of disseminated intravascular coagulation or renal dysfunction. Instead, analyses revealed hepatic dysfunction, hypoalbuminemia, and vascular/oxygenation insufficiency. Of 50 cytokines analyzed, BALB/cJ mice showed rapid but transitory increases in specific factors including KC, MCP-1/JE, IL-6, MIP-2, G-CSF, GM-CSF, eotaxin, FasL, and IL-1β. No changes in TNF-α occurred. The C57BL/6J mice did not mount a similar cytokine response. These factors were not induced in vitro by LT treatment of toxin-sensitive macrophages. The evidence presented shows that LT kills mice through a TNF-α–independent, FasL-independent, noninflammatory mechanism that involves hypoxic tissue injury but does not require macrophage sensitivity to toxin. PMID:12952916
A specific role for posterior dorsolateral striatum in human habit learning
Tricomi, Elizabeth; Balleine, Bernard W.; O’Doherty, John P.
2009-01-01
Habits are characterized by an insensitivity to their consequences and, as such, can be distinguished from goal-directed actions. The neural basis of the development of demonstrably outcome insensitive habitual actions in humans has not been previously characterized. In this experiment, we show that extensive training on a free-operant task reduces the sensitivity of participants’ behavior to a reduction in outcome value. Analysis of functional magnetic resonance imaging (fMRI) data acquired during training revealed a significant increase in task-related cue sensitivity in a right posterior putamen/globus pallidus region as training progressed. These results provide evidence for a shift from goal-directed to habit-based control of instrumental actions in humans, and suggest that cue-driven activation in a specific region of dorsolateral posterior putamen may contribute to the habitual control of behavior in humans. PMID:19490086
Phylogenetic study of Class Armophorea (Alveolata, Ciliophora) based on 18S-rDNA data.
da Silva Paiva, Thiago; do Nascimento Borges, Bárbara; da Silva-Neto, Inácio Domingos
2013-12-01
The 18S rDNA phylogeny of Class Armophorea, a group of anaerobic ciliates, is proposed based on an analysis of 44 sequences (out of 195) retrieved from the NCBI/GenBank database. Emphasis was placed on the use of two nucleotide alignment criteria that involved variation in the gap-opening and gap-extension parameters and the use of rRNA secondary structure to orientate multiple-alignment. A sensitivity analysis of 76 data sets was run to assess the effect of variations in indel parameters on tree topologies. Bayesian inference, maximum likelihood and maximum parsimony phylogenetic analyses were used to explore how different analytic frameworks influenced the resulting hypotheses. A sensitivity analysis revealed that the relationships among higher taxa of the Intramacronucleata were dependent upon how indels were determined during multiple-alignment of nucleotides. The phylogenetic analyses rejected the monophyly of the Armophorea most of the time and consistently indicated that the Metopidae and Nyctotheridae were related to the Litostomatea. There was no consensus on the placement of the Caenomorphidae, which could be a sister group of the Metopidae + Nyctotheridae, or could have diverged at the base of the Spirotrichea branch or the Intramacronucleata tree.
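The indel-parameter sensitivity described above reduces to how gap-opening and gap-extension penalties reshape alignments. As a minimal sketch of that dependence (a pairwise Gotoh-style affine-gap scorer swept over a small penalty grid; the sequences and penalty values are hypothetical, and this is not the authors' multiple-alignment pipeline):

```python
# Minimal Gotoh-style affine-gap global alignment score. Sweeping the
# gap-opening/gap-extension penalties illustrates the kind of indel-parameter
# sensitivity analysis described in the abstract.

def affine_score(a, b, match=1.0, mismatch=-1.0, open_=-4.0, extend=-1.0):
    NEG = float("-inf")
    n, m = len(a), len(b)
    # M: a[i] aligned to b[j]; Ix: gap in b; Iy: gap in a.
    M  = [[NEG] * (m + 1) for _ in range(n + 1)]
    Ix = [[NEG] * (m + 1) for _ in range(n + 1)]
    Iy = [[NEG] * (m + 1) for _ in range(n + 1)]
    M[0][0] = 0.0
    for i in range(1, n + 1):
        Ix[i][0] = open_ + (i - 1) * extend
    for j in range(1, m + 1):
        Iy[0][j] = open_ + (j - 1) * extend
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            M[i][j]  = max(M[i-1][j-1], Ix[i-1][j-1], Iy[i-1][j-1]) + s
            Ix[i][j] = max(M[i-1][j] + open_, Ix[i-1][j] + extend)
            Iy[i][j] = max(M[i][j-1] + open_, Iy[i][j-1] + extend)
    return max(M[n][m], Ix[n][m], Iy[n][m])

seq1, seq2 = "ACGTTAGC", "ACGAGC"   # toy rDNA fragments (hypothetical)
for open_ in (-2.0, -4.0, -8.0):
    for extend in (-0.5, -1.0):
        score = affine_score(seq1, seq2, open_=open_, extend=extend)
        print(f"open={open_:5.1f} extend={extend:4.1f} score={score:6.1f}")
```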
Mori, Yoshikazu; Ogawa, Kazuo; Warabi, Eiji; Yamamoto, Masahiro; Hirokawa, Takatsugu
2016-01-01
Transient receptor potential vanilloid type 1 (TRPV1) is a non-selective cation channel and a multimodal sensor protein. Since the precise structure of TRPV1 was obtained by electron cryo-microscopy, the binding mode of representative agonists such as capsaicin and resiniferatoxin (RTX) has been extensively characterized; however, detailed information on the binding mode of other vanilloids remains lacking. In this study, mutational analysis of human TRPV1 was performed, and four agonists (capsaicin, RTX, [6]-shogaol and [6]-gingerol) were used to identify amino acid residues involved in ligand binding and/or modulation of proton sensitivity. The detailed binding mode of each ligand was then simulated by computational analysis. As a result, three amino acids (L518, F591 and L670) were newly identified as being involved in ligand binding and/or modulation of proton sensitivity. In addition, in silico docking simulation and a subsequent mutational study suggested that [6]-gingerol might bind to and activate TRPV1 in a unique manner. These results provide novel insights into the binding mode of various vanilloids to the channel and will be helpful in developing a TRPV1 modulator. PMID:27606946
The Diagnostic Value of Gastrin-17 Detection in Atrophic Gastritis
Wang, Xu; Ling, Li; Li, Shanshan; Qin, Guiping; Cui, Wei; Li, Xiang; Ni, Hong
2016-01-01
A meta-analysis was performed to assess the diagnostic value of gastrin-17 (G-17) for the early detection of chronic atrophic gastritis (CAG). An extensive literature search was performed, with the aim of selecting publications that reported the accuracy of G-17 in predicting CAG, in the following databases: PubMed, Science Direct, Web of Science, Chinese Biological Medicine, Chinese National Knowledge Infrastructure, Wanfang, and VIP. To assess the diagnostic value of G-17, the following statistics were estimated and described: sensitivity, specificity, diagnostic odds ratios (DOR), summary receiver operating characteristic curves, area under the curve (AUC), and 95% confidence intervals (CIs). Thirteen studies that met the inclusion criteria were included in this meta-analysis, comprising 894 patients and 1950 controls. The pooled sensitivity and specificity of these studies were 0.48 (95% CI: 0.45–0.51) and 0.79 (95% CI: 0.77–0.81), respectively. The DOR was 5.93 (95% CI: 2.93–11.99), and the AUC was 0.82. G-17 may have potential diagnostic value because it has good specificity and a moderate DOR and AUC for CAG. However, more studies are needed to improve the sensitivity of this diagnostic tool in the future. PMID:27149493
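As a rough sketch of the pooled statistics reported above, with hypothetical per-study 2x2 counts (a real diagnostic meta-analysis would pool with bivariate or random-effects models rather than simple count addition):

```python
# Sensitivity, specificity and diagnostic odds ratio (DOR) from 2x2 counts.
# Study counts below are hypothetical, purely to show the arithmetic.

studies = [  # (TP, FP, FN, TN) per study
    (40, 30, 45, 120),
    (25, 18, 28,  90),
    (60, 40, 62, 170),
]

tp = sum(s[0] for s in studies)
fp = sum(s[1] for s in studies)
fn = sum(s[2] for s in studies)
tn = sum(s[3] for s in studies)

sens = tp / (tp + fn)          # P(test positive | disease)
spec = tn / (tn + fp)          # P(test negative | no disease)
dor  = (tp * tn) / (fp * fn)   # (sens/(1-sens)) / ((1-spec)/spec)

print(f"pooled sensitivity = {sens:.2f}")
print(f"pooled specificity = {spec:.2f}")
print(f"pooled DOR         = {dor:.2f}")
```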
Alcázar, Juan Luis; Gastón, Begoña; Navarro, Beatriz; Salas, Rocío; Aranda, Juana; Guerriero, Stefano
2017-11-01
To compare the diagnostic accuracy of transvaginal ultrasound (TVS) and magnetic resonance imaging (MRI) for detecting myometrial infiltration (MI) in endometrial carcinoma. An extensive search of papers comparing TVS and MRI in assessing MI in endometrial cancer was performed in MEDLINE (PubMed), Web of Science, and Cochrane Database from January 1989 to January 2017. Quality was assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool. Our extended search identified 747 citations, but after exclusions we finally included 8 articles in the meta-analysis. The risk of bias for most studies was low for most of the 4 domains assessed in QUADAS-2. Overall, pooled estimated sensitivity and specificity for diagnosing deep MI were 75% (95% confidence interval [CI]=67%-82%) and 82% (95% CI=75%-93%) for TVS, and 83% (95% CI=76%-89%) and 82% (95% CI=72%-89%) for MRI, respectively. No statistical differences were found when comparing both methods (p=0.314). Heterogeneity was low for sensitivity and high for specificity for TVS and MRI. MRI showed a better sensitivity than TVS for detecting deep MI in women with endometrial cancer. However, the difference observed was not statistically significant. Copyright © 2017. Asian Society of Gynecologic Oncology, Korean Society of Gynecologic Oncology
Highly sensitive electrochemical detection of human telomerase activity based on bio-barcode method.
Li, Ying; Liu, Bangwei; Li, Xia; Wei, Qingli
2010-07-15
In the present study, an electrochemical method for highly sensitive detection of human telomerase activity was developed based on a bio-barcode amplification assay. Telomerase was extracted from HeLa cells, then the extract was mixed with telomerase substrate (TS) primer to perform the extension reaction. The extension product was hybridized with the capture DNA immobilized on the Au electrode and then reacted with the signal DNA on Au nanoparticles to form a sandwich hybridization mode. Electrochemical signals were generated by chronocoulometric interrogation of [Ru(NH3)6]3+, which quantitatively binds to the DNA on Au nanoparticles via electrostatic interaction. This method can detect telomerase activity from as few as 10 cultured cancer cells without polymerase chain reaction (PCR) amplification of the telomerase extension product. Copyright (c) 2010 Elsevier B.V. All rights reserved.
Epoxy resin monomers with reduced skin sensitizing potency.
O'Boyle, Niamh M; Niklasson, Ida B; Tehrani-Bagha, Ali R; Delaine, Tamara; Holmberg, Krister; Luthman, Kristina; Karlberg, Ann-Therese
2014-06-16
Epoxy resin monomers (ERMs), especially diglycidyl ethers of bisphenol A and F (DGEBA and DGEBF), are extensively used as building blocks for thermosetting polymers. However, they are known to commonly cause skin allergy. This research describes a number of alternative ERMs, designed with the aim of reducing the skin sensitizing potency while maintaining the ability to form thermosetting polymers. The compounds were designed, synthesized, and assessed for sensitizing potency using the in vivo murine local lymph node assay (LLNA). All six epoxy resin monomers had decreased sensitizing potencies compared to those of DGEBA and DGEBF. With respect to the LLNA EC3 value, the best of the alternative monomers had a value approximately 2.5 times higher than those of DGEBA and DGEBF. The diepoxides were reacted with triethylenetetramine, and the polymers formed were tested for technical applicability using thermogravimetric analysis and differential scanning calorimetry. Four out of the six alternative ERMs gave polymers with a thermal stability comparable to that obtained with DGEBA and DGEBF. The use of improved epoxy resin monomers with less skin sensitizing effects is a direct way to tackle the problem of contact allergy to epoxy resin systems, particularly in occupational settings, resulting in a reduction in the incidence of allergic contact dermatitis.
Keratosis reduces sensitivity of anal cytology in detecting anal intraepithelial neoplasia.
ElNaggar, Adam C; Santoso, Joseph T; Xie, Huiwen Bill
2012-02-01
To identify factors that may contribute to poor sensitivity of anal cytology in contrast to the sensitivity of anoscopy in heterosexual women. We analyzed 324 patients with biopsy-confirmed diagnoses of genital intraepithelial neoplasia (vulvar, vaginal, or cervical) from 2006 to 2011 who underwent both anal cytology and anoscopy. Cytology, anoscopy, and biopsy results were recorded. Biopsy specimens underwent independent analysis for quality of specimen. Also, biopsy specimens were analyzed for characteristics that may contribute to correlation, or lack thereof, between anal cytology and anoscopy-directed biopsy. 133 (41%) patients had abnormal anoscopy and underwent directed biopsy. 120 patients with normal anal cytology had anoscopy-directed biopsies, resulting in 58 cases of AIN (sensitivity 9.4%; 0.039-0.199). This cohort was noted to have extensive keratosis covering the entire dysplastic anal lesion. 18 patients yielded abnormal anal cytology. Of these patients, 13 had anoscopy-directed biopsies revealing 6 with AIN and absent keratosis (specificity 88.6%; 0.78-0.95). The κ statistic for anal cytology and anoscopy was -0.0213 (95% CI=-0.128-0.086). Keratosis reduces the sensitivity of anal cytology. Furthermore, anal cytology poorly correlates with anoscopy in the detection of AIN (κ statistic=-0.0213). Copyright © 2011 Elsevier Inc. All rights reserved.
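The κ statistic quoted above is Cohen's chance-corrected agreement. A minimal worked example with hypothetical 2x2 counts (the study's underlying table is not reproduced here), chosen so the result lands near the reported value:

```python
# Cohen's kappa for agreement between anal cytology and anoscopy.
# The 2x2 counts below are hypothetical; the study reports kappa = -0.0213.

a, b, c, d = 6, 12, 114, 168   # ++, +-, -+, -- (cytology vs. anoscopy)
n = a + b + c + d

po = (a + d) / n                                        # observed agreement
pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2   # chance agreement
kappa = (po - pe) / (1 - pe)
print(f"observed={po:.3f} expected={pe:.3f} kappa={kappa:.3f}")
```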
Metabolomic Strategies Involving Mass Spectrometry Combined with Liquid and Gas Chromatography.
Lopes, Aline Soriano; Cruz, Elisa Castañeda Santa; Sussulini, Alessandra; Klassen, Aline
2017-01-01
Amongst all omics sciences, there is no doubt that metabolomics has undergone the most significant growth over the last decade. The advances in analytical techniques and data analysis tools are the main factors that make possible the development and establishment of metabolomics as a significant research field in systems biology. As metabolomic analysis demands high sensitivity for detecting metabolites present in low concentrations in biological samples, high-resolution power for identifying the metabolites and wide dynamic range to detect metabolites with variable concentrations in complex matrices, mass spectrometry is the most extensively used analytical technique for fulfilling these requirements. Mass spectrometry alone can be used in a metabolomic analysis; however, some issues such as ion suppression may hamper the quantification/identification of metabolites with lower concentrations or some metabolite classes that do not ionise as well as others. The best choice is coupling separation techniques, such as gas or liquid chromatography, to mass spectrometry, in order to improve the sensitivity and resolution power of the analysis, besides obtaining extra information (retention time) that facilitates the identification of the metabolites, especially when considering untargeted metabolomic strategies. In this chapter, the main aspects of mass spectrometry (MS), liquid chromatography (LC) and gas chromatography (GC) are discussed, and recent clinical applications of LC-MS and GC-MS are also presented.
Bacenetti, Jacopo; Cavaliere, Alessia; Falcone, Giacomo; Giovenzana, Valentina; Banterle, Alessandro; Guidetti, Riccardo
2018-06-15
Over recent years, increasing attention has been paid to environmental concerns related to food production and potential solutions to this issue. Among the different strategies being considered to reduce the impact food production has on the environment, only moderate attention has been paid to the extension of shelf life; a longer shelf life can reduce food losses as well as the economic and environmental impacts of the distribution logistics. The aim of this study is to assess the environmental performance of whole-wheat breadsticks with extended shelf lives and to evaluate whether the shelf-life extension is an effective mitigation solution from an environmental point of view. To this purpose, the life cycle assessment (LCA) approach was applied from a "cradle-to-grave" perspective. Rosmarinic acid was used as an antioxidant to extend the shelf life. To test the robustness of the results and to investigate the influence of the choices made in the modelling phase, sensitivity and uncertainty analyses were carried out. The achieved results highlighted how, for 10 of the 12 evaluated impact categories, the shelf-life extension is a proper mitigation solution, and its effectiveness depends on the magnitude of product loss reduction that is achieved. The shelf-life extension does not reduce environmental impact in the categories of human toxicity, cancer effects and freshwater eutrophication. Copyright © 2018 Elsevier B.V. All rights reserved.
Moghaddasi, Hanie; Nourian, Saeed
2016-06-01
Heart disease is the major cause of death as well as a leading cause of disability in developed countries. Mitral Regurgitation (MR) is a common heart disease which does not cause symptoms until its end stage. Therefore, early diagnosis of the disease is of crucial importance in the treatment process. Echocardiography is a common method for diagnosing the severity of MR. Hence, a method based on echocardiography videos, image processing techniques and artificial intelligence could be helpful for clinicians, especially in borderline cases. In this paper, we introduce novel features to detect micro-patterns of echocardiography images in order to determine the severity of MR. Extensive Local Binary Pattern (ELBP) and Extensive Volume Local Binary Pattern (EVLBP) are presented as image descriptors which include details from different viewpoints of the heart in feature vectors. Support Vector Machine (SVM), Linear Discriminant Analysis (LDA) and Template Matching techniques are used as classifiers to determine the severity of MR based on textural descriptors. The SVM classifier with Extensive Uniform Local Binary Pattern (ELBPU) and Extensive Volume Local Binary Pattern (EVLBP) achieves the best accuracies: 99.52%, 99.38%, 99.31% and 99.59% for the detection of Normal, Mild MR, Moderate MR and Severe MR subjects, respectively, among echocardiography videos. The proposed method achieves 99.38% sensitivity and 99.63% specificity for the detection of the severity of MR and normal subjects. Copyright © 2016 Elsevier Ltd. All rights reserved.
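As a baseline sketch of the descriptor-plus-classifier pipeline described above, the following feeds standard 8-neighbour LBP histograms to an SVM on synthetic frames; the paper's ELBP/EVLBP descriptors extend this basic operator, and all images and labels below are fabricated:

```python
# Standard 8-neighbour LBP histograms + SVM on synthetic 32x32 "frames".
import numpy as np
from sklearn.svm import SVC

OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_histogram(img):
    # Compare each interior pixel with its 8 neighbours to form a byte code.
    h, w = img.shape
    codes = []
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            center, code = img[i, j], 0
            for bit, (di, dj) in enumerate(OFFSETS):
                if img[i + di, j + dj] >= center:
                    code |= 1 << bit
            codes.append(code)
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()

rng = np.random.default_rng(0)
X, y = [], []
for label in (0, 1):                      # two fake "severity" classes
    for _ in range(20):
        base = np.linspace(0, 1, 32)[None, :].repeat(32, axis=0)
        img = base + rng.normal(0.0, 0.05 + 0.4 * label, (32, 32))
        X.append(lbp_histogram(img))
        y.append(label)

idx = rng.permutation(len(X))             # shuffle before splitting
X, y = np.array(X)[idx], np.array(y)[idx]
clf = SVC(kernel="rbf").fit(X[:30], y[:30])
print("held-out accuracy:", clf.score(X[30:], y[30:]))
```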
Resistance-associated point mutations in insecticide-insensitive acetylcholinesterase.
Mutero, A; Pralavorio, M; Bride, J M; Fournier, D
1994-06-21
Extensive utilization of pesticides against insects provides us with a good model for studying the adaptation of a eukaryotic genome to a strong selective pressure. One mechanism of resistance is the alteration of acetylcholinesterase (EC 3.1.1.7), the molecular target for organophosphates and carbamates. Here, we report the sequence analysis of the Ace gene in several resistant field strains of Drosophila melanogaster. This analysis resulted in the identification of five point mutations associated with reduced sensitivities to insecticides. In some cases, several of these mutations were found to be combined in the same protein, leading to different resistance patterns. Our results suggest that recombination between resistant alleles preexisting in natural populations is a mechanism by which insects rapidly adapt to new selective pressures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steinbrink, Nicholas M.N.; Weinheimer, Christian; Glück, Ferenc
The KATRIN experiment aims to determine the absolute neutrino mass by measuring the endpoint region of the tritium β-spectrum. As a large-scale experiment with a sharp energy resolution, high source luminosity and low background it may also be capable of testing certain theories of neutrino interactions beyond the standard model (SM). An example of a non-SM interaction are right-handed currents mediated by right-handed W bosons in the left-right symmetric model (LRSM). In this extension of the SM, an additional SU(2)R symmetry in the high-energy limit is introduced, which naturally includes sterile neutrinos and predicts the seesaw mechanism. In tritium β decay, this leads to an additional term from interference between left- and right-handed interactions, which enhances or suppresses certain regions near the endpoint of the beta spectrum. In this work, the sensitivity of KATRIN to right-handed currents is estimated for the scenario of a light sterile neutrino with a mass of some eV. This analysis has been performed with a Bayesian analysis using Markov Chain Monte Carlo (MCMC). The simulations show that, in principle, KATRIN will be able to set sterile neutrino mass-dependent limits on the interference strength. The sensitivity is significantly increased if the Q value of the β decay can be sufficiently constrained. However, the sensitivity is not high enough to improve current upper limits from right-handed W boson searches at the LHC.
Verzotto, Davide; M Teo, Audrey S; Hillmer, Axel M; Nagarajan, Niranjan
2016-01-01
Resolution of complex repeat structures and rearrangements in the assembly and analysis of large eukaryotic genomes is often aided by a combination of high-throughput sequencing and genome-mapping technologies (for example, optical restriction mapping). In particular, mapping technologies can generate sparse maps of large DNA fragments (150 kilo base pairs (kbp) to 2 Mbp) and thus provide a unique source of information for disambiguating complex rearrangements in cancer genomes. Despite their utility, combining high-throughput sequencing and mapping technologies has been challenging because of the lack of efficient and sensitive map-alignment algorithms for robustly aligning error-prone maps to sequences. We introduce a novel seed-and-extend glocal (short for global-local) alignment method, OPTIMA (and a sliding-window extension for overlap alignment, OPTIMA-Overlap), which is the first to create indexes for continuous-valued mapping data while accounting for mapping errors. We also present a novel statistical model, agnostic with respect to technology-dependent error rates, for conservatively evaluating the significance of alignments without relying on expensive permutation-based tests. We show that OPTIMA and OPTIMA-Overlap outperform other state-of-the-art approaches (1.6-2 times more sensitive) and are more efficient (170-200%) and precise in their alignments (nearly 99% precision). These advantages are independent of the quality of the data, suggesting that our indexing approach and statistical evaluation are robust, provide improved sensitivity and guarantee high precision.
NASA Astrophysics Data System (ADS)
Steinbrink, Nicholas M. N.; Glück, Ferenc; Heizmann, Florian; Kleesiek, Marco; Valerius, Kathrin; Weinheimer, Christian; Hannestad, Steen
2017-06-01
The KATRIN experiment aims to determine the absolute neutrino mass by measuring the endpoint region of the tritium β-spectrum. As a large-scale experiment with a sharp energy resolution, high source luminosity and low background it may also be capable of testing certain theories of neutrino interactions beyond the standard model (SM). An example of a non-SM interaction are right-handed currents mediated by right-handed W bosons in the left-right symmetric model (LRSM). In this extension of the SM, an additional SU(2)R symmetry in the high-energy limit is introduced, which naturally includes sterile neutrinos and predicts the seesaw mechanism. In tritium β decay, this leads to an additional term from interference between left- and right-handed interactions, which enhances or suppresses certain regions near the endpoint of the beta spectrum. In this work, the sensitivity of KATRIN to right-handed currents is estimated for the scenario of a light sterile neutrino with a mass of some eV. This analysis has been performed with a Bayesian analysis using Markov Chain Monte Carlo (MCMC). The simulations show that, in principle, KATRIN will be able to set sterile neutrino mass-dependent limits on the interference strength. The sensitivity is significantly increased if the Q value of the β decay can be sufficiently constrained. However, the sensitivity is not high enough to improve current upper limits from right-handed W boson searches at the LHC.
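A toy illustration of the Bayesian MCMC machinery mentioned in both abstracts above: a Metropolis-Hastings sampler for a single "interference strength" parameter under a stand-in Gaussian likelihood, not KATRIN's β-spectrum model:

```python
# Toy Metropolis-Hastings estimate of an "interference strength" parameter
# eta. The Gaussian likelihood is a stand-in and all data are simulated.
import math, random

random.seed(1)
data = [random.gauss(0.3, 1.0) for _ in range(200)]   # pseudo-measurements

def log_posterior(eta):
    if not -1.0 <= eta <= 1.0:            # flat prior on [-1, 1]
        return float("-inf")
    return -0.5 * sum((x - eta) ** 2 for x in data)   # Gaussian log-likelihood

eta, logp, samples = 0.0, log_posterior(0.0), []
for step in range(20000):
    prop = eta + random.gauss(0.0, 0.1)   # symmetric random-walk proposal
    logp_prop = log_posterior(prop)
    if math.log(random.random()) < logp_prop - logp:
        eta, logp = prop, logp_prop
    if step >= 5000:                      # discard burn-in
        samples.append(eta)

samples.sort()
mean = sum(samples) / len(samples)
lo, hi = samples[int(0.025 * len(samples))], samples[int(0.975 * len(samples))]
print(f"posterior mean = {mean:.3f}, 95% credible interval = [{lo:.3f}, {hi:.3f}]")
```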
Zhu, Zhaozhong; Anttila, Verneri; Smoller, Jordan W; Lee, Phil H
2018-01-01
Recent advances in genome-wide association studies (GWAS) suggest that pleiotropic effects on human complex traits are widespread. A number of classic and recent meta-analysis methods have been used to identify genetic loci with pleiotropic effects, but the overall performance of these methods is not well understood. In this work, we use extensive simulations and case studies of GWAS datasets to investigate the power and type-I error rates of ten meta-analysis methods. We specifically focus on three conditions commonly encountered in the studies of multiple traits: (1) extensive heterogeneity of genetic effects; (2) characterization of trait-specific association; and (3) inflated correlation of GWAS due to overlapping samples. Although the statistical power is highly variable under distinct study conditions, we found that several methods had superior power under diverse heterogeneity. In particular, the classic fixed-effects model showed surprisingly good performance when a variant is associated with more than half of the study traits. As the number of traits with null effects increases, ASSET performed best, with competitive specificity and sensitivity. With opposite directional effects, CPASSOC showed first-rate power. However, caution is advised when using CPASSOC for studying genetically correlated traits with overlapping samples. We conclude with a discussion of unresolved issues and directions for future research.
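For reference, the classic fixed-effects method singled out above is inverse-variance weighting; a minimal sketch with hypothetical per-trait effect estimates for one variant:

```python
# Inverse-variance fixed-effects meta-analysis of one variant across traits.
# The (beta, SE) pairs below are hypothetical.
import math

effects = [(0.12, 0.04), (0.10, 0.05), (0.15, 0.06), (0.02, 0.05)]  # (beta, SE)

weights = [1.0 / se ** 2 for _, se in effects]
beta_fe = sum(w * b for (b, _), w in zip(effects, weights)) / sum(weights)
se_fe   = 1.0 / math.sqrt(sum(weights))
z       = beta_fe / se_fe
p       = math.erfc(abs(z) / math.sqrt(2.0))   # two-sided normal p-value

print(f"fixed-effects beta = {beta_fe:.4f} +/- {se_fe:.4f}, z = {z:.2f}, p = {p:.2e}")
```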
Kremen, Arie; Tsompanakis, Yiannis
2010-04-01
The slope-stability of a proposed vertical extension of a balefill was investigated in the present study, in an attempt to determine a geotechnically conservative design, compliant with New Jersey Department of Environmental Protection regulations, that would maximize the utilization of unclaimed disposal capacity. Conventional geotechnical analytical methods are generally limited to well-defined failure modes, which may not occur in landfills or balefills due to the presence of preferential slip surfaces. In addition, these models assume an a priori stress distribution to solve essentially indeterminate problems. In this work, a different approach has been applied, which avoids several of the drawbacks of conventional methods. Specifically, the analysis was performed in a two-stage process: (a) calculation of the stress distribution, and (b) application of an optimization technique to identify the most probable failure surface. The stress analysis was performed using a finite element formulation, and the failure surface was located by a dynamic programming optimization method. A sensitivity analysis was performed to evaluate the effect of the various waste strength parameters of the underlying mathematical model on the results, namely the factor of safety of the landfill. Although this study focuses on the stability investigation of an expanded balefill, the methodology presented can easily be applied to general geotechnical investigations.
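The paper's FEM-plus-dynamic-programming analysis does not fit in a short sketch, but the flavor of its waste-strength sensitivity study can be conveyed with a simple infinite-slope (limit-equilibrium) factor of safety swept over hypothetical strength parameters:

```python
# Infinite-slope factor of safety swept over waste strength parameters.
# A deliberately simple limit-equilibrium stand-in for the paper's
# FEM + dynamic-programming analysis; all values are hypothetical.
import math

def factor_of_safety(c, phi_deg, gamma=10.0, depth=15.0, beta_deg=18.4):
    """c: cohesion (kPa); phi: friction angle (deg); gamma: unit weight
    (kN/m^3); depth: slip depth (m); beta: slope angle (deg)."""
    beta, phi = math.radians(beta_deg), math.radians(phi_deg)
    resisting = c + gamma * depth * math.cos(beta) ** 2 * math.tan(phi)
    driving   = gamma * depth * math.sin(beta) * math.cos(beta)
    return resisting / driving

for c in (5.0, 15.0, 25.0):
    for phi in (20.0, 25.0, 30.0):
        print(f"c={c:4.1f} kPa  phi={phi:4.1f} deg  FS={factor_of_safety(c, phi):.2f}")
```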
Vogiatzis, Konstantinos
2012-11-15
Attiko Metro S.A., the state company ensuring the development of the Athens Metro network, has recently initiated a new 7.6 km extension planned for line 3 of the Athens Metro, from Haidari to Piraeus "Dimotikon Theatre" towards "University of Piraeus" (forestation), connecting the major Piraeus Port with "Eleftherios Venizelos" International Airport. The Piraeus extension consists of a Tunnel Boring Machine, 2 tracks and tunnel sections, as well as 6 stations and a forestation (New Austrian Tunnelling Method) at the end of the alignment. In order to avoid the degradation of the urban acoustic environment from ground borne noise and vibration during metro operation, the assessment of the required track types and possible noise mitigation measures was executed, and for each section and each sensitive building, the ground borne noise and vibration levels were numerically predicted. The calculated levels were then compared with ground borne noise and vibration level criteria. The necessary mitigation measures were defined in order to guarantee, in each location along the extension, the allowable ground borne Noise and Vibration max. levels inside nearby sensitive buildings taking into account alternative Transfer Functions for ground borne noise diffusion inside the buildings. Ground borne noise levels were proven to be higher than the criterion where special track work is present and also in the case of the sensitive receptor: "Dimotikon Theatre". In order to reduce the ground borne noise levels to allowable values in these sections, the installation of tracks and special track work on a floating slab was assessed and recommended. Copyright © 2012 Elsevier B.V. All rights reserved.
Time-Resolved Photoluminescence Microscopy for the Analysis of Semiconductor-Based Paint Layers
Mosca, Sara; Gonzalez, Victor; Eveno, Myriam
2017-01-01
In conservation science, semiconductors occur as the constituent matter of the so-called semiconductor pigments, produced following the Industrial Revolution and extensively used by modern painters. With recent research highlighting the occurrence of various degradation phenomena in semiconductor paints, it is clear that their detection by conventional optical fluorescence imaging and microscopy is limited by the complexity of historical painting materials. Here, we illustrate and prove the capabilities of time-resolved photoluminescence (TRPL) microscopy, equipped with both spectral and lifetime sensitivity at timescales ranging from nanoseconds to hundreds of microseconds, for the analysis of cross-sections of paint layers made of luminescent semiconductor pigments. The method is sensitive to heterogeneities within micro-samples and provides valuable information for the interpretation of the nature of the emissions in samples. A case study is presented on micro-samples from a painting by Henri Matisse and serves to demonstrate how TRPL can be used to identify the semiconductor pigments zinc white and cadmium yellow, and to inform future investigations of the degradation of a cadmium yellow paint. PMID:29160862
Lorentz-Symmetry Test at Planck-Scale Suppression With a Spin-Polarized 133Cs Cold Atom Clock.
Pihan-Le Bars, H; Guerlin, C; Lasseri, R-D; Ebran, J-P; Bailey, Q G; Bize, S; Khan, E; Wolf, P
2018-06-01
We present the results of a local Lorentz invariance (LLI) test performed with the 133Cs cold atom clock FO2, hosted at SYRTE. Such a test, relating the frequency shift between 133Cs hyperfine Zeeman substates with the Lorentz violating coefficients of the standard model extension (SME), has already been realized by Wolf et al. and led to state-of-the-art constraints on several SME proton coefficients. In this second analysis, we used an improved model, based on a second-order Lorentz transformation and a self-consistent relativistic mean field nuclear model, which enables us to extend the scope of the analysis from purely proton to both proton and neutron coefficients. We have also become sensitive to the isotropic coefficient , another SME coefficient that was not constrained by Wolf et al. The resulting limits on SME coefficients improve by up to 13 orders of magnitude the present maximal sensitivities for laboratory tests and reach the generally expected suppression scales at which signatures of Lorentz violation could appear.
NASA Technical Reports Server (NTRS)
Foss, W. E., Jr.
1979-01-01
The takeoff and approach performance of an aircraft is calculated in accordance with the airworthiness standards of the Federal Aviation Regulations. The aircraft and flight constraints are represented in sufficient detail to permit realistic sensitivity studies in terms of either configuration modifications or changes in operational procedures. The program may be used to investigate advanced operational procedures for noise alleviation such as programmed throttle and flap controls. Extensive profile time history data are generated and are placed on an interface file which can be input directly to the NASA aircraft noise prediction program (ANOPP).
Development of the Burst and Transient Source Experiment (BATSE)
NASA Technical Reports Server (NTRS)
Horack, J. M.
1991-01-01
The Burst and Transient Source Experiment (BATSE), one of four instruments on the Gamma Ray Observatory, consists of eight identical detector modules mounted on the corners of the spacecraft. Developed at MSFC, BATSE is the most sensitive gamma ray burst detector flown to date. Details of the assembly and test phase of the flight hardware development are presented. Results and descriptions of calibrations performed at MSFC, TRW, and KSC are documented extensively. With the presentation of each set of calibration results, the reader is provided with the means to access the raw calibration data for further review or analysis.
NASA Technical Reports Server (NTRS)
Huang, Zhao-Feng; Fint, Jeffry A.; Kuck, Frederick M.
2005-01-01
This paper addresses the in-flight reliability of a liquid propulsion engine system for a launch vehicle. We first establish a comprehensive list of system and sub-system reliability drivers for any liquid propulsion engine system. We then build a reliability model to parametrically analyze the impact of some reliability parameters. We present sensitivity analysis results for a selected subset of the key reliability drivers using the model. Reliability drivers identified include: number of engines for the liquid propulsion stage, single engine total reliability, engine operation duration, engine thrust size, reusability, engine de-rating or up-rating, engine-out design (including engine-out switching reliability, catastrophic fraction, preventable failure fraction, unnecessary shutdown fraction), propellant specific hazards, engine start and cutoff transient hazards, engine combustion cycles, vehicle and engine interface and interaction hazards, engine health management system, engine modification, engine ground start hold down with launch commit criteria, engine altitude start (1 in. start), multiple altitude restart (less than 1 restart), component, subsystem and system design, manufacturing/ground operation support/pre and post flight check outs and inspection, and extensiveness of the development program. We present some sensitivity analysis results for the following subset of the drivers: number of engines for the propulsion stage, single engine total reliability, engine operation duration, engine de-rating or up-rating requirements, engine-out design, catastrophic fraction, preventable failure fraction, unnecessary shutdown fraction, and engine health management system implementation (basic redlines and more advanced health management systems).
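A simplified independence model of a few of the drivers listed above (number of engines, engine-out design, catastrophic fraction); all numbers are hypothetical and this is not the authors' model:

```python
# Stage reliability with engine-out capability: the vehicle succeeds if at
# most (n - k) engines shut down benignly and none fails catastrophically.
from math import comb

def stage_reliability(n, k, r_engine, cat_frac):
    """n engines, k needed; r_engine: single-engine reliability;
    cat_frac: fraction of engine failures that are catastrophic."""
    total = 0.0
    for j in range(n - k + 1):                       # j engines fail
        p_j = comb(n, j) * (1 - r_engine) ** j * r_engine ** (n - j)
        total += p_j * (1 - cat_frac) ** j           # all j failures benign
    return total

for n in (1, 3, 5, 9):
    print(f"n={n}: no engine-out {stage_reliability(n, n, 0.995, 0.2):.5f}   "
          f"one engine-out {stage_reliability(n, max(n - 1, 1), 0.995, 0.2):.5f}")
```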
Xie, Bing; Xiao, Shi-chu; Zhu, Shi-hui; Xia, Zhao-fan
2012-05-01
We sought to evaluate the long-term health-related quality of life (HRQOL) in patients who survived severely extensive burns and to identify the clinical factors predicting their HRQOL. A cross-sectional study was conducted in 20 patients who had survived for more than 2 years after extensive burns involving ≥70% total body surface area (TBSA), treated between 1997 and 2009 in a burn center in Shanghai. Short Form-36 Medical Outcomes Survey (SF-36), Brief Version of Burn Specific Health Scale (BSHS-B) and Michigan Hand Outcome Questionnaire (MHQ) were used for the present evaluation. SF-36 scores were compared with a healthy Chinese population, and linear correlation analysis was performed to screen the clinical factors predicting physical and mental component summary (PCS and MCS) scores from SF-36. HRQOL scores from SF-36 were significantly lower in the domains of physical functioning, role limitations due to physical problems, pain, social functioning and role limitations due to emotional problems compared with population norms. Multiple linear regression analysis demonstrated that only return to work (RTW) predicted improved PCS, while age at injury, facial burns, skin grafting and length of hospital stay were correlated with MCS. Work, body image and heat sensitivity obtained the lowest BSHS-B scores in all 9 domains. Improvements of HRQOL could still be seen in BSHS-B scores in the domains of simple abilities, hand function, work and affect even after a quite long interval between burns and testing. Hand function of extensive burn patients obtained relatively poor MHQ scores, especially in those without RTW. Patients with extensive burns have a poorer quality of life compared with that of the general population. Relatively poor physical and psychological conditions still exist even after a long period. Meanwhile, a trend of gradual improvements was noted. This information will aid clinicians in the decision-making of comprehensive systematic regimens for long-term rehabilitation and psychosocial treatment. Copyright © 2011 Elsevier Ltd and ISBI. All rights reserved.
Centrifuge: rapid and sensitive classification of metagenomic sequences
Song, Li; Breitwieser, Florian P.
2016-01-01
Centrifuge is a novel microbial classification engine that enables rapid, accurate, and sensitive labeling of reads and quantification of species on desktop computers. The system uses an indexing scheme based on the Burrows-Wheeler transform (BWT) and the Ferragina-Manzini (FM) index, optimized specifically for the metagenomic classification problem. Centrifuge requires a relatively small index (4.2 GB for 4078 bacterial and 200 archaeal genomes) and classifies sequences at very high speed, allowing it to process the millions of reads from a typical high-throughput DNA sequencing run within a few minutes. Together, these advances enable timely and accurate analysis of large metagenomics data sets on conventional desktop computers. Because of its space-optimized indexing schemes, Centrifuge also makes it possible to index the entire NCBI nonredundant nucleotide sequence database (a total of 109 billion bases) with an index size of 69 GB, in contrast to k-mer-based indexing schemes, which require far more extensive space. PMID:27852649
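The space efficiency described above rests on BWT/FM-index backward search; a toy implementation on a stand-in "genome" (a real index replaces the linear-scan occurrence counts with compressed rank structures):

```python
# Toy BWT/FM-index backward search, the core idea behind Centrifuge's
# space-efficient classification index.

def bwt(text):
    text += "$"
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(rot[-1] for rot in rotations)

def fm_count(bwt_str, pattern):
    sorted_chars = sorted(bwt_str)
    # C[c] = number of characters in the text strictly smaller than c.
    C = {c: sorted_chars.index(c) for c in set(bwt_str)}
    # occ(c, i) = occurrences of c in bwt_str[:i] (linear scan for brevity).
    occ = lambda c, i: bwt_str[:i].count(c)
    lo, hi = 0, len(bwt_str)
    for c in reversed(pattern):           # backward search, one char at a time
        if c not in C:
            return 0
        lo = C[c] + occ(c, lo)
        hi = C[c] + occ(c, hi)
        if lo >= hi:
            return 0
    return hi - lo                        # matches of pattern in the text

genome = "ACGTACGTGACGA"                  # stand-in for an indexed reference
print(fm_count(bwt(genome), "ACG"))       # -> 3
```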
Factors influencing atmospheric composition over subarctic North America during summer
NASA Technical Reports Server (NTRS)
Wofsy, Steven C.; Fan, S. -M.; Blake, D. R.; Bradshaw, J. D.; Sandholm, S. T.; Singh, H. B.; Sachse, G. W.; Harriss, R. C.
1994-01-01
Elevated concentrations of hydrocarbons, CO, and nitrogen oxides were observed in extensive haze layers over northeastern Canada in the summer of 1990, during ABLE 3B. Halocarbon concentrations remained near background in most layers, indicating a source from biomass wildfires. Elevated concentrations of C2Cl4 provided a sensitive indicator for pollution from urban/industrial sources. Detailed analysis of regional budgets for CO and hydrocarbons indicates that biomass fires accounted for approximately 70% of the input to the subarctic for most hydrocarbons and for acetone, and for more than 50% of the CO. Regional sources for many species (including CO) exceeded chemical sinks during summer, and the boreal region provided a net source to midlatitudes. Interannual variations and long-term trends in atmospheric composition are sensitive to climatic change; a shift to warmer, drier conditions could increase the areas burned and thus the sources of many trace gases.
Assessing the risk profiles of potentially sensitive populations requires a 'tool chest' of methodological approaches to adequately characterize and evaluate these populations. At present, there is an extensive body of literature on methodologies that apply to the evaluation of...
Extensive genetic and DNA methylation variation contribute to heterosis in triploid loquat hybrids.
Liu, Chao; Wang, Mingbo; Wang, Lingli; Guo, Qigao; Liang, Guolu
2018-04-24
We aim to resolve the unclear origin of the loquat and to elucidate the heterosis mechanism of the triploid loquat. Here we investigated the genetic and epigenetic variations between the triploid plant and its parental lines using amplified fragment length polymorphism (AFLP) and methylation-sensitive amplified fragment length polymorphism (MSAP) analyses. We show that in addition to genetic variations, extensive DNA methylation variation occurred during the formation process of triploid loquat, with the triploid hybrid having increased DNA methylation compared to the parents. Furthermore, a correlation existed between genetic variation and DNA methylation remodeling, suggesting that genome instability may lead to DNA methylation variation or vice versa. Sequence analysis of the MSAP bands revealed that over 53% of them overlap with protein-coding genes, which may indicate a functional role of the differential DNA methylation in gene regulation and hence heterosis phenotypes. Consistent with this, the genetic and epigenetic alterations were associated closely with the heterosis phenotypes of triploid loquat, and this association varied for different traits. Our results suggested that the formation of triploid is accompanied by extensive genetic and DNA methylation variation, and these changes contribute to the heterosis phenotypes of the triploid loquats from the two cross lines.
Inhibition of the spider heartbeat by gravity and vibration
NASA Technical Reports Server (NTRS)
Finck, A.
1984-01-01
The rate and vigor of the spider heartbeat are controlled by an external pacemaker. A mechanical feature of the spider cardio-vascular system is the production of high serum pressure in the prosoma and the legs. This appears to be the source for leg extension. The lyriform organ on the patella of the leg is sensitive to vibratory and kinesthetic stimuli. This sensitivity depends upon the degree of leg extension. Thus the activity of the heart and the response characteristics of the sense receptor are related. The effect of a supra-threshold vibratory or gravitational stimulus is to produce an inhibition and a tachycardia of the spider heartbeat.
Shen, Qingming; Han, Li; Fan, Gaochao; Zhang, Jian-Rong; Jiang, Liping; Zhu, Jun-Jie
2015-01-01
A novel "signal-on" photoelectrochemical (PEC) biosensor for sensitive detection of human T-cell lymphotropic virus type II (HTLV-II) DNA was developed on the basis of enzymatic amplification coupled with terminal deoxynucleotidyl transferase (TdT)-mediated extension strategy. The intensity of the photocurrent signal was proportional to the concentration of the HTLV-II DNA-target DNA (tDNA) by dual signal amplification. In this protocol, GR-CdS:Mn/ZnS nanocomposites were used as photoelectric conversion material, while pDNA was used as the tDNA recognizing unit. Moreover, the TdT-mediated extension and the enzymatic signal amplification technique were used to enhance the sensitivity of detection. Using this novel dual signal amplification strategy, the prototype of PEC DNA sensor can detect as low as ∼0.033 fM of HTLV-II DNA with a linear range of 0.1-5000 fM, with excellent differentiation ability even for single-base mismatches. This PEC DNA assay opens a promising platform to detect various DNA targets at ultralow levels for early diagnoses of different diseases.
NASA Technical Reports Server (NTRS)
Brown, James L.
2014-01-01
The sensitivity of separation extent, wall pressure, and heating to variations in primary input flow parameters, such as Mach and Reynolds numbers and shock strength, is examined for 2D and axisymmetric hypersonic shock-wave/turbulent-boundary-layer interactions computed by Navier-Stokes methods using the SST turbulence model. Baseline parametric sensitivity response is provided in part by comparison with vetted experiments, and in part through updated correlations based on free interaction theory concepts. A recent database compilation of hypersonic 2D shock-wave/turbulent boundary layer experiments extensively used in a prior related uncertainty analysis provides the foundation for this updated correlation approach, as well as for more conventional validation. The primary CFD method for this work is DPLR, one of NASA's real-gas aerothermodynamic production RANS codes. Comparisons are also made with CFL3D, one of NASA's mature perfect-gas RANS codes. Deficiencies in the predicted separation response of RANS/SST solutions to parametric variations of test conditions are summarized, along with recommendations on future turbulence modeling approaches.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Danchaivijit, S.; Shetty, D.K.; Eldridge, J.
Matrix cracking was studied in a model unidirectional composite of SiC filaments in an epoxy-bonded alumina matrix. The residual clamping stress on the filaments due to the shrinkage of the epoxy was moderated with the addition of the alumina filler, and the filament surface was coated with a releasing agent to produce unbonded frictional interfaces. Uniaxial tension specimens with controlled through-cracks with bridging filaments were fabricated by a two-step casting technique. Critical stresses for extension of the filament-bridged cracks of various lengths were measured in uniaxial tension using a high-sensitivity extensometer. The measured crack-length dependence of the critical stress was in good agreement with the prediction of a stress-intensity analysis that employed a new force-displacement law for the bridging filaments. The analysis required independent experimental evaluation of the matrix fracture toughness, the interfacial sliding friction stress, and the residual tension in the matrix. The matrix-cracking stress for the test specimens without the deliberately introduced cracks was significantly higher than the steady-state cracking stress measured for the long, filament-bridged cracks.
An introduction to Bayesian statistics in health psychology.
Depaoli, Sarah; Rus, Holly M; Clifton, James P; van de Schoot, Rens; Tiemensma, Jitske
2017-09-01
The aim of the current article is to provide a brief introduction to Bayesian statistics within the field of health psychology. Bayesian methods are increasing in prevalence in applied fields, and they have been shown in simulation research to improve the estimation accuracy of structural equation models, latent growth curve (and mixture) models, and hierarchical linear models. Likewise, Bayesian methods can be used with small sample sizes since they do not rely on large sample theory. In this article, we discuss several important components of Bayesian statistics as they relate to health-based inquiries. We discuss the incorporation and impact of prior knowledge into the estimation process and the different components of the analysis that should be reported in an article. We present an example implementing Bayesian estimation in the context of blood pressure changes after participants experienced an acute stressor. We conclude with final thoughts on the implementation of Bayesian statistics in health psychology, including suggestions for reviewing Bayesian manuscripts and grant proposals. We have also included an extensive amount of online supplementary material to complement the content presented here, including Bayesian examples using many different software programmes and an extensive sensitivity analysis examining the impact of priors.
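A minimal worked example of the prior-to-posterior logic discussed above: a conjugate normal-normal update for a mean blood-pressure change, with hypothetical numbers and a known data standard deviation assumed for simplicity:

```python
# Conjugate normal-normal update for a mean blood-pressure change (mmHg),
# showing how an informative prior shifts the posterior. All numbers are
# hypothetical, not the article's example data.
import math

def posterior(prior_mean, prior_sd, data_mean, data_sd, n):
    prior_prec = 1.0 / prior_sd ** 2
    data_prec  = n / data_sd ** 2
    post_var   = 1.0 / (prior_prec + data_prec)
    post_mean  = post_var * (prior_prec * prior_mean + data_prec * data_mean)
    return post_mean, math.sqrt(post_var)

data_mean, data_sd, n = 8.0, 12.0, 30   # observed change after acute stressor

for label, (m0, s0) in {"diffuse prior": (0.0, 100.0),
                        "informative prior": (3.0, 2.0)}.items():
    m, s = posterior(m0, s0, data_mean, data_sd, n)
    print(f"{label:18s} posterior mean = {m:5.2f}, posterior SD = {s:4.2f}")
```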
[Review on HSPF model for simulation of hydrology and water quality processes].
Li, Zhao-fu; Liu, Hong-Yu; Li, Yan
2012-07-01
Hydrological Simulation Program-FORTRAN (HSPF), written in FORTRAN, is one of the best semi-distributed hydrology and water quality models, which was first developed based on the Stanford Watershed Model. Many studies on HSPF model application have been conducted. It can represent the contributions of sediment, nutrients, pesticides, conservatives and fecal coliforms from agricultural areas, continuously simulate water quantity and quality processes, as well as the effects of climate change and land use change on water quantity and quality. HSPF consists of three basic application components: PERLND (Pervious Land Segment), IMPLND (Impervious Land Segment), and RCHRES (free-flowing reach or mixed reservoirs). In general, HSPF has extensive application in the modeling of hydrology or water quality processes and the analysis of climate change and land use change. However, it has limited use in China. The main problems with HSPF include: (1) some algorithms and procedures still need to be revised, (2) due to the high standard for input data, the accuracy of the model is limited by spatial and attribute data, and (3) the model is only applicable for the simulation of well-mixed rivers, reservoirs and one-dimensional water bodies, so it must be integrated with other models to solve more complex problems. At present, studies on HSPF model development are still ongoing, such as revision of the model platform, extension of model functions, method development for model calibration, and analysis of parameter sensitivity. With the accumulation of basic data and improvement of data sharing, the HSPF model will be applied more extensively in China.
Wood, Elizabeth A.; McNamara, Katharine; Kowalewska, Agata; Ludgate, Nargiza
2018-01-01
This study was conducted to research and develop recommendations for gender transformative approaches that will address misconceptions around food and nutrition and reduce barriers around dietary diversity within rural Khatlon Province, Tajikistan. Most of the population in Tajikistan live in rural areas and spend a large part of their income on food. While stunting in children under 5 years has decreased, acute malnutrition and the number of underweight children have increased. This is a qualitative, cross-sectional study that involved secondary data analysis, key informant interviews (KIIs), and focus group discussions (FGDs) to gauge appropriate interventions for agricultural extension agents seeking to improve the nutritional outcomes of their communities. In February of 2017, data were collected from 4 KIIs and 15 FGDs that were stratified as mothers with young children, mothers-in-law, and husbands, across 12 different villages. The KIIs and FGDs were coded in NVivo software to uncover the most salient themes and characteristics of each. The participants of this study reported several misconceptions and taboos surrounding certain foods, especially during pregnancy, and food practices for children under the age of 5 years. Results also indicated a household hierarchy of decision-making surrounding food that included who buys, cooks, and decides what to buy. The findings of this study will be used as a springboard to launch gender-responsive and nutrition-sensitive interventions through the local agricultural extension agents. PMID:29599685
Wu, Xu; Zhu, Lin; Ma, Jiang; Ye, Yang; Lin, Ge
2017-10-25
Polyoxypregnane and its glycosides (POPs) are frequently present in plants of the Asclepiadaceae family, and have a variety of biological activities. There is a great need to comprehensively profile these phytochemicals and to quantify them for monitoring their contents in the herbs and the biological samples. However, POPs undergo extensive adduct ion formation in ESI-MS, which has posed a challenge for qualitative and quantitative analysis of POPs. In the present study, we took advantage of such extensive adduct ion formation to investigate the suitability of adduct ion-targeted analysis of POPs. For the qualitative analysis, we firstly demonstrated that the sodium and ammonium adduct ion-targeted product ion scans (PIS) provided adequate MS/MS fragmentations for structural characterization of POPs. Aided with precursor ion (PI) scans, which showed high selectivity and sensitivity and improved peak assignment confidence in conjunction with full scan (FS), the informative adduct ion-targeted PIS enabled rapid POPs profiling. For the quantification, we used formic acid rather than ammonium acetate as an additive in the mobile phase to avoid simultaneous formation of sodium and ammonium adduct ions, which greatly improved the reproducibility of the MS response of POPs. By monitoring the solely formed sodium adduct ions [M+Na]+, a method for simultaneous quantification of 25 POPs in the dynamic multiple reaction monitoring mode was then developed and validated. Finally, the aforementioned methods were applied to qualitative and quantitative analysis of POPs in the extract of a traditional Chinese medicinal herb, Marsdenia tenacissima (Roxb.) Wight et Arn., and in the plasma obtained from rats treated with this herb. The results demonstrated that adduct ion formation could be optimized for the qualitative and quantitative analysis of POPs, and our developed PI/FS-PIS scanning and sole [M+Na]+ ion monitoring significantly improved the analysis of POPs in both herbal and biological samples. This study also provides implications for the analysis of other compounds which undergo extensive adduct ion formation in ESI-MS. Copyright © 2017 Elsevier B.V. All rights reserved.
USDA-ARS?s Scientific Manuscript database
The extensive similarities between helminth proteins and allergens are thought to contribute to helminth-driven allergic sensitization. We investigated the molecular and structural similarities between Bla g 5, a major glutathione-S transferase (GST) allergen of cockroaches, and the GST of Wucherer...
Cost-effectiveness of prucalopride in the treatment of chronic constipation in the Netherlands
Nuijten, Mark J. C.; Dubois, Dominique J.; Joseph, Alain; Annemans, Lieven
2015-01-01
Objective: To assess the cost-effectiveness of prucalopride vs. continued laxative treatment for chronic constipation in patients in the Netherlands in whom laxatives have failed to provide adequate relief. Methods: A Markov model was developed to estimate the cost-effectiveness of prucalopride in patients with chronic constipation receiving standard laxative treatment from the perspective of Dutch payers in 2011. Data sources included published prucalopride clinical trials, published Dutch price/tariff lists, and national population statistics. The model simulated the clinical and economic outcomes associated with prucalopride vs. standard treatment and had a cycle length of 1 month and a follow-up time of 1 year. Response to treatment was defined as the proportion of patients who achieved “normal bowel function”. One-way and probabilistic sensitivity analyses were conducted to test the robustness of the base case. Results: In the base case analysis, the cost of prucalopride relative to continued laxative treatment was € 9015 per quality-adjusted life-year (QALY). Extensive sensitivity analyses and scenario analyses confirmed that the base case cost-effectiveness estimate was robust. One-way sensitivity analyses showed that the model was most sensitive in response to prucalopride; incremental cost-effectiveness ratios ranged from € 6475 to 15,380 per QALY. Probabilistic sensitivity analyses indicated that there is a greater than 80% probability that prucalopride would be cost-effective compared with continued standard treatment, assuming a willingness-to-pay threshold of € 20,000 per QALY from a Dutch societal perspective. A scenario analysis was performed for women only, which resulted in a cost-effectiveness ratio of € 7773 per QALY. Conclusion: Prucalopride was cost-effective in a Dutch patient population, as well as in a women-only subgroup, who had chronic constipation and who obtained inadequate relief from laxatives. PMID:25926794
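A compact sketch of the cost-per-QALY comparison above: a two-strategy Markov cohort with hypothetical transition probabilities, costs, and utilities (not the published model inputs), yielding an ICER:

```python
# Two-strategy Markov cohort sketch with 1-month cycles over 1 year,
# no discounting. All inputs are hypothetical.

def run_strategy(p_respond, monthly_cost, cycles=12):
    respond, fail = 0.0, 1.0            # cohort fractions by state
    cost = qaly = 0.0
    for _ in range(cycles):
        respond += fail * p_respond     # responders reach "normal bowel function"
        fail = 1.0 - respond
        cost += monthly_cost            # whole cohort stays on treatment
        qaly += (respond * 0.85 + fail * 0.70) / 12.0   # monthly utility
    return cost, qaly

cost_laxa, qaly_laxa = run_strategy(p_respond=0.02, monthly_cost=20.0)
cost_pruc, qaly_pruc = run_strategy(p_respond=0.08, monthly_cost=80.0)

icer = (cost_pruc - cost_laxa) / (qaly_pruc - qaly_laxa)
print(f"incremental cost = {cost_pruc - cost_laxa:.0f} EUR, "
      f"incremental QALY = {qaly_pruc - qaly_laxa:.4f}, ICER = {icer:.0f} EUR/QALY")
```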
Technical needs assessment: UWMC's sensitivity analysis guides decision-making.
Alotis, Michael
2003-01-01
In today's healthcare market, it is critical for provider institutions to offer the latest and best technological services while remaining fiscally sound. In academic practices, like the University of Washington Medical Center (UWMC), there are the added responsibilities of teaching and research that require a high-tech environment to thrive. These conditions and needs require extensive analysis of not only what equipment to buy, but also when and how it should be acquired. In an organization like the UWMC, which has strategically positioned itself for growth, it is useful to build a sensitivity analysis based on the strategic plan. A common forecasting tool, the sensitivity analysis lays out existing and projected business operations with volume assumptions displayed in layers. Each layer of current and projected activity is plotted over time and placed against a background depicting the capacity of the key modality. Key elements of a sensitivity analysis include necessity, economic assessment, performance, compatibility, reliability, service and training. There are two major triggers that cause us to consider the purchase of new imaging equipment and that determine how to evaluate the equipment we buy. One trigger revolves around our ability to serve patients by seeing them on a timely basis. If we find a significant gap between demand and our capacity to meet it, or anticipate increased demand based on trends, we begin to consider enhancing that capacity. A second trigger is the release of a breakthrough or substantially improved technology that will clearly have a positive impact on clinical efficacy and efficiency, thereby benefiting the patient. Especially in radiology departments, where many technologies require large expenditures, it is no longer acceptable simply to spend on new and improved technologies. It is necessary to justify them as a strong investment in clinical management and efficacy. There is pressure to provide "proof" at the department level and beyond. By applying sensitivity analysis and other forecasting methods, we are able to spend our resources judiciously in order to get the equipment we need when we need it. This helps ensure that we have efficacious, efficient systems--and enough of them--so that our patients are examined on a timely basis and our clinics run smoothly. It also goes a long way toward making certain that the best equipment is available to our clinicians, researchers, students and patients alike.
Arum, Oge; Saleh, Jamal; Boparai, Ravneet; Turner, Jeremy; Kopchick, John; Khardori, Romesh; Bartke, Andrzej
2014-01-01
The correlation of physiological sensitivity to insulin (vis-à-vis glycemic regulation) and longevity is extensively established, creating a justifiable gerontological interest in whether insulin sensitivity is causative, or even predictive, of some or all phenotypes of slowed senescence (including longevity). The growth hormone receptor/binding protein gene-disrupted (GHR-KO) mouse is the most extensively investigated insulin-sensitive, attenuated-aging model. It was reported that, in a manner divergent from similar mutants, GHR-KO mice fail to respond to caloric restriction (CR) by altering their insulin sensitivity. We hypothesized that maximized insulin responsiveness is what causes GHR-KO mice to exhibit a suppressed survivorship response to dietary (including caloric) restriction, and we attempted to refute this hypothesis by assessing the effects of CR on GHR-KO mice for varied slow-aging-associated phenotypes. In contrast to previous reports, we found GHR-KO mice on CR to be less responsive than their ad libitum (A.L.) counterparts to the hypoglycemia-inducing effects of insulin. Further, CR had negligible effects on the metabolism or cognition of GHR-KO mice. Therefore, our data suggest that the effects of CR on the insulin sensitivity of GHR-KO mice do not concur with the effects of CR on the aging of GHR-KO mice. PMID:25789159
NASA Astrophysics Data System (ADS)
Anastasopoulos, Dimitrios; Moretti, Patrizia; Geernaert, Thomas; De Pauw, Ben; Nawrot, Urszula; De Roeck, Guido; Berghmans, Francis; Reynders, Edwin
2017-03-01
The presence of damage in a civil structure alters its stiffness and consequently its modal characteristics. The identification of these changes can provide engineers with useful information about the condition of a structure and constitutes the basic principle of vibration-based structural health monitoring. While eigenfrequencies and mode shapes are the most commonly monitored modal characteristics, their sensitivity to structural damage may be low relative to their sensitivity to environmental influences. Modal strains or curvatures could offer an attractive alternative, but current measurement techniques encounter difficulties in capturing, with sufficient accuracy, the very small strain (sub-microstrain) levels occurring during ambient or operational excitation. This paper investigates the ability to obtain sub-microstrain accuracy with standard fiber-optic Bragg gratings (FBGs) using a novel optical signal processing algorithm that identifies the wavelength shift with high accuracy and precision. The novel technique is validated in an extensive experimental modal analysis test on a steel I-beam that is instrumented with FBG sensors at its top and bottom flanges. The raw wavelength FBG data are processed into strain values using both a novel correlation-based processing technique and a conventional peak-tracking technique. Subsequently, the strain time series are used to identify the beam's modal characteristics. Finally, the accuracy of both algorithms in the identification of modal characteristics is extensively investigated.
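The correlation-based idea is simple to prototype. The sketch below is a simplification (the paper's algorithm and its measured FBG spectra are not reproduced): it shifts a simulated Gaussian reflection peak by roughly a picometer, cross-correlates it with the reference spectrum, and refines the lag by three-point parabolic interpolation, which is what pushes the resolution below the wavelength-grid spacing.

```python
import numpy as np

# Correlation-based wavelength-shift estimation on a simulated FBG spectrum.
# The Gaussian peak, grid, and shift are assumed values for illustration.
wl = np.linspace(1549.5, 1550.5, 2001)            # wavelength grid, nm
ref = np.exp(-((wl - 1550.0) / 0.05) ** 2)        # reference reflection peak
true_shift = 0.0012                               # nm (sub-microstrain scale)
meas = np.exp(-((wl - 1550.0 - true_shift) / 0.05) ** 2)

xc = np.correlate(meas - meas.mean(), ref - ref.mean(), mode="full")
k = xc.argmax()
# Three-point parabolic interpolation around the correlation maximum gives
# sub-sample lag resolution, i.e. shifts far below the grid spacing.
frac = (xc[k - 1] - xc[k + 1]) / (2.0 * (xc[k - 1] - 2.0 * xc[k] + xc[k + 1]))
lag = (k - (len(ref) - 1)) + frac                 # lag in samples
dwl = lag * (wl[1] - wl[0])                       # lag converted to nm
print(f"estimated shift: {dwl*1e3:.3f} pm (true {true_shift*1e3:.3f} pm)")
```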
Jain, Avani; Srivastava, Madhur Kumar; Pawaskar, Alok Suresh; Shelley, Simon; Elangovan, Indirani; Jain, Hasmukh; Pandey, Somnath; Kalal, Shilpa; Amalachandran, Jaykanth
2015-01-01
To evaluate the advantages of contrast-enhanced F-18-fluorodeoxyglucose (FDG) positron emission tomography-computed tomography (PET-CECT) when used as an initial imaging modality in patients presenting with metastatic malignancy of undefined primary origin (MUO). A total of 243 patients with fine needle aspiration cytology/biopsy-proven MUO were included in this prospective study. Patients who had already been thoroughly evaluated for a primary, or in whom the primary tumor had been detected by another investigation, were excluded from the analysis. In total, 163 patients with a pathological diagnosis of malignancy but no apparent site of the primary tumor were finally selected for analysis. The site of probable primary malignancy suggested by PET-CECT was confirmed by biopsy/follow-up. PET-CECT suggested a probable site of primary in 128/163 (78.52%) patients. In 30 of the 35 remaining patients, a primary tumor was not detected even after extensive work-up. In 5 patients in whom PET-CECT was negative, a primary was found on further extensive investigation or follow-up. The sensitivity, specificity, positive predictive value and negative predictive value of the study were 95.76%, 66.67%, 88.28% and 85.71%, respectively. F-18 FDG PET-CECT aptly serves as an initial imaging modality owing to its high sensitivity and high negative and positive predictive values. PET-CECT not only surveys the whole body for the primary malignancy but also stages the disease accurately. The use of contrast improves the diagnostic utility of the modality and helps in staging the primary tumor. Although the benefits of using PET-CECT as an initial diagnostic modality are evident from this study, a larger study comparing conventional methods for diagnosing the primary in patients with MUO versus PET-CECT is needed.
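The four reported metrics follow from a 2x2 confusion matrix, and the abstract's counts pin it down: 128 positive scans of which 113 were confirmed, 5 missed primaries among the negative scans, and 30 true negatives. A short sketch recomputing them (counts inferred from the abstract rather than copied from the paper's tables):

```python
# Recomputing the reported diagnostic metrics from the 2x2 counts implied
# by the abstract (inferred, not copied from the paper's tables).
tp, fp = 113, 15     # 128 scans suggested a primary; 15 proved false
fn, tn = 5, 30       # 5 primaries missed; 30 with no primary ever found

sensitivity = tp / (tp + fn)     # 113/118 = 95.76%
specificity = tn / (tn + fp)     # 30/45  = 66.67%
ppv = tp / (tp + fp)             # 113/128 = 88.28%
npv = tn / (tn + fn)             # 30/35  = 85.71%
print(f"Se={sensitivity:.2%}  Sp={specificity:.2%}  "
      f"PPV={ppv:.2%}  NPV={npv:.2%}")
```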
Sensitivity of directed networks to the addition and pruning of edges and vertices
NASA Astrophysics Data System (ADS)
Goltsev, A. V.; Timár, G.; Mendes, J. F. F.
2017-08-01
Directed networks have various topologically different extensive components, in contrast to a single giant component in undirected networks. We study the sensitivity (response) of the sizes of these extensive components in directed complex networks to the addition and pruning of edges and vertices. We introduce the susceptibility, which quantifies this sensitivity. We show that topologically different parts of a directed network have different sensitivity to the addition and pruning of edges and vertices and, therefore, they are characterized by different susceptibilities. These susceptibilities diverge at the critical point of the directed percolation transition, signaling the appearance (or disappearance) of the giant strongly connected component in the infinite size limit. We demonstrate this behavior in randomly damaged real and synthetic directed complex networks, such as the World Wide Web, Twitter, the Caenorhabditis elegans neural network, directed Erdős-Rényi graphs, and others. We reveal a nonmonotonic dependence of the sensitivity to random pruning of edges or vertices in the case of C. elegans and Twitter that manifests specific structural peculiarities of these networks. We propose the measurements of the susceptibilities during the addition or pruning of edges and vertices as a new method for studying structural peculiarities of directed networks.
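The susceptibility idea can be explored numerically with off-the-shelf tools. The sketch below is a simplification, not the authors' formalism: it prunes random edges from a directed Erdős-Rényi graph and tracks the relative size of the giant strongly connected component; a peak in the finite-difference response per pruning step flags the directed percolation transition discussed above.

```python
import random
import networkx as nx

# Track the giant strongly connected component (GSCC) of a directed
# Erdos-Renyi graph under random edge pruning. Size and mean degree are
# assumed values chosen so the GSCC disappears partway through pruning.
random.seed(1)
n, c = 2000, 4.0
g = nx.gnp_random_graph(n, c / n, seed=1, directed=True)
edges = list(g.edges())
random.shuffle(edges)

def gscc_size(h):
    return max(len(s) for s in nx.strongly_connected_components(h)) / n

sizes, step = [gscc_size(g)], len(edges) // 20
for i in range(20):                     # remove edges in 5% batches
    g.remove_edges_from(edges[i * step:(i + 1) * step])
    sizes.append(gscc_size(g))

# |dS| per batch is a crude finite-size proxy for the susceptibility peak.
for j in range(1, len(sizes)):
    print(f"pruned {5*j:3d}%  S={sizes[j]:.3f}  dS={sizes[j]-sizes[j-1]:+.3f}")
```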
Ranking of physiotherapeutic evaluation methods as outcome measures of stifle functionality in dogs.
Hyytiäinen, Heli K; Mölsä, Sari H; Junnila, Jouni T; Laitinen-Vapaavuori, Outi M; Hielm-Björkman, Anna K
2013-04-08
Various physiotherapeutic evaluation methods are used to assess the functionality of dogs with stifle problems. Neither validity nor sensitivity of these methods has been investigated. This study aimed to determine the most valid and sensitive physiotherapeutic evaluation methods for assessing functional capacity in hind limbs of dogs with stifle problems and to serve as a basis for developing an indexed test for these dogs. A group of 43 dogs with unilateral surgically treated cranial cruciate ligament deficiency and osteoarthritic findings was used to test different physiotherapeutic evaluation methods. Twenty-one healthy dogs served as the control group and were used to determine normal variation in static weight bearing and range of motion. The protocol consisted of 14 different evaluation methods: visual evaluation of lameness, visual evaluation of diagonal movement, visual evaluation of functional active range of motion and difference in thrust of hind limbs via functional tests (sit-to-move and lie-to-move), movement in stairs, evaluation of hind limb muscle atrophy, manual evaluation of hind limb static weight bearing, quantitative measurement of static weight bearing of hind limbs with bathroom scales, and passive range of motion of hind limb stifle (flexion and extension) and tarsal (flexion and extension) joints using a universal goniometer. The results were compared with those from an orthopaedic examination, force plate analysis, radiographic evaluation, and a conclusive assessment. Congruity of the methods was assessed with a combination of three statistical approaches (Fisher's exact test and two differently calculated proportions of agreeing observations), and the components were ranked from best to worst. Sensitivities of all of the physiotherapeutic evaluation methods against each standard were calculated. Evaluation of asymmetry in a sitting and lying position, assessment of muscle atrophy, manual and measured static weight bearing, and measurement of stifle passive range of motion were the most valid and sensitive physiotherapeutic evaluation methods. Ranking of the various physiotherapeutic evaluation methods was accomplished. Several of these methods can be considered valid and sensitive when examining the functionality of dogs with stifle problems.
Carbone, V; van der Krogt, M M; Koopman, H F J M; Verdonschot, N
2016-06-14
Subject-specific musculoskeletal (MS) models of the lower extremity are essential for applications such as predicting the effects of orthopedic surgery. We performed an extensive sensitivity analysis to assess the effects of potential errors in Hill muscle-tendon (MT) model parameters for each of the 56 MT parts contained in a state-of-the-art MS model. We used two metrics, namely a Local Sensitivity Index (LSI) and an Overall Sensitivity Index (OSI), to distinguish the effect of the perturbation on the predicted force produced by the perturbed MT parts and by all the remaining MT parts, respectively, during a simulated gait cycle. Results indicated that the sensitivity of the model depended on the specific role of each MT part during gait, and not merely on its size and length. Tendon slack length was the most sensitive parameter, followed by maximal isometric muscle force and optimal muscle fiber length, while nominal pennation angle showed very low sensitivity. The highest sensitivity values were found for the MT parts that act as prime movers of gait (Soleus: average OSI=5.27%, Rectus Femoris: average OSI=4.47%, Gastrocnemius: average OSI=3.77%, Vastus Lateralis: average OSI=1.36%, Biceps Femoris Caput Longum: average OSI=1.06%) and hip stabilizers (Gluteus Medius: average OSI=3.10%, Obturator Internus: average OSI=1.96%, Gluteus Minimus: average OSI=1.40%, Piriformis: average OSI=0.98%), followed by the Peroneal muscles (average OSI=2.20%) and Tibialis Anterior (average OSI=1.78%), some of which were not included in previous sensitivity studies. Finally, the proposed priority list provides quantitative information indicating which MT parts and which MT parameters should be estimated most accurately to create detailed and reliable subject-specific MS models. Copyright © 2016 Elsevier Ltd. All rights reserved.
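The spirit of the one-at-a-time perturbation scheme can be sketched compactly. The snippet below perturbs one Hill-type parameter at a time by +/-2%, reruns a toy force model over a normalized gait cycle, and reports the mean absolute force change as a percentage index, loosely analogous to the paper's LSI; the real study instead perturbs 56 muscle-tendon parts inside a full musculoskeletal simulation, and the force model here is a placeholder, not a solver.

```python
import numpy as np

# One-at-a-time sensitivity sketch over a toy force model. Parameter names
# echo the Hill model (max force, optimal fiber length, tendon slack length,
# pennation angle), but all values and the force profile are placeholders.
params = {"f_max": 1000.0, "l_opt": 0.10, "l_slack": 0.25, "penn": 0.15}
t = np.linspace(0.0, 1.0, 101)                 # normalized gait cycle

def force(p):
    # Toy Gaussian force burst whose width depends on the length parameters.
    width = 0.15 * (p["l_slack"] / 0.25) * (p["l_opt"] / 0.10)
    return p["f_max"] * np.cos(p["penn"]) * np.exp(-((t - 0.4) / width) ** 2)

nominal = force(params)

def lsi_like(name, rel=0.02):                  # +/-2% perturbation (assumed)
    changes = []
    for sign in (+1, -1):
        p = dict(params, **{name: params[name] * (1 + sign * rel)})
        changes.append(np.mean(np.abs(force(p) - nominal)) / np.mean(nominal))
    return 100.0 * np.mean(changes)            # percentage index

for name in params:
    print(f"{name:8s} index = {lsi_like(name):.2f}%")
```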
NASA Astrophysics Data System (ADS)
Horesh, L.; Haber, E.
2009-09-01
The ℓ1 minimization problem has been studied extensively in the past few years. Recently, there has been a growing interest in its application to inverse problems. Most studies have concentrated on devising ways for sparse representation of a solution using a given prototype dictionary. Very few studies have addressed the more challenging problem of optimal dictionary construction, and even these were primarily devoted to the simplistic sparse coding application. In this paper, a sensitivity analysis of the inverse solution with respect to the dictionary is presented. This analysis reveals some of the salient features and intrinsic difficulties associated with the dictionary design problem. Equipped with these insights, we propose an optimization strategy that alleviates these hurdles while utilizing the derived sensitivity relations for the design of a locally optimal dictionary. Our optimality criterion is based on local minimization of the Bayesian risk, given a set of training models. We present a mathematical formulation and an algorithmic framework to achieve this goal. The proposed framework offers the design of dictionaries for inverse problems that incorporate non-trivial, non-injective observation operators, where the data and the recovered parameters may reside in different spaces. We test our algorithm and show that it yields improved dictionaries for a diverse set of inverse problems in geophysics and medical imaging.
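For context, the inner ℓ1 subproblem that any dictionary-design loop must solve repeatedly can be written in a few lines with iterative soft thresholding (ISTA). The dictionary below is random, whereas the paper's contribution is to optimize the dictionary itself against a Bayesian risk over training models; that outer loop is not shown.

```python
import numpy as np

# ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1 with a random dictionary A.
# Problem sizes, sparsity, noise level, and lam are assumed values.
rng = np.random.default_rng(0)
m, n, k = 64, 256, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)     # prototype dictionary
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true + 0.01 * rng.standard_normal(m)   # noisy observations

lam = 0.02
L = np.linalg.norm(A, 2) ** 2                    # Lipschitz constant of grad
x = np.zeros(n)
for _ in range(500):
    z = x - A.T @ (A @ x - b) / L                # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

print(f"nonzeros: true={k}, recovered={int(np.sum(np.abs(x) > 1e-3))}")
```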
Improvement of sensitivity in PIGE analysis of steels by neutron-gamma coincidences measurement
NASA Astrophysics Data System (ADS)
Ene, Antoaneta
2004-07-01
In this work the sensitivities of minor elements in a standard steel sample EURONORM-CRM No. 085-1 irradiated with beams of 5.5 MeV protons and 5 MeV deuterons have been determined both by regular proton- (p-PIGE) and deuteron-induced prompt gamma-ray emission (d-PIGE) methods and with the selection of the (p, n) and (d, n) reaction channels, measuring the neutron-gamma coincidences. A check on the elemental composition of the steel standard has also been carried out using combined INAA and PIXE, and quantitative determinations have been made for some elements whose concentrations were not specified by the manufacturer, such as Al, As, Cr, Mo, Na, Ni and W. This complex study has resulted in a significant improvement of the sensitivities for some minor elements in steel by reducing the background and increasing the peak-to-background ratio in the coincident prompt gamma-ray spectra, as a result of the elimination of competing nuclear reactions originating from isotopes of the adjacent elements in the periodic table present in the steel target. This extension of the PIGE method could be adopted by any analyst with the necessary equipment for the analysis of a wide variety of matrices that are refractory enough to withstand the heating effect of the bombarding beam, taking into account that this type of experiment requires longer irradiation times.
Resistance-associated point mutations in insecticide-insensitive acetylcholinesterase.
Mutero, A; Pralavorio, M; Bride, J M; Fournier, D
1994-01-01
Extensive utilization of pesticides against insects provides us with a good model for studying the adaptation of a eukaryotic genome to a strong selective pressure. One mechanism of resistance is the alteration of acetylcholinesterase (EC 3.1.1.7), the molecular target for organophosphates and carbamates. Here, we report the sequence analysis of the Ace gene in several resistant field strains of Drosophila melanogaster. This analysis resulted in the identification of five point mutations associated with reduced sensitivities to insecticides. In some cases, several of these mutations were found to be combined in the same protein, leading to different resistance patterns. Our results suggest that recombination between resistant alleles preexisting in natural populations is a mechanism by which insects rapidly adapt to new selective pressures. PMID:8016090
Wright, Robert O; Teitelbaum, Susan; Thompson, Claudia; Balshaw, David
2018-04-01
The objective is to demonstrate the role of the environment as a predictor of child health. The Children's Health Exposure Analysis Resource (CHEAR) assists the Environmental influences on Child Health Outcomes (ECHO) program in understanding the time-sensitive and dynamic influence of the perinatal and childhood environment on developmental trajectories by providing a central infrastructure for the analysis of biological samples from the ECHO cohort awards. CHEAR will assist ECHO cohorts in defining the critical or sensitive periods for effects associated with environmental exposures. Effective incorporation of these principles into multiple existing cohorts requires extensive multidisciplinary expertise, creativity, and flexibility. The pursuit of life-course-informed research within the CHEAR/ECHO structure represents a shift in focus from single-exposure inquiries to one that addresses multiple environmental risk factors linked through shared vulnerabilities. CHEAR provides ECHO with both targeted analyses of inorganic and organic toxicants, nutrients, and social-stress markers and untargeted analyses to assess the exposome and discover exposure-outcome relationships. Utilization of CHEAR as a single site for the characterization of environmental exposures within the ECHO cohorts will not only support the investigation of the influence of the environment on children's health but also support the harmonization of data across the disparate cohorts that comprise ECHO.
Bile acids: analysis in biological fluids and tissues
Griffiths, William J.; Sjövall, Jan
2010-01-01
The formation of bile acids/bile alcohols is of major importance for the maintenance of cholesterol homeostasis. Besides their functions in lipid absorption, bile acids/bile alcohols are regulatory molecules for a number of metabolic processes. Their effects are structure-dependent, and numerous metabolic conversions result in a complex mixture of biologically active and inactive forms. Advanced methods are required to characterize and quantify individual bile acids in these mixtures. A combination of such analyses with analyses of the proteome will be required for a better understanding of mechanisms of action and nature of endogenous ligands. Mass spectrometry is the basic detection technique for effluents from chromatographic columns. Capillary liquid chromatography-mass spectrometry with electrospray ionization provides the highest sensitivity in metabolome analysis. Classical gas chromatography-mass spectrometry is less sensitive but offers extensive structure-dependent fragmentation increasing the specificity in analyses of isobaric isomers of unconjugated bile acids. Depending on the nature of the bile acid/bile alcohol mixture and the range of concentration of individuals, different sample preparation sequences, from simple extractions to group separations and derivatizations, are applicable. We review the methods currently available for the analysis of bile acids in biological fluids and tissues, with emphasis on the combination of liquid and gas phase chromatography with mass spectrometry. PMID:20008121
Fluctuating hyperfine interactions: an updated computational implementation
NASA Astrophysics Data System (ADS)
Zacate, M. O.; Evenson, W. E.
2015-04-01
The stochastic hyperfine interactions modeling library (SHIML) is a set of routines written in the C programming language designed to assist in the analysis of stochastic models of hyperfine interactions. The routines read a text-file description of the model, set up the Blume matrix, upon which the evolution operator of the quantum mechanical system depends, and calculate the eigenvalues and eigenvectors of the Blume matrix, from which theoretical spectra of experimental techniques can be calculated. The original version of SHIML constructs Blume matrices applicable for methods that measure hyperfine interactions with only a single nuclear spin state. In this paper, we report an extension of the library to provide support for methods such as Mössbauer spectroscopy and nuclear resonant scattering of synchrotron radiation, which are sensitive to interactions with two nuclear spin states. Examples will be presented that illustrate the use of this extension of SHIML to generate Mössbauer spectra for polycrystalline samples under a number of fluctuating hyperfine field models.
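To make the role of the Blume matrix concrete, here is a minimal two-state Kubo-Anderson-style sketch of the kind of calculation such a library automates: the precession frequency jumps between +w0 and -w0 at a rate r, the matrix B = i*Omega + Pi is diagonalized, and the relaxation function G(t) follows from its eigenvalues. This is illustrative only and does not use SHIML's actual interface; all numerical values are assumed.

```python
import numpy as np

# Two-state fluctuating hyperfine interaction: the precession frequency
# jumps between +w0 and -w0 at rate r. The Blume matrix B = i*Omega + Pi
# is diagonalized, and G(t) = p0 . exp(B t) . 1 is expanded on its
# eigenbasis. Frequencies and rates are placeholders.
w0, r = 2 * np.pi * 10.0, 5.0          # precession frequency (rad/s), jump rate
Omega = np.diag([w0, -w0])
Pi = np.array([[-r, r], [r, -r]])      # stochastic relaxation matrix
B = 1j * Omega + Pi

vals, vecs = np.linalg.eig(B)
p0 = np.array([0.5, 0.5])              # equal equilibrium occupations
coef = (p0 @ vecs) * (np.linalg.inv(vecs) @ np.ones(2))

for t in np.linspace(0.0, 0.2, 5):
    G = np.sum(coef * np.exp(vals * t)).real
    print(f"t={t:.2f} s  G={G:+.4f}")
```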
Time series regression studies in environmental epidemiology.
Bhaskaran, Krishnan; Gasparrini, Antonio; Hajat, Shakoor; Smeeth, Liam; Armstrong, Ben
2013-08-01
Time series regression studies have been widely used in environmental epidemiology, notably in investigating the short-term associations between exposures such as air pollution, weather variables or pollen, and health outcomes such as mortality, myocardial infarction or disease-specific hospital admissions. Typically, for both exposure and outcome, data are available at regular time intervals (e.g. daily pollution levels and daily mortality counts) and the aim is to explore short-term associations between them. In this article, we describe the general features of time series data, and we outline the analysis process, beginning with descriptive analysis, then focusing on issues in time series regression that differ from other regression methods: modelling short-term fluctuations in the presence of seasonal and long-term patterns, dealing with time varying confounding factors and modelling delayed ('lagged') associations between exposure and outcome. We finish with advice on model checking and sensitivity analysis, and some common extensions to the basic model.
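The modelling steps outlined above translate into a short script. The sketch below fits a Poisson GLM to simulated daily data with a one-day lagged exposure and harmonic control for seasonality and trend; the dataset and all coefficients are invented for illustration, and in practice penalized splines often replace the harmonics.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated daily time series: counts depend on yesterday's exposure plus a
# seasonal pattern. All coefficients are invented for illustration.
rng = np.random.default_rng(42)
n = 730
day = np.arange(n)
poll = 20 + 5 * np.sin(2 * np.pi * day / 365) + rng.normal(0, 2, n)
season = 0.3 * np.sin(2 * np.pi * day / 365 + 1.0)
lag1 = np.roll(poll, 1)                       # yesterday's exposure
mu = np.exp(2.0 + 0.01 * lag1 + season)
deaths = rng.poisson(mu)

df = pd.DataFrame({"deaths": deaths[1:],      # drop day 0 (undefined lag)
                   "poll_lag1": lag1[1:],
                   "sin1": np.sin(2 * np.pi * day[1:] / 365),
                   "cos1": np.cos(2 * np.pi * day[1:] / 365),
                   "trend": day[1:] / n})
X = sm.add_constant(df[["poll_lag1", "sin1", "cos1", "trend"]])
fit = sm.GLM(df["deaths"], X, family=sm.families.Poisson()).fit()
rr = np.exp(fit.params["poll_lag1"])          # rate ratio per unit exposure
lo, hi = np.exp(fit.conf_int().loc["poll_lag1"])
print(f"RR per unit of lagged exposure: {rr:.4f} (95% CI {lo:.4f}-{hi:.4f})")
```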
Inventory Control System for a Healthcare Apparel Service Centre with Stockout Risk: A Case Analysis
Pan, An; Hui, Chi-Leung
2017-01-01
Based on the real-world inventory control problem of a capacitated healthcare apparel service centre in Hong Kong which provides tailor-made apparel-making services for elderly and disabled people, this paper studies a partially backordered continuous review inventory control problem in which the product demand follows a Poisson process with a constant lead time. The system is controlled by a (Q,r) inventory policy which incorporates stockout risk, storage capacity, and partial backlogging. The healthcare apparel service centre, under the capacity constraint, aims to minimize inventory cost while achieving a low stockout risk. To address this challenge, an optimization problem is constructed. A real case-based data analysis is conducted, and the result shows that the expected total cost on an order cycle is reduced substantially, by around 20%, with our proposed optimal inventory control policy. An extensive sensitivity analysis is conducted to generate additional insights. PMID:29527283
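The policy's mechanics can be checked with a short simulation. The sketch below implements a daily-review approximation of the continuous-review (Q, r) rule under Poisson demand with a constant lead time and partial backordering; all rates and costs are hypothetical, and the centre's storage-capacity constraint and chance-constrained stockout risk are omitted.

```python
import numpy as np

# Daily-review approximation of a (Q, r) policy with Poisson demand,
# constant lead time, and partial backordering. All numbers are assumed.
rng = np.random.default_rng(7)
lam, lead = 4, 2              # mean demand per day, lead time in days
Q, r = 25, 12                 # order quantity and reorder point
h, p, K = 0.5, 4.0, 50.0      # holding, backorder, and fixed ordering costs
beta = 0.7                    # fraction of unmet demand that is backordered

def average_daily_cost(days=50_000):
    level, pipeline, total = float(Q), [], 0.0
    for _ in range(days):
        pipeline = [(t - 1, q) for (t, q) in pipeline]
        level += sum(q for t, q in pipeline if t <= 0)     # orders arrive
        pipeline = [(t, q) for (t, q) in pipeline if t > 0]
        d = rng.poisson(lam)
        short = max(d - max(level, 0.0), 0.0)              # unmet demand
        level = level - d + (1 - beta) * short             # lost sales leave
        if level + sum(q for _, q in pipeline) <= r:       # position <= r
            pipeline.append((lead, Q))                     # place an order
            total += K
        total += h * max(level, 0.0) + p * max(-level, 0.0)
    return total / days

print(f"average cost per day: {average_daily_cost():.2f}")
```

Sweeping Q and r over a grid of candidate values then gives a simulation-based view of the cost surface that the paper's optimization addresses analytically.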
Electrophoresis for the analysis of heparin purity and quality.
Volpi, Nicola; Maccari, Francesca; Suwan, Jiraporn; Linhardt, Robert J
2012-06-01
The adulteration of raw heparin with oversulfated chondroitin sulfate (OSCS) in 2007-2008 produced a global crisis resulting in extensive revisions to the pharmacopeia monographs and prompting the FDA to recommend the development of additional methods for the analysis of heparin purity. As a consequence, a wide variety of innovative analytical approaches have been developed for the quality assurance and purity of unfractionated and low-molecular-weight heparins. This review discusses recent developments in electrophoresis techniques available for the sensitive separation, detection, and partial structural characterization of heparin contaminants. In particular, this review summarizes recent publications on heparin quality and related impurity analysis using electrophoretic separations such as capillary electrophoresis (CE) of intact polysaccharides and hexosamines derived from their acidic hydrolysis, and polyacrylamide gel electrophoresis (PAGE) for the separation of heparin samples without and in the presence of its relatively specific depolymerization process with nitrous acid treatment. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
COINSTAC: Decentralizing the future of brain imaging analysis
Ming, Jing; Verner, Eric; Sarwate, Anand; Kelly, Ross; Reed, Cory; Kahleck, Torran; Silva, Rogers; Panta, Sandeep; Turner, Jessica; Plis, Sergey; Calhoun, Vince
2017-01-01
In the era of Big Data, sharing neuroimaging data across multiple sites has become increasingly important. However, researchers who want to engage in centralized, large-scale data sharing and analysis must often contend with problems such as high database cost, long data transfer time, extensive manual effort, and privacy issues for sensitive data. To remove these barriers to enable easier data sharing and analysis, we introduced a new, decentralized, privacy-enabled infrastructure model for brain imaging data called COINSTAC in 2016. We have continued development of COINSTAC since this model was first introduced. One of the challenges with such a model is adapting the required algorithms to function within a decentralized framework. In this paper, we report on how we are solving this problem, along with our progress on several fronts, including additional decentralized algorithms implementation, user interface enhancement, decentralized regression statistic calculation, and complete pipeline specifications. PMID:29123643
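Decentralized regression, one of the computations mentioned above, has a particularly compact form when each site shares only aggregate matrices. A hedged sketch of that pattern follows; it is illustrative only, not COINSTAC's actual API, which additionally supports privacy mechanisms for the shared aggregates.

```python
import numpy as np

# Each site computes only the sufficient statistics X'X and X'y on its own
# data; the aggregator sums them and solves the pooled least-squares problem.
# No row-level data ever leaves a site. Sizes and coefficients are assumed.
rng = np.random.default_rng(3)
beta_true = np.array([1.0, -2.0, 0.5])

def site_stats(n):
    X = np.column_stack([np.ones(n), rng.standard_normal((n, 2))])
    y = X @ beta_true + 0.1 * rng.standard_normal(n)
    return X.T @ X, X.T @ y          # only these aggregates are shared

stats = [site_stats(n) for n in (120, 80, 200)]   # three unequal sites
XtX = sum(s[0] for s in stats)
Xty = sum(s[1] for s in stats)
beta_hat = np.linalg.solve(XtX, Xty)              # identical to pooled OLS
print("pooled-equivalent estimate:", np.round(beta_hat, 3))
```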
Mediation Analysis: A Practitioner's Guide.
VanderWeele, Tyler J
2016-01-01
This article provides an overview of recent developments in mediation analysis, that is, analyses used to assess the relative magnitude of different pathways and mechanisms by which an exposure may affect an outcome. Traditional approaches to mediation in the biomedical and social sciences are described. Attention is given to the confounding assumptions required for a causal interpretation of direct and indirect effect estimates. Methods from the causal inference literature to conduct mediation in the presence of exposure-mediator interactions, binary outcomes, binary mediators, and case-control study designs are presented. Sensitivity analysis techniques for unmeasured confounding and measurement error are introduced. Discussion is given to extensions to time-to-event outcomes and multiple mediators. Further flexible modeling strategies arising from the precise counterfactual definitions of direct and indirect effects are also described. The focus throughout is on methodology that is easily implementable in practice across a broad range of potential applications.
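For the regression-based flavor of these methods, the counterfactual definitions reduce to closed-form expressions when the mediator and outcome are continuous. The sketch below simulates data, fits the mediator and outcome models (the latter with an exposure-mediator interaction), and evaluates the natural direct and indirect effects; all variable names and coefficients are invented, and the confounder is assumed fully measured, which is precisely what the sensitivity techniques mentioned above interrogate.

```python
import numpy as np
import statsmodels.api as sm

# Regression-based mediation with an exposure-mediator interaction.
# Simulated data; coefficients are placeholders.
rng = np.random.default_rng(0)
n = 5000
C = rng.standard_normal(n)                  # measured baseline confounder
A = rng.binomial(1, 0.5, n)                 # binary exposure
M = 0.5 + 0.8 * A + 0.3 * C + rng.standard_normal(n)
Y = 1.0 + 0.4 * A + 0.6 * M + 0.2 * A * M + 0.3 * C + rng.standard_normal(n)

med = sm.OLS(M, sm.add_constant(np.column_stack([A, C]))).fit()
out = sm.OLS(Y, sm.add_constant(np.column_stack([A, M, A * M, C]))).fit()
b0, b1, b2 = med.params                     # mediator model: const, A, C
_, t1, t2, t3, _ = out.params               # outcome model: A, M, A*M terms

c = 0.0                                     # evaluate at C = 0
nde = t1 + t3 * (b0 + b2 * c)               # natural direct effect (0 -> 1)
nie = (t2 + t3) * b1                        # natural indirect effect (0 -> 1)
print(f"NDE={nde:.3f}  NIE={nie:.3f}  total={nde + nie:.3f}")
```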
NASA Astrophysics Data System (ADS)
Schwarz, Massimiliano; Cohen, Denis
2017-04-01
The morphology and extent of hydrological pathways, in combination with the spatio-temporal variability of rainfall events and the heterogeneities of the hydro-mechanical properties of soils, have a major impact on the hydrological conditions that locally determine the triggering of shallow landslides. The coupling of these processes at different spatial scales is an enormous challenge for slope stability modeling at the catchment scale. In this work we present a sensitivity analysis of a new dual-porosity hydrological model implemented in the hydro-mechanical model SOSlope for the modeling of shallow landslides on vegetated hillslopes. The proposed model links the calculation of the saturation dynamics of preferential flow paths, based on hydrological and topographical characteristics of the landscape, to the hydro-mechanical behavior of the soil along a potential failure surface due to changes of soil matrix saturation. Furthermore, the hydro-mechanical changes of soil conditions are linked to the local stress-strain properties of the (rooted) soil that ultimately determine the force redistribution and related deformations at the hillslope scale. The model considers forces to be redistributed through three types of solicitation: tension, compression, and shearing. The present analysis shows how the conditions of deformation due to the passive earth pressure mobilized at the toe of the landslide are particularly important in defining the timing and extent of shallow landslides. The model also shows that, in densely rooted hillslopes, lateral force redistribution under tension through the root network may substantially contribute to stabilizing slopes, avoiding crack formation and large deformations. The results of the sensitivity analysis are discussed in the context of protection forest management and bioengineering techniques.
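For orientation, the stabilizing role of roots can be illustrated with the classical infinite-slope limit-equilibrium formula, in which root reinforcement enters as an additional cohesion. This is a far simpler picture than SOSlope's force-redistribution scheme, and every number below is an assumed placeholder.

```python
import numpy as np

# Back-of-the-envelope infinite-slope factor of safety with root cohesion.
# All values are assumed placeholders, not calibrated parameters.
gamma = 18e3                      # soil unit weight (N/m^3)
z, beta = 1.5, np.radians(30)     # failure-plane depth (m) and slope angle
phi, c_s = np.radians(32), 2e3    # friction angle and soil cohesion (Pa)

def factor_of_safety(u, c_r):
    """u: pore-water pressure (Pa); c_r: root cohesion (Pa)."""
    driving = gamma * z * np.sin(beta) * np.cos(beta)
    resisting = c_s + c_r + (gamma * z * np.cos(beta) ** 2 - u) * np.tan(phi)
    return resisting / driving

for u in (0.0, 5e3, 10e3):        # rising pore pressure during a storm
    print(f"u={u/1e3:4.1f} kPa  FS bare={factor_of_safety(u, 0.0):.2f}"
          f"  FS rooted={factor_of_safety(u, 5e3):.2f}")
```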
Dingari, Narahara Chari; Barman, Ishan; Myakalwar, Ashwin Kumar; Tewari, Surya P; Kumar Gundawar, Manoj
2012-03-20
Despite the intrinsic elemental analysis capability and lack of sample preparation requirements, laser-induced breakdown spectroscopy (LIBS) has not been extensively used for real-world applications, e.g., quality assurance and process monitoring. Specifically, variability in sample, system, and experimental parameters in LIBS studies present a substantive hurdle for robust classification, even when standard multivariate chemometric techniques are used for analysis. Considering pharmaceutical sample investigation as an example, we propose the use of support vector machines (SVM) as a nonlinear classification method over conventional linear techniques such as soft independent modeling of class analogy (SIMCA) and partial least-squares discriminant analysis (PLS-DA) for discrimination based on LIBS measurements. Using over-the-counter pharmaceutical samples, we demonstrate that the application of SVM enables statistically significant improvements in prospective classification accuracy (sensitivity), because of its ability to address variability in LIBS sample ablation and plasma self-absorption behavior. Furthermore, our results reveal that SVM provides nearly 10% improvement in correct allocation rate and a concomitant reduction in misclassification rates of 75% (cf. PLS-DA) and 80% (cf. SIMCA)-when measurements from samples not included in the training set are incorporated in the test data-highlighting its robustness. While further studies on a wider matrix of sample types performed using different LIBS systems is needed to fully characterize the capability of SVM to provide superior predictions, we anticipate that the improved sensitivity and robustness observed here will facilitate application of the proposed LIBS-SVM toolbox for screening drugs and detecting counterfeit samples, as well as in related areas of forensic and biological sample analysis.
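The classification comparison lends itself to a compact illustration. The sketch below trains an RBF-kernel SVM on synthetic two-class spectra in which a multiplicative intensity jitter stands in for shot-to-shot ablation variability; it is a toy, not the paper's LIBS pipeline or data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic two-class "spectra": narrow peaks at class-specific positions,
# with a random multiplicative scale mimicking ablation variability.
rng = np.random.default_rng(1)
wl = np.linspace(0, 1, 200)

def spectra(centers, n):
    out = []
    for _ in range(n):
        scale = rng.uniform(0.5, 1.5)            # intensity drift per shot
        s = sum(np.exp(-((wl - c) / 0.01) ** 2) for c in centers)
        out.append(scale * s + 0.02 * rng.standard_normal(wl.size))
    return np.array(out)

X = np.vstack([spectra([0.2, 0.5], 150), spectra([0.2, 0.8], 150)])
y = np.repeat([0, 1], 150)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
print(f"held-out accuracy: {clf.fit(Xtr, ytr).score(Xte, yte):.3f}")
```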
Reducing the overlay metrology sensitivity to perturbations of the measurement stack
NASA Astrophysics Data System (ADS)
Zhou, Yue; Park, DeNeil; Gutjahr, Karsten; Gottipati, Abhishek; Vuong, Tam; Bae, Sung Yong; Stokes, Nicholas; Jiang, Aiqin; Hsu, Po Ya; O'Mahony, Mark; Donini, Andrea; Visser, Bart; de Ruiter, Chris; Grzela, Grzegorz; van der Laan, Hans; Jak, Martin; Izikson, Pavel; Morgan, Stephen
2017-03-01
Overlay metrology setup today faces a continuously changing landscape of process steps. During Diffraction Based Overlay (DBO) metrology setup, many different metrology target designs are evaluated in order to cover the full process window. The standard method for overlay metrology setup consists of single-wafer optimization in which the performance of all available metrology targets is evaluated. Without the availability of external reference data or multi-wafer measurements it is hard to predict the metrology accuracy and robustness against process variations which naturally occur from wafer-to-wafer and lot-to-lot. In this paper, the capabilities of the Holistic Metrology Qualification (HMQ) setup flow are outlined, in particular with respect to overlay metrology accuracy and process robustness. The significance of robustness and its impact on overlay measurements is discussed using multiple examples. Measurement differences caused by slight stack variations across the target area, called grating imbalance, are shown to cause significant errors in the overlay calculation in case the recipe and target have not been selected properly. To this end, an overlay sensitivity check on perturbations of the measurement stack is presented as an improvement to the overlay metrology setup flow. An extensive analysis of Key Performance Indicators (KPIs) from HMQ recipe optimization is performed on µDBO measurements of product wafers. The key parameters describing the sensitivity to perturbations of the measurement stack are based on an intra-target analysis. Using advanced image analysis, which is only possible for image plane detection of µDBO instead of pupil plane detection of DBO, the process robustness performance of a recipe can be determined. Intra-target analysis can be applied for a wide range of applications, independent of layers and devices.
De Carvalho, Thays C; Tosato, Flavia; Souza, Lindamara M; Santos, Heloa; Merlo, Bianca B; Ortiz, Rafael S; Rodrigues, Rayza R T; Filgueiras, Paulo R; França, Hildegardo S; Augusti, Rodinei; Romão, Wanderson; Vaz, Boniek G
2016-05-01
Thin-layer chromatography (TLC) is a simple and inexpensive type of chromatography that is extensively used in forensic laboratories for the analysis of drugs of abuse. In this work, TLC is optimized to analyze cocaine and its adulterants (caffeine, benzocaine, lidocaine and phenacetin), and the sensitivity (visual determination of LOD from 0.5 to 14 mg mL(-1)) and selectivity (from the study of three different eluents: CHCl3:CH3OH:HCOOHglacial (75:20:5 v%), (C2H5)2O:CHCl3 (50:50 v%) and CH3OH:NH4OH (100:1.5 v%)) were evaluated. Aiming to improve these figures of merit, the TLC spots were identified and quantified (linearity with R2 > 0.98) by paper spray ionization mass spectrometry (PS-MS), now reaching LOD values as low as 1.0 μg mL(-1). The method developed in this work opens up the prospect of enhancing the reliability of traditional and routine TLC analysis employed in criminal expertise units. Higher sensitivity, selectivity and rapidity can be provided in forensic reports, besides the possibility of quantitative analysis. Owing to its great simplicity, the PS(+)-MS technique can also be coupled directly to other separation techniques such as paper chromatography and can be used in analyses of LSD blotters, documents and synthetic drugs. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Prostate-cancer diagnosis by non-invasive prostatic Zinc mapping using X-Ray Fluorescence (XRF)
NASA Astrophysics Data System (ADS)
Cortesi, Marco
At present, the major screening tools (PSA, DRE, TRUS) for prostate cancer lack sensitivity and specificity, and none can distinguish between low-grade indolent cancer and high-grade lethal disease. The situation calls for the promotion of alternative approaches, with better detection sensitivity and specificity, to provide more efficient selection of patients for biopsy, with possible guidance of the biopsy needles. The prime objective of the present work was the development of a novel non-invasive method and tool for promoting detection, localization, diagnosis and follow-up of PCa. The method is based on in-vivo imaging of the Zn distribution in the peripheral zone of the prostate by a trans-rectal X-ray fluorescence (XRF) probe. Local Zn levels, measured in 1–4 mm3 fresh-tissue biopsy segments from an extensive clinical study involving several hundred patients, showed an unambiguous correlation with the histological classification of the tissue (Non-Cancer or PCa), and a systematic positive correlation of the Zn-depletion level with the cancer-aggressiveness grade (Gleason classification). A detailed analysis of computer-simulated Zn-concentration images (with input parameters from clinical data) disclosed the potential of the method to provide sensitive and specific detection and localization of the lesion, its grade and extension. Furthermore, it also yielded invaluable data on requirements, such as the image resolution and counting statistics, that a trans-rectal XRF probe must meet for in-vivo recording of prostatic-Zn maps in patients. By means of systematic table-top experiments on prostate phantoms comprising tumor-like inclusions, followed by dedicated Monte Carlo simulations, the XRF probe and its components were designed and optimized. Multi-parameter analysis of the experimental data confirmed the simulation estimates of the XRF detection system in terms of delivered dose, counting statistics, scanning resolution, target-volume size and the accuracy of locating small-volume tumor-like inclusions at various depths in tissue phantoms. The clinical study, the Monte Carlo simulations and the analysis of Zn-map images provided essential information and a promising vision of the potential performance of the Zn-based PCa detection concept. Simulations focusing on the medical-probe design and its performance at permissible radiation doses yielded positive results, confirmed by a series of systematic laboratory experiments with a table-top XRF system.
State-of-the-Art Fusion-Finder Algorithms Sensitivity and Specificity
Carrara, Matteo; Beccuti, Marco; Lazzarato, Fulvio; Cavallo, Federica; Cordero, Francesca; Donatelli, Susanna; Calogero, Raffaele A.
2013-01-01
Background. Gene fusions arising from chromosomal translocations have been implicated in cancer. RNA-seq has the potential to discover such rearrangements generating functional proteins (chimera/fusion). Recently, many methods for chimera detection have been published. However, the specificity and sensitivity of those tools were not extensively investigated in a comparative way. Results. We tested eight fusion-detection tools (FusionHunter, FusionMap, FusionFinder, MapSplice, deFuse, Bellerophontes, ChimeraScan, and TopHat-fusion) to detect fusion events using synthetic and real datasets encompassing chimeras. The comparison analysis run only on synthetic data could generate misleading results, since we found no counterpart in the real dataset. Furthermore, most tools report a very high number of false positive chimeras. In particular, the most sensitive tool, ChimeraScan, reports a large number of false positives that we were able to significantly reduce by devising and applying two filters to remove fusions not supported by fusion junction-spanning reads or encompassing large intronic regions. Conclusions. The discordant results obtained using synthetic and real datasets suggest that synthetic datasets encompassing fusion events may not fully capture the complexity of an RNA-seq experiment. Moreover, fusion detection tools are still limited in sensitivity or specificity; thus, there is space for further improvement in the fusion-finder algorithms. PMID:23555082
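The two filters described above amount to simple predicates over the candidate list. A sketch with hypothetical field names and an assumed intron-span cutoff (ChimeraScan's actual output format is not reproduced here):

```python
# Post-hoc filtering of chimera calls: drop candidates with no
# junction-spanning read support and those spanning implausibly large
# intronic regions. Field names and the cutoff are illustrative.
calls = [
    {"fusion": "GENE1-GENE2", "junction_reads": 7,  "intron_span": 15_000},
    {"fusion": "GENE3-GENE4", "junction_reads": 0,  "intron_span": 8_000},
    {"fusion": "GENE5-GENE6", "junction_reads": 12, "intron_span": 900_000},
]

MAX_INTRON_SPAN = 100_000   # assumed cutoff in bases

def keep(call):
    return (call["junction_reads"] > 0
            and call["intron_span"] <= MAX_INTRON_SPAN)

filtered = [c for c in calls if keep(c)]
print([c["fusion"] for c in filtered])      # -> ['GENE1-GENE2']
```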
Rastogi, Deepa; Reddy, Mamta; Neugebauer, Richard
2006-11-01
Among Hispanics, the largest minority ethnic group in the United States, asthma prevalence is increasing, particularly in inner-city neighborhoods. Although allergen sensitization among asthmatic African Americans has been extensively studied, similar details are not available for Hispanic children. To examine patterns of allergen sensitization, including the association with illness severity, in asthmatic children overall and in Hispanic and African American children living in a socioeconomically disadvantaged area of New York City. A retrospective medical record review of asthmatic children attending a community hospital in the South Bronx area of New York City was performed. Information abstracted included demographics, asthma severity classification, reported exposures to indoor allergens, and results of allergy testing. Among 384 children in the analysis, 270 (70.3%) were Hispanic and 114 (29.7%) were African American. Sensitization to indoor and outdoor allergens, respectively, did not differ between Hispanic (58.5% and 27.0%) and African American (58.8% and 32.6%) children. Allergen sensitization exhibited a direct, significant association with asthma severity for indoor allergens, both for the 2 ethnic groups combined and for Hispanics separately (P < .01), but not for outdoor allergens. No correlation was found between self-reported allergen exposure and sensitization. Patterns of allergen sensitization among inner-city Hispanic asthmatic children resemble those among African American children, a finding that is likely explained by the similarity in levels of environmental exposures. With the increasing prevalence of asthma among inner-city Hispanic children, skin testing should be used frequently for objective evaluation of asthma in this ethnic group.
ERIC Educational Resources Information Center
White, Leigh Cree, Ed.
The 4-H/Community Resource Development workshop was designed to help people conducting Extension 4-H programs on sensitive and complex community issues improve their ability to create a better climate for discussion of alternatives and development of community projects. Topics of the presentations included the role of the 4-H administrator,…
Boyce, John D.; Harper, Marina; St. Michael, Frank; John, Marietta; Aubry, Annie; Parnas, Henrietta; Logan, Susan M.; Wilkie, Ian W.; Ford, Mark; Cox, Andrew D.; Adler, Ben
2009-01-01
We previously determined the structure of the Pasteurella multocida Heddleston type 1 lipopolysaccharide (LPS) molecule and characterized some of the transferases essential for LPS biosynthesis. We also showed that P. multocida strains expressing truncated LPS display reduced virulence. Here, we have identified all of the remaining glycosyltransferases required for synthesis of the oligosaccharide extension of the P. multocida Heddleston type 1 LPS, including a novel α-1,6 glucosyltransferase, a β-1,4 glucosyltransferase, a putative bifunctional galactosyltransferase, and two heptosyltransferases. In addition, we identified a novel oligosaccharide extension expressed only in a heptosyltransferase (hptE) mutant background. All of the analyzed mutants expressing LPS with a truncated main oligosaccharide extension displayed reduced virulence, but those expressing LPS with an intact heptose side chain were able to persist for long periods in muscle tissue. The hptC mutant, which expressed LPS with the shortest oligosaccharide extension and no heptose side chain, was unable to persist on the muscle or cause any disease. Furthermore, all of the mutants displayed increased sensitivity to the chicken antimicrobial peptide fowlicidin 1, with mutants expressing highly truncated LPS being the most sensitive. PMID:19168738
Lung Reference Set A Application: Laszlo Takacs - Biosystems (2010) — EDRN Public Portal
We would like to access the NCI lung cancer Combined Pre-Validation Reference Set A in order to further validate a lung cancer diagnostic test candidate. Our test is based on a panel of antibodies which have been tested on 4 different cohorts (see below, paragraph “Preliminary Data and Methods”). This Reference Set A, whose clinical setting is “Diagnosis of lung cancer”, will be used to validate the panel of monoclonal antibodies that extensive data analysis has shown to provide the best discrimination between control and lung cancer patient plasma samples; sensitivity and specificity values from ROC analyses exceed 85%.
Neutron-hole strength in the N = 81 isotones
NASA Astrophysics Data System (ADS)
Howard, A. M.; Freeman, S. J.; Schiffer, J. P.; Bloxham, T.; Clark, J. A.; Deibel, C. M.; Kay, B. P.; Parker, P. D.; Sharp, D. K.; Thomas, J. S.
2012-09-01
The distribution of neutron-hole strength has been studied in the N = 81 isotones 137Ba, 139Ce, 141Nd and 143Sm through the single-neutron removing reactions (p,d) and (3He,α), at energies of 23 and 34 MeV, respectively. Systematic cross section measurements were made at angles sensitive to the transferred angular momentum, and spectroscopic factors extracted through a distorted-wave Born approximation analysis. Application of the MacFarlane-French sum rules indicate an anomalously low summed g7/2 spectroscopic factor, most likely due to extensive fragmentation of the single-particle strength. Single-particle energies, based upon the centroids of observed strength, are presented.
Manned geosynchronous mission requirements and systems analysis study extension
NASA Technical Reports Server (NTRS)
1981-01-01
Turnaround requirements for the manned orbital transfer vehicle (MOTV) baseline and alternate concepts with and without a space operations center (SOC) are defined. Manned orbital transfer vehicle maintenance, refurbishment, resupply, and refueling are considered as well as the most effective combination of ground based and space based turnaround activities. Ground and flight operations requirements for abort are identified as well as low cost approaches to space and ground operations through maintenance and missions sensitivity studies. The recommended turnaround mix shows that space basing MOTV at SOC with periodic return to ground for overhaul results in minimum recurring costs. A pressurized hangar at SOC reduces labor costs by approximately 50%.
Ren, Jingzheng
2018-01-01
The objective of this study is to develop a generic multi-attribute decision analysis framework for ranking technologies for ballast water treatment and determining their grades. An evaluation criteria system consisting of eight criteria in four categories was used to evaluate the technologies for ballast water treatment. The Best-Worst method, a subjective weighting method, and the Criteria Importance Through Inter-criteria Correlation (CRITIC) method, an objective weighting method, were combined to determine the weights of the evaluation criteria. Extension theory was employed to prioritize the technologies for ballast water treatment and determine their grades. An illustrative case including four technologies for ballast water treatment, i.e. Alfa Laval (T1), Hyde (T2), Unitor (T3), and NaOH (T4), was studied with the proposed method, and Hyde (T2) was recognized as the best technology. Sensitivity analysis was also carried out to investigate the effects of the combination coefficients and the weights of the evaluation criteria on the final priority order of the four technologies for ballast water treatment. The weighted sum method and TOPSIS were also employed to rank the four technologies, and the results determined by these two methods are consistent with those determined by the proposed method in this study. Copyright © 2017 Elsevier Ltd. All rights reserved.
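The weighting arithmetic is compact. The sketch below computes objective CRITIC weights from a small decision matrix, blends them with assumed subjective weights (the paper derives those with the Best-Worst method), and ranks the alternatives by weighted sum as a stand-in for the extension-theory grading step; all matrix entries and weights are placeholders, not the paper's data.

```python
import numpy as np

# CRITIC objective weights blended with subjective weights, then a
# weighted-sum ranking. The 4x4 decision matrix is a placeholder with
# benefit-type criteria already normalized to [0, 1].
X = np.array([[0.70, 0.55, 0.80, 0.60],     # T1
              [0.85, 0.75, 0.70, 0.80],     # T2
              [0.60, 0.65, 0.75, 0.55],     # T3
              [0.65, 0.80, 0.60, 0.70]])    # T4
w_subj = np.array([0.35, 0.30, 0.20, 0.15]) # assumed Best-Worst-style weights

std = X.std(axis=0, ddof=1)                 # contrast intensity per criterion
R = np.corrcoef(X, rowvar=False)            # inter-criteria correlation
info = std * (1.0 - R).sum(axis=0)          # CRITIC information content
w_obj = info / info.sum()

alpha = 0.5                                 # combination coefficient (assumed)
w = alpha * w_subj + (1 - alpha) * w_obj
scores = X @ w
for i in np.argsort(scores)[::-1]:
    print(f"T{i+1}: score={scores[i]:.3f}")
```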
Wound infection secondary to snakebite.
Wagener, M; Naidoo, M; Aldous, C
2017-03-29
Snakebites can produce severe local and systemic septic complications as well as being associated with significant overall morbidity and even mortality. A prospective audit was undertaken to determine the bacterial causation of wound infection secondary to snakebite and to attempt to quantify the burden of disease. The audit was undertaken at Ngwelezane Hospital, which provides both regional and tertiary services for north-eastern KwaZulu-Natal Province, South Africa, over a 4-month period. Records of patients who required surgical debridement for extensive skin and soft-tissue necrosis were analysed. At the time of debridement, tissue samples of necrotic or infected tissue were sent for bacteriological analysis as standard of care. Microbiology results were analysed. A total of 164 patients were admitted to hospital for management of snakebite, of whom 57 required surgical debridement and 42 were included in the final microbiological analysis. Children were found to be the most frequent victims of snakebite; 57.8% of patients in this study were aged ≤10 years and 73.7% ≤15 years. Culture showed a single organism in 32/42 cases, two organisms in 8 and no growth in 2. Eight different types of organisms were cultured, five of them more than once. Thirty-five specimens (83.3%) grew Gram-negative Enterobacteriaceae, the most frequent being Morganella morganii and Proteus species. Thirteen specimens (31.0%) grew Enterococcus faecalis. Gram-negative Enterobacteriaceae showed 31.4% sensitivity to ampicillin, 40.0% sensitivity to amoxicillin plus clavulanic acid, 34.3% sensitivity to cefuroxime, 97.1% sensitivity to ceftriaxone, and 100% sensitivity to ciprofloxacin, gentamicin and amikacin. E. faecalis was 92.3% sensitive to amoxicillin, 92.3% sensitive to amoxicillin plus clavulanic acid, 100% sensitive to ciprofloxacin, 92.3% resistant to erythromycin and 100% resistant to ceftriaxone. Children are particularly vulnerable to snakebite, and the consequences can be devastating. While the majority of patients in this study were shown to have secondary bacterial infection, debridement and subsequent wound management are considered the mainstay of treatment. Common organisms are Enterobacteriaceae and enterococci. There appears to be a role for antibiotics in the management of these patients. A good antibiotic policy is strongly advocated.
Komura, Kazumasa; Jeong, Seong Ho; Hinohara, Kunihiko; Qu, Fangfang; Wang, Xiaodong; Hiraki, Masayuki; Azuma, Haruhito; Lee, Gwo-Shu Mary; Kantoff, Philip W.; Sweeney, Christopher J.
2016-01-01
The androgen receptor (AR) plays an essential role in prostate cancer, and suppression of its signaling with androgen deprivation therapy (ADT) has been the mainstay of treatment for metastatic hormone-sensitive prostate cancer for more than 70 y. Chemotherapy has been reserved for metastatic castration-resistant prostate cancer (mCRPC). The Eastern Cooperative Oncology Group-led trial E3805: ChemoHormonal Therapy Versus Androgen Ablation Randomized Trial for Extensive Disease in Prostate Cancer (CHAARTED) showed that the addition of docetaxel to ADT prolonged overall survival compared with ADT alone in patients with metastatic hormone-sensitive prostate cancer. This finding suggests that there is an interaction between AR signaling activity and docetaxel sensitivity. Here we demonstrate that the prostate cancer cell lines LNCaP and LAPC4 display markedly different sensitivity to docetaxel with AR activation, and RNA-seq analysis of these cell lines identified KDM5D (lysine-specific demethylase 5D) encoded on the Y chromosome as a potential mediator of this sensitivity. Knocking down KDM5D expression in LNCaP leads to docetaxel resistance in the presence of dihydrotestosterone. KDM5D physically interacts with AR in the nucleus, and regulates its transcriptional activity by demethylating H3K4me3 active transcriptional marks. Attenuating KDM5D expression dysregulates AR signaling, resulting in docetaxel insensitivity. KDM5D deletion was also observed in the LNCaP-derived CRPC cell line 104R2, which displayed docetaxel insensitivity with AR activation, unlike parental LNCaP. Dataset analysis from the Oncomine database revealed significantly decreased KDM5D expression in CRPC and poorer prognosis with low KDM5D expression. Taking these data together, this work indicates that KDM5D modulates the AR axis and that this is associated with altered docetaxel sensitivity. PMID:27185910
Kapadia, Mufiza Z; Askie, Lisa; Hartling, Lisa; Contopoulos-Ioannidis, Despina; Bhutta, Zulfiqar A; Soll, Roger; Moher, David; Offringa, Martin
2016-01-01
Introduction Paediatric systematic reviews differ from adult systematic reviews in several key aspects such as considerations of child tailored interventions, justifiable comparators, valid outcomes and child sensitive search strategies. Available guidelines, including PRISMA-P (2015) and PRISMA (2009), do not cover all the complexities associated with reporting systematic reviews in the paediatric population. Using a collaborative, multidisciplinary structure, we aim to develop evidence-based and consensus-based PRISMA-P-C (Protocol for Children) and PRISMA-C (Children) Extensions to guide paediatric systematic review protocol and completed review reporting. Methods and analysis This project's methodology follows published recommendations for developing reporting guidelines and involves the following six phases: (1) establishment of a steering committee representing key stakeholder groups; (2) a scoping review to identify potential Extension items; (3) three types of consensus activities, including meetings of the steering committee to achieve high-level decisions on the content and methodology of the Extensions, a survey of key stakeholders to generate a list of possible items to include in the Extensions, and a formal consensus meeting to select the reporting items to add to, or modify for, the Extension; (4) evaluation of the preliminary checklist items generated in phase 3 against the existing evidence and reporting practices in paediatric systematic reviews; (5) Extension statements and explanation and elaboration documents providing detailed advice for each item and examples of good reporting; (6) development and implementation of effective knowledge translation of the Extension checklist, and an evaluation of the Extensions by key stakeholders. Ethics and Dissemination This protocol was considered a quality improvement project by the Hospital for Sick Children's Ethics Committee and did not require ethical review. The resultant checklists, jointly developed with all relevant stakeholders, will be disseminated through peer-reviewed journals as well as national and international conference presentations. Endorsement of the checklist will be sought simultaneously in multiple journals. PMID:27091820
Cost-effectiveness of cerebrospinal biomarkers for the diagnosis of Alzheimer's disease.
Lee, Spencer A W; Sposato, Luciano A; Hachinski, Vladimir; Cipriano, Lauren E
2017-03-16
Accurate and timely diagnosis of Alzheimer's disease (AD) is important for prompt initiation of treatment in patients with AD and to avoid inappropriate treatment of patients with false-positive diagnoses. Using a Markov model, we estimated the lifetime costs and quality-adjusted life-years (QALYs) of cerebrospinal fluid biomarker analysis in a cohort of patients referred to a neurologist or memory clinic with suspected AD who remained without a definitive diagnosis of AD or another condition after neuroimaging. Parameter values were estimated from previous health economic models and the medical literature. Extensive deterministic and probabilistic sensitivity analyses were performed to evaluate the robustness of the results. At a 12.7% pretest probability of AD, biomarker analysis after normal neuroimaging findings has an incremental cost-effectiveness ratio (ICER) of $11,032 per QALY gained. Results were sensitive to the pretest prevalence of AD, and the ICER increased to over $50,000 per QALY when the prevalence of AD fell below 9%. Results were also sensitive to patient age (biomarkers are less cost-effective in older cohorts), treatment uptake and adherence, biomarker test characteristics, and the degree to which patients with suspected AD who do not have AD benefit from AD treatment when they are falsely diagnosed. The cost-effectiveness of biomarker analysis depends critically on the prevalence of AD in the tested population. In general practice, where the prevalence of AD after clinical assessment and normal neuroimaging findings may be low, biomarker analysis is unlikely to be cost-effective at a willingness-to-pay threshold of $50,000 per QALY gained. However, when at least 1 in 11 patients has AD after normal neuroimaging findings, biomarker analysis is likely cost-effective. Specifically, for patients referred to memory clinics with memory impairment who do not present neuroimaging evidence of medial temporal lobe atrophy, pretest prevalence of AD may exceed 15%. Biomarker analysis is a potentially cost-saving diagnostic method and should be considered for adoption in high-prevalence centers.
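The Markov structure behind such an evaluation can be sketched in a few lines. The states, transition probabilities, costs, and utilities below are illustrative assumptions, not the study's inputs; the point is how discounted lifetime costs and QALYs accumulate cycle by cycle and yield an ICER.

```python
import numpy as np

# Minimal Markov cohort sketch (illustrative numbers, not the paper's inputs).
# States: 0 = stable, 1 = progressed, 2 = dead.
P_treat   = np.array([[0.90, 0.07, 0.03],
                      [0.00, 0.92, 0.08],
                      [0.00, 0.00, 1.00]])
P_control = np.array([[0.85, 0.11, 0.04],
                      [0.00, 0.90, 0.10],
                      [0.00, 0.00, 1.00]])

cost  = np.array([3000.0, 12000.0, 0.0])   # annual cost per state (USD)
util  = np.array([0.80, 0.55, 0.0])        # annual utility per state
extra_cost = np.array([2500.0, 0.0, 0.0])  # testing/treatment overhead, year 1

def run(P, upfront, years=40, disc=0.03):
    state = np.array([1.0, 0.0, 0.0])      # cohort starts in 'stable'
    c = q = 0.0
    for t in range(years):
        d = 1.0 / (1.0 + disc) ** t
        c += d * state @ (cost + (upfront if t == 0 else 0.0))
        q += d * state @ util
        state = state @ P                  # advance one cycle
    return c, q

c1, q1 = run(P_treat, extra_cost)
c0, q0 = run(P_control, np.zeros(3))
print(f"ICER = ${(c1 - c0) / (q1 - q0):,.0f} per QALY gained")
```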
TrackMate: An open and extensible platform for single-particle tracking.
Tinevez, Jean-Yves; Perry, Nick; Schindelin, Johannes; Hoopes, Genevieve M; Reynolds, Gregory D; Laplantine, Emmanuel; Bednarek, Sebastian Y; Shorte, Spencer L; Eliceiri, Kevin W
2017-02-15
We present TrackMate, an open source Fiji plugin for the automated, semi-automated, and manual tracking of single particles. It offers a versatile and modular solution that works out of the box for end users, through a simple and intuitive user interface. It is also easily scriptable and adaptable, operating equally well on 1D over time, 2D over time, 3D over time, or other single and multi-channel image variants. TrackMate provides several visualization and analysis tools that aid in assessing the relevance of results. The utility of TrackMate is further enhanced through its ability to be readily customized to meet specific tracking problems. TrackMate is an extensible platform where developers can easily write their own detection, particle linking, visualization or analysis algorithms within the TrackMate environment. This evolving framework provides researchers with the opportunity to quickly develop and optimize new algorithms based on existing TrackMate modules without having to write de novo user interfaces, including visualization, analysis and exporting tools. The current capabilities of TrackMate are presented in the context of three different biological problems. First, we perform Caenorhabditis elegans lineage analysis to assess how light-induced damage during imaging impairs its early development. Our TrackMate-based lineage analysis indicates the lack of a cell-specific light-sensitive mechanism. Second, we investigate the recruitment of NEMO (NF-κB essential modulator) clusters in fibroblasts after stimulation by the cytokine IL-1 and show that photodamage can generate artifacts, in the shape of TrackMate-characterized movements, that confound motility analysis. Finally, we validate the use of TrackMate for quantitative lifetime analysis of clathrin-mediated endocytosis in plant cells. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.
Probabilistic structural analysis methods for select space propulsion system components
NASA Technical Reports Server (NTRS)
Millwater, H. R.; Cruse, T. A.
1989-01-01
The Probabilistic Structural Analysis Methods (PSAM) project developed at the Southwest Research Institute integrates state-of-the-art structural analysis techniques with probability theory for the design and analysis of complex large-scale engineering structures. An advanced, efficient software system (NESSUS) capable of performing complex probabilistic analysis has been developed. NESSUS contains a number of software components to perform probabilistic analysis of structures. These components include: an expert system, a probabilistic finite element code, a probabilistic boundary element code and a fast probability integrator. An overview of the NESSUS software system is presented. An expert system is included to capture and utilize PSAM knowledge and experience. NESSUS/EXPERT is an interactive menu-driven expert system that provides information to assist in the use of the probabilistic finite element code NESSUS/FEM and the fast probability integrator (FPI). The expert system menu structure is summarized. The NESSUS system contains a state-of-the-art nonlinear probabilistic finite element code, NESSUS/FEM, to determine the structural response and sensitivities. A broad range of analysis capabilities and an extensive element library are included.
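The core question a fast probability integrator answers, the probability that a limit state is violated given random inputs, can be illustrated with plain Monte Carlo sampling. This is a conceptual sketch only (NESSUS uses far more efficient integration methods), and the distributions below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Illustrative limit state g = R - S: capacity R and load S are lognormal
# (placeholder distributions, not NESSUS inputs).
R = rng.lognormal(mean=np.log(500.0), sigma=0.10, size=n)   # strength
S = rng.lognormal(mean=np.log(350.0), sigma=0.25, size=n)   # stress

g = R - S
pf = np.mean(g < 0.0)                     # probability of failure
se = np.sqrt(pf * (1 - pf) / n)           # Monte Carlo standard error
print(f"P(failure) ~ {pf:.2e} +/- {se:.1e}")
```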
NASA Astrophysics Data System (ADS)
Honarvar, Elahe; Venter, Andre R.
2017-06-01
The analysis of proteins by desorption electrospray ionization mass spectrometry (DESI-MS) is considered impractical due to a mass-dependent loss in sensitivity as protein molecular weight increases. Adding ammonium bicarbonate to the DESI-MS analysis improved the sensitivity towards proteins. The signal-to-noise ratio (S/N) for a variety of proteins increased 2- to 3-fold relative to solvent systems containing formic acid and more than seven-fold relative to aqueous methanol spray solvents. Three methods for ammonium bicarbonate addition during DESI-MS were investigated. The additive delivered improvements in S/N whether it was mixed with the analyte prior to sample deposition, applied over pre-prepared samples, or simply added to the desorption spray solvent. The improvement correlated well with protein pI but not with protein size. Other ammonium or bicarbonate salts did not produce similar improvements in S/N, nor was this improvement in S/N observed for ESI of the same samples. As was previously described for ESI, DESI also caused extensive protein unfolding upon the addition of ammonium bicarbonate.
ERIC Educational Resources Information Center
Diesel, Vivien; Miná Dias, Marcelo
2016-01-01
Purpose: To analyze the Brazilian experience in designing and implementing a recent extension policy reform based on agroecology, and reflect on its wider theoretical implications for extension reform literature. Design/methodology/approach: Using a critical public analysis we characterize the evolution of Brazilian federal extension policy…
Autocatalytic Patterning with Silver Using Tin(II) Chloride Sensitizer
ERIC Educational Resources Information Center
Mbindyo, Jeremiah K. N.; Anna, Laura J.; Fell, B. Andrew; Patton, David A.
2011-01-01
A silver mirror can be deposited on many types of surfaces from the reduction of the silver-diammine complex by a reducing sugar as proposed by Kemp in this "Journal". Three extensions of Kemp's demonstration that highlight the role of SnCl2 sensitizer in the deposition of a silver mirror on surfaces are presented. The demonstration…
ERIC Educational Resources Information Center
Sharma, Shiv K.; Carew, Thomas J.
2004-01-01
Synaptic plasticity is thought to contribute to memory formation. Serotonin-induced facilitation of sensory-motor (SN-MN) synapses in "Aplysia" is an extensively studied cellular analog of memory for sensitization. Serotonin, a modulatory neurotransmitter, is released in the CNS during sensitization training, and induces three temporally and…
Differential effects of context on psychomotor sensitization to ethanol and cocaine.
Didone, Vincent; Quoilin, Caroline; Dieupart, Julie; Tirelli, Ezio; Quertemont, Etienne
2016-04-01
Repeated drug injections lead to sensitization of their stimulant effects in mice, a phenomenon sometimes referred to as drug psychomotor sensitization. Previous studies showed that sensitization to cocaine is context dependent as its expression is reduced in an environment that was not paired with cocaine administration. In contrast, the effects of the test context on ethanol sensitization remain unclear. In the present study, female OF1 mice were repeatedly injected with 1.5 g/kg ethanol to test for both the effects of context novelty/familiarity and association on ethanol sensitization. A first group of mice was extensively pre-exposed to the test context before ethanol sensitization and ethanol injections were paired with the test context (familiar and paired group). A second group was not pre-exposed to the test context, but ethanol injections were paired with the test context (nonfamiliar and paired group). Finally, a third group of mice was not pre-exposed to the test context and ethanol was repeatedly injected in the home cage (unpaired group). Control groups were similarly exposed to the test context, but were injected with saline. In a second experiment, cocaine was used as a positive control. The same behavioral procedure was used, except that mice were injected with 10 mg/kg cocaine instead of ethanol. The results show a differential involvement of the test context in the sensitization to ethanol and cocaine. Cocaine sensitization is strongly context dependent and is not expressed in the unpaired group. In contrast, the expression of ethanol sensitization is independent of the context in which it was administered, but is strongly affected by the relative novelty/familiarity of the environment. Extensive pre-exposure to the test context prevented the expression of ethanol sensitization. One possible explanation is that expression of ethanol sensitization requires an arousing environment.
NASA Astrophysics Data System (ADS)
Tjiputra, Jerry F.; Polzin, Dierk; Winguth, Arne M. E.
2007-03-01
An adjoint method is applied to a three-dimensional global ocean biogeochemical cycle model to optimize the ecosystem parameters on the basis of SeaWiFS surface chlorophyll observations. We showed with identical twin experiments that the model-simulated chlorophyll concentration is sensitive to perturbation of phytoplankton and zooplankton exudation, herbivore egestion as fecal pellets, zooplankton grazing, and the assimilation efficiency parameters. The assimilation of SeaWiFS chlorophyll data significantly improved the prediction of chlorophyll concentration, especially in the high-latitude regions. Experiments that considered regional variations of parameters yielded a high seasonal variance of ecosystem parameters in the high latitudes, but a low variance in the tropical regions. These experiments indicate that the adjoint model is, despite the many uncertainties, generally capable of optimizing sensitive parameters and carbon fluxes in the euphotic zone. The best-fit regional parameters predict a global net primary production of 36 Pg C yr⁻¹, which lies within the range suggested by Antoine et al. (1996). Additional constraints from nutrient data from the World Ocean Atlas showed further reduction in the model-data misfit and that assimilation with extensive data sets is necessary.
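An identical twin experiment of the kind described can be sketched with a toy model: synthetic observations are generated with known parameters, and an optimizer recovers them by minimizing the model-data misfit. The model form and parameter values are illustrative assumptions; a gradient-based optimizer stands in here for the adjoint-computed gradient of the real model.

```python
import numpy as np
from scipy.optimize import minimize

# Identical-twin sketch: a toy 'chlorophyll' curve driven by two parameters
# (grazing rate g, exudation fraction e); all values are illustrative.
t = np.linspace(0.0, 1.0, 50)

def model(params):
    g, e = params
    return (1.0 - e) * np.exp(-g * t) + 0.2

true_params = np.array([1.5, 0.3])
obs = model(true_params)                      # synthetic 'SeaWiFS' data

def misfit(params):
    r = model(params) - obs
    return 0.5 * float(r @ r)                 # quadratic cost function

# Descend from a perturbed first guess; the numerical gradient stands in
# for what the adjoint model would provide.
res = minimize(misfit, x0=[1.0, 0.5], method="L-BFGS-B",
               bounds=[(0.1, 5.0), (0.0, 0.9)])
print("recovered parameters:", np.round(res.x, 3))  # ~ [1.5, 0.3]
```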
Expedited quantification of mutant ribosomal RNA by binary deoxyribozyme (BiDz) sensors.
Gerasimova, Yulia V; Yakovchuk, Petro; Dedkova, Larisa M; Hecht, Sidney M; Kolpashchikov, Dmitry M
2015-10-01
Mutations in ribosomal RNA (rRNA) have traditionally been detected by the primer extension assay, which is a tedious and multistage procedure. Here, we describe a simple and straightforward fluorescence assay based on binary deoxyribozyme (BiDz) sensors. The assay uses two short DNA oligonucleotides that hybridize specifically to adjacent fragments of rRNA, one of which contains a mutation site. This hybridization results in the formation of a deoxyribozyme catalytic core that produces the fluorescent signal and amplifies it due to multiple rounds of catalytic action. This assay enables us to expedite semi-quantification of mutant rRNA content in cell cultures starting from whole cells, which provides information useful for optimization of culture preparation prior to ribosome isolation. The method requires less than a microliter of a standard Escherichia coli cell culture and decreases analysis time from several days (for primer extension assay) to 1.5 h with hands-on time of ∼10 min. It is sensitive to single-nucleotide mutations. The new assay simplifies the preliminary analysis of RNA samples and cells in molecular biology and cloning experiments and is promising in other applications where fast detection/quantification of specific RNA is required. © 2015 Gerasimova et al.; Published by Cold Spring Harbor Laboratory Press for the RNA Society.
Analysis of alternative pathways for reducing nitrogen oxide emissions.
Loughlin, Daniel H; Kaufman, Katherine R; Lenox, Carol S; Hubbell, Bryan J
2015-09-01
Strategies for reducing tropospheric ozone (O3) typically include modifying combustion processes to reduce the formation of nitrogen oxides (NOx) and applying control devices that remove NOx from the exhaust gases of power plants, industrial sources and vehicles. For portions of the U.S., these traditional controls may not be sufficient to achieve the National Ambient Air Quality Standard for ozone. We apply the MARKet ALlocation (MARKAL) energy system model in a sensitivity analysis to explore whether additional NOx reductions can be achieved through extensive electrification of passenger vehicles, adoption of energy efficiency and conservation measures within buildings, and deployment of wind and solar power in the electric sector. Nationally and for each region of the country, we estimate the NOx implications of these measures. Energy efficiency and renewable electricity are shown to reduce NOx beyond traditional controls. Widespread light-duty vehicle electrification produces varied results, with NOx increasing in some regions and decreasing in others. However, combining vehicle electrification with renewable electricity reduces NOx in all regions. State governments are charged with developing plans that demonstrate how air quality standards will be met and maintained. The results presented here provide an indication of the national and regional NOx reductions available beyond traditional controls via extensive adoption of energy efficiency, renewable electricity, and vehicle electrification.
2016-01-01
Long wavelength ultraviolet radiation (UVA, 320–400 nm) interacts with chromophores present in human cells to induce reactive oxygen species (ROS) that damage both DNA and proteins. ROS levels are amplified, and the damaging effects of UVA are exacerbated if the cells are irradiated in the presence of UVA photosensitizers such as 6-thioguanine (6-TG), a strong UVA chromophore that is extensively incorporated into the DNA of dividing cells, or the fluoroquinolone antibiotic ciprofloxacin. Both DNA-embedded 6-TG and ciprofloxacin combine synergistically with UVA to generate high levels of ROS. Importantly, the extensive protein damage induced by these photosensitizer+UVA combinations inhibits DNA repair. DNA is maintained in intimate contact with the proteins that effect its replication, transcription, and repair, and DNA–protein cross-links (DPCs) are a recognized reaction product of ROS. Cross-linking of DNA metabolizing proteins would compromise these processes by introducing physical blocks and by depleting active proteins. We describe a sensitive and statistically rigorous method to analyze DPCs in cultured human cells. Application of this proteomics-based analysis to cells treated with 6-TG+UVA and ciprofloxacin+UVA identified proteins involved in DNA repair, replication, and gene expression among those most vulnerable to cross-linking under oxidative conditions. PMID:27654267
Centrifuge: rapid and sensitive classification of metagenomic sequences.
Kim, Daehwan; Song, Li; Breitwieser, Florian P; Salzberg, Steven L
2016-12-01
Centrifuge is a novel microbial classification engine that enables rapid, accurate, and sensitive labeling of reads and quantification of species on desktop computers. The system uses an indexing scheme based on the Burrows-Wheeler transform (BWT) and the Ferragina-Manzini (FM) index, optimized specifically for the metagenomic classification problem. Centrifuge requires a relatively small index (4.2 GB for 4078 bacterial and 200 archaeal genomes) and classifies sequences at very high speed, allowing it to process the millions of reads from a typical high-throughput DNA sequencing run within a few minutes. Together, these advances enable timely and accurate analysis of large metagenomics data sets on conventional desktop computers. Because of its space-optimized indexing schemes, Centrifuge also makes it possible to index the entire NCBI nonredundant nucleotide sequence database (a total of 109 billion bases) with an index size of 69 GB, in contrast to k-mer-based indexing schemes, which require far more extensive space. © 2016 Kim et al.; Published by Cold Spring Harbor Laboratory Press.
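The backward-search idea underlying BWT/FM-index classification can be shown at toy scale. The sketch below is conceptual only: Centrifuge's real index is compressed and genome-scale, whereas the rank queries here are naive string counts.

```python
# Conceptual sketch of the BWT/FM-index idea behind Centrifuge's index.
def bwt(text):
    text += "$"
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(r[-1] for r in rotations)

def fm_count(bw, pattern):
    """Count occurrences of pattern via backward search on the BWT."""
    # C[c] = number of characters in bw strictly smaller than c.
    C, total = {}, 0
    for c in sorted(set(bw)):
        C[c] = total
        total += bw.count(c)
    lo, hi = 0, len(bw)               # current suffix-array interval
    for c in reversed(pattern):
        if c not in C:
            return 0
        lo = C[c] + bw[:lo].count(c)  # rank queries (naive here)
        hi = C[c] + bw[:hi].count(c)
        if lo >= hi:
            return 0
    return hi - lo

bw = bwt("ACGTACGTACGA")
print(fm_count(bw, "ACG"))   # -> 3
```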
Numerical analysis of hypersonic turbulent film cooling flows
NASA Technical Reports Server (NTRS)
Chen, Y. S.; Chen, C. P.; Wei, H.
1992-01-01
As a building block, numerical capabilities for predicting heat flux and turbulent flowfields of hypersonic vehicles require extensive model validations. Computational procedures for calculating turbulent flows and heat fluxes for supersonic film cooling with parallel slot injections are described in this study. Two injectant mass flow rates with matched and unmatched pressure conditions using the database of Holden et al. (1990) are considered. To avoid uncertainties associated with the boundary conditions in testing turbulence models, detailed three-dimensional flowfields of the injection nozzle were calculated. Two computational fluid dynamics codes, GASP and FDNS, with the algebraic Baldwin-Lomax and k-epsilon models with compressibility corrections were used. It was found that the B-L model, which resolves the near-wall viscous sublayer, is very sensitive to the inlet boundary conditions at the nozzle exit face. The k-epsilon models with improved wall functions are less sensitive to the inlet boundary conditions. The tests show that compressibility corrections are necessary for the k-epsilon model to realistically predict the heat fluxes of the hypersonic film cooling problems.
NASA Astrophysics Data System (ADS)
Hsiu, Feng-Ming; Chen, Shean-Jen; Tsai, Chien-Hung; Tsou, Chia-Yuan; Su, Y.-D.; Lin, G.-Y.; Huang, K.-T.; Chyou, Jin-Jung; Ku, Wei-Chih; Chiu, S.-K.; Tzeng, C.-M.
2002-09-01
A surface plasmon resonance (SPR) imaging system is presented as a novel technique, based on modified Mach-Zehnder phase-shifting interferometry (PSI), for biomolecular interaction analysis (BIA); it measures the spatial phase variation of resonantly reflected light during biomolecular interactions. In this technique, micro-array SPR biosensors with over a thousand probe DNA spots can be interrogated simultaneously. Owing to its convenient and rapid measurements, the micro-array SPR biosensors can be extensively applied to nonspecific protein adsorption, membrane/protein interactions, and DNA hybridization. The detection sensitivity of the SPR PSI imaging system is improved to about 1 pg/mm2 per spot over conventional SPR imaging systems. The SPR PSI imaging system and its SPR sensors have been successfully used to observe, in real time and with high sensitivity, slight refractive index changes caused by argon gas flowing through nitrogen, at high-throughput screening rates.
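Phase extraction in phase-shifting interferometry is commonly done with the standard four-step algorithm: four frames with 90-degree phase shifts give the wrapped phase as atan2(I4 - I2, I1 - I3). A small self-checking sketch follows; the synthetic phase map, bias, and modulation are illustrative, not this system's data.

```python
import numpy as np

# Standard four-step phase-shifting algorithm: given four images with
# phase shifts of 0, 90, 180, 270 degrees, recover the phase map.
def four_step_phase(I1, I2, I3, I4):
    return np.arctan2(I4 - I2, I1 - I3)   # wrapped phase in (-pi, pi]

# Synthetic demo on a small SPR-like phase distribution (illustrative only).
phi = np.linspace(0.0, 2.0, 256).reshape(16, 16)          # true phase (rad)
A, B = 1.0, 0.5                                           # bias, modulation
frames = [A + B * np.cos(phi + k * np.pi / 2) for k in range(4)]
phi_hat = four_step_phase(*frames)
print(np.allclose(np.angle(np.exp(1j * (phi_hat - phi))), 0.0, atol=1e-12))
```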
Comparison of aerobic fitness and space motion sickness during the Shuttle program
NASA Technical Reports Server (NTRS)
Jennings, Richard T.; Davis, Jeffrey R.; Santy, Patricia A.
1988-01-01
Space motion sickness (SMS) is an important problem for short-duration space flight; 71 percent of STS crewmembers develop SMS symptoms. The search for effective countermeasures and factors that correlate with sensitivity has been extensive. Recently, several investigators have linked aerobic fitness with motion sickness sensitivity in the 1-G or high-G environment. This paper compares the aerobic fitness of 125 Shuttle crewmembers with their SMS symptom category. Aerobic fitness data were obtained from the exercise tolerance test conducted nearest the time of launch. SMS data were derived from the medical debrief summaries. Mean maximum oxygen consumption values for crewmembers in four SMS categories (none, mild, moderate, severe) were 44.55, 44.08, 46.5, and 44.24 ml/kg per min, respectively. Scattergrams with linear regression analysis comparing aerobic fitness and SMS symptom classification are presented. Correlation coefficients comparing SMS categories vs. aerobic fitness for men and women reveal no definite relationship between the two factors.
Klapötke, Thomas M; Stierstorfer, Jörg
2008-08-07
The highly energetic compound 1,3,5-triaminoguanidinium dinitramide (1) was prepared in high yield (82%) according to a new synthesis by the reaction of potassium dinitramide and triaminoguanidinium perchlorate. The heat of formation was calculated in an extensive computational study (CBS-4M). With this, the detonation parameters of compound 1 were computed using the EXPLO5 software: D = 8796 m s⁻¹, p = 299 kbar. In addition, a full characterization of the chemical properties (single-crystal X-ray diffraction, IR and Raman spectroscopy, multinuclear NMR spectroscopy, mass spectrometry and elemental analysis) as well as of the energetic characteristics (differential scanning calorimetry, thermal safety calorimetry, impact, friction and electrostatic tests) is given in this work. Due to the high impact (2 J) and friction (24 N) sensitivities, several attempts were made to reduce these sensitivities by the addition of wax. The performance of 1 was tested by applying a "Koenen" steel sleeve test, resulting in a critical diameter of ≥10 mm.
A reevaluation of spectral ratios for lunar mare TiO2 mapping
NASA Technical Reports Server (NTRS)
Johnson, Jeffrey R.; Larson, Stephen M.; Singer, Robert B.
1991-01-01
The empirical relation established by Charette et al. (1974) between the 400/560-nm spectral ratio of mature mare soils and weight percent TiO2 has been used extensively to map titanium content in the lunar maria. Relative reflectance spectra of mare regions show that a reference wavelength further into the near-IR, e.g., above 700 nm, could be used in place of the 560-nm band to provide greater contrast (a greater range of ratio values) and hence a more sensitive indicator of titanium content. An analysis of 400/730-nm ratio values derived from both laboratory and telescopic relative reflectance spectra suggests that this ratio provides greater sensitivity to TiO2 content than the 400/560-nm ratio. The increased range of ratio values is manifested in higher contrast 400/730-nm ratio images compared to 400/560-nm ratio images. This potential improvement in sensitivity encourages a reevaluation of the original Charette et al. (1974) relation using the 400/730-nm ratio.
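Computing such a ratio image is a one-line array operation once two calibrated reflectance images are in hand; the arrays below are random placeholders standing in for real 400-nm and 730-nm mosaics.

```python
import numpy as np

# Placeholder reflectance mosaics standing in for calibrated 400-nm and
# 730-nm images of a mare region.
rng = np.random.default_rng(1)
im400 = rng.uniform(0.05, 0.10, size=(64, 64))
im730 = rng.uniform(0.10, 0.20, size=(64, 64))

# Ratio image; guard against zero-reflectance pixels.
ratio = np.divide(im400, im730, out=np.zeros_like(im400), where=im730 > 0)
# The spread (contrast) of ratio values is what makes the 730-nm reference
# more diagnostic of TiO2 content than the 560-nm one, per the abstract.
print(f"400/730 ratio range: {ratio.min():.3f}-{ratio.max():.3f}")
```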
Sensitivity of subject-specific models to errors in musculo-skeletal geometry.
Carbone, V; van der Krogt, M M; Koopman, H F J M; Verdonschot, N
2012-09-21
Subject-specific musculo-skeletal models of the lower extremity are an important tool for investigating various biomechanical problems, for instance the results of surgery such as joint replacements and tendon transfers. The aim of this study was to assess the potential effects of errors in musculo-skeletal geometry on subject-specific model results. We performed an extensive sensitivity analysis to quantify the effect of the perturbation of origin, insertion and via points of each of the 56 musculo-tendon parts contained in the model. We used two metrics, namely a Local Sensitivity Index (LSI) and an Overall Sensitivity Index (OSI), to distinguish the effect of the perturbation on the predicted force produced by only the perturbed musculo-tendon parts and by all the remaining musculo-tendon parts, respectively, during a simulated gait cycle. Results indicated that, for each musculo-tendon part, only two points show a significant sensitivity: its origin, or pseudo-origin, point and its insertion, or pseudo-insertion, point. The most sensitive points belong to those musculo-tendon parts that act as prime movers in the walking movement (insertion point of the Achilles Tendon: LSI=15.56%, OSI=7.17%; origin points of the Rectus Femoris: LSI=13.89%, OSI=2.44%) and as hip stabilizers (insertion points of the Gluteus Medius Anterior: LSI=17.92%, OSI=2.79%; insertion point of the Gluteus Minimus: LSI=21.71%, OSI=2.41%). The proposed priority list provides quantitative information to improve the predictive accuracy of subject-specific musculo-skeletal models. Copyright © 2012 Elsevier Ltd. All rights reserved.
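One plausible reading of the two indices, the effect of a perturbation on the perturbed part's own force (LSI) versus on all remaining parts (OSI), can be sketched as below. The exact definitions in the paper may differ, and the force curves here are toy data.

```python
import numpy as np

def sensitivity_indices(F_base, F_pert, part):
    """F_base, F_pert: (n_parts, n_timesteps) muscle-force arrays."""
    denom = np.abs(F_base).max(axis=1) + 1e-9          # per-part force scale
    change = np.abs(F_pert - F_base).max(axis=1) / denom * 100.0
    lsi = change[part]                                  # the perturbed part itself
    osi = np.delete(change, part).max()                 # all remaining parts
    return lsi, osi

# Toy forces for 3 parts over a gait cycle (illustrative numbers only).
t = np.linspace(0, 1, 101)
F_base = np.vstack([np.sin(np.pi * t), 0.5 + 0.2 * t, np.cos(np.pi * t) ** 2])
F_pert = F_base.copy()
F_pert[0] *= 1.15                                       # perturbed part 0
F_pert[2] *= 0.97                                       # indirect effect
lsi, osi = sensitivity_indices(F_base, F_pert, part=0)
print(f"LSI = {lsi:.1f}%, OSI = {osi:.1f}%")            # -> 15.0%, 3.0%
```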
Woo, Kevin L; Rieucau, Guillaume; Burke, Darren
2017-02-01
Identifying perceptual thresholds is critical for understanding the mechanisms that underlie signal evolution. Using computer-animated stimuli, we examined visual speed sensitivity in the Jacky dragon Amphibolurus muricatus, a species that makes extensive use of rapid motor patterns in social communication. First, focal lizards were tested in discrimination trials using random-dot kinematograms displaying combinations of speed, coherence, and direction. Second, we measured subject lizards' ability to predict the appearance of a secondary reinforcer (1 of 3 different computer-generated animations of invertebrates: cricket, spider, and mite) based on the direction of movement of a field of drifting dots by following a set of behavioural responses (e.g., orienting response, latency to respond) to our virtual stimuli. We found an effect of both speed and coherence, as well as an interaction between these 2 factors, on the perception of moving stimuli. Overall, our results showed that Jacky dragons have acute sensitivity to high speeds. We then employed an optic flow analysis to match the performance to ecologically relevant motion. Our results suggest that the Jacky dragon visual system may have been shaped to detect fast motion. This pre-existing sensitivity may have constrained the evolution of conspecific displays. In contrast, Jacky dragons may have difficulty in detecting the movement of ambush predators, such as snakes, and of some invertebrate prey. Our study also demonstrates the potential of the computer-animated stimuli technique for conducting nonintrusive tests to explore motion range and sensitivity in a visually mediated species.
Grimm, N. B.; Chacon, A.; Dahm, Clifford N.; Hostetler, S.W.; Lind, O.T.; Starkweather, P.L.; Wurtsbaugh, W.W.
1997-01-01
Variability and unpredictability are characteristics of the aquatic ecosystems, hydrological patterns and climate of the largely dryland region that encompasses the Basin and Range, American Southwest and western Mexico. Neither hydrological nor climatological models for the region are sufficiently developed to describe the magnitude or direction of change in response to increased carbon dioxide; thus, an attempt to predict specific responses of aquatic ecosystems is premature. Instead, we focus on the sensitivity of rivers, streams, springs, wetlands, reservoirs, and lakes of the region to potential changes in climate, especially those inducing a change in hydrological patterns such as amount, timing and predictability of stream flow. The major sensitivities of aquatic ecosystems are their permanence and even existence in the face of potential reduced net basin supply of water, stability of geomorphological structure and riparian ecotones with alterations in disturbance regimes, and water quality changes resulting from a modified water balance. In all of these respects, aquatic ecosystems of the region are also sensitive to the extensive modifications imposed by human use of water resources, which underscores the difficulty of separating this type of anthropogenic change from climate change. We advocate a focus in future research on reconstruction and analysis of past climates and associated ecosystem characteristics, long-term studies to discriminate directional change vs. year-to-year variability (including evidence of aquatic ecosystem responses or sensitivity to extremes), and studies of ecosystems affected by human activity. © 1997 by John Wiley & Sons, Ltd.
ERIC Educational Resources Information Center
Carlson, Laurie A.; Harper, Kelly S.
2011-01-01
Service provision to gay, lesbian, bisexual, and transgender (GLBT) older adults is a dynamic and sensitive area, requiring rigorous and extensive inquiry and action. Examining the readiness and assets of organizations serving GLBT older adults requires not only heart and sensitivity but also resources and a clear vision. The Community Readiness…
Code of Federal Regulations, 2010 CFR
2010-01-01
... authority to order use of procedures for access by potential parties to certain sensitive unclassified... Commission or the presiding officer. (b) If this part does not prescribe a time limit for an action to be...
Environmentally sensitive patch index of desertification risk applied to the main habitats of Sicily
NASA Astrophysics Data System (ADS)
Duro, A.; Piccione, V.; Ragusa, M. A.; Rapicavoli, V.; Veneziano, V.
2017-07-01
The authors applied the MEDALUS (Mediterranean Desertification and Land Use) procedure to the Sicilian habitats that are most representative in terms of extent and socio-economic and environmental importance, in order to assess the risk of desertification. Using the ESPI (Environmentally Sensitive Patch Index), the authors estimate current and future regional levels of desertification risk.
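For reference, MEDALUS-style sensitivity indices are commonly computed as a geometric mean of quality layers; the sketch below uses that common ESA-index form with illustrative values (the ESPI variant used above adds patch-level aggregation on top, and the class breakpoints cited in the comment are the conventional MEDALUS ones, quoted here as an assumption).

```python
# MEDALUS-style environmental sensitivity sketch: the ESA index is commonly
# the geometric mean of soil, climate, vegetation and management quality
# indices, each scored from 1 (best) to 2 (worst). Values are illustrative.
sqi, cqi, vqi, mqi = 1.4, 1.7, 1.3, 1.6      # quality indices for one cell
esai = (sqi * cqi * vqi * mqi) ** 0.25
print(f"ESAI = {esai:.2f}")                   # e.g. > ~1.375 => 'critical' classes
```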
NASA Astrophysics Data System (ADS)
Hostache, R.; Hissler, C.; Matgen, P.; Guignard, C.; Bates, P.
2014-09-01
Fine sediments represent an important vector of pollutant diffusion in rivers. When deposited in floodplains and riverbeds, they can be responsible for soil pollution. In this context, this paper proposes a modelling exercise aimed at predicting the transport and diffusion of fine sediments and dissolved pollutants. The model is based upon the Telemac hydro-informatic system (dynamic coupling of Telemac-2D and Sisyphe). As empirical and semiempirical parameters need to be calibrated for such a modelling exercise, a sensitivity analysis is proposed. An innovative point in this study is the assessment of the usefulness of dissolved trace metal contamination information for model calibration. Moreover, to support the modelling exercise, an extensive database was set up during two flood events. It includes water surface elevation records, discharge measurements and geochemistry data such as time series of dissolved/particulate contaminants and suspended-sediment concentrations. The most sensitive parameters were found to be the hydraulic friction coefficients and the sediment particle settling velocity in water. It was also found that model calibration did not benefit from dissolved trace metal contamination information. Using the two monitored hydrological events as calibration and validation, it was found that the model is able to satisfactorily predict suspended-sediment and dissolved-pollutant transport in the river channel. In addition, a qualitative comparison between simulated sediment deposition in the floodplain and a soil contamination map shows that the preferential zones for deposition identified by the model are realistic.
Kim, Bum Soo; Kim, Tae-Hwan; Kwon, Tae Gyun; Yoo, Eun Sang
2012-05-01
Several studies have demonstrated the superiority of endorectal coil magnetic resonance imaging (MRI) over pelvic phased-array coil MRI at 1.5 Tesla for local staging of prostate cancer. However, few have studied which evaluation is more accurate at 3 Tesla MRI. In this study, we compared the accuracy of local staging of prostate cancer using pelvic phased-array coil or endorectal coil MRI at 3 Tesla. Between January 2005 and May 2010, 151 patients underwent radical prostatectomy. All patients were evaluated with either pelvic phased-array coil or endorectal coil prostate MRI prior to surgery (63 endorectal coils and 88 pelvic phased-array coils). Tumor stage based on MRI was compared with pathologic stage. We calculated the specificity, sensitivity and accuracy of each group in the evaluation of extracapsular extension and seminal vesicle invasion. Both endorectal coil and pelvic phased-array coil MRI achieved high specificity, low sensitivity and moderate accuracy for the detection of extracapsular extension and seminal vesicle invasion. There were no statistically significant differences in specificity, sensitivity and accuracy between the two groups. Overall staging accuracy, sensitivity and specificity were not significantly different between endorectal coil and pelvic phased-array coil MRI.
Piezoresistive Cantilever Performance—Part I: Analytical Model for Sensitivity
Park, Sung-Jin; Doll, Joseph C.; Pruitt, Beth L.
2010-01-01
An accurate analytical model for the change in resistance of a piezoresistor is necessary for the design of silicon piezoresistive transducers. Ion implantation requires a high-temperature oxidation or annealing process to activate the dopant atoms, and this treatment results in a distorted dopant profile due to diffusion. Existing analytical models do not account for the concentration dependence of piezoresistance and are not accurate for nonuniform dopant profiles. We extend previous analytical work by introducing two nondimensional factors, namely, the efficiency and geometry factors. A practical benefit of this efficiency factor is that it separates the process parameters from the design parameters; thus, designers may address requirements for cantilever geometry and fabrication process independently. To facilitate the design process, we provide a lookup table for the efficiency factor over an extensive range of process conditions. The model was validated by comparing simulation results with the experimentally determined sensitivities of piezoresistive cantilevers. We performed 9200 TSUPREM4 simulations and fabricated 50 devices from six unique process flows; we systematically explored the design space relating process parameters and cantilever sensitivity. Our treatment focuses on piezoresistive cantilevers, but the analytical sensitivity model is extensible to other piezoresistive transducers such as membrane pressure sensors. PMID:20336183
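A hedged numerical sketch of the efficiency-factor idea may be useful here: weight the concentration-dependent piezoresistance factor by the local conductivity and by the linear bending-stress profile, then normalize by an ideal surface-confined resistor. The dopant profile, the P(N) roll-off, and the constant-mobility conductivity model below are illustrative assumptions, not the paper's lookup-table values.

```python
import numpy as np

t = 2.0e-6                                   # cantilever thickness (m)
z = np.linspace(0.0, t / 2.0, 2000)          # resistor in the top half

# Assumed Gaussian dopant profile from a drive-in diffusion (cm^-3).
N = 1e19 * np.exp(-((z / 0.3e-6) ** 2))

# Assumed roll-off of the piezoresistance factor with concentration.
P = 1.0 / (1.0 + (N / 1e18) ** 0.35)

stress = 1.0 - 2.0 * z / t                   # linear bending stress, 1 at surface
sigma = N                                    # conductivity ~ N (constant mobility)

def integ(f):                                # trapezoidal integral over z
    return float(np.sum((f[1:] + f[:-1]) * np.diff(z)) / 2.0)

efficiency = integ(P * sigma * stress) / (P[0] * integ(sigma))
print(f"efficiency factor ~ {efficiency:.2f} (1.0 = ideal surface resistor)")
```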
Mikhaylova, Lyudmila; Zhang, Yiming; Kobzik, Lester; Fedulov, Alexey V
2013-01-01
We investigated the link between epigenome-wide methylation aberrations at birth and genomic transcriptional changes upon allergen sensitization that occur in the neonatal dendritic cells (DC) due to maternal asthma. We previously demonstrated that neonates of asthmatic mothers are born with a functional skew in splenic DCs that can be seen even in allergen-naïve pups and can convey allergy responses to normal recipients. However, minimal-to-no transcriptional or phenotypic changes were found to explain this alteration. Here we provide in-depth analysis of genome-wide DNA methylation profiles and RNA transcriptional (microarray) profiles before and after allergen sensitization. We identified differentially methylated and differentially expressed loci and performed manually-curated matching of methylation status of the key regulatory sequences (promoters and CpG islands) to expression of their respective transcripts before and after sensitization. We found that while allergen-naive DCs from asthma-at-risk neonates have minimal transcriptional change compared to controls, the methylation changes are extensive. The substantial transcriptional change only becomes evident upon allergen sensitization, when it occurs in multiple genes with the pre-existing epigenetic alterations. We demonstrate that maternal asthma leads to both hyper- and hypomethylation in neonatal DCs, and that both types of events at various loci significantly overlap with transcriptional responses to allergen. Pathway analysis indicates that approximately 1/2 of differentially expressed and differentially methylated genes directly interact in known networks involved in allergy and asthma processes. We conclude that congenital epigenetic changes in DCs are strongly linked to altered transcriptional responses to allergen and to early-life asthma origin. The findings are consistent with the emerging paradigm that asthma is a disease with underlying epigenetic changes.
Evaluation of an Epigenetic Profile for the Detection of Bladder Cancer in Patients with Hematuria.
van Kessel, Kim E M; Van Neste, Leander; Lurkin, Irene; Zwarthoff, Ellen C; Van Criekinge, Wim
2016-03-01
Many patients enter the care cycle with gross or microscopic hematuria and undergo cystoscopy to rule out bladder cancer. The sensitivity of this invasive examination is limited, leaving many patients at risk for undetected cancer. To improve current clinical practice, more sensitive and noninvasive screening methods should be applied. A total of 154 urine samples were collected from patients with hematuria, including 80 without and 74 with bladder cancer. DNA from cells in the urine was epigenetically profiled using 2 independent assays. Methylation-specific polymerase chain reaction was performed on TWIST1. SNaPshot™ methylation analysis was done for different loci of OTX1 and ONECUT2. Additionally, all samples were analyzed for mutation status of TERT (telomerase reverse transcriptase), PIK3CA, FGFR3 (fibroblast growth factor receptor 3), HRAS, KRAS and NRAS. The combination of TWIST1, ONECUT2 (2 loci) and OTX1 resulted in the best overall performing panel. Logistic regression analysis on these methylation markers, mutation status of FGFR3, TERT and HRAS, and patient age resulted in an accurate model with 97% sensitivity, 83% specificity and an AUC of 0.93 (95% CI 0.88-0.98). Internal validation led to an optimism-corrected AUC of 0.92. With an estimated bladder cancer prevalence of 5% to 10% in a hematuria cohort, the assay resulted in a 99.6% to 99.9% negative predictive value. Epigenetic profiling using TWIST1, ONECUT2 and OTX1 results in high sensitivity and specificity. Accurate risk prediction might result in less extensive and invasive examination of patients at low risk, thereby reducing unnecessary patient burden and health care costs. Copyright © 2016 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
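The final risk model is a logistic regression over methylation, mutation, and age features; a sketch with synthetic data follows. All effect sizes are invented, and the AUC printed here is in-sample, whereas the study reports an optimism-corrected value from internal validation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic stand-ins: methylation levels plus mutation status and age
# predicting bladder cancer in a 154-sample hematuria cohort, as above.
rng = np.random.default_rng(0)
n = 154
cancer = rng.random(n) < 0.48                      # ~74/154 cases

meth = rng.normal(0.2, 0.1, (n, 4)) + 0.3 * cancer[:, None]   # TWIST1, OTX1...
mut = rng.random((n, 3)) < (0.05 + 0.4 * cancer[:, None])     # FGFR3, TERT, HRAS
age = rng.normal(60, 10, (n, 1)) + 5 * cancer[:, None]

X = np.hstack([meth, mut.astype(float), age])
model = LogisticRegression(max_iter=1000).fit(X, cancer)
print("AUC:", round(roc_auc_score(cancer, model.predict_proba(X)[:, 1]), 2))
```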
Dalziel, Kim; Round, Ali; Garside, Ruth; Stein, Ken
2005-01-01
To evaluate the cost utility of imatinib compared with interferon (IFN)-alpha or hydroxycarbamide (hydroxyurea) for first-line treatment of chronic myeloid leukaemia. A cost-utility (Markov) model within the setting of the UK NHS and viewed from a health system perspective was adopted. Transition probabilities and relative risks were estimated from published literature. Costs of drug treatment, outpatient care, bone marrow biopsies, radiography, blood transfusions and inpatient care were obtained from the British National Formulary and local hospital databases. Costs (£, year 2001-03 values) were discounted at 6%. Quality-of-life (QOL) data were obtained from the published literature and discounted at 1.5%. The main outcome measure was cost per QALY gained. Extensive one-way sensitivity analyses were performed along with probabilistic (stochastic) analysis. The incremental cost-effectiveness ratio (ICER) of imatinib, compared with IFNalpha, was £26,180 per QALY gained (one-way sensitivity analyses ranged from £19,449 to £51,870) and compared with hydroxycarbamide was £86,934 per QALY (one-way sensitivity analyses ranged from £69,701 to £147,095) [£1 = $US1.691 = €1.535 as at 31 December 2002]. Based on the probabilistic sensitivity analysis, 50% of the ICERs for imatinib, compared with IFNalpha, fell below a threshold of approximately £31,000 per QALY gained. Fifty percent of ICERs for imatinib, compared with hydroxycarbamide, fell below approximately £95,000 per QALY gained. This model suggests, given its underlying data and assumptions, that imatinib may be moderately cost effective when compared with IFNalpha but considerably less cost effective when compared with hydroxycarbamide. There are, however, many uncertainties due to the lack of long-term data.
Acute toxicity value extrapolation with fish and aquatic invertebrates
Buckler, Denny R.; Mayer, Foster L.; Ellersieck, Mark R.; Asfaw, Amha
2005-01-01
Assessment of risk posed by an environmental contaminant to an aquatic community requires estimation of both its magnitude of occurrence (exposure) and its ability to cause harm (effects). Our ability to estimate effects is often hindered by limited toxicological information. As a result, resource managers and environmental regulators are often faced with the need to extrapolate across taxonomic groups in order to protect the more sensitive members of the aquatic community. The goals of this effort were to 1) compile and organize an extensive body of acute toxicity data, 2) characterize the distribution of toxicant sensitivity across taxa and species, and 3) evaluate the utility of toxicity extrapolation methods based upon sensitivity relations among species and chemicals. Although the analysis encompassed a wide range of toxicants and species, pesticides and freshwater fish and invertebrates were emphasized as a reflection of available data. Although it is obviously desirable to have high-quality acute toxicity values for as many species as possible, the results of this effort allow for better use of available information for predicting the sensitivity of untested species to environmental contaminants. A software program entitled “Ecological Risk Analysis” (ERA) was developed that predicts toxicity values for sensitive members of the aquatic community using species sensitivity distributions. Of several methods evaluated, the ERA program used with minimum data sets comprising acute toxicity values for rainbow trout, bluegill, daphnia, and mysids provided the most satisfactory predictions with the least amount of data. However, if predictions must be made using data for a single species, the most satisfactory results were obtained with extrapolation factors developed for rainbow trout (0.412), bluegill (0.331), or scud (0.041). Although many specific exceptions occur, our results also support the conventional wisdom that invertebrates are generally more sensitive to contaminants than fish are.
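A species sensitivity distribution of the kind used by the ERA software can be sketched by fitting a log-normal to acute LC50 values and reading off the 5th percentile (HC5). The LC50 values below are hypothetical; the 0.412 rainbow trout extrapolation factor is the one reported in the abstract.

```python
import numpy as np
from scipy import stats

# Hypothetical acute LC50 values for several species (ug/L).
lc50 = np.array([12.0, 45.0, 3.2, 150.0, 8.9, 27.0, 64.0, 5.5])

# Fit a log-normal SSD and take the 5th percentile (HC5).
mu, sigma = np.mean(np.log(lc50)), np.std(np.log(lc50), ddof=1)
hc5 = float(np.exp(stats.norm.ppf(0.05, loc=mu, scale=sigma)))
print(f"HC5 ~ {hc5:.2f} ug/L")

# Single-species shortcut from the abstract: multiply a rainbow trout LC50
# by the reported extrapolation factor of 0.412.
print(f"trout-based estimate: {12.0 * 0.412:.2f} ug/L")
```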
Zhang, Haitao; Wu, Chenxue; Chen, Zewei; Liu, Zhao; Zhu, Yunhong
2017-01-01
Analyzing large-scale spatial-temporal k-anonymity datasets recorded in location-based service (LBS) application servers can benefit some LBS applications. However, such analyses can allow adversaries to make inference attacks that cannot be handled by spatial-temporal k-anonymity methods or other methods for protecting sensitive knowledge. In response to this challenge, first we defined a destination location prediction attack model based on privacy-sensitive sequence rules mined from large scale anonymity datasets. Then we proposed a novel on-line spatial-temporal k-anonymity method that can resist such inference attacks. Our anti-attack technique generates new anonymity datasets with awareness of privacy-sensitive sequence rules. The new datasets extend the original sequence database of anonymity datasets to hide the privacy-sensitive rules progressively. The process includes two phases: off-line analysis and on-line application. In the off-line phase, sequence rules are mined from an original sequence database of anonymity datasets, and privacy-sensitive sequence rules are developed by correlating privacy-sensitive spatial regions with spatial grid cells among the sequence rules. In the on-line phase, new anonymity datasets are generated upon LBS requests by adopting specific generalization and avoidance principles to hide the privacy-sensitive sequence rules progressively from the extended sequence anonymity datasets database. We conducted extensive experiments to test the performance of the proposed method, and to explore the influence of the parameter K value. The results demonstrated that our proposed approach is faster and more effective for hiding privacy-sensitive sequence rules in terms of hiding sensitive rules ratios to eliminate inference attacks. Our method also had fewer side effects in terms of generating new sensitive rules ratios than the traditional spatial-temporal k-anonymity method, and had basically the same side effects in terms of non-sensitive rules variation ratios with the traditional spatial-temporal k-anonymity method. Furthermore, we also found the performance variation tendency from the parameter K value, which can help achieve the goal of hiding the maximum number of original sensitive rules while generating a minimum of new sensitive rules and affecting a minimum number of non-sensitive rules.
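The core operation of the off-line phase, mining sequence rules and flagging those whose consequent falls in a privacy-sensitive region, can be sketched on toy trajectories. Thresholds, cell IDs, and the sensitive region below are all illustrative.

```python
from collections import Counter

# Toy trajectories of grid-cell IDs from an anonymity dataset.
trajectories = [
    ["c1", "c2", "c9"], ["c1", "c2", "c9"], ["c3", "c2", "c9"],
    ["c1", "c4"], ["c2", "c9"], ["c1", "c2", "c5"],
]
sensitive_cells = {"c9"}                 # e.g. cells covering a hospital
min_support, min_conf = 3, 0.6

# Count 2-item sequence rules a -> b from consecutive cells.
pair_count, ante_count = Counter(), Counter()
for traj in trajectories:
    for i in range(len(traj) - 1):
        ante_count[traj[i]] += 1
        pair_count[(traj[i], traj[i + 1])] += 1

for (a, b), n in pair_count.items():
    conf = n / ante_count[a]
    if n >= min_support and conf >= min_conf and b in sensitive_cells:
        print(f"privacy-sensitive rule: {a} -> {b} "
              f"(support={n}, confidence={conf:.2f})")
```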
Cost-Effectiveness Analysis of a National Newborn Screening Program for Biotinidase Deficiency.
Vallejo-Torres, Laura; Castilla, Iván; Couce, María L; Pérez-Cerdá, Celia; Martín-Hernández, Elena; Pineda, Mercé; Campistol, Jaume; Arrospide, Arantzazu; Morris, Stephen; Serrano-Aguilar, Pedro
2015-08-01
There are conflicting views as to whether testing for biotinidase deficiency (BD) ought to be incorporated into universal newborn screening (NBS) programs. The aim of this study was to evaluate the cost-effectiveness of adding BD to the panel of conditions currently screened under the national NBS program in Spain. We used information from the regional NBS program for BD that has been in place in the Spanish region of Galicia since 1987. These data, along with other sources, were used to develop a cost-effectiveness decision model that compared lifetime costs and health outcomes of a national birth cohort of newborns with and without an early detection program. The analysis took the perspective of the Spanish National Health Service. Effectiveness was measured in terms of quality-adjusted life years (QALYs). We undertook extensive sensitivity analyses around the main model assumptions, including a probabilistic sensitivity analysis. In the base case analysis, NBS for BD led to higher QALYs and higher health care costs, with an estimated incremental cost per QALY gained of $24,677. Lower costs per QALY gained were found when conservative assumptions were relaxed, yielding cost savings in some scenarios. The probability that BD screening was cost-effective was estimated to be >70% in the base case at a standard threshold value. This study indicates that NBS for BD is likely to be a cost-effective use of resources. Copyright © 2015 by the American Academy of Pediatrics.
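The probabilistic sensitivity analysis mentioned above boils down to resampling inputs and tallying how often the intervention is cost-effective at a threshold; the distributions below are illustrative assumptions, not the study's inputs.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Draw incremental cost and incremental QALYs from assumed distributions.
d_cost = rng.gamma(shape=4.0, scale=250.0, size=n)      # incremental cost ($)
d_qaly = rng.normal(loc=0.04, scale=0.015, size=n)      # incremental QALYs
threshold = 50_000.0                                    # $ per QALY

# Net monetary benefit per draw; the positive fraction is the probability
# that screening is cost-effective at this willingness-to-pay.
net_benefit = threshold * d_qaly - d_cost
prob_ce = np.mean(net_benefit > 0.0)
print(f"P(cost-effective at ${threshold:,.0f}/QALY) = {prob_ce:.0%}")
```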
Lazzaro, Carlo; Lopiano, Leonardo; Cocito, Dario
2014-07-01
Prior research has suggested that home-based subcutaneous immunoglobulin (SCIG) is equally effective and can be less expensive than hospital-based intravenous immunoglobulin (IVIG) in treating chronic inflammatory demyelinating polyneuropathy (CIDP) patients. This economic evaluation aims at comparing the costs of SCIG vs IVIG for CIDP patients in Italy. A 1-year model-based cost-minimization analysis, largely populated via neurologists' opinion, was undertaken from a societal perspective. Health care resources included immunoglobulin; drugs for premedication and complications (rash, headache, and hypertension) management; time of various health care professionals; a pump for SCIG self-administration; and infusion disposables. Non-health care resources encompassed transport and parking, and losses of working and leisure time for patients and caregivers. Unit or yearly costs for resource valuation were mainly obtained from published sources. Costs were expressed in Euro (€)…
Beyer, Sebastian E; Hunink, Myriam G; Schöberl, Florian; von Baumgarten, Louisa; Petersen, Steffen E; Dichgans, Martin; Janssen, Hendrik; Ertl-Wagner, Birgit; Reiser, Maximilian F; Sommer, Wieland H
2015-07-01
This study evaluated the cost-effectiveness of different noninvasive imaging strategies in patients with possible basilar artery occlusion. A Markov decision analytic model was used to evaluate long-term outcomes resulting from strategies using computed tomographic angiography (CTA), magnetic resonance imaging, nonenhanced CT, or duplex ultrasound with intravenous (IV) thrombolysis being administered after positive findings. The analysis was performed from the societal perspective based on US recommendations. Input parameters were derived from the literature. Costs were obtained from United States costing sources and published literature. Outcomes were lifetime costs, quality-adjusted life-years (QALYs), incremental cost-effectiveness ratios, and net monetary benefits, with a willingness-to-pay threshold of $80,000 per QALY. The strategy with the highest net monetary benefit was considered the most cost-effective. Extensive deterministic and probabilistic sensitivity analyses were performed to explore the effect of varying parameter values. In the reference case analysis, CTA dominated all other imaging strategies. CTA yielded 0.02 QALYs more than magnetic resonance imaging and 0.04 QALYs more than duplex ultrasound followed by CTA. At a willingness-to-pay threshold of $80,000 per QALY, CTA yielded the highest net monetary benefits. The probability that CTA is cost-effective was 96% at a willingness-to-pay threshold of $80,000/QALY. Sensitivity analyses showed that duplex ultrasound was cost-effective only for a prior probability of ≤0.02 and that these results were only minimally influenced by duplex ultrasound sensitivity and specificity. Nonenhanced CT and magnetic resonance imaging never became the most cost-effective strategy. Our results suggest that CTA in patients with possible basilar artery occlusion is cost-effective. © 2015 The Authors.
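The decision rule used here, ranking strategies by net monetary benefit (NMB = QALYs × willingness-to-pay − cost), can be illustrated with a short sketch; the per-strategy costs and QALYs below are hypothetical placeholders, not the published model outputs.

```python
WTP = 80_000  # $/QALY, the threshold used in the study

# Hypothetical lifetime (cost, QALY) pairs per imaging strategy --
# illustrative numbers only.
strategies = {
    "CTA":        (45_000, 10.40),
    "MRI":        (47_500, 10.38),
    "DUS -> CTA": (44_800, 10.36),
    "NECT":       (43_000, 10.20),
}

nmb = {s: q * WTP - c for s, (c, q) in strategies.items()}
for s, v in sorted(nmb.items(), key=lambda kv: -kv[1]):
    print(f"{s:>10}: NMB = ${v:,.0f}")
print("most cost-effective:", max(nmb, key=nmb.get))
```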
Balabin, Roman M; Smirnov, Sergey V
2011-07-15
Melamine (2,4,6-triamino-1,3,5-triazine) is a nitrogen-rich chemical implicated in the pet and human food recalls and in the global food safety scares involving milk products. Due to the serious health concerns associated with melamine consumption and the extensive scope of affected products, rapid and sensitive methods to detect melamine's presence are essential. We propose the use of spectroscopy data, produced in particular by near-infrared (near-IR/NIR) and mid-infrared (mid-IR/MIR) spectroscopies, for melamine detection in complex dairy matrixes. None of the IR-based methods for melamine detection reported to date has unambiguously demonstrated wide applicability to different dairy products together with a limit of detection (LOD) below 1 ppm on an independent sample set. It was found that infrared spectroscopy is an effective tool to detect melamine in dairy products, such as infant formula, milk powder, or liquid milk. A LOD below 1 ppm (0.76±0.11 ppm) can be reached if a correct spectrum preprocessing (pretreatment) technique and a correct multivariate data analysis (MDA) algorithm, such as partial least squares regression (PLS), polynomial PLS (Poly-PLS), an artificial neural network (ANN), support vector regression (SVR), or a least squares support vector machine (LS-SVM), are used for spectrum analysis. The relationship between the MIR/NIR spectrum of milk products and melamine content is nonlinear. Thus, nonlinear regression methods are needed to correctly predict the triazine-derivative content of milk products. It can be concluded that mid- and near-infrared spectroscopy can be regarded as a quick, sensitive, robust, and low-cost method for liquid milk, infant formula, and milk powder analysis. Copyright © 2011 Elsevier B.V. All rights reserved.
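For readers unfamiliar with the chemometric workflow, the following hedged sketch contrasts a linear PLS calibration with a nonlinear SVR on synthetic spectra whose response is deliberately nonlinear in concentration, mirroring the abstract's point; the spectral model and all numbers are fabricated for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n, p = 200, 100                          # samples x wavelength channels
conc = rng.uniform(0, 5, n)              # "melamine", ppm (synthetic)
wl = np.linspace(0, 1, p)
peak = np.exp(-((wl - 0.5) ** 2) / 0.002)

# Nonlinear response + varying baseline + noise, mimicking a dairy matrix.
X = (np.sqrt(conc)[:, None] * peak               # nonlinear in concentration
     + rng.normal(0, 0.05, (n, p))               # instrument noise
     + rng.uniform(0.5, 1.5, n)[:, None] * wl)   # sloping baseline

Xtr, Xte, ytr, yte = train_test_split(X, conc, random_state=0)
pls = PLSRegression(n_components=5).fit(Xtr, ytr)
svr = SVR(C=10).fit(Xtr, ytr)

for name, model in [("PLS (linear)", pls), ("SVR (nonlinear)", svr)]:
    pred = np.ravel(model.predict(Xte))
    rmse = np.sqrt(np.mean((pred - yte) ** 2))
    print(f"{name}: RMSEP = {rmse:.3f} ppm")
```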
Pérez-Rodríguez, F; van Asselt, E D; Garcia-Gimeno, R M; Zurera, G; Zwietering, M H
2007-05-01
The risk assessment study of Listeria monocytogenes in ready-to-eat foods conducted by the U.S. Food and Drug Administration is an example of an extensive quantitative microbiological risk assessment that could be used by risk analysts and other scientists to obtain information and by managers and stakeholders to make decisions on food safety management. The present study was conducted to investigate how detailed sensitivity analysis can be used by assessors to extract more information on risk factors and how results can be communicated to managers and stakeholders in an understandable way. The extended sensitivity analysis revealed that the extremes at the right side of the dose distribution (at consumption, 9 to 11.5 log CFU per serving) were responsible for most of the simulated cases of listeriosis. For concentration at retail, values below the detection limit of 0.04 CFU/g and below the often-used limit for L. monocytogenes of 100 CFU/g (also at retail) were associated with a high number of annual cases of listeriosis (about 29 and 82%, respectively). This association can be explained by growth of L. monocytogenes at both average and extreme values of temperature and time, indicating that a wide distribution can lead to high risk levels. Another finding is the importance of the maximal population density (i.e., the maximum concentration of L. monocytogenes assumed at a certain temperature) for accurately estimating the risk of infection by opportunistic pathogens such as L. monocytogenes. According to the results, most of the simulated risk arose from concentrations corresponding to the highest maximal population densities. However, sensitivity analysis applied to the uncertainty parameters revealed that prevalence at retail was the most important source of uncertainty in the model.
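A stripped-down Monte Carlo sketch of the kind of analysis described, using an exponential dose-response model and a maximal-population-density cap, is shown below; all parameter values are invented for illustration and do not reproduce the FDA assessment.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100_000

# Hypothetical inputs: concentration at retail (log10 CFU/g), growth during
# storage (log10 increase, capped by a maximal population density).
c_retail = rng.normal(-1.5, 1.2, N)        # log10 CFU/g at retail
growth = rng.gamma(2.0, 1.0, N)            # log10 growth up to consumption
MPD = 8.0                                  # maximal population density, log10 CFU/g
c_cons = np.minimum(c_retail + growth, MPD)
dose = 10 ** c_cons * 25                   # CFU per 25 g serving

r = 1e-12                                  # exponential dose-response slope
p_ill = 1 - np.exp(-r * dose)

# Which part of the dose distribution drives the simulated cases?
log_dose = np.log10(dose)
for lo, hi in [(-5, 6), (6, 9), (9, 14)]:
    sel = (log_dose >= lo) & (log_dose < hi)
    share = p_ill[sel].sum() / p_ill.sum()
    print(f"doses 10^{lo} to 10^{hi} CFU: {share:.1%} of expected cases")
```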
Very high energy gamma ray extension of GRO observations
NASA Technical Reports Server (NTRS)
Weekes, Trevor C.
1992-01-01
This has been an exciting year for high energy gamma-ray astronomy, both from space and from ground-based observatories. It has been a particularly active period for the Whipple Observatory gamma-ray group. In phase 1 of the Compton Gamma Ray Observatory (GRO), there was not much opportunity for overlapping observations with the Energetic Gamma Ray Experiment Telescope (EGRET) and the other GRO telescopes; however, significant progress was made in the development of data analysis techniques and in improving the sensitivity of the technique, which will have direct application in correlative observations in phase 2. Progress made during the period 1 Jul. 1991 - 31 Dec. 1991 is presented.
Zamorano, Anna M.; Riquelme, Inmaculada; Kleber, Boris; Altenmüller, Eckart; Hatem, Samar M.; Montoya, Pedro
2015-01-01
Extensive training of repetitive and highly skilled movements, as it occurs in professional classical musicians, may lead to changes in tactile sensitivity and corresponding cortical reorganization of somatosensory cortices. It is also known that professional musicians frequently experience musculoskeletal pain and pain-related symptoms during their careers. The present study aimed at understanding the complex interaction between chronic pain and music training with respect to somatosensory processing. For this purpose, tactile thresholds (mechanical detection, grating orientation, two-point discrimination) and subjective ratings to thermal and pressure pain stimuli were assessed in 17 professional musicians with chronic pain, 30 pain-free musicians, 20 non-musicians with chronic pain, and 18 pain-free non-musicians. We found that pain-free musicians displayed greater touch sensitivity (i.e., lower mechanical detection thresholds), lower tactile spatial acuity (i.e., higher grating orientation thresholds) and increased pain sensitivity to pressure and heat compared to pain-free non-musicians. Moreover, we also found that musicians and non-musicians with chronic pain presented lower tactile spatial acuity and increased pain sensitivity to pressure and heat compared to pain-free non-musicians. The significant increment of pain sensitivity together with decreased spatial discrimination in pain-free musicians and the similarity of results found in chronic pain patients, suggests that the extensive training of repetitive and highly skilled movements in classical musicians could be considered as a risk factor for developing chronic pain, probably due to use-dependent plastic changes elicited in somatosensory pathways. PMID:25610384
Motor Oil Classification using Color Histograms and Pattern Recognition Techniques.
Ahmadi, Shiva; Mani-Varnosfaderani, Ahmad; Habibi, Biuck
2018-04-20
Motor oil classification is important for quality control and the identification of oil adulteration. In this work, we propose a simple, rapid, inexpensive and nondestructive approach based on image analysis and pattern recognition techniques for the classification of nine different types of motor oils according to their corresponding color histograms. For this, we computed color histograms in different color spaces such as red green blue (RGB), grayscale, and hue saturation intensity (HSI) in order to extract features that can help with the classification procedure. These color histograms and their combinations were used as input for model development and then were statistically evaluated by using linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), and support vector machine (SVM) techniques. Here, two common solutions for solving a multiclass classification problem were applied: (1) transformation to a binary classification problem using a one-against-all (OAA) approach and (2) extension from binary classifiers to a single globally optimized multilabel classification model. In the OAA strategy, LDA, QDA, and SVM reached up to 97% in terms of accuracy, sensitivity, and specificity for both the training and test sets. In the extension from the binary case, despite good performances by the SVM classification model, QDA and LDA provided better results, up to 92% for the RGB-grayscale-HSI color histograms and up to 93% for the HSI color map, respectively. In order to reduce the number of independent variables for modeling, a principal component analysis algorithm was used. Our results suggest that the proposed method is promising for the identification and classification of different types of motor oils.
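The pipeline described, per-channel color histograms fed to LDA/QDA/SVM classifiers, can be sketched in a few lines of Python; the synthetic "oil" images and class colors below are placeholders for the real photographs.

```python
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

def rgb_histogram(img, bins=8):
    """Concatenate per-channel histograms into one feature vector."""
    return np.concatenate([np.histogram(img[..., c], bins=bins,
                                        range=(0, 255))[0] for c in range(3)])

# Synthetic "oil" images: each class has a characteristic mean colour.
X, y = [], []
for cls, colour in enumerate([(180, 140, 40), (120, 90, 30), (60, 50, 20)]):
    for _ in range(60):
        img = rng.normal(colour, 25, size=(32, 32, 3)).clip(0, 255)
        X.append(rgb_histogram(img)); y.append(cls)
X, y = np.array(X, dtype=float), np.array(y)

Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)
for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("QDA", QuadraticDiscriminantAnalysis(reg_param=0.1)),
                  ("OAA-SVM", OneVsRestClassifier(SVC(kernel="rbf")))]:
    print(name, "accuracy:", clf.fit(Xtr, ytr).score(Xte, yte))
```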
Linking animal-borne video to accelerometers reveals prey capture variability.
Watanabe, Yuuki Y; Takahashi, Akinori
2013-02-05
Understanding foraging is important in ecology, as it determines the energy gains and, ultimately, the fitness of animals. However, monitoring prey captures of individual animals is difficult. Direct observations using animal-borne videos have short recording periods, and indirect signals (e.g., stomach temperature) are never validated in the field. We took an integrated approach to monitor prey captures by a predator by deploying a video camera (lasting for 85 min) and two accelerometers (on the head and back, lasting for 50 h) on free-swimming Adélie penguins. The movies showed that penguins moved their heads rapidly to capture krill in midwater and fish (Pagothenia borchgrevinki) underneath the sea ice. Captures were remarkably fast (two krill per second in swarms) and efficient (244 krill or 33 P. borchgrevinki in 78-89 min). Prey captures were detected from the signal of head acceleration relative to body acceleration with high sensitivity and specificity (0.83-0.90), as shown by receiver-operating characteristic analysis. Extension of the signal analysis to the entire behavioral records showed that krill captures were spatially and temporally more variable than P. borchgrevinki captures. Notably, the frequency distribution of krill capture rate closely followed a power-law model, indicating that the foraging success of penguins depends on a small number of very successful dives. The three steps illustrated here (i.e., video observations, linking video to behavioral signals, and extension of signal analysis) are unique approaches to understanding the spatial and temporal variability of ecologically important events such as foraging.
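The receiver-operating-characteristic step can be illustrated as follows, treating a head-relative acceleration summary per time window as the detection signal; the window statistics are simulated, not the penguin data.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(4)

# Hypothetical 1-s windows: head-minus-body acceleration variance as signal.
n_capture, n_other = 300, 1500
signal = np.concatenate([rng.normal(2.0, 0.8, n_capture),   # capture windows
                         rng.normal(0.5, 0.6, n_other)])    # swimming/other
labels = np.concatenate([np.ones(n_capture), np.zeros(n_other)])

fpr, tpr, thr = roc_curve(labels, signal)
auc = roc_auc_score(labels, signal)

# Pick the threshold maximizing Youden's J = sensitivity + specificity - 1.
best = np.argmax(tpr - fpr)
print(f"AUC = {auc:.2f}; threshold = {thr[best]:.2f}, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```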
Guan, Jiuqiang; Long, Keren; Ma, Jideng; Zhang, Jinwei; He, Dafang; Jin, Long; Tang, Qianzi; Jiang, Anan; Wang, Xun; Hu, Yaodong; Tian, Shilin; Jiang, Zhi; Li, Mingzhou; Luo, Xiaolin
2017-01-01
Extensive and in-depth investigations of high-altitude adaptation have been carried out at the level of morphology, anatomy, physiology and genomics, but few investigations have focused on the roles of microRNA (miRNA) in high-altitude adaptation. We examined the differences in the miRNA transcriptomes of two representative hypoxia-sensitive tissues (heart and lung) between yak and cattle, two closely related species that live at high and low altitudes, respectively. In this study, we identified a total of 808 mature miRNAs, which corresponded to 715 pre-miRNAs in the two species. Further analysis revealed that both tissues showed relatively high correlation coefficients between yak and cattle, but greater differentiation was present in lung than in heart between the two species. In addition, miRNAs with significantly differentiated expression patterns in the two tissues exhibited cooperative effects in high-altitude adaptation at the level of miRNA families and clusters. Functional analysis revealed that differentially expressed miRNAs were enriched in hypoxia-related pathways, such as the HIF-1α signaling pathway, the insulin signaling pathway, the PI3K-Akt signaling pathway, nucleotide excision repair, cell cycle, apoptosis and fatty acid metabolism, which indicates important roles of miRNAs in high-altitude adaptation. These results suggest diverse degrees of miRNA transcriptome variation between yak and cattle in different tissues, and extensive roles of miRNAs in high-altitude adaptation. PMID:29109913
Comparative abilities of Microsoft Kinect and Vicon 3D motion capture for gait analysis.
Pfister, Alexandra; West, Alexandre M; Bronner, Shaw; Noah, Jack Adam
2014-07-01
Biomechanical analysis is a powerful tool in the evaluation of movement dysfunction in orthopaedic and neurologic populations. Three-dimensional (3D) motion capture systems are widely used, accurate systems, but are costly and not available in many clinical settings. The Microsoft Kinect™ has the potential to be used as an alternative low-cost motion analysis tool. The purpose of this study was to assess concurrent validity of the Kinect™ with Brekel Kinect software in comparison to Vicon Nexus during sagittal plane gait kinematics. Twenty healthy adults (9 male, 11 female) were tracked while walking and jogging at three velocities on a treadmill. Concurrent hip and knee peak flexion and extension and stride timing measurements were compared between Vicon and Kinect™. Although Kinect™ measurements were representative of normal gait, the Kinect™ generally underestimated joint flexion and overestimated extension. Kinect™ and Vicon hip angular displacement correlation was very low and error was large. Kinect™ knee measurements were somewhat better than hip, but were not consistent enough for clinical assessment. Correlation between Kinect™ and Vicon stride timing was high and error was fairly small. Variability in Kinect™ measurements was smallest at the slowest velocity. The Kinect™ has basic motion capture capabilities and with some minor adjustments will be an acceptable tool to measure stride timing, but sophisticated advances in software and hardware are necessary to improve Kinect™ sensitivity before it can be implemented for clinical use.
Westerhout, K Y; Verheggen, B G; Schreder, C H; Augustin, M
2012-01-01
An economic evaluation was conducted to assess the outcomes and costs as well as cost-effectiveness of the following grass-pollen immunotherapies: OA (Oralair; Stallergenes S.A., Antony, France) vs GRZ (Grazax; ALK-Abelló, Hørsholm, Denmark), and ALD (Alk Depot SQ; ALK-Abelló) (immunotherapy agents alongside symptomatic medication) and symptomatic treatment alone for grass pollen allergic rhinoconjunctivitis. The costs and outcomes of 3-year treatment were assessed for a period of 9 years using a Markov model. Treatment efficacy was estimated using an indirect comparison of available clinical trials with placebo as a common comparator. Estimates for immunotherapy discontinuation, occurrence of asthma, health state utilities, drug costs, resource use, and healthcare costs were derived from published sources. The analysis was conducted from the insurant's perspective including public and private health insurance payments and co-payments by insurants. Outcomes were reported as quality-adjusted life years (QALYs) and symptom-free days. The uncertainty around incremental model results was tested by means of extensive deterministic univariate and probabilistic multivariate sensitivity analyses. In the base case analysis the model predicted a cost-utility ratio of OA vs symptomatic treatment of €14,728 per QALY; incremental costs were €1356 (95%CI: €1230; €1484) and incremental QALYs 0.092 (95%CI: 0.052; 0.140). OA was the dominant strategy compared to GRZ and ALD, with estimated incremental costs of -€1142 (95%CI: -€1255; -€1038) and -€54 (95%CI: -€188; €85) and incremental QALYs of 0.015 (95%CI: -0.025; 0.056) and 0.027 (95%CI: -0.022; 0.075), respectively. At a willingness-to-pay threshold of €20,000, the probability of OA being the most cost-effective treatment was predicted to be 79%. Univariate sensitivity analyses show that incremental outcomes were moderately sensitive to changes in efficacy estimates. The main study limitation was the requirement of an indirect comparison involving several steps to assess relative treatment effects. The analysis suggests OA to be cost-effective compared to GRZ and ALD, and a symptomatic treatment. Sensitivity analyses showed that uncertainty surrounding treatment efficacy estimates affected the model outcomes.
Morais, João; Aguiar, Carlos; McLeod, Euan; Chatzitheofilou, Ismini; Fonseca Santos, Isabel; Pereira, Sónia
2014-09-01
To project the long-term cost-effectiveness of treating non-valvular atrial fibrillation (AF) patients for stroke prevention with rivaroxaban compared to warfarin in Portugal. A Markov model was used that included health and treatment states describing the management and consequences of AF and its treatment. The model's time horizon was set at a patient's lifetime and each cycle at three months. The analysis was conducted from a societal perspective and a 5% discount rate was applied to both costs and outcomes. Treatment effect data were obtained from the pivotal phase III ROCKET AF trial. The model was also populated with utility values obtained from the literature and with cost data derived from official Portuguese sources. The outcomes of the model included life-years, quality-adjusted life-years (QALYs), incremental costs, and associated incremental cost-effectiveness ratios (ICERs). Extensive sensitivity analyses were undertaken to further assess the findings of the model. As there is evidence indicating underuse and underprescription of warfarin in Portugal, an additional analysis was performed using a mixed comparator composed of no treatment, aspirin, and warfarin, which better reflects real-world prescribing in Portugal. This cost-effectiveness analysis produced an ICER of €3895/QALY for the base-case analysis (vs. warfarin) and of €6697/QALY for the real-world prescribing analysis (vs. mixed comparator). The findings were robust when tested in sensitivity analyses. The results showed that rivaroxaban may be a cost-effective alternative compared with warfarin or real-world prescribing in Portugal. Copyright © 2014 Sociedade Portuguesa de Cardiologia. Published by Elsevier España. All rights reserved.
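For readers unfamiliar with the machinery, a minimal Markov cohort model with quarterly cycles and 5% annual discounting, as used in this kind of evaluation, might look like the sketch below; the states, transition probabilities, costs, and utilities are invented placeholders, not the ROCKET AF-derived inputs.

```python
import numpy as np

# States: 0 = well (on anticoagulant), 1 = post-stroke, 2 = dead. 3-month cycles.
P = np.array([[0.985, 0.010, 0.005],    # hypothetical transition probabilities
              [0.000, 0.980, 0.020],
              [0.000, 0.000, 1.000]])
cost = np.array([250.0, 1500.0, 0.0])   # cost per state per cycle (EUR)
utility = np.array([0.80, 0.55, 0.0])   # annual QALY weight per state

disc = 0.05                             # 5% annual discount rate
cycles_per_year, horizon_years = 4, 30

state = np.array([1.0, 0.0, 0.0])       # cohort starts in 'well'
total_cost = total_qaly = 0.0
for t in range(cycles_per_year * horizon_years):
    df = (1 + disc) ** (-(t / cycles_per_year))
    total_cost += df * state @ cost
    total_qaly += df * state @ utility / cycles_per_year
    state = state @ P                   # advance the cohort one cycle

print(f"lifetime cost ≈ €{total_cost:,.0f}; QALYs ≈ {total_qaly:.2f}")
```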
Impact of production strategies and animal performance on economic values of dairy sheep traits.
Krupová, Z; Wolfová, M; Krupa, E; Oravcová, M; Daňo, J; Huba, J; Polák, P
2012-03-01
The objective of this study was to carry out a sensitivity analysis on the impact of various production strategies and performance levels on the relative economic values (REVs) of traits in dairy sheep. A bio-economic model implemented in the program package ECOWEIGHT was used to simulate the profit function for a semi-extensive production system with the Slovak multi-purpose breed Improved Valachian and to calculate the REV of 14 production and functional traits. The following production strategies were analysed: differing proportions of milk processed to cheese, customary weaning and early weaning of lambs with immediate sale or sale after artificial rearing, seasonal lambing in winter and aseasonal lambing in autumn. Results of the sensitivity analysis are presented in detail for the four economically most important traits: 150 days milk yield, conception rate of ewes, litter size and ewe productive lifetime. Impacts of the differences in the mean value of each of these four traits on REVs of all other traits were also examined. Simulated changes in the production circumstances had a higher impact on the REV for milk yield than on REVs of the other traits investigated. The proportion of milk processed to cheese, weaning management strategy for lambs and level of milk yield were the main factors influencing the REV of milk yield. The REVs for conception rate of ewes were highly sensitive to the current mean level of the trait. The REV of ewe productive lifetime was most sensitive to variation in ewe conception rate, and the REV of litter size was most affected by weaning strategy for lambs. On the basis of the results of sensitivity analyses, it is recommended that economic values of traits for the overall breeding objective for dairy sheep be calculated as the weighted average of the economic values obtained for the most common production strategies of Slovak dairy sheep farms and that economic values be adjusted after substantial changes in performance levels of the traits.
Extreme events in a vortex gas simulation of a turbulent half-jet
NASA Astrophysics Data System (ADS)
Suryanarayanan, Saikishan; Pathikonda, Gokul; Narasimha, Roddam
2012-11-01
Extensive simulations [
NASA Astrophysics Data System (ADS)
Pihan-Le Bars, H.; Guerlin, C.; Lasseri, R.-D.; Ebran, J.-P.; Bailey, Q. G.; Bize, S.; Khan, E.; Wolf, P.
2017-04-01
We introduce an improved model that links the frequency shift of the 133Cs hyperfine Zeeman transitions |F=3, mF⟩ ↔ |F=4, mF⟩ to the Lorentz-violating Standard Model extension (SME) coefficients of the proton and neutron. The new model uses Lorentz transformations developed to second order in boost and additionally takes the nuclear structure into account, beyond the simple Schmidt model used previously in SME analyses, thereby providing access to both proton and neutron SME coefficients, including the isotropic coefficient c̃_TT. Using this new model in a second analysis of the data delivered by the FO2 dual Cs/Rb fountain at Paris Observatory and previously analyzed in [1], we improve by up to 13 orders of magnitude the present maximum sensitivities for laboratory tests [2] on the c̃_Q, c̃_TJ, and c̃_TT coefficients for the neutron and on the c̃_Q coefficient for the proton, reaching respectively 10^-20, 10^-17, 10^-13, and 10^-15 GeV.
T-p phase diagrams and the barocaloric effect in materials with successive phase transitions
NASA Astrophysics Data System (ADS)
Gorev, M. V.; Bogdanov, E. V.; Flerov, I. N.
2017-09-01
An analysis of the extensive and intensive barocaloric effect (BCE) at successive structural phase transitions in some complex fluorides and oxyfluorides was performed. The high sensitivity of these compounds to a change in the chemical pressure allows one to vary the succession and parameters of the transformations (temperature, entropy, baric coefficient) over a wide range and obtain optimal values of the BCE. A comparison of different types of schematic T-p phase diagrams with the complicated T(p) dependences observed experimentally shows that in some ranges of temperature and pressure the BCE in compounds undergoing successive transformations can be increased due to a summation of caloric effects associated with distinct phase transitions. The maximum values of the extensive and intensive BCE in complex fluorides and oxyfluorides can be realized at rather low pressure (0.1-0.3 GPa). In a narrow temperature range around the triple points, conversion from conventional BCE to inverse BCE is observed, accompanied by a gigantic change of both |ΔS_BCE| and |ΔT_AD|.
ERIC Educational Resources Information Center
Balogh, R.; Brownell, M.; Ouellette-Kuntz, H.; Colantonio, A.
2010-01-01
Background: There is evidence that persons with an intellectual disability (ID) face barriers to primary care; however, this has not been extensively studied at the population level. Rates of hospitalisation for ambulatory care sensitive conditions are used as an indicator of access to, and quality of, primary care. The objective of the study was…
PRADA: pipeline for RNA sequencing data analysis.
Torres-García, Wandaliz; Zheng, Siyuan; Sivachenko, Andrey; Vegesna, Rahulsimham; Wang, Qianghu; Yao, Rong; Berger, Michael F; Weinstein, John N; Getz, Gad; Verhaak, Roel G W
2014-08-01
Technological advances in high-throughput sequencing necessitate improved computational tools for processing and analyzing large-scale datasets in a systematic automated manner. For that purpose, we have developed PRADA (Pipeline for RNA-Sequencing Data Analysis), a flexible, modular and highly scalable software platform that provides many different types of information available by multifaceted analysis starting from raw paired-end RNA-seq data: gene expression levels, quality metrics, detection of unsupervised and supervised fusion transcripts, detection of intragenic fusion variants, homology scores and fusion frame classification. PRADA uses a dual-mapping strategy that increases sensitivity and refines the analytical endpoints. PRADA has been used extensively and successfully in the glioblastoma and renal clear cell projects of The Cancer Genome Atlas program. Availability: http://sourceforge.net/projects/prada/. Contact: gadgetz@broadinstitute.org or rverhaak@mdanderson.org. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Orthogonal sparse linear discriminant analysis
NASA Astrophysics Data System (ADS)
Liu, Zhonghua; Liu, Gang; Pu, Jiexin; Wang, Xiaohong; Wang, Haijun
2018-03-01
Linear discriminant analysis (LDA) is a linear feature extraction approach that has received much attention. Building on LDA, researchers have proposed many variant versions. However, the inherent problems of LDA are not solved well by these variants. The major disadvantages of classical LDA are as follows. First, it is sensitive to outliers and noise. Second, only the global discriminant structure is preserved, while local discriminant information is ignored. In this paper, we present a new orthogonal sparse linear discriminant analysis (OSLDA) algorithm. The k nearest neighbour graph is first constructed to preserve the locality discriminant information of sample points. Then, an L2,1-norm constraint on the projection matrix is used as the loss function, which makes the proposed method robust to outliers in the data. Extensive experiments have been performed on several standard public image databases, and the results demonstrate the performance of the proposed OSLDA algorithm.
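The row-sparsity mechanism is worth spelling out: the L2,1 norm sums the Euclidean norms of the projection matrix's rows, so penalizing it drives whole rows to zero and thereby discards whole input features. A tiny sketch (illustrative, not the OSLDA solver):

```python
import numpy as np

def l21_norm(W):
    """||W||_{2,1} = sum_i ||w_i||_2 -- the sum of row-wise Euclidean norms.
    Penalizing it zeroes out entire rows of the projection matrix, so the
    corresponding features are dropped for every discriminant direction."""
    return np.linalg.norm(W, axis=1).sum()

W_dense = np.array([[0.9, 0.1], [0.4, -0.7], [0.3, 0.2]])
W_row_sparse = np.array([[1.0, 0.2], [0.0, 0.0], [0.5, -0.6]])
print(l21_norm(W_dense), l21_norm(W_row_sparse))  # row-sparse W scores lower
```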
Adapting Human Reliability Analysis from Nuclear Power to Oil and Gas Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boring, Ronald Laurids
2015-09-01
Human reliability analysis (HRA), as currently used in risk assessments, largely derives its methods and guidance from application in the nuclear energy domain. While there are many similarities between nuclear energy and other safety critical domains such as oil and gas, there remain clear differences. This paper provides an overview of the HRA state of the practice in nuclear energy and then describes areas where refinements to the methods may be necessary to capture the operational context of oil and gas. Many key distinctions important to nuclear energy HRA, such as Level 1 vs. Level 2 analysis, may prove insignificant for oil and gas applications. On the other hand, existing HRA methods may not be sensitive enough to factors like the extensive use of digital controls in oil and gas. This paper provides an overview of these considerations to assist in the adaptation of existing nuclear-centered HRA methods to the petroleum sector.
Lipidomic analysis of glycerolipid and cholesteryl ester autooxidation products.
Kuksis, Arnis; Suomela, Jukka-Pekka; Tarvainen, Marko; Kallio, Heikki
2009-06-01
Thin-layer chromatography (TLC), gas chromatography (GC), and liquid chromatography (LC) in combination with mass spectrometry (MS) have been adopted for the isolation and identification of oxolipids and for determining their functionality. TLC provides a rapid separation and access to most oxolipids as intact molecules and has recently been effectively interfaced with time-of-flight (TOF) MS (TOF-MS). GC with flame ionization (FI) (GC/FI) and electron impact (EI) MS (GC/EI-MS) has been extensively utilized in the analysis of isoprostanes and other low-molecular-weight oxolipids, although these methods require derivatization of the analytes. In contrast, LC with ultraviolet (UV) absorption (LC/UV) or evaporative light scattering detection (ELSD) (LC/ELSD) as well as electrospray ionization (ESI) or atmospheric pressure chemical ionization (APCI) MS (LC/ESI-MS or LC/APCI-MS) has proven to be well suited for the analysis of intact oxolipids and their conjugates without or with minimal derivatization. Nevertheless, kit-based colorimetric and fluorescent procedures continue to serve as sensitive indicators of the presence of hydroperoxides and aldehydes.
Computational analysis of a multistage axial compressor
NASA Astrophysics Data System (ADS)
Mamidoju, Chaithanya
Turbomachines are used extensively in Aerospace, Power Generation, and Oil & Gas Industries. Efficiency of these machines is often an important factor and has led to the continuous effort to improve the design to achieve better efficiency. The axial flow compressor is a major component in a gas turbine, with the turbine's overall performance depending strongly on compressor performance. Traditional analysis of axial compressors involves throughflow calculations, isolated blade passage analysis, Quasi-3D blade-to-blade analysis, single-stage (rotor-stator) analysis, and multi-stage analysis involving larger design cycles. In the current study, the detailed flow through a 15 stage axial compressor is analyzed using a 3-D Navier Stokes CFD solver in a parallel computing environment. Methodology is described for steady state (frozen rotor stator) analysis of one blade passage per component. Various effects such as mesh type and density, boundary conditions, tip clearance, and numerical issues such as turbulence model choice, advection model choice, and parallel processing performance are analyzed. A high sensitivity of the predictions to the above was found. Physical explanations for the flow features observed in the computational study are given. The total pressure rise versus mass flow rate was computed.
Chuang, Hui-Ping; Hsu, Mao-Hsuan; Chen, Wei-Yu
2013-01-01
In this study, we established a rapid multiplex method to detect the relative abundances of amplified 16S rRNA genes from known cultivatable methanogens at hierarchical specificities in anaerobic digestion systems treating industrial wastewater and sewage sludge. The method was based on the hierarchical oligonucleotide primer extension (HOPE) technique and combined with a set of 27 primers designed to target the total archaeal populations and methanogens from 22 genera within 4 taxonomic orders. After optimization for their specificities and detection sensitivity under the conditions of multiple single-nucleotide primer extension reactions, the HOPE approach was applied to analyze the methanogens in 19 consortium samples from 7 anaerobic treatment systems (i.e., 513 reactions). Among the samples, the methanogen populations detected with order-level primers accounted for >77.2% of the PCR-amplified 16S rRNA genes detected using an Archaea-specific primer. The archaeal communities typically consisted of 2 to 7 known methanogen genera within the Methanobacteriales, Methanomicrobiales, and Methanosarcinales and displayed population dynamic and spatial distributions in anaerobic reactor operations. Principal component analysis of the HOPE data further showed that the methanogen communities could be clustered into 3 distinctive groups, in accordance with the distribution of the Methanosaeta, Methanolinea, and Methanomethylovorans, respectively. This finding suggested that in addition to acetotrophic and hydrogenotrophic methanogens, the methylotrophic methanogens might play a key role in the anaerobic treatment of industrial wastewater. Overall, the results demonstrated that the HOPE approach is a specific, rapid, and multiplexing platform to determine the relative abundances of targeted methanogens in PCR-amplified 16S rRNA gene products. PMID:24077716
Extended analytical solutions for effective elastic moduli of cracked porous media
NASA Astrophysics Data System (ADS)
Nguyen, Sy-Tuan; To, Quy Dong; Vu, Minh Ngoc
2017-05-01
Extended solutions are derived, on the basis of micromechanical methods, for the effective elastic moduli of porous media containing stiff pores and both open and closed cracks. Analytical formulas for the overall bulk and shear moduli are obtained as functions of the elastic moduli of the solid skeleton, the porosity, and the densities of the open and closed crack families. We show that the obtained results are extensions of the classical, widely used Walsh (JGR, 1965) and Budiansky-O'Connell (JGR, 1974) solutions. Parametric sensitivity analysis clarifies the impact of the model parameters on the effective elastic properties. An inverse analysis, using sonic and density data, is considered to quantify the density of both open and closed cracks. It is observed that the density of closed cracks depends strongly on the stress condition, while the dependence of open cracks on the confining stress is negligible.
NASA Technical Reports Server (NTRS)
1984-01-01
The atmospheric backscatter coefficient, beta, measured with an airborne CO Laser Doppler Velocimeter (LDV) system operating in a continuous wave, focused mode is discussed. The Single Particle Mode (SPM) algorithm was developed from concept through analysis of an extensive amount of data obtained with the system on board a NASA aircraft. The SPM algorithm is intended to be employed in situations where one particle at a time appears in the sensitive volume of the LDV. In addition to giving the backscatter coefficient, the SPM algorithm also produces as intermediate results the aerosol density and the aerosol backscatter cross section distribution. A second method, which measures only the atmospheric backscatter coefficient, is called the Volume Mode (VM) and was simultaneously employed. The results of these two methods differed by slightly less than an order of magnitude. The measurement uncertainties or other errors in the results of the two methods are examined.
Dynamic properties of ceramic materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grady, D.E.
1995-02-01
The present study offers new data and analysis on the transient shock strength and equation-of-state properties of ceramics. Various dynamic data on nine high strength ceramics are provided, with wave profile measurements, through velocity interferometry techniques, the principal observable. Compressive failure in the shock wave front, with emphasis on brittle versus ductile mechanisms of deformation, is examined in some detail. Extensive spall strength data are provided and related to the theoretical spall strength, and to energy-based theories of the spall process. Failure waves, as a mechanism of deformation in the transient shock process, are examined. Strength and equation-of-state analysis of shock data on silicon carbide, boron carbide, tungsten carbide, silicon dioxide and aluminum nitride is presented, with particular emphasis on phase transition properties for the latter two. Wave profile measurements on selected ceramics are investigated for evidence of rate sensitive elastic precursor decay in the shock front failure process.
Wang, Yongming; Lin, Xiuyun; Dong, Bo; Wang, Yingdian; Liu, Bao
2004-01-01
RAPD (randomly amplified polymorphic DNA) and ISSR (inter-simple sequence repeat) fingerprinting on HpaII/MspI-digested genomic DNA of nine elite japonica rice cultivars implies inter-cultivar DNA methylation polymorphism. Using both DNA fragments isolated from RAPD or ISSR gels and selected low-copy sequences as probes, methylation-sensitive Southern blot analysis confirms the existence of extensive DNA methylation polymorphism in both genes and DNA repeats among the rice cultivars. The cultivar-specific methylation patterns are stably maintained, and can be used as reliable molecular markers. Transcriptional analysis of four selected sequences (RdRP, AC9, HSP90 and MMR) on leaves and roots from normal and 5-azacytidine-treated seedlings of three representative cultivars shows an association between the transcriptional activity of one of the genes, the mismatch repair (MMR) gene, and its CG methylation patterns.
Developing Privacy Solutions for Sharing and Analyzing Healthcare Data
Motiwalla, Luvai; Li, Xiao-Bai
2013-01-01
The extensive use of electronic health data has increased privacy concerns. While most healthcare organizations are conscientious in protecting their data in their databases, very few organizations take enough precautions to protect data that is shared with third party organizations. Recently the regulatory environment has tightened the laws to enforce privacy protection. The goal of this research is to explore the application of data masking solutions for protecting patient privacy when data is shared with external organizations for research, analysis and other similar purposes. Specifically, this research project develops a system that protects data without removing sensitive attributes. Our application allows high quality data analysis with the masked data. Dataset-level properties and statistics remain approximately the same after data masking; however, individual record-level values are altered to prevent privacy disclosure. A pilot evaluation study on large real-world healthcare data shows the effectiveness of our solution in privacy protection. PMID:24285983
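One simple instance of such masking, multiplicative zero-mean noise that perturbs each record while approximately preserving dataset-level aggregates, is sketched below; this is a generic illustration, not the specific masking algorithm developed in the paper.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)

# Toy patient table; 'charges' is the sensitive numeric attribute to mask.
df = pd.DataFrame({"age": rng.integers(20, 90, 1000),
                   "charges": rng.lognormal(8, 0.6, 1000)})

def mask_noise(col, rel_sd=0.15):
    """Multiplicative noise with mean 1: individual record values change,
    while dataset-level aggregates are approximately preserved."""
    return col * rng.normal(1.0, rel_sd, len(col))

masked = df.assign(charges=mask_noise(df["charges"]))
print("mean before/after:", round(df.charges.mean()), round(masked.charges.mean()))
print(f"max record-level change: {np.abs(masked.charges - df.charges).max():,.0f}")
```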
Spectroscopic vector analysis for fast pattern quality monitoring
NASA Astrophysics Data System (ADS)
Sohn, Younghoon; Ryu, Sungyoon; Lee, Chihoon; Yang, Yusin
2018-03-01
In the semiconductor industry, fast and effective measurement of pattern variation has been a key challenge for assuring mass-production quality. Pattern measurement techniques such as conventional CD-SEM or optical CD have been used extensively, but they are increasingly limited in measurement throughput and in the time spent on modeling. In this paper we propose a time-effective pattern monitoring method based on a direct spectrum-based approach. In this technique, a wavelength band sensitive to a specific pattern change is selected from the spectroscopic ellipsometry signal scattered by the pattern to be measured, and the amplitude and phase variation in that band are analyzed as a measurement index of the pattern change. This pattern-change measurement technique was applied to several process steps and its applicability verified. Owing to its fast and simple analysis, the method can be adapted to massive process-variation monitoring, maximizing measurement throughput.
Wáng, Yì Xiáng J; Chung, Myung Jin; Skrahin, Aliaksandr; Rosenthal, Alex; Gabrielian, Andrei; Tartakovsky, Michael
2018-03-01
Although confirmatory diagnosis of pulmonary drug-sensitive tuberculosis (DS-TB) and multidrug-resistant tuberculosis (MDR-TB) is determined by microbiological testing, early suspicion of MDR-TB from chest imaging is highly desirable in order to guide the diagnostic process. We aimed to analyze the currently available literature on radiological signs associated with pulmonary MDR-TB. A literature search was performed using PubMed on January 29, 2018. The search word combination was "((extensive* drug resistant tuberculosis) OR (multidrug-resistant tuberculosis)) AND (CT or radiograph or imaging or X-ray or computed tomography)". We analyzed English-language articles that reported sufficient information on radiological signs of DS-TB vs. MDR-TB. Seventeen articles were found to be sufficiently relevant and were included for analysis. The reported pulmonary MDR-TB cases were grouped into four categories: (I) previously treated (or 'secondary', or 'acquired') MDR-TB in HIV-negative (-) adults; (II) new (or 'primary') MDR-TB in HIV(-) adults; (III) MDR-TB in HIV-positive (+) adults; and (IV) MDR-TB in child patients. The common radiological findings of pulmonary MDR-TB included centrilobular small nodules, branching linear and nodular opacities (tree-in-bud sign), patchy or lobular areas of consolidation, cavitation, and bronchiectasis. While overall MDR-TB cases tended to have more extensive disease that was more likely to be bilateral, to have pleural involvement, to have bronchiectasis, and to have lung volume loss, these signs alone were not sufficient for a differential diagnosis of MDR-TB. The current literature suggests that the radiological sign offering good specificity for pulmonary MDR-TB diagnosis, though perhaps at the cost of low sensitivity, is thick-walled multiple cavities, particularly if the cavity number is ≥3. For adult HIV(-) patients, new MDR-TB appears to show a prevalence of cavity lesions similar to that of previously treated MDR-TB, estimated to be around 70%. Thick-walled multiple cavity lesions present the most promising radiological sign for MDR-TB diagnosis. Future studies should quantify cavity lesion characteristics in detail.
Expanding the bovine milk proteome through extensive fractionation.
Nissen, Asger; Bendixen, Emøke; Ingvartsen, Klaus Lønne; Røntved, Christine Maria
2013-01-01
Bovine milk is an agricultural product of tremendous value worldwide. It contains proteins, fat, lactose, vitamins, and minerals. It provides nutrition and immunological protection (e.g., in the gastrointestinal tract) to the newborn and young calf. It also forms an important part of human nutrition. The repertoire of proteins in milk (i.e., its proteome) is vast and complex. The milk proteome can be described in detail by mass spectrometry-based proteomics. However, the high concentration of dominating proteins in milk reduces mass spectrometry detection sensitivity and limits detection of low abundant proteins. Further, the general health and udder health of the dairy cows delivering the milk may influence the composition of the milk proteome. To gain a more exhaustive and true picture of the milk proteome, we performed an extensive preanalysis fractionation of raw composite milk collected from documented healthy cows in early lactation. Four simple and industrially applicable techniques exploring the physical and chemical properties of milk, including acidification, filtration, and centrifugation, were used for separation of the proteins. This resulted in 5 different fractions, whose content of proteins were compared with the proteins of nonfractionated milk using 2-dimensional liquid chromatography tandem mass spectrometry analysis. To validate the proteome analysis, spectral counts and ELISA were performed on 7 proteins using the ELISA for estimation of the detection sensitivity limit of the 2-dimensional liquid chromatography tandem mass spectrometry analysis. Each fractionation technique resulted in identification of a unique subset of proteins. However, high-speed centrifugation of milk to whey was by far the best method to achieve high and repeatable proteome coverage. The total number of milk proteins initially detected in nonfractionated milk and the fractions were 635 in 2 replicates. Removal of dominant proteins and filtering for redundancy across the different fractions reduced the number to 376 unique proteins in 2 replicates. In addition, 366 proteins were detected by this process in 1 replicate. Hence, by applying different fractionation techniques to milk, we expanded the milk proteome. The milk proteome map may serve as a reference for scientists working in the dairy sector. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Underwood, Mair
2014-04-01
It is increasingly recognized that community attitudes impact on the research trajectory, entry, and reception of new biotechnologies. Yet biogerontologists have generally been dismissive of public concerns about life extension. There is some evidence that biogerontological research agendas have not been communicated effectively, with studies finding that most community members have little or no knowledge of life extension research. In the absence of knowledge, community members' attitudes may well be shaped by issues raised in popular portrayals of life extension (e.g., in movies). To investigate how popular portrayals of life extension may influence community attitudes, I conducted an analysis of 19 films depicting human life extension across different genres. I focussed on how the pursuit of life extension was depicted, how life extension was achieved, the levels of interest in life extension shown by characters in the films, and the experiences of extended life depicted both at an individual and societal level. This paper compares the results of this analysis with the literature on community attitudes to life extension and makes recommendations about the issues in which the public may require reassurance if they are to support and accept life extension technologies.
Jain, Avani; Srivastava, Madhur Kumar; Pawaskar, Alok Suresh; Shelley, Simon; Elangovan, Indirani; Jain, Hasmukh; Pandey, Somnath; Kalal, Shilpa; Amalachandran, Jaykanth
2015-01-01
Background: To evaluate the advantages of contrast-enhanced F-18-fluorodeoxyglucose (FDG) positron emission tomography-contrast-enhanced computed tomography (PET-CECT) when used as an initial imaging modality in patients presenting with metastatic malignancy of undefined primary origin (MUO). Materials and Methods: A total of 243 patients with fine needle aspiration cytology/biopsy proven MUO were included in this prospective study. Patients who were thoroughly evaluated for a primary, or whose primary tumor was detected by any other investigation, were excluded from the analysis. In total, 163 patients with a pathological diagnosis of malignancy but no apparent site of the primary tumor were finally selected for analysis. The site of probable primary malignancy suggested by PET-CECT was confirmed by biopsy/follow-up. Results: PET-CECT suggested a probable site of primary in 128/163 (78.52%) patients. In 30 of the 35 remaining patients, the primary tumor was not detected even after extensive work-up. In 5 patients, where PET-CECT was negative, the primary was found on further extensive investigations or follow-up. The sensitivity, specificity, positive predictive value and negative predictive value of the study were 95.76%, 66.67%, 88.28% and 85.71%, respectively. Conclusions: F-18 FDG PET-CECT aptly serves the purpose of an initial imaging modality owing to its high sensitivity and high negative and positive predictive values. PET-CECT not only surveys the whole body for the primary malignancy but also stages the disease accurately. Use of contrast improves the diagnostic utility of the modality as well as helping to stage the primary tumor. Although the benefits of using PET-CECT as the initial diagnostic modality are apparent from this study, there is a need for a larger study comparing conventional methods for diagnosing the primary in patients with MUO versus PET-CECT. PMID:26170563
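The reported accuracy figures follow from the standard 2×2 confusion-matrix definitions; the sketch below uses counts back-calculated to be consistent with the abstract (128 PET-positive and 35 PET-negative patients), though the exact cell counts are our reconstruction, not values taken from the paper.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard 2x2 confusion-matrix metrics."""
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "PPV": tp / (tp + fp),
            "NPV": tn / (tn + fn)}

# Reconstructed counts: 128 PET-positive = TP + FP; 35 PET-negative = TN + FN.
print(diagnostic_metrics(tp=113, fp=15, tn=30, fn=5))
# -> sensitivity 0.9576, specificity 0.6667, PPV 0.8828, NPV 0.8571
```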
Pleiotropy Analysis of Quantitative Traits at Gene Level by Multivariate Functional Linear Models
Wang, Yifan; Liu, Aiyi; Mills, James L.; Boehnke, Michael; Wilson, Alexander F.; Bailey-Wilson, Joan E.; Xiong, Momiao; Wu, Colin O.; Fan, Ruzong
2015-01-01
In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution tests based on Pillai–Bartlett trace, Hotelling–Lawley trace, and Wilks’s Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models that in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case. PMID:25809955
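In practice, a joint multivariate test of several traits against genetic predictors can be run with standard tooling; the sketch below uses statsmodels' MANOVA, which reports Pillai's trace, Wilks's lambda, and the Hotelling-Lawley trace with approximate F statistics, on simulated data (it is not the authors' functional linear model).

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(6)
n = 500
g1 = rng.binomial(2, 0.3, n)      # genotype dosages for two variants
g2 = rng.binomial(2, 0.2, n)
shared = 0.3 * g1                  # pleiotropic effect of g1 on both traits
df = pd.DataFrame({"g1": g1, "g2": g2,
                   "trait1": shared + rng.normal(size=n),
                   "trait2": 0.5 * shared + rng.normal(size=n)})

# Joint test of both traits against both variants; the output includes
# Pillai's trace, Wilks' lambda, and Hotelling-Lawley trace per term.
fit = MANOVA.from_formula("trait1 + trait2 ~ g1 + g2", data=df)
print(fit.mv_test())
```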
Wavelet-Bayesian inference of cosmic strings embedded in the cosmic microwave background
NASA Astrophysics Data System (ADS)
McEwen, J. D.; Feeney, S. M.; Peiris, H. V.; Wiaux, Y.; Ringeval, C.; Bouchet, F. R.
2017-12-01
Cosmic strings are a well-motivated extension to the standard cosmological model and could induce a subdominant component in the anisotropies of the cosmic microwave background (CMB), in addition to the standard inflationary component. The detection of strings, while observationally challenging, would provide a direct probe of physics at very high-energy scales. We develop a framework for cosmic string inference from observations of the CMB made over the celestial sphere, performing a Bayesian analysis in wavelet space where the string-induced CMB component has distinct statistical properties to the standard inflationary component. Our wavelet-Bayesian framework provides a principled approach to compute the posterior distribution of the string tension Gμ and the Bayesian evidence ratio comparing the string model to the standard inflationary model. Furthermore, we present a technique to recover an estimate of any string-induced CMB map embedded in observational data. Using Planck-like simulations, we demonstrate the application of our framework and evaluate its performance. The method is sensitive to Gμ ∼ 5 × 10^-7 for Nambu-Goto string simulations that include an integrated Sachs-Wolfe contribution only and do not include any recombination effects, before any parameters of the analysis are optimized. The sensitivity of the method compares favourably with other techniques applied to the same simulations.
Wammes, Joost J G; Siregar, Adiatma Y; Hidayat, Teddy; Raya, Reynie P; van Crevel, Reinout; van der Ven, André J; Baltussen, Rob
2012-09-01
Indonesia faces an HIV epidemic that is in rapid transition. Injecting drug users (IDUs) are among the most heavily affected risk populations, with estimated prevalence of HIV reaching 50% or more in most parts of the country. Although Indonesia started opening methadone clinics in 2003, coverage remains low. We used the Asian Epidemic Model and Resource Needs Model to evaluate the long-term population-level preventive impact of expanding Methadone Maintenance Therapy (MMT) in West Java (43 million people). We compared intervention costs and the number of incident HIV cases in the intervention scenario with current practice to establish the cost per infection averted by expanding MMT. An extensive sensitivity analysis was performed on costs and epidemiological input, as well as on the cost-effectiveness calculation itself. Our analysis shows that expanding MMT from 5% coverage now to 40% coverage in 2019 would avert approximately 2400 HIV infections, at a cost of approximately US$7000 per HIV infection averted. Sensitivity analyses demonstrate that the use of alternative assumptions does not change the study conclusions. Our analyses suggest that expanding MMT is cost-effective, and support government policies to make MMT widely available as an integrated component of HIV/AIDS control in West Java. Copyright © 2012 Elsevier B.V. All rights reserved.
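A back-of-envelope check on the headline figures, illustrative only; the actual estimates come from the Asian Epidemic Model and Resource Needs Model:

```python
# Headline figures from the abstract.
infections_averted = 2400
cost_per_averted = 7000            # US$ per HIV infection averted
programme_cost = infections_averted * cost_per_averted
print(f"implied incremental programme cost ≈ US${programme_cost:,}")

# One-way sensitivity: vary infections averted +/-30% at fixed total cost.
for f in (0.7, 1.0, 1.3):
    print(f"averted = {infections_averted * f:,.0f} -> "
          f"US${programme_cost / (infections_averted * f):,.0f} per infection")
```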
Rennert, Hanna; Eng, Kenneth; Zhang, Tuo; Tan, Adrian; Xiang, Jenny; Romanel, Alessandro; Kim, Robert; Tam, Wayne; Liu, Yen-Chun; Bhinder, Bhavneet; Cyrta, Joanna; Beltran, Himisha; Robinson, Brian; Mosquera, Juan Miguel; Fernandes, Helen; Demichelis, Francesca; Sboner, Andrea; Kluk, Michael; Rubin, Mark A; Elemento, Olivier
2016-01-01
We describe Exome Cancer Test v1.0 (EXaCT-1), the first New York State-Department of Health-approved whole-exome sequencing (WES)-based test for precision cancer care. EXaCT-1 uses HaloPlex (Agilent) target enrichment followed by next-generation sequencing (Illumina) of tumour and matched constitutional control DNA. We present a detailed clinical development and validation pipeline suitable for simultaneous detection of somatic point/indel mutations and copy-number alterations (CNAs). A computational framework for data analysis, reporting and sign-out is also presented. For the validation, we tested EXaCT-1 on 57 tumours covering five distinct clinically relevant mutations. Results demonstrated elevated and uniform coverage compatible with clinical testing as well as complete concordance in variant quality metrics between formalin-fixed paraffin embedded and fresh-frozen tumours. Extensive sensitivity studies identified limits of detection threshold for point/indel mutations and CNAs. Prospective analysis of 337 cancer cases revealed mutations in clinically relevant genes in 82% of tumours, demonstrating that EXaCT-1 is an accurate and sensitive method for identifying actionable mutations, with reasonable costs and time, greatly expanding its utility for advanced cancer care. PMID:28781886
Modic Type 1 Changes: Detection Performance of Fat-Suppressed Fluid-Sensitive MRI Sequences.
Finkenstaedt, Tim; Del Grande, Filippo; Bolog, Nicolae; Ulrich, Nils; Tok, Sina; Kolokythas, Orpheus; Steurer, Johann; Andreisek, Gustav; Winklhofer, Sebastian
2018-02-01
To assess the performance of fat-suppressed fluid-sensitive MRI sequences compared to T1-weighted (T1w)/T2w sequences for the detection of Modic type 1 end-plate changes on lumbar spine MRI. Sagittal T1w, T2w, and fat-suppressed fluid-sensitive MRI images of 100 consecutive patients (consequently 500 vertebral segments; 52 female, mean age 74 ± 7.4 years; 48 male, mean age 71 ± 6.3 years) were retrospectively evaluated. We recorded the presence (yes/no) and extension (i.e., Likert scales of height, volume, and end-plate extension) of Modic I changes in T1w/T2w sequences and compared the results to fat-suppressed fluid-sensitive sequences (McNemar/Wilcoxon signed-rank test). Fat-suppressed fluid-sensitive sequences revealed significantly more Modic I changes compared to T1w/T2w sequences (156 vs. 93 segments, respectively; p < 0.001). The extension of Modic I changes in fat-suppressed fluid-sensitive sequences was significantly larger compared to T1w/T2w sequences (height: 2.53 ± 0.82 vs. 2.27 ± 0.79; volume: 2.35 ± 0.76 vs. 2.1 ± 0.65; end-plate: 2.46 ± 0.76 vs. 2.19 ± 0.81) (p < 0.05). Modic I changes that were only visible in fat-suppressed fluid-sensitive sequences but not in T1w/T2w sequences were significantly smaller compared to Modic I changes that were also visible in T1w/T2w sequences (p < 0.05). In conclusion, fat-suppressed fluid-sensitive MRI sequences revealed significantly more Modic I end-plate changes and demonstrated a greater extent of those changes compared to standard T1w/T2w imaging. Key points: When the Modic classification was defined in 1988, T2w sequences were heavily T2-weighted and thus virtually fat-suppressed. Nowadays, the bright fat signal in T2w images masks edema-like changes, so the conventional definition of Modic I changes is no longer fully applicable. Fat-suppressed fluid-sensitive MRI sequences revealed more Modic I changes and a greater extent of them. Citation: Finkenstaedt T, Del Grande F, Bolog N et al. Modic Type 1 Changes: Detection Performance of Fat-Suppressed Fluid-Sensitive MRI Sequences. Fortschr Röntgenstr 2018; 190: 152-160. © Georg Thieme Verlag KG Stuttgart · New York.
The Intelligent System of Cardiovascular Disease Diagnosis Based on Extension Data Mining
NASA Astrophysics Data System (ADS)
Sun, Baiqing; Li, Yange; Zhang, Lin
This thesis gives general definitions of the concepts of extension knowledge, extension data mining and the extension data mining theorem in high-dimensional space, builds the integrated IDSS system using rough set theory, an expert system and a neural network, and develops the relevant computer software. In diagnostic tests on the common diseases of myocardial infarction, angina pectoris and hypertension, the system's results were compared with physicians' diagnoses; the sensitivity, specificity and accuracy of diagnosis by the IDSS were all higher than those of the physicians. Used as an auxiliary aid, the system can improve physicians' diagnostic accuracy, which has clear value for lowering mortality and disability rates and raising survival rates; the system therefore has strong practical value and broad social benefits.
Richardson, Michael L; Petscavage, Jonelle M
2011-11-01
The sensitivity and specificity of magnetic resonance imaging (MRI) for diagnosis of meniscal tears has been studied extensively, with tears usually verified by surgery. However, surgically unverified cases are often not considered in these studies, leading to verification bias, which can falsely increase the sensitivity and decrease the specificity estimates. Our study suggests that such bias may be very common in the meniscal MRI literature, and illustrates techniques to detect and correct for such bias. PubMed was searched for articles estimating sensitivity and specificity of MRI for meniscal tears. These were assessed for verification bias, deemed potentially present if a study included any patients whose MRI findings were not surgically verified. Retrospective global sensitivity analysis (GSA) was performed when possible. Thirty-nine of the 314 studies retrieved from PubMed specifically dealt with meniscal tears. All 39 included unverified patients, and hence, potential verification bias. Only seven articles included sufficient information to perform GSA. Of these, one showed definite verification bias, two showed no bias, and four others showed bias within certain ranges of disease prevalence. Only 9 of 39 acknowledged the possibility of verification bias. Verification bias is underrecognized and potentially common in published estimates of the sensitivity and specificity of MRI for the diagnosis of meniscal tears. When possible, it should be avoided by proper study design. If unavoidable, it should be acknowledged. Investigators should tabulate unverified as well as verified data. Finally, verification bias should be estimated; if present, corrected estimates of sensitivity and specificity should be used. Our online web-based calculator makes this process relatively easy. Copyright © 2011 AUR. Published by Elsevier Inc. All rights reserved.
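One standard way to correct for verification bias of the kind discussed above is the Begg-Greenes (1983) adjustment, which re-estimates sensitivity and specificity assuming verification depends only on the test result. The sketch below is illustrative with hypothetical counts, not the authors' online calculator.

# Hedged sketch of the Begg-Greenes correction for verification bias.
def begg_greenes(a, b, c, d, n_pos, n_neg):
    """a, b: verified MRI+ with/without tear; c, d: verified MRI- with/without
    tear; n_pos, n_neg: ALL MRI+ / MRI- patients, verified or not."""
    p_pos = n_pos / (n_pos + n_neg)          # P(MRI+)
    p_d_pos = a / (a + b)                    # P(tear | MRI+), from verified
    p_d_neg = c / (c + d)                    # P(tear | MRI-), from verified
    se = p_d_pos * p_pos / (p_d_pos * p_pos + p_d_neg * (1 - p_pos))
    sp = (1 - p_d_neg) * (1 - p_pos) / (
        (1 - p_d_neg) * (1 - p_pos) + (1 - p_d_pos) * p_pos)
    return se, sp

# hypothetical study: many MRI-negative knees never go to surgery
se, sp = begg_greenes(a=90, b=10, c=5, d=20, n_pos=120, n_neg=200)
print(f"corrected sensitivity {se:.2f}, corrected specificity {sp:.2f}")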
Single mode variable-sensitivity fiber optic sensors
NASA Technical Reports Server (NTRS)
Murphy, K. A.; Fogg, B. R.; Gunther, M. F.; Claus, R. O.
1992-01-01
We review spatially-weighted optical fiber sensors that filter specific vibration modes from one-dimensional beams placed in clamped-free and clamped-clamped configurations. The sensitivity of the sensor is varied along the length of the fiber by tapering circular-core, dual-mode optical fibers. Selective vibration mode suppression on the order of 10 dB was obtained. We describe experimental results and propose future extensions to single-mode sensor applications.
Kim, Bum Soo; Kim, Tae-Hwan; Kwon, Tae Gyun
2012-01-01
Purpose Several studies have demonstrated the superiority of endorectal coil magnetic resonance imaging (MRI) over pelvic phased-array coil MRI at 1.5 Tesla for local staging of prostate cancer. However, few have studied which evaluation is more accurate at 3 Tesla MRI. In this study, we compared the accuracy of local staging of prostate cancer using pelvic phased-array coil or endorectal coil MRI at 3 Tesla. Materials and Methods Between January 2005 and May 2010, 151 patients underwent radical prostatectomy. All patients were evaluated with either pelvic phased-array coil or endorectal coil prostate MRI prior to surgery (63 endorectal coil and 88 pelvic phased-array coil). Tumor stage based on MRI was compared with pathologic stage. We calculated the specificity, sensitivity and accuracy of each group in the evaluation of extracapsular extension and seminal vesicle invasion. Results Both endorectal coil and pelvic phased-array coil MRI achieved high specificity, low sensitivity and moderate accuracy for the detection of extracapsular extension and seminal vesicle invasion. There were no statistically significant differences in specificity, sensitivity and accuracy between the two groups. Conclusion Overall staging accuracy, sensitivity and specificity were not significantly different between endorectal coil and pelvic phased-array coil MRI. PMID:22476999
Che, Jun; Smith, Stephanie; Kim, Yoo Jung; Shim, Eun Yong; Myung, Kyungjae; Lee, Sang Eun
2015-01-01
Break-induced replication (BIR) has been implicated in restoring eroded telomeres and collapsed replication forks via single-ended invasion and extensive DNA synthesis on the recipient chromosome. Unlike other recombination subtypes, DNA synthesis in BIR likely relies heavily on mechanisms enabling efficient fork progression such as chromatin modification. Herein we report that deletion of HST3 and HST4, two redundant de-acetylases of histone H3 Lysine 56 (H3K56), inhibits BIR, sensitizes checkpoint deficient cells to deoxyribonucleotide triphosphate pool depletion, and elevates translocation-type gross chromosomal rearrangements (GCR). The basis for deficiency in BIR and gene conversion with long gap synthesis in hst3Δ hst4Δ cells can be traced to a defect in extensive DNA synthesis. Distinct from other cellular defects associated with deletion of HST3 and HST4 including thermo-sensitivity and elevated spontaneous mutagenesis, the BIR defect in hst3Δ hst4Δ cannot be offset by the deletion of RAD17 or MMS22, but rather by the loss of RTT109 or ASF1, or in combination with the H3K56R mutation, which also restores tolerance to replication stress in mrc1 mutants. Our studies suggest that acetylation of H3K56 limits extensive repair synthesis and interferes with efficient fork progression in BIR. PMID:25705897
Systematic review with meta-analysis: proximal disease extension in limited ulcerative colitis.
Roda, G; Narula, N; Pinotti, R; Skamnelos, A; Katsanos, K H; Ungaro, R; Burisch, J; Torres, J; Colombel, J-F
2017-06-01
Disease extent in ulcerative colitis is one of the major factors determining prognosis over the long term. Disease extent is dynamic, and a proportion of patients presenting with limited disease progress to more extensive forms of disease over time. Our aim was to perform a systematic review and meta-analysis of epidemiological studies reporting on extension of ulcerative colitis (UC) to determine the frequency of disease extension in patients with limited UC at diagnosis. We performed a systematic literature search to identify studies on disease extension of UC and predictors of disease progression. Overall, 41 studies were eligible for systematic review but only 30 for meta-analysis. The overall pooled frequency of UC extension was 22.8%, with colonic extension being 17.8% at 5 years and 31% at 10 years. Extension was 17.8% (95% CI 11.2-27.3) from E1 to E3, 27.5% (95% CI 7.6-45.6) from E2 to E3 and 20.8% (95% CI 11.4-26.8) from E1 to E2. The rate of extension was significantly higher in patients younger than 18 years (29.2%, CI 6.4-71.3) compared to older patients (20.2%, CI 13.0-30.1) (P<.0001). Risk of extension was significantly higher in patients from North America (37.8%) than from Europe (19.6%) (P<.0001). In this meta-analysis, approximately one quarter of patients with limited UC extend over time, with most extension occurring during the first 10 years. The rate of extension depends on age at diagnosis and geographic origin. Predicting those at high risk of disease extension from diagnosis could lead to personalised therapeutic strategies. © 2017 John Wiley & Sons Ltd.
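A sketch of the kind of pooling behind such frequency estimates: a DerSimonian-Laird random-effects meta-analysis of logit-transformed proportions. The study counts below are invented, not the review's data.

# Random-effects pooling of extension frequencies across toy cohorts.
import numpy as np

events = np.array([12, 30, 22, 9])        # patients whose disease extended
n = np.array([60, 140, 90, 55])           # cohort sizes
p = events / n
y = np.log(p / (1 - p))                   # logit-transformed proportions
v = 1 / events + 1 / (n - events)         # approximate within-study variances

w = 1 / v                                 # fixed-effect weights
ybar = (w * y).sum() / w.sum()
Q = (w * (y - ybar) ** 2).sum()           # Cochran's heterogeneity statistic
tau2 = max(0.0, (Q - (len(y) - 1)) / (w.sum() - (w ** 2).sum() / w.sum()))
wr = 1 / (v + tau2)                       # random-effects weights
pooled_logit = (wr * y).sum() / wr.sum()
pooled = 1 / (1 + np.exp(-pooled_logit))  # back-transform to a proportion
print(f"pooled extension frequency: {pooled:.1%}")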
Gao, Wenyue; Muzyka, Kateryna; Ma, Xiangui; Lou, Baohua; Xu, Guobao
2018-04-28
Developing low-cost and simple electrochemical systems is becoming increasingly important but remains challenging for multiplex experiments. Here we report a single-electrode electrochemical system (SEES) that uses only one electrode, not only for a single experiment but also for multiplex experiments, based on a resistance-induced potential difference. SEESs for a single experiment and for multiplex experiments are fabricated by attaching a self-adhesive label with one hole or multiple holes, respectively, onto an ITO electrode. This enables multiplex electrochemiluminescence analysis with high sensitivity at a very low safe voltage using a smartphone as a detector. For multiplex analysis, the SEES using a single electrode is much simpler, cheaper and more user-friendly than conventional electrochemical systems and bipolar electrochemical systems using electrode arrays. Moreover, SEESs are free from the electrochemiluminescent background problem caused by driving electrodes in bipolar electrochemical systems. Since numerous electrodes and cover materials can be used to fabricate SEESs readily, and electrochemistry is used extensively, SEESs are very promising for broad applications, such as drug screening and high-throughput analysis.
Siciliani, Luigi
2006-01-01
Policy makers are increasingly interested in developing performance indicators that measure hospital efficiency. These indicators may give the purchasers of health services an additional regulatory tool to contain health expenditure. Using panel data, this study compares different parametric (econometric) and non-parametric (linear programming) techniques for the measurement of a hospital's technical efficiency. This comparison was made using a sample of 17 Italian hospitals in the years 1996-99. The highest correlations in efficiency scores are found between the non-parametric data envelopment analysis under the constant returns to scale assumption (DEA-CRS) and several parametric models. Correlation reduces markedly when using more flexible non-parametric specifications such as data envelopment analysis under the variable returns to scale assumption (DEA-VRS) and the free disposal hull (FDH) model. Correlation also generally reduces when moving from one-output to two-output specifications. This analysis suggests that there is scope for developing performance indicators at the hospital level using panel data, but it is important that extensive sensitivity analysis is carried out if purchasers wish to make use of these indicators in practice.
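To make the non-parametric side concrete, here is a hedged sketch of an input-oriented, constant-returns-to-scale DEA model (the CCR formulation) solved with scipy's linprog; the hospital data (inputs: beds, staff; output: discharges) are hypothetical and the study's exact specifications are not reproduced.

# Input-oriented CCR DEA: min theta s.t. sum(lam*x) <= theta*x0, sum(lam*y) >= y0.
import numpy as np
from scipy.optimize import linprog

X = np.array([[100, 250], [80, 200], [120, 310], [90, 180]], float).T  # inputs x DMUs
Y = np.array([[5000], [4500], [5200], [4700]], float).T                # outputs x DMUs

def dea_ccr(X, Y, j0):
    m, n = X.shape          # m inputs, n hospitals (DMUs)
    s = Y.shape[0]          # s outputs
    c = np.r_[1.0, np.zeros(n)]                  # minimise theta
    A_in = np.c_[-X[:, [j0]], X]                 # sum(lam*x) - theta*x0 <= 0
    A_out = np.c_[np.zeros((s, 1)), -Y]          # -sum(lam*y) <= -y0
    A = np.vstack([A_in, A_out])
    b = np.r_[np.zeros(m), -Y[:, j0]]
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (n + 1))
    return res.x[0]                              # efficiency score in (0, 1]

for j in range(X.shape[1]):
    print(f"hospital {j}: efficiency {dea_ccr(X, Y, j):.3f}")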
Sample preparation techniques for the determination of trace residues and contaminants in foods.
Ridgway, Kathy; Lalljie, Sam P D; Smith, Roger M
2007-06-15
The determination of trace residues and contaminants in complex matrices, such as food, often requires extensive sample extraction and preparation prior to instrumental analysis. Sample preparation is often the bottleneck in analysis and there is a need to minimise the number of steps to reduce both time and sources of error. There is also a move towards more environmentally friendly techniques, which use less solvent and smaller sample sizes. Smaller sample size becomes important when dealing with real life problems, such as consumer complaints and alleged chemical contamination. Optimal sample preparation can reduce analysis time, sources of error, enhance sensitivity and enable unequivocal identification, confirmation and quantification. This review considers all aspects of sample preparation, covering general extraction techniques, such as Soxhlet and pressurised liquid extraction, microextraction techniques such as liquid phase microextraction (LPME) and more selective techniques, such as solid phase extraction (SPE), solid phase microextraction (SPME) and stir bar sorptive extraction (SBSE). The applicability of each technique in food analysis, particularly for the determination of trace organic contaminants in foods is discussed.
Renjith, V R; Madhu, G; Nayagam, V Lakshmana Gomathi; Bhasi, A B
2010-11-15
The hazards associated with major accident hazard (MAH) industries are fire, explosion and toxic gas releases. Of these, toxic gas release is the worst as it has the potential to cause extensive fatalities. Qualitative and quantitative hazard analyses are essential for the identification and quantification of these hazards related to chemical industries. Fault tree analysis (FTA) is an established technique in hazard identification. This technique has the advantage of being both qualitative and quantitative, if the probabilities and frequencies of the basic events are known. This paper outlines the estimation of the probability of release of chlorine from storage and filling facility of chlor-alkali industry using FTA. An attempt has also been made to arrive at the probability of chlorine release using expert elicitation and proven fuzzy logic technique for Indian conditions. Sensitivity analysis has been done to evaluate the percentage contribution of each basic event that could lead to chlorine release. Two-dimensional fuzzy fault tree analysis (TDFFTA) has been proposed for balancing the hesitation factor involved in expert elicitation. Copyright © 2010 Elsevier B.V. All rights reserved.
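A minimal illustration of the quantitative core of FTA under the usual independence assumption: AND gates multiply basic-event probabilities, while OR gates combine them as 1 - prod(1 - p). The toy tree and probabilities below are hypothetical, not the paper's chlorine-release tree.

# Tiny fault tree: top-event probability from basic events.
from functools import reduce

def AND(*p):
    return reduce(lambda a, b: a * b, p)

def OR(*p):
    return 1 - reduce(lambda a, b: a * (1 - b), p, 1.0)

p_valve_fail = 1e-3
p_gasket_fail = 5e-4
p_operator_error = 2e-3
p_alarm_fail = 1e-2

# release occurs if a leak path opens AND the alarm fails to stop it
p_leak = OR(p_valve_fail, p_gasket_fail, p_operator_error)
p_release = AND(p_leak, p_alarm_fail)
print(f"P(top event: toxic release) = {p_release:.2e}")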
NASA Astrophysics Data System (ADS)
Ballantyne, A. P.; Miller, J. B.; Bowling, D. R.; Tans, P. P.; Baker, I. T.
2013-12-01
The global cycles of water and carbon are inextricably linked through photosynthesis. This link is largely governed by stomatal conductance, which regulates water loss to the atmosphere and carbon gain to the biosphere. Although extensive research has focused on the response of stomatal conductance to increased atmospheric CO2, much less research has focused on its response to concomitant climate change. Here we make use of intensive and extensive measurements of C isotopes in source CO2 to the atmosphere (del-bio) to make inferences about stomatal response to climatic factors at a single forest site and across a network of global observation sites. Based on intensive observations at the Niwot Ridge Ameriflux site, we find that del-bio is an excellent physical proxy of stomatal response during the growing season and that this response is highly sensitive to atmospheric water vapor pressure deficit (VPD). We use these intensive single-site observations to inform our analysis of the global observation network, focusing on the growing season across an array of terrestrial sites. We find that stomatal response across most of these terrestrial sites is also highly sensitive to VPD. Lastly, we simulate the effect of future climate change on stomatal response and find that future increases in VPD may limit the biosphere's capacity to assimilate future CO2 emissions. These results have direct implications for the benchmarking of Earth System Models, as stomatal conductance in many of these models does not vary as a function of VPD.
Characterizing the Sensitivity of Groundwater Storage to Climate variation in the Indus Basin
NASA Astrophysics Data System (ADS)
Huang, L.; Sabo, J. L.
2017-12-01
The Indus Basin contains an extensive groundwater aquifer facing the challenge of effective management of limited water resources. Groundwater storage is one of the most important variables of the water balance, yet its sensitivity to climate change has rarely been explored. To better estimate present and future groundwater storage and its sensitivity to climate change in the Indus Basin, we analyzed groundwater recharge/discharge and their historical evolution in this basin. Several methods are applied to characterize the aquifer system, including water-level change and storativity estimates, gravity estimates (GRACE), flow modelling (MODFLOW), water budget analysis and extrapolation. In addition, the socioeconomic and engineering aspects are represented in the hydrological system through changes in the temporal and spatial distributions of recharge and discharge (e.g., land use, crop structure, water allocation). Our results demonstrate that the direct impacts of climate change will result in unevenly distributed but increasing groundwater storage in the short term through groundwater recharge. In contrast, long-term groundwater storage will decrease as a result of combined indirect and direct impacts of climate change (e.g., recharge/discharge and human activities). The sensitivity of groundwater storage to climate variation is characterized by topography, aquifer specifics and land use. Furthermore, by comparing possible outcomes of different human-intervention scenarios, our study reveals that human activities play an important role in affecting the sensitivity of groundwater storage to climate variation. Overall, this study presents the feasibility and value of using integrated hydrological methods to support sustainable water resource management under climate change.
Rieucau, Guillaume; Burke, Darren
2017-01-01
Identifying perceptual thresholds is critical for understanding the mechanisms that underlie signal evolution. Using computer-animated stimuli, we examined visual speed sensitivity in the Jacky dragon Amphibolurus muricatus, a species that makes extensive use of rapid motor patterns in social communication. First, focal lizards were tested in discrimination trials using random-dot kinematograms displaying combinations of speed, coherence, and direction. Second, we measured subject lizards' ability to predict the appearance of a secondary reinforcer (1 of 3 different computer-generated animations of invertebrates: cricket, spider, and mite) based on the direction of movement of a field of drifting dots, by following a set of behavioural responses (e.g., orienting response, latency to respond) to our virtual stimuli. We found an effect of both speed and coherence, as well as an interaction between these 2 factors, on the perception of moving stimuli. Overall, our results showed that Jacky dragons have acute sensitivity to high speeds. We then employed an optic flow analysis to match the performance to ecologically relevant motion. Our results suggest that the Jacky dragon visual system may have been shaped to detect fast motion. This pre-existing sensitivity may have constrained the evolution of conspecific displays. In contrast, Jacky dragons may have difficulty in detecting the movement of ambush predators, such as snakes, and of some invertebrate prey. Our study also demonstrates the potential of the computer-animated stimuli technique for conducting nonintrusive tests to explore motion range and sensitivity in a visually mediated species. PMID:29491965
Gaber, Rok; Majerle, Andreja; Jerala, Roman; Benčina, Mojca
2013-01-01
To effectively fight against the human immunodeficiency virus infection/acquired immunodeficiency syndrome (HIV/AIDS) epidemic, ongoing development of novel HIV protease inhibitors is required. Inexpensive high-throughput screening assays are needed to quickly scan large sets of chemicals for potential inhibitors. We have developed a Förster resonance energy transfer (FRET)-based, HIV protease-sensitive sensor using a combination of a fluorescent protein pair, namely mCerulean and mCitrine. Through extensive in vitro characterization, we show that the FRET-HIV sensor can be used in HIV protease screening assays. Furthermore, we have used the FRET-HIV sensor for intracellular quantitative detection of HIV protease activity in living cells, which more closely resembles an actual viral infection than an in vitro assay. We have developed a high-throughput method that employs a ratiometric flow cytometry for analyzing large populations of cells that express the FRET-HIV sensor. The method enables FRET measurement of single cells with high sensitivity and speed and should be used when subpopulation-specific intracellular activity of HIV protease needs to be estimated. In addition, we have used a confocal microscopy sensitized emission FRET technique to evaluate the usefulness of the FRET-HIV sensor for spatiotemporal detection of intracellular HIV protease activity. PMID:24287545
Gabriel, Gabriele V M; Lopes, P S; Viviani, V R
2014-01-15
Bioluminescence is widely used in biosensors. For water toxicity analysis, the naturally bioluminescent bacterium Vibrio fischeri has been used extensively. We investigated the suitability of two new beetle luciferases for Escherichia coli light-off biosensors: Macrolampis firefly and Pyrearinus termitilluminans click beetle luciferases. The bioluminescence detection assay using this system is very sensitive, being comparable or superior to V. fischeri. The luciferase of P. termitilluminans produces a strong and sustained bioluminescence that is useful for less sensitive and inexpensive assays that require integration of the emission, whereas Macrolampis luciferase displays a flash-like luminescence that is useful for fast and more sensitive assays. The effects of heavy metals and sanitizing agents were analyzed. Zinc, copper, 1-propanol, and iodide had inhibitory effects in the bioluminescence and growth assays; however, in these cases the bioluminescence was not a very reliable indicator of cell growth and metabolic activity because these agents also inhibited the luciferase. On the other hand, mercury and silver strongly affected cell bioluminescence and growth but not the luciferase activity, indicating that bioluminescence was a reliable indicator of cell growth and metabolic activity in this case. Finally, bioluminescent E. coli immobilized in an agarose matrix gave a more stable format for environmental assays. Copyright © 2013 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adekunle, S.S.A.; Wyandt, H.; Mark, H.F.L.
1994-09-01
Recently we mapped the telomeric repeat sequences to 111 interstitial sites in the human genome and to sites of gaps and breaks induced by aphidicolin and sister chromatid exchange sites detected by BrdU. Many of these sites correspond to conserved fragile sites in man, gorilla and chimpanzee, to sites of conserved sister chromatid exchange in the mammalian X chromosome, to mutagenic sensitive sites, mapped locations of proto-oncogenes, breakpoints implicated in primate evolution and to breakpoints indicated as the sole anomaly in neoplasia. This observation prompted us to investigate whether the interstitial telomeric sites cluster with these sites. An extensive literature search was carried out to find all the available published sites mentioned above. For comparison, we also carried out a statistical analysis of the clustering of the sites of the telomeric repeats with the gene locations where only nucleotide mutations have been observed as the only chromosomal abnormality. Our results indicate that the telomeric repeats cluster most with fragile sites, mutagenic sensitive sites and breakpoints implicated in primate evolution, and least with cancer breakpoints, mapped locations of proto-oncogenes and other genes with nucleotide mutations.
Advanced polarization sensitive analysis in optical coherence tomography
NASA Astrophysics Data System (ADS)
Wieloszyńska, Aleksandra; Strąkowski, Marcin R.
2017-08-01
Optical coherence tomography (OCT) is an optical imaging method that is widely applied in a variety of applications. The technology is used for cross-sectional or surface imaging with high resolution in a non-contact and non-destructive way. OCT is very useful in medical applications like ophthalmology, dermatology and dentistry, as well as in fields beyond biomedicine, like stress mapping in polymers or defect detection in protective coatings. Standard OCT imaging is based on intensity images, which can visualize the inner structure of scattering objects. However, there are a number of extensions improving the OCT measurement abilities, the main ones being polarization-sensitive OCT (PS-OCT), Doppler-enabled OCT (D-OCT) and spectroscopic OCT (S-OCT). Our research activities have been focused on PS-OCT systems. Polarization-sensitive analysis delivers useful information about the optical anisotropic properties of the evaluated sample. This kind of measurement is very important for internal stress monitoring or, e.g., tissue recognition. Standard PS-OCT provides only data about the birefringence of the measured sample; however, more information, including depolarization and diattenuation, might be obtained from the OCT measurements. In our work, a method based on the Jones formalism is presented. It is used to determine the birefringence, dichroism and optic axis orientation of the tested sample. In this contribution, the setup of the optical system, as well as test results verifying the measurement abilities of the system, are presented.
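As a hedged numerical sketch of the Jones-formalism step described above: for a homogeneous Jones matrix, the eigenvalue phases give retardance, the eigenvalue magnitudes give diattenuation, and the eigenvectors give the optic-axis orientation. The matrix below is synthesized from known parameters, not measured; this is not the authors' processing chain.

# Recover retardance, diattenuation and axis from a synthetic Jones matrix.
import numpy as np

delta, theta = 0.6, np.deg2rad(20)       # ground-truth retardance and axis
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
J = R @ np.diag([np.exp(1j * delta / 2), np.exp(-1j * delta / 2)]) @ R.T

w, v = np.linalg.eig(J)
retardance = abs(np.angle(w[0]) - np.angle(w[1]))
diattenuation = (abs(w[0])**2 - abs(w[1])**2) / (abs(w[0])**2 + abs(w[1])**2)
axis = np.rad2deg(np.arctan2(v[1, 0].real, v[0, 0].real))  # one eigenaxis
print(f"retardance {retardance:.2f} rad, diattenuation {diattenuation:.2f}, "
      f"axis {axis:.1f} deg")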
A Methodological Review of US Budget-Impact Models for New Drugs.
Mauskopf, Josephine; Earnshaw, Stephanie
2016-11-01
A budget-impact analysis is required by many jurisdictions when adding a new drug to the formulary. However, previous reviews have indicated that adherence to methodological guidelines is variable. In this methodological review, we assess the extent to which US budget-impact analyses for new drugs use recommended practices. We describe recommended practice for seven key elements in the design of a budget-impact analysis. Targeted literature searches for US studies reporting estimates of the budget impact of a new drug were performed and we prepared a summary of how each study addressed the seven key elements. The primary finding from this review is that recommended practice is not followed in many budget-impact analyses. For example, we found that growth in the treated population size and/or changes in disease-related costs expected during the model time horizon for more effective treatments was not included in several analyses for chronic conditions. In addition, all drug-related costs were not captured in the majority of the models. Finally, for most studies, one-way sensitivity and scenario analyses were very limited, and the ranges used in one-way sensitivity analyses were frequently arbitrary percentages rather than being data driven. The conclusions from our review are that changes in population size, disease severity mix, and/or disease-related costs should be properly accounted for to avoid over- or underestimating the budget impact. Since each budget holder might have different perspectives and different values for many of the input parameters, it is also critical for published budget-impact analyses to include extensive sensitivity and scenario analyses based on realistic input values.
Knowledge-guided gene prioritization reveals new insights into the mechanisms of chemoresistance.
Emad, Amin; Cairns, Junmei; Kalari, Krishna R; Wang, Liewei; Sinha, Saurabh
2017-08-11
Identification of genes whose basal mRNA expression predicts the sensitivity of tumor cells to cytotoxic treatments can play an important role in individualized cancer medicine. It enables detailed characterization of the mechanism of action of drugs. Furthermore, screening the expression of these genes in the tumor tissue may suggest the best course of chemotherapy or a combination of drugs to overcome drug resistance. We developed a computational method called ProGENI to identify the genes most associated with the variation of drug response across different individuals, based on gene expression data. In contrast to existing methods, ProGENI also utilizes prior knowledge of protein-protein and genetic interactions, using random walk techniques. Analysis of two relatively new and large datasets, including gene expression data on hundreds of cell lines and their cytotoxic responses to a large compendium of drugs, reveals a significant improvement in prediction of drug sensitivity using genes identified by ProGENI compared to other methods. Our siRNA knockdown experiments on ProGENI-identified genes confirmed the role of many new genes in sensitivity to three chemotherapy drugs: cisplatin, docetaxel, and doxorubicin. Based on such experiments and an extensive literature survey, we demonstrate that about 73% of our top predicted genes modulate drug response in selected cancer cell lines. In addition, global analysis of genes associated with groups of drugs uncovered pathways of cytotoxic response shared by each group. Our results suggest that knowledge-guided prioritization of genes using ProGENI gives new insight into mechanisms of drug resistance and identifies genes that may be targeted to overcome this phenomenon.
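The "random walk techniques" mentioned are commonly realized as a random walk with restart over the interaction network, which diffuses each gene's drug-response association across its neighbours. The toy sketch below (4-gene network, invented scores, restart probability r) shows the smoothing idea only; it is not ProGENI itself.

# Random walk with restart: p <- (1 - r) * W p + r * p0.
import numpy as np

A = np.array([[0, 1, 1, 0],     # toy undirected interaction network
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], float)
W = A / A.sum(axis=0)           # column-normalised transition matrix

p0 = np.array([0.9, 0.1, 0.0, 0.0])   # initial gene-drug association scores
p0 = p0 / p0.sum()
r = 0.5                                # restart probability

p = p0.copy()
for _ in range(100):                   # iterate to (approximate) convergence
    p = (1 - r) * W @ p + r * p0
print("network-smoothed gene scores:", np.round(p, 3))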
Clearance of the cervical spine in clinically unevaluable trauma patients.
Halpern, Casey H; Milby, Andrew H; Guo, Wensheng; Schuster, James M; Gracias, Vicente H; Stein, Sherman C
2010-08-15
Meta-analytic cost-effectiveness analysis. Our goal was to compare the results of different management strategies for trauma patients in whom the cervical spine was not clinically evaluable due to impaired consciousness, endotracheal intubation, or painful distracting injuries. We performed a structured literature review related to cervical spine trauma, radiographic clearance techniques (plain radiography, flexion/extension, CT, and MRI), and complications associated with semirigid collar use. Meta-analytic techniques were used to pool data from multiple sources to calculate pooled mean estimates of the sensitivities and specificities of imaging techniques for cervical spinal clearance, as well as rates of complications from various clearance strategies and from empirical use of semirigid collars. A decision analysis model was used to compare outcomes and costs among these strategies. Slightly more than 7.5% of patients who are clinically unevaluable have cervical spine injuries, and 42% of these injuries are associated with spinal instability. Sensitivity of plain radiography or fluoroscopy for spinal clearance was 57% (95% CI: 57%-60%). Sensitivities for CT and MRI alone were 83% (82%-84%) and 87% (84%-89%), respectively. Complications associated with collar use ranged from 1.3% (2 days) to 7.1% (10 days) but were usually minor and short-lived. Quadriplegia resulting from spinal instability missed by a clearance test had enormous impacts on longevity, quality of life, and costs. These impacts overshadowed the effects of prolonged collar application, even when the incidence of quadriplegia was extremely low. As currently used, neuroimaging studies for cervical spinal clearance in clinically unevaluable patients are not cost-effective compared with empirical immobilization in a semirigid collar.
A Multidisciplinary Approach to High Throughput Nuclear Magnetic Resonance Spectroscopy
Pourmodheji, Hossein; Ghafar-Zadeh, Ebrahim; Magierowski, Sebastian
2016-01-01
Nuclear Magnetic Resonance (NMR) is a non-contact, powerful structure-elucidation technique for biochemical analysis. NMR spectroscopy is used extensively in a variety of life science applications, including drug discovery. However, existing NMR technology is limited in that it cannot run a large number of experiments simultaneously in one unit. Recent advances in micro-fabrication technologies have attracted the attention of researchers to overcome these limitations and significantly accelerate the drug discovery process by developing the next generation of high-throughput NMR spectrometers using Complementary Metal Oxide Semiconductor (CMOS) technology. In this paper, we examine this paradigm shift and explore new design strategies for the development of the next generation of high-throughput NMR spectrometers using CMOS technology. A CMOS NMR system consists of an array of high-sensitivity micro-coils integrated with interfacing radio-frequency circuits on the same chip. Herein, we first discuss the key challenges and recent advances in the field of CMOS NMR technology, and then a new design strategy is put forward for the design and implementation of highly sensitive and high-throughput CMOS NMR spectrometers. We thereafter discuss the functionality and applicability of the proposed techniques by demonstrating the results. For microelectronic researchers starting to work in the field of CMOS NMR technology, this paper serves as a tutorial with a comprehensive review of state-of-the-art technologies and their performance levels. Based on these levels, the CMOS NMR approach offers unique advantages for the high-resolution, time-sensitive and high-throughput biomolecular analysis required in a variety of life science applications, including drug discovery. PMID:27294925
NASA Astrophysics Data System (ADS)
Borge, Rafael; Alexandrov, Vassil; José del Vas, Juan; Lumbreras, Julio; Rodríguez, Encarnacion
Meteorological inputs play a vital role in regional air quality modelling. An extensive sensitivity analysis of the Weather Research and Forecasting (WRF) model was performed in the framework of the Integrated Assessment Modelling System for the Iberian Peninsula (SIMCA) project. Up to 23 alternative model configurations, including Planetary Boundary Layer schemes, microphysics, land-surface models, radiation schemes, Sea Surface Temperature and Four-Dimensional Data Assimilation, were tested in a 3 km spatial resolution domain. Model results for the most significant meteorological variables were assessed through a series of common statistics. The physics options identified as producing better results (Yonsei University Planetary Boundary Layer, WRF Single-Moment 6-class microphysics, Noah Land-surface model, Eta Geophysical Fluid Dynamics Laboratory longwave radiation and MM5 shortwave radiation schemes), along with other relevant user settings (time-varying Sea Surface Temperature and combined grid-observational nudging), were included in a "best case" configuration. This setup was tested and found to produce more accurate estimates of temperature, wind and humidity fields at surface level than any other configuration for the two episodes simulated. Planetary Boundary Layer height predictions showed reasonable agreement with estimates derived from routine atmospheric soundings. Although some seasonal and geographical differences were observed, the model showed acceptable behaviour overall. Besides defining the most appropriate setup of the WRF model for air quality modelling over the Iberian Peninsula, this study provides a general overview of WRF sensitivity and can constitute a reference for future mesoscale meteorological modelling exercises.
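For reference, the "series of common statistics" in such model evaluations typically includes mean bias, RMSE and Willmott's index of agreement; a minimal sketch with placeholder observed and modelled values follows (the arrays are invented, not the study's data).

# Standard surface-variable evaluation statistics for one station.
import numpy as np

obs = np.array([12.1, 13.4, 15.0, 16.2, 14.8, 13.0])   # observed 2-m T (deg C)
mod = np.array([11.5, 13.9, 15.6, 15.8, 15.3, 12.4])   # modelled 2-m T (deg C)

bias = np.mean(mod - obs)
rmse = np.sqrt(np.mean((mod - obs) ** 2))
ioa = 1 - np.sum((mod - obs) ** 2) / np.sum(
    (np.abs(mod - obs.mean()) + np.abs(obs - obs.mean())) ** 2)  # Willmott's d
print(f"bias {bias:+.2f} C, RMSE {rmse:.2f} C, index of agreement {ioa:.3f}")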
NASA Technical Reports Server (NTRS)
1979-01-01
The objectives, conclusions, and approaches for accomplishing 19 specific design and analysis activities related to the installation of the power extension package (PEP) into the Orbiter cargo bay are described as well as those related to its deployment, extension, and retraction. The proposed cable handling system designed to transmit power from PEP to the Orbiter by way of the shuttle remote manipulator system is described and a preliminary specification for the gimbal assembly, solar array drive is included.
Servo control of an optical trap.
Wulff, Kurt D; Cole, Daniel G; Clark, Robert L
2007-08-01
A versatile optical trap has been constructed to control the position of trapped objects and, ultimately, to apply specified forces using feedback control. While the design, development, and use of optical traps have been extensive, and feedback control has played a critical role in pushing the state of the art, few comprehensive examinations of feedback control of optical traps have been undertaken. Furthermore, as the requirements are pushed to ever smaller distances and forces, the performance of optical traps reaches limits. It is well understood that feedback control can have both positive and negative effects in controlled systems. We give an analysis of the trapping limits and introduce an optical trap with a feedback control scheme that dramatically improves the trap's sensitivity at low frequencies.
NASA Technical Reports Server (NTRS)
1981-01-01
A study was performed to determine the types of manned missions that will likely be performed in the late 1980's or early 1990's timeframe, to define MOTV configurations which satisfy these mission requirements, and to develop a program plan for its development. Twenty generic missions were originally defined for the MOTV but, to simplify the selection process, five of these missions were selected as typical and used as Design Reference Missions. Systems and subsystems requirements were re-examined and sensitivity analyses performed to determine optimum point designs. Turnaround modes were considered to determine the most effective combination of ground-based and space-based activities. A preferred concept for the crew capsule and for the mission mode was developed.
Commercial Aircraft Maintenance Experience Relating to Engine External Hardware
NASA Technical Reports Server (NTRS)
Soditus, Sharon M.
2006-01-01
Airlines are extremely sensitive to the amount of dollars spent on maintaining the external engine hardware in the field. Analysis reveals that many problems revolve around a central issue, reliability. Fuel and oil leakage due to seal failure and electrical fault messages due to wire harness failures play a major role in aircraft delays and cancellations (D&C's) and scheduled maintenance. Correcting these items on the line requires a large investment of engineering resources and manpower after the fact. The smartest and most cost effective philosophy is to build the best hardware the first time. The only way to do that is to completely understand and model the operating environment, study the field experience of similar designs and to perform extensive testing.
Job Superscheduler Architecture and Performance in Computational Grid Environments
NASA Technical Reports Server (NTRS)
Shan, Hongzhang; Oliker, Leonid; Biswas, Rupak
2003-01-01
Computational grids hold great promise in utilizing geographically separated heterogeneous resources to solve large-scale complex scientific problems. However, a number of major technical hurdles, including distributed resource management and effective job scheduling, stand in the way of realizing these gains. In this paper, we propose a novel grid superscheduler architecture and three distributed job migration algorithms. We also model the critical interaction between the superscheduler and autonomous local schedulers. Extensive performance comparisons with ideal, central, and local schemes using real workloads from leading computational centers are conducted in a simulation environment. Additionally, synthetic workloads are used to perform a detailed sensitivity analysis of our superscheduler. Several key metrics demonstrate that substantial performance gains can be achieved via smart superscheduling in distributed computational grids.
Plasmablasts and plasma cells: reconsidering teleost immune system organization.
Ye, Jianmin; Kaattari, Ilsa; Kaattari, Stephen
2011-12-01
Comparative immunologists have expended extensive efforts in the characterization of early fish B cell development; however, analysis of the post-antigen induction stages of antibody secreting cell (ASC) differentiation has been limited. In contrast, work with murine ASCs has resolved the physically and functionally distinct cells known as plasmablasts, the short-lived plasma cells and long-lived plasma cells. Teleost ASCs are now known to also possess comparable subpopulations, which can greatly differ in such basic functions as lifespan, antigen sensitivity, antibody secretion rate, differentiative potential, and distribution within the body. Understanding the mechanisms by which these subpopulations are produced and distributed is essential for both basic understanding in comparative immunology and practical vaccine engineering. Copyright © 2011 Elsevier Ltd. All rights reserved.
Mounts, W M; Liebman, M N
1997-07-01
We have developed a method for representing biological pathways and simulating their behavior based on the use of stochastic activity networks (SANs). SANs, an extension of the original Petri net, have traditionally been used to model flow systems, including data-communications networks and manufacturing processes. We apply the methodology to the blood coagulation cascade, a biological flow system, and present the representation method as well as results of simulation studies based on published experimental data. In addition to describing the dynamic model, we also present the results of its use to simulate clinical states, including hemophilias A and B, as well as sensitivity analysis of individual factors and their impact on thrombin production.
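A minimal Petri-net-style sketch of the representation idea: places hold token counts (factor amounts) and transitions fire stochastically, consuming and producing tokens. The two-reaction chain below is a toy stand-in, not the published coagulation model or a full SAN.

# Toy stochastic Petri net: activation followed by catalysed cleavage.
import random

marking = {"prothrombin": 100, "thrombin": 0, "fibrinogen": 100, "fibrin": 0}
transitions = [
    # (name, input tokens, output tokens, relative rate)
    ("activate", {"prothrombin": 1}, {"thrombin": 1}, 1.0),
    ("cleave", {"thrombin": 1, "fibrinogen": 1}, {"thrombin": 1, "fibrin": 1}, 0.5),
]

random.seed(1)
for step in range(2000):
    enabled = [t for t in transitions
               if all(marking[p] >= k for p, k in t[1].items())]
    if not enabled:
        break                                   # no transition can fire
    weights = [t[3] for t in enabled]
    name, ins, outs, _ = random.choices(enabled, weights)[0]
    for p, k in ins.items():
        marking[p] -= k
    for p, k in outs.items():
        marking[p] += k
print(marking)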
A Flight Dynamics Model for a Multi-Actuated Flexible Rocket Vehicle
NASA Technical Reports Server (NTRS)
Orr, Jeb S.
2011-01-01
A comprehensive set of motion equations for a multi-actuated flight vehicle is presented. The dynamics are derived from a vector approach that generalizes the classical linear perturbation equations for flexible launch vehicles into a coupled three-dimensional model. The effects of nozzle and aerosurface inertial coupling, sloshing propellant, and elasticity are incorporated without restrictions on the position, orientation, or number of model elements. The present formulation is well suited to matrix implementation for large-scale linear stability and sensitivity analysis and is also shown to be extensible to nonlinear time-domain simulation through the application of a special form of Lagrange's equations in quasi-coordinates. The model is validated through frequency-domain response comparison with a high-fidelity planar implementation.
A selection model for accounting for publication bias in a full network meta-analysis.
Mavridis, Dimitris; Welton, Nicky J; Sutton, Alex; Salanti, Georgia
2014-12-30
Copas and Shi suggested a selection model to explore the potential impact of publication bias via sensitivity analysis based on assumptions for the probability of publication of trials conditional on the precision of their results. Chootrakool et al. extended this model to three-arm trials but did not fully account for the implications of the consistency assumption, and their model is difficult to generalize for complex network structures with more than three treatments. Fitting these selection models within a frequentist setting requires maximization of a complex likelihood function, and identification problems are common. We have previously presented a Bayesian implementation of the selection model when multiple treatments are compared with a common reference treatment. We now present a general model suitable for complex, full network meta-analysis that accounts for consistency when adjusting results for publication bias. We developed a design-by-treatment selection model to describe the mechanism by which studies with different designs (sets of treatments compared in a trial) and precision may be selected for publication. We fit the model in a Bayesian setting because it avoids the numerical problems encountered in the frequentist setting, it is generalizable with respect to the number of treatments and study arms, and it provides a flexible framework for sensitivity analysis using external knowledge. Our model accounts for the additional uncertainty arising from publication bias more successfully compared to the standard Copas model or its previous extensions. We illustrate the methodology using a published triangular network for the failure of vascular graft or arterial patency. Copyright © 2014 John Wiley & Sons, Ltd.
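The core Copas ingredient referenced above models the probability that a study is published as a probit function of its precision, P(publish | s) = Phi(a + b/s), so small, imprecise studies are published less often. A brief sketch with illustrative parameter values (a, b and the standard errors are not taken from the paper):

# Copas-style selection probability as a function of study standard error.
from scipy.stats import norm

def p_publish(se, a=-1.0, b=0.3):
    """Selection probability for a study with standard error `se`."""
    return norm.cdf(a + b / se)

for se in [0.05, 0.1, 0.2, 0.4]:
    print(f"SE = {se:.2f}: P(publish) = {p_publish(se):.2f}")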
Ljungvall, Ingrid; Ahlstrom, Christer; Höglund, Katja; Hult, Peter; Kvart, Clarence; Borgarelli, Michele; Ask, Per; Häggström, Jens
2009-05-01
To investigate the use of signal analysis of heart sounds and murmurs in assessing the severity of mitral regurgitation (MR) in dogs with myxomatous mitral valve disease (MMVD). 77 client-owned dogs. Cardiac sounds were recorded from dogs evaluated by use of auscultatory and echocardiographic classification systems. Signal analysis techniques were developed to extract 7 sound variables (first frequency peak, murmur energy ratio, murmur duration > 200 Hz, sample entropy and first minimum of the auto mutual information function of the murmurs, and energy ratios of the first heart sound [S1] and second heart sound [S2]). Significant associations were detected between severity of MR and all sound variables except the energy ratio of S1. An increase in severity of MR resulted in a greater contribution of higher frequencies, increased signal irregularity, and a decreased energy ratio of S2. The optimal combination of variables for distinguishing dogs with high-intensity murmurs from other dogs was the energy ratio of S2 and murmur duration > 200 Hz (sensitivity, 79%; specificity, 71%) by use of the auscultatory classification. By use of the echocardiographic classification, the corresponding variables were auto mutual information, first frequency peak, and energy ratio of S2 (sensitivity, 88%; specificity, 82%). Most of the investigated sound variables were significantly associated with severity of MR, which indicates powerful diagnostic potential for monitoring MMVD. Signal analysis techniques could be valuable for clinicians when performing risk assessment or determining whether special care and more extensive examinations are required.
NASA Astrophysics Data System (ADS)
Nazarzadeh Zare, Mohsen; Dorrani, Kamal; Gholamali Lavasani, Masoud
2012-11-01
Background and purpose: This study examines the views of farmers and extension agents participating in extension education courses in Dezful, Iran, with regard to problems with these courses. It relies upon a descriptive methodology, using a survey as its instrument. Sample: The statistical population consisted of 5060 farmers and 50 extension agents; all extension agents were studied owing to their small population, and a sample of 466 farmers was selected based on the stratified ratio sampling method. For the data analysis, statistical procedures including the t-test and factor analysis were used. Results: The factor analysis of the farmers' views indicated that these courses have problems such as inadequate use of instructional materials by extension agents, insufficient employment of knowledgeable and experienced extension agents, bad and inconvenient timing of courses for farmers, lack of logical connection between one curriculum and prior ones, negligence in considering the opinions of farmers in arranging the courses, and lack of information about the timing of courses. The factor analysis of the extension agents' views indicated that these courses suffer from problems such as the use of the same methods of instruction for all curricula, and lack of continuity between courses and their levels and content. Conclusions: Recommendations include: listening to the views of farmers when planning extension courses; providing audiovisual aids, pamphlets and CDs; arranging courses based on convenient timing for farmers; using incentives to encourage participation; and employing extension agents with knowledge of the latest agricultural issues.
Mayorga-Vega, Daniel; Merino-Marban, Rafael; Viciana, Jesús
2014-01-01
The main purpose of the present meta-analysis was to examine the scientific literature on the criterion-related validity of sit-and-reach tests for estimating hamstring and lumbar extensibility. For this purpose, relevant studies were searched from seven electronic databases dated up through December 2012. Primary outcomes of criterion-related validity were Pearson's zero-order correlation coefficients (r) between sit-and-reach tests and hamstring and/or lumbar extensibility criterion measures. Then, from the included studies, the Hunter-Schmidt psychometric meta-analysis approach was conducted to estimate the population criterion-related validity of sit-and-reach tests. First, the corrected correlation mean (rp), unaffected by statistical artefacts (i.e., sampling error and measurement error), was calculated separately for each sit-and-reach test. Subsequently, the three potential moderator variables (sex of participants, age of participants, and level of hamstring extensibility) were examined by a partially hierarchical analysis. Of the 34 studies included in the present meta-analysis, 99 correlation values across eight sit-and-reach tests and 51 across seven sit-and-reach tests were retrieved for hamstring and lumbar extensibility, respectively. The overall results showed that all sit-and-reach tests had a moderate mean criterion-related validity for estimating hamstring extensibility (rp = 0.46-0.67), but a low mean validity for estimating lumbar extensibility (rp = 0.16-0.35). Generally, females, adults and participants with high levels of hamstring extensibility tended to have greater mean values of criterion-related validity for estimating hamstring extensibility. When the use of angular tests is limited, such as in a school setting or in large-scale studies, scientists and practitioners could use the sit-and-reach tests as a useful alternative for hamstring extensibility estimation, but not for estimating lumbar extensibility. Key points: Overall, sit-and-reach tests have a moderate mean criterion-related validity for estimating hamstring extensibility, but a low mean validity for estimating lumbar extensibility. Among all the sit-and-reach test protocols, the Classic sit-and-reach test seems to be the best option to estimate hamstring extensibility. End scores (e.g., the Classic sit-and-reach test) are a better indicator of hamstring extensibility than the modifications that incorporate fingers-to-box distance (e.g., the Modified sit-and-reach test). When angular tests such as straight leg raise or knee extension tests cannot be used, sit-and-reach tests seem to be a useful field-test alternative to estimate hamstring extensibility, but not to estimate lumbar extensibility. PMID:24570599
MANGALATHU-ARUMANA, J.; BEARDSLEY, S. A.; LIEBENTHAL, E.
2012-01-01
The integration of event-related potentials (ERP) and functional magnetic resonance imaging (fMRI) can contribute to characterizing neural networks with high temporal and spatial resolution. This research aimed to determine the sensitivity and limitations of applying joint independent component analysis (jICA) within subjects, for ERP and fMRI data collected simultaneously in a parametric auditory frequency oddball paradigm. In a group of 20 subjects, an increase in ERP peak amplitude ranging over 1-8 μV in the time window of the P300 (350-700 ms), and a correlated increase in fMRI signal in a network of regions including the right superior temporal and supramarginal gyri, were observed with the increase in deviant frequency difference. jICA of the same ERP and fMRI group data revealed activity in a similar network, albeit with stronger amplitude and larger extent. In addition, activity in the left pre- and post-central gyri, likely associated with the right-hand somato-motor response, was observed only with the jICA approach. Within subjects, the jICA approach revealed significantly stronger and more extensive activity in the brain regions associated with the auditory P300 than the P300 linear regression analysis. The results suggest that, by incorporating spatial and temporal information from both imaging modalities, jICA may be a more sensitive method for extracting common sources of activity between ERP and fMRI. PMID:22377443
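The jICA idea can be sketched as follows: concatenate each subject's ERP feature vector and fMRI feature vector into one row, then unmix the stacked matrix with a single ICA so that linked ERP/fMRI sources share one subject-loading column. The dimensions and random data below are synthetic, and scikit-learn's FastICA stands in for whatever ICA implementation was actually used.

# Joint ICA over concatenated ERP and fMRI features (toy data).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_subj, n_erp, n_fmri = 20, 300, 5000
erp = rng.standard_normal((n_subj, n_erp))     # subjects x ERP time points
fmri = rng.standard_normal((n_subj, n_fmri))   # subjects x fMRI voxels

# z-score each modality, then concatenate along the feature axis
joint = np.hstack([(erp - erp.mean(0)) / erp.std(0),
                   (fmri - fmri.mean(0)) / fmri.std(0)])

ica = FastICA(n_components=5, random_state=0)
loadings = ica.fit_transform(joint)            # subjects x components
maps = ica.mixing_.T                           # components x (ERP + fMRI features)
erp_part, fmri_part = maps[:, :n_erp], maps[:, n_erp:]
print(loadings.shape, erp_part.shape, fmri_part.shape)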
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurt Beran; John Christenson; Dragos Nica
2002-12-15
The goal of the project is to enable plant operators to detect, with high sensitivity and reliability, the onset of decalibration drifts in all of the instrumentation used as input to the reactor heat balance calculations. To achieve this objective, the collaborators developed and implemented at DBNPS an extension of the Multivariate State Estimation Technique (MSET) pattern recognition methodology pioneered by ANL. The extension was implemented during the second phase of the project and fully achieved the project goal.
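MSET's similarity-based estimation idea can be sketched as kernel-weighted interpolation over a memory matrix of historical sensor states, with drift flagged as a growing residual between observed and estimated signals. Everything below (the Gaussian kernel, the two-sensor data) is a simplified stand-in, not the proprietary MSET operator.

# Similarity-based state estimation for drift detection (toy example).
import numpy as np

def similarity(a, b, h=1.0):
    return np.exp(-np.linalg.norm(a - b) ** 2 / (2 * h ** 2))

D = np.array([[10.0, 5.0],      # memory matrix of healthy plant states
              [11.0, 5.2],      # (rows: states, columns: two sensors)
              [12.0, 5.5],
              [13.0, 5.8]])
x_obs = np.array([11.6, 5.9])   # current sensor reading

w = np.array([similarity(x_obs, d) for d in D])
x_est = (w[:, None] * D).sum(axis=0) / w.sum()   # weighted state estimate
residual = x_obs - x_est                         # drift indicator per sensor
print("estimate:", np.round(x_est, 2), "residual:", np.round(residual, 2))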
Stream Vulnerability to Widespread and Emergent Stressors: A Focus on Unconventional Oil and Gas
Entrekin, Sally A.; Maloney, Kelly O.; Kapo, Katherine E.; Walters, Annika W.; Evans-White, Michelle A.; Klemow, Kenneth M.
2015-01-01
Multiple stressors threaten stream physical and biological quality, including elevated nutrients and other contaminants, riparian and in-stream habitat degradation and altered natural flow regime. Unconventional oil and gas (UOG) development is one emerging stressor that spans the U.S. UOG development could alter stream sedimentation, riparian extent and composition, in-stream flow, and water quality. We developed indices to describe the watershed sensitivity and exposure to natural and anthropogenic disturbances and computed a vulnerability index from these two scores across stream catchments in six productive shale plays. We predicted that catchment vulnerability scores would vary across plays due to climatic, geologic and anthropogenic differences. Across-shale averages supported this prediction revealing differences in catchment sensitivity, exposure, and vulnerability scores that resulted from different natural and anthropogenic environmental conditions. For example, semi-arid Western shale play catchments (Mowry, Hilliard, and Bakken) tended to be more sensitive to stressors due to low annual average precipitation and extensive grassland. Catchments in the Barnett and Marcellus-Utica were naturally sensitive from more erosive soils and steeper catchment slopes, but these catchments also experienced areas with greater UOG densities and urbanization. Our analysis suggested Fayetteville and Barnett catchments were vulnerable due to existing anthropogenic exposure. However, all shale plays had catchments that spanned a wide vulnerability gradient. Our results identify vulnerable catchments that can help prioritize stream protection and monitoring efforts. Resource managers can also use these findings to guide local development activities to help reduce possible environmental effects. PMID:26397727
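A minimal sketch of the index construction described above, with hypothetical catchment attributes and an equal-weight combination rule standing in for the paper's indices:

```python
# Min-max normalize catchment attributes into sensitivity and exposure
# scores, then combine into a vulnerability index. Attribute names,
# values, weights, and the combination rule are all hypothetical.
import numpy as np

def minmax(x):
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

precip_deficit = minmax([300, 550, 120, 800])   # placeholder catchment data
erosive_soils  = minmax([0.2, 0.7, 0.4, 0.9])
uog_density    = minmax([5, 0, 12, 3])
urban_cover    = minmax([0.1, 0.05, 0.3, 0.2])

sensitivity = (precip_deficit + erosive_soils) / 2
exposure = (uog_density + urban_cover) / 2
vulnerability = (sensitivity + exposure) / 2    # equal-weight combination
```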
Steele, James; Fisher, James; Perrin, Craig; Conway, Rebecca; Bruce-Low, Stewart; Smith, Dave
2018-01-12
Secondary analysis of data from studies utilising isolated lumbar extension exercise interventions was performed to examine correlations among changes in isolated lumbar extension strength, pain, and disability. Studies reporting isolated lumbar extension strength changes were examined against inclusion criteria: (1) participants with chronic low back pain; (2) intervention ≥ four weeks including isolated lumbar extension exercise; (3) outcome measures including isolated lumbar extension strength, pain (Visual Analogue Scale), and disability (Oswestry Disability Index). Six studies encompassing 281 participants were included. Correlations among changes in isolated lumbar extension strength, pain, and disability were calculated. Participants were grouped as "met" or "not met" based on minimal clinically important changes, and between-group comparisons were conducted. Pooled analysis of isolated lumbar extension strength and Visual Analogue Scale scores showed significant weak to moderate correlations (r = -0.391 to -0.539, all p < 0.001). Pooled analysis of isolated lumbar extension strength and Oswestry Disability Index scores showed significant weak correlations (r = -0.349 to -0.470, all p < 0.001). For both pain and disability, isolated lumbar extension strength changes were greater for those who "met" the minimal clinically important change compared with those who did not (p < 0.001-0.008). Improvements in isolated lumbar extension strength may be related to positive and meaningful clinical outcomes. As many other performance and clinical outcomes are unrelated, isolated lumbar extension strength change may be a mechanism of action affecting symptom improvement. Implications for Rehabilitation: Chronic low back pain is often associated with deconditioning of the lumbar extensor musculature. Isolated lumbar extension exercise has been shown to condition this musculature and also reduce pain and disability. This study shows significant correlations between increases in isolated lumbar extension strength and reductions in pain and disability. Strengthening of the lumbar extensor musculature could be considered an important target for exercise interventions.
Analysis of transient fission gas behaviour in oxide fuel using BISON and TRANSURANUS
NASA Astrophysics Data System (ADS)
Barani, T.; Bruschi, E.; Pizzocri, D.; Pastore, G.; Van Uffelen, P.; Williamson, R. L.; Luzzi, L.
2017-04-01
The modelling of fission gas behaviour is a crucial aspect of nuclear fuel performance analysis in view of the related effects on the thermo-mechanical performance of the fuel rod, which can be particularly significant during transients. In particular, experimental observations indicate that substantial fission gas release (FGR) can occur on a short time scale during transients (burst release). To accurately reproduce the rapid kinetics of the burst release process in fuel performance calculations, a model that accounts for non-diffusional mechanisms such as fuel micro-cracking is needed. In this work, we present and assess a model for transient fission gas behaviour in oxide fuel, which is applied as an extension of conventional diffusion-based models to introduce the burst release effect. The concept and governing equations of the model are presented, and the sensitivity of results to the newly introduced parameters is evaluated through an analytic sensitivity analysis. The model is assessed for application to integral fuel rod analysis by implementation in two structurally different fuel performance codes: BISON (a multi-dimensional finite element code) and TRANSURANUS (a 1.5D code). Model assessment is based on the analysis of 19 light water reactor fuel rod irradiation experiments from the OECD/NEA IFPE (International Fuel Performance Experiments) database, all of which are simulated with both codes. The results point to an improvement in both the quantitative predictions of integral fuel rod FGR and the qualitative representation of the FGR kinetics with the transient model relative to the canonical, purely diffusion-based models of the codes. The overall quantitative improvement of the integral FGR predictions in the two codes is comparable. Moreover, calculated radial profiles of xenon concentration after irradiation are investigated and compared to experimental data, illustrating the underlying representation of the physical mechanisms of burst release.
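A schematic sketch of how a burst-release term can be layered onto a diffusion-based release calculation, for orientation only; the temperature-rate criterion, release fraction, and rates below are illustrative and are not the BISON or TRANSURANUS model equations:

```python
# When a transient criterion is met (here a temperature-rate threshold
# standing in for a micro-cracking criterion), a fraction of the
# grain-boundary gas inventory is released at once, on top of a slow
# diffusional release. All numbers are placeholders.
import numpy as np

dt = 1.0                                                  # s
time = np.arange(0, 3600, dt)
temp = 1200 + 400 * np.clip((time - 1800) / 600, 0, 1)    # ramp transient (K)

gb_inventory, released = 0.0, 0.0
for i in range(1, len(time)):
    gb_inventory += 1e-6 * dt                # diffusional arrival at boundaries
    dTdt = (temp[i] - temp[i - 1]) / dt
    if dTdt > 0.5:                           # illustrative burst criterion
        burst = 0.01 * gb_inventory          # burst fraction per step
        released += burst
        gb_inventory -= burst
    leak = 1e-4 * gb_inventory * dt          # slow diffusional release
    released += leak
    gb_inventory -= leak
```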
Sensitivity Analysis in Engineering
NASA Technical Reports Server (NTRS)
Adelman, Howard M. (Compiler); Haftka, Raphael T. (Compiler)
1987-01-01
The symposium proceedings presented focused primarily on sensitivity analysis of structural response. However, the first session, entitled, General and Multidisciplinary Sensitivity, focused on areas such as physics, chemistry, controls, and aerodynamics. The other four sessions were concerned with the sensitivity of structural systems modeled by finite elements. Session 2 dealt with Static Sensitivity Analysis and Applications; Session 3 with Eigenproblem Sensitivity Methods; Session 4 with Transient Sensitivity Analysis; and Session 5 with Shape Sensitivity Analysis.
Costa, Marta; Manton, James D; Ostrovsky, Aaron D; Prohaska, Steffen; Jefferis, Gregory S X E
2016-07-20
Neural circuit mapping is generating datasets of tens of thousands of labeled neurons. New computational tools are needed to search and organize these data. We present NBLAST, a sensitive and rapid algorithm for measuring pairwise neuronal similarity. NBLAST considers both position and local geometry, decomposing neurons into short segments; matched segments are scored using a probabilistic scoring matrix defined by the statistics of matches and non-matches. We validated NBLAST on a published dataset of 16,129 single Drosophila neurons. NBLAST can distinguish neuronal types down to the finest level (single identified neurons) without a priori information. Cluster analysis of extensively studied neuronal classes identified new types and unreported topographical features. Fully automated clustering organized the validation dataset into 1,052 clusters, many of which map onto previously described neuronal types. NBLAST supports additional query types, including searching neurons against transgene expression patterns. Finally, we show that NBLAST is effective with data from other invertebrates and zebrafish. VIDEO ABSTRACT. Copyright © 2016 MRC Laboratory of Molecular Biology. Published by Elsevier Inc. All rights reserved.
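A minimal sketch of an NBLAST-style similarity score, assuming each neuron is supplied as a set of points with unit tangent vectors; the exponential distance-times-alignment scoring function stands in for the probabilistic scoring matrix trained in the paper, and sigma is illustrative:

```python
# Match every query segment to its nearest target segment and score by
# proximity and tangent alignment (simplified NBLAST-style similarity).
import numpy as np
from scipy.spatial import cKDTree

def nblast_score(query_pts, query_vecs, target_pts, target_vecs, sigma=3.0):
    tree = cKDTree(target_pts)
    dists, idx = tree.query(query_pts)                  # nearest target segment
    alignment = np.abs(np.sum(query_vecs * target_vecs[idx], axis=1))
    return float(np.sum(alignment * np.exp(-dists / sigma)))
```

Because the raw score depends on neuron size, scores are typically normalized (e.g., by the query's self-score) before clustering, which mirrors how pairwise similarity matrices are usually prepared for hierarchical clustering.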
Polarization sensitive optical coherence tomography in equine bone
NASA Astrophysics Data System (ADS)
Jacobs, J. W.; Matcher, S. J.
2009-02-01
Optical coherence tomography (OCT) has been used to image equine bone samples. OCT and polarization-sensitive OCT (PS-OCT) images of equine bone samples, before and after demineralization, are presented. Using a novel approach of taking a series of images at different angles of illumination, the polar angle and true birefringence of collagen within the tissue are determined at one site in the sample. The images were taken before and after the bones were passed through a demineralization process. The images show an improvement in depth penetration after demineralization, allowing better visualization of the internal structure of the bone and the optical orientation of the collagen. A quantitative measurement of true birefringence has been made of the bone; the true birefringence was shown to be 1.9 × 10^-3 before demineralization, increasing to 2.7 × 10^-3 after demineralization. However, the determined collagen fiber orientation remains the same before and after demineralization. The study of bone is extensive within the field of tissue engineering, where an understanding of internal structures is essential. OCT in bone, with improved depth penetration through demineralization, offers a useful approach to bone analysis.
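A hedged sketch of the geometry behind the multi-angle approach: for a uniaxial fiber such as collagen, the apparent birefringence measured by PS-OCT depends on the polar angle θ between the beam and the fiber axis approximately as sin²θ, so imaging at several known illumination angles allows both the true birefringence and θ to be fitted. This is the standard small-birefringence approximation, not necessarily the exact model used in the paper:

```latex
\delta(z) \;=\; \frac{2\pi}{\lambda}\,\Delta n_{\mathrm{app}}\, z,
\qquad
\Delta n_{\mathrm{app}}(\theta) \;\approx\; \Delta n_{\mathrm{true}}\,\sin^{2}\theta
```

Here δ(z) is the cumulative phase retardation at depth z and λ is the source center wavelength.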
Magneto-optical contrast in liquid-state optically detected NMR spectroscopy
Pagliero, Daniela; Meriles, Carlos A.
2011-01-01
We use optical Faraday rotation (OFR) to probe nuclear spins in real time at high magnetic field in a range of diamagnetic sample fluids. Comparison of OFR-detected NMR spectra reveals a correlation between the relative signal amplitude and the fluid Verdet constant, which we interpret as a manifestation of the variable detuning between the probe beam and the sample optical transitions. The analysis of chemical-shift-resolved, optically detected spectra allows us to set constraints on the relative amplitudes of hyperfine coupling constants, both for protons at chemically distinct sites and for other lower-gyromagnetic-ratio nuclei including carbon, fluorine, and phosphorus. By considering a model binary mixture, we observe a complex dependence of the optical response on the relative concentration, suggesting that the present approach is sensitive to solvent-solute dynamics in ways complementary to those known in inductive NMR. Extension of these experiments may find application in solvent suppression protocols, sensitivity-enhanced NMR of metalloproteins in solution, the investigation of solvent-solute interactions, or the characterization of molecular orbitals in diamagnetic systems. PMID:22100736
Couderc, François; Ong-Meang, Varravaddheay; Poinsot, Véréna
2017-01-01
Native laser-induced fluorescence detection using UV lasers coupled to CE now has an extensive literature spanning 30 years. Most work has been performed using very expensive Ar-ion lasers emitting at 257 and 275 nm; these are not affordable for routine analyses but have numerous applications, such as protein, catecholamine, and indolamine analysis. Other lasers, such as the 325 nm HeCd laser, have been used for only a few applications. Cheaper diode lasers emitting at 266 nm are extensively used for the same topics, even though the sensitivity obtained is lower than that observed with the costly UV Ar-ion lasers. This review presents various CE and microchip applications and the different UV lasers used for excitation of native fluorescence. CE with native UV laser-induced fluorescence detection is shown to be very sensitive for detecting small aromatic biomolecules as well as proteins containing Trp and Tyr residues. Moreover, it is a simple way to analyze biomolecules without derivatization. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
The mere exposure effect in the domain of haptics.
Jakesch, Martina; Carbon, Claus-Christian
2012-01-01
Zajonc showed that the attitude towards stimuli to which one has previously been exposed is more positive than towards novel stimuli. This mere exposure effect (MEE) has been tested extensively using various visual stimuli. Research on the MEE is sparse, however, for other sensory modalities. We used objects of two material categories (stone and wood) and two complexity levels (simple and complex) to test the influence of exposure frequency (F0 = novel stimuli, F2 = stimuli exposed twice, F10 = stimuli exposed ten times) under two sensory modalities (haptics only, and haptics & vision). Effects of exposure frequency were found for highly complex stimuli, with liking increasing significantly from F0 to F2 and F10, but only for the stone category. Analysis of "Need for Touch" data showed the MEE in participants with a high need for touch, which suggests different sensitivity or saturation levels of the MEE. These different sensitivity or saturation levels might also reflect effects of expertise on the haptic evaluation of objects. It seems that haptic and cross-modal MEEs are influenced by factors similar to those in the visual domain, indicating a common cognitive basis.
Information theory analysis of sensor-array imaging systems for computer vision
NASA Technical Reports Server (NTRS)
Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.; Self, M. O.
1983-01-01
Information theory is used to assess the performance of sensor-array imaging systems, with emphasis on the performance obtained with image-plane signal processing. By electronically controlling the spatial response of the imaging system, as suggested by the mechanism of human vision, it is possible to trade off edge enhancement for sensitivity, increase dynamic range, and reduce data transmission. Computational results show that: signal information density varies little with large variations in the statistical properties of random radiance fields; most information (generally about 85 to 95 percent) is contained in the signal intensity transitions rather than the levels; and performance is optimized when the OTF of the imaging system is nearly limited to the sampling passband to minimize aliasing at the cost of blurring, and the SNR is very high to permit the retrieval of small spatial detail from the extensively blurred signal. Shading the lens aperture transmittance to increase depth of field and using a regular hexagonal sensor array instead of a square lattice to decrease sensitivity to edge orientation also improve the signal information density, by up to about 30 percent at high SNRs.
Nanotopography-guided tissue engineering and regenerative medicine
Kim, Hong Nam; Jiao, Alex; Hwang, Nathaniel S.; Kim, Min Sung; Kang, Do Hyun; Kim, Deok-Ho; Suh, Kahp-Yang
2017-01-01
Human tissues are intricate ensembles of multiple cell types embedded in complex and well-defined structures of the extracellular matrix (ECM). The organization of ECM is frequently hierarchical from nano to macro, with many proteins forming large scale structures with feature sizes up to several hundred microns. Inspired from these natural designs of ECM, nanotopography-guided approaches have been increasingly investigated for the last several decades. Results demonstrate that the nanotopography itself can activate tissue-specific function in vitro as well as promote tissue regeneration in vivo upon transplantation. In this review, we provide an extensive analysis of recent efforts to mimic functional nanostructures in vitro for improved tissue engineering and regeneration of injured and damaged tissues. We first characterize the role of various nanostructures in human tissues with respect to each tissue-specific function. Then, we describe various fabrication methods in terms of patterning principles and material characteristics. Finally, we summarize the applications of nanotopography to various tissues, which are classified into four types depending on their functions: protective, mechano-sensitive, electro-active, and shear stress-sensitive tissues. Some limitations and future challenges are briefly discussed at the end. PMID:22921841
A colorimetric sensor array for detection of triacetone triperoxide vapor.
Lin, Hengwei; Suslick, Kenneth S
2010-11-10
Triacetone triperoxide (TATP), one of the most dangerous primary explosives, has emerged as an explosive of choice for terrorists in recent years. Owing to its lack of UV absorbance, fluorescence, or facile ionization, TATP is extremely difficult to detect directly. Techniques that are able to detect it generally require expensive instrumentation, need extensive sample preparation, or cannot detect TATP in the gas phase. Here we report a simple and highly sensitive colorimetric sensor for the detection of TATP vapor with semiquantitative analysis from 50 ppb to 10 ppm. By using a solid acid catalyst to pretreat a gas stream, we have discovered that a colorimetric sensor array of redox-sensitive dyes can detect even very low levels of TATP vapor from its acid decomposition products (e.g., H(2)O(2)) with limits of detection (LOD) below 2 ppb (i.e., <0.02% of its saturation vapor pressure). Common potential interferences (e.g., humidity, personal hygiene products, perfume, laundry supplies, volatile organic compounds) do not generate an array response, and the array can also differentiate TATP from other chemical oxidants (e.g., hydrogen peroxide, bleach, tert-butylhydroperoxide, peracetic acid).
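A sketch of how such an array response is commonly quantified, assuming before/after RGB values per dye spot; the spot values and the limit-of-detection threshold are hypothetical, and the Euclidean-norm response is a conventional choice rather than the paper's stated metric:

```python
# Stack the before/after RGB difference of every dye spot into one vector
# and compare its Euclidean norm against a blank-derived threshold.
import numpy as np

before = np.array([[120, 80, 60], [200, 190, 40], [90, 30, 150]], float)
after  = np.array([[118, 95, 62], [170, 188, 45], [92, 33, 149]], float)

delta = (after - before).ravel()          # concatenated RGB differences
response = np.linalg.norm(delta)          # total array response
lod_threshold = 12.0                      # e.g. 3x s.d. of blank responses
detected = response > lod_threshold
```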
Loudig, Olivier; Brandwein-Gensler, Margaret; Kim, Ryung S; Lin, Juan; Isayeva, Tatyana; Liu, Christina; Segall, Jeffrey E; Kenny, Paraic A; Prystowsky, Michael B
2011-12-01
High-throughput gene expression profiling from formalin-fixed, paraffin-embedded (FFPE) tissues has become a reality, and several methods are now commercially available. The Illumina whole-genome cDNA-mediated annealing, selection, extension and ligation (WG-DASL) assay (Illumina, Inc) is a full-transcriptome version of the original 512-gene DASL assay, allowing high-throughput profiling of 24,526 annotated genes from degraded and FFPE RNA. This assay has the potential to allow identification of novel gene signatures associated with clinical outcome using banked archival pathology specimen resources. We tested the reproducibility of the WG-DASL assay and its sensitivity for detecting differentially expressed genes in RNA extracted from matched fresh and FFPE cells, after 1 and 13 months of storage, using the human breast cell lines MCF7 and MCF10A. Then, using tumor worst pattern of invasion as a classifier, one component of the "risk model," we selected 12 FFPE oral squamous cell carcinomas for WG-DASL analysis. We profiled 5 tumors with a nonaggressive, nondispersed pattern of invasion and 7 tumors with an aggressive dispersed pattern of invasion and satellites scattered at least 1 mm apart. To minimize variability, the FFPE specimens were prepared from snap-frozen tissues, and RNA was obtained within 24 hours of fixation. One hundred four down-regulated genes and 72 up-regulated genes in tumors with the aggressive dispersed pattern of invasion were identified. We performed quantitative reverse transcriptase polymerase chain reaction (qRT-PCR) validation of 4 genes using Taqman assays and in situ protein detection of 1 gene by immunohistochemistry. Functional cluster analysis of genes up-regulated in tumors with the aggressive pattern of invasion suggests the presence of genes involved in cellular cytoarchitecture, some of them already associated with tumor invasion. Identification of these genes provides a biologic rationale for our histologic classification with regard to tumor invasion and demonstrates that the WG-DASL assay is a powerful assay for profiling degraded RNA from archived specimens when combined with qRT-PCR validation. Copyright © 2011 Elsevier Inc. All rights reserved.
No Evidence for Extensions to the Standard Cosmological Model.
Heavens, Alan; Fantaye, Yabebal; Sellentin, Elena; Eggers, Hans; Hosenie, Zafiirah; Kroon, Steve; Mootoovaloo, Arrykrishna
2017-09-08
We compute the Bayesian evidence for models considered in the main analysis of Planck cosmic microwave background data. By utilizing carefully defined nearest-neighbor distances in parameter space, we reuse the Monte Carlo Markov chains already produced for parameter inference to compute Bayes factors B for many different model-data set combinations. The standard 6-parameter flat cold dark matter model with a cosmological constant (ΛCDM) is favored over all other models considered, with curvature being mildly favored only when cosmic microwave background lensing is not included. Many alternative models are strongly disfavored by the data, including primordial correlated isocurvature models (lnB=-7.8), nonzero scalar-to-tensor ratio (lnB=-4.3), running of the spectral index (lnB=-4.7), curvature (lnB=-3.6), nonstandard numbers of neutrinos (lnB=-3.1), nonstandard neutrino masses (lnB=-3.2), nonstandard lensing potential (lnB=-4.6), evolving dark energy (lnB=-3.2), sterile neutrinos (lnB=-6.9), and extra sterile neutrinos with a nonzero scalar-to-tensor ratio (lnB=-10.8). Other models are less strongly disfavored with respect to flat ΛCDM. As with all analyses based on Bayesian evidence, the final numbers depend on the widths of the parameter priors. We adopt the priors used in the Planck analysis, while performing a prior sensitivity analysis. Our quantitative conclusion is that extensions beyond the standard cosmological model are disfavored by Planck data. Only when newer Hubble constant measurements are included does ΛCDM become disfavored, and only mildly, compared with a dynamical dark energy model (lnB∼+2).
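For reference, the log Bayes factors quoted above compare each extension against the reference model through the ratio of their Bayesian evidences (the standard definition, stated here for orientation rather than taken from the paper):

```latex
B_{12} \;=\; \frac{Z_1}{Z_2},
\qquad
Z_i \;=\; \int \mathcal{L}_i(\theta)\,\pi_i(\theta)\,\mathrm{d}\theta
```

where L is the likelihood and π the prior over model parameters. Thus lnB = -7.8 for correlated isocurvature models means the data favor flat ΛCDM by a factor of roughly e^7.8 ≈ 2400 under the adopted priors.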
Linking animal-borne video to accelerometers reveals prey capture variability
Watanabe, Yuuki Y.; Takahashi, Akinori
2013-01-01
Understanding foraging is important in ecology, as it determines the energy gains and, ultimately, the fitness of animals. However, monitoring prey captures by individual animals is difficult. Direct observations using animal-borne videos have short recording periods, and indirect signals (e.g., stomach temperature) are never validated in the field. We took an integrated approach to monitoring prey captures by a predator, deploying a video camera (lasting 85 min) and two accelerometers (on the head and back, lasting 50 h) on free-swimming Adélie penguins. The movies showed that penguins moved their heads rapidly to capture krill in midwater and fish (Pagothenia borchgrevinki) underneath the sea ice. Captures were remarkably fast (two krill per second in swarms) and efficient (244 krill or 33 P. borchgrevinki in 78–89 min). Prey captures were detected from the signal of head acceleration relative to body acceleration with high sensitivity and specificity (0.83–0.90), as shown by receiver-operating-characteristic analysis. Extension of the signal analysis to the entire behavioral records showed that krill captures were spatially and temporally more variable than P. borchgrevinki captures. Notably, the frequency distribution of krill capture rate closely followed a power-law model, indicating that the foraging success of penguins depends on a small number of very successful dives. The three steps illustrated here (i.e., video observations, linking video to behavioral signals, and extension of the signal analysis) are unique approaches to understanding the spatial and temporal variability of ecologically important events such as foraging. PMID:23341596
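A minimal sketch of the ROC step, assuming a per-sample head-minus-body acceleration signal with video-derived capture labels (both placeholders here); choosing the threshold that maximizes Youden's J gives an operating point balancing sensitivity and specificity:

```python
# Sweep detection thresholds over a labeled signal and pick the one
# maximizing sensitivity + specificity - 1 (Youden's J statistic).
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 500)                   # 1 = capture (from video)
signal = labels * 1.5 + rng.normal(0, 1.0, 500)    # head-relative acceleration

fpr, tpr, thresholds = roc_curve(labels, signal)
best = np.argmax(tpr - fpr)
print(f"threshold={thresholds[best]:.2f}, sens={tpr[best]:.2f}, spec={1 - fpr[best]:.2f}")
```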
A Meta-Analysis of Extensive Reading Research
ERIC Educational Resources Information Center
Nakanishi, Takayuki
2015-01-01
The purposes of this study were to investigate the overall effectiveness of extensive reading, whether learners' age impacts learning, and whether the length of time second language learners engage in extensive reading influences test scores. The author conducted a meta-analysis to answer research questions and to identify future research…
Two endogenous proteins that induce cell wall extension in plants
NASA Technical Reports Server (NTRS)
McQueen-Mason, S.; Durachko, D. M.; Cosgrove, D. J.
1992-01-01
Plant cell enlargement is regulated by wall relaxation and yielding, which is thought to be catalyzed by elusive "wall-loosening" enzymes. By employing a reconstitution approach, we found that a crude protein extract from the cell walls of growing cucumber seedlings possessed the ability to induce the extension of isolated cell walls. This activity was restricted to the growing region of the stem and could induce the extension of isolated cell walls from various dicot stems and the leaves of amaryllidaceous monocots, but was less effective on grass coleoptile walls. Endogenous and reconstituted wall extension activities showed similar sensitivities to pH, metal ions, thiol reducing agents, proteases, and boiling in methanol or water. Sequential HPLC fractionation of the active wall extract revealed two proteins with molecular masses of 29 and 30 kD associated with the activity. Each protein, by itself, could induce wall extension without detectable hydrolytic breakdown of the wall. These proteins appear to mediate "acid growth" responses of isolated walls and may catalyze plant cell wall extension by a novel biochemical mechanism.
Control of growth of juvenile leaves of Eucalyptus globulus: effects of leaf age.
Metcalfe, J C; Davies, W J; Pereira, J S
1991-12-01
Biophysical variables influencing the expansion of plant cells (yield threshold, cell wall extensibility and turgor) were measured in individual Eucalyptus globulus leaves from the time of emergence until cessation of growth. Leaf water relations variables and growth rates were determined as relative humidity was changed on an hourly basis. Yield threshold and cell wall extensibility were estimated from plots of leaf growth rate versus turgor. Cell wall extensibility was also measured by the Instron technique, and yield threshold was determined experimentally both by stress relaxation in a psychrometer chamber and by incubation in a range of polyethylene glycol solutions. Once emerging leaves reached approximately 5 cm(2) in size, increases in leaf area were rapid throughout the expansive phase and varied little between light and dark periods. Both leaf growth rate and turgor were sensitive to changes in humidity, and in the longer term, both yield threshold and cell wall extensibility changed as the leaf aged. Rapidly expanding leaves had a very low yield threshold and high cell wall extensibility, whereas mature leaves had low cell wall extensibility. Yield threshold increased with leaf age.
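The growth-rate-versus-turgor plots described above implicitly fit the standard Lockhart relation for irreversible cell expansion (stated here for reference; symbols follow common usage rather than the paper's notation):

```latex
\frac{1}{A}\frac{\mathrm{d}A}{\mathrm{d}t} \;=\; m\,(P - Y), \qquad P > Y
```

where A is leaf area, m the cell wall extensibility, P the turgor pressure, and Y the yield threshold; the fitted slope gives m and the intercept on the turgor axis gives Y.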
Groundwater sensitivity mapping in Kentucky using GIS and digitally vectorized geologic quadrangles
NASA Astrophysics Data System (ADS)
Croskrey, Andrea; Groves, Chris
2008-05-01
Groundwater sensitivity (Ray and O'dell, Environ Geol 22:345–352, 1993a) refers to the inherent ease with which groundwater can be contaminated based on hydrogeologic characteristics. We have developed digital methods for identifying areas of varying groundwater sensitivity for a ten-county area of south central Kentucky at a scale of 1:100,000. The study area includes extensive limestone karst sinkhole plains, with groundwater extremely sensitive to contamination. Digitally vectorized geologic quadrangles (DVGQs) were combined with elevation data to identify both hydrogeologic groundwater sensitivity regions and zones of “high risk runoff” where contaminants could be transported in runoff from less sensitive to more sensitive (particularly karst) areas. While future work will fine-tune these maps with additional layers of data (soils, for example) as digital data become available, using DVGQs allows a relatively rapid assessment of groundwater sensitivity for Kentucky at a more useful scale than previously available assessment methods, such as DRASTIC and DIVERSITY.
The contribution of particle swarm optimization to three-dimensional slope stability analysis.
Kalatehjari, Roohollah; Rashid, Ahmad Safuan A; Ali, Nazri; Hajihassani, Mohsen
2014-01-01
Over the last few years, particle swarm optimization (PSO) has been extensively applied in various geotechnical engineering including slope stability analysis. However, this contribution was limited to two-dimensional (2D) slope stability analysis. This paper applied PSO in three-dimensional (3D) slope stability problem to determine the critical slip surface (CSS) of soil slopes. A detailed description of adopted PSO was presented to provide a good basis for more contribution of this technique to the field of 3D slope stability problems. A general rotating ellipsoid shape was introduced as the specific particle for 3D slope stability analysis. A detailed sensitivity analysis was designed and performed to find the optimum values of parameters of PSO. Example problems were used to evaluate the applicability of PSO in determining the CSS of 3D slopes. The first example presented a comparison between the results of PSO and PLAXI-3D finite element software and the second example compared the ability of PSO to determine the CSS of 3D slopes with other optimization methods from the literature. The results demonstrated the efficiency and effectiveness of PSO in determining the CSS of 3D soil slopes.
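Since the entry above centers on PSO, a generic sketch of the velocity and position updates may help; the quadratic toy objective stands in for a 3D factor-of-safety calculation over ellipsoidal slip surfaces, and the inertia and acceleration coefficients are conventional defaults, not the paper's tuned values:

```python
# Generic particle swarm optimization: each particle is pulled toward its
# personal best and the global best with random weighting each iteration.
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    x = rng.uniform(-5, 5, (n_particles, dim))     # particle positions
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_val.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        val = np.apply_along_axis(objective, 1, x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        gbest = pbest[pbest_val.argmin()]
    return gbest, pbest_val.min()

best_x, best_f = pso(lambda p: np.sum(p ** 2), dim=4)  # toy objective
```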
Application of a data-mining method based on Bayesian networks to lesion-deficit analysis
NASA Technical Reports Server (NTRS)
Herskovits, Edward H.; Gerring, Joan P.
2003-01-01
Although lesion-deficit analysis (LDA) has provided extensive information about structure-function associations in the human brain, LDA has suffered from the difficulties inherent to the analysis of spatial data, i.e., there are many more variables than subjects, and data may be difficult to model using standard distributions, such as the normal distribution. We herein describe a Bayesian method for LDA; this method is based on data-mining techniques that employ Bayesian networks to represent structure-function associations. These methods are computationally tractable, and can represent complex, nonlinear structure-function associations. When applied to the evaluation of data obtained from a study of the psychiatric sequelae of traumatic brain injury in children, this method generates a Bayesian network that demonstrates complex, nonlinear associations among lesions in the left caudate, right globus pallidus, right side of the corpus callosum, right caudate, and left thalamus, and subsequent development of attention-deficit hyperactivity disorder, confirming and extending our previous statistical analysis of these data. Furthermore, analysis of simulated data indicates that methods based on Bayesian networks may be more sensitive and specific for detecting associations among categorical variables than methods based on chi-square and Fisher exact statistics.
van der Have, Mike; Oldenburg, Bas; Fidder, Herma H; Belderbos, Tim D G; Siersema, Peter D; van Oijen, Martijn G H
2014-03-01
Treatment with tumor necrosis factor-α (TNF-α) inhibitors in patients with Crohn's disease (CD) is associated with potentially serious infections, including tuberculosis (TB) and hepatitis B virus (HBV). We assessed the cost-effectiveness of extensive TB screening and of HBV screening prior to initiating TNF-α inhibitors in CD. We constructed two Markov models: (1) tuberculin skin test (TST) combined with chest X-ray (conventional TB screening) versus TST and chest X-ray followed by the interferon-gamma release assay (extensive TB screening) for diagnosing TB; and (2) HBV screening versus no HBV screening. Our base case was an adult CD patient starting infliximab treatment. Input parameters were extracted from the literature. Direct medical costs were assessed and discounted from a third-party payer perspective. The main outcome was the incremental cost-effectiveness ratio (ICER). Sensitivity and Monte Carlo analyses were performed over wide ranges of probability and cost estimates. At base case, the ICERs of extensive TB screening and HBV screening were €64,340 and €75,760, respectively, per quality-adjusted life year gained. Sensitivity analyses indicated that extensive TB screening would be cost-effective if latent TB prevalence exceeds 12 % or if the false-positive rate of TST exceeds 20 %. HBV screening would become cost-effective if HBV reactivation or HBV-related mortality exceeds 37 and 62 %, respectively. Extensive TB screening and HBV screening are therefore not cost-effective compared with conventional TB screening and no HBV screening, respectively. However, when targeted at high-risk patient groups, these screening strategies are likely to become cost-effective.
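For reference, the ICER reported above is the standard ratio of incremental cost to incremental effect between a strategy and its comparator:

```latex
\mathrm{ICER}
  \;=\;
  \frac{C_{\text{screening}} - C_{\text{comparator}}}
       {E_{\text{screening}} - E_{\text{comparator}}}
```

where C is the discounted cost and E the effectiveness in quality-adjusted life years; €64,340 per QALY thus means each additional QALY gained by extensive TB screening costs €64,340 relative to conventional screening.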
Verheggen, Bram G; Westerhout, Kirsten Y; Schreder, Carl H; Augustin, Matthias
2015-01-01
Allergoids are chemically modified allergen extracts administered to reduce allergenicity while maintaining immunogenicity. Oralair® (the 5-grass tablet) is a sublingual native grass allergen tablet for pre- and co-seasonal treatment. Based on a literature review, meta-analysis, and cost-effectiveness analysis, the relative effects and costs of the 5-grass tablet versus a mix of subcutaneous allergoid compounds for grass pollen allergic rhinoconjunctivitis were assessed. A Markov model with a time horizon of nine years was used to assess the costs and effects of three-year immunotherapy treatment. Relative efficacy, expressed as standardized mean differences, was estimated using an indirect comparison of symptom scores extracted from available clinical trials. The Rhinitis Symptom Utility Index (RSUI) was applied as a proxy to estimate utility values for symptom scores. Drug acquisition and other medical costs were derived from published sources, as were estimates for resource use, immunotherapy persistence, and occurrence of asthma. The analysis was executed from the German payer's perspective, which includes payments by the Statutory Health Insurance (SHI) and additional payments by insurants. Comprehensive deterministic and probabilistic sensitivity analyses and different scenarios were performed to test the uncertainty in the incremental model outcomes. In the base-case analysis, the model predicted a cost-utility ratio for the 5-grass tablet versus a market mix of injectable allergoid products of €12,593 per QALY. Predicted incremental costs and QALYs were €458 (95% confidence interval, CI: €220; €739) and 0.036 (95% CI: 0.002; 0.078), respectively. Compared with the allergoid mix, the probability of the 5-grass tablet being the most cost-effective treatment option was predicted to be 76% at a willingness-to-pay threshold of €20,000. The results were most sensitive to changes in efficacy estimates, duration of the pollen season, and immunotherapy persistence rates. This analysis suggests that the sublingual native 5-grass tablet is cost-effective relative to a mix of subcutaneous allergoid compounds. The robustness of these statements was confirmed in extensive sensitivity and scenario analyses.
Ford, Lauren; Henderson, Robert L; Rayner, Christopher M; Blackburn, Richard S
2017-03-03
Madder (Rubia tinctorum L.) has been widely used as a red dye throughout history. Acid-sensitive colorants present in madder, such as glycosides (lucidin primeveroside, ruberythric acid, galiosin) and sensitive aglycons (lucidin), are degraded in the textile back extraction process; in previous literature these sensitive molecules are either absent or present at only low concentrations due to the use of acid in typical textile back extraction processes. Anthraquinone aglycons alizarin and purpurin are usually identified in analyses following harsh back extraction methods, such as those using solvent mixtures with concentrated hydrochloric acid at high temperatures. Use of softer extraction techniques potentially allows dye components present in madder to be extracted without degradation, providing more information about the original dye profile, which varies significantly between madder varieties, species, and dyeing techniques. Herein, a softer extraction method involving aqueous glucose solution was developed and compared to other back extraction techniques on wool dyed with root extract from different varieties of Rubia tinctorum. Efficiencies of the extraction methods were analysed by HPLC coupled with diode array detection. Acidic literature methods were evaluated, and they generally caused hydrolysis and degradation of the dye components, with alizarin, lucidin, and purpurin being the main compounds extracted. In contrast, extraction in aqueous glucose solution provides a highly effective method for the extraction of madder-dyed wool and is shown to efficiently extract lucidin primeveroside and ruberythric acid without causing hydrolysis, and also to extract aglycons that are present due to hydrolysis during processing of the plant material. Glucose solution is a favourable extraction medium due to its ability to form extensive hydrogen bonds with the glycosides present in madder and displace them from the fibre. This new glucose method offers an efficient process that preserves these sensitive molecules and is a step-change in the analysis of madder-dyed textiles, as it can provide information about historical dye preparation and dyeing processes that current methods cannot. The method also efficiently extracts glycosides in artificially aged samples, making it applicable to museum textile artefacts. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Sahoo, S. K.; Jin, H.
2017-12-01
The evolution of Earth's biogeochemical cycles is intimately linked to the oxygenation of the oceans and atmosphere. The Late Devonian is no exception, as it is characterized by mass extinction and severe euxinia. Here we use concentrations of molybdenum (Mo), vanadium (V), uranium (U), and chromium (Cr) in organic-rich black shales from the Lower Bakken Formation of the Williston Basin to explore the relationship between extensive anoxia and euxinia and its relation to the massive release of oxygen into the ocean-atmosphere system. XRF data from four cores across the basin show that modern-ocean-style Mo, U, and Cr enrichments are observed throughout the Lower Bakken Formation, yet V is not enriched until the later part of the formation. Given the coupling between redox-sensitive trace element cycles and ocean redox, various models for Late Devonian ocean chemistry imply different effects on the biogeochemical cycling of major and trace nutrients. Here, we examine the differing redox behavior of molybdenum and vanadium under extreme anoxia and a relatively low extent of euxinia. The model suggests that the Late Devonian ocean was perhaps extensively anoxic (40-50% of modern seafloor area) with very little euxinia. Mo enrichments extend up to 500 p.p.m. throughout the section, representative of a modern reducing ocean. However, coeval low V enrichments point only to anoxia, where anoxia is a source of V and a sink for Mo. Our model suggests that the oceanic V reservoir is extremely sensitive to perturbations in the extent of anoxic conditions, particularly during post-glacial times.
NASA Astrophysics Data System (ADS)
Marseille, Gert-Jan; Stoffelen, Ad; Barkmeijer, Jan
2008-03-01
Lacking an established methodology to test the potential impact of prospective extensions to the global observing system (GOS) in real atmospheric cases, we developed such a method, called the Sensitivity Observing System Experiment (SOSE). For example, since the GOS is non-uniform, it is of interest to investigate the benefit of complementary observing systems filling its gaps. In a SOSE, adjoint sensitivity structures are used to define a pseudo-true atmospheric state for the simulation of the prospective observing system. Next, the synthetic observations are used together with real observations from the existing GOS in a state-of-the-art numerical weather prediction (NWP) model to assess the potential added value of the new observing system. Unlike full observing system simulation experiments (OSSEs), SOSE can be applied to real extreme events that were badly forecast operationally and only requires the simulation of the new instrument. As such, SOSE is an effective tool, for example, to define observation requirements for extensions to the GOS. These observation requirements may serve as input for the design of an operational network of prospective observing systems. In a companion paper we use SOSE to simulate potential future spaceborne Doppler wind lidar (DWL) scenarios and assess their capability to sample meteorologically sensitive areas not well captured by the current GOS, in particular over the Northern Hemisphere oceans.
Cross-borehole slug test analysis in a fractured limestone aquifer
NASA Astrophysics Data System (ADS)
Audouin, Olivier; Bodin, Jacques
2008-01-01
This work proposes new semi-analytical solutions for the interpretation of cross-borehole slug tests in fractured media. Our model is an extension of previous work by Barker [Barker, J.A., 1988. A generalized radial flow model for hydraulic tests in fractured rock. Water Resources Research 24(10), 1796-1804] and Butler and Zhan [Butler Jr., J.J., Zhan, X., 2004. Hydraulic tests in highly permeable aquifers. Water Resources Research 40, W12402. doi:10.1029/2003WR002998]. It includes inertial effects at both test and observation wells and a fractional flow dimension in the aquifer. The model has five fitting parameters: flow dimension n, hydraulic conductivity K, specific storage coefficient Ss, and effective lengths of the test well Le and of the observation well Leo. The results of a sensitivity analysis show that the most sensitive parameter is the flow dimension n. The model sensitivity to the other parameters may be ranked as follows: K > Le ≈ Leo > Ss. The sensitivity to aquifer storage remains one or two orders of magnitude lower than that to the other parameters. The model has been coupled to an automatic inversion algorithm to facilitate the interpretation of real field data. This inversion algorithm is based on a Gauss-Newton optimization procedure conditioned by re-scaled sensitivities. It has been used to successfully interpret cross-borehole slug test data from the Hydrogeological Experimental Site (HES) of Poitiers, France, consisting of fractured and karstic limestones. HES data provide flow dimension values ranging between 1.6 and 2.5, and hydraulic conductivity values ranging between 4.4 × 10^-5 and 7.7 × 10^-4 m s^-1. These values are consistent with previous interpretations of single-well slug tests. The results of the sensitivity analysis are confirmed by calculations of relative errors on parameter estimates, which show that the accuracy on n and K is below 20% and that on Ss is about one order of magnitude. The K-values interpreted from cross-borehole slug tests are one order of magnitude higher than those previously interpreted from interference pumping tests. These findings suggest that cross-borehole slug tests focus on preferential flowpath networks made by fractures and karstic channels, i.e., the head perturbation induced by a slug test propagates only through those flowpaths with the lowest hydraulic resistance. As a result, cross-borehole slug tests are expected to identify the hydrodynamic properties of karstic-channel and fracture flowpaths, and may be considered complementary to pumping tests, which more likely provide bulk properties of the whole fracture/karstic-channel/matrix system.
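A hedged sketch of the inversion step only: fitting model parameters to observed heads by damped least squares. The oscillatory forward model below is a placeholder, not the semi-analytical solution in (n, K, Ss, Le, Leo) described above, and scipy's generic solver stands in for the paper's Gauss-Newton scheme with re-scaled sensitivities:

```python
# Fit a placeholder damped-oscillation model to synthetic slug-test heads.
import numpy as np
from scipy.optimize import least_squares

t_obs = np.linspace(0.1, 60.0, 50)
h_obs = np.exp(-0.08 * t_obs) * np.cos(0.4 * t_obs)   # placeholder data

def forward(params, t):
    a, b = params                                     # stand-in parameters
    return np.exp(-a * t) * np.cos(b * t)

fit = least_squares(lambda p: forward(p, t_obs) - h_obs, x0=[0.05, 0.5])
print(fit.x)                                          # recovered parameters
```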
NASA Astrophysics Data System (ADS)
Aleksankina, Ksenia; Heal, Mathew R.; Dore, Anthony J.; Van Oijen, Marcel; Reis, Stefan
2018-04-01
Atmospheric chemistry transport models (ACTMs) are widely used to underpin policy decisions associated with the impact of potential changes in emissions on future pollutant concentrations and deposition. It is therefore essential to have a quantitative understanding of the uncertainty in model output arising from uncertainties in the input pollutant emissions. ACTMs incorporate complex and non-linear descriptions of chemical and physical processes, which means that interactions and non-linearities in input-output relationships may not be revealed through the local one-at-a-time sensitivity analysis typically used. The aim of this work is to demonstrate a global sensitivity and uncertainty analysis approach for an ACTM, using as an example the FRAME model, which is extensively employed in the UK to generate source-receptor matrices for the UK Integrated Assessment Model and to estimate critical load exceedances. An optimised Latin hypercube sampling design was used to construct model runs within a ±40 % variation range for the UK emissions of SO2, NOx, and NH3, from which regression coefficients for each input-output combination and each model grid cell (>10 000 across the UK) were calculated. Surface concentrations of SO2, NOx, and NH3 (and deposition of S and N) were found to be predominantly sensitive to the emissions of the respective pollutant, while sensitivities of secondary species such as HNO3 and particulate SO42-, NO3-, and NH4+ to pollutant emissions were more complex and geographically variable. The uncertainties in model output variables were propagated from the uncertainty ranges reported by the UK National Atmospheric Emissions Inventory for the emissions of SO2, NOx, and NH3 (±4, ±10, and ±20 %, respectively). The uncertainties in the surface concentrations of NH3 and NOx and the depositions of NHx and NOy were dominated by the uncertainties in emissions of NH3 and NOx respectively, whilst concentrations of SO2 and deposition of SOy were affected by the uncertainties in both SO2 and NH3 emissions. Likewise, the relative uncertainties in the modelled surface concentrations of each of the secondary pollutant variables (NH4+, NO3-, SO42-, and HNO3) were due to uncertainties in at least two input variables. In all cases the spatial distribution of relative uncertainty was found to be geographically heterogeneous. The global methods used here can be applied to conduct sensitivity and uncertainty analyses of other ACTMs.
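A minimal sketch of the sampling-and-regression design described above, assuming a toy scalar model in place of FRAME; the ±40 % bounds match the text, while the model, sample size, and output are placeholders:

```python
# Latin hypercube draws of emission scalings, one model run per sample,
# then standardized regression coefficients per output as sensitivities.
import numpy as np
from scipy.stats import qmc
from sklearn.linear_model import LinearRegression

sampler = qmc.LatinHypercube(d=3, seed=0)
X = qmc.scale(sampler.random(n=200), [0.6] * 3, [1.4] * 3)  # SO2, NOx, NH3 scalings

def toy_model(x):            # placeholder for a FRAME run at one grid cell
    so2, nox, nh3 = x
    return 2.0 * so2 + 0.5 * nh3 * so2   # e.g. an SO2 surface concentration

y = np.apply_along_axis(toy_model, 1, X)
reg = LinearRegression().fit((X - X.mean(0)) / X.std(0), (y - y.mean()) / y.std())
print(reg.coef_)             # standardized sensitivity to each input
```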
Metabolome analysis for discovering biomarkers of gastroenterological cancer.
Suzuki, Makoto; Nishiumi, Shin; Matsubara, Atsuki; Azuma, Takeshi; Yoshida, Masaru
2014-09-01
Improvements in analytical technologies have made it possible to rapidly determine the concentrations of thousands of metabolites in any biological sample, which has resulted in metabolome analysis being applied to various types of research, such as clinical, cell biology, and plant/food science studies. The metabolome represents all of the end products and by-products of the numerous complex metabolic pathways operating in a biological system. Thus, metabolome analysis allows one to survey the global changes in an organism's metabolic profile and gain a holistic understanding of the changes that occur in organisms during various biological processes, e.g., during disease development. In clinical metabolomic studies, there is a strong possibility that differences in the metabolic profiles of human specimens reflect disease-specific states. Recently, metabolome analysis of biofluids, e.g., blood, urine, or saliva, has been increasingly used for biomarker discovery and disease diagnosis. Mass spectrometry-based techniques have been extensively used for metabolome analysis because they exhibit high selectivity and sensitivity during the identification and quantification of metabolites. Here, we describe metabolome analysis using liquid chromatography-mass spectrometry, gas chromatography-mass spectrometry, and capillary electrophoresis-mass spectrometry. Furthermore, the findings of studies that attempted to discover biomarkers of gastroenterological cancer are also outlined. Finally, we discuss metabolome analysis-based disease diagnosis. Copyright © 2014 Elsevier B.V. All rights reserved.
Sensitive Periods in Affective Development: Nonlinear Maturation of Fear Learning
Hartley, Catherine A; Lee, Francis S
2015-01-01
At specific maturational stages, neural circuits enter sensitive periods of heightened plasticity, during which the development of both brain and behavior are highly receptive to particular experiential information. A relatively advanced understanding of the regulatory mechanisms governing the initiation, closure, and reinstatement of sensitive period plasticity has emerged from extensive research examining the development of the visual system. In this article, we discuss a large body of work characterizing the pronounced nonlinear changes in fear learning and extinction that occur from childhood through adulthood, and their underlying neural substrates. We draw upon the model of sensitive period regulation within the visual system, and present burgeoning evidence suggesting that parallel mechanisms may regulate the qualitative changes in fear learning across development. PMID:25035083
US 93 preconstruction wildlife monitoring field methods handbook : final report.
DOT National Transportation Integrated Search
2006-11-01
The US 93 reconstruction project on the Flathead Indian Reservation in northwest Montana represents one of the most extensive wildlife-sensitive highway design efforts to occur in the continental United States. The reconstruction will include install...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-03
... business information or otherwise sensitive or protected information. NMFS will accept anonymous comments (enter "N/A" in the required fields if you wish to remain anonymous). Attachments to electronic...
Makris, Susan L.; Raffaele, Kathleen; Allen, Sandra; Bowers, Wayne J.; Hass, Ulla; Alleva, Enrico; Calamandrei, Gemma; Sheets, Larry; Amcoff, Patric; Delrue, Nathalie; Crofton, Kevin M.
2009-01-01
Objective: We conducted a review of the history and performance of developmental neurotoxicity (DNT) testing in support of the finalization and implementation of the Organisation for Economic Co-operation and Development (OECD) DNT test guideline 426 (TG 426). Information sources and analysis: In this review we summarize extensive scientific efforts that form the foundation for this testing paradigm, including basic neurotoxicology research, interlaboratory collaborative studies, expert workshops, and validation studies, and we address the relevance, applicability, and use of the DNT study in risk assessment. Conclusions: The OECD DNT guideline represents the best available science for assessing the potential for DNT in human health risk assessment, and data generated with this protocol are relevant and reliable for the assessment of these end points. The test methods used have been subjected to an extensive history of international validation, peer review, and evaluation, which is contained in the public record. The reproducibility, reliability, and sensitivity of these methods have been demonstrated, using a wide variety of test substances, in accordance with OECD guidance on the validation and international acceptance of new or updated test methods for hazard characterization. Multiple independent, expert scientific peer reviews affirm these conclusions. PMID:19165382
Development of automotive battery systems capable of surviving modern underhood environments
NASA Astrophysics Data System (ADS)
Pierson, John R.; Johnson, Richard T.
The starting, lighting, and ignition (SLI) battery in today's automobile typically finds itself in an engine compartment that is jammed with mechanical, electrical, and electronic devices. The spacing of these devices precludes air movement and, thus, heat transfer out of the compartment. Furthermore, many of the devices, in addition to the internal combustion engine, actually generate heat. The resulting underhood environment is extremely hostile to thermally-sensitive components, especially the battery. All indications point to a continuation of this trend towards higher engine-compartment temperatures as future vehicles evolve. The impact of ambient temperature on battery life is clearly demonstrated in the failure-mode analysis conducted by the Battery Council International in 1990. This study, when combined with additional failure-mode analyses, vehicle systems simulation, and elevated temperature life testing, provides insight into the potential for extending battery life. Controlled fleet and field tests are used to document and quantify improvements in product design. Three approaches to battery life extension under adverse thermal conditions are assessed, namely: (i) battery design; (ii) thermal management; and (iii) alternative battery locations. The advantages and disadvantages of these approaches (both individually and in combination) for original equipment and aftermarket applications are explored.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mays, S.E.; Poloski, J.P.; Sullivan, W.H.
1982-07-01
A probabilistic risk assessment (PRA) was made of the Browns Ferry, Unit 1, nuclear plant as part of the Nuclear Regulatory Commission's Interim Reliability Evaluation Program (IREP). Specific goals of the study were to identify the dominant contributors to core melt, develop a foundation for more extensive use of PRA methods, expand the cadre of experienced PRA practitioners, and apply procedures for extension of IREP analyses to other domestic light water reactors. Event tree and fault tree analyses were used to estimate the frequency of accident sequences initiated by transients and loss of coolant accidents. External events such as floods, fires, earthquakes, and sabotage were beyond the scope of this study and were, therefore, excluded. From these sequences, the dominant contributors to probable core melt frequency were chosen. Uncertainty and sensitivity analyses were performed on these sequences to better understand the limitations associated with the estimated sequence frequencies. Dominant sequences were grouped according to common containment failure modes and corresponding release categories on the basis of comparison with analyses of similar designs rather than on the basis of detailed plant-specific calculations.
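The sequence-frequency arithmetic behind event tree quantification can be sketched briefly. The sketch below is a minimal illustration with invented initiator frequencies and fault-tree failure probabilities; it is not data from the Browns Ferry study.

```python
# Hypothetical event-tree quantification sketch: an accident-sequence
# frequency is the initiating-event frequency multiplied by the failure
# probabilities of the systems that fail along the sequence. All
# numbers below are illustrative placeholders.

transient_freq = 3.0  # initiating events per reactor-year (hypothetical)
branch_failure_probs = {
    "reactor_protection": 3e-5,        # hypothetical fault-tree results
    "high_pressure_injection": 2e-3,
    "residual_heat_removal": 1e-3,
}

def sequence_frequency(initiator_freq, failed_systems, probs):
    """Frequency of a sequence in which the given systems fail."""
    freq = initiator_freq
    for system in failed_systems:
        freq *= probs[system]
    return freq

# Sequence: transient followed by failure of injection and heat removal.
seq = sequence_frequency(transient_freq,
                         ["high_pressure_injection", "residual_heat_removal"],
                         branch_failure_probs)
print(f"Sequence frequency: {seq:.1e} per reactor-year")  # 6.0e-06
```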
Ochiai, Nobuo; Sasamoto, Kikuo; Tsunokawa, Jun; Hoffmann, Andreas; Okanoya, Kazunori; MacNamara, Kevin
2015-11-20
An extension of multi-volatile method (MVM) technology using the combination of a standard dynamic headspace (DHS) configuration, and a modified DHS configuration incorporating an additional vacuum module, was developed for milliliter injection volume of aqueous sample with full sample evaporation. A prior step involved investigation of water management by weighing of the water residue in the adsorbent trap. The extended MVM for 1 mL aqueous sample consists of five different DHS method parameter sets including choice of the replaceable adsorbent trap. An initial two DHS sampling sets at 25°C with the standard DHS configuration using a carbon-based adsorbent trap target very volatile solutes with high vapor pressure (>10 kPa) and volatile solutes with moderate vapor pressure (1-10 kPa). Subsequent three DHS sampling sets at 80°C with the modified DHS configuration using a Tenax TA trap target solutes with low vapor pressure (<1 kPa) and/or hydrophilic characteristics. After the five sequential DHS samplings using the same HS vial, the five traps are sequentially desorbed with thermal desorption in reverse order of the DHS sampling and the desorbed compounds are trapped and concentrated in a programmed temperature vaporizing (PTV) inlet and subsequently analyzed in a single GC-MS run. Recoveries of 21 test aroma compounds in 1 mL water for each separate DHS sampling and the combined MVM procedure were evaluated as a function of vapor pressure in the range of 0.000088-120 kPa. The MVM procedure provided high recoveries (>88%) for 17 test aroma compounds and moderate recoveries (44-71%) for 4 test compounds. The method showed good linearity (r² > 0.9913) and high sensitivity (limit of detection: 0.1-0.5 ng mL⁻¹) even with MS scan mode. The improved sensitivity of the method was demonstrated with analysis of a wide variety of aroma compounds in brewed green tea. Compared to the original 100 μL MVM procedure, this extension to 1 mL MVM allowed detection of nearly twice the number of aroma compounds, including 18 potent aroma compounds from top-note to base-note (e.g. 2,3-butanedione, coumarin, furaneol, guaiacol, cis-3-hexenol, linalool, maltol, methional, 3-methyl butanal, 2,3,5-trimethyl pyrazine, and vanillin). Sensitivity for 23 compounds improved by a factor of 3.4-15 under 1 mL MVM conditions. Copyright © 2015 Elsevier B.V. All rights reserved.
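For readers implementing a similar workflow, the five DHS parameter sets described above amount to a small configuration table. The sketch below encodes only what the abstract states (temperatures, traps, targeted vapor-pressure ranges) plus the reverse-order desorption; all other instrument details are omitted.

```python
# Configuration sketch of the five sequential DHS sampling sets.
# Temperatures, traps, and vapor-pressure targets follow the abstract;
# everything else about the method is intentionally left out.

mvm_sets = [
    {"set": 1, "temp_C": 25, "trap": "carbon-based", "target_vp_kPa": ">10"},
    {"set": 2, "temp_C": 25, "trap": "carbon-based", "target_vp_kPa": "1-10"},
    {"set": 3, "temp_C": 80, "trap": "Tenax TA", "target_vp_kPa": "<1 or hydrophilic"},
    {"set": 4, "temp_C": 80, "trap": "Tenax TA", "target_vp_kPa": "<1 or hydrophilic"},
    {"set": 5, "temp_C": 80, "trap": "Tenax TA", "target_vp_kPa": "<1 or hydrophilic"},
]

# The traps are desorbed in reverse order of sampling into a single PTV inlet.
for cfg in reversed(mvm_sets):
    print(f"desorb trap from set {cfg['set']} ({cfg['trap']}, {cfg['temp_C']} deg C)")
```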
Toward a Principled Sampling Theory for Quasi-Orders
Ünlü, Ali; Schrepp, Martin
2016-01-01
Quasi-orders, that is, reflexive and transitive binary relations, have numerous applications. In educational theories, the dependencies of mastery among the problems of a test can be modeled by quasi-orders. Methods such as item tree or Boolean analysis that mine for quasi-orders in empirical data are sensitive to the underlying quasi-order structure. These data mining techniques have to be compared based on extensive simulation studies, with unbiased samples of randomly generated quasi-orders at their basis. In this paper, we develop techniques that can provide the required quasi-order samples. We introduce a discrete doubly inductive procedure for incrementally constructing the set of all quasi-orders on a finite item set. A randomization of this deterministic procedure allows us to generate representative samples of random quasi-orders. With an outer level inductive algorithm, we consider the uniform random extensions of the trace quasi-orders to higher dimension. This is combined with an inner level inductive algorithm to correct the extensions that violate the transitivity property. The inner level correction step entails sampling biases. We propose three algorithms for bias correction and investigate them in simulation. It is evident that, on even up to 50 items, the new algorithms create close to representative quasi-order samples within acceptable computing time. Hence, the principled approach is a significant improvement to existing methods that are used to draw quasi-orders uniformly at random but cannot cope with reasonably large item sets. PMID:27965601
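To see why principled sampling matters, consider the naive alternative: draw a random reflexive relation and reject it unless it is transitive. The sketch below implements that baseline (not the authors' doubly inductive procedure); its acceptance rate collapses as the item count grows, which is precisely the scaling problem the paper addresses.

```python
# Naive rejection-sampling baseline for uniform random quasi-orders on a
# small item set. This is NOT the paper's algorithm; it illustrates the
# problem and becomes infeasible beyond a handful of items.
import itertools
import random

def is_transitive(rel):
    return all((i, k) in rel
               for i, j in rel for j2, k in rel if j == j2)

def random_quasi_order(n, max_tries=1_000_000):
    items = range(n)
    for _ in range(max_tries):
        rel = {(i, i) for i in items}  # reflexivity is forced
        rel |= {(i, j) for i, j in itertools.permutations(items, 2)
                if random.random() < 0.5}
        if is_transitive(rel):
            return rel
    raise RuntimeError("no transitive sample found")

print(sorted(random_quasi_order(4)))
```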
Unger, Jakob; Schuster, Maria; Hecker, Dietmar J; Schick, Bernhard; Lohscheller, Jörg
2016-01-01
This work presents a computer-based approach to analyze the two-dimensional vocal fold dynamics of endoscopic high-speed videos, and constitutes an extension and generalization of a previously proposed wavelet-based procedure. While most approaches aim for analyzing sustained phonation conditions, the proposed method allows for a clinically adequate analysis of both dynamic as well as sustained phonation paradigms. The analysis procedure is based on a spatio-temporal visualization technique, the phonovibrogram, that facilitates the documentation of the visible laryngeal dynamics. From the phonovibrogram, a low-dimensional set of features is computed using a principal component analysis strategy that quantifies the type of vibration patterns, irregularity, lateral symmetry and synchronicity, as a function of time. Two different test bench data sets are used to validate the approach: (I) 150 healthy and pathologic subjects examined during sustained phonation. (II) 20 healthy and pathologic subjects that were examined twice: during sustained phonation and a glissando from a low to a higher fundamental frequency. In order to assess the discriminative power of the extracted features, a Support Vector Machine is trained to distinguish between physiologic and pathologic vibrations. The results for sustained phonation sequences are compared to the previous approach. Finally, the classification performance of the stationary analyzing procedure is compared to the transient analysis of the glissando maneuver. For the first test bench the proposed procedure outperformed the previous approach (proposed feature set: accuracy: 91.3%, sensitivity: 80%, specificity: 97%, previous approach: accuracy: 89.3%, sensitivity: 76%, specificity: 96%). Comparing the classification performance of the second test bench further corroborates that analyzing transient paradigms provides clear additional diagnostic value (glissando maneuver: accuracy: 90%, sensitivity: 100%, specificity: 80%, sustained phonation: accuracy: 75%, sensitivity: 80%, specificity: 70%). The incorporation of parameters describing the temporal evolvement of vocal fold vibration clearly improves the automatic identification of pathologic vibration patterns. Furthermore, incorporating a dynamic phonation paradigm provides additional valuable information about the underlying laryngeal dynamics that cannot be derived from sustained conditions. The proposed generalized approach provides a better overall classification performance than the previous approach, and hence constitutes a new advantageous tool for an improved clinical diagnosis of voice disorders. Copyright © 2015 Elsevier B.V. All rights reserved.
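The final classification step, an SVM distinguishing physiologic from pathologic vibrations scored by accuracy, sensitivity, and specificity, can be sketched as follows. The feature vectors here are synthetic stand-ins; the phonovibrogram feature extraction itself is not reproduced.

```python
# Minimal sketch of the classification stage: train an SVM on
# low-dimensional feature vectors and report accuracy, sensitivity,
# and specificity. Features are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X_healthy = rng.normal(0.0, 1.0, size=(75, 6))   # synthetic feature vectors
X_patho = rng.normal(1.5, 1.0, size=(75, 6))
X = np.vstack([X_healthy, X_patho])
y = np.array([0] * 75 + [1] * 75)                # 1 = pathologic vibration

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print(f"accuracy:    {(tp + tn) / (tp + tn + fp + fn):.2f}")
print(f"sensitivity: {tp / (tp + fn):.2f}")
print(f"specificity: {tn / (tn + fp):.2f}")
```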
Accumulation mechanisms and the weathering of Antarctic equilibrated ordinary chondrites
NASA Astrophysics Data System (ADS)
Benoit, P. H.; Sears, D. W. G.
1999-06-01
Induced thermoluminescence (TL) is used to quantitatively evaluate the degree of weathering of meteorites found in Antarctica. We find a weak correlation between TL sensitivity and descriptions of weathering in hand specimens, the highly weathered meteorites having lower TL sensitivity than unweathered meteorites. Analysis of samples taken throughout large meteorites shows that the heterogeneity in TL sensitivity within meteorite finds is not large relative to the range exhibited by different weathered meteorites. The TL sensitivity values can be restored by minimal acid washing, suggesting that the lower TL sensitivities of weathered meteorites reflect thin weathering rims on mineral grains or coating of these grains by iron oxides produced by hydration and oxidation of metal and sulfides. Small meteorites may tend to be more highly weathered than large meteorites at the Allan Hills ice fields. We find that meteorite fragments >150 g may take up to 300,000 years to reach the highest degrees of weathering, while meteorites <150 g require <40,000 years. However, at other fields, local environmental conditions and variability in terrestrial history are more important in determining weathering than size alone. Weathering correlates poorly with surface exposure duration, presumably because weathering occurs primarily during interglacial periods. The Allan Hills locality has served as a fairly stable surface over the last 100,000 years or so and has efficiently preserved both small and large meteorites. Meteorites from Lewis Cliff, however, have experienced extensive weathering, probably because of increased surface melt water from nearby outcrops. Meteorites from the Elephant Moraine locality tend to exhibit only minor degrees of weathering, but small meteorites are less weathered than large meteorites, which we suggest is due to the loss of small meteorites by aeolian transport.
Computational and Mathematical Modeling of Coupled Superconducting Quantum Interference Devices
NASA Astrophysics Data System (ADS)
Berggren, Susan Anne Elizabeth
This research focuses on conducting an extensive computational investigation and mathematical analysis into the average voltage response of arrays of Superconducting Quantum Interference Devices (SQUIDs). These arrays will serve as the basis for the development of a sensitive, low noise, significantly lower Size, Weight and Power (SWaP) antenna integrated with Low-Noise Amplifier (LNA) using the SQUID technology. The goal for this antenna is to be capable of meeting all requirements for Guided Missile Destroyers (DDG) 1000 class ships for Information Operations/Signals Intelligence (IO/SIGINT) applications in Very High Frequency/Ultra High Frequency (V/UHF) bands. The device will increase the listening capability of receivers by moving technology into a new regime of energy detection allowing wider band, smaller size, more sensitive, stealthier systems. The smaller size and greater sensitivity will allow for ships to be “de-cluttered” of their current large dishes and devices, replacing everything with fewer and smaller SQUID antenna devices. The fewer devices present on the deck of a ship, the more invisible the ship will be to enemy forces. We invent new arrays of SQUIDs, optimized for signal detection with very high dynamic range and excellent spur-free dynamic range, while maintaining extreme small size (and low radar cross section), wide bandwidth, and environmentally noise limited sensitivity, effectively shifting the bottleneck of receiver systems forever away from the antenna itself deeper into the receiver chain. To accomplish these goals we develop and validate mathematical models for different designs of SQUID arrays and use them to invent a new device and systems design. This design is capable of significantly exceeding, per size weight and power, state-of-the-art receiver system measures of performance, such as bandwidth, sensitivity, dynamic range, and spurious-free dynamic range.
Gandjour, Afschin; Müller, Dirk
2014-10-01
One of the major ethical concerns regarding cost-effectiveness analysis in health care has been the inclusion of life-extension costs ("it is cheaper to let people die"). For this reason, many analysts have opted to rule out life-extension costs from the analysis. However, surprisingly little has been written in the health economics literature regarding this ethical concern and the resulting practice. The purpose of this work was to present a framework and potential solution for ethical objections against life-extension costs. This work found three levels of ethical concern: (i) with respect to all life-extension costs (disease-related and -unrelated); (ii) with respect to disease-unrelated costs only; and (iii) regarding disease-unrelated costs plus disease-related costs not influenced by the intervention. Excluding all life-extension costs for ethical reasons would require, for reasons of consistency, a simultaneous exclusion of savings from reducing morbidity. At the other extreme, excluding only disease-unrelated life-extension costs for ethical reasons would require, again for reasons of consistency, the exclusion of health gains due to treatment of unrelated diseases. Therefore, addressing ethical concerns regarding the inclusion of life-extension costs necessitates fundamental changes in the calculation of cost effectiveness.
Illán, Ignacio Alvarez; Górriz, Juan Manuel; Ramírez, Javier; Lang, Elmar W; Salas-Gonzalez, Diego; Puntonet, Carlos G
2012-11-01
This paper explores the importance of the latent symmetry of the brain in computer-aided systems for diagnosing Alzheimer's disease (AD). Symmetry and asymmetry are studied from two points of view: (i) the development of an effective classifier within the scope of machine learning techniques, and (ii) the assessment of its relevance to the AD diagnosis in the early stages of the disease. The proposed methodology is based on eigenimage decomposition of single-photon emission computed tomography images, using an eigenspace extension to accommodate odd and even eigenvectors separately. This feature extraction technique allows for support-vector-machine classification and image analysis. Identification of AD patterns is improved when the latent symmetry of the brain is considered, with an estimated 92.78% accuracy (92.86% sensitivity, 92.68% specificity) using a linear kernel and a leave-one-out cross validation strategy. Also, asymmetries may be used to define a test for AD that is very specific (90.24% specificity) but not especially sensitive. Two main conclusions are derived from the analysis of the eigenimage spectrum. Firstly, the recognition of AD patterns is improved when considering only the symmetric part of the spectrum. Secondly, asymmetries in the hypo-metabolic patterns, when present, are more pronounced in subjects with AD. Copyright © 2012 Elsevier B.V. All rights reserved.
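The core idea of separating symmetric and asymmetric image content can be illustrated with a simple even/odd decomposition about the mid-sagittal plane. This sketch assumes the image is already registered so that the mid-sagittal plane lies on the array midline; it does not reproduce the paper's eigenimage machinery.

```python
# Even/odd decomposition of a brain slice about the mid-sagittal plane:
# the symmetric (even) and asymmetric (odd) parts sum exactly to the
# original image. The slice here is a random stand-in.
import numpy as np

img = np.random.rand(64, 64)        # stand-in for one SPECT slice
mirrored = img[:, ::-1]             # reflect across the mid-sagittal plane

img_even = 0.5 * (img + mirrored)   # symmetric component
img_odd = 0.5 * (img - mirrored)    # asymmetric component

assert np.allclose(img, img_even + img_odd)  # exact decomposition
print("asymmetry energy fraction:",
      np.sum(img_odd**2) / np.sum(img**2))
```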
Strategies and Approaches to TPS Design
NASA Technical Reports Server (NTRS)
Kolodziej, Paul
2005-01-01
Thermal protection systems (TPS) insulate planetary probes and Earth re-entry vehicles from the aerothermal heating experienced during hypersonic deceleration to the planet's surface. The systems are typically designed with some additional capability to compensate for both variations in the TPS material and for uncertainties in the heating environment. This additional capability, or robustness, also provides a surge capability for operating under abnormal severe conditions for a short period of time, and for unexpected events, such as meteoroid impact damage, that would detract from the nominal performance. Strategies and approaches to developing robust designs must also minimize mass because an extra kilogram of TPS displaces one kilogram of payload. Because aircraft structures must be optimized for minimum mass, reliability-based design approaches for mechanical components exist that minimize mass. Adapting these existing approaches to TPS component design takes advantage of the extensive work, knowledge, and experience from nearly fifty years of reliability-based design of mechanical components. A Non-Dimensional Load Interference (NDLI) method for calculating the thermal reliability of TPS components is presented in this lecture and applied to several examples. A sensitivity analysis from an existing numerical simulation of a carbon phenolic TPS provides insight into the effects of the various design parameters, and is used to demonstrate how sensitivity analysis may be used with NDLI to develop reliability-based designs of TPS components.
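As a concrete illustration of load-interference reliability, the classical stress-strength formulation with independent normal distributions gives the probability that capability exceeds load in closed form. This shows the generic idea only; the non-dimensionalization specific to NDLI is not reproduced, and all numbers are hypothetical.

```python
# Classical load-interference (stress-strength) reliability sketch:
# probability that TPS thermal capability exceeds the applied thermal
# load when both are modeled as independent normals. Values are
# hypothetical; the specific NDLI formulation is not reproduced here.
from math import erf, sqrt

def interference_reliability(mu_cap, sd_cap, mu_load, sd_load):
    z = (mu_cap - mu_load) / sqrt(sd_cap**2 + sd_load**2)
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF at z

# Hypothetical bondline-temperature capability vs. predicted load (K).
print(f"R = {interference_reliability(600.0, 20.0, 520.0, 25.0):.5f}")
```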
Dametto, Andressa; Sperotto, Raul A; Adamski, Janete M; Blasi, Édina A R; Cargnelutti, Denise; de Oliveira, Luiz F V; Ricachenevsky, Felipe K; Fregonezi, Jeferson N; Mariath, Jorge E A; da Cruz, Renata P; Margis, Rogério; Fett, Janette P
2015-09-01
Rice productivity is largely affected by low temperature, which can be harmful throughout plant development, from germination to grain filling. Germination of indica rice cultivars under cold is slow and not uniform, resulting in irregular emergence and small plant population. To identify and characterize novel genes involved in cold tolerance during the germination stage, two indica rice genotypes (sister lines previously identified as cold-tolerant and cold-sensitive) were used in parallel transcriptomic analysis (RNAseq) under cold treatment (seeds germinating at 13 °C for 7 days). We detected 1,361 differentially expressed transcripts. Differences in gene expression found by RNAseq were confirmed for 11 selected genes using RT-qPCR. Biological processes enhanced in the cold-tolerant seedlings include: cell division and expansion (confirmed by anatomical sections of germinating seeds), cell wall integrity and extensibility, water uptake and membrane transport capacity, sucrose synthesis, generation of simple sugars, unsaturation of membrane fatty acids, wax biosynthesis, antioxidant capacity (confirmed by histochemical staining of H2O2), and hormone and Ca(2+)-signaling. The cold-sensitive seedlings respond to low temperature stress by increasing synthesis of HSPs and dehydrins, along with an enhanced ubiquitin/proteasome protein degradation pathway and polyamine biosynthesis. Our findings can be useful in future biotechnological approaches aimed at improving cold tolerance in indica rice. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Multiscale contact mechanics model for RF-MEMS switches with quantified uncertainties
NASA Astrophysics Data System (ADS)
Kim, Hojin; Huda Shaik, Nurul; Xu, Xin; Raman, Arvind; Strachan, Alejandro
2013-12-01
We introduce a multiscale model for contact mechanics between rough surfaces and apply it to characterize the force-displacement relationship for a metal-dielectric contact relevant for radio frequency micro-electromechanical system (MEMS) switches. We propose a mesoscale model to describe the history-dependent force-displacement relationships in terms of the surface roughness, the long-range attractive interaction between the two surfaces, and the repulsive interaction between contacting asperities (including elastic and plastic deformation). The inputs to this model are the experimentally determined surface topography and the Hamaker constant as well as the mechanical response of individual asperities obtained from density functional theory calculations and large-scale molecular dynamics simulations. The model captures non-trivial processes including the hysteresis during loading and unloading due to plastic deformation, yet it is computationally efficient enough to enable extensive uncertainty quantification and sensitivity analysis. We quantify how uncertainties and variability in the input parameters, both experimental and theoretical, affect the force-displacement curves during approach and retraction. In addition, a sensitivity analysis quantifies the relative importance of the various input quantities for the prediction of force-displacement during contact closing and opening. The resulting force-displacement curves with quantified uncertainties can be directly used in device-level simulations of micro-switches and enable the incorporation of atomic and mesoscale phenomena in predictive device-scale simulations.
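One ingredient of such asperity-based models, the elastic repulsion of a single contacting asperity, is commonly described by Hertzian contact, sketched below with illustrative values. The paper's full model (plasticity, adhesion, roughness statistics) is not reproduced here.

```python
# Single-asperity Hertzian contact sketch: the elastic repulsive
# ingredient that mesoscale rough-surface models sum over contacting
# asperities. Values are illustrative placeholders.
import numpy as np

def hertz_force(delta, R, E_star):
    """Elastic force (N) for indentation delta (m), asperity radius R (m),
    and effective modulus E* (Pa); zero when out of contact."""
    delta = np.maximum(delta, 0.0)
    return (4.0 / 3.0) * E_star * np.sqrt(R) * delta**1.5

R = 50e-9        # 50 nm asperity radius (hypothetical)
E_star = 80e9    # effective contact modulus (hypothetical)
delta = np.linspace(0.0, 5e-9, 6)
print(hertz_force(delta, R, E_star))
```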
Thirumala, Parthasarathy D; Thiagarajan, Karthy; Gedela, Satyanarayana; Crammond, Donald J; Balzer, Jeffrey R
2016-03-01
The 30 day stroke rate following carotid endarterectomy (CEA) ranges from 2% to 6%. Such periprocedural strokes are associated with a three-fold increased risk of mortality. Our primary aim was to determine the diagnostic accuracy of electroencephalogram (EEG) in predicting perioperative strokes through meta-analysis of existing literature. An extensive search for relevant literature was undertaken using PubMed and Web of Science databases. Studies were included after screening using predetermined criteria. Data were extracted and analyzed. Summary sensitivity, specificity and diagnostic odds ratio were obtained. Subgroup analysis of studies using eight or more EEG channels was done. Perioperative stroke rate for the cohort of 8765 patients was 1.75%. Pooled sensitivity and specificity of EEG changes in predicting these strokes were 52% (95% confidence interval [CI], 43-61%) and 84% (95% CI, 81-86%) respectively. Summary estimates of the subgroup were similar. The diagnostic odds ratio was 5.85 (95% CI, 3.71-9.22). For the observed stroke rate, the positive likelihood ratio was 3.25 while the negative predictive value was 98.99%. According to these results, patients with perioperative strokes have six times greater odds of experiencing an intraoperative change in EEG during CEA. EEG monitoring was found to be highly specific in predicting perioperative strokes after CEA. Copyright © 2015 Elsevier Ltd. All rights reserved.
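The quoted likelihood ratio and negative predictive value follow directly from the pooled sensitivity, specificity, and observed stroke rate, as this short computation reproduces.

```python
# Reproducing the summary diagnostic arithmetic from the pooled
# sensitivity (52%), specificity (84%), and stroke rate (1.75%).
sens, spec, prev = 0.52, 0.84, 0.0175

lr_pos = sens / (1.0 - spec)  # positive likelihood ratio
npv = (spec * (1.0 - prev)) / ((1.0 - sens) * prev + spec * (1.0 - prev))

print(f"LR+ = {lr_pos:.2f}")   # ~3.25
print(f"NPV = {npv:.2%}")      # ~98.99%
```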
Impaired temporal contrast sensitivity in dyslexics is specific to retain-and-compare paradigms.
Ben-Yehudah, G; Sackett, E; Malchi-Ginzberg, L; Ahissar, M
2001-07-01
Developmental dyslexia is a specific reading disability that affects 5-10% of the population. Recent studies have suggested that dyslexics may experience a deficit in the visual magnocellular pathway. The most extensively studied prediction deriving from this hypothesis is impaired contrast sensitivity to transient, low-luminance stimuli at low spatial frequencies. However, the findings are inconsistent across studies and even seemingly contradictory. In the present study, we administered several different paradigms for assessing temporal contrast sensitivity, and found both impaired and normal contrast sensitivity within the same group of dyslexic participants. Under sequential presentation, in a temporal forced choice paradigm, dyslexics showed impaired sensitivity to both drifting and flickering gratings. However, under simultaneous presentation, with a spatial forced choice paradigm, dyslexics' sensitivity did not differ from that of the controls. Within each paradigm, dyslexics' sensitivity was poorer at higher temporal frequencies, consistent with the magnocellular hypothesis. These results suggest that a basic perceptual impairment in dyslexics may be their limited ability to retain-and-compare perceptual traces across brief intervals.
The visual analysis of emotional actions.
Chouchourelou, Arieta; Matsuka, Toshihiko; Harber, Kent; Shiffrar, Maggie
2006-01-01
Is the visual analysis of human actions modulated by the emotional content of those actions? This question is motivated by a consideration of the neuroanatomical connections between visual and emotional areas. Specifically, the superior temporal sulcus (STS), known to play a critical role in the visual detection of action, is extensively interconnected with the amygdala, a center for emotion processing. To the extent that amygdala activity influences STS activity, one would expect to find systematic differences in the visual detection of emotional actions. A series of psychophysical studies tested this prediction. Experiment 1 identified point-light walker movies that convincingly depicted five different emotional states: happiness, sadness, neutral, anger, and fear. In Experiment 2, participants performed a walker detection task with these movies. Detection performance was systematically modulated by the emotional content of the gaits. Participants demonstrated the greatest visual sensitivity to angry walkers. The results of Experiment 3 suggest that local velocity cues to anger may account for high false alarm rates to the presence of angry gaits. These results support the hypothesis that the visual analysis of human action depends upon emotion processes.
NASA Technical Reports Server (NTRS)
Pliutau, Denis; Prasad, Narasimha S.
2012-01-01
In this paper a modeling method based on data reductions is investigated, which includes pre-analyzed MERRA atmospheric fields for quantitative estimates of uncertainties introduced in the integrated path differential absorption methods for the sensing of various molecules including CO2. This approach represents an extension of our previously developed lidar modeling framework and allows effective on- and offline wavelength optimizations and weighting function analysis to minimize interference effects such as those due to temperature sensitivity and water vapor absorption. The new simulation methodology differs from the previous implementation in that it allows analysis of atmospheric effects over annual spans and the entire Earth coverage, which was achieved due to the data reduction methods employed. The effectiveness of the proposed simulation approach is demonstrated with application to the mixing ratio retrievals for the future ASCENDS mission. Independent analysis of multiple accuracy limiting factors, including the temperature, water vapor interferences, and selected system parameters, is further used to identify favorable spectral regions as well as wavelength combinations facilitating the reduction in total errors in the retrieved XCO2 values.
Survey of visualization and analysis tools
NASA Technical Reports Server (NTRS)
Meyer, P. J.
1994-01-01
A large number of commercially available visualization and analysis tools are available to the researcher. Some of the strengths and limitations of some of these tools, from the viewpoint of the earth sciences discipline, are discussed. Visualization and analysis tools fall into one of two categories: those that are designed for a specific purpose and are not extensible, and those that are generic visual programming tools that are extensible. Most of the extensible packages examined incorporate a data flow paradigm.
Mapping Extension's Networks: Using Social Network Analysis to Explore Extension's Outreach
ERIC Educational Resources Information Center
Bartholomay, Tom; Chazdon, Scott; Marczak, Mary S.; Walker, Kathrin C.
2011-01-01
The University of Minnesota Extension conducted a social network analysis (SNA) to examine its outreach to organizations external to the University of Minnesota. The study found that its outreach network was both broad in its reach and strong in its connections. The study found that SNA offers a unique method for describing and measuring Extension…
Motion analysis study on sensitivity of finite element model of the cervical spine to geometry.
Zafarparandeh, Iman; Erbulut, Deniz U; Ozer, Ali F
2016-07-01
Numerous finite element models of the cervical spine have been proposed, with exact geometry or with a symmetric approximation of the geometry. However, few studies have investigated the sensitivity of predicted motion responses to the geometry of the cervical spine. The goal of this study was to evaluate the effect of the symmetric assumption on the motion predicted by a finite element model of the cervical spine. We developed two finite element models of the cervical spine C2-C7. One model was based on the exact geometry of the cervical spine (asymmetric model), whereas the other was symmetric (symmetric model) about the mid-sagittal plane. The predicted range of motion of both models, for main and coupled motions, was compared with published experimental data for all motion planes under a full range of loads. The maximum differences between the asymmetric model and symmetric model predictions for the principal motion were 31%, 78%, and 126% for flexion-extension, right-left lateral bending, and right-left axial rotation, respectively. For flexion-extension and lateral bending, the minimum difference was 0%, whereas it was 2% for axial rotation. The maximum coupled motions predicted by the symmetric model were 1.5° axial rotation and 3.6° lateral bending, under applied lateral bending and axial rotation, respectively. Those coupled motions predicted by the asymmetric model were 1.6° axial rotation and 4° lateral bending, under applied lateral bending and axial rotation, respectively. In general, the predicted motion response of the cervical spine by the symmetric model was in the acceptable range and the nonlinearity of the moment-rotation curve for the cervical spine was properly predicted. © IMechE 2016.
Jia, Yongliang; Leung, Siu-Wai
2017-09-01
More than 230 randomized controlled trials (RCTs) of danshen dripping pill (DSP) and isosorbide dinitrate (ISDN) in treating angina pectoris have been published since the first comprehensive meta-analysis, compliant with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), appeared in 2010. Other meta-analyses had flaws in study selection, statistical meta-analysis, and evidence assessment. This study completed the meta-analysis with an extensive assessment of the evidence. RCTs published from 1994 to 2016 on DSP and ISDN in treating angina pectoris for at least 4 weeks were included. The risk of bias (RoB) of included RCTs was assessed with the Cochrane tool for assessing RoB. Meta-analyses based on a random-effects model were performed on two outcome measures: symptomatic (SYM) and electrocardiography (ECG) improvements. Subgroup analysis, sensitivity analysis, metaregression, and publication bias analysis were also conducted. The evidence strength was evaluated with the Grades of Recommendation, Assessment, Development, and Evaluation (GRADE) method. Among the included 109 RCTs with 11,973 participants, 49 RCTs and 5042 participants were new (after 2010). The RoB of included RCTs was high in randomization and blinding. Overall effect sizes in odds ratios for DSP over ISDN were 2.94 (95% confidence interval [CI]: 2.53-3.41) on SYM (n = 108) and 2.37 (95% CI: 2.08-2.69) by ECG (n = 81) with significant heterogeneities (I² = 41%, p < 0.0001 on SYM and I² = 44%, p < 0.0001 on ECG). Subgroup, sensitivity, and metaregression analyses showed consistent results without publication bias. However, the evidence strength was low in GRADE. The efficacy of DSP was still better than ISDN in treating angina pectoris, but the confidence decreased due to high RoB and heterogeneities.
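A random-effects pooling of odds ratios of the kind reported above is commonly done with the DerSimonian-Laird estimator; the sketch below shows the mechanics on three invented trials, not on the included RCTs.

```python
# Minimal DerSimonian-Laird random-effects pooling sketch on the
# log-odds-ratio scale. The three trials below are invented for
# illustration only.
import numpy as np

log_or = np.log(np.array([2.5, 3.2, 2.8]))  # per-trial odds ratios (invented)
var = np.array([0.10, 0.15, 0.08])          # per-trial variances (invented)

w = 1.0 / var                               # fixed-effect weights
pooled_fe = np.sum(w * log_or) / np.sum(w)
Q = np.sum(w * (log_or - pooled_fe) ** 2)   # Cochran's Q
k = len(log_or)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1.0 / (var + tau2)                   # random-effects weights
pooled = np.sum(w_re * log_or) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))

print(f"pooled OR = {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - 1.96*se):.2f}-{np.exp(pooled + 1.96*se):.2f})")
print(f"I^2 = {max(0.0, (Q - (k - 1)) / Q):.0%}")
```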
Chip-Based Sensors for Disease Diagnosis
NASA Astrophysics Data System (ADS)
Fang, Zhichao
Nucleic acid analysis is one of the most important disease diagnostic approaches in medical practice, and has been commonly used in cancer biomarker detection, bacterial speciation and many other fields in the laboratory. Currently, powerful research methods for genetic analysis, including the polymerase chain reaction (PCR), DNA sequencing, and gene expression profiling using fluorescence microarrays, are not widely used in hospitals and extended-care units due to high cost, long detection times, and extensive sample preparation. Bioassays, especially chip-based electrochemical sensors, may be suitable for the next generation of rapid, sensitive, and multiplexed detection tools. Herein, we report three different microelectrode platforms with capabilities enabled by nano- and microtechnology: nanoelectrode ensembles (NEEs), nanostructured microelectrodes (NMEs), and hierarchical nanostructured microelectrodes (HNMEs), all of which are able to directly detect unpurified RNA in clinical samples without enzymatic amplification. Biomarkers relevant to clinical medicine in cancer and infectious disease were chosen as targets. Markers were successfully detected with clinically-relevant sensitivity. Using peptide nucleic acids (PNAs) as probes and an electrocatalytic reporter system, NEEs were able to detect prostate cancer-related gene fusions in tumor tissue samples with 100 ng of RNA. The development of NMEs improved the sensitivity of the assay further to 10 aM of DNA target, and multiplexed detection of RNA sequences of different prostate cancer-related gene fusion types was achieved on the chip-based NMEs platform. An HNMEs chip integrated with a bacterial lysis device was able to detect as few as 25 cfu bacteria in 30 minutes and monitor the detection in real time. Bacterial detection could also be performed in neat urine samples. The development of these versatile clinical diagnostic tools could be extended to the detection of various cancers, genetic, and infectious diseases.
van Rooij, Antonius J; Schoenmakers, Tim M; van de Mheen, Dike
2017-01-01
Clinicians struggle with the identification of video gaming problems. To address this issue, a clinical assessment tool (C-VAT 2.0) was developed and tested in a clinical setting. The instrument allows exploration of the validity of the DSM-5 proposal for 'internet gaming disorder'. Using C-VAT 2.0, the current study provides a sensitivity analysis of the proposed DSM-5 criteria in a clinical youth sample (13-23 years old) in treatment for video gaming disorder (N=32). The study also explores the clinical characteristics of these patients. The patients were all male and reported spending extensive amounts of time on video games. At least half of the patients reported playing online games (n=15). Comorbid problems were common (n=22) and included (social) anxiety disorders, PDD NOS, ADHD/ADD, Parent-Child relationship problem, and various types of depressive mood problems. The sensitivity of the test was good: results further show that the C-VAT correctly identified 91% of the sample at the proposed cut-off score of at least 5 out of 9 of the criteria. As our study did not include healthy, extreme gamers, we could not assess the specificity of the tool: future research should make this a priority. Using the proposed DSM-5 cut-off score, the C-VAT 2.0 shows preliminary validity in a sample of gamers in treatment for gaming disorder, but the discriminating value of the instrument should be studied further. In the meantime, it is crucial that therapists try to avoid false positives by using expert judgment of functional impairment in each case. Copyright © 2015 Elsevier Ltd. All rights reserved.
Medialization thyroplasty versus injection laryngoplasty: a cost minimization analysis.
Tam, Samantha; Sun, Hongmei; Sarma, Sisira; Siu, Jennifer; Fung, Kevin; Sowerby, Leigh
2017-02-20
Medialization thyroplasty and injection laryngoplasty are widely accepted treatment options for unilateral vocal fold paralysis. Although both procedures result in similar clinical outcomes, little is known about the corresponding medical care costs. Medialization thyroplasty requires expensive operating room resources while injection laryngoplasty utilizes outpatient resources but may require repeated procedures. The purpose of this study, therefore, is to quantify the cost differences in adult patients with unilateral vocal fold paralysis undergoing medialization thyroplasty versus injection laryngoplasty. A cost minimization analysis was conducted using a decision tree model. The decision tree was constructed to capture clinical scenarios for medialization thyroplasty and injection laryngoplasty. Probabilities for various events were obtained from a retrospective cohort from the London Health Sciences Centre, Canada. Costs were derived from the published literature and the London Health Sciences Centre. All costs were reported in 2014 Canadian dollars. The time horizon was 5 years. The study was conducted from an academic hospital perspective in Canada. Various sensitivity analyses were conducted to assess differences in procedure-specific costs and probabilities of key events. Sixty-three patients underwent medialization thyroplasty and 41 underwent injection laryngoplasty. The cost of medialization thyroplasty was C$2499.10 per patient, whereas treatment with injection laryngoplasty cost C$943.19. Results showed that cost savings with injection laryngoplasty were C$1555.91. Deterministic and probabilistic sensitivity analyses suggested cost savings ranged from C$596 to C$3626. Treatment with injection laryngoplasty results in cost savings of C$1555.91 per patient. Our extensive sensitivity analyses suggest that switching from medialization thyroplasty to injection laryngoplasty will lead to a minimum cost savings of C$596 per patient. Considering the significant cost savings and similar effectiveness, injection laryngoplasty should be strongly considered as a preferred treatment option for patients diagnosed with unilateral vocal fold paralysis.
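The decision-tree logic reduces to comparing expected per-patient costs across strategies. The sketch below uses the study's headline cost for medialization thyroplasty but invented per-injection costs and repeat probabilities, purely to show the structure of the calculation.

```python
# Cost-minimization sketch: expected cost of an injection-laryngoplasty
# strategy that may need repeat injections versus a single medialization
# thyroplasty. Repeat probabilities and per-injection cost are invented
# placeholders; only the MT cost is taken from the abstract.
mt_cost = 2499.10                # C$ per patient (from the study)

il_procedure_cost = 550.0        # C$ per injection (hypothetical)
p_injection = [1.0, 0.5, 0.2]    # chance of needing a 1st/2nd/3rd injection

il_expected = sum(p * il_procedure_cost for p in p_injection)
print(f"Expected IL cost: C${il_expected:.2f}")
print(f"Expected savings: C${mt_cost - il_expected:.2f}")
```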
Lao, Yexing; Yang, Cuiping; Zou, Wei; Gan, Manquan; Chen, Ping; Su, Weiwei
2012-05-01
The cryptand Kryptofix 2.2.2 is used extensively as a phase-transfer reagent in the preparation of [18F]fluoride-labelled radiopharmaceuticals. However, it has considerable acute toxicity. The aim of this study was to develop and validate a method for rapid (within 1 min), specific and sensitive quantification of Kryptofix 2.2.2 at trace levels. Chromatographic separations were carried out by rapid-resolution liquid chromatography (Agilent ZORBAX SB-C18 rapid-resolution column, 2.1 × 30 mm, 3.5 μm). Tandem mass spectra were acquired using a triple quadrupole mass spectrometer equipped with an electrospray ionization interface. Quantitative mass spectrometric analysis was conducted in positive ion mode and multiple reaction monitoring mode for the m/z 377.3 → 114.1 transition for Kryptofix 2.2.2. The external standard method was used for quantification. The method met the precision and efficiency requirements for PET radiopharmaceuticals, providing satisfactory results for specificity, matrix effect, stability, linearity (0.5-100 ng/ml, r² = 0.9975), precision (coefficient of variation < 5%), accuracy (relative error < ± 3%), sensitivity (lower limit of quantification = 0.5 ng/ml) and detection time (<1 min). Fluorodeoxyglucose (n=6) was analysed, and the Kryptofix 2.2.2 content was found to be well below the maximum permissible levels approved by the US Food and Drug Administration. The developed method has a short analysis time (<1 min) and high sensitivity (lower limit of quantification = 0.5 ng/ml) and can be successfully applied to rapid quantification of Kryptofix 2.2.2 at trace levels in fluorodeoxyglucose. This method could also be applied to other [18F]fluorine-labelled radiopharmaceuticals that use Kryptofix 2.2.2 as a phase-transfer reagent.
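External-standard quantification of this kind rests on a linear calibration curve fitted to standards of known concentration and inverted for unknowns; the sketch below uses invented peak areas to show the arithmetic.

```python
# External-standard calibration sketch: fit a linear calibration from
# standards of known Kryptofix 2.2.2 concentration, then invert it for
# an unknown sample. Peak areas below are invented.
import numpy as np

conc = np.array([0.5, 2.0, 10.0, 50.0, 100.0])             # ng/ml standards
area = np.array([120.0, 470.0, 2350.0, 11800.0, 23500.0])  # MRM peak areas

slope, intercept = np.polyfit(conc, area, 1)
r2 = np.corrcoef(conc, area)[0, 1] ** 2

unknown_area = 1500.0
print(f"r^2 = {r2:.4f}")
print(f"estimated concentration = {(unknown_area - intercept) / slope:.2f} ng/ml")
```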
Different Imaging Strategies in Patients With Possible Basilar Artery Occlusion
Beyer, Sebastian E.; Hunink, Myriam G.; Schöberl, Florian; von Baumgarten, Louisa; Petersen, Steffen E.; Dichgans, Martin; Janssen, Hendrik; Ertl-Wagner, Birgit; Reiser, Maximilian F.
2015-01-01
Background and Purpose— This study evaluated the cost-effectiveness of different noninvasive imaging strategies in patients with possible basilar artery occlusion. Methods— A Markov decision analytic model was used to evaluate long-term outcomes resulting from strategies using computed tomographic angiography (CTA), magnetic resonance imaging, nonenhanced CT, or duplex ultrasound with intravenous (IV) thrombolysis being administered after positive findings. The analysis was performed from the societal perspective based on US recommendations. Input parameters were derived from the literature. Costs were obtained from United States costing sources and published literature. Outcomes were lifetime costs, quality-adjusted life-years (QALYs), incremental cost-effectiveness ratios, and net monetary benefits, with a willingness-to-pay threshold of $80 000 per QALY. The strategy with the highest net monetary benefit was considered the most cost-effective. Extensive deterministic and probabilistic sensitivity analyses were performed to explore the effect of varying parameter values. Results— In the reference case analysis, CTA dominated all other imaging strategies. CTA yielded 0.02 QALYs more than magnetic resonance imaging and 0.04 QALYs more than duplex ultrasound followed by CTA. At a willingness-to-pay threshold of $80 000 per QALY, CTA yielded the highest net monetary benefits. The probability that CTA is cost-effective was 96% at a willingness-to-pay threshold of $80 000/QALY. Sensitivity analyses showed that duplex ultrasound was cost-effective only for a prior probability of ≤0.02 and that these results were only minimally influenced by duplex ultrasound sensitivity and specificity. Nonenhanced CT and magnetic resonance imaging never became the most cost-effective strategy. Conclusions— Our results suggest that CTA in patients with possible basilar artery occlusion is cost-effective. PMID:26022634
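The ranking criterion used here, net monetary benefit, is simple arithmetic: NMB = willingness-to-pay × QALYs − cost, maximized across strategies. The sketch below illustrates this with placeholder QALY and cost values, not the study's inputs.

```python
# Net-monetary-benefit arithmetic used to rank imaging strategies:
# NMB = WTP * QALYs - cost, highest NMB preferred. QALY and cost
# values are invented placeholders.
WTP = 80_000  # willingness to pay, $ per QALY

strategies = {  # (lifetime QALYs, lifetime cost $) - invented
    "CTA": (10.50, 60_000),
    "MRI": (10.48, 61_000),
    "DUS followed by CTA": (10.46, 59_500),
}

for name, (qalys, cost) in strategies.items():
    print(f"{name:>20}: NMB = ${WTP * qalys - cost:,.0f}")
```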
Experimental droughts with rainout shelters: A methodological review
USDA-ARS?s Scientific Manuscript database
Forecast increases in the frequency, intensity and duration of droughts with climate change may have extreme and extensive ecological consequences. There are currently hundreds of published, ongoing and new drought experiments worldwide aimed to assess ecosystem sensitivities to drought and identify...
COPPER PITTING AND PINHOLE LEAK RESEARCH STUDY
Localized copper corrosion or pitting is a significant problem at many water utilities across the United States. Copper pinhole leak problems resulting from extensive pitting are widely under reported. Given the sensitive nature of the problem, extent of damage possible, costs o...
Workshop Report: Juvenile toxicity testing protocols for chemicals
There is increased awareness of the specific position of children when it comes to hazards of xenobiotic exposures. Children are not small adults, since their exposure patterns, compound kinetics and metabolism, and sensitivity of their developing organs may differ extensively fr...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brauer, Carolyn S.; Pearson, John C.; Drouin, Brian J.
The spectrum of ethyl cyanide, or propionitrile (CH3CH2CN), has been repeatedly observed in the interstellar medium with large column densities and surprisingly high temperatures in hot core sources. The construction of new, more sensitive observatories accessing higher frequencies, such as Herschel, ALMA, and SOFIA, has made it important to extend the laboratory data for ethyl cyanide to coincide with the capabilities of the new instruments. We report extensions of the laboratory measurements of the rotational spectrum of ethyl cyanide in its ground vibrational state to 1.6 THz. A global analysis of the ground state, which includes all of the previous data and 3356 newly assigned transitions, has been fitted to within experimental error to J = 132, K = 36, using both Watson A-reduced and Watson S-reduced Hamiltonians.
The AC-120: The advanced commercial transport
NASA Technical Reports Server (NTRS)
Duran, David; Griffin, Ernest; Mendoza, Saul; Nguyen, Son; Pickett, Tim; Noernberg, Clemm
1993-01-01
The main objective of this design was to fulfill a need for a new airplane to replace the aging 100 to 150 passenger, 1500 nautical mile range aircraft such as the Douglas DC9 and Boeing 737-100 airplanes. After researching the future aircraft market, conducting extensive trade studies, and analyzing different configurations, the AC-120 Advanced Commercial Transport final design was achieved. The AC-120's main design features include the incorporation of a three lifting surface configuration which is powered by two turboprop engines. The AC-120 is an economically sensitive aircraft which meets the new FAR Stage Three noise requirements, and has lower NO(x) emissions than current turbofan powered airplanes. The AC-120 also improves on its contemporaries in passenger comfort, manufacturing, and operating cost.
Quantifying Drosophila food intake: comparative analysis of current methodology
Deshpande, Sonali A.; Carvalho, Gil B.; Amador, Ariadna; Phillips, Angela M.; Hoxha, Sany; Lizotte, Keith J.; Ja, William W.
2014-01-01
Food intake is a fundamental parameter in animal studies. Despite the prevalent use of Drosophila in laboratory research, precise measurements of food intake remain challenging in this model organism. Here, we compare several common Drosophila feeding assays: the Capillary Feeder (CAFE), food-labeling with a radioactive tracer or a colorimetric dye, and observations of proboscis extension (PE). We show that the CAFE and radioisotope-labeling provide the most consistent results, have the highest sensitivity, and can resolve differences in feeding that dye-labeling and PE fail to distinguish. We conclude that performing the radiolabeling and CAFE assays in parallel is currently the best approach for quantifying Drosophila food intake. Understanding the strengths and limitations of food intake methodology will greatly advance Drosophila studies of nutrition, behavior, and disease. PMID:24681694
Spatial-time-state fusion algorithm for defect detection through eddy current pulsed thermography
NASA Astrophysics Data System (ADS)
Xiao, Xiang; Gao, Bin; Woo, Wai Lok; Tian, Gui Yun; Xiao, Xiao Ting
2018-05-01
Eddy Current Pulsed Thermography (ECPT) has received extensive attention due to its high sensitivity in detecting surface and subsurface cracks. However, unsupervised detection, that is, identifying defects without any prior knowledge, remains a difficult challenge. This paper presents a spatial-time-state feature fusion algorithm to obtain a full profile of the defects by directional scanning. The proposed method conducts feature extraction using independent component analysis (ICA) and automatic feature selection embedding a genetic algorithm. Finally, the optimal feature of each step is fused to reconstruct the defects by applying the common orthogonal basis extraction (COBE) method. Experiments have been conducted to validate the study and verify the efficacy of the proposed method in blind defect detection.
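The ICA stage of such a pipeline treats each pixel's temperature-time response as a mixture of latent sources. A minimal sketch using scikit-learn's FastICA on a synthetic frame stack is given below; the genetic-algorithm feature selection and COBE fusion stages are not reproduced.

```python
# ICA sketch for thermography: unmix an image sequence into independent
# components from which defect-relevant features can be selected. The
# frame stack here is synthetic, not real ECPT data.
import numpy as np
from sklearn.decomposition import FastICA

n_frames, h, w = 100, 32, 32
frames = np.random.rand(n_frames, h, w)     # stand-in ECPT sequence

X = frames.reshape(n_frames, h * w).T       # pixels x frames
ica = FastICA(n_components=5, random_state=0)
sources = ica.fit_transform(X)              # pixels x components

component_maps = sources.T.reshape(5, h, w) # candidate defect maps
print(component_maps.shape)
```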
Event-specific real-time detection and quantification of genetically modified Roundup Ready soybean.
Huang, Chia-Chia; Pan, Tzu-Ming
2005-05-18
The event-specific real-time detection and quantification of Roundup Ready soybean (RRS) using an ABI PRISM 7700 sequence detection system with light upon extension (LUX) primer was developed in this study. The event-specific primers were designed, targeting the junction of the RRS 5' integration site and the endogenous gene lectin1. Then, a standard reference plasmid was constructed that carried both of the targeted sequences for quantitative analysis. The detection limit of the LUX real-time PCR system was 0.05 ng of 100% RRS genomic DNA, which was equal to 20.5 copies. The range of quantification was from 0.1 to 100%. The sensitivity and range of quantification successfully met the requirement of the labeling rules in the European Union and Taiwan.
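The quoted equivalence of 0.05 ng and 20.5 genome copies is recoverable by dividing the DNA mass by the mass of one genome copy; the sketch below assumes a diploid (2C) soybean genome mass of about 2.44 pg, which reproduces the abstract's figure.

```python
# Back-of-envelope copy-number arithmetic behind the quoted detection
# limit. The 2C soybean genome mass of ~2.44 pg is an assumption made
# here; with it, the abstract's 20.5 copies is recovered.
dna_mass_pg = 0.05 * 1000   # 0.05 ng expressed in pg
genome_mass_pg = 2.44       # assumed mass per (diploid) genome copy, pg

print(f"copies = {dna_mass_pg / genome_mass_pg:.1f}")  # ~20.5
```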
VizieR Online Data Catalog: Catalogue of HI maps of galaxies. I. (Martin 1998)
NASA Astrophysics Data System (ADS)
Martin, M. C.
1998-03-01
A catalogue is presented of galaxies having large-scale observations in the HI line. This catalogue collects from the literature the information that characterizes the observations in the 21-cm line and the way that these data were presented by means of maps, graphics and tables, for showing the distribution and kinematics of the gas. Furthermore, it contains a measure of the HI extension that is detected at the level of the maximum sensitivity reached in the observations. This catalogue is intended as a guide for references on the HI maps published in the literature from 1953 to 1995 and is the basis for the analysis of the data presented in Paper II (Cat.
NASA Astrophysics Data System (ADS)
Hofer, L.; Lasi, D.; Tulej, M.; Wurz, P.; Cabane, M.; Cosica, D.; Gerasimov, M.; Rodinov, D.
2013-09-01
In preparation for the Russian Luna-Glob and Luna-Resurs missions we combined our compact time-of-flight mass spectrometer (TOF-MS) with a chemical pre-separation of the species by gas chromatography (GC). Combined measurements with both instruments were successfully performed with the laboratory prototype of the mass spectrometer and a flight-like gas chromatograph. Due to its capability to record mass spectra over the full mass range at once with high sensitivity and a dynamic range of up to 10^6 within 1 s, the TOF-MS system is a valuable extension of the GC analysis. The combined GC-MS complex is able to detect concentrations of volatile species in the sample of about 2·10^-9 by mass.
NASA Astrophysics Data System (ADS)
Shafiq, Natis
Energy transfer (ET) based sensitization of silicon (Si) using proximal nanocrystal quantum dots (NQDs) has been studied extensively in recent years as a means to develop thin and flexible Si based solar cells. The driving force for this research activity is a reduction in materials cost. To date, the main method for determining the role of ET in sensitizing Si has been optical spectroscopic studies. The quantitative contribution from two modes of ET (namely, nonradiative and radiative) has been reported using time-resolved photoluminescence (TRPL) spectroscopy coupled with extensive theoretical modelling. Thus, optical techniques have established the potential for utilizing ET based sensitization of Si as a feasible way to develop novel NQD-Si hybrid solar cells. However, the ultimate measure of the efficiency of ET-based mechanisms is the generation of electron-hole pairs by the impinging photons. It is therefore important to perform electrical measurements. However, only a couple of studies have attempted electrical quantification of ET modes. A few studies have focused on photocurrent measurements, without considering industrially relevant photovoltaic (PV) systems. Therefore, there is a need to develop a systematic approach for the electrical quantification of ET-generated charges and to help engineer new PV architectures optimized for harnessing the full advantages of ET mechanisms. Within this context, the work presented in this dissertation aims to develop an experimental testing protocol that can be applied to different PV structures for quantifying ET contributions from electrical measurements. We fabricated bulk Si solar cells (SCs) as a test structure and utilized CdSe/ZnS NQDs for ET based sensitization. The NQD-bulk Si hybrid devices showed ~30% PV enhancement after NQD deposition. We measured external quantum efficiency (EQE) of these devices to quantify ET-generated charges. Reflectance measurements were also performed to decouple contributions of intrinsic optical effects (i.e., anti-reflection) from NQD mediated ET processes. Our analysis indicates that the contribution of ET-generated charges cannot be detected by EQE measurements. Instead, changes in the optical properties (i.e., anti-reflection property) due to the NQD layer are found to be the primary source of the photocurrent enhancement. Based on this finding, we propose to minimize bulk Si absorption by using an ultrathin (~300 nm) Si PV architecture which should enable measurements of ET-generated charges. We describe an optimized process flow for fabricating such ultrathin Si devices. The devices fabricated by this method behave like photo-detectors and show enhanced sensitivity under 1 Sun AM1.5G illumination. The geometry and process flow of these devices make it possible to incorporate NQDs for sensitization. Overall, this dissertation provides a protocol for the quantification of ET-generated charges and documents an optimized process flow for the development of ultrathin Si solar cells.
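EQE itself is the ratio of collected electrons to incident photons at a given wavelength, computed from the photocurrent and incident optical power; the short sketch below shows the arithmetic with illustrative values.

```python
# External quantum efficiency arithmetic: collected electrons per
# incident photon at a given wavelength. Input values are illustrative,
# not from the devices described above.
from scipy.constants import h, c, e

def eqe(photocurrent_A, power_W, wavelength_m):
    electrons_per_s = photocurrent_A / e
    photons_per_s = power_W * wavelength_m / (h * c)
    return electrons_per_s / photons_per_s

print(f"EQE = {eqe(1.2e-6, 5e-6, 550e-9):.2%}")
```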
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gustafsson, Åsa, E-mail: asa.gustafsson@foi.se; Dept of Public Health and Clinical Medicine, Umeå University; Bergström, Ulrika
The aim of this study was to investigate the inflammatory and immunological responses in airways and lung-draining lymph nodes (LDLNs), following lung exposure to iron oxide (hematite) nanoparticles (NPs). The responses to the hematite NPs were evaluated in both healthy non-sensitized mice, and in sensitized mice with an established allergic airway disease. The mice were exposed intratracheally to either hematite NPs or to vehicle (PBS) and the cellular responses were evaluated on days 1, 2, and 7, post-exposure. Exposure to hematite NPs increased the numbers of neutrophils, eosinophils, and lymphocytes in the airways of non-sensitized mice on days 1 and 2 post-exposure; at these time points the number of lymphocytes was also elevated in the LDLNs. In contrast, exposing sensitized mice to hematite NPs induced a rapid and unspecific cellular reduction in the alveolar space on day 1 post-exposure; a similar decrease of lymphocytes was also observed in the LDLN. The results indicate that cells in the airways and in the LDLN of individuals with established airway inflammation undergo cell death when exposed to hematite NPs. A possible explanation for this toxic response is the extensive generation of reactive oxygen species (ROS) in the pro-oxidative environment of inflamed airways. This study demonstrates how sensitized and non-sensitized mice respond differently to hematite NP exposure, and it highlights the importance of including individuals with respiratory disorders when evaluating health effects of inhaled nanomaterials. Highlights: • Hematite NPs induce differential responses in airways of healthy and allergic mice. • Hematite induced an airway inflammation in healthy mice. • Hematite induced cellular reduction in the alveolus and lymph nodes of allergic mice. • Cell death is possibly due to the extensive pro-oxidative environment in allergic mice. • It is important to include sensitive individuals when evaluating health effects of NPs.
Al Khatib, Haya K; Hall, Wendy L; Creedon, Alice; Ooi, Emily; Masri, Tala; McGowan, Laura; Harding, Scott V; Darzi, Julia; Pot, Gerda K
2018-01-01
Background: Evidence suggests that short sleep duration may be a newly identified modifiable risk factor for obesity, yet there is a paucity of studies to investigate this. Objective: We assessed the feasibility of a personalized sleep extension protocol in adults aged 18–64 y who are habitually short sleepers (5 to <7 h), with sleep primarily measured by wrist actigraphy. In addition, we collected pilot data to assess the effects of extended sleep on dietary intake and quality measured by 7-d food diaries, resting and total energy expenditure, physical activity, and markers of cardiometabolic health. Design: Forty-two normal-weight healthy participants who were habitually short sleepers completed this free-living, 4-wk, parallel-design randomized controlled trial. The sleep extension group (n = 21) received a behavioral consultation session targeting sleep hygiene. The control group (n = 21) maintained habitual short sleep. Results: Rates of participation, attrition, and compliance were 100%, 6.5%, and 85.7%, respectively. The sleep extension group significantly increased time in bed [0:55 hours:minutes (h:mm); 95% CI: 0:37, 1:12 h:mm], sleep period (0:47 h:mm; 95% CI: 0:29, 1:05 h:mm), and sleep duration (0:21 h:mm; 95% CI: 0:06, 0:36 h:mm) compared with the control group. Sleep extension led to reduced intake of free sugars (–9.6 g; 95% CI: –16.0, –3.1 g) compared with control (0.7 g; 95% CI: –5.7, 7.2 g) (P = 0.042). A sensitivity analysis in plausible reporters showed that the sleep extension group reduced intakes of fat (percentage), carbohydrates (grams), and free sugars (grams) in comparison to the control group. There were no significant differences between groups in markers of energy balance or cardiometabolic health. Conclusions: We showed the feasibility of extending sleep in adult short sleepers. Sleep extension led to reduced free sugar intakes and may be a viable strategy to facilitate limiting excessive consumption of free sugars in an obesity-promoting environment. This trial was registered at www.clinicaltrials.gov as NCT02787577. PMID:29381788
Optimization of the High-Frequency Radar Sites in the Bering Strait Region
2015-02-01
…and Daley 2000; Köhl and Stammer 2004) and was used extensively in dynamical sensitivity studies (e.g., Marotzke et al. 1999; Losch and Heimbach 2007). …modeling (Winsor and Chapman 2004; Spall 2007; Watanabe and Hasumi 2009) studies suggest that Pacific waters enter the open AO through two pathways (Fig. …). …S = Ŵ^{1/2}DBQ^T (Eq. 11) is the sensitivity matrix (e.g., Köhl and Stammer 2004). By electing the trace of the covariance matrix as its norm, we employ the…
Characterization and limits of a cold-atom Sagnac interferometer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gauguet, A.; Canuel, B.; Leveque, T.
2009-12-15
We present the full evaluation of a cold-atom gyroscope based on atom interferometry. We have performed extensive studies to determine the systematic errors, scale factor and sensitivity. We demonstrate that the acceleration noise can be efficiently removed from the rotation signal, allowing us to reach the fundamental limit of the quantum projection noise for short-term measurements. The technical limits to the long-term sensitivity and accuracy have been identified, clearing the way for the next generation of ultrasensitive atom gyroscopes.
Heibeck, Tyler H.; Ding, Shi-Jian; Opresko, Lee K.; Zhao, Rui; Schepmoes, Athena A.; Yang, Feng; Tolmachev, Aleksey V.; Monroe, Matthew E.; Camp, David G.; Smith, Richard D.; Wiley, H. Steven; Qian, Wei-Jun
2010-01-01
Protein tyrosine phosphorylation represents a central regulatory mechanism in cell signaling. Here we present an extensive survey of tyrosine phosphorylation sites in a normal-derived human mammary epithelial cell line (HMEC) by applying anti-phosphotyrosine peptide immunoaffinity purification coupled with high-sensitivity capillary liquid chromatography tandem mass spectrometry. A total of 481 tyrosine phosphorylation sites (covered by 716 unique peptides) from 285 proteins were confidently identified in HMEC following the analysis of both the basal condition and acute stimulation with epidermal growth factor (EGF). The estimated false discovery rate was 1.0%, as determined by searching against a scrambled database. Comparison of these data with the existing literature showed significant agreement for previously reported sites. However, we observed 281 sites that were not previously reported for HMEC cultures, 29 of which have not been reported for any human cell or tissue system. The analysis showed that the majority of highly phosphorylated proteins were of relatively low abundance. Large differences in phosphorylation stoichiometry for sites within the same protein were also observed, raising the possibility of more important functional roles for such highly phosphorylated pTyr sites. By mapping to major signaling networks, such as the EGF receptor and insulin growth factor-1 receptor signaling pathways, many known proteins involved in these pathways were revealed to be tyrosine phosphorylated, providing interesting targets for future hypothesis-driven and targeted quantitative studies involving tyrosine phosphorylation in HMEC or other human systems. PMID:19534553
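The scrambled-database estimate mentioned above follows the usual target-decoy arithmetic; a minimal sketch (the decoy count of 7 is invented merely to reproduce the reported ~1.0%):

```python
def decoy_fdr(n_target_hits: int, n_decoy_hits: int) -> float:
    """Estimate FDR from a search against a scrambled (decoy) database.

    Assumes the decoy hit count approximates the number of false target
    hits, the simplest form of the target-decoy estimator.
    """
    return n_decoy_hits / n_target_hits

# Illustrative numbers only: ~1.0% FDR for 716 unique phosphopeptides
# would correspond to roughly 7 decoy matches.
print(f"{decoy_fdr(716, 7):.1%}")
```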
Quantitative analysis on the urban flood mitigation effect by the extensive green roof system.
Lee, J Y; Moon, H J; Kim, T I; Kim, H W; Han, M Y
2013-10-01
Extensive green-roof systems are expected to have a synergetic effect in mitigating urban runoff, decreasing temperature and supplying water to a building. Mitigation of runoff through rainwater retention requires the effective design of a green-roof catchment. This study identified how to improve building runoff mitigation through quantitative analysis of an extensive green-roof system. Quantitative analysis of green-roof runoff characteristics indicated that the extensive green roof has a high water-retaining capacity in response to rainfall of less than 20 mm/h. As the rainfall intensity increased, the water-retaining capacity decreased. The catchment efficiency of an extensive green roof ranged from 0.44 to 0.52, indicating reduced runoff compared with the 0.9 efficiency of a concrete roof. Therefore, extensive green roofs are an effective storm-water best-management practice, and the proposed parameters can be applied to an algorithm for rainwater-harvesting tank design. © 2013 Elsevier Ltd. All rights reserved.
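A short sketch of how the reported catchment efficiencies translate into runoff volumes for rainwater-tank sizing; the roof area and rainfall depth below are hypothetical:

```python
def roof_runoff(rain_mm: float, area_m2: float, efficiency: float) -> float:
    """Runoff volume (litres) reaching the drain: 1 mm over 1 m^2 = 1 L."""
    return rain_mm * area_m2 * efficiency

area = 500.0   # hypothetical roof area, m^2
rain = 15.0    # rainfall depth, below the ~20 mm/h retention threshold

green    = roof_runoff(rain, area, 0.48)   # mid-range of reported 0.44-0.52
concrete = roof_runoff(rain, area, 0.90)   # reported concrete-roof efficiency

print(f"green roof: {green:.0f} L, concrete roof: {concrete:.0f} L, "
      f"retained difference: {concrete - green:.0f} L")
```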
Turner, Andrew; Sasse, Jurgen; Varadi, Aniko
2016-10-19
Inherited disorders of haemoglobin are the world's most common genetic diseases, resulting in significant morbidity and mortality. The large number of mutations associated with the haemoglobin beta gene (HBB) makes gene scanning by High Resolution Melting (HRM) PCR an attractive diagnostic approach. However, existing HRM-PCR assays are not able to detect all common point mutations and have only a very limited ability to detect larger gene rearrangements. The aim of the current study was to develop a HBB assay, which can be used as a screening test in highly heterogeneous populations, for detection of both point mutations and larger gene rearrangements. The assay is based on a combination of conventional HRM-PCR and a novel Gene Ratio Analysis Copy Enumeration (GRACE) PCR method. HRM-PCR was extensively optimised, which included the use of an unlabelled probe and incorporation of universal bases into primers to prevent interference from common non-pathological polymorphisms. GRACE-PCR was employed to determine HBB gene copy numbers relative to a reference gene using melt curve analysis to detect rearrangements in the HBB gene. The performance of the assay was evaluated by analysing 410 samples. A total of 44 distinct pathological genotypes were detected. In comparison with reference methods, the assay has a sensitivity of 100 % and a specificity of 98 %. We have developed an assay that detects both point mutations and larger rearrangements of the HBB gene. This assay is quick, sensitive, specific and cost effective making it suitable as an initial screening test that can be used for highly heterogeneous cohorts.
Design and Analysis of an X-Ray Mirror Assembly Using the Meta-Shell Approach
NASA Technical Reports Server (NTRS)
McClelland, Ryan S.; Bonafede, Joseph; Saha, Timo T.; Solly, Peter M.; Zhang, William W.
2016-01-01
Lightweight and high resolution optics are needed for future space-based x-ray telescopes to achieve advances in high-energy astrophysics. Past missions such as Chandra and XMM-Newton have achieved excellent angular resolution using a full shell mirror approach. Other missions such as Suzaku and NuSTAR have achieved lightweight mirrors using a segmented approach. This paper describes a new approach, called meta-shells, which combines the fabrication advantages of segmented optics with the alignment advantages of full shell optics. Meta-shells are built by layering overlapping mirror segments onto a central structural shell. The resulting optic has the stiffness and rotational symmetry of a full shell, but with an order of magnitude greater collecting area. Several meta-shells so constructed can be integrated into a large x-ray mirror assembly by proven methods used for Chandra and XMM-Newton. The mirror segments are mounted to the meta-shell using a novel four point semi-kinematic mount. The four point mount deterministically locates the segment in its most performance sensitive degrees of freedom. Extensive analysis has been performed to demonstrate the feasibility of the four point mount and meta-shell approach. A mathematical model of a meta-shell constructed with mirror segments bonded at four points and subject to launch loads has been developed to determine the optimal design parameters, namely bond size, mirror segment span, and number of layers per meta-shell. The parameters of an example 1.3 m diameter mirror assembly are given including the predicted effective area. To verify the mathematical model and support opto-mechanical analysis, a detailed finite element model of a meta-shell was created. Finite element analysis predicts low gravity distortion and low sensitivity to thermal gradients.
Advances in targeted proteomics and applications to biomedical research
Shi, Tujin; Song, Ehwang; Nie, Song; Rodland, Karin D.; Liu, Tao; Qian, Wei-Jun; Smith, Richard D.
2016-01-01
Targeted proteomics has emerged as a powerful protein quantification tool in systems biology and biomedical research, and increasingly for clinical applications. The most widely used targeted proteomics approach, selected reaction monitoring (SRM), also known as multiple reaction monitoring (MRM), can be used for quantification of cellular signaling networks and preclinical verification of candidate protein biomarkers. As an extension to our previous review on advances in SRM sensitivity, herein we review recent advances in the method and technology for further enhancing SRM sensitivity (from 2012 to present), highlighting its broad biomedical applications in human bodily fluids, tissue and cell lines. Furthermore, we also review two recently introduced targeted proteomics approaches, parallel reaction monitoring (PRM) and data-independent acquisition (DIA) with targeted data extraction on fast scanning high-resolution accurate-mass (HR/AM) instruments. Such HR/AM targeted quantification with monitoring of all target product ions addresses SRM limitations effectively in specificity and multiplexing; however, compared to SRM, PRM and DIA are still in their infancy with a limited number of applications. Thus, for HR/AM targeted quantification we focus our discussion on method development, data processing and analysis, and its advantages and limitations in targeted proteomics. Finally, general perspectives on the potential of achieving both high sensitivity and high sample throughput for large-scale quantification of hundreds of target proteins are discussed. PMID:27302376
Alonso, Sergio; Suzuki, Koichi; Yamamoto, Fumiichiro; Perucho, Manuel
2018-01-01
Somatic, and to a lesser extent also germ-line, epigenetic aberrations are fundamental to carcinogenesis, cancer progression, and tumor phenotype. DNA methylation is the most extensively studied and arguably the best understood epigenetic mechanism that becomes altered in cancer. Both somatic loss of methylation (hypomethylation) and gain of methylation (hypermethylation) are found in the genome of malignant cells. In general, the cancer cell epigenome is globally hypomethylated, while some regions, typically gene-associated CpG islands, become hypermethylated. Given the profound impact that DNA methylation exerts on the transcriptional profile and genomic stability of cancer cells, its characterization is essential to fully understand the complexity of cancer biology, improve tumor classification, and ultimately advance cancer patient management and treatment. A plethora of methods have been devised to analyze and quantify DNA methylation alterations. Several of the early-developed methods relied on the use of methylation-sensitive restriction enzymes, whose activity depends on the methylation status of their recognition sequences. Among these techniques, methylation-sensitive amplified fragment length polymorphism (MS-AFLP) was developed in the early 2000s, and successfully adapted from its original gel electrophoresis fingerprinting format to a microarray format that notably increased its throughput and allowed the quantification of the methylation changes. This array-based platform interrogates over 9500 independent loci putatively amplified by the MS-AFLP technique, corresponding to the NotI sites mapped throughout the human genome.
Clinical case definition for the diagnosis of acute intussusception.
Bines, Julie E; Ivanoff, Bernard; Justice, Frances; Mulholland, Kim
2004-11-01
Because of the reported association between intussusception and a rotavirus vaccine, future clinical trials of rotavirus vaccines will need to include intussusception surveillance in the evaluation of vaccine safety. The aim of this study is to develop and validate a clinical case definition for the diagnosis of acute intussusception. A clinical case definition for the diagnosis of acute intussusception was developed by analysis of an extensive literature review that defined the clinical presentation of intussusception in 70 developed and developing countries. The clinical case definition was then assessed for sensitivity and specificity using a retrospective chart review of hospital admissions. Sensitivity of the clinical case definition was assessed in children diagnosed with intussusception over a 6.5-year period. Specificity was assessed in patients aged <2 years admitted with bowel obstruction and in patients aged <19 years presenting with symptoms that may occur in intussusception. The clinical case definition accurately identified 185 of 191 assessable cases as "probable" intussusception and six cases as "possible" intussusception (sensitivity, 97%). No case of radiologic or surgically proven intussusception failed to be identified by the clinical case definition. The specificity of the definition in correctly identifying patients who did not have intussusception ranged from 87% to 91%. The clinical case definition for intussusception may assist in the prompt identification of patients with intussusception and may provide an important tool for the future trials of enteric vaccines.
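The reported sensitivity follows directly from the case counts; a one-line check (counting the 185 "probable" identifications among the 191 assessable cases as detections):

```python
def sensitivity(tp: int, fn: int) -> float:
    """Fraction of true cases the case definition identifies."""
    return tp / (tp + fn)

# 185 of 191 assessable cases identified as "probable" -> reported 97%.
print(f"sensitivity: {sensitivity(185, 191 - 185):.0%}")
```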
Advances in targeted proteomics and applications to biomedical research
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, Tujin; Song, Ehwang; Nie, Song
Targeted proteomics has emerged as a powerful protein quantification tool in systems biology and biomedical research, and increasingly for clinical applications. The most widely used targeted proteomics approach, selected reaction monitoring (SRM), also known as multiple reaction monitoring (MRM), can be used for quantification of cellular signaling networks and preclinical verification of candidate protein biomarkers. As an extension to our previous review on advances in SRM sensitivity (Shi et al., Proteomics, 12, 1074–1092, 2012), herein we review recent advances in the method and technology for further enhancing SRM sensitivity (from 2012 to present), highlighting its broad biomedical applications in human bodily fluids, tissue and cell lines. Furthermore, we also review two recently introduced targeted proteomics approaches, parallel reaction monitoring (PRM) and data-independent acquisition (DIA) with targeted data extraction on fast scanning high-resolution accurate-mass (HR/AM) instruments. Such HR/AM targeted quantification with monitoring of all target product ions addresses SRM limitations effectively in specificity and multiplexing; however, compared to SRM, PRM and DIA are still in their infancy with a limited number of applications. Thus, for HR/AM targeted quantification we focus our discussion on method development, data processing and analysis, and its advantages and limitations in targeted proteomics. Finally, general perspectives on the potential of achieving both high sensitivity and high sample throughput for large-scale quantification of hundreds of target proteins are discussed.
The sensitivity of the ESA DELTA model
NASA Astrophysics Data System (ADS)
Martin, C.; Walker, R.; Klinkrad, H.
Long-term debris environment models play a vital role in furthering our understanding of the future debris environment, and in aiding the determination of a strategy to preserve the Earth orbital environment for future use. By their very nature these models have to make certain assumptions to enable informative future projections to be made. Examples of these assumptions include the projection of future traffic, including launch and explosion rates, and the methodology used to simulate break-up events. To ensure a sound basis for future projections, and consequently for assessing the effectiveness of various mitigation measures, it is essential that the sensitivity of these models to variations in key assumptions is examined. The DELTA (Debris Environment Long Term Analysis) model, developed by QinetiQ for the European Space Agency, allows the future projection of the debris environment throughout Earth orbit. Extensive analyses with this model have been performed under the auspices of the ESA Space Debris Mitigation Handbook and following the recent upgrade of the model to DELTA 3.0. This paper draws on these analyses to present the sensitivity of the DELTA model to changes in key model parameters and assumptions. Specifically the paper will address the variation in future traffic rates, including the deployment of satellite constellations, and the variation in the break-up model and criteria used to simulate future explosion and collision events.
Adhikari, Bal-Ram; Govindhan, Maduraiveeran; Chen, Aicheng
2015-01-01
Electrochemical sensors and biosensors have attracted considerable attention for the sensitive detection of a variety of biological and pharmaceutical compounds. Since the discovery of carbon-based nanomaterials, including carbon nanotubes, C60 and graphene, they have garnered tremendous interest for their potential in the design of high-performance electrochemical sensor platforms due to their exceptional thermal, mechanical, electronic, and catalytic properties. Carbon nanomaterial-based electrochemical sensors have been employed for the detection of various analytes with rapid electron transfer kinetics. This feature article focuses on the recent design and use of carbon nanomaterials, primarily single-walled carbon nanotubes (SWCNTs), reduced graphene oxide (rGO), SWCNTs-rGO, Au nanoparticle-rGO nanocomposites, and buckypaper as sensing materials for the electrochemical detection of some representative biological and pharmaceutical compounds such as methylglyoxal, acetaminophen, valacyclovir, β-nicotinamide adenine dinucleotide hydrate (NADH), and glucose. Furthermore, the electrochemical performance of SWCNTs, rGO, and SWCNT-rGO for the detection of acetaminophen and valacyclovir was comparatively studied, revealing that SWCNT-rGO nanocomposites possess excellent electrocatalytic activity in comparison to individual SWCNT and rGO platforms. The sensitive, reliable and rapid analysis of critical disease biomarkers and globally emerging pharmaceutical compounds at carbon nanomaterial-based electrochemical sensor platforms may enable an extensive range of applications in preemptive medical diagnostics. PMID:26404304
Reid, Caroline H; Finnerty, Niall J
2017-07-08
We detail an extensive characterisation study on a previously described dual amperometric H₂O₂ biosensor consisting of H₂O₂ detection (blank) and degradation (catalase) electrodes. In vitro investigations demonstrated excellent H₂O₂ sensitivity and selectivity against the interferent, ascorbic acid. Ex vivo studies were performed to mimic physiological conditions prior to in vivo deployment. Exposure to brain tissue homogenate identified reliable sensitivity and selectivity recordings up to seven days for both blank and catalase electrodes. Furthermore, there was no compromise in pre- and post-implanted catalase electrode sensitivity in ex vivo mouse brain. In vivo investigations performed in anaesthetised mice confirmed the ability of the H₂O₂ biosensor to detect increases in amperometric current following locally perfused/infused H₂O₂ and antioxidant inhibitors mercaptosuccinic acid and sodium azide. Subsequent recordings in freely moving mice identified negligible effects of control saline and sodium ascorbate interference injections on amperometric H₂O₂ current. Furthermore, the stability of the amperometric current was confirmed over a five-day period and analysis of 24-h signal recordings identified the absence of diurnal variations in amperometric current. Collectively, these findings confirm the biosensor current responds in vivo to increasing exogenous and endogenous H₂O₂ and tentatively supports measurement of H₂O₂ dynamics in freely moving NOD SCID mice.
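A minimal sketch of the self-referencing principle behind the dual-electrode design described above: the catalase electrode degrades H₂O₂ locally, so subtracting its current from the blank electrode cancels shared interferences. The calibration slope here is a hypothetical in vitro value, not one reported in the study:

```python
def h2o2_concentration(i_blank_nA: float, i_catalase_nA: float,
                       slope_nA_per_uM: float) -> float:
    """Self-referencing H2O2 estimate from a dual amperometric biosensor.

    The difference current removes contributions common to both
    electrodes (e.g., ascorbic acid); the slope comes from an in vitro
    H2O2 calibration and is invented here for illustration.
    """
    return (i_blank_nA - i_catalase_nA) / slope_nA_per_uM

print(f"{h2o2_concentration(4.2, 0.6, 0.036):.0f} uM")
```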
Emrich, Stephen M; Riggall, Adam C; Larocque, Joshua J; Postle, Bradley R
2013-04-10
Traditionally, load sensitivity of sustained, elevated activity has been taken as an index of storage for a limited number of items in visual short-term memory (VSTM). Recently, studies have demonstrated that the contents of a single item held in VSTM can be decoded from early visual cortex, despite the fact that these areas do not exhibit elevated, sustained activity. It is unknown, however, whether the patterns of neural activity decoded from sensory cortex change as a function of load, as one would expect from a region storing multiple representations. Here, we use multivoxel pattern analysis to examine the neural representations of VSTM in humans across multiple memory loads. In an important extension of previous findings, our results demonstrate that the contents of VSTM can be decoded from areas that exhibit a transient response to visual stimuli, but not from regions that exhibit elevated, sustained load-sensitive delay-period activity. Moreover, the neural information present in these transiently activated areas decreases significantly with increasing load, indicating load sensitivity of the patterns of activity that support VSTM maintenance. Importantly, the decrease in classification performance as a function of load is correlated with within-subject changes in mnemonic resolution. These findings indicate that distributed patterns of neural activity in putatively sensory visual cortex support the representation and precision of information in VSTM.
NASA Astrophysics Data System (ADS)
Mishra, Vinod Kumar
2017-09-01
In this paper we develop an inventory model to determine the optimal ordering quantities for a set of two substitutable deteriorating items. In this model the inventory level of both items is depleted by demand and deterioration; when an item is out of stock, its demand is partially fulfilled by the other item and all unsatisfied demand is lost. Each substituted item incurs a substitution cost, and the demand and deterioration rates are considered deterministic and constant. Items are ordered jointly in each ordering cycle to take advantage of joint replenishment. The problem is formulated and a solution procedure is developed to determine the optimal ordering quantities that minimize the total inventory cost. We provide an extensive numerical and sensitivity analysis to illustrate the effect of different parameters on the model. The key observation from the numerical analysis is that there is a substantial improvement in the optimal total cost of the inventory model with substitution over the one without substitution.
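The abstract does not give the cost function in closed form, so the sketch below only mirrors its structure: a joint ordering cost, holding cost inflated by deterioration, and a substitution penalty, minimized by grid search over the two order quantities. All parameter values and the exact cost terms are invented stand-ins, not the paper's formulation:

```python
import numpy as np

def total_cost(Q1, Q2, d1=40.0, d2=30.0, theta=0.05, h=0.5,
               K=150.0, c_sub=2.0):
    """Stand-in cost rate for jointly replenished substitutable items.

    theta: deterioration rate, h: holding cost, K: joint ordering cost,
    c_sub: per-unit substitution cost. Illustrative structure only.
    """
    # Approximate cycle length of each item: stock consumed by demand
    # plus deterioration of the average on-hand inventory.
    T1 = Q1 / (d1 + theta * Q1 / 2.0)
    T2 = Q2 / (d2 + theta * Q2 / 2.0)
    T = max(T1, T2)                 # joint cycle ends when both reorder
    holding = h * (Q1 * T1 + Q2 * T2) / (2.0 * T)
    # After the earlier item runs out, its demand is partly substituted.
    substituted = c_sub * abs(T1 - T2) * (d1 if T1 < T2 else d2)
    return (K + substituted) / T + holding

grid = np.arange(20.0, 400.0, 5.0)
cost, q1, q2 = min((total_cost(a, b), a, b) for a in grid for b in grid)
print(f"optimal order quantities ~ ({q1:.0f}, {q2:.0f}), cost rate {cost:.1f}")
```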
Molinos-Senante, María; Gómez, Trinidad; Caballero, Rafael; Hernández-Sancho, Francesc; Sala-Garrido, Ramón
2015-11-01
The selection of the most appropriate wastewater treatment (WWT) technology is a complex problem since many alternatives are available and many criteria are involved in the decision-making process. To deal with this challenge, the analytic network process (ANP) is applied for the first time to rank a set of seven WWT technology set-ups for secondary treatment in small communities. A major advantage of ANP is that it incorporates interdependent relationships between elements. Results illustrated that extensive technologies, constructed wetlands and pond systems are the most preferred alternatives by WWT experts. The sensitivity analysis performed verified that the ranking of WWT alternatives is very stable since constructed wetlands are almost always placed in the first position. This paper showed that ANP analysis is suitable to deal with complex decision-making problems, such as the selection of the most appropriate WWT system contributing to better understand the multiple interdependences among elements involved in the assessment. Copyright © 2015 Elsevier B.V. All rights reserved.
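A toy sketch of the ANP step that distinguishes it from simpler weighting schemes: the interdependences are encoded in a column-stochastic supermatrix whose limit powers yield the global priorities used to rank alternatives. The matrix below is invented; real WWT studies build far larger supermatrices from pairwise-comparison judgements:

```python
import numpy as np

# Toy weighted supermatrix over three interdependent elements.
W = np.array([[0.0, 0.6, 0.3],
              [0.5, 0.0, 0.7],
              [0.5, 0.4, 0.0]])
W = W / W.sum(axis=0)            # make columns stochastic, as ANP requires

# Limit supermatrix: raise W to a high power until it converges; its
# columns then hold the global priorities.
L = np.linalg.matrix_power(W, 100)
print("global priorities:", np.round(L[:, 0], 3))
```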
Integrated techno-economic and environmental analysis of butadiene production from biomass.
Farzad, Somayeh; Mandegari, Mohsen Ali; Görgens, Johann F
2017-09-01
In this study, lignocellulose biorefineries annexed to a typical sugar mill were investigated to produce either ethanol (EtOH) or 1,3-butadiene (BD), utilizing bagasse and trash as feedstock. Aspen simulations of the scenarios were developed and evaluated in terms of economic and environmental performance. The minimum selling prices (MSPs) for bio-based BD and EtOH production were 2.9-3.3- and 1.26-1.38-fold higher than market prices, respectively. Based on the sensitivity analysis results, capital investment, Internal Rate of Return and extension of annual operating time had the greatest impact on the MSP. Monte Carlo simulation demonstrated that EtOH and BD production could be profitable if the ten-year historical average price increased by 1.05- and 1.9-fold, respectively. The fossil-based route was found inferior to the bio-based pathway across all investigated environmental impact categories, due to burdens associated with oil extraction. Copyright © 2017 Elsevier Ltd. All rights reserved.
Multiplex cDNA quantification method that facilitates the standardization of gene expression data
Gotoh, Osamu; Murakami, Yasufumi; Suyama, Akira
2011-01-01
Microarray-based gene expression measurement is one of the major methods for transcriptome analysis. However, current microarray data are substantially affected by microarray platforms and RNA references because the microarray method provides merely the relative amounts of gene expression levels. Therefore, valid comparisons of microarray data require standardized platforms, internal and/or external controls and complicated normalizations. These requirements impose limitations on the extensive comparison of gene expression data. Here, we report an effective approach to removing these unfavorable limitations by measuring the absolute amounts of gene expression levels on common DNA microarrays. We have developed a multiplex cDNA quantification method called GEP-DEAN (Gene expression profiling by DCN-encoding-based analysis). The method was validated by using chemically synthesized DNA strands of known quantities and cDNA samples prepared from mouse liver, demonstrating that the absolute amounts of cDNA strands were successfully measured with a sensitivity of 18 zmol in a highly multiplexed manner in 7 h. PMID:21415008
Advanced microgrid design and analysis for forward operating bases
NASA Astrophysics Data System (ADS)
Reasoner, Jonathan
This thesis takes a holistic approach in creating an improved electric power generation system for a forward operating base (FOB) in the future through the design of an isolated microgrid. After an extensive literature search, this thesis found a need for drastic improvement of the FOB power system. A thorough design process analyzed FOB demand, researched demand-side management improvements, evaluated various generation sources and energy storage options, and performed a HOMER™ discrete optimization to determine the best microgrid design. Further sensitivity analysis was performed to see how changing parameters would affect the outcome. Lastly, this research also looks at some of the challenges which are associated with incorporating a design which relies heavily on inverter-based generation sources, and gives possible solutions to help make a renewable energy powered microgrid a reality. While this thesis uses a FOB as the case study, the process and discussion can be adapted to aid in the design of an off-grid small-scale power grid which utilizes high-penetration levels of renewable energy.
Ahn, Jaeil; Morita, Satoshi; Wang, Wenyi; Yuan, Ying
2017-01-01
Analyzing longitudinal dyadic data is a challenging task due to the complicated correlations from repeated measurements and within-dyad interdependence, as well as potentially informative (or non-ignorable) missing data. We propose a dyadic shared-parameter model to analyze longitudinal dyadic data with ordinal outcomes and informative intermittent missing data and dropouts. We model the longitudinal measurement process using a proportional odds model, which accommodates the within-dyad interdependence using the concept of the actor-partner interdependence effects, as well as dyad-specific random effects. We model informative dropouts and intermittent missing data using a transition model, which shares the same set of random effects as the longitudinal measurement model. We evaluate the performance of the proposed method through extensive simulation studies. As our approach relies on some untestable assumptions on the missing data mechanism, we perform sensitivity analyses to evaluate how the analysis results change when the missing data mechanism is misspecified. We demonstrate our method using a longitudinal dyadic study of metastatic breast cancer.
NASA Astrophysics Data System (ADS)
Reis, D. S.; Stedinger, J. R.; Martins, E. S.
2005-10-01
This paper develops a Bayesian approach to analysis of a generalized least squares (GLS) regression model for regional analyses of hydrologic data. The new approach allows computation of the posterior distributions of the parameters and the model error variance using a quasi-analytic approach. Two regional skew estimation studies illustrate the value of the Bayesian GLS approach for regional statistical analysis of a shape parameter and demonstrate that regional skew models can be relatively precise with effective record lengths in excess of 60 years. With Bayesian GLS the marginal posterior distribution of the model error variance and the corresponding mean and variance of the parameters can be computed directly, thereby providing a simple but important extension of the regional GLS regression procedures popularized by Tasker and Stedinger (1989), which is sensitive to the likely values of the model error variance when it is small relative to the sampling error in the at-site estimator.
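A minimal sketch of the GLS core that the Bayesian approach builds on, with Λ combining model error and at-site sampling error. The paper integrates over the model error variance rather than fixing it as done here, and the data below are synthetic:

```python
import numpy as np

def gls_regional_skew(X, y, Sigma_sampling, sigma2_delta):
    """GLS estimate of regional-regression parameters.

    Lambda = sigma2_delta * I + Sigma_sampling mixes model error with
    at-site sampling error; the Bayesian GLS approach of the paper
    treats sigma2_delta as uncertain instead of fixing it.
    """
    Lam = sigma2_delta * np.eye(len(y)) + Sigma_sampling
    Li = np.linalg.inv(Lam)
    cov_beta = np.linalg.inv(X.T @ Li @ X)
    beta = cov_beta @ X.T @ Li @ y
    return beta, cov_beta

# Synthetic data: at-site skews regressed on a basin characteristic.
rng = np.random.default_rng(0)
n = 25
X = np.column_stack([np.ones(n), rng.uniform(0, 1, n)])
Sigma = np.diag(rng.uniform(0.05, 0.30, n))   # sampling error variances
y = X @ np.array([0.2, 0.5]) + rng.normal(0, 0.1, n)

beta, cov = gls_regional_skew(X, y, Sigma, sigma2_delta=0.01)
print("beta:", np.round(beta, 3))
```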
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vimmerstedt, Laura J.; Jadun, Paige; McMillan, Colin A.
This report provides projected cost and performance assumptions for electric technologies considered in the Electrification Futures Study, a detailed and comprehensive analysis of the effects of widespread electrification of end-use service demands in all major economic sectors - transportation, residential and commercial buildings, and industry - for the contiguous United States through 2050. Using extensive literature searches and expert assessment, the authors identify slow, moderate, and rapid technology advancement sensitivities on technology cost and performance, and they offer a comparative analysis of levelized cost metrics as a reference indicator of total costs. The identification and characterization of these end-use service demand technologies is fundamental to the Electrification Futures Study. This report, the larger Electrification Futures Study, and the associated data and methodologies may be useful to planners and analysts in evaluating the potential role of electrification in an uncertain future. The report could be broadly applicable for other analysts and researchers who wish to assess electrification and electric technologies.
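A generic levelized cost calculation of the kind used as a reference indicator of total costs; the formula below is a common textbook form and the inputs are invented, not the study's own metric or data:

```python
def levelized_cost(capex, fixed_om, annual_output, rate, life_yrs):
    """Annualized capital plus O&M per unit of annual output.

    crf is the capital recovery factor; all inputs are hypothetical.
    """
    crf = rate * (1 + rate) ** life_yrs / ((1 + rate) ** life_yrs - 1)
    return (capex * crf + fixed_om) / annual_output

# Hypothetical technology: $30k capital, $400/yr O&M, 15k units/yr output.
print(f"{levelized_cost(30000, 400, 15000, rate=0.05, life_yrs=15):.3f} $/unit")
```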
Cancer cell profiling by barcoding allows multiplexed protein analysis in fine-needle aspirates.
Ullal, Adeeti V; Peterson, Vanessa; Agasti, Sarit S; Tuang, Suan; Juric, Dejan; Castro, Cesar M; Weissleder, Ralph
2014-01-15
Immunohistochemistry-based clinical diagnoses require invasive core biopsies and use a limited number of protein stains to identify and classify cancers. We introduce a technology that allows analysis of hundreds of proteins from minimally invasive fine-needle aspirates (FNAs), which contain much smaller numbers of cells than core biopsies. The method capitalizes on DNA-barcoded antibody sensing, where barcodes can be photocleaved and digitally detected without any amplification steps. After extensive benchmarking in cell lines, this method showed high reproducibility and achieved single-cell sensitivity. We used this approach to profile ~90 proteins in cells from FNAs and subsequently map patient heterogeneity at the protein level. Additionally, we demonstrate how the method could be used as a clinical tool to identify pathway responses to molecularly targeted drugs and to predict drug response in patient samples. This technique combines specificity with ease of use to offer a new tool for understanding human cancers and designing future clinical trials.
Cancer cell profiling by barcoding allows multiplexed protein analysis in fine needle aspirates
Ullal, Adeeti V.; Peterson, Vanessa; Agasti, Sarit S.; Tuang, Suan; Juric, Dejan; Castro, Cesar M.; Weissleder, Ralph
2014-01-01
Immunohistochemistry-based clinical diagnoses require invasive core biopsies and use a limited number of protein stains to identify and classify cancers. Here, we introduce a technology that allows analysis of hundreds of proteins from minimally invasive fine needle aspirates (FNA), which contain much smaller numbers of cells than core biopsies. The method capitalizes on DNA-barcoded antibody sensing where barcodes can be photo-cleaved and digitally detected without any amplification steps. Following extensive benchmarking in cell lines, this method showed high reproducibility and achieved single cell sensitivity. We used this approach to profile ~90 proteins in cells from FNAs and subsequently map patient heterogeneity at the protein level. Additionally, we demonstrate how the method could be used as a clinical tool to identify pathway responses to molecularly targeted drugs and to predict drug response in patient samples. This technique combines specificity with ease of use to offer a new tool for understanding human cancers and designing future clinical trials. PMID:24431113
NASA Astrophysics Data System (ADS)
Langer, Andreas; Schräml, Michael; Strasser, Ralf; Daub, Herwin; Myers, Thomas; Heindl, Dieter; Rant, Ulrich
2015-07-01
The engineering of high-performance enzymes for future sequencing and PCR technologies as well as the development of many anticancer drugs requires a detailed analysis of DNA/RNA synthesis processes. However, due to the complex molecular interplay involved, real-time methodologies have not been available to obtain comprehensive information on both binding parameters and enzymatic activities. Here we introduce a chip-based method to investigate polymerases and their interactions with nucleic acids, which employs an electrical actuation of DNA templates on microelectrodes. Two measurement modes track both the dynamics of the induced switching process and the DNA extension simultaneously to quantitate binding kinetics, dissociation constants and thermodynamic energies. The high sensitivity of the method reveals previously unidentified tight binding states for Taq and Pol I (KF) DNA polymerases. Furthermore, the incorporation of label-free nucleotides can be followed in real-time and changes in the DNA polymerase conformation (finger closing) during enzymatic activity are observable.
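A sketch of how binding parameters could be extracted from such switching dynamics, assuming simple 1:1 pseudo-first-order kinetics (k_obs = k_on·C + k_off); the concentrations, rates, and temperature are invented:

```python
import numpy as np

# Hypothetical observed rates of the switching response at several
# polymerase concentrations, assuming a 1:1 binding model.
conc  = np.array([5e-9, 10e-9, 20e-9, 40e-9])     # M
k_obs = np.array([0.021, 0.036, 0.068, 0.131])    # 1/s

k_on, k_off = np.polyfit(conc, k_obs, 1)          # slope, intercept
K_D = k_off / k_on                                # dissociation constant
dG = 8.314 * 298.0 * np.log(K_D)                  # J/mol vs 1 M standard state

print(f"k_on={k_on:.2e} 1/(M s), k_off={k_off:.3e} 1/s, "
      f"K_D={K_D:.2e} M, dG={dG/1000:.1f} kJ/mol")
```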
Farr, Ryan J; Januszewski, Andrzej S; Joglekar, Mugdha V; Liang, Helena; McAulley, Annie K; Hewitt, Alex W; Thomas, Helen E; Loudovaris, Tom; Kay, Thomas W H; Jenkins, Alicia; Hardikar, Anandwardhan A
2015-06-02
MicroRNAs are now increasingly recognized as biomarkers of disease progression. Several quantitative real-time PCR (qPCR) platforms have been developed to determine the relative levels of microRNAs in biological fluids. We systematically compared the detection of cellular and circulating microRNA using a standard 96-well platform, a high-content microfluidics platform and two ultra-high content platforms. We used extensive analytical tools to compute inter- and intra-run variability and concordance measured using fidelity scoring, coefficient of variation and cluster analysis. We carried out unprejudiced next generation sequencing to identify a microRNA signature for Diabetic Retinopathy (DR) and systematically assessed the validation of this signature on clinical samples using each of the above four qPCR platforms. The results indicate that sensitivity to measure low copy number microRNAs is inversely related to qPCR reaction volume and that the choice of platform for microRNA biomarker validation should be made based on the abundance of miRNAs of interest.
Inverse problems and computational cell metabolic models: a statistical approach
NASA Astrophysics Data System (ADS)
Calvetti, D.; Somersalo, E.
2008-07-01
In this article, we give an overview of the Bayesian modelling of metabolic systems at the cellular and subcellular level. The models are based on detailed description of key biochemical reactions occurring in tissue, which may in turn be compartmentalized into cytosol and mitochondria, and of transports between the compartments. The classical deterministic approach which models metabolic systems as dynamical systems with Michaelis-Menten kinetics, is replaced by a stochastic extension where the model parameters are interpreted as random variables with an appropriate probability density. The inverse problem of cell metabolism in this setting consists of estimating the density of the model parameters. After discussing some possible approaches to solving the problem, we address the issue of how to assess the reliability of the predictions of a stochastic model by proposing an output analysis in terms of model uncertainties. Visualization modalities for organizing the large amount of information provided by the Bayesian dynamic sensitivity analysis are also illustrated.
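A minimal sketch of the stochastic extension described above: Michaelis-Menten substrate decay with parameters drawn from illustrative lognormal densities (not taken from the article), giving a predictive band that expresses model output uncertainty:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Draw kinetic parameters from illustrative lognormal priors.
rng = np.random.default_rng(1)
n_draws = 200
Vmax = rng.lognormal(mean=np.log(1.0), sigma=0.3, size=n_draws)
Km   = rng.lognormal(mean=np.log(0.5), sigma=0.3, size=n_draws)

# Propagate each draw through dS/dt = -Vmax * S / (Km + S).
t_eval = np.linspace(0.0, 10.0, 51)
trajectories = np.empty((n_draws, t_eval.size))
for i in range(n_draws):
    sol = solve_ivp(lambda t, s: -Vmax[i] * s / (Km[i] + s),
                    (0.0, 10.0), [2.0], t_eval=t_eval)
    trajectories[i] = sol.y[0]

# The spread of the ensemble quantifies model output uncertainty.
lo, hi = np.percentile(trajectories, [5, 95], axis=0)
print(f"S(t=5): 5-95% band [{lo[25]:.2f}, {hi[25]:.2f}]")
```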
[Spatiotemporal pattern analysis of event-related potentials elicited by emotional Stroop task].
Liu, Qi; Liu, Ling; He, Hui; Zhou, Shu
2007-05-01
To investigate the spatiotemporal pattern of event-related potentials (ERPs) induced by the emotional Stroop task. The ERPs of 19 channels were recorded from 13 healthy subjects while they performed an emotional Stroop task, pressing buttons representing the colors in which words denoting different emotions were displayed. A repeated-measures factorial design was adopted with three levels (word valence: positive, neutral and negative). The results of the ERP analysis were presented in the form of statistical parametric mapping (SPM) of F values. No significant difference was found in either reaction time or accuracy. The SPM of ERPs suggested significant emotional valence effects in the occipital region (200-220 ms), the left and central frontal regions (270-300 ms), and the bilateral temporal and parietal cortex (560-580 and 620-630 ms, respectively). Processing of task-irrelevant emotional valence information involves the dynamic operation of extensive brain regions, and the ERPs are more sensitive than behavioral indices in emotional evaluation.
Pneumothorax detection in chest radiographs using local and global texture signatures
NASA Astrophysics Data System (ADS)
Geva, Ofer; Zimmerman-Moreno, Gali; Lieberman, Sivan; Konen, Eli; Greenspan, Hayit
2015-03-01
A novel framework for automatic detection of pneumothorax abnormality in chest radiographs is presented. The suggested method is based on a texture analysis approach combined with supervised learning techniques. The proposed framework consists of two main steps: first, a texture analysis process is performed for detection of local abnormalities. Labeled image patches are extracted in the texture analysis procedure, following which local analysis values are incorporated into a novel global image representation. The global representation is used for training and detection of the abnormality at the image level. The presented global representation is designed based on the distinctive shape of the lung, taking into account the characteristics of typical pneumothorax abnormalities. A supervised learning process was performed on both the local and global data, leading to a trained detection system. The system was tested on a dataset of 108 upright chest radiographs. Several state-of-the-art texture feature sets were experimented with (Local Binary Patterns, Maximum Response filters). The optimal configuration yielded a sensitivity of 81% with a specificity of 87%. The results of the evaluation are promising, establishing the current framework as a basis for additional improvements and extensions.
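A sketch of the local texture step under stated assumptions: uniform Local Binary Pattern histograms per patch feeding a supervised classifier. The paper additionally pools patch scores into a lung-shape-aware global representation, which is omitted here, and the data below are random placeholders:

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(patch: np.ndarray, P: int = 8, R: float = 1.0) -> np.ndarray:
    """Uniform LBP code histogram for one image patch (local texture)."""
    codes = local_binary_pattern(patch, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=np.arange(P + 3), density=True)
    return hist

# Placeholder labeled patches (normal vs. pneumothorax-like texture).
rng = np.random.default_rng(2)
patches = rng.random((40, 32, 32))
labels = rng.integers(0, 2, 40)

features = np.array([lbp_histogram(p) for p in patches])
clf = SVC(kernel="rbf").fit(features, labels)
print("patch-level predictions:", clf.predict(features[:5]))
```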
Zheng, Wenming; Lin, Zhouchen; Wang, Haixian
2014-04-01
A novel discriminant analysis criterion is derived in this paper under the theoretical framework of Bayes optimality. In contrast to the conventional Fisher's discriminant criterion, the major novelty of the proposed one is the use of the L1 norm rather than the L2 norm, which makes it less sensitive to outliers. With the L1-norm discriminant criterion, we propose a new linear discriminant analysis (L1-LDA) method for the linear feature extraction problem. To solve the L1-LDA optimization problem, we propose an efficient iterative algorithm, in which a novel surrogate convex function is introduced such that the optimization problem in each iteration simply amounts to solving a convex programming problem for which a closed-form solution is guaranteed. Moreover, we also generalize the L1-LDA method to deal with nonlinear robust feature extraction problems via the kernel trick, and hereafter propose the L1-norm kernel discriminant analysis (L1-KDA) method. Extensive experiments on simulated and real data sets are conducted to evaluate the effectiveness of the proposed method in comparison with state-of-the-art methods.
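A sketch of the L1 discriminant criterion itself: the ratio of L1 between-class to L1 within-class dispersion. Here it is maximized by a generic derivative-free optimizer rather than the paper's surrogate-convex iterative algorithm, and the data are synthetic:

```python
import numpy as np
from scipy.optimize import minimize

def neg_l1_criterion(w, X, y):
    """Negative L1 Fisher-type ratio: |between-class| / |within-class|."""
    w = w / np.linalg.norm(w)        # criterion is scale-invariant
    mu = X.mean(axis=0)
    between = within = 0.0
    for c in np.unique(y):
        Xc = X[y == c]
        mu_c = Xc.mean(axis=0)
        between += len(Xc) * abs(w @ (mu_c - mu))
        within += np.abs((Xc - mu_c) @ w).sum()
    return -between / within         # negate so minimization maximizes

# Two Gaussian classes plus a gross outlier; L1 dispersion damps its
# influence relative to the squared (L2) Fisher criterion.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2)),
               [[25.0, -25.0]]])
y = np.array([0] * 50 + [1] * 51)

res = minimize(neg_l1_criterion, x0=np.ones(2), args=(X, y),
               method="Nelder-Mead")
print("projection direction:", np.round(res.x / np.linalg.norm(res.x), 3))
```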
Comparative study on gene set and pathway topology-based enrichment methods.
Bayerlová, Michaela; Jung, Klaus; Kramer, Frank; Klemm, Florian; Bleckmann, Annalen; Beißbarth, Tim
2015-10-22
Enrichment analysis is a popular approach to identify pathways or sets of genes which are significantly enriched in the context of differentially expressed genes. The traditional gene set enrichment approach considers a pathway as a simple gene list, disregarding any knowledge of gene or protein interactions. In contrast, the new group of so-called pathway topology-based methods integrates the topological structure of a pathway into the analysis. We comparatively investigated gene set and pathway topology-based enrichment approaches, considering three gene set and four topological methods. These methods were compared in two extensive simulation studies and on a benchmark of 36 real datasets, providing the same pathway input data to all methods. In the benchmark data analysis both types of methods showed a comparable ability to detect enriched pathways. The first simulation study was conducted with KEGG pathways, which show considerable gene overlap with one another. In this study with original KEGG pathways, none of the topology-based methods outperformed the gene set approach. Therefore, a second simulation study was performed on non-overlapping pathways created by unique gene IDs. Here, methods accounting for pathway topology reached higher accuracy than the gene set methods, although their sensitivity was lower. We conducted one of the first comprehensive comparative studies evaluating gene set against pathway topology-based enrichment methods. The topological methods showed better performance in the simulation scenarios with non-overlapping pathways; however, they were not conclusively better in the other scenarios. This suggests that the simple gene set approach might be sufficient to detect an enriched pathway under realistic circumstances. Nevertheless, more extensive studies and further benchmark data are needed to systematically evaluate these methods and to assess what gain and cost pathway topology information introduces into enrichment analysis. Both types of methods for enrichment analysis require further improvements in order to deal with the problem of pathway overlaps.
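For reference, the "simple gene list" baseline reduces to a hypergeometric over-representation test; a minimal sketch with invented counts:

```python
from scipy.stats import hypergeom

def gene_set_enrichment_p(n_genome: int, n_pathway: int,
                          n_de: int, n_overlap: int) -> float:
    """One-sided over-representation p-value for a plain gene list.

    P(X >= n_overlap) under sampling without replacement: this is the
    gene set approach the comparison contrasts with topology-based
    methods, which additionally weight gene-gene interactions.
    """
    return hypergeom.sf(n_overlap - 1, n_genome, n_pathway, n_de)

# Hypothetical counts: 80-gene pathway, 500 DE genes of 20000, 15 shared.
print(f"p = {gene_set_enrichment_p(20000, 80, 500, 15):.2e}")
```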
Guerriero, S; Ajossa, S; Minguez, J A; Jurado, M; Mais, V; Melis, G B; Alcazar, J L
2015-11-01
To review the diagnostic accuracy of transvaginal ultrasound (TVS) in the preoperative detection of endometriosis in the uterosacral ligaments (USL), rectovaginal septum (RVS), vagina and bladder in patients with clinical suspicion of deep infiltrating endometriosis (DIE). An extensive search was performed in MEDLINE (PubMed) and EMBASE for studies published between January 1989 and December 2014. Studies were considered eligible if they reported on the use of TVS for the preoperative detection of endometriosis in the USL, RVS, vagina and bladder in women with clinical suspicion of DIE using the surgical data as a reference standard. Study quality was assessed using the PRISMA guidelines and QUADAS-2 tool. Of the 801 citations identified, 11 studies (n = 1583) were considered eligible and were included in the meta-analysis. For detection of endometriosis in the USL, the overall pooled sensitivity and specificity of TVS were 53% (95%CI, 35-70%) and 93% (95%CI, 83-97%), respectively. The pretest probability of USL endometriosis was 54%, which increased to 90% when suspicion of endometriosis was present after TVS examination. For detection of endometriosis in the RVS, the overall pooled sensitivity and specificity were 49% (95%CI, 36-62%) and 98% (95%CI, 95-99%), respectively. The pretest probability of RVS endometriosis was 24%, which increased to 89% when suspicion of endometriosis was present after TVS examination. For detection of vaginal endometriosis, the overall pooled sensitivity and specificity were 58% (95%CI, 40-74%) and 96% (95%CI, 87-99%), respectively. The pretest probability of vaginal endometriosis was 17%, which increased to 76% when suspicion of endometriosis was present after TVS assessment. Substantial heterogeneity was found for sensitivity and specificity for all these locations. For detection of bladder endometriosis, the overall pooled sensitivity and specificity were 62% (95%CI, 40-80%) and 100% (95%CI, 97-100%), respectively. Moderate heterogeneity was found for sensitivity and specificity for bladder endometriosis. The pretest probability of bladder endometriosis was 5%, which increased to 92% when suspicion of endometriosis was present after TVS assessment. Overall diagnostic performance of TVS for detecting DIE in uterosacral ligaments, rectovaginal septum, vagina and bladder is fair with high specificity. Copyright © 2015 ISUOG. Published by John Wiley & Sons Ltd.
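The pretest-to-posttest updates quoted above follow standard likelihood-ratio arithmetic; a short sketch reproducing the uterosacral-ligament figure from the reported sensitivity and specificity:

```python
def posttest_probability(pretest: float, sens: float, spec: float) -> float:
    """Post-test probability after a positive test via likelihood ratios."""
    lr_pos = sens / (1.0 - spec)              # positive likelihood ratio
    odds = pretest / (1.0 - pretest) * lr_pos
    return odds / (1.0 + odds)

# Uterosacral-ligament figures from the meta-analysis:
# sens 0.53, spec 0.93, pretest 0.54 -> ~0.90, matching the reported 90%.
print(f"{posttest_probability(0.54, 0.53, 0.93):.0%}")
```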
A digital repository with an extensible data model for biobanking and genomic analysis management.
Izzo, Massimiliano; Mortola, Francesco; Arnulfo, Gabriele; Fato, Marco M; Varesio, Luigi
2014-01-01
Molecular biology laboratories require extensive metadata to improve data collection and analysis. The heterogeneity of the collected metadata grows as research evolves into international multi-disciplinary collaborations and data sharing among institutions increases. Single standardization is not feasible and it becomes crucial to develop digital repositories with flexible and extensible data models, as in the case of modern integrated biobanks management. We developed a novel data model in JSON format to describe heterogeneous data in a generic biomedical science scenario. The model is built on two hierarchical entities: processes and events, roughly corresponding to research studies and analysis steps within a single study. A number of sequential events can be grouped in a process, building up a hierarchical structure to track patient and sample history. Each event can produce new data. Data is described by a set of user-defined metadata, and may have one or more associated files. We integrated the model in a web-based digital repository with a data grid storage to manage large data sets located in geographically distinct areas. We built a graphical interface that allows authorized users to define new data types dynamically, according to their requirements. Operators compose queries on metadata fields using a flexible search interface and run them on the database and on the grid. We applied the digital repository to the integrated management of samples, patients and medical history in the BIT-Gaslini biobank. The platform currently manages 1800 samples of over 900 patients. Microarray data from 150 analyses are stored on the grid storage and replicated on two physical resources for preservation. The system is equipped with data integration capabilities with other biobanks for worldwide information sharing. Our data model enables users to continuously define flexible, ad hoc, and loosely structured metadata, for information sharing in specific research projects and purposes. This approach can considerably improve interdisciplinary research collaboration and allows patients' clinical records, sample management information, and genomic data to be tracked. The web interface allows the operators to easily manage, query, and annotate the files, without dealing with the technicalities of the data grid.
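An illustrative record showing the process/event hierarchy with user-defined metadata and attached files, expressed in the JSON format the model uses; all field names and values below are invented for illustration:

```python
import json

# Invented example of the process/event data model: a process groups
# sequential events, each with user-defined metadata and optional files.
process = {
    "type": "process",
    "study": "example-biobank-study",
    "patient_id": "P-0001",
    "events": [
        {
            "type": "event",
            "name": "sample-collection",
            "metadata": {"tissue": "blood", "volume_ml": 5},
            "files": [],
        },
        {
            "type": "event",
            "name": "microarray-analysis",
            "metadata": {"platform": "expression-array", "qc_passed": True},
            "files": ["array_raw.cel"],
        },
    ],
}
print(json.dumps(process, indent=2))
```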
A digital repository with an extensible data model for biobanking and genomic analysis management
2014-01-01
Motivation: Molecular biology laboratories require extensive metadata to improve data collection and analysis. The heterogeneity of the collected metadata grows as research evolves into international multi-disciplinary collaborations and data sharing among institutions increases. Single standardization is not feasible and it becomes crucial to develop digital repositories with flexible and extensible data models, as in the case of modern integrated biobanks management. Results: We developed a novel data model in JSON format to describe heterogeneous data in a generic biomedical science scenario. The model is built on two hierarchical entities: processes and events, roughly corresponding to research studies and analysis steps within a single study. A number of sequential events can be grouped in a process, building up a hierarchical structure to track patient and sample history. Each event can produce new data. Data is described by a set of user-defined metadata, and may have one or more associated files. We integrated the model in a web-based digital repository with a data grid storage to manage large data sets located in geographically distinct areas. We built a graphical interface that allows authorized users to define new data types dynamically, according to their requirements. Operators compose queries on metadata fields using a flexible search interface and run them on the database and on the grid. We applied the digital repository to the integrated management of samples, patients and medical history in the BIT-Gaslini biobank. The platform currently manages 1800 samples of over 900 patients. Microarray data from 150 analyses are stored on the grid storage and replicated on two physical resources for preservation. The system is equipped with data integration capabilities with other biobanks for worldwide information sharing. Conclusions: Our data model enables users to continuously define flexible, ad hoc, and loosely structured metadata, for information sharing in specific research projects and purposes. This approach can considerably improve interdisciplinary research collaboration and allows patients' clinical records, sample management information, and genomic data to be tracked. The web interface allows the operators to easily manage, query, and annotate the files, without dealing with the technicalities of the data grid. PMID:25077808
Assessment of cross-reactivity among five species of house dust and storage mites.
Saridomichelakis, Manolis N; Marsella, Rosanna; Lee, Kenneth W; Esch, Robert E; Farmaki, Rania; Koutinas, Alexander F
2008-04-01
In vitro cross-reactivity among two house dust (Dermatophagoides farinae, D. pteronyssinus) and three storage (Acarus siro, Tyrophagus putrescentiae, Lepidoglyphus destructor) mites was examined in 20 mite-sensitive dogs with naturally occurring atopic dermatitis (group A), 13 high-IgE beagles experimentally sensitized to D. farinae (group B), and five healthy beagles (group C). Intradermal testing (IDT) and serology for allergen-specific IgE demonstrated that co-sensitization for all possible pairs of the five mites was generally 45% or higher among group A dogs. In the same dogs, enzyme-linked immunosorbent assay (ELISA) cross-inhibition results indicated that each one of D. farinae, A. siro and T. putrescentiae was a strong inhibitor of all the remaining mites, whereas D. pteronyssinus was a strong inhibitor of L. destructor. A high number of positive IDT and serology test results for D. pteronyssinus, A. siro, T. putrescentiae and L. destructor were recorded among group B dogs. No conclusive evidence of exposure to these mites was found upon analysis of dust samples from their environment and their food for the presence of mites and guanine. Also, the number of positive test results was generally higher among group B than among group C dogs. ELISA cross-inhibition revealed that D. farinae was a strong inhibitor of D. pteronyssinus, A. siro and T. putrescentiae. Collectively, these results demonstrated extensive in vitro cross-reactivity among house dust and/or storage mites, which can explain false-positive results upon testing of dust mite-sensitive dogs with atopic dermatitis.
A Transient Dopamine Signal Represents Avoidance Value and Causally Influences the Demand to Avoid
Pultorak, Katherine J.; Schelp, Scott A.; Isaacs, Dominic P.; Krzystyniak, Gregory
2018-01-01
Abstract While an extensive literature supports the notion that mesocorticolimbic dopamine plays a role in negative reinforcement, recent evidence suggests that dopamine exclusively encodes the value of positive reinforcement. In the present study, we employed a behavioral economics approach to investigate whether dopamine plays a role in the valuation of negative reinforcement. Using rats as subjects, we first applied fast-scan cyclic voltammetry (FSCV) to determine that dopamine concentration decreases with the number of lever presses required to avoid electrical footshock (i.e., the economic price of avoidance). Analysis of the rate of decay of avoidance demand curves, which depict an inverse relationship between avoidance and increasing price, allows for inference of the worth an animal places on avoidance outcomes. Rapidly decaying demand curves indicate increased price sensitivity, or low worth placed on avoidance outcomes, while slow rates of decay indicate reduced price sensitivity, or greater worth placed on avoidance outcomes. We therefore used optogenetics to assess how inducing dopamine release causally modifies the demand to avoid electrical footshock in an economic setting. Increasing release at an avoidance-predictive cue made animals more sensitive to price, consistent with a negative reward prediction error (i.e., the animal perceives that it received a worse outcome than expected). Increasing release at avoidance made animals less sensitive to price, consistent with a positive reward prediction error (i.e., the animal perceives that it received a better outcome than expected). These data demonstrate that transient dopamine release events represent the value of avoidance outcomes and can predictably modify the demand to avoid. PMID:29766047
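A sketch of how the decay rate of a demand curve can be estimated. The exponential demand model of Hursh and Silberberg is a standard choice in behavioral economics, but the study's exact parameterization is not given in the abstract; all data below are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

# Exponential demand: log10(Q) = log10(Q0) + k * (exp(-alpha * Q0 * C) - 1),
# where C is price, Q0 is consumption at zero price, and alpha is the rate
# of decay (larger alpha => steeper decay => greater price sensitivity).
K = 2.0  # range constant, held fixed by convention across a data set

def log_demand(price, q0, alpha):
    return np.log10(q0) + K * (np.exp(-alpha * q0 * price) - 1.0)

price = np.array([1, 2, 4, 8, 16, 32], dtype=float)      # lever presses required
avoided = np.array([20, 19, 17, 12, 6, 2], dtype=float)  # hypothetical avoidance counts

(q0, alpha), _ = curve_fit(log_demand, price, np.log10(avoided),
                           p0=[20.0, 1e-3], bounds=([1.0, 1e-6], [100.0, 1.0]))
print(f"Q0 = {q0:.1f}, alpha = {alpha:.5f}")
```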
Sensitivity Analysis of Multidisciplinary Rotorcraft Simulations
NASA Technical Reports Server (NTRS)
Wang, Li; Diskin, Boris; Biedron, Robert T.; Nielsen, Eric J.; Bauchau, Olivier A.
2017-01-01
A multidisciplinary sensitivity analysis of rotorcraft simulations involving tightly coupled high-fidelity computational fluid dynamics and comprehensive analysis solvers is presented and evaluated. An unstructured sensitivity-enabled Navier-Stokes solver, FUN3D, and a nonlinear flexible multibody dynamics solver, DYMORE, are coupled to predict the aerodynamic loads and structural responses of helicopter rotor blades. A discretely consistent adjoint-based sensitivity analysis available in FUN3D provides sensitivities arising from unsteady turbulent flows and unstructured dynamic overset meshes, while a complex-variable approach is used to compute DYMORE structural sensitivities with respect to aerodynamic loads. The multidisciplinary sensitivity analysis is conducted by integrating the sensitivity components from each discipline of the coupled system. Numerical results verify the accuracy of the FUN3D/DYMORE system through simulations of a benchmark rotorcraft test model and comparison of the solutions with established analyses and experimental data. The complex-variable implementation of sensitivity analysis in DYMORE and in the coupled FUN3D/DYMORE system is verified by comparison with real-valued analysis and sensitivities. The correctness of the adjoint formulations for the FUN3D/DYMORE interfaces is verified by comparing adjoint-based and complex-variable sensitivities. Finally, sensitivities of the lift and drag functions obtained by complex-variable FUN3D/DYMORE simulations are compared with sensitivities computed by the multidisciplinary sensitivity analysis, which couples adjoint-based flow and grid sensitivities of FUN3D and the FUN3D/DYMORE interfaces with complex-variable sensitivities of DYMORE structural responses.
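The complex-variable approach used for the DYMORE sensitivities is, at its core, the complex-step derivative, which avoids the subtractive cancellation of finite differences; a minimal sketch with a toy function standing in for a structural response:

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-30):
    # f'(x) ~= Im[f(x + i*h)] / h, accurate to machine precision because
    # no difference of nearly equal numbers is ever formed.
    return np.imag(f(x + 1j * h)) / h

f = lambda x: np.exp(x) * np.sin(x)     # toy stand-in for a structural response
print(complex_step_derivative(f, 1.0))  # matches e^x (sin x + cos x) at x = 1
```

Because the step h can be made vanishingly small, the method yields derivatives limited only by floating-point precision, which is why it serves as a reference for verifying adjoint results.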
Management of Malignant Pleural Effusion: A Cost-Utility Analysis.
Shafiq, Majid; Frick, Kevin D; Lee, Hans; Yarmus, Lonny; Feller-Kopman, David J
2015-07-01
Malignant pleural effusion (MPE) is associated with a significant impact on health-related quality of life. Palliative interventions abound, with varying costs and degrees of invasiveness. We examined the relative cost-utility of 5 therapeutic alternatives for MPE among adults. Original studies investigating the management of MPE were extensively researched, and the most robust and current data, particularly those from the TIME2 trial, were chosen to estimate event probabilities. Medicare data were used for cost estimation. Utility estimates were adapted from 2 original studies and kept consistent with prior estimations. The decision tree model was based on clinical guidelines and the authors' consensus opinion. The primary outcome of interest was the incremental cost-effectiveness ratio for each intervention over a less effective alternative over a 6-month analytical horizon. Given the paucity of data on the rapid pleurodesis protocol, a sensitivity analysis was conducted to address the uncertainty surrounding its efficacy in achieving long-term pleurodesis. Except for repeated thoracentesis (RT; least effective), all interventions had similar effectiveness. The tunneled pleural catheter was the most cost-effective option, with an incremental cost-effectiveness ratio of $45,747 per QALY gained over RT, assuming a willingness-to-pay threshold of $100,000/QALY. Multivariate sensitivity analysis showed that the rapid pleurodesis protocol remained cost-ineffective even with an estimated probability of lasting pleurodesis of up to 85%. The tunneled pleural catheter is the most cost-effective therapeutic alternative to RT. This, together with its relative convenience (requiring neither hospitalization nor thoracoscopic procedural skills), makes it an intervention of choice for MPE.
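The core comparison is the incremental cost-effectiveness ratio (ICER); a sketch with invented costs and QALYs (only the $100,000/QALY willingness-to-pay threshold is taken from the abstract):

```python
def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    # Incremental cost per QALY gained of the new intervention over the reference.
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

# Hypothetical: tunneled pleural catheter vs repeated thoracentesis, 6 months.
ratio = icer(cost_new=12_000.0, qaly_new=0.30, cost_ref=8_000.0, qaly_ref=0.25)
WTP = 100_000.0  # willingness-to-pay threshold in $/QALY
print(f"ICER = ${ratio:,.0f}/QALY; cost-effective at threshold: {ratio < WTP}")
```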
Plasmonic light-sensitive skins of nanocrystal monolayers
NASA Astrophysics Data System (ADS)
Akhavan, Shahab; Gungor, Kivanc; Mutlugun, Evren; Demir, Hilmi Volkan
2013-04-01
We report plasmonically coupled light-sensitive skins of nanocrystal monolayers that exhibit sensitivity enhancement and spectral range extension with plasmonic nanostructures embedded in their photosensitive nanocrystal platforms. The deposited plasmonic silver nanoparticles of the device increase the optical absorption of a CdTe nanocrystal monolayer incorporated in the device. Controlled separation of these metallic nanoparticles in the vicinity of semiconductor nanocrystals enables optimization of the photovoltage buildup in the proposed nanostructure platform. The enhancement factor was found to depend on the excitation wavelength. We observed broadband sensitivity improvement (across 400-650 nm), with a 2.6-fold enhancement factor around the localized plasmon resonance peak. The simulation results were found to agree well with the experimental data. Such plasmonically enhanced nanocrystal skins hold great promise for large-area UV/visible sensing applications.
Reinforcement Sensitivity, Coping, and Delinquent Behaviour in Adolescents
ERIC Educational Resources Information Center
Hasking, Penelope A.
2007-01-01
Since 1964, the relationship between personality and criminal behaviour has been extensively studied. However, studies examining the Eysenckian dimensions of extraversion, neuroticism, and psychoticism have produced mixed results. Gray's [Gray, J. A. (1970). The psychophysiological basis of introversion-extroversion. "Behavior Research…
77 FR 14367 - Agency Information Collection Activities: Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-09
.... SUPPLEMENTARY INFORMATION: Proposal to request approval from OMB of the extension for three years, with revision...: 12 U.S.C. 3105(c)(2), 1817(a)(1) and (3), and 3102(b). Except for select sensitive items, the FFIEC...
Li, Wenbo; Zhao, Sheng; Wu, Nan; Zhong, Junwen; Wang, Bo; Lin, Shizhe; Chen, Shuwen; Yuan, Fang; Jiang, Hulin; Xiao, Yongjun; Hu, Bin; Zhou, Jun
2017-07-19
Wearable active sensors have extensive applications in mobile biosensing and human-machine interaction but require good flexibility, high sensitivity, excellent stability, and self-powered operation. In this work, cellular polypropylene (PP) piezoelectret was chosen as the core material of a sensitivity-enhanced wearable active voiceprint sensor (SWAVS) to realize voiceprint recognition. By virtue of the dipole orientation control method, the air layers in the piezoelectret were efficiently utilized, and the current sensitivity was enhanced (from 1.98 pA/Hz to 5.81 pA/Hz at 115 dB). The SWAVS exhibited the advantages of high sensitivity, accurate frequency response, and excellent stability. The voiceprint recognition system could respond correctly to human voices by judging both the password and the speaker. This study presents a voiceprint sensor with potential applications in noncontact biometric recognition and safety guarantee systems, promoting the progress of wearable sensor networks.
Schwensen, Jakob Ferløv; Friis, Ulrik Fischer; Menné, Torkil; Flyvholm, Mari-Ann; Johansen, Jeanne Duus
2017-05-01
The aim of the study is to investigate risk factors for sensitization to preservatives and to examine to what extent different preservatives are registered in chemical products for occupational use in Denmark. A retrospective epidemiological observational analysis of data from a university hospital was conducted. All patients had occupational contact dermatitis and were consecutively patch tested with 11 preservatives from the European baseline series and extended patch test series during a 5-year period (2009-2013). Information regarding the same preservatives in chemical products for occupational use ('substances and materials') registered in the Danish Product Register Database (PROBAS) was obtained. The frequency of preservative contact allergy was 14.2% (n = 141) among 995 patients with occupational contact dermatitis. Patients with preservative contact allergy significantly more frequently had facial dermatitis (19.9 versus 13.1%) and age > 40 years (71.6 versus 45.8%) than patients without preservative contact allergy, whereas atopic dermatitis was less frequently observed (12.1 versus 19.8%). Preservative contact allergy was more frequent in painters with occupational contact dermatitis than in non-painters with occupational contact dermatitis (p < 0.001), mainly driven by contact allergy to methylisothiazolinone and to formaldehyde. Analysis of the registered substances and materials in PROBAS revealed that preservatives occurred in several product categories, e.g., 'paints and varnishes', 'cleaning agents', 'cooling agents', and 'polishing agents'. Formaldehyde and isothiazolinones were extensively registered in PROBAS. The extensive use of formaldehyde and isothiazolinones in chemical products for occupational use may be problematic for workers. Appropriate legislation, substitution, and employee education should be prioritized.
Liu, Kuangyi; Song, Yonggui; Liu, Yali; Peng, Mi; Li, Hanyun; Li, Xueliang; Feng, Bingwei; Xu, Pengfei; Su, Dan
2017-05-30
Currently, the pharmacokinetic (PK) research of herbal medicines is still limited and faces critical technical challenges in the quantitative analysis of multiple components from biological matrices, which is often hampered by the lack of authentic standards and by low concentrations. The present work contributes an integrated strategy for extensive pharmacokinetic assessment: a selective and sensitive method for multi-component analysis, independent of authentic standards, based on ultra-performance liquid chromatography/quadrupole-time-of-flight/MSE (UPLC-TOF-MSE) and UPLC-TOF-MRM (enhanced target). Initially, phytochemicals were identified by UPLC-TOF-MSE analysis; subsequently, the identified components were matched with authentic standards and pre-classified, and the UPLC-QTOF-MRM method was optimized and developed. To guarantee reliable results, three rules are necessary: (1) detection with a mass error of less than 5 ppm; (2) high structural similarity, within the same chemical class, between analytes with and without an authentic reference substance; (3) a retention time match between TOF-MRM mode and TOF-MSE within 0.2 min. The developed and validated method was applied to the simultaneous determination of 12 lignans in rat plasma after administration of wine-processed Schisandra chinensis fructus (WPSCF) extract. This approach was found capable of providing extensive pharmacokinetic profiles of multiple components absorbed into blood after oral administration of WPSCF extract. The results also indicated a significant difference in the pharmacokinetic parameters of dibenzocyclooctadiene lignans between schizandrin and gomisin compounds. Absorption of the lignans via the gastrointestinal tract was rapid in all cases, with relatively long retention times, especially for schisantherin A and schisantherin B, which showed higher plasma exposure. Copyright © 2017 Elsevier B.V. All rights reserved.
Mertens, Janina; Stock, Stephanie; Lüngen, Markus; von Berg, Andrea; Krämer, Ursula; Filipiak-Pittroff, Birgit; Heinrich, Joachim; Koletzko, Sibylle; Grübl, Armin; Wichmann, H-Erich; Bauer, Carl-P; Reinhardt, Dietrich; Berdel, Dietrich; Gerber, Andreas
2012-09-01
The German Infant Nutritional Intervention (GINI) trial, a prospective, randomized, double-blind intervention, enrolled children with a hereditary risk for atopy. When children were fed certain hydrolyzed formulas for the first 4 months of life, the risk was reduced by 26-45% in per-protocol (PP) and 8-29% in intention-to-treat (ITT) analyses, compared with children fed regular cow's milk, at age 6. The objective was to assess the cost-effectiveness of feeding hydrolyzed formulas. Cost-effectiveness was assessed with a decision tree model programmed in TreeAge. Costs and effects over a 6-yr period were analyzed from the perspective of the German statutory health insurance (SHI) and a societal perspective at a 3% effective discount rate, followed by sensitivity analyses. The extensively hydrolyzed casein formula would be the most cost-saving strategy, with savings of 478 € per child treated in the ITT analysis (95% CI: 12 €; 852 €) and 979 € in the PP analysis (95% CI: 355 €; 1455 €) from a societal perspective. If prevented cases are considered, the partially hydrolyzed whey formula is cost-saving (ITT -5404 €, PP -6358 €). From an SHI perspective, the partially hydrolyzed whey formula is cost-effective, but may also be cost-saving depending on the scenario. An extensively hydrolyzed whey formula, also included in the analysis, was dominated in all analyses. For the prevention of AE, two formulas can be cost-effective or even cost-saving. We recommend that SHI should reimburse formula feeding, or at least the difference between the costs of cow's milk formula and the most cost-effective formula. © 2012 John Wiley & Sons A/S.
Coyle, Kathryn; Carrier, Marc; Lazo-Langner, Alejandro; Shivakumar, Sudeep; Zarychanski, Ryan; Tagalakis, Vicky; Solymoss, Susan; Routhier, Nathalie; Douketis, James; Coyle, Douglas
2017-03-01
Unprovoked venous thromboembolism (VTE) can be the first manifestation of cancer. It is unclear whether extensive screening for occult cancer, including a comprehensive computed tomography (CT) scan of the abdomen/pelvis, is cost-effective in this patient population. The objectives were to assess the health-care-related costs, the number of missed cancer cases, and the health-related utility values of a limited screening strategy with and without the addition of a comprehensive CT scan of the abdomen/pelvis, and to identify to what extent testing should be done in these circumstances to allow early detection of occult cancers. The current analyses used cost-effectiveness methods applied to data collected alongside the SOME randomized controlled trial, which compared an extensive occult-cancer screening strategy including a CT of the abdomen/pelvis with a more limited screening strategy in patients with a first unprovoked VTE. Analyses were conducted over a one-year time horizon from a Canadian health-care perspective. The primary analysis was based on complete cases, with sensitivity analysis using appropriate multiple imputation methods to account for missing data. Data from a total of 854 patients with a first unprovoked VTE were included in these analyses. The addition of a comprehensive CT scan was associated with higher costs ($551 CDN) with no improvement in utility values or in the number of missed cancers. Results were consistent when adopting multiple imputation methods. The addition of a comprehensive CT scan of the abdomen/pelvis to the screening for occult cancer in patients with unprovoked VTE is not cost-effective, as it is both more costly and no more effective in detecting occult cancer. Copyright © 2017 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strydom, Gerhard; Bostelmann, F.
The continued development of High Temperature Gas Cooled Reactors (HTGRs) requires verification of HTGR design and safety features with reliable high-fidelity physics models and robust, efficient, and accurate codes. The predictive capability of coupled neutronics/thermal-hydraulics and depletion simulations for reactor design and safety analysis can be assessed with sensitivity analysis (SA) and uncertainty analysis (UA) methods. Uncertainty originates from errors in physical data, manufacturing uncertainties, and modelling and computational algorithms. (The interested reader is referred to the large body of published SA and UA literature for a more complete overview of the various types of uncertainties, methodologies, and results obtained.) SA is helpful for ranking the various sources of uncertainty and error in the results of core analyses. SA and UA are required to address cost, safety, and licensing needs and should be applied to all aspects of reactor multi-physics simulation. SA and UA can guide experimental, modelling, and algorithm research and development. Current SA and UA rely either on stochastic sampling methods or on derivative-based approaches such as generalized perturbation theory to obtain sensitivity coefficients; neither approach addresses all needs. In order to benefit from recent advances in modelling and simulation and the availability of new covariance data (nuclear data uncertainties), extensive sensitivity and uncertainty studies are needed to quantify the impact of different sources of uncertainty on the design and safety parameters of HTGRs. Only a parallel effort in advanced simulation and in nuclear data improvement will be able to provide designers with more robust and well-validated calculation tools to meet design target accuracies. In February 2009, the Technical Working Group on Gas-Cooled Reactors (TWG-GCR) of the International Atomic Energy Agency (IAEA) recommended that the proposed Coordinated Research Program (CRP) on HTGR Uncertainty Analysis in Modelling (UAM) be implemented. This CRP is a continuation of previous IAEA and Organization for Economic Co-operation and Development (OECD)/Nuclear Energy Agency (NEA) international activities on Verification and Validation (V&V) of available analytical capabilities for HTGR simulation for design and safety evaluations. Within the framework of these activities, different numerical and experimental benchmark problems were performed and insight was gained about specific physics phenomena and the adequacy of analysis methods.
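As a concrete illustration of the stochastic-sampling branch of SA, a sketch of sensitivity ranking via standardized regression coefficients computed from Monte Carlo samples; the linear toy model and its three inputs are invented stand-ins for a reactor-physics code and its uncertain nuclear data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=(n, 3))   # samples of 3 uncertain inputs
# Toy response: input 0 dominates, input 1 is weaker, input 2 is inert.
y = 2.0 * x[:, 0] + 0.5 * x[:, 1] + rng.normal(scale=0.1, size=n)

beta, *_ = np.linalg.lstsq(x, y - y.mean(), rcond=None)
src = beta * x.std(axis=0) / y.std()  # standardized regression coefficients
print(np.round(src, 3))               # ranks input 0 as most influential
```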
Extension Procedures for Confirmatory Factor Analysis
ERIC Educational Resources Information Center
Nagy, Gabriel; Brunner, Martin; Lüdtke, Oliver; Greiff, Samuel
2017-01-01
We present factor extension procedures for confirmatory factor analysis that provide estimates of the relations of common and unique factors with external variables that do not undergo factor analysis. We present identification strategies that build upon restrictions of the pattern of correlations between unique factors and external variables. The…
Granja, M F; Pedraza, C M; Flórez, D C; Romero, J A; Palau, M A; Aguirre, D A
To evaluate the diagnostic performance of the length of tumor contact with the capsule (LTC) and the apparent diffusion coefficient (ADC) map in the prediction of microscopic extracapsular extension in patients with prostate cancer who are candidates for radical prostatectomy. We used receiver operating characteristic (ROC) curves to retrospectively study the diagnostic performance of the ADC map and the LTC as predictors of microscopic extracapsular extension in 92 patients with prostate cancer and moderate to high risk who were examined between May 2011 and December 2013. The optimal cutoff for the ADC map was 0.87 × 10-3 mm2/s, which yielded an area under the ROC curve of 72% (95% CI: 57%-86%), corresponding to a sensitivity of 83% and a specificity of 61%. The optimal cutoff for the LTC was 17.5 mm, which yielded an area under the ROC curve of 74% (95% CI: 61%-87%), corresponding to a sensitivity of 91% and a specificity of 57%. Combining the two criteria improved the diagnostic performance, yielding an area under the ROC curve of 77% (95% CI: 62%-92%), corresponding to a sensitivity of 77% and a specificity of 61%. We fitted a logistic regression model, obtaining an area under the ROC curve of 82% (95% CI: 73%-93%). Using quantitative measures improves the diagnostic accuracy of multiparametric magnetic resonance imaging in the staging of prostate cancer. The values of the ADC and LTC were predictors of microscopic extracapsular extension, and the best results were obtained when both values were used in combination. Copyright © 2017 SERAM. Publicado por Elsevier España, S.L.U. All rights reserved.
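A sketch of the combined-predictor step: fit a logistic regression on the two quantitative measures and score it by the area under the ROC curve. The effect directions (lower ADC and longer LTC with extracapsular extension) follow the abstract, but the data are synthetic and all magnitudes invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 92
ece = rng.integers(0, 2, size=n)             # microscopic extracapsular extension (0/1)
adc = rng.normal(1.0, 0.15, n) - 0.15 * ece  # ADC (x10-3 mm2/s), lower with ECE
ltc = rng.normal(15.0, 5.0, n) + 5.0 * ece   # tumor-capsule contact length (mm)
X = np.column_stack([adc, ltc])

model = LogisticRegression().fit(X, ece)
auc = roc_auc_score(ece, model.predict_proba(X)[:, 1])
print(f"AUC of combined ADC+LTC model: {auc:.2f}")
```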
NASA Astrophysics Data System (ADS)
Chen, Yajing; Xiong, Zhichao; Zhang, Lingyi; Zhao, Jiaying; Zhang, Quanqing; Peng, Li; Zhang, Weibing; Ye, Mingliang; Zou, Hanfa
2015-02-01
Highly selective and efficient capture of glycosylated proteins and peptides from complex biological samples is of profound significance for the discovery of disease biomarkers in biological systems. Recently, hydrophilic interaction liquid chromatography (HILIC)-based functional materials have been extensively utilized for glycopeptide enrichment. However, the low amount of immobilized hydrophilic groups on the affinity material has limited its specificity, detection sensitivity and binding capacity in the capture of glycopeptides. Herein, a novel affinity material was synthesized to improve the binding capacity and detection sensitivity for glycopeptides by coating a poly(2-(methacryloyloxy)ethyl)-dimethyl-(3-sulfopropyl) ammonium hydroxide (PMSA) shell onto Fe3O4@SiO2 nanoparticles, taking advantage of reflux-precipitation polymerization for the first time (denoted as Fe3O4@SiO2@PMSA). The thick polymer shell endows the nanoparticles with excellent hydrophilic property and several functional groups on the polymer chains. The resulting Fe3O4@SiO2@PMSA demonstrated an outstanding ability for glycopeptide enrichment with high selectivity, extremely high detection sensitivity (0.1 fmol), large binding capacity (100 mg g-1), high enrichment recovery (above 73.6%) and rapid magnetic separation. Furthermore, in the analysis of real complicated biological samples, 905 unique N-glycosylation sites from 458 N-glycosylated proteins were reliably identified in three replicate analyses of a 65 μg protein sample extracted from mouse liver, showing the great potential of Fe3O4@SiO2@PMSA in the detection and identification of low-abundance N-linked glycopeptides in biological samples.
NASA Astrophysics Data System (ADS)
Opsahl, Stephen; Benner, Ronald
1995-12-01
Long-term subaqueous decomposition patterns of five different vascular plant tissues, including mangrove leaves and wood (Avicennia germinans), cypress needles and wood (Taxodium distichum) and smooth cordgrass (Spartina alterniflora), were followed for a period of 4.0 years, representing the longest litter bag decomposition study to date. All tissues decomposed under identical conditions and final mass losses were 97, 68, 86, 39, and 93%, respectively. Analysis of the lignin component of herbaceous tissues using alkaline CuO oxidation was complicated by the presence of a substantial ester-bound phenol component composed primarily of cinnamyl phenols. To overcome this problem, we introduce a new parameter to represent lignin, Λ6, comprising only the six syringyl and vanillyl phenols; it was found to be much less sensitive to diagenetic variation than the commonly used parameter Λ, which includes the cinnamyl phenols. Patterns of change in lignin content were strongly dependent on tissue type, ranging from 77% enrichment in smooth cordgrass to 6% depletion in cypress needles. In contrast, depletion of cutin was extensive (65-99%) in all herbaceous tissues. Despite these differences in the overall reactivity of lignin and cutin, both macromolecules were extensively degraded during the decomposition period. The long-term decomposition series also provided very useful information about the compositional parameters derived from the specific oxidation products of both lignin and cutin. The relative lability of ester-bound cinnamyl phenols compromised their use in parameters to distinguish woody from herbaceous plant debris. The dimer-to-monomer ratios of lignin-derived phenols indicated that most intermonomeric linkages in lignin degraded at similar rates. Acid-to-aldehyde ratios of vanillyl and syringyl phenols became elevated, particularly during the latter stages of decomposition, supporting the use of these parameters as indicators of diagenetic alteration. Given the observation that cutin-derived source indicator parameters were generally more sensitive to diagenetic alteration than those of lignin, we suggest that the distributional patterns of cutin-derived acids and their associated positional isomers may be most useful for tissue-specific distinctions, complementing the general categorical information obtained from lignin phenol analysis alone.
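A sketch of how the phenol-based lignin parameters relate to one another; the yields below are invented and assumed to be in the conventional units of mg per 100 mg of organic carbon:

```python
# CuO oxidation product yields (hypothetical values).
vanillyl = {"vanillin": 0.8, "acetovanillone": 0.2, "vanillic_acid": 0.3}
syringyl = {"syringaldehyde": 0.9, "acetosyringone": 0.2, "syringic_acid": 0.3}
cinnamyl = {"p_coumaric_acid": 0.6, "ferulic_acid": 0.4}

lambda6 = sum(vanillyl.values()) + sum(syringyl.values())   # L6: six S + V phenols only
lambda_all = lambda6 + sum(cinnamyl.values())               # L: adds the cinnamyl phenols
ad_al_v = vanillyl["vanillic_acid"] / vanillyl["vanillin"]  # (Ad/Al)v diagenesis indicator

print(f"Lambda6 = {lambda6:.2f}, Lambda = {lambda_all:.2f}, (Ad/Al)v = {ad_al_v:.2f}")
```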
The Acetylene-Ethylene Assay for N2 Fixation: Laboratory and Field Evaluation
Hardy, R. W. F.; Holsten, R. D.; Jackson, E. K.; Burns, R. C.
1968-01-01
The methodology, characteristics, and application of the sensitive C2H2-C2H4 assay for N2 fixation by nitrogenase preparations and bacterial cultures in the laboratory, and by legumes and free-living bacteria in situ, are presented in this comprehensive report. This assay is based on the N2ase-catalyzed reduction of C2H2 to C2H4, gas chromatographic isolation of C2H2 and C2H4, and quantitative measurement with a H2-flame analyzer. As little as 1 μμmole (1 pmol) of C2H4 can be detected, providing a sensitivity 1000-fold greater than is possible with 15N analysis. A simple, rapid and effective procedure utilizing syringe-type assay chambers is described for the analysis of C2H2-reducing activity in the field. Applications to field samples included an evaluation of N2 fixation by commercially grown soybeans based on over 2000 analyses made during the course of the growing season. Assay values reflected the degree of nodulation of soybean plants and indicated a calculated seasonal N2 fixation rate of 30 to 33 kg N2 fixed per acre, in good agreement with literature estimates based on Kjeldahl analyses. The assay was successfully applied to measurements of N2 fixation by other symbionts and by free-living soil microorganisms, and was also used to assess the effects of light and temperature on the N2-fixing activity of soybeans. The validity of measuring N2 fixation in terms of C2H2 reduction was established through extensive comparisons of these activities using defined systems, including purified N2ase preparations and pure cultures of N2-fixing bacteria. With this assay it now becomes possible and practicable to conduct comprehensive surveys of N2 fixation, to make detailed comparisons among different N2-fixing symbionts, and to rapidly evaluate the effects of cultural practices and environmental factors on N2 fixation. The knowledge obtained through extensive application of this assay should provide the basis for efforts leading to the maximum agricultural exploitation of the N2 fixation reaction. PMID:16656902
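A sketch of the arithmetic behind translating ethylene production into N2 fixed, using the theoretical 3:1 (C2H4:N2) electron-equivalence ratio (2 e- per C2H2 reduced versus 6 e- per N2); field calibrations often adjust this factor, and the measured rate below is invented:

```python
c2h4_rate = 9.0             # umol C2H4 per plant per hour (hypothetical measurement)
n2_rate = c2h4_rate / 3.0   # umol N2 fixed per plant per hour (theoretical 3:1 ratio)

UG_N_PER_UMOL_N2 = 28.0     # each umol of N2 carries 28 ug of nitrogen
print(f"{n2_rate:.1f} umol N2/plant/h = {n2_rate * UG_N_PER_UMOL_N2:.0f} ug N/plant/h")
```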
A review of mechanistic insight and application of pH-sensitive liposomes in drug delivery.
Paliwal, Shivani Rai; Paliwal, Rishi; Vyas, Suresh P
2015-05-01
pH-sensitive liposomes have been extensively used as an alternative to conventional liposomes for effective intracellular delivery of therapeutics, antigens, DNA, and diagnostics to various compartments of the target cell. Such liposomes are destabilized under the acidic conditions of the endocytotic pathway, as they usually contain pH-sensitive lipid components. The encapsulated content is therefore delivered into the intracellular bio-environment through destabilization or fusion with the endosomal membrane. The therapeutic efficacy of pH-sensitive liposomes establishes them as a biomaterial with commercial utility, especially in cancer treatment. In addition, targeting ligands, including antibodies, can be anchored on the surface of pH-sensitive liposomes to target specific cell-surface receptors or antigens present on tumor cells. These vesicles have also been widely explored for antigen delivery and serve as an immunological adjuvant to enhance the immune response to antigens. The present review deals with recent research updates on the application of pH-sensitive liposomes in chemotherapy, diagnostics, and antigen and gene delivery.
Improvements to the YbF electron electric dipole moment experiment
NASA Astrophysics Data System (ADS)
Sauer, B. E.; Rabey, I. M.; Devlin, J. A.; Tarbutt, M. R.; Ho, C. J.; Hinds, E. A.
2017-04-01
The standard model of particle physics predicts that the permanent electric dipole moment (EDM) of the electron is very nearly zero. Many extensions to the standard model predict an electron EDM just below current experimental limits. We are currently working to improve the sensitivity of the Imperial College YbF experiment. We have implemented combined laser-radiofrequency pumping techniques which both increase the number of molecules that participate in the EDM experiment and increase the probability of detection. Combined, these techniques give nearly a two-order-of-magnitude increase in experimental sensitivity. At this enhanced sensitivity, magnetic effects which were previously negligible become important. We have developed a new way to construct the electrodes for the electric field plates which minimizes the effect of magnetic Johnson noise. The new YbF experiment is expected to be comparable in sensitivity to the most sensitive measurements of the electron EDM to date. We will also discuss laser cooling techniques which promise an even larger increase in sensitivity.
ERIC Educational Resources Information Center
Hwang, Heungsun; Montreal, Hec; Dillon, William R.; Takane, Yoshio
2006-01-01
An extension of multiple correspondence analysis is proposed that takes into account cluster-level heterogeneity in respondents' preferences/choices. The method involves combining multiple correspondence analysis and k-means in a unified framework. The former is used for uncovering a low-dimensional space of multivariate categorical variables…
Tetracycline and its derivatives are extensively used human and animal antibiotics, and enter stream ecosystems via point and non-point sources. Laboratory studies indicate that microbial organisms are more sensitive to antibiotics than invertebrates or fish, and may indicate t...
PRNP variants in goats reduce sensitivity of detection of PrPSc by immunoassay
USDA-ARS?s Scientific Manuscript database
Immunoassays are extensively utilized in disease diagnostics with monoclonal antibodies serving as critical tools within the assay. Detection of scrapie in sheep and goats relies heavily on immunoassays including immunohistochemistry, western blotting, and ELISA. In the United States, regulatory tes...
Testing sterile neutrino extensions of the Standard Model at future lepton colliders
NASA Astrophysics Data System (ADS)
Antusch, Stefan; Fischer, Oliver
2015-05-01
Extending the Standard Model (SM) with sterile ("right-handed") neutrinos is one of the best motivated ways to account for the observed neutrino masses. We discuss the expected sensitivity of future lepton collider experiments for probing such extensions. An interesting testable scenario is given by "symmetry protected seesaw models", which theoretically allow for sterile neutrino masses around the electroweak scale with up to order one mixings with the light (SM) neutrinos. In addition to indirect tests, e.g. via electroweak precision observables, sterile neutrinos with masses around the electroweak scale can also be probed by direct searches, e.g. via sterile neutrino decays at the Z pole, deviations from the SM cross section for four lepton final states at and beyond the WW threshold and via Higgs boson decays. We study the present bounds on sterile neutrino properties from LEP and LHC as well as the expected sensitivities of possible future lepton colliders such as ILC, CEPC and FCC-ee (TLEP).
2007-01-01
multi-disciplinary optimization with uncertainty. Robust optimization and sensitivity analysis is usually used when an optimization model has... formulation is introduced in Section 2.3. We briefly discuss several definitions used in the sensitivity analysis in Section 2.4. Following in... 2.5. 2.4 SENSITIVITY ANALYSIS In this section, we discuss several definitions used in Chapter 5 for Multi-Objective Sensitivity Analysis. Inner
Frempong, Samuel N; Sutton, Andrew J; Davenport, Clare; Barton, Pelham
2018-02-01
There is little specific guidance on the implementation of cost-effectiveness modelling at the early stage of test development. The aim of this study was to review the literature in this field to examine the methodologies and tools that have been employed to date. Areas Covered: A systematic review to identify relevant studies in established literature databases. Five studies were identified and included for narrative synthesis. These studies revealed that there is no consistent approach in this growing field. The perspective of patients and the potential for value of information (VOI) to provide information on the value of future research is often overlooked. Test accuracy is an essential consideration, with most studies having described and included all possible test results in their analysis, and conducted extensive sensitivity analyses on important parameters. Headroom analysis was considered in some instances but at the early development stage (not the concept stage). Expert commentary: The techniques available to modellers that can demonstrate the value of conducting further research and product development (i.e. VOI analysis, headroom analysis) should be better utilized. There is the need for concerted efforts to develop rigorous methodology in this growing field to maximize the value and quality of such analysis.
Wang, Jianmiao; Xu, Yongjian; Liu, Xiansheng; Xiong, Weining; Xie, Jungang; Zhao, Jianping
2016-01-01
Problem-based learning (PBL) has been extensively applied as an experimental educational method in Chinese medical schools over the past decade. A meta-analysis was performed to assess the effectiveness of PBL on students’ learning outcomes in physical diagnostics education. Related databases were searched for eligible studies evaluating the effects of PBL compared to traditional teaching on students’ knowledge and/or skill scores of physical diagnostics. Standardized mean difference (SMD) with 95% confidence interval (CI) was estimated. Thirteen studies with a total of 2086 medical students were included in this meta-analysis. All of these studies provided usable data on knowledge scores, and the pooled analysis showed a significant difference in favor of PBL compared to the traditional teaching (SMD = 0.76, 95%CI = 0.33–1.19). Ten studies provided usable data on skill scores, and a significant difference in favor of PBL was also observed (SMD = 1.46, 95%CI = 0.89–2.02). Statistically similar results were obtained in the sensitivity analysis, and there was no significant evidence of publication bias. These results suggested that PBL in physical diagnostics education in China appeared to be more effective than traditional teaching method in improving knowledge and skills. PMID:27808158
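A sketch of the per-study effect size behind such pooled results: the standardized mean difference computed as Hedges' g (Cohen's d with a small-sample bias correction). The group means, SDs, and sizes below are invented:

```python
import numpy as np

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    # Pooled standard deviation, Cohen's d, then Hedges' correction factor J.
    sp = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    j = 1.0 - 3.0 / (4.0 * (n1 + n2) - 9.0)
    return j * d

# Hypothetical knowledge scores: PBL group vs traditional teaching.
print(f"SMD = {hedges_g(82.0, 6.5, 80, 78.0, 7.0, 80):.2f}")
```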
Wang, Jun-Sheng; Olszewski, Emily; Devine, Erin E; Hoffman, Matthew R; Zhang, Yu; Shao, Jun; Jiang, Jack J
2016-08-01
To evaluate the spatiotemporal correlation of vocal fold vibration using eigenmode analysis before and after polyp removal, and to explore the potential clinical relevance of spatiotemporal correlation length and entropy as quantitative voice parameters. We hypothesized that increased order in the vibrating signal after surgical intervention would decrease the eigenmode-based entropy and increase the correlation length. Prospective case series. Forty subjects (23 males, 17 females) with unilateral (n = 24) or bilateral (n = 16) polyps underwent polyp removal. High-speed videoendoscopy was performed preoperatively and 2 weeks postoperatively. Spatiotemporal analysis was performed to determine entropy, a quantification of signal disorder, and correlation length, the size of the spatially ordered structure of vocal fold vibration relative to full spatial consistency. The signal analyzed consists of the vibratory pattern in space and time derived from the high-speed video glottal area contour. Entropy decreased (Z = -3.871, P < .001) and correlation length increased (t = -8.913, P < .001) following polyp excision. The intraclass correlation coefficients (ICC) for correlation length and entropy were 0.84 and 0.93. Correlation length and entropy are sensitive to mass lesions. These parameters could potentially be used to augment subjective visualization after polyp excision when evaluating procedural efficacy. © The Author(s) 2016.
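A heavily hedged sketch of one way an eigenmode-based entropy can be computed, via singular value decomposition of the space-time vibration signal; the abstract does not specify the exact estimator, and the signal below is synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 0.1, 400)    # 0.1 s of vibration (hypothetical sampling)
x = np.linspace(0, 1, 64)       # positions along the glottal contour
clean = np.outer(np.sin(2 * np.pi * x), np.sin(2 * np.pi * 150 * t))
signal = clean + 0.2 * rng.normal(size=clean.shape)  # disorder, e.g. from a lesion

_, s, _ = np.linalg.svd(signal, full_matrices=False)
p = s**2 / np.sum(s**2)         # energy fraction captured by each eigenmode
entropy = -np.sum(p * np.log(p))
print(f"eigenmode entropy = {entropy:.3f}")  # expected to drop after polyp removal
```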
Huang, Zhuo; Ito, Kazuaki; Morita, Isamu; Yokota, Kuriko; Fukushi, Keiichi; Timerbaev, Andrei R; Watanabe, Shuichi; Hirokawa, Takeshi
2005-08-01
Using a novel high-sensitivity capillary electrophoretic method, vertical distributions of iodate, iodide, total inorganic iodine, dissolved organic iodine and total iodine in the North Pacific Ocean (0-5500 m) were determined without any sample pre-treatment other than UV irradiation before total iodine analysis. An extensive set of data demonstrated that the iodine behaviour in the ocean water collected during a cruise in the North Pacific Ocean in February-March 2003 was not conservative but correlated with variations in concentrations of dissolved oxygen and nutrient elements such as silicon, nitrogen and phosphorus. This suggests that the vertical distribution of iodine is associated with biological activities. The dissolved organic iodine was found in the euphotic zone in accord with observations elsewhere in the oceans. The vertical profile of dissolved organic iodine also appears to be related to biogeochemical activity. The concentrations of all measured iodine species vary noticeably above 1000 m but only minor latitudinal changes occur below 1000 m and slight vertical alterations can be observed below 2400 m. These findings are thought to reflect the stratification of nutrients and iodine species with different biological activities in the water column.
Nanotopography-guided tissue engineering and regenerative medicine.
Kim, Hong Nam; Jiao, Alex; Hwang, Nathaniel S; Kim, Min Sung; Kang, Do Hyun; Kim, Deok-Ho; Suh, Kahp-Yang
2013-04-01
Human tissues are intricate ensembles of multiple cell types embedded in complex and well-defined structures of the extracellular matrix (ECM). The organization of ECM is frequently hierarchical from nano to macro, with many proteins forming large scale structures with feature sizes up to several hundred microns. Inspired from these natural designs of ECM, nanotopography-guided approaches have been increasingly investigated for the last several decades. Results demonstrate that the nanotopography itself can activate tissue-specific function in vitro as well as promote tissue regeneration in vivo upon transplantation. In this review, we provide an extensive analysis of recent efforts to mimic functional nanostructures in vitro for improved tissue engineering and regeneration of injured and damaged tissues. We first characterize the role of various nanostructures in human tissues with respect to each tissue-specific function. Then, we describe various fabrication methods in terms of patterning principles and material characteristics. Finally, we summarize the applications of nanotopography to various tissues, which are classified into four types depending on their functions: protective, mechano-sensitive, electro-active, and shear stress-sensitive tissues. Some limitations and future challenges are briefly discussed at the end. Crown Copyright © 2012. Published by Elsevier B.V. All rights reserved.
Effects of emotional valence and arousal on the voice perception network
Kotz, Sonja A.; Belin, Pascal
2017-01-01
Abstract Several theories conceptualise emotions along two main dimensions: valence (a continuum from negative to positive) and arousal (a continuum from low to high). These dimensions are typically treated as independent in many neuroimaging experiments, yet recent behavioural findings suggest that they are actually interdependent. This result has implications for neuroimaging design, analysis, and theoretical development. We were interested in determining the extent of this interdependence both behaviourally and neuroanatomically, as well as in teasing apart any activation that is specific to each dimension. While we found extensive overlap in activation for each dimension in traditional emotion areas (bilateral insulae, orbitofrontal cortex, amygdalae), we also found activation specific to each dimension, with characteristic relationships between modulations of these dimensions and BOLD signal change. Increases in arousal ratings were related to increased activations predominantly in voice-sensitive cortices after variance explained by valence had been removed. In contrast, emotions of extreme valence were related to increased activations in bilateral voice-sensitive cortices, hippocampi, anterior and mid-cingulum and medial orbito- and superior frontal regions after variance explained by arousal had been accounted for. Our results therefore do not support a complete segregation of the brain structures underpinning the processing of affective dimensions. PMID:28449127
Optimization of bicelle lipid composition and temperature for EPR spectroscopy of aligned membranes.
McCaffrey, Jesse E; James, Zachary M; Thomas, David D
2015-01-01
We have optimized the magnetic alignment of phospholipid bilayered micelles (bicelles) for EPR spectroscopy, by varying lipid composition and temperature. Bicelles have been extensively used in NMR spectroscopy for several decades, in order to obtain aligned samples in a near-native membrane environment and take advantage of the intrinsic sensitivity of magnetic resonance to molecular orientation. Recently, bicelles have also seen increasing use in EPR, which offers superior sensitivity and orientational resolution. However, the low magnetic field strength (less than 1 T) of most conventional EPR spectrometers results in homogeneously oriented bicelles only at a temperature well above physiological. To optimize bicelle composition for magnetic alignment at reduced temperature, we prepared bicelles containing varying ratios of saturated (DMPC) and unsaturated (POPC) phospholipids, using EPR spectra of a spin-labeled fatty acid to assess alignment as a function of lipid composition and temperature. Spectral analysis showed that bicelles containing an equimolar mixture of DMPC and POPC homogeneously align at 298 K, 20 K lower than conventional DMPC-only bicelles. It is now possible to perform EPR studies of membrane protein structure and dynamics in well-aligned bicelles at physiological temperatures and below. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Yu, Peng; Zhang, Xiaohua; Zhou, Jiawan; Xiong, Erhu; Li, Xiaoyu; Chen, Jinhua
2015-11-01
A novel competitive host-guest strategy regulated by a protein biogate was developed for sensitive and selective analysis of prion protein. The methylene blue (MB)-tagged prion aptamer (MB-Apt) was introduced onto the multiwalled carbon nanotubes-β-cyclodextrins (MWCNTs-β-CD) composite-modified glassy carbon (GC) electrode through the host-guest interaction between β-CD and MB. In the absence of prion, MB-Apt could be displaced by ferrocenecarboxylic acid (FCA) owing to FCA's stronger binding affinity to β-CD, resulting in a large oxidation peak of FCA. In the presence of prion, however, the specific prion-aptamer interaction drove the formation of a protein biogate sealing the cavity of β-CD, which hindered the displacement of MB by FCA and resulted in an increase in the oxidation peak current of MB (IMB) and a decrease in that of FCA (IFCA). The developed aptasensor showed a good response towards the target (prion protein), with a low detection limit of 160 fM. By changing the specific aptamers, this strategy could easily be extended to detect other proteins, showing promising potential for extensive applications in bioanalysis.
Coal gasification systems engineering and analysis, volume 2
NASA Technical Reports Server (NTRS)
1980-01-01
The major design related features of each generic plant system were characterized in a catalog. Based on the catalog and requirements data, approximately 17 designs and cost estimates were developed for MBG and alternate products. A series of generic trade studies was conducted to support all of the design studies. A set of cost and programmatic analyses were conducted to supplement the designs. The cost methodology employed for the design and sensitivity studies was documented and implemented in a computer program. Plant design and construction schedules were developed for the K-T, Texaco, and B&W MBG plant designs. A generic work breakdown structure was prepared, based on the K-T design, to coincide with TVA's planned management approach. An extensive set of cost sensitivity analyses was completed for K-T, Texaco, and B&W design. Product price competitiveness was evaluated for MBG and the alternate products. A draft management policy and procedures manual was evaluated. A supporting technology development plan was developed to address high technology risk issues. The issues were identified and ranked in terms of importance and tractability, and a plan developed for obtaining data or developing technology required to mitigate the risk.
Integrating Solar PV in Utility System Operations: Analytical Framework and Arizona Case Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Jing; Botterud, Audun; Mills, Andrew
2015-06-01
A systematic framework is proposed to estimate the impact on operating costs due to uncertainty and variability in renewable resources. The framework quantifies the integration costs associated with subhourly variability and uncertainty as well as day-ahead forecasting errors in solar PV (photovoltaics) power. A case study illustrates how changes in system operations may affect these costs for a utility in the southwestern United States (Arizona Public Service Company). We conduct an extensive sensitivity analysis under different assumptions about balancing reserves, system flexibility, fuel prices, and forecasting errors. We find that high solar PV penetrations may lead to operational challenges, particularly during low-load and high-solar periods. Increased system flexibility is essential for minimizing integration costs and maintaining reliability. In a set of sensitivity cases where such flexibility is provided, in part, by flexible operations of nuclear power plants, the estimated integration costs vary between $1.0 and $4.4/MWh-PV for a PV penetration level of 17%. The integration costs are primarily due to higher needs for hour-ahead balancing reserves to address the increased sub-hourly variability and uncertainty in the PV resource. (C) 2015 Elsevier Ltd. All rights reserved.
The Mere Exposure Effect in the Domain of Haptics
Jakesch, Martina; Carbon, Claus-Christian
2012-01-01
Background Zajonc showed that the attitude towards stimuli that one had been previously exposed to is more positive than towards novel stimuli. This mere exposure effect (MEE) has been tested extensively using various visual stimuli. Research on the MEE is sparse, however, for other sensory modalities. Methodology/Principal Findings We used objects of two material categories (stone and wood) and two complexity levels (simple and complex) to test the influence of exposure frequency (F0 = novel stimuli, F2 = stimuli exposed twice, F10 = stimuli exposed ten times) under two sensory modalities (haptics only and haptics & vision). Effects of exposure frequency were found for high complex stimuli with significantly increasing liking from F0 to F2 and F10, but only for the stone category. Analysis of “Need for Touch” data showed the MEE in participants with high need for touch, which suggests different sensitivity or saturation levels of MEE. Conclusions/Significance This different sensitivity or saturation levels might also reflect the effects of expertise on the haptic evaluation of objects. It seems that haptic and cross-modal MEEs are influenced by factors similar to those in the visual domain indicating a common cognitive basis. PMID:22347451
NAIMA: target amplification strategy allowing quantitative on-chip detection of GMOs.
Morisset, Dany; Dobnik, David; Hamels, Sandrine; Zel, Jana; Gruden, Kristina
2008-10-01
We have developed a novel multiplex quantitative DNA-based target amplification method suitable for sensitive, specific and quantitative detection on microarray. This new method, named NASBA Implemented Microarray Analysis (NAIMA), was applied to GMO detection in food and feed, but its application can be extended to all fields of biology requiring simultaneous detection of low copy number DNA targets. In a first step, the use of tailed primers allows the multiplex synthesis of template DNAs in a primer extension reaction. The second step of the procedure consists of transcription-based amplification using universal primers. The cRNA product is then directly ligated to fluorescent dye-labelled 3DNA dendrimers, allowing signal amplification, and hybridized without further purification on an oligonucleotide probe-based microarray for multiplex detection. Two triplex systems have been applied to test maize samples containing several transgenic lines, and NAIMA has been shown to be sensitive down to two target copies and to provide quantitative data on transgenic contents in a range of 0.1-25%. The performance of NAIMA is comparable to singleplex quantitative real-time PCR. In addition, NAIMA amplification is faster, since 20 min are sufficient to achieve full amplification.
NASA Astrophysics Data System (ADS)
Ripamonti, Giancarlo; Lacaita, Andrea L.
1993-03-01
The extreme sensitivity and time resolution of Geiger-mode avalanche photodiodes (GM- APDs) have already been exploited for optical time domain reflectometry (OTDR). Better than 1 cm spatial resolution in Rayleigh scattering detection was demonstrated. Distributed and quasi-distributed optical fiber sensors can take advantage of the capabilities of GM-APDs. Extensive studies have recently disclosed the main characteristics and limitations of silicon devices, both commercially available and developmental. In this paper we report an analysis of the performance of these detectors. The main characteristics of GM-APDs of interest for distributed optical fiber sensors are briefly reviewed. Command electronics (active quenching) is then introduced. The detector timing performance sets the maximum spatial resolution in experiments employing OTDR techniques. We highlight that the achievable time resolution depends on the physics of the avalanche spreading over the device area. On the basis of these results, trade-off between the important parameters (quantum efficiency, time resolution, background noise, and afterpulsing effects) is considered. Finally, we show first results on Germanium devices, capable of single photon sensitivity at 1.3 and 1.5 micrometers with sub- nanosecond time resolution.
Hartman, Sarah; Widaman, Keith F; Belsky, Jay
2015-08-01
Manuck, Craig, Flory, Halder, and Ferrell (2011) reported that a theoretically anticipated effect of family rearing on girls' menarcheal age was genetically moderated by two single nucleotide polymorphisms (SNPs) of the estrogen receptor-α gene. We sought to replicate and extend these findings, studying 210 White females followed from birth. The replication was general because a different measure of the rearing environment was used in this inquiry (i.e., maternal sensitivity) than in the prior one (i.e., family cohesion). Extensions of the work included prospective rather than retrospective measurements of the rearing environment, reports of first menstruation within a year of its occurrence rather than decades later, accounting for some heritability of menarcheal age by controlling for maternal age of menarche, and using a new model-fitting approach to competitively compare diathesis-stress versus differential-susceptibility models of Gene × Environment interaction. The replication/extension effort proved successful in the case of both estrogen receptor-α SNPs, with the Gene × Environment interactions principally reflecting diathesis-stress: lower levels of maternal sensitivity predicted earlier age of menarche for girls homozygous for the minor alleles of either SNP but not for girls carrying other genotypes. Results are discussed in light of the new analytic methods adopted.
Analysis of "In-Depth" Schools Conducted by Area Extension Agents.
ERIC Educational Resources Information Center
McCormick, Robert W.
Five educational programs were conducted during the fall and winter of 1965-66 at area Extension centers established by the Ohio Cooperative Extension Service in January 1965. Aiming mainly at the commercial agricultural industry, specialized Extension agents focused on educational problems of agricultural production and of such agribusiness…
A National Perspective on the Current Evaluation Activities in Extension
ERIC Educational Resources Information Center
Lamm, Alexa J.; Israel, Glenn D.; Diehl, David
2013-01-01
In order to enhance Extension evaluation efforts, it is important to understand current practices. The study reported here examined the evaluation behaviors of county-based Extension professionals. Extension professionals from eight states (n = 1,173) responded to a survey regarding their evaluation data collection, analysis, and reporting…
Association Between CHEK2*1100delC and Breast Cancer: A Systematic Review and Meta-Analysis.
Liang, Mingming; Zhang, Yun; Sun, Chenyu; Rizeq, Feras Kamel; Min, Min; Shi, Tingting; Sun, Yehuan
2018-06-16
The association between the checkpoint kinase 2*1100delC (CHEK2*1100delC) and breast cancer has been extensively explored. In light of the recent publication of studies on these specific findings, particularly regarding male patients with breast cancer, we performed an updated meta-analysis to obtain a more reliable estimate. This meta-analysis included 26 published studies selected in a search of electronic databases up to January 2018, including 118,735 breast cancer cases and 195,807 controls. Odds ratios (ORs) with 95% confidence intervals (CIs) were used to assess the association between 1100delC and breast cancer. Meta-analysis results suggested that 1100delC contributed to an increased breast cancer risk in the overall population (OR 2.89; 95% CI 2.63-3.16). Subgroup analysis found ORs of 3.13 (95% CI 1.94-5.07) for male breast cancer, 2.88 (95% CI 2.63-3.16) for female breast cancer, 2.87 (95% CI 1.85-4.47) for early-onset breast cancer, 2.92 (95% CI 2.65-3.22) for invasive breast cancer, and 3.21 (95% CI 2.41-4.29) for familial breast cancer. The sensitivity analysis suggested that the results of this meta-analysis were generally robust. CHEK2*1100delC is associated with an increased risk of both female and male breast cancer.
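For readers unfamiliar with how such pooled estimates are built up, each contributing study supplies an odds ratio and confidence interval derived from its 2x2 table. A minimal sketch of that per-study step (illustrative only; the counts are hypothetical, not data from this meta-analysis):

    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """OR and 95% CI from a 2x2 table:
        a = carrier cases, b = non-carrier cases,
        c = carrier controls, d = non-carrier controls."""
        or_ = (a * d) / (b * c)
        se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
        lo = math.exp(math.log(or_) - z * se_log_or)
        hi = math.exp(math.log(or_) + z * se_log_or)
        return or_, lo, hi

    # Hypothetical counts for one study (not from the paper)
    print(odds_ratio_ci(30, 970, 12, 988))

The meta-analysis then combines the per-study log(OR) values, weighted by their inverse variances, into a single pooled estimate.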
Extensively Drug-Resistant Tuberculosis (XDR-TB) - A Potential Threat in Ireland
Mc Laughlin, Anne Marie; O’Donnell, Rory A; Gibbons, Noel; Scully, Mary; O’Flangan, Darina; Keane, Joseph
2007-01-01
We describe the case of a 25-year-old female from Lithuania who presented with a productive cough. Chest radiography demonstrated an infiltrate in the left upper lobe and a cavitating lesion in the right middle lobe. Sensitivity testing of her sputum led to a diagnosis of extensively drug-resistant tuberculosis (XDR-TB). This is the first case in Ireland and highlights the need for physicians to be aware of the possibility of XDR-TB. Moreover, it underlines the need for improved service provision in terms of a TB reference laboratory and TB clinics. PMID:19340317
Reduction of substituted p-benzoquinones by Fe(II) near neutral pH
USDA-ARS's Scientific Manuscript database
The oxidation of dihydroxyaromatics to benzoquinones by Fe(III) (hydr)oxides is important in respiratory electron shuttling by microorganisms and has been extensively studied. Prior publications have noted that the Gibbs free energy (ΔG) for the forward reaction is sensitive to dihydroxyaromatic struc...
Equity Sensitivity in Illinois Public School Teachers
ERIC Educational Resources Information Center
Grossi, Robert G.
2013-01-01
Research supports the importance of teacher quality for effective student learning. School districts recognize this fact and focus extensively on hiring quality teachers and improving teaching skills through professional development programs. Amazingly, despite common sense and a vast amount of research reflecting that employee performance is a…
The Utility of the Small Rodent Electrocardiogram in Toxicology
Extensive research has led to a growing appreciation that the heart is acutely sensitive to a broad array of toxicants via multiple routes of exposure. These agents are as diverse as the anti-neoplastic drug doxorubicin and various components of ambient air pollution. Adverse ef...
STANLEY (Sandia Text ANaLysis Extensible librarY) Ver. 1.1
DOE Office of Scientific and Technical Information (OSTI.GOV)
BENZ, ZACHARY; APODACA, VINCENT; BASILICO, JUSTIN
2009-11-10
Reusable, extensible text analysis library. This library forms the basis for the automated generation of cognitive models from text for Sandia's Cognition program. It is also the basis for the suite of underlying, related applications.
Hainsworth, Atticus H; Randall, Andrew D; Stefani, Alessandro
2005-01-01
Voltage-sensitive Ca(2+) channels (VSCC) play a central role in an extensive array of physiological processes. Their importance in cellular function arises from their ability both to sense membrane voltage and to conduct Ca(2+) ions, two facets that couple membrane excitability to a key intracellular second messenger. Through this relationship, activation of VSCCs is tightly coupled to the gamut of cellular functions dependent on intracellular Ca(2+), including muscle contraction, energy metabolism, gene expression, and exocytotic/endocytotic cycling.
NASA Technical Reports Server (NTRS)
Rambler, M.; Margulis, L.
1979-01-01
The effects of UV and high-intensity irradiation on microorganisms growing under conditions prevalent during the early Precambrian Aeon are examined. The study employed the anaerobic, red-pigmented marine vibrio Beneckea gazogenes (Harwood, 1978); irradiation at 2537 Å revealed extreme UV sensitivity, extensive cell lysis, and concomitant production of bacteriophage induced by the UV light. Three isolates of B. gazogenes (white mutant, pink-colony mutant, and red wild type) were grown and showed differential irradiation sensitivity, and phage particles from all three lysates were collected and examined.
Imbert, Daniel; Cantuel, Martine; Bünzli, Jean-Claude G; Bernardinelli, Gérald; Piguet, Claude
2003-12-24
A [Cr(alpha,alpha'-diimine)3]3+ chromophore is used as a donor for sensitizing Nd(III) and Yb(III) near-infrared (NIR) emitters in the heterobimetallic helicates [LnCr(III)L3]6+. The intramolecular Cr(III) → Ln(III) energy transfer process controls the population of the lanthanide-centered emitting levels, thus leading to an unprecedented extension of the NIR luminescence decay times into the millisecond range for Nd and Yb ions incorporated in coordination complexes.
Graphene: Nanostructure engineering and applications
NASA Astrophysics Data System (ADS)
Zhang, Tingting; Wu, Shuang; Yang, Rong; Zhang, Guangyu
2017-02-01
Graphene has attracted extensive research interest in recent years because of its fascinating physical properties and its potential for various applications. The band structure or electronic properties of graphene are very sensitive to its geometry, size, and edge structures, especially when the size of graphene is below the quantum confinement limit. Graphene nanoribbons (GNRs) can be used as a model system to investigate such structure-sensitive parameters. In this review, we examine the fabrication of GNRs via both top-down and bottom-up approaches. The edge-related electronic and transport properties of GNRs are also discussed.
Challenges of Systematic Reviewing Integrative Health Care
Coulter, Ian D.; Khorsan, Raheleh; Crawford, Cindy; Hsiao, An-Fu
2013-01-01
This article is based on an extensive review of integrative medicine (IM) and integrative health care (IHC). Since there is no general agreement on what constitutes IM/IHC, several major problems were identified that make the review of work in this field problematic. In applying the systematic review methodology, we found that many of the captured articles using the term integrative medicine were in actuality referring to adjunctive, complementary, or supplemental medicine. The objective of this study was to apply a sensitivity analysis to demonstrate how the results of a systematic review of IM and IHC differ according to which inclusion criteria are used, based on the definition of IM/IHC. By analyzing 4 different scenarios, the authors show that, due to unclear usage of these terms, results vary dramatically, exposing an inconsistent literature base for this field. PMID:23843689
Tribochemistry of contact interfaces of nanocrystalline molybdenum carbide films
NASA Astrophysics Data System (ADS)
Kumar, D. Dinesh; Kumar, N.; Panda, Kalpataru; Kamalan Kirubaharan, A. M.; Kuppusami, P.
2018-07-01
Transition metal carbides (TMCs) are known for their improved tribological properties and are sensitive to the tribo-atmospheric environment. Nanocrystalline molybdenum carbide (MoC) thin films were deposited by DC magnetron sputtering using reactive CH4 gas. The friction and wear resistance of the MoC thin films were significantly improved under humid atmospheric conditions compared with high-vacuum tribo-conditions. A comprehensive chemical analysis of the deformed contact interfaces was carried out by X-ray photoelectron spectroscopy (XPS), energy-dispersive X-ray spectroscopy (EDX), and Raman spectroscopy. XPS and Raman spectroscopy showed the formation of stable molybdenum oxide (MoO), molybdenum carbide (MoC), and amorphous carbon (a-C) tribo-phases. Moreover, during sliding under humid atmospheric conditions, these phases were extensively deposited on the sliding steel ball counter-body, which significantly protected against undesirable friction and wear.
Analogs of diadenosine tetraphosphate (Ap4A).
Guranowski, Andrzej
2003-01-01
This review summarizes our knowledge of analogs and derivatives of diadenosine 5',5'''-P1,P4-tetraphosphate (Ap4A), the most extensively studied member of the dinucleoside 5',5'''-P1,Pn-polyphosphate (NpnN') family. After a short discussion of the enzymes that may be responsible for the accumulation and degradation of Np4N's in the cell, this review focuses on chemically and/or enzymatically produced analogs and their practical applications. Particular attention is paid to compounds that have aided the study of enzymes involved in the metabolism of Ap4A (Np4N'). Certain Ap4A analogs were alternative substrates of Ap4A-degrading enzymes and/or acted as enzyme inhibitors; others helped to establish enzyme mechanisms, increased the sensitivity of certain enzyme assays, or produced stable enzyme:ligand complexes for structural analysis.
Tupaia Belangeri as an Experimental Animal Model for Viral Infection
Tsukiyama-Kohara, Kyoko; Kohara, Michinori
2014-01-01
Tupaias, or tree shrews, are small mammals that are similar in appearance to squirrels. The morphological and behavioral characteristics of the group have been extensively characterized, and despite previously being classified as primates, recent studies have placed the group in its own family, the Tupaiidae. Genomic analysis has revealed that the genus Tupaia is closer to humans than it is to rodents. In addition, tupaias are susceptible to hepatitis B virus and hepatitis C virus. The only other experimental animal that has been demonstrated to be sensitive to both of these viruses is the chimpanzee, but restrictions on animal testing have meant that experiments using chimpanzees have become almost impossible. Consequently, the development of the tupaia for use as an animal infection model could become a powerful tool for hepatitis virus research and in preclinical studies on drug development. PMID:25048261
Using Dynamic Sensitivity Analysis to Assess Testability
NASA Technical Reports Server (NTRS)
Voas, Jeffrey; Morell, Larry; Miller, Keith
1990-01-01
This paper discusses sensitivity analysis and its relationship to random black box testing. Sensitivity analysis estimates the impact that a programming fault at a particular location would have on the program's input/output behavior. Locations that are relatively "insensitive" to faults can render random black box testing unlikely to uncover programming faults. Therefore, sensitivity analysis gives new insight when interpreting random black box testing results. Although sensitivity analysis is computationally intensive, it requires no oracle and no human intervention.
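As a rough illustration of the idea (my own sketch, not the authors' implementation), the sensitivity of a location can be estimated by injecting a fault there and measuring how often random inputs propagate the fault to the output; a location where the output rarely changes is "insensitive" in the sense above:

    import random

    def program(x, faulty=False):
        """Toy program under test; the 'fault' perturbs one internal location."""
        y = x * 2
        if faulty:
            y = y + 1  # injected fault at the location being scored
        return 0 if y < 10 else 1  # coarse output can mask internal faults

    def estimate_sensitivity(trials=10000):
        """Fraction of random inputs whose output changes when the fault is active."""
        changed = sum(
            program(x) != program(x, faulty=True)
            for x in (random.uniform(0, 10) for _ in range(trials))
        )
        return changed / trials

    print(estimate_sensitivity())  # low scores flag "insensitive" locations

In this toy example the fault only flips the output for inputs near the threshold (x in [4.5, 5)), so the estimated sensitivity is about 0.05: random black box testing would usually miss a fault at that location.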
SL12-GADRAS-PD2Ka Annual Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, Dean J.
2014-09-09
The GADRAS Development project comprises several elements that are all related to the Detector Response Function (DRF), which is the core of GADRAS. An ongoing activity is implementing continuous improvements in the accuracy and versatility of the DRF. The ability to perform rapid computation of the response of gamma-ray detectors for 3-D descriptions of source objects and their environments is a good example of a recent utilization of this versatility. The 3-D calculations, which execute several orders of magnitude faster than competing techniques, compute the response as an extension of the DRF, so the radiation transport problem is never solved explicitly, thus saving considerable computational time. Maintenance of the Graphical User Interface (GUI) and extension of the GUI to enable construction of the 3-D source models are included in tasking for the GADRAS Development project. Another aspect of this project is application of the isotope identification algorithms for search applications. Specifically, SNL is tasked with development of an isotope-identification based search capability for use with the RSL-developed AVID system, which supports simultaneous operation of numerous radiation search assets. A publicly available (PA) GADRAS-DRF application, which eliminates sensitive analysis components, will soon be available so that the DRF can be used by researchers at universities and corporations.
Xu, Hang; Merryweather, Andrew; Bloswick, Donald; Mao, Qi; Wang, Tong
2015-01-01
Marker placement can be a significant source of error in biomechanical studies of human movement. Toe marker placement error is amplified by footwear, since toe marker placement on the shoe relies only on an approximation of the underlying anatomical landmarks. Three total knee replacement subjects were recruited, and three self-selected-speed gait trials per subject were collected. The effect of the height variation between toe and heel markers across four types of footwear was evaluated through the resulting joint kinematics and muscle forces computed in OpenSim. The reference condition placed the toe and heel markers at the same vertical height. The results showed that the residual variances in joint kinematics had an approximately linear relationship with toe marker placement error for the lower limb joints. Ankle dorsiflexion/plantarflexion is most sensitive to toe marker placement error. The influence of toe marker placement error is generally larger for hip flexion/extension and rotation than for hip abduction/adduction and knee flexion/extension. The muscle forces responded to the residual variance of joint kinematics to various degrees, depending on each muscle's function for specific joint kinematics. This study demonstrates the importance of evaluating marker error in joint kinematics and muscle forces when interpreting clinical gait analysis and treatment interventions.
Design of a Modular Monolithic Implicit Solver for Multi-Physics Applications
NASA Technical Reports Server (NTRS)
Carton De Wiart, Corentin; Diosady, Laslo T.; Garai, Anirban; Burgess, Nicholas; Blonigan, Patrick; Ekelschot, Dirk; Murman, Scott M.
2018-01-01
The design of a modular multi-physics high-order space-time finite-element framework is presented, together with its extension to allow monolithic coupling of different physics. One of the main objectives of the framework is to perform efficient high-fidelity simulations of capsule/parachute systems. This problem requires simulating multiple physics including, but not limited to, the compressible Navier-Stokes equations, the dynamics of a moving body with mesh deformations and adaptation, the linear shell equations, non-reflective boundary conditions, and wall modeling. The solver is based on high-order space-time finite-element methods. Continuous, discontinuous, and C1-discontinuous Galerkin methods are implemented, allowing one to discretize various physical models. Tangent and adjoint sensitivity analyses are also targeted in order to conduct gradient-based optimization, error estimation, mesh adaptation, and flow control, adding another layer of complexity to the framework. The decisions made to tackle these challenges are presented. The discussion focuses first on the "single-physics" solver and later on its extension to the monolithic coupling of different physics. The implementation of the different physics modules relevant to the capsule/parachute system is also presented. Finally, examples of coupled computations are presented, paving the way to the simulation of the full capsule/parachute system.
Tang, Qiaohong; Mo, Zhongjun; Yao, Jie; Li, Qi; Du, Chenfei; Wang, Lizhen; Fan, Yubo
2014-12-01
This study aimed to estimate the effect of different ProDisc-C arthroplasty designs after implantation in the C5-C6 cervical spine. A finite element (FE) model of the intact C5-C6 segments, including the vertebrae and disc, was developed and validated. A ball-and-socket artificial disc prosthesis model (ProDisc-C, Synthes) was implanted into the validated FE model, and the curvature of the ProDisc-C prosthesis was varied. All models were loaded with a 74 N compressive force and a pure moment of 1.8 N·m applied separately in flexion/extension, bilateral bending, and axial torsion. The results indicated that variation in the curvature of the ball-and-socket configuration influenced the range of motion in flexion/extension, while no apparent differences were observed under the other loading conditions. Increasing the curvature relieves the stress concentration in the polyethylene, but it also brings adverse outcomes, such as increased facet joint force and ligament tension. Therefore, the design of artificial discs should be considered comprehensively to preserve the range of motion while avoiding these adverse effects, so as not to compromise long-term clinical results.
Methane Adsorption in Zr-Based MOFs: Comparison and Critical Evaluation of Force Fields
2017-01-01
The search for nanoporous materials that are highly performing for gas storage and separation is one of the contemporary challenges in material design. The computational tools to aid these experimental efforts are widely available, and adsorption isotherms are routinely computed for huge sets of (hypothetical) frameworks. Clearly, the computational results depend on the interactions between the adsorbed species and the adsorbent, which are commonly described using force fields. In this paper, an extensive comparison and in-depth investigation of several force fields from the literature is reported for the case of methane adsorption in the Zr-based Metal–Organic Frameworks UiO-66, UiO-67, DUT-52, NU-1000, and MOF-808. Significant quantitative differences in the computed uptake are observed when comparing different force fields, but most qualitative features are common, which suggests some predictive power of the simulations for these properties. More insight into the host–guest interactions is obtained by benchmarking the force fields against an extensive set of ab initio computed single-molecule interaction energies. This analysis at the molecular level reveals that ab initio derived force fields in particular perform well in reproducing the ab initio interaction energies. Finally, the high sensitivity of uptake predictions to the underlying potential energy surface is explored. PMID:29170687
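To make concrete what such force fields reduce to at the molecular level, here is a generic sketch of a pairwise Lennard-Jones host-guest energy with Lorentz-Berthelot mixing rules (an illustration of the general technique only, not any of the benchmarked force fields; all parameter values are hypothetical):

    import math

    def lj_energy(pairs):
        """Sum of Lennard-Jones pair energies.
        pairs: iterable of (r, eps_i, eps_j, sig_i, sig_j);
        r and sigma in Angstrom, epsilon in kJ/mol."""
        total = 0.0
        for r, eps_i, eps_j, sig_i, sig_j in pairs:
            eps = math.sqrt(eps_i * eps_j)   # Berthelot mixing rule
            sig = 0.5 * (sig_i + sig_j)      # Lorentz mixing rule
            sr6 = (sig / r) ** 6
            total += 4.0 * eps * (sr6 ** 2 - sr6)
        return total

    # One hypothetical methane/framework-atom pair at 3.8 Angstrom
    print(lj_energy([(3.8, 1.23, 0.40, 3.73, 3.12)]))

Differences between published force fields amount largely to different choices of these epsilon/sigma parameters (empirical versus ab initio derived), which is why the computed uptakes diverge quantitatively while sharing qualitative trends.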
Daytime Land Surface Temperature Extraction from MODIS Thermal Infrared Data under Cirrus Clouds
Fan, Xiwei; Tang, Bo-Hui; Wu, Hua; Yan, Guangjian; Li, Zhao-Liang
2015-01-01
Simulated data showed that cirrus clouds could lead to a maximum land surface temperature (LST) retrieval error of 11.0 K when using the generalized split-window (GSW) algorithm with a cirrus optical depth (COD) at 0.55 μm of 0.4 in nadir view. A correction term, linear in the COD, was added to the GSW algorithm to extend it to cirrus cloudy conditions. The COD was acquired from a look-up table of the isolated cirrus bidirectional reflectance at 0.55 μm. Additionally, the slope k of the linear function was expressed as a multiple linear model of the top-of-atmosphere brightness temperatures of MODIS channels 31–34 and the difference between the split-window channel emissivities. The simulated data showed that the LST error could be reduced from 11.0 to 2.2 K. The sensitivity analysis indicated that the total errors from all the uncertainties in the input parameters, the extension algorithm accuracy, and the GSW algorithm accuracy were less than 2.5 K in nadir view. Finally, Great Lakes surface water temperatures measured by buoys showed that the retrieval accuracy of the GSW algorithm was improved by at least 1.5 K using the proposed extension algorithm for cirrus skies. PMID:25928059
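In schematic form (my reading of the abstract; the coefficients c_0 through c_5 are placeholder names, not the authors' notation), the extension adds a COD-linear correction to the standard split-window estimate:

    T_s = T_s^{\mathrm{GSW}}(T_{31}, T_{32}) + k \, \tau_c,
    \qquad k = c_0 + \sum_{i=31}^{34} c_i T_i + c_5 \, (\varepsilon_{31} - \varepsilon_{32})

where τ_c is the cirrus optical depth at 0.55 μm, T_i are the top-of-atmosphere brightness temperatures of MODIS channels 31-34, and ε_31, ε_32 are the split-window channel emissivities. Setting τ_c = 0 recovers the clear-sky GSW retrieval.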