Optimum sensitivity derivatives of objective functions in nonlinear programming
NASA Technical Reports Server (NTRS)
Barthelemy, J.-F. M.; Sobieszczanski-Sobieski, J.
1983-01-01
The feasibility of eliminating second derivatives from the input of optimum sensitivity analyses of optimization problems is demonstrated. This elimination restricts the sensitivity analysis to the first-order sensitivity derivatives of the objective function. It is also shown that when a complete first-order sensitivity analysis is performed, second-order sensitivity derivatives of the objective function are available at little additional cost. An expression for these derivatives is derived, and its application to linear programming is presented.
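A concrete special case of this first-order result is linear programming, where the optimal dual multipliers are themselves the sensitivities of the optimum objective to the constraint bounds, so no second derivatives are needed. The sketch below illustrates this with scipy's HiGHS backend on a small invented LP and checks the reported marginals against finite differences.

```python
# Sketch: first-order sensitivity of an LP optimum to a constraint bound,
# illustrated with scipy.optimize.linprog (the two-variable LP is made up).
import numpy as np
from scipy.optimize import linprog

c = np.array([-1.0, -2.0])            # minimize c @ x
A_ub = np.array([[1.0, 1.0],
                 [1.0, 3.0]])
b_ub = np.array([4.0, 6.0])

base = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")

# HiGHS reports constraint marginals, which act as the first-order
# sensitivities d(f*)/d(b_i) of the optimum objective to the bounds.
dual = base.ineqlin.marginals

# Finite-difference check on the first constraint bound.
h = 1e-6
b_pert = b_ub.copy()
b_pert[0] += h
pert = linprog(c, A_ub=A_ub, b_ub=b_pert, bounds=[(0, None)] * 2, method="highs")
fd = (pert.fun - base.fun) / h

print("marginal-based sensitivity:", dual[0])
print("finite-difference check   :", fd)
```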
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ivanova, T.; Laville, C.; Dyrda, J.
2012-07-01
The sensitivities of the keff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods.
Design sensitivity analysis of nonlinear structural response
NASA Technical Reports Server (NTRS)
Cardoso, J. B.; Arora, J. S.
1987-01-01
A unified theory is described of design sensitivity analysis of linear and nonlinear structures for shape, nonshape and material selection problems. The concepts of reference volume and adjoint structure are used to develop the unified viewpoint. A general formula for design sensitivity analysis is derived. Simple analytical linear and nonlinear examples are used to interpret various terms of the formula and demonstrate its use.
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin V.
2015-05-01
Sensitivity analysis is an essential paradigm in Earth and Environmental Systems modeling. However, the term "sensitivity" has a clear definition, based in partial derivatives, only when specified locally around a particular point (e.g., optimal solution) in the problem space. Accordingly, no unique definition exists for "global sensitivity" across the problem space, when considering one or more model responses to different factors such as model parameters or forcings. A variety of approaches have been proposed for global sensitivity analysis, based on different philosophies and theories, and each of these formally characterizes a different "intuitive" understanding of sensitivity. These approaches focus on different properties of the model response at a fundamental level and may therefore lead to different (even conflicting) conclusions about the underlying sensitivities. Here we revisit the theoretical basis for sensitivity analysis, summarize and critically evaluate existing approaches in the literature, and demonstrate their flaws and shortcomings through conceptual examples. We also demonstrate the difficulty involved in interpreting "global" interaction effects, which may undermine the value of existing interpretive approaches. With this background, we identify several important properties of response surfaces that are associated with the understanding and interpretation of sensitivities in the context of Earth and Environmental System models. Finally, we highlight the need for a new, comprehensive framework for sensitivity analysis that effectively characterizes all of the important sensitivity-related properties of model response surfaces.
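To make the distinction between local and global notions of sensitivity concrete, the sketch below contrasts a derivative-based measure at a nominal point with a variance-based (Sobol-type) first-order index computed by a pick-and-freeze Monte Carlo estimator; the three-parameter test function and sample sizes are invented for illustration.

```python
# Sketch: local derivative-based sensitivity vs. a variance-based (Sobol-type)
# first-order index for a toy model (function and sample sizes are illustrative).
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # x has shape (n, 3); strong interaction between x0 and x2
    return np.sin(x[:, 0]) + 5.0 * x[:, 1] ** 2 + 10.0 * x[:, 0] * x[:, 2]

# Local sensitivity: partial derivatives at a nominal point (central differences).
x0 = np.array([0.5, 0.5, 0.5])
h = 1e-5
local = np.array([
    (model((x0 + h * e)[None, :]) - model((x0 - h * e)[None, :]))[0] / (2 * h)
    for e in np.eye(3)
])

# Global first-order Sobol indices via the pick-and-freeze estimator:
# S_i = Cov(f(A), f(AB_i)) / Var(f), where AB_i equals B except column i from A.
n = 100_000
A = rng.uniform(0, 1, (n, 3))
B = rng.uniform(0, 1, (n, 3))
fA = model(A)
var = fA.var()
S = []
for i in range(3):
    ABi = B.copy()
    ABi[:, i] = A[:, i]
    S.append(np.cov(fA, model(ABi))[0, 1] / var)

print("local derivatives :", local)
print("first-order Sobol :", np.round(S, 3))
```

The two rankings need not agree, which is precisely the point the paper makes about different "intuitive" definitions of sensitivity.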
Shape design sensitivity analysis using domain information
NASA Technical Reports Server (NTRS)
Seong, Hwal-Gyeong; Choi, Kyung K.
1985-01-01
A numerical method for obtaining accurate shape design sensitivity information for built-up structures is developed and demonstrated through analysis of examples. The basic character of the finite element method, which gives more accurate domain information than boundary information, is utilized for shape design sensitivity improvement. A domain approach for shape design sensitivity analysis of built-up structures is derived using the material derivative idea of structural mechanics and the adjoint variable method of design sensitivity analysis. Velocity elements and B-spline curves are introduced to alleviate difficulties in generating domain velocity fields. The regularity requirements of the design velocity field are studied.
NASA Technical Reports Server (NTRS)
Nguyen, Duc T.; Storaasli, Olaf O.; Qin, Jiangning; Qamar, Ramzi
1994-01-01
An automatic differentiation tool (ADIFOR) is incorporated into a finite element based structural analysis program for shape and non-shape design sensitivity analysis of structural systems. The entire analysis and sensitivity procedures are parallelized and vectorized for high performance computation. Small scale examples to verify the accuracy of the proposed program and a medium scale example to demonstrate the parallel vector performance on multiple CRAY C90 processors are included.
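As an illustrative analogue (not the ADIFOR/Fortran implementation described above), the sketch below differentiates a toy two-spring "structural response" with the JAX automatic-differentiation library and checks the result against finite differences; the model and parameter values are made up.

```python
# Sketch: automatic differentiation of a small structural response, standing in
# for embedding an AD tool in an analysis code (JAX replaces ADIFOR here; the
# two-spring model is hypothetical).
import jax
import jax.numpy as jnp

jax.config.update("jax_enable_x64", True)   # double precision for the FD check

def tip_displacement(k):
    # Two springs in series under unit tip load: assemble, solve, return tip DOF.
    K = jnp.array([[k[0] + k[1], -k[1]],
                   [-k[1],        k[1]]])
    f = jnp.array([0.0, 1.0])
    u = jnp.linalg.solve(K, f)
    return u[1]

k = jnp.array([2.0, 3.0])
grad_ad = jax.grad(tip_displacement)(k)      # derivatives via AD

# Forward finite differences for comparison.
h = 1e-6
grad_fd = jnp.array([
    (tip_displacement(k.at[i].add(h)) - tip_displacement(k)) / h
    for i in range(2)
])

print("AD gradient:", grad_ad)
print("FD gradient:", grad_fd)
```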
Benchmark On Sensitivity Calculation (Phase III)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ivanova, Tatiana; Laville, Cedric; Dyrda, James
2012-01-01
The sensitivities of the keff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods.
Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks
Arampatzis, Georgios; Katsoulakis, Markos A.; Pantazis, Yannis
2015-01-01
Existing sensitivity analysis approaches are not able to handle efficiently stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis for such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step of the proposed strategy, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis network with eighty parameters demonstrate that the proposed strategy is able to quickly discover and discard the insensitive parameters and to accurately estimate the sensitivities of the remaining, potentially sensitive parameters. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in “sloppy” systems. In particular, the computational acceleration is quantified by the ratio between the total number of parameters over the number of the sensitive parameters. PMID:26161544
NASA Astrophysics Data System (ADS)
Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.
2016-12-01
Sensitivity analysis has been an important tool in groundwater modeling to identify the influential parameters. Among various sensitivity analysis methods, the variance-based global sensitivity analysis has gained popularity for its model independence characteristic and capability of providing accurate sensitivity measurements. However, the conventional variance-based method only considers uncertainty contribution of single model parameters. In this research, we extended the variance-based method to consider more uncertainty sources and developed a new framework to allow flexible combinations of different uncertainty components. We decompose the uncertainty sources into a hierarchical three-layer structure: scenario, model and parametric. Furthermore, each layer of uncertainty source is capable of containing multiple components. An uncertainty and sensitivity analysis framework was then constructed following this three-layer structure using a Bayesian network. Different uncertainty components are represented as uncertain nodes in this network. Through the framework, variance-based sensitivity analysis can be implemented with great flexibility in using different grouping strategies for uncertainty components. The variance-based sensitivity analysis thus is improved to be able to investigate the importance of an extended range of uncertainty sources: scenario, model, and other different combinations of uncertainty components which can represent certain key model system processes (e.g., groundwater recharge process, flow reactive transport process). For test and demonstration purposes, the developed methodology was implemented into a test case of real-world groundwater reactive transport modeling with various uncertainty sources. The results demonstrate that the new sensitivity analysis method is able to estimate accurate importance measurements for any uncertainty sources which were formed by different combinations of uncertainty components. The new methodology can provide useful information for environmental management and decision-makers to formulate policies and strategies.
Shape design sensitivity analysis and optimal design of structural systems
NASA Technical Reports Server (NTRS)
Choi, Kyung K.
1987-01-01
The material derivative concept of continuum mechanics and an adjoint variable method of design sensitivity analysis are used to relate variations in structural shape to measures of structural performance. A domain method of shape design sensitivity analysis is used to best utilize the basic character of the finite element method that gives accurate information not on the boundary but in the domain. Implementation of shape design sensitivity analysis using finite element computer codes is discussed. Recent numerical results are used to demonstrate the accuracy obtainable using the method. Results of the design sensitivity analysis are used to carry out design optimization of a built-up structure.
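The adjoint variable step has a compact discrete analogue: for a discretized structure K(b)u = f with a performance measure J = c·u, a single adjoint solve yields all design derivatives. A minimal numerical sketch with a hypothetical two-DOF system follows; the matrices and design dependence are invented for illustration.

```python
# Sketch: adjoint-variable design sensitivity for a discretized structure
# K(b) u = f with performance measure J(u) = c @ u (toy 2-DOF system, made up).
import numpy as np

def K(b):
    return np.array([[b[0] + b[1], -b[1]],
                     [-b[1],        b[1]]])

def dK_db(b, i):
    # Derivative of the stiffness matrix w.r.t. design variable b_i.
    if i == 0:
        return np.array([[1.0, 0.0], [0.0, 0.0]])
    return np.array([[1.0, -1.0], [-1.0, 1.0]])

b = np.array([2.0, 3.0])
f = np.array([0.0, 1.0])
c = np.array([0.0, 1.0])              # J = tip displacement

u = np.linalg.solve(K(b), f)          # state solution
lam = np.linalg.solve(K(b).T, c)      # adjoint solution: K^T lam = dJ/du

# dJ/db_i = -lam^T (dK/db_i) u   (f and c independent of b here)
grad_adj = np.array([-lam @ dK_db(b, i) @ u for i in range(2)])

# Finite-difference check.
h = 1e-6
grad_fd = np.array([
    (c @ np.linalg.solve(K(b + h * np.eye(2)[i]), f) - c @ u) / h for i in range(2)
])
print("adjoint:", grad_adj, " finite difference:", grad_fd)
```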
Eigenvalue and eigenvector sensitivity and approximate analysis for repeated eigenvalue problems
NASA Technical Reports Server (NTRS)
Hou, Gene J. W.; Kenny, Sean P.
1991-01-01
A set of computationally efficient equations for eigenvalue and eigenvector sensitivity analysis is derived, and a method for eigenvalue and eigenvector approximate analysis in the presence of repeated eigenvalues is presented. The method developed for approximate analysis involves a reparameterization of the multivariable structural eigenvalue problem in terms of a single positive-valued parameter. The resulting equations yield first-order approximations of changes in both the eigenvalues and eigenvectors associated with the repeated eigenvalue problem. Examples are given to demonstrate the application of such equations for sensitivity and approximate analysis.
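For a simple (non-repeated) eigenvalue with mass-normalized modes, the classical first-order result is dλ/dp = φᵀ(dK/dp − λ dM/dp)φ; it is the repeated-eigenvalue case that requires the reparameterization developed in the paper. A small numerical check of the simple case on an invented two-DOF system:

```python
# Sketch: first-order eigenvalue sensitivity for a generalized problem
# K(p) phi = lambda M phi with mass-normalized modes (distinct-eigenvalue case;
# repeated eigenvalues need the subspace treatment discussed above).
# The 2-DOF matrices and the parameter dependence are made up for illustration.
import numpy as np
from scipy.linalg import eigh

def K(p):
    return np.array([[2.0 + p, -1.0],
                     [-1.0,     1.0 + p]])

M = np.diag([1.0, 2.0])
dK_dp = np.eye(2)          # derivative of K w.r.t. p
dM_dp = np.zeros((2, 2))   # M does not depend on p here

p = 0.3
lam, phi = eigh(K(p), M)   # eigh returns M-orthonormal modes: phi.T @ M @ phi = I

# d(lambda_i)/dp = phi_i^T (dK/dp - lambda_i dM/dp) phi_i
dlam = np.array([phi[:, i] @ (dK_dp - lam[i] * dM_dp) @ phi[:, i] for i in range(2)])

# Finite-difference check.
h = 1e-6
lam_h = eigh(K(p + h), M, eigvals_only=True)
print("analytic:", dlam, " finite difference:", (lam_h - lam) / h)
```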
Coupled Aerodynamic and Structural Sensitivity Analysis of a High-Speed Civil Transport
NASA Technical Reports Server (NTRS)
Mason, B. H.; Walsh, J. L.
2001-01-01
An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity, finite-element structural analysis and computational fluid dynamics aerodynamic analysis. In a previous study, a multidisciplinary analysis system for a high-speed civil transport was formulated to integrate a set of existing discipline analysis codes, some of them computationally intensive. This paper is an extension of the previous study, in which the sensitivity analysis for the coupled aerodynamic and structural analysis problem is formulated and implemented. Uncoupled stress sensitivities computed with a constant load vector in a commercial finite element analysis code are compared to coupled aeroelastic sensitivities computed by finite differences. The computational expense of these sensitivity calculation methods is discussed.
Blurring the Inputs: A Natural Language Approach to Sensitivity Analysis
NASA Technical Reports Server (NTRS)
Kleb, William L.; Thompson, Richard A.; Johnston, Christopher O.
2007-01-01
To document model parameter uncertainties and to automate sensitivity analyses for numerical simulation codes, a natural-language-based method to specify tolerances has been developed. With this new method, uncertainties are expressed in a natural manner, i.e., as one would on an engineering drawing, namely, 5.25 +/- 0.01. This approach is robust and readily adapted to various application domains because it does not rely on parsing the particular structure of input file formats. Instead, tolerances of a standard format are added to existing fields within an input file. As a demonstration of the power of this simple, natural language approach, a Monte Carlo sensitivity analysis is performed for three disparate simulation codes: fluid dynamics (LAURA), radiation (HARA), and ablation (FIAT). Effort required to harness each code for sensitivity analysis was recorded to demonstrate the generality and flexibility of this new approach.
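The core mechanism, scanning an existing input file for "nominal +/- tolerance" fields and sampling them without parsing the file's full grammar, can be sketched in a few lines; the input text, regular expression, and uniform sampling choice below are illustrative assumptions rather than the tool's actual implementation.

```python
# Sketch: detect "value +/- tolerance" fields in an arbitrary input file and
# draw Monte Carlo samples for a sensitivity study (the file content, regex and
# uniform distribution are illustrative, not the tool's actual grammar).
import re
import numpy as np

input_text = """\
wall_temperature = 5.25 +/- 0.01
emissivity       = 0.85 +/- 0.05
chord_length     = 1.0
"""

TOL = re.compile(r"(-?\d+\.?\d*)\s*\+/-\s*(\d+\.?\d*)")
rng = np.random.default_rng(1)

def sample_input(text, rng):
    """Return a copy of `text` with each toleranced field replaced by a draw."""
    def draw(match):
        nominal, tol = float(match.group(1)), float(match.group(2))
        return f"{rng.uniform(nominal - tol, nominal + tol):.6g}"
    return TOL.sub(draw, text)

for i in range(3):                      # three Monte Carlo realizations
    print(f"--- realization {i} ---")
    print(sample_input(input_text, rng))
```

Fields without a tolerance (such as chord_length above) are left untouched, which is what makes the approach independent of the host code's input format.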
Bell, L T O; Gandhi, S
2018-06-01
The aim of this study was to directly compare the accuracy and speed of analysis of two commercially available computer-assisted detection (CAD) programs in detecting colorectal polyps. In this retrospective single-centre study, patients who had colorectal polyps identified on computed tomography colonography (CTC) and subsequent lower gastrointestinal endoscopy were analysed using two commercially available CAD programs (CAD1 and CAD2). Results were compared against endoscopy to ascertain sensitivity and positive predictive value (PPV) for colorectal polyps. Time taken for CAD analysis was also calculated. CAD1 demonstrated a sensitivity of 89.8%, PPV of 17.6% and mean analysis time of 125.8 seconds. CAD2 demonstrated a sensitivity of 75.5%, PPV of 44.0% and mean analysis time of 84.6 seconds. The sensitivity and PPV for colorectal polyps and CAD analysis times can vary widely between current commercially available CAD programs. There is still room for improvement. Generally, there is a trade-off between sensitivity and PPV, and so further developments should aim to optimise both. Information on these factors should be made routinely available, so that an informed choice on their use can be made. This information could also potentially influence the radiologist's use of CAD results.
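For reference, the reported figures follow directly from detection counts against the endoscopy standard; the sketch below computes per-polyp sensitivity and PPV from counts chosen to reproduce the published percentages (the raw counts are not given in the abstract, so these numbers are illustrative).

```python
# Sketch: per-polyp sensitivity and PPV from detection counts against the
# endoscopy reference standard (counts below are illustrative, chosen so the
# resulting percentages match those reported in the abstract).
def sensitivity_ppv(tp, fp, fn):
    sens = tp / (tp + fn)        # fraction of true polyps the CAD program flags
    ppv = tp / (tp + fp)         # fraction of CAD detections that are real polyps
    return sens, ppv

for name, (tp, fp, fn) in {"CAD1": (44, 206, 5), "CAD2": (37, 47, 12)}.items():
    s, p = sensitivity_ppv(tp, fp, fn)
    print(f"{name}: sensitivity {s:.1%}, PPV {p:.1%}")
```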
NASA Technical Reports Server (NTRS)
Hou, Gene
2004-01-01
The focus of this research is on the development of analysis and sensitivity analysis equations for nonlinear, transient heat transfer problems modeled by p-version, time-discontinuous finite element approximation. The resulting matrix equation of the state equation is simply in the form of A(x)x = c, representing a single-step, time-marching scheme. The Newton-Raphson method is used to solve the nonlinear equation. Examples are first provided to demonstrate the accuracy characteristics of the resultant finite element approximation. A direct differentiation approach is then used to compute the thermal sensitivities of a nonlinear heat transfer problem. The report shows that only minimal coding effort is required to enhance the analysis code with the sensitivity analysis capability.
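A scalar analogue of the procedure, Newton-Raphson solution of a nonlinear A(x)x = c followed by direct differentiation of the converged state, is sketched below; the radiation-like coefficient model is invented and merely stands in for the p-version finite element equations.

```python
# Sketch: Newton-Raphson solution of a nonlinear "A(x) x = c" equation and direct
# differentiation for the sensitivity dx/dp (scalar toy model; coefficients are
# made up, not the report's finite element equations).
def residual(x, p):
    # A(x) x - c with A(x) = 1 + p*x**3 (temperature-dependent "conductivity")
    return (1.0 + p * x ** 3) * x - 2.0

def dres_dx(x, p):
    return 1.0 + 4.0 * p * x ** 3

def dres_dp(x, p):
    return x ** 4

p = 0.5
x = 1.0                                   # initial guess
for _ in range(50):                       # Newton-Raphson iterations
    dx = -residual(x, p) / dres_dx(x, p)
    x += dx
    if abs(dx) < 1e-12:
        break

# Direct differentiation: (dR/dx) (dx/dp) = -dR/dp at the converged state.
dx_dp = -dres_dp(x, p) / dres_dx(x, p)

# Finite-difference check.
h = 1e-6
xh = 1.0
for _ in range(50):
    xh -= residual(xh, p + h) / dres_dx(xh, p + h)
print("direct differentiation:", dx_dp, " finite difference:", (xh - x) / h)
```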
Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria
NASA Astrophysics Data System (ADS)
Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong
2017-08-01
In this paper, a Reliability-Sensitivity Based Design Optimization (RSBDO) methodology for the design of the ceramic matrix composites (CMCs) components has been proposed. A practical and efficient method for reliability analysis and sensitivity analysis of complex components with arbitrary distribution parameters is investigated by using the perturbation method, the response surface method, the Edgeworth series and the sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into the RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of the CMCs components. By comparing with Monte Carlo simulation, the numerical results demonstrate that the proposed methodology provides an accurate, convergent and computationally efficient method for reliability-analysis-based finite element modeling in engineering practice.
Overview of Sensitivity Analysis and Shape Optimization for Complex Aerodynamic Configurations
NASA Technical Reports Server (NTRS)
Newman, Perry A.; Newman, James C., III; Barnwell, Richard W.; Taylor, Arthur C., III; Hou, Gene J.-W.
1998-01-01
This paper presents a brief overview of some of the more recent advances in steady aerodynamic shape-design sensitivity analysis and optimization, based on advanced computational fluid dynamics. The focus here is on those methods particularly well- suited to the study of geometrically complex configurations and their potentially complex associated flow physics. When nonlinear state equations are considered in the optimization process, difficulties are found in the application of sensitivity analysis. Some techniques for circumventing such difficulties are currently being explored and are included here. Attention is directed to methods that utilize automatic differentiation to obtain aerodynamic sensitivity derivatives for both complex configurations and complex flow physics. Various examples of shape-design sensitivity analysis for unstructured-grid computational fluid dynamics algorithms are demonstrated for different formulations of the sensitivity equations. Finally, the use of advanced, unstructured-grid computational fluid dynamics in multidisciplinary analyses and multidisciplinary sensitivity analyses within future optimization processes is recommended and encouraged.
ERIC Educational Resources Information Center
Schwarz, Gunnar; Burger, Marcel; Guex, Kevin; Gundlach-Graham, Alexander; Käser, Debora; Koch, Joachim; Velicsanyi, Peter; Wu, Chung-Che; Günther, Detlef; Hattendorf, Bodo
2016-01-01
A public demonstration of laser ablation inductively coupled plasma mass spectrometry (LA-ICPMS) for fast and sensitive qualitative elemental analysis of solid everyday objects is described. This demonstration served as a showcase model for modern instrumentation (and for elemental analysis, in particular) to the public. Several steps were made to…
Analysis and design of optical systems by use of sensitivity analysis of skew ray tracing
NASA Astrophysics Data System (ADS)
Lin, Psang Dain; Lu, Chia-Hung
2004-02-01
Optical systems are conventionally evaluated by ray-tracing techniques that extract performance quantities such as aberration and spot size. Current optical analysis software does not provide satisfactory analytical evaluation functions for the sensitivity of an optical system. Furthermore, when functions oscillate strongly, the results are of low accuracy. Thus this work extends our earlier research on an advanced treatment of reflected or refracted rays, referred to as sensitivity analysis, in which differential changes of reflected or refracted rays are expressed in terms of differential changes of incident rays. The proposed sensitivity analysis methodology for skew ray tracing of reflected or refracted rays that cross spherical or flat boundaries is demonstrated and validated by the application of a cat's eye retroreflector to the design and by the image orientation of a system with noncoplanar optical axes. The proposed sensitivity analysis is projected as the nucleus of other geometrical optical computations.
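The idea of expressing differential changes of a refracted ray in terms of differential changes of the incident ray can be illustrated numerically with the vector form of Snell's law and a finite-difference Jacobian; the flat boundary, refractive indices and ray direction below are arbitrary, and the paper derives these differentials analytically rather than numerically.

```python
# Sketch: sensitivity of a refracted ray direction to the incident ray direction,
# using the vector form of Snell's law and a numerical Jacobian (flat boundary,
# made-up indices and geometry; no total internal reflection is assumed).
import numpy as np

def refract(d, n, n1, n2):
    """Refract direction d at a surface with unit normal n oriented toward d's side."""
    d = d / np.linalg.norm(d)
    cos_i = -d @ n
    eta = n1 / n2
    sin2_t = eta ** 2 * (1.0 - cos_i ** 2)
    cos_t = np.sqrt(1.0 - sin2_t)          # assumes no total internal reflection
    return eta * d + (eta * cos_i - cos_t) * n

n = np.array([0.0, 0.0, 1.0])               # boundary normal
d0 = np.array([0.3, 0.1, -0.95])             # incident ray direction
d0 /= np.linalg.norm(d0)

# Numerical Jacobian of the refracted direction w.r.t. incident direction components.
h = 1e-6
J = np.zeros((3, 3))
t0 = refract(d0, n, 1.0, 1.5)
for j in range(3):
    dp = d0.copy()
    dp[j] += h
    J[:, j] = (refract(dp, n, 1.0, 1.5) - t0) / h

print("refracted direction:", t0)
print("sensitivity Jacobian d(t)/d(d):\n", np.round(J, 4))
```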
NASA Astrophysics Data System (ADS)
Luo, Jiannan; Lu, Wenxi
2014-06-01
Sobol' sensitivity analyses based on different surrogates were performed on a trichloroethylene (TCE)-contaminated aquifer to assess the sensitivity of the design variables of remediation duration, surfactant concentration and injection rates at four wells to remediation efficiency. First, the surrogate models of a multi-phase flow simulation model were constructed by applying radial basis function artificial neural network (RBFANN) and Kriging methods, and the two models were then compared. Based on the developed surrogate models, the Sobol' method was used to calculate the sensitivity indices of the design variables which affect the remediation efficiency. The coefficient of determination (R2) and the mean square error (MSE) of these two surrogate models demonstrated that both models had acceptable approximation accuracy; furthermore, the approximation accuracy of the Kriging model was slightly better than that of the RBFANN model. Sobol' sensitivity analysis results demonstrated that the remediation duration was the most important variable influencing remediation efficiency, followed by rates of injection at wells 1 and 3, while rates of injection at wells 2 and 4 and the surfactant concentration had negligible influence on remediation efficiency. In addition, high-order sensitivity indices were all smaller than 0.01, which indicates that interaction effects of these six factors were practically insignificant. The proposed surrogate-based Sobol' sensitivity analysis is an effective tool for calculating sensitivity indices, because it shows the relative contribution of the design variables (individuals and interactions) to the output performance variability with a limited number of runs of a computationally expensive simulation model. The sensitivity analysis results lay a foundation for optimization of the groundwater remediation process.
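The two-stage workflow, training a surrogate on a limited number of expensive simulator runs and then computing Sobol' indices on the cheap surrogate, can be sketched as follows; scipy's RBFInterpolator stands in for the RBFANN/Kriging surrogates and the three-variable "remediation" function is invented.

```python
# Sketch of the surrogate-based workflow: fit a cheap surrogate on a limited set
# of "expensive" simulator runs, then estimate first-order Sobol' indices on the
# surrogate only (RBFInterpolator replaces the RBFANN/Kriging models here; the
# three-variable placeholder function is not the multiphase flow model).
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)

def expensive_simulator(x):           # placeholder for the multiphase flow model
    return 4.0 * x[:, 0] + np.sin(3.0 * x[:, 1]) + 0.1 * x[:, 2]

# 1) Train the surrogate on a small design of experiments.
X_train = rng.uniform(0, 1, (200, 3))
surrogate = RBFInterpolator(X_train, expensive_simulator(X_train))

# 2) Sobol' first-order indices by pick-and-freeze, evaluated on the surrogate.
n = 20_000
A, B = rng.uniform(0, 1, (n, 3)), rng.uniform(0, 1, (n, 3))
fA = surrogate(A)
S = []
for i in range(3):
    ABi = B.copy()
    ABi[:, i] = A[:, i]
    S.append(np.cov(fA, surrogate(ABi))[0, 1] / fA.var())
print("first-order Sobol' indices:", np.round(S, 3))
```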
NASA Technical Reports Server (NTRS)
Yao, Tse-Min; Choi, Kyung K.
1987-01-01
An automatic regridding method and a three dimensional shape design parameterization technique were constructed and integrated into a unified theory of shape design sensitivity analysis. An algorithm was developed for general shape design sensitivity analysis of three dimensional elastic solids. Numerical implementation of this shape design sensitivity analysis method was carried out using the finite element code ANSYS. The unified theory of shape design sensitivity analysis uses the material derivative of continuum mechanics with a design velocity field that represents shape change effects over the structural design. Automatic regridding methods were developed by generating a domain velocity field with the boundary displacement method. Shape design parameterization for three dimensional surface design problems was illustrated using a Bezier surface with boundary perturbations that depend linearly on the perturbation of design parameters. A linearization method of optimization, LINRM, was used to obtain optimum shapes. Three examples from different engineering disciplines were investigated to demonstrate the accuracy and versatility of this shape design sensitivity analysis method.
Automated Sensitivity Analysis of Interplanetary Trajectories
NASA Technical Reports Server (NTRS)
Knittel, Jeremy; Hughes, Kyle; Englander, Jacob; Sarli, Bruno
2017-01-01
This work describes a suite of Python tools known as the Python EMTG Automated Trade Study Application (PEATSA). PEATSA was written to automate the operation of trajectory optimization software, simplify the process of performing sensitivity analysis, and was ultimately found to out-perform a human trajectory designer in unexpected ways. These benefits will be discussed and demonstrated on sample mission designs.
NASA Technical Reports Server (NTRS)
Ibrahim, A. H.; Tiwari, S. N.; Smith, R. E.
1997-01-01
Variational methods (VM) of sensitivity analysis are employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational methods use the generalized calculus of variations, in which the variable boundary is considered as the design function. The converged solution of the state equations together with the converged solution of the costate equations are integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The application of the variational methods to aerodynamic shape optimization problems is demonstrated for internal flow problems in the supersonic Mach number range. The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within the reasonable range for engineering prediction purposes, the variational methods show a substantial gain in computational efficiency, i.e., computer time and memory, when compared with the finite difference sensitivity analysis.
Zhang, Hong; Abhyankar, Shrirang; Constantinescu, Emil; ...
2017-01-24
Sensitivity analysis is an important tool for describing power system dynamic behavior in response to parameter variations. It is a central component in preventive and corrective control applications. The existing approaches for sensitivity calculations, namely, finite-difference and forward sensitivity analysis, require a computational effort that increases linearly with the number of sensitivity parameters. In this paper, we investigate, implement, and test a discrete adjoint sensitivity approach whose computational effort is effectively independent of the number of sensitivity parameters. The proposed approach is highly efficient for calculating sensitivities of larger systems and is consistent, within machine precision, with the function whose sensitivity we are seeking. This is an essential feature for use in optimization applications. Moreover, our approach includes a consistent treatment of systems with switching, such as dc exciters, by deriving and implementing the adjoint jump conditions that arise from state-dependent and time-dependent switchings. The accuracy and the computational efficiency of the proposed approach are demonstrated in comparison with the forward sensitivity analysis approach. In conclusion, this paper focuses primarily on the power system dynamics, but the approach is general and can be applied to hybrid dynamical systems in a broader range of fields.
Automated Sensitivity Analysis of Interplanetary Trajectories for Optimal Mission Design
NASA Technical Reports Server (NTRS)
Knittel, Jeremy; Hughes, Kyle; Englander, Jacob; Sarli, Bruno
2017-01-01
This work describes a suite of Python tools known as the Python EMTG Automated Trade Study Application (PEATSA). PEATSA was written to automate the operation of trajectory optimization software, simplify the process of performing sensitivity analysis, and was ultimately found to out-perform a human trajectory designer in unexpected ways. These benefits will be discussed and demonstrated on sample mission designs.
Hestekin, Christa N.; Lin, Jennifer S.; Senderowicz, Lionel; Jakupciak, John P.; O’Connell, Catherine; Rademaker, Alfred; Barron, Annelise E.
2012-01-01
Knowledge of the genetic changes that lead to disease has grown and continues to grow at a rapid pace. However, there is a need for clinical devices that can be used routinely to translate this knowledge into the treatment of patients. Use in a clinical setting requires high sensitivity and specificity (>97%) in order to prevent misdiagnoses. Single strand conformational polymorphism (SSCP) and heteroduplex analysis (HA) are two DNA-based, complementary methods for mutation detection that are inexpensive and relatively easy to implement. However, both methods are most commonly detected by slab gel electrophoresis, which can be labor-intensive, time-consuming, and often the methods are unable to produce high sensitivity and specificity without the use of multiple analysis conditions. Here we demonstrate the first blinded study using microchip electrophoresis-SSCP/HA. We demonstrate the ability of microchip electrophoresis-SSCP/HA to detect with 98% sensitivity and specificity >100 samples from the p53 gene exons 5–9 in a blinded study in an analysis time of less than 10 minutes. PMID:22002021
Variance-Based Sensitivity Analysis to Support Simulation-Based Design Under Uncertainty
Opgenoord, Max M. J.; Allaire, Douglas L.; Willcox, Karen E.
2016-09-12
Sensitivity analysis plays a critical role in quantifying uncertainty in the design of engineering systems. A variance-based global sensitivity analysis is often used to rank the importance of input factors, based on their contribution to the variance of the output quantity of interest. However, this analysis assumes that all input variability can be reduced to zero, which is typically not the case in a design setting. Distributional sensitivity analysis (DSA) instead treats the uncertainty reduction in the inputs as a random variable, and defines a variance-based sensitivity index function that characterizes the relative contribution to the output variance as a function of the amount of uncertainty reduction. This paper develops a computationally efficient implementation for the DSA formulation and extends it to include distributions commonly used in engineering design under uncertainty. Application of the DSA method to the conceptual design of a commercial jetliner demonstrates how the sensitivity analysis provides valuable information to designers and decision-makers on where and how to target uncertainty reduction efforts.
Allergen Sensitization Pattern by Sex: A Cluster Analysis in Korea.
Ohn, Jungyoon; Paik, Seung Hwan; Doh, Eun Jin; Park, Hyun-Sun; Yoon, Hyun-Sun; Cho, Soyun
2017-12-01
Allergens tend to sensitize simultaneously. Etiology of this phenomenon has been suggested to be allergen cross-reactivity or concurrent exposure. However, little is known about specific allergen sensitization patterns. The aim was to investigate the allergen sensitization characteristics according to gender. Multiple allergen simultaneous test (MAST) is widely used as a screening tool for detecting allergen sensitization in dermatologic clinics. We retrospectively reviewed the medical records of patients with MAST results between 2008 and 2014 in our Department of Dermatology. A cluster analysis was performed to elucidate the allergen-specific immunoglobulin (Ig)E cluster pattern. The results of MAST (39 allergen-specific IgEs) from 4,360 cases were analyzed. By cluster analysis, the 39 items were grouped into 8 clusters. Each cluster had characteristic features. Compared with the female group, the male group tended to be sensitized more frequently to all tested allergens, except for the fungus allergen cluster. The cluster and comparative analysis results demonstrate that the allergen sensitization is clustered, manifesting allergen similarity or co-exposure. Only the fungus cluster allergens tended to sensitize the female group more frequently than the male group.
Gordon, H R; Du, T; Zhang, T
1997-09-20
We provide an analysis of the influence of instrument polarization sensitivity on the radiance measured by spaceborne ocean color sensors. Simulated examples demonstrate the influence of polarization sensitivity on the retrieval of the water-leaving reflectance rho(w). A simple method for partially correcting for polarization sensitivity--replacing the linear polarization properties of the top-of-atmosphere reflectance with those from a Rayleigh-scattering atmosphere--is provided and its efficacy is evaluated. It is shown that this scheme improves rho(w) retrievals as long as the polarization sensitivity of the instrument does not vary strongly from band to band. Of course, a complete polarization-sensitivity characterization of the ocean color sensor is required to implement the correction.
Applying geologic sensitivity analysis to environmental risk management: The financial implications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rogers, D.T.
The financial risks associated with environmental contamination can be staggering and are often difficult to identify and accurately assess. Geologic sensitivity analysis is gaining recognition as a significant and useful tool that can empower the user with crucial information concerning environmental risk management and brownfield redevelopment. It is particularly useful when (1) evaluating the potential risks associated with redevelopment of historical industrial facilities (brownfields) and (2) planning for future development, especially in areas of rapid development because the number of potential contaminating sources often increases with an increase in economic development. An examination of the financial implications relating to geologic sensitivity analysis in southeastern Michigan from numerous case studies indicates that the environmental cost of contamination may be 100 to 1,000 times greater at a geologically sensitive location compared to the least sensitive location. Geologic sensitivity analysis has demonstrated that near-surface geology may influence the environmental impact of a contaminated site to a greater extent than the amount and type of industrial development.
Analysis of Publically Available Skin Sensitization Data from REACH Registrations 2008–2014
Luechtefeld, Thomas; Maertens, Alexandra; Russo, Daniel P.; Rovida, Costanza; Zhu, Hao; Hartung, Thomas
2017-01-01
The public data on skin sensitization from REACH registrations already included 19,111 studies on skin sensitization in December 2014, making it the largest repository of such data so far (1,470 substances with mouse LLNA, 2,787 with GPMT, 762 with both in vivo and in vitro and 139 with only in vitro data). 21% were classified as sensitizers. The extracted skin sensitization data was analyzed to identify relationships in skin sensitization guidelines, visualize structural relationships of sensitizers, and build models to predict sensitization. A chemical with molecular weight > 500 Da is generally considered non-sensitizing owing to low bioavailability, but 49 sensitizing chemicals with a molecular weight > 500 Da were found. A chemical similarity map was produced using PubChem’s 2D Tanimoto similarity metric and Gephi force layout visualization. Nine clusters of chemicals were identified by Blondel’s module recognition algorithm revealing wide module-dependent variation. Approximately 31% of the mapped chemicals are Michael acceptors, but this alone does not imply skin sensitization. A simple sensitization model using molecular weight and five ToxTree structural alerts showed a balanced accuracy of 65.8% (specificity 80.4%, sensitivity 51.4%), demonstrating that structural alerts have information value. A simple variant of k-nearest neighbors outperformed the ToxTree approach even at 75% similarity threshold (82% balanced accuracy at 0.95 threshold). At higher thresholds, the balanced accuracy increased. Lower similarity thresholds decrease sensitivity faster than specificity. This analysis scopes the landscape of chemical skin sensitization, demonstrating the value of large public datasets for health hazard prediction. PMID:26863411
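The similarity-based read-across idea (predicting sensitization from training chemicals whose Tanimoto similarity exceeds a threshold) can be sketched as below; the binary "fingerprints", labels, and thresholds are synthetic stand-ins for the PubChem 2D fingerprints and REACH labels used in the paper.

```python
# Sketch of the similarity-threshold nearest-neighbour idea: predict a chemical's
# sensitization class from neighbours whose Tanimoto similarity exceeds a cutoff
# (random binary "fingerprints" and labels here, purely for illustration).
import numpy as np

rng = np.random.default_rng(3)
fps = rng.integers(0, 2, size=(500, 64)).astype(bool)     # training fingerprints
labels = rng.integers(0, 2, size=500)                      # 1 = sensitizer

def tanimoto(a, B):
    inter = (a & B).sum(axis=1)
    union = (a | B).sum(axis=1)
    return np.where(union > 0, inter / union, 0.0)

def predict(query_fp, threshold=0.75):
    sim = tanimoto(query_fp, fps)
    hits = sim >= threshold
    if not hits.any():
        return None                                        # outside applicability domain
    return int(labels[hits].mean() >= 0.5)                 # majority vote of neighbours

query = rng.integers(0, 2, size=64).astype(bool)
print("prediction:", predict(query, threshold=0.4))
```

As in the paper, raising the similarity threshold trades coverage (more queries fall outside the applicability domain) for accuracy on the chemicals that remain predictable.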
Robust motion tracking based on adaptive speckle decorrelation analysis of OCT signal.
Wang, Yuewen; Wang, Yahui; Akansu, Ali; Belfield, Kevin D; Hubbi, Basil; Liu, Xuan
2015-11-01
Speckle decorrelation analysis of optical coherence tomography (OCT) signal has been used in motion tracking. In our previous study, we demonstrated that the cross-correlation coefficient (XCC) between A-scans had an explicit functional dependency on the magnitude of lateral displacement (δx). In this study, we evaluated the sensitivity of speckle motion tracking using the derivative of function XCC(δx) on variable δx. We demonstrated the magnitude of the derivative can be maximized. In other words, the sensitivity of OCT speckle tracking can be optimized by using signals with an appropriate amount of decorrelation for XCC calculation. Based on this finding, we developed an adaptive speckle decorrelation analysis strategy to achieve motion tracking with optimized sensitivity. Briefly, we used subsequently acquired A-scans and A-scans obtained with larger time intervals to obtain multiple values of XCC and chose the XCC value that maximized motion tracking sensitivity for displacement calculation. Instantaneous motion speed can be calculated by dividing the obtained displacement with the time interval between A-scans involved in XCC calculation. We implemented the above-described algorithm in real-time using a graphics processing unit (GPU) and demonstrated its effectiveness in reconstructing distortion-free OCT images using data obtained from a manually scanned OCT probe. The adaptive speckle tracking method was validated in manually scanned OCT imaging, on phantom as well as in vivo skin tissue.
Sensitivity of wildlife habitat models to uncertainties in GIS data
NASA Technical Reports Server (NTRS)
Stoms, David M.; Davis, Frank W.; Cogan, Christopher B.
1992-01-01
Decision makers need to know the reliability of output products from GIS analysis. For many GIS applications, it is not possible to compare these products to an independent measure of 'truth'. Sensitivity analysis offers an alternative means of estimating reliability. In this paper, we present a GIS-based statistical procedure for estimating the sensitivity of wildlife habitat models to uncertainties in input data and model assumptions. The approach is demonstrated in an analysis of habitat associations derived from a GIS database for the endangered California condor. Alternative data sets were generated to compare results over a reasonable range of assumptions about several sources of uncertainty. Sensitivity analysis indicated that condor habitat associations are relatively robust, and the results have increased our confidence in our initial findings. Uncertainties and methods described in the paper have general relevance for many GIS applications.
Sensitivity Analysis of Hydraulic Head to Locations of Model Boundaries
Lu, Zhiming
2018-01-30
Sensitivity analysis is an important component of many model activities in hydrology. Numerous studies have been conducted in calculating various sensitivities. Most of these sensitivity analyses focus on the sensitivity of state variables (e.g. hydraulic head) to parameters representing medium properties such as hydraulic conductivity or prescribed values such as constant head or flux at boundaries, while few studies address the sensitivity of the state variables to some shape parameters or design parameters that control the model domain. Instead, these shape parameters are typically assumed to be known in the model. In this study, based on the flow equation, we derive the equation (and its associated initial and boundary conditions) for sensitivity of hydraulic head to shape parameters using the continuous sensitivity equation (CSE) approach. These sensitivity equations can be solved numerically in general or analytically in some simplified cases. Finally, the approach has been demonstrated through two examples and the results are compared favorably to those from analytical solutions or numerical finite difference methods with perturbed model domains, while numerical shortcomings of the finite difference method are avoided.
Tsao, Chia-Wen; Yang, Zhi-Jie
2015-10-14
Desorption/ionization on silicon (DIOS) is a high-performance matrix-free mass spectrometry (MS) analysis method that involves using silicon nanostructures as a matrix for MS desorption/ionization. In this study, gold nanoparticles grafted onto a nanostructured silicon (AuNPs-nSi) surface were demonstrated as a DIOS-MS analysis approach with high sensitivity and high detection specificity for glucose detection. A glucose sample deposited on the AuNPs-nSi surface was directly catalyzed to negatively charged gluconic acid molecules on a single AuNPs-nSi chip for MS analysis. The AuNPs-nSi surface was fabricated using two electroless deposition steps and one electroless etching step. The effects of the electroless fabrication parameters on the glucose detection efficiency were evaluated. Practical application of AuNPs-nSi MS glucose analysis in urine samples was also demonstrated in this study.
Childs, Paul; Wong, Allan C L; Fu, H Y; Liao, Yanbiao; Tam, Hwayaw; Lu, Chao; Wai, P K A
2010-12-20
We measured the hydrostatic pressure dependence of the birefringence and birefringent dispersion of a Sagnac interferometric sensor incorporating a length of highly birefringent photonic crystal fiber using Fourier analysis. Sensitivity of both the phase and chirp spectra to hydrostatic pressure is demonstrated. Using this analysis, phase-based measurements showed a good linearity with an effective sensitivity of 9.45 nm/MPa and an accuracy of ±7.8 kPa using wavelength-encoded data and an effective sensitivity of -55.7 cm(-1)/MPa and an accuracy of ±4.4 kPa using wavenumber-encoded data. Chirp-based measurements, though nonlinear in response, showed an improvement in accuracy at certain pressure ranges with an accuracy of ±5.5 kPa for the full range of measured pressures using wavelength-encoded data and dropping to within ±2.5 kPa in the range of 0.17 to 0.4 MPa using wavenumber-encoded data. Improvements of the accuracy demonstrated the usefulness of implementing chirp-based analysis for sensing purposes.
Using archived ITS data for sensitivity analysis in the estimation of mobile source emissions
DOT National Transportation Integrated Search
2000-12-01
The study described in this paper demonstrates the use of archived ITS data from San Antonio's TransGuide traffic management center (TMC) for sensitivity analyses in the estimation of on-road mobile source emissions. Because of the stark comparison b...
Hestekin, Christa N; Lin, Jennifer S; Senderowicz, Lionel; Jakupciak, John P; O'Connell, Catherine; Rademaker, Alfred; Barron, Annelise E
2011-11-01
Knowledge of the genetic changes that lead to disease has grown and continues to grow at a rapid pace. However, there is a need for clinical devices that can be used routinely to translate this knowledge into the treatment of patients. Use in a clinical setting requires high sensitivity and specificity (>97%) in order to prevent misdiagnoses. Single-strand conformational polymorphism (SSCP) and heteroduplex analysis (HA) are two DNA-based, complementary methods for mutation detection that are inexpensive and relatively easy to implement. However, both methods are most commonly detected by slab gel electrophoresis, which can be labor-intensive, time-consuming, and often the methods are unable to produce high sensitivity and specificity without the use of multiple analysis conditions. Here, we demonstrate the first blinded study using microchip electrophoresis (ME)-SSCP/HA. We demonstrate the ability of ME-SSCP/HA to detect with 98% sensitivity and specificity >100 samples from the p53 gene exons 5-9 in a blinded study in an analysis time of <10 min.
Kühnemund, Malte; Hernández-Neuta, Iván; Sharif, Mohd Istiaq; Cornaglia, Matteo; Gijs, Martin A.M.
2017-01-01
Single molecule quantification assays provide the ultimate sensitivity and precision for molecular analysis. However, most digital analysis techniques, i.e. droplet PCR, require sophisticated and expensive instrumentation for molecule compartmentalization, amplification and analysis. Rolling circle amplification (RCA) provides a simpler means for digital analysis. Nevertheless, the sensitivity of RCA assays has until now been limited by inefficient detection methods. We have developed a simple microfluidic strategy for enrichment of RCA products into a single field of view of a low magnification fluorescent sensor, enabling ultra-sensitive digital quantification of nucleic acids over a dynamic range from 1.2 aM to 190 fM. We prove the broad applicability of our analysis platform by demonstrating 5-plex detection of as little as ∼1 pg (∼300 genome copies) of pathogenic DNA with simultaneous antibiotic resistance marker detection, and the analysis of rare oncogene mutations. Our method is simpler, more cost-effective and faster than other digital analysis techniques and provides the means to implement digital analysis in any laboratory equipped with a standard fluorescent microscope. PMID: 28077562
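The statistical core of digital quantification, counting discrete positives and applying a Poisson occupancy correction, is shared by droplet PCR and digital single-molecule readouts; a minimal sketch follows, with invented counts and an assumed effective counting volume rather than values from this paper.

```python
# Sketch: digital quantification by counting discrete positives with a Poisson
# occupancy correction (counts and effective volume per counting unit are invented).
import numpy as np

def digital_concentration(positives, total, volume_per_unit_litre):
    """Mean molecules per unit from the fraction of positive units, Poisson-corrected."""
    frac_pos = positives / total
    lam = -np.log(1.0 - frac_pos)          # mean molecules per counting unit
    return lam / volume_per_unit_litre     # molecules per litre

# Example: 1,850 positives out of 100,000 counting units of 1 nL each.
conc = digital_concentration(1850, 100_000, 1e-9)
molar = conc / 6.022e23
print(f"{conc:.3e} molecules/L = {molar:.2e} mol/L")
```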
High-sensitivity ESCA instrument
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davies, R.D.; Herglotz, H.K.; Lee, J.D.
1973-01-01
A new electron spectroscopy for chemical analysis (ESCA) instrument has been developed to provide high sensitivity and efficient operation for laboratory analysis of composition and chemical bonding in very thin surface layers of solid samples. High sensitivity is achieved by means of the high-intensity, efficient x-ray source described by Davies and Herglotz at the 1968 Denver X-Ray Conference, in combination with the new electron energy analyzer described by Lee at the 1972 Pittsburgh Conference on Analytical Chemistry and Applied Spectroscopy. A sample chamber designed to provide for rapid introduction and replacement of samples has adequate facilities for various sample treatments and conditioning followed immediately by ESCA analysis of the sample. Examples of application are presented, demonstrating the sensitivity and resolution achievable with this instrument. Its usefulness in trace surface analysis is shown, and some "chemical shifts" measured by the instrument are compared with those obtained by x-ray spectroscopy.
Sensitivity analysis of automatic flight control systems using singular value concepts
NASA Technical Reports Server (NTRS)
Herrera-Vaillard, A.; Paduano, J.; Downing, D.
1985-01-01
A sensitivity analysis is presented that can be used to judge the impact of vehicle dynamic model variations on the relative stability of multivariable continuous closed-loop control systems. The sensitivity analysis uses and extends the singular-value concept by developing expressions for the gradients of the singular value with respect to variations in the vehicle dynamic model and the controller design. Combined with a priori estimates of the accuracy of the model, the gradients are used to identify the elements in the vehicle dynamic model and controller that could severely impact the system's relative stability. The technique is demonstrated for a yaw/roll damper stability augmentation designed for a business jet.
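The gradient expressions referred to above build on the standard result that, for a simple (non-repeated) singular value, d(sigma_k)/dp = u_k^T (dA/dp) v_k. The sketch below checks this against finite differences on an invented parameter-dependent matrix rather than an actual closed-loop return-difference model.

```python
# Sketch: gradient of a singular value with respect to a model parameter,
# d(sigma_k)/dp = u_k^T (dA/dp) v_k for a simple (non-repeated) singular value.
# The parameter-dependent matrix is a toy stand-in, not a flight-control model.
import numpy as np

def A(p):
    return np.array([[1.0 + p, 2.0],
                     [0.5,     3.0 - 2.0 * p]])

dA_dp = np.array([[1.0, 0.0],
                  [0.0, -2.0]])

p = 0.2
U, s, Vt = np.linalg.svd(A(p))

# Analytic gradients of each singular value w.r.t. p.
grad = np.array([U[:, k] @ dA_dp @ Vt[k, :] for k in range(len(s))])

# Finite-difference check.
h = 1e-7
s_h = np.linalg.svd(A(p + h), compute_uv=False)
print("analytic:", grad, " finite difference:", (s_h - s) / h)
```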
Material and morphology parameter sensitivity analysis in particulate composite materials
NASA Astrophysics Data System (ADS)
Zhang, Xiaoyu; Oskay, Caglar
2017-12-01
This manuscript presents a novel parameter sensitivity analysis framework for damage and failure modeling of particulate composite materials subjected to dynamic loading. The proposed framework employs global sensitivity analysis to study the variance in the failure response as a function of model parameters. In view of the computational complexity of performing thousands of detailed microstructural simulations to characterize sensitivities, Gaussian process (GP) surrogate modeling is incorporated into the framework. In order to capture the discontinuity in response surfaces, the GP models are integrated with a support vector machine classification algorithm that identifies the discontinuities within response surfaces. The proposed framework is employed to quantify variability and sensitivities in the failure response of polymer bonded particulate energetic materials under dynamic loads to material properties and morphological parameters that define the material microstructure. Particular emphasis is placed on the identification of sensitivity to interfaces between the polymer binder and the energetic particles. The proposed framework has been demonstrated to identify the most consequential material and morphological parameters under vibrational and impact loads.
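The pairing of a classifier with Gaussian-process surrogates can be sketched as follows: an SVM separates the two sides of a response discontinuity and a separate GP is fit on each side. The one-dimensional synthetic response, the use of scikit-learn, and the fact that the side labels come directly from the known jump location (rather than being inferred from the data) are all simplifying assumptions.

```python
# Sketch of the GP-plus-classifier surrogate idea: an SVM locates the response
# discontinuity and separate Gaussian-process surrogates are fit on each side
# (1-D synthetic response with a jump; scikit-learn stands in for the framework).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = rng.uniform(0, 1, (120, 1))
y = np.where(X[:, 0] < 0.6, np.sin(6 * X[:, 0]), 2.0 + 0.5 * X[:, 0])  # jump at 0.6

# For brevity the side labels use the known jump; in practice they would be
# detected from the sampled responses themselves.
side = (X[:, 0] >= 0.6).astype(int)
clf = SVC(kernel="rbf").fit(X, side)
gps = [GaussianProcessRegressor().fit(X[side == s], y[side == s]) for s in (0, 1)]

def surrogate(x_new):
    s = clf.predict(x_new)
    out = np.empty(len(x_new))
    for label in (0, 1):
        if np.any(s == label):
            out[s == label] = gps[label].predict(x_new[s == label])
    return out

x_test = np.array([[0.3], [0.59], [0.61], [0.9]])
print(surrogate(x_test))
```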
Digital Correlation Microwave Polarimetry: Analysis and Demonstration
NASA Technical Reports Server (NTRS)
Piepmeier, J. R.; Gasiewski, A. J.; Krebs, Carolyn A. (Technical Monitor)
2000-01-01
The design, analysis, and demonstration of a digital-correlation microwave polarimeter for use in earth remote sensing is presented. We begin with an analysis of three-level digital correlation and develop the correlator transfer function and radiometric sensitivity. A fifth-order polynomial regression is derived for inverting the digital correlation coefficient into the analog statistic. In addition, the effects of quantizer threshold asymmetry and hysteresis are discussed. A two-look unpolarized calibration scheme is developed for identifying correlation offsets. The developed theory and calibration method are verified using a 10.7 GHz and a 37.0 GHz polarimeter. The polarimeters are based upon 1-GS/s three-level digital correlators and measure the first three Stokes parameters. Through experiment, the radiometric sensitivity is shown to approach the theoretical value derived earlier in the paper, and the two-look unpolarized calibration method is successfully compared with results using a polarimetric scheme. Finally, sample data from an aircraft experiment demonstrate that the polarimeter is highly useful for ocean wind-vector measurement.
Tian, Yuan; Hassmiller Lich, Kristen; Osgood, Nathaniel D; Eom, Kirsten; Matchar, David B
2016-11-01
As health services researchers and decision makers tackle more difficult problems using simulation models, the number of parameters and the corresponding degree of uncertainty have increased. This often results in reduced confidence in such complex models to guide decision making. To demonstrate a systematic approach of linked sensitivity analysis, calibration, and uncertainty analysis to improve confidence in complex models. Four techniques were integrated and applied to a System Dynamics stroke model of US veterans, which was developed to inform systemwide intervention and research planning: Morris method (sensitivity analysis), multistart Powell hill-climbing algorithm and generalized likelihood uncertainty estimation (calibration), and Monte Carlo simulation (uncertainty analysis). Of 60 uncertain parameters, sensitivity analysis identified 29 needing calibration, 7 that did not need calibration but significantly influenced key stroke outcomes, and 24 not influential to calibration or stroke outcomes that were fixed at their best guess values. One thousand alternative well-calibrated baselines were obtained to reflect calibration uncertainty and brought into uncertainty analysis. The initial stroke incidence rate among veterans was identified as the most influential uncertain parameter, for which further data should be collected. That said, accounting for current uncertainty, the analysis of 15 distinct prevention and treatment interventions provided a robust conclusion that hypertension control for all veterans would yield the largest gain in quality-adjusted life years. For complex health care models, a mixed approach was applied to examine the uncertainty surrounding key stroke outcomes and the robustness of conclusions. We demonstrate that this rigorous approach can be practical and advocate for such analysis to promote understanding of the limits of certainty in applying models to current decisions and to guide future data collection. © The Author(s) 2016.
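The Morris screening step described here can be illustrated with a brief sketch. This is a generic, minimal elementary-effects version on a toy function, not the authors' System Dynamics stroke model; the bounds, step size, trajectory count, and toy model below are assumptions for illustration.

```python
import numpy as np

def morris_screening(model, bounds, r=20, seed=0):
    """Elementary-effects (Morris) screening over a box-bounded parameter space.

    model  : callable mapping a 1-D parameter vector to a scalar output
    bounds : (k, 2) array of [low, high] per parameter
    r      : number of random one-at-a-time trajectories
    Returns mu_star (mean |EE|) and sigma (std of EE) per parameter.
    """
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, float)
    k = len(bounds)
    span = bounds[:, 1] - bounds[:, 0]
    delta = 0.1                                   # step as a fraction of each range
    effects = [[] for _ in range(k)]
    for _ in range(r):
        x = rng.uniform(0.0, 1.0 - delta, size=k)  # unit-cube base point
        y0 = model(bounds[:, 0] + span * x)
        for i in rng.permutation(k):               # perturb one factor at a time
            x_step = x.copy()
            x_step[i] += delta
            y1 = model(bounds[:, 0] + span * x_step)
            effects[i].append((y1 - y0) / delta)
            x, y0 = x_step, y1                     # continue the trajectory
    ee = [np.array(e) for e in effects]
    mu_star = np.array([np.abs(e).mean() for e in ee])
    sigma = np.array([e.std() for e in ee])
    return mu_star, sigma

# Toy stand-in for a complex simulation model (not the stroke model).
toy = lambda p: p[0] ** 2 + 0.5 * p[1] + 0.01 * p[2]
mu_star, sigma = morris_screening(toy, bounds=[[0, 1]] * 3)
print(mu_star)   # large values flag parameters worth carrying into calibration
```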
DOE Office of Scientific and Technical Information (OSTI.GOV)
Groen, E.A., E-mail: Evelyne.Groen@gmail.com; Heijungs, R.; Leiden University, Einsteinweg 2, Leiden 2333 CC
Life cycle assessment (LCA) is an established tool to quantify the environmental impact of a product. A good assessment of uncertainty is important for making well-informed decisions in comparative LCA, as well as for correctly prioritising data collection efforts. Under- or overestimation of output uncertainty (e.g. output variance) will lead to incorrect decisions in such matters. The presence of correlations between input parameters during uncertainty propagation can increase or decrease the output variance. However, most LCA studies that include uncertainty analysis ignore correlations between input parameters during uncertainty propagation, which may lead to incorrect conclusions. Two approaches to include correlations between input parameters during uncertainty propagation and global sensitivity analysis were studied: an analytical approach and a sampling approach. The use of both approaches is illustrated for an artificial case study of electricity production. Results demonstrate that both approaches yield approximately the same output variance and sensitivity indices for this specific case study. Furthermore, we demonstrate that the analytical approach can be used to quantify the risk of ignoring correlations between input parameters during uncertainty propagation in LCA. We demonstrate that: (1) we can predict if including correlations among input parameters in uncertainty propagation will increase or decrease output variance; (2) we can quantify the risk of ignoring correlations on the output variance and the global sensitivity indices. Moreover, this procedure requires only limited data. - Highlights: • Ignoring correlation leads to under- or overestimation of the output variance. • We demonstrated that the risk of ignoring correlation can be quantified. • The procedure proposed is generally applicable in life cycle assessment. • In some cases, ignoring correlation has a minimal effect on decision-making tools.
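The sampling approach to correlated uncertainty propagation can be sketched in a few lines. The linear two-parameter "impact" model, means, standard deviations, and correlation below are hypothetical placeholders, not the paper's electricity case study; the point is only that introducing a positive input correlation changes the output variance.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical LCA-style output: impact = 3*e1 + 5*e2 (two uncertain emission factors).
def impact(x):
    return 3.0 * x[..., 0] + 5.0 * x[..., 1]

mean = np.array([1.0, 0.8])
std = np.array([0.2, 0.1])
for rho in (0.0, 0.7):                       # ignore vs. include input correlation
    cov = np.diag(std) @ np.array([[1.0, rho], [rho, 1.0]]) @ np.diag(std)
    samples = rng.multivariate_normal(mean, cov, size=100_000)
    y = impact(samples)
    print(f"rho={rho:.1f}  output variance={y.var():.4f}")
# For this linear model the analytical variance is a'Ca with a=[3,5]:
# rho=0.0 gives 0.61; rho=0.7 gives 0.61 + 2*3*5*(0.7*0.2*0.1) = 1.03,
# i.e. ignoring the correlation here underestimates the output variance.
```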
Sensitivity Analysis for some Water Pollution Problem
NASA Astrophysics Data System (ADS)
Le Dimet, François-Xavier; Tran Thu, Ha; Hussaini, Yousuff
2014-05-01
Sensitivity analysis employs some response function and the variable with respect to which its sensitivity is evaluated. If the state of the system is retrieved through a variational data assimilation process, then the observation appears only in the Optimality System (OS). In many cases, observations have errors and it is important to estimate their impact. Therefore, sensitivity analysis has to be carried out on the OS, and in that sense sensitivity analysis is a second-order property. The OS can be considered as a generalized model because it contains all the available information. This presentation proposes a method to carry out sensitivity analysis in general. The method is demonstrated with an application to a water pollution problem. The model involves the shallow water equations and an equation for the pollutant concentration. These equations are discretized using a finite volume method. The response function depends on the pollutant source, and its sensitivity with respect to the source term of the pollutant is studied. Specifically, we consider: • Identification of unknown parameters, and • Identification of sources of pollution and sensitivity with respect to the sources. We also use a Singular Evolutive Interpolated Kalman Filter to study this problem. The presentation includes a comparison of the results from these two methods.
NASA Technical Reports Server (NTRS)
Hou, Gene J.-W; Newman, Perry A. (Technical Monitor)
2004-01-01
A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The minimum distance associated with the MPP provides a measure of the safety probability, which can be obtained by approximate probability integration methods such as FORM or SORM. The reliability sensitivity equations are derived first in this paper, based on the derivatives of the optimal solution. Examples are provided later to demonstrate the use of these derivatives for better reliability analysis and reliability-based design optimization (RBDO).
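A minimal sketch of the MPP search that this kind of analysis builds on, assuming a simple linear limit state in standard normal space; the limit state and its coefficients are invented for illustration, and the paper's reliability sensitivity derivations are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical limit state in standard normal space: failure when g(u) <= 0.
def g(u):
    return 3.0 - u[0] - 0.5 * u[1]

# MPP = point on g(u)=0 closest to the origin; beta = its distance (FORM).
res = minimize(lambda u: u @ u, x0=np.array([1.0, 1.0]),
               constraints={"type": "eq", "fun": g})
u_star = res.x
beta = np.linalg.norm(u_star)
print("MPP:", u_star, " beta:", beta, " Pf (FORM):", norm.cdf(-beta))
# Exact for this linear g: beta = 3/sqrt(1.25) ~ 2.683, Pf ~ 3.6e-3.
```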
Computational methods for efficient structural reliability and reliability sensitivity analysis
NASA Technical Reports Server (NTRS)
Wu, Y.-T.
1993-01-01
This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
Fujarewicz, Krzysztof; Lakomiec, Krzysztof
2016-12-01
We investigate a spatial model of growth of a tumor and its sensitivity to radiotherapy. It is assumed that the radiation dose may vary in time and space, like in intensity modulated radiotherapy (IMRT). The change of the final state of the tumor depends on local differences in the radiation dose and varies with the time and the place of these local changes. This leads to the concept of a tumor's spatiotemporal sensitivity to radiation, which is a function of time and space. We show how adjoint sensitivity analysis may be applied to calculate the spatiotemporal sensitivity of the finite difference scheme resulting from the partial differential equation describing the tumor growth. We demonstrate results of this approach to the tumor proliferation, invasion and response to radiotherapy (PIRT) model and we compare the accuracy and the computational effort of the method to the simple forward finite difference sensitivity analysis. Furthermore, we use the spatiotemporal sensitivity during the gradient-based optimization of the spatiotemporal radiation protocol and present results for different parameters of the model.
Efficient sensitivity analysis method for chaotic dynamical systems
NASA Astrophysics Data System (ADS)
Liao, Haitao
2016-05-01
The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time-averaged quantities for chaotic dynamical systems. The key idea is to recast the time-averaged integration term in the form of a differential equation before applying the sensitivity analysis method. An additional constraint-based equation, which forms the augmented equations of motion, is proposed to calculate the time-averaged integration variable, and the sensitivity coefficients are obtained as a result of solving the augmented differential equations. The application of the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient, which depends on the final state of the Lagrange multipliers. Using LU factorization to calculate the Lagrange multipliers improves both convergence behavior and computational cost. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches, and some short impulsive sensitivity coefficients are observed by using the direct differentiation sensitivity analysis method.
NASA Astrophysics Data System (ADS)
Stepanov, E. V.; Milyaev, Valerii A.
2002-11-01
The application of tunable diode lasers for a highly sensitive analysis of gaseous biomarkers in exhaled air in biomedical diagnostics is discussed. The principle of operation and the design of a laser analyser for studying the composition of exhaled air are described. The results of detection of gaseous biomarkers in exhaled air, including clinical studies, which demonstrate the diagnostic possibilities of the method, are presented.
A Non-Intrusive Algorithm for Sensitivity Analysis of Chaotic Flow Simulations
NASA Technical Reports Server (NTRS)
Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris
2017-01-01
We demonstrate a novel algorithm for computing the sensitivity of statistics in chaotic flow simulations to parameter perturbations. The algorithm is non-intrusive but requires exposing an interface. Based on the principle of shadowing in dynamical systems, this algorithm is designed to reduce the effect of the sampling error in computing sensitivity of statistics in chaotic simulations. We compare the effectiveness of this method to that of the conventional finite difference method.
Esfahlani, Farnaz Zamani; Sayama, Hiroki; Visser, Katherine Frost; Strauss, Gregory P
2017-12-01
Objective: The Positive and Negative Syndrome Scale is a primary outcome measure in clinical trials examining the efficacy of antipsychotic medications. Although the Positive and Negative Syndrome Scale has demonstrated sensitivity as a measure of treatment change in studies using traditional univariate statistical approaches, its sensitivity to detecting network-level changes in dynamic relationships among symptoms has yet to be demonstrated using more sophisticated multivariate analyses. In the current study, we examined the sensitivity of the Positive and Negative Syndrome Scale to detecting antipsychotic treatment effects as revealed through network analysis. Design: Participants included 1,049 individuals diagnosed with psychotic disorders from the Phase I portion of the Clinical Antipsychotic Trials of Intervention Effectiveness (CATIE) study. Of these participants, 733 were clinically determined to be treatment-responsive and 316 were found to be treatment-resistant. Item level data from the Positive and Negative Syndrome Scale were submitted to network analysis, and macroscopic, mesoscopic, and microscopic network properties were evaluated for the treatment-responsive and treatment-resistant groups at baseline and post-phase I antipsychotic treatment. Results: Network analysis indicated that treatment-responsive patients had more densely connected symptom networks after antipsychotic treatment than did treatment-responsive patients at baseline, and that symptom centralities increased following treatment. In contrast, symptom networks of treatment-resistant patients behaved more randomly before and after treatment. Conclusions: These results suggest that the Positive and Negative Syndrome Scale is sensitive to detecting treatment effects as revealed through network analysis. Its findings also provide compelling new evidence that strongly interconnected symptom networks confer an overall greater probability of treatment responsiveness in patients with psychosis, suggesting that antipsychotics achieve their effect by enhancing a number of central symptoms, which then facilitate reduction of other highly coupled symptoms in a network-like fashion.
Least Squares Shadowing sensitivity analysis of chaotic limit cycle oscillations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Qiqi, E-mail: qiqi@mit.edu; Hu, Rui, E-mail: hurui@mit.edu; Blonigan, Patrick, E-mail: blonigan@mit.edu
2014-06-15
The adjoint method, among other sensitivity analysis methods, can fail in chaotic dynamical systems. The result from these methods can be too large, often by orders of magnitude, when the result is the derivative of a long time averaged quantity. This failure is known to be caused by ill-conditioned initial value problems. This paper overcomes this failure by replacing the initial value problem with the well-conditioned “least squares shadowing (LSS) problem”. The LSS problem is then linearized in our sensitivity analysis algorithm, which computes a derivative that converges to the derivative of the infinitely long time average. We demonstrate our algorithm in several dynamical systems exhibiting both periodic and chaotic oscillations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hilaly, A.K.; Sikdar, S.K.
In this study, the authors introduced several modifications to the WAR (waste reduction) algorithm developed earlier. These modifications were made for systematically handling sensitivity analysis and various tasks of waste minimization. A design hierarchy was formulated to promote appropriate waste reduction tasks at designated levels of the hierarchy. A sensitivity coefficient was used to measure the relative impacts of process variables on the pollution index of a process. The use of the WAR algorithm was demonstrated by a fermentation process for making penicillin.
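A sketch of a finite-difference relative sensitivity coefficient of the kind used to rank process variables by their impact on a pollution index; the index function and process variables below are hypothetical stand-ins, not the WAR algorithm or the penicillin flowsheet itself.

```python
def relative_sensitivity(index_fn, x0, i, rel_step=0.01):
    """Normalized sensitivity S = (dI/I) / (dx/x) of a pollution index
    to process variable i, estimated by a forward finite difference."""
    x = list(x0)
    base = index_fn(x)
    x[i] *= (1.0 + rel_step)
    perturbed = index_fn(x)
    return ((perturbed - base) / base) / rel_step

# Hypothetical pollution index of a fermentation flowsheet:
# x = [substrate feed rate, solvent make-up, aeration rate]
index = lambda x: 2.0 * x[0] + 10.0 * x[1] + 0.1 * x[2]
x0 = [5.0, 1.0, 50.0]
print([round(relative_sensitivity(index, x0, i), 3) for i in range(3)])
# The variable with the largest coefficient is the most promising waste-reduction lever.
```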
Kühnemund, Malte; Hernández-Neuta, Iván; Sharif, Mohd Istiaq; Cornaglia, Matteo; Gijs, Martin A M; Nilsson, Mats
2017-05-05
Single molecule quantification assays provide the ultimate sensitivity and precision for molecular analysis. However, most digital analysis techniques, i.e. droplet PCR, require sophisticated and expensive instrumentation for molecule compartmentalization, amplification and analysis. Rolling circle amplification (RCA) provides a simpler means for digital analysis. Nevertheless, the sensitivity of RCA assays has until now been limited by inefficient detection methods. We have developed a simple microfluidic strategy for enrichment of RCA products into a single field of view of a low magnification fluorescent sensor, enabling ultra-sensitive digital quantification of nucleic acids over a dynamic range from 1.2 aM to 190 fM. We prove the broad applicability of our analysis platform by demonstrating 5-plex detection of as little as ∼1 pg (∼300 genome copies) of pathogenic DNA with simultaneous antibiotic resistance marker detection, and the analysis of rare oncogene mutations. Our method is simpler, more cost-effective and faster than other digital analysis techniques and provides the means to implement digital analysis in any laboratory equipped with a standard fluorescent microscope. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Agasti, Sarit S; Liong, Monty; Peterson, Vanessa M; Lee, Hakho; Weissleder, Ralph
2012-11-14
DNA barcoding is an attractive technology, as it allows sensitive and multiplexed target analysis. However, DNA barcoding of cellular proteins remains challenging, primarily because barcode amplification and readout techniques are often incompatible with the cellular microenvironment. Here we describe the development and validation of a photocleavable DNA barcode-antibody conjugate method for rapid, quantitative, and multiplexed detection of proteins in single live cells. Following target binding, this method allows DNA barcodes to be photoreleased in solution, enabling easy isolation, amplification, and readout. As a proof of principle, we demonstrate sensitive and multiplexed detection of protein biomarkers in a variety of cancer cells.
NASA Astrophysics Data System (ADS)
Safaei, S.; Haghnegahdar, A.; Razavi, S.
2016-12-01
Complex environmental models are now the primary tool to inform decision makers about the current or future management of environmental resources under climate and environmental change. These complex models often contain a large number of parameters that need to be determined by a computationally intensive calibration procedure. Sensitivity analysis (SA) is a very useful tool that not only allows for understanding the model behavior, but also helps in reducing the number of calibration parameters by identifying unimportant ones. The issue is that most global sensitivity techniques are themselves highly computationally demanding when generating robust and stable sensitivity metrics over the entire model response surface. Recently, a novel global sensitivity analysis method, Variogram Analysis of Response Surfaces (VARS), was introduced that can efficiently provide a comprehensive assessment of global sensitivity using the variogram concept. In this work, we aim to evaluate the effectiveness of this highly efficient GSA method in reducing computational burden when applied to systems with an extra-large number of input factors (~100). We use a test function and a hydrological modelling case study to demonstrate the capability of the VARS method in reducing problem dimensionality by identifying important vs. unimportant input factors.
NASA Astrophysics Data System (ADS)
Hoshor, Cory; Young, Stephan; Rogers, Brent; Currie, James; Oakes, Thomas; Scott, Paul; Miller, William; Caruso, Anthony
2014-03-01
A novel application of the Pearson Cross-Correlation to neutron spectral discernment in a moderating type neutron spectrometer is introduced. This cross-correlation analysis will be applied to spectral response data collected through both MCNP simulation and empirical measurement by the volumetrically sensitive spectrometer for comparison in 1, 2, and 3 spatial dimensions. The spectroscopic analysis methods discussed will be demonstrated to discern various common spectral and monoenergetic neutron sources.
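The spectral discernment step can be illustrated by correlating a measured multi-channel response against a small library of candidate-source templates; the per-channel values and noise level below are synthetic stand-ins, not MCNP simulation results or measured spectrometer data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical library of per-channel detector responses for known sources.
library = {
    "Cf-252":  np.array([0.9, 1.4, 2.1, 1.6, 0.8, 0.3]),
    "AmBe":    np.array([0.4, 0.8, 1.5, 2.0, 1.7, 0.9]),
    "2.5 MeV": np.array([0.2, 0.5, 0.9, 1.8, 2.2, 1.4]),
}

# Simulated measurement: an AmBe-like response plus counting noise.
measured = library["AmBe"] + rng.normal(0.0, 0.05, size=6)

# Pearson cross-correlation of the measurement against each template.
scores = {name: np.corrcoef(measured, resp)[0, 1] for name, resp in library.items()}
best = max(scores, key=scores.get)
print(scores, "->", best)   # the highest coefficient identifies the most likely source
```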
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Newman, James C., III; Barnwell, Richard W.
1997-01-01
A three-dimensional unstructured grid approach to aerodynamic shape sensitivity analysis and design optimization has been developed and is extended to model geometrically complex configurations. The advantage of unstructured grids (when compared with a structured-grid approach) is their inherent ability to discretize irregularly shaped domains with greater efficiency and less effort. Hence, this approach is ideally suited for geometrically complex configurations of practical interest. In this work the nonlinear Euler equations are solved using an upwind, cell-centered, finite-volume scheme. The discrete, linearized systems which result from this scheme are solved iteratively by a preconditioned conjugate-gradient-like algorithm known as GMRES for the two-dimensional geometry and a Gauss-Seidel algorithm for the three-dimensional; similar procedures are used to solve the accompanying linear aerodynamic sensitivity equations in incremental iterative form. As shown, this particular form of the sensitivity equation makes large-scale gradient-based aerodynamic optimization possible by taking advantage of memory efficient methods to construct exact Jacobian matrix-vector products. Simple parameterization techniques are utilized for demonstrative purposes. Once the surface has been deformed, the unstructured grid is adapted by considering the mesh as a system of interconnected springs. Grid sensitivities are obtained by differentiating the surface parameterization and the grid adaptation algorithms with ADIFOR (which is an advanced automatic-differentiation software tool). To demonstrate the ability of this procedure to analyze and design complex configurations of practical interest, the sensitivity analysis and shape optimization has been performed for a two-dimensional high-lift multielement airfoil and for a three-dimensional Boeing 747-200 aircraft.
A new framework for comprehensive, robust, and efficient global sensitivity analysis: 2. Application
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin V.
2016-01-01
Based on the theoretical framework for sensitivity analysis called "Variogram Analysis of Response Surfaces" (VARS), developed in the companion paper, we develop and implement a practical "star-based" sampling strategy (called STAR-VARS) for the application of VARS to real-world problems. We also develop a bootstrap approach to provide confidence level estimates for the VARS sensitivity metrics and to evaluate the reliability of inferred factor rankings. The effectiveness, efficiency, and robustness of STAR-VARS are demonstrated via two real-data hydrological case studies (a 5-parameter conceptual rainfall-runoff model and a 45-parameter land surface scheme hydrology model), and a comparison with the "derivative-based" Morris and "variance-based" Sobol approaches is provided. Our results show that STAR-VARS provides reliable and stable assessments of "global" sensitivity across the full range of scales in the factor space, while being 1-2 orders of magnitude more efficient than the Morris or Sobol approaches.
Design and Analysis of a New Hair Sensor for Multi-Physical Signal Measurement
Yang, Bo; Hu, Di; Wu, Lei
2016-01-01
A new hair sensor for multi-physical signal measurements, including acceleration, angular velocity and air flow, is presented in this paper. The entire structure consists of a hair post, a torsional frame and a resonant signal transducer. The hair post is utilized to sense and deliver the physical signals of the acceleration and the air flow rate. The physical signals are converted into frequency signals by the resonant transducer. The structure is optimized through finite element analysis. The simulation results demonstrate that the hair sensor has a frequency of 240 Hz in the first mode for acceleration or air flow sensing, 3115 Hz in the third and fourth modes for the resonant conversion, and 3467 Hz in the fifth and sixth modes for the angular velocity transformation, respectively. All of these frequencies fall in a reasonable modal distribution and are separated from interference modes. The input-output analysis of the new hair sensor demonstrates that the scale factor of the acceleration is 12.35 Hz/g, the scale factor of the angular velocity is 0.404 nm/deg/s and the sensitivity of the air flow is 1.075 Hz/(m/s)^2, which verifies the multifunctional sensing characteristics of the hair sensor. In addition, structural optimization of the hair post is used to improve the sensitivity to the air flow rate and the acceleration. The analysis results illustrate that the hollow circular hair post can increase the sensitivity to the air flow and the II-shape hair post can increase the sensitivity to the acceleration. Moreover, the thermal analysis confirms that the frequency-difference scheme for the resonant transducer largely eliminates the influence of temperature on the measurement accuracy. The air flow analysis indicates that increasing the surface area of the hair post significantly improves the efficiency of signal transmission. In summary, the structure of the new hair sensor is shown to be feasible by comprehensive simulation and analysis.
Sensitive Amino Acid Composition and Chirality Analysis with the Mars Organic Analyzer (MOA)
NASA Technical Reports Server (NTRS)
Skelley, Alison M.; Scherer, James R.; Aubrey, Andrew D.; Grover, William H.; Ivester, Robin H. C.; Ehrenfreund, Pascale; Grunthaner, Frank J.; Bada, Jeffrey L.; Mathies, Richard A.
2005-01-01
Detection of life on Mars requires definition of a suitable biomarker and development of sensitive yet compact instrumentation capable of performing in situ analyses. Our studies are focused on amino acid analysis because amino acids are more resistant to decomposition than other biomolecules, and because amino acid chirality is a well-defined biomarker. Amino acid composition and chirality analysis has been previously demonstrated in the lab using microfabricated capillary electrophoresis (CE) chips. To analyze amino acids in the field, we have developed the Mars Organic Analyzer (MOA), a portable analysis system that consists of a compact instrument and a novel multi-layer CE microchip.
A wideband FMBEM for 2D acoustic design sensitivity analysis based on direct differentiation method
NASA Astrophysics Data System (ADS)
Chen, Leilei; Zheng, Changjun; Chen, Haibo
2013-09-01
This paper presents a wideband fast multipole boundary element method (FMBEM) for two dimensional acoustic design sensitivity analysis based on the direct differentiation method. The wideband fast multipole method (FMM) formed by combining the original FMM and the diagonal form FMM is used to accelerate the matrix-vector products in the boundary element analysis. The Burton-Miller formulation is used to overcome the fictitious frequency problem when using a single Helmholtz boundary integral equation for exterior boundary-value problems. The strongly singular and hypersingular integrals in the sensitivity equations can be evaluated explicitly and directly by using the piecewise constant discretization. The iterative solver GMRES is applied to accelerate the solution of the linear system of equations. A set of optimal parameters for the wideband FMBEM design sensitivity analysis are obtained by observing the performances of the wideband FMM algorithm in terms of computing time and memory usage. Numerical examples are presented to demonstrate the efficiency and validity of the proposed algorithm.
Design sensitivity analysis and optimization tool (DSO) for sizing design applications
NASA Technical Reports Server (NTRS)
Chang, Kuang-Hua; Choi, Kyung K.; Perng, Jyh-Hwa
1992-01-01
The DSO tool, a structural design software system that provides the designer with a graphics-based menu-driven design environment to perform easy design optimization for general applications, is presented. Three design stages, preprocessing, design sensitivity analysis, and postprocessing, are implemented in the DSO to allow the designer to carry out the design process systematically. A framework, including data base, user interface, foundation class, and remote module, has been designed and implemented to facilitate software development for the DSO. A number of dedicated commercial software/packages have been integrated in the DSO to support the design procedures. Instead of parameterizing an FEM, design parameters are defined on a geometric model associated with physical quantities, and the continuum design sensitivity analysis theory is implemented to compute design sensitivity coefficients using postprocessing data from the analysis codes. A tracked vehicle road wheel is given as a sizing design application to demonstrate the DSO's easy and convenient design optimization process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steill, Jeffrey D.; Huang, Haifeng; Hoops, Alexandra A.
This report summarizes our development of spectroscopic chemical analysis techniques and spectral modeling for trace-gas measurements of highly regulated, low-concentration species, such as HCl, present in flue gas emissions from utility coal boilers under conditions of high humidity. Detailed spectral modeling of the spectroscopy of HCl and other important combustion and atmospheric species such as H2O, CO2, N2O, NO2, SO2, and CH4 demonstrates that IR-laser spectroscopy is a sensitive multi-component analysis strategy. Experimental measurements from techniques based on IR laser spectroscopy are presented that demonstrate sub-ppm sensitivity levels to these species. Photoacoustic infrared spectroscopy is used to detect and quantify HCl at ppm levels with extremely high signal-to-noise even under conditions of high relative humidity. Additionally, cavity ring-down IR spectroscopy is used to achieve an extremely high sensitivity to combustion trace gases in this spectral region; ppm-level CH4 is one demonstrated example. The importance of spectral resolution in the sensitivity of a trace-gas measurement is examined by spectral modeling in the mid- and near-IR, and efforts to improve measurement resolution through novel instrument development are described. While previous project reports focused on benefits and complexities of the dual-etalon cavity ring-down infrared spectrometer, here details on steps taken to implement this unique and potentially revolutionary instrument are described. This report also illustrates and critiques the general strategy of IR-laser photodetection of trace gases, leading to the conclusion that mid-IR laser spectroscopy techniques provide a promising basis for further instrument development and implementation that will enable cost-effective sensitive detection of multiple key contaminant species simultaneously.
Time to angiographic reperfusion in acute ischemic stroke: decision analysis.
Vagal, Achala S; Khatri, Pooja; Broderick, Joseph P; Tomsick, Thomas A; Yeatts, Sharon D; Eckman, Mark H
2014-12-01
Our objective was to use decision analytic modeling to compare 2 treatment strategies of intravenous recombinant tissue-type plasminogen activator (r-tPA) alone versus combined intravenous r-tPA/endovascular therapy in a subgroup of patients with large vessel (internal carotid artery terminus, M1, and M2) occlusion based on varying times to angiographic reperfusion and varying rates of reperfusion. We developed a decision model using Interventional Management of Stroke (IMS) III trial data and comprehensive literature review. We performed 1-way sensitivity analyses for time to reperfusion and 2-way sensitivity for time to reperfusion and rate of reperfusion success. We also performed probabilistic sensitivity analyses to address uncertainty in total time to reperfusion for the endovascular approach. In the base case, endovascular approach yielded a higher expected utility (6.38 quality-adjusted life years) than the intravenous-only arm (5.42 quality-adjusted life years). One-way sensitivity analyses demonstrated superiority of endovascular treatment to intravenous-only arm unless time to reperfusion exceeded 347 minutes. Two-way sensitivity analysis demonstrated that endovascular treatment was preferred when probability of reperfusion is high and time to reperfusion is small. Probabilistic sensitivity results demonstrated an average gain for endovascular therapy of 0.76 quality-adjusted life years (SD 0.82) compared with the intravenous-only approach. In our post hoc model with its underlying limitations, endovascular therapy after intravenous r-tPA is the preferred treatment as compared with intravenous r-tPA alone. However, if time to reperfusion exceeds 347 minutes, intravenous r-tPA alone is the recommended strategy. This warrants validation in a randomized, prospective trial among patients with large vessel occlusions. © 2014 American Heart Association, Inc.
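A sketch of the one-way sensitivity sweep over time to reperfusion. The utility functions and coefficients below are illustrative placeholders chosen so that the crossover lands near the reported 347 minutes; they are not the published IMS III decision model.

```python
import numpy as np

# Illustrative expected utilities (QALYs); coefficients are placeholders.
def qaly_iv_only(t_min):
    return 5.42 + 0.0 * t_min                  # insensitive to endovascular timing

def qaly_endovascular(t_min):
    return 7.5 - 0.006 * t_min                 # benefit decays as reperfusion is delayed

times = np.arange(60, 601, 1)                  # minutes to angiographic reperfusion
diff = qaly_endovascular(times) - qaly_iv_only(times)
threshold = times[np.argmax(diff < 0)] if np.any(diff < 0) else None
print("endovascular preferred until ~", threshold, "minutes")
# With these placeholder slopes the crossover falls near 347 minutes,
# matching the threshold reported in the abstract above.
```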
Logistic Map for Cancellable Biometrics
NASA Astrophysics Data System (ADS)
Supriya, V. G., Dr; Manjunatha, Ramachandra, Dr
2017-08-01
This paper presents the design and implementation of a secure biometric template protection system that transforms the biometric template using binary chaotic signals and three different key streams to obtain another form of the template. Its efficiency is demonstrated by the results, and its security is investigated through analyses including key space analysis, information entropy, and key sensitivity analysis.
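A minimal sketch of a logistic-map key stream and a key-sensitivity check, assuming a simple thresholding rule for bit extraction; this is not the paper's full template-transformation scheme, and the map parameter and keys below are arbitrary.

```python
import numpy as np

def logistic_keystream(x0, n, r=3.99, burn_in=100):
    """Binary key stream from the chaotic logistic map x <- r*x*(1-x)."""
    x = x0
    bits = np.empty(n, dtype=np.uint8)
    for _ in range(burn_in):                 # discard the transient
        x = r * x * (1.0 - x)
    for i in range(n):
        x = r * x * (1.0 - x)
        bits[i] = 1 if x > 0.5 else 0
    return bits

k1 = logistic_keystream(0.3141592653, 10_000)
k2 = logistic_keystream(0.3141592654, 10_000)   # key changed by 1e-10
print("fraction of differing bits:", np.mean(k1 != k2))
# A fraction near 0.5 indicates strong key sensitivity: a tiny key change
# fully decorrelates the stream, so the transformed template cannot be inverted
# without the exact key.
```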
Sizing and phenotyping of cellular vesicles using Nanoparticle Tracking Analysis
Dragovic, Rebecca A.; Gardiner, Christopher; Brooks, Alexandra S.; Tannetta, Dionne S.; Ferguson, David J.P.; Hole, Patrick; Carr, Bob; Redman, Christopher W.G.; Harris, Adrian L.; Dobson, Peter J.; Harrison, Paul; Sargent, Ian L.
2011-01-01
Cellular microvesicles and nanovesicles (exosomes) are involved in many disease processes and have major potential as biomarkers. However, developments in this area are constrained by limitations in the technology available for their measurement. Here we report on the use of fluorescence nanoparticle tracking analysis (NTA) to rapidly size and phenotype cellular vesicles. In this system vesicles are visualized by light scattering using a light microscope. A video is taken, and the NTA software tracks the Brownian motion of individual vesicles and calculates their size and total concentration. Using human placental vesicles and plasma, we have demonstrated that NTA can measure cellular vesicles as small as ∼50 nm and is far more sensitive than conventional flow cytometry (lower limit ∼300 nm). By combining NTA with fluorescence measurement we have demonstrated that vesicles can be labeled with specific antibody-conjugated quantum dots, allowing their phenotype to be determined. From the Clinical Editor: The authors of this study utilized fluorescence nanoparticle tracking analysis (NTA) to rapidly size and phenotype cellular vesicles, demonstrating that NTA is far more sensitive than conventional flow cytometry.
NASA Technical Reports Server (NTRS)
Fu, Lee-Lueng; Chao, Yi
1996-01-01
It has been demonstrated that current-generation global ocean general circulation models (OGCM) are able to simulate large-scale sea level variations fairly well. In this study, a GFDL/MOM-based OGCM was used to investigate its sensitivity to different wind forcing. Simulations of global sea level using wind forcing from the ERS-1 Scatterometer and the NMC operational analysis were compared to the observations made by the TOPEX/Poseidon (T/P) radar altimeter for a two-year period. The result of the study has demonstrated the sensitivity of the OGCM to the quality of wind forcing, as well as the synergistic use of two spaceborne sensors in advancing the study of wind-driven ocean dynamics.
Enhanced electrochemical nanoring electrode for analysis of cytosol in single cells.
Zhuang, Lihong; Zuo, Huanzhen; Wu, Zengqiang; Wang, Yu; Fang, Danjun; Jiang, Dechen
2014-12-02
A microelectrode array has been applied for single cell analysis with relatively high throughput; however, the cells were typically cultured on the microelectrodes under cell-size microwell traps, leading to difficulty in functionalizing the electrode surface for higher detection sensitivity. Here, nanoring electrodes embedded under the microwell traps were fabricated to achieve the isolation of the electrode surface and the cell support, and thus, the electrode surface can be modified to obtain enhanced electrochemical sensitivity for single cell analysis. Moreover, the nanometer-sized electrode permitted a faster diffusion of analyte to the surface for additional improvement in the sensitivity, which was evidenced by the electrochemical characterization and the simulation. To demonstrate the concept of the functionalized nanoring electrode for single cell analysis, the electrode surface was deposited with Prussian blue to detect intracellular hydrogen peroxide at a single cell. Currents of hundreds of picoamperes were observed at our functionalized nanoring electrode, demonstrating the enhanced electrochemical sensitivity. The successful realization of a functionalized nanoring electrode will benefit the development of high-throughput single-cell electrochemical analysis.
NASA Astrophysics Data System (ADS)
Xu, Zhida; Jiang, Jing; Wang, Xinhao; Han, Kevin; Ameen, Abid; Khan, Ibrahim; Chang, Te-Wei; Liu, Gang Logan
2016-03-01
We demonstrated a highly sensitive, wafer-scale, highly uniform plasmonic nano-mushroom substrate based on plastic for naked-eye plasmonic colorimetry and surface-enhanced Raman spectroscopy (SERS). We gave it the name FlexBrite. The dual-mode functionality of FlexBrite allows for label-free qualitative analysis by SERS with an enhancement factor (EF) of 10^8 and label-free quantitative analysis by naked-eye colorimetry with a sensitivity of 611 nm RIU^-1. The SERS EF of FlexBrite in the wet state was found to be 4.81 × 10^8, 7 times stronger than in the dry state, making FlexBrite suitable for aqueous environments such as microfluidic systems. The label-free detection of biotin-streptavidin interaction by both SERS and colorimetry was demonstrated with FlexBrite. The detection of trace amounts of the narcotic drug methamphetamine in drinking water by SERS was implemented with a handheld Raman spectrometer and FlexBrite. This plastic-based dual-mode nano-mushroom substrate has the potential to be used as a sensing platform for easy and fast analysis in chemical and biological assays.
Variational Methods in Sensitivity Analysis and Optimization for Aerodynamic Applications
NASA Technical Reports Server (NTRS)
Ibrahim, A. H.; Hou, G. J.-W.; Tiwari, S. N. (Principal Investigator)
1996-01-01
Variational-methods (VM) sensitivity analysis, the continuous alternative to discrete sensitivity analysis, is employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational methods use the generalized calculus of variations, in which the variable boundary is considered as the design function. The converged solutions of the state and costate equations are integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The determination of the sensitivity derivatives of the performance index or functional entails the coupled solutions of the state and costate equations. As the stable and converged numerical solution of the costate equations with their boundary conditions is a priori unknown, numerical stability analysis is performed on both the state and costate equations. Thereafter, based on the amplification factors obtained by solving the generalized eigenvalue equations, the stability behavior of the costate equations is discussed and compared with the state (Euler) equations. The stability analysis of the costate equations suggests that the converged and stable solution of the costate equation is possible only if the computational domain of the costate equations is transformed to take into account the reverse flow nature of the costate equations. The application of the variational methods to aerodynamic shape optimization problems is demonstrated for internal flow problems in the supersonic Mach number range. The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within a reasonable range for engineering prediction purposes, the variational methods yield a substantial gain in computational efficiency, i.e., computer time and memory, when compared with the finite difference sensitivity analysis.
Optimization of Turbine Engine Cycle Analysis with Analytic Derivatives
NASA Technical Reports Server (NTRS)
Hearn, Tristan; Hendricks, Eric; Chin, Jeffrey; Gray, Justin; Moore, Kenneth T.
2016-01-01
A new engine cycle analysis tool, called Pycycle, was recently built using the OpenMDAO framework. This tool uses equilibrium chemistry based thermodynamics, and provides analytic derivatives. This allows for stable and efficient use of gradient-based optimization and sensitivity analysis methods on engine cycle models, without requiring the use of finite difference derivative approximation methods. To demonstrate this, a gradient-based design optimization was performed on a multi-point turbofan engine model. Results demonstrate very favorable performance compared to an optimization of an identical model using finite-difference approximated derivatives.
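The benefit of analytic derivatives over finite-difference approximation can be sketched with a generic gradient-based optimization; this is plain SciPy on a toy objective, not PyCycle or OpenMDAO code, and the function-call counts it prints are simply a proxy for the efficiency gain described above.

```python
import numpy as np
from scipy.optimize import minimize

# Toy smooth objective standing in for an engine-cycle figure of merit.
def f(x):
    return (x[0] - 1.0) ** 2 + 10.0 * (x[1] - x[0] ** 2) ** 2

def grad_f(x):      # analytic derivatives of the same objective
    return np.array([
        2.0 * (x[0] - 1.0) - 40.0 * x[0] * (x[1] - x[0] ** 2),
        20.0 * (x[1] - x[0] ** 2),
    ])

x0 = np.array([-1.2, 1.0])
fd = minimize(f, x0, method="BFGS")                # gradient by finite differences
an = minimize(f, x0, jac=grad_f, method="BFGS")    # analytic gradient supplied
print("finite-difference objective calls:", fd.nfev,
      " analytic-gradient objective calls:", an.nfev)
# Supplying exact derivatives removes the extra perturbed evaluations and the
# step-size noise that finite differencing introduces into the search.
```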
We demonstrate a spatially-explicit regional assessment of current condition of aquatic ecoservices in the Coal River Basin (CRB), with limited sensitivity analysis for the atmospheric contaminant mercury. The integrated modeling framework (IMF) forecasts water quality and quant...
Highly Sensitive Hot-Wire Anemometry Based on Macro-Sized Double-Walled Carbon Nanotube Strands.
Wang, Dingqu; Xiong, Wei; Zhou, Zhaoying; Zhu, Rong; Yang, Xing; Li, Weihua; Jiang, Yueyuan; Zhang, Yajun
2017-08-01
This paper presents a highly sensitive flow-rate sensor with carbon nanotubes (CNTs) as sensing elements. The sensor uses micro-sized, centimeters-long double-walled CNT (DWCNT) strands as hot-wires to sense fluid velocity. In the theoretical analysis, the sensitivity of the sensor is demonstrated to be positively related to the ratio of its surface. We assemble the flow sensor by suspending the DWCNT strand directly on two tungsten prongs and dripping a small amount of silver glue onto each contact between the DWCNT and the prongs. The DWCNT exhibits a positive TCR of 1980 ppm/K. The self-heating effect on the DWCNT was observed while constant current was applied between the two prongs. This sensor responds clearly to flow rate and requires only a few milliwatts to operate. We have, thus far, demonstrated that the CNT-based flow sensor has better sensitivity than the Pt-coated DWCNT sensor.
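A quick back-of-the-envelope check of what the quoted TCR implies for the readout, assuming an illustrative cold resistance of 1 kOhm (the actual strand resistance is not given in the abstract).

```python
# Resistance of a hot-wire element vs. temperature: R(T) = R0 * (1 + alpha * dT).
alpha = 1980e-6          # TCR of the DWCNT strand, 1980 ppm/K (from the abstract)
R0 = 1.0e3               # assumed cold resistance of 1 kOhm (illustrative only)

for dT in (1.0, 10.0, 50.0):
    dR = R0 * alpha * dT
    print(f"dT = {dT:5.1f} K  ->  dR = {dR:6.2f} Ohm  ({100 * alpha * dT:.3f} %)")
# Flow past the self-heated wire lowers its temperature, and the resulting
# resistance change is what the constant-current readout senses.
```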
van Dijken, Bart R J; van Laar, Peter Jan; Holtman, Gea A; van der Hoorn, Anouk
2017-10-01
Treatment response assessment in high-grade gliomas uses contrast enhanced T1-weighted MRI, but is unreliable. Novel advanced MRI techniques have been studied, but the accuracy is not well known. Therefore, we performed a systematic meta-analysis to assess the diagnostic accuracy of anatomical and advanced MRI for treatment response in high-grade gliomas. Databases were searched systematically. Study selection and data extraction were done by two authors independently. Meta-analysis was performed using a bivariate random effects model when ≥5 studies were included. Anatomical MRI (five studies, 166 patients) showed a pooled sensitivity and specificity of 68% (95%CI 51-81) and 77% (45-93), respectively. Pooled apparent diffusion coefficients (seven studies, 204 patients) demonstrated a sensitivity of 71% (60-80) and specificity of 87% (77-93). DSC-perfusion (18 studies, 708 patients) sensitivity was 87% (82-91) with a specificity of 86% (77-91). DCE-perfusion (five studies, 207 patients) sensitivity was 92% (73-98) and specificity was 85% (76-92). The sensitivity of spectroscopy (nine studies, 203 patients) was 91% (79-97) and specificity was 95% (65-99). Advanced techniques showed higher diagnostic accuracy than anatomical MRI, the highest for spectroscopy, supporting the use in treatment response assessment in high-grade gliomas. • Treatment response assessment in high-grade gliomas with anatomical MRI is unreliable • Novel advanced MRI techniques have been studied, but diagnostic accuracy is unknown • Meta-analysis demonstrates that advanced MRI showed higher diagnostic accuracy than anatomical MRI • Highest diagnostic accuracy for spectroscopy and perfusion MRI • Supports the incorporation of advanced MRI in high-grade glioma treatment response assessment.
NASA Astrophysics Data System (ADS)
Feng, Jinchao; Lansford, Joshua; Mironenko, Alexander; Pourkargar, Davood Babaei; Vlachos, Dionisios G.; Katsoulakis, Markos A.
2018-03-01
We propose non-parametric methods for both local and global sensitivity analysis of chemical reaction models with correlated parameter dependencies. The developed mathematical and statistical tools are applied to a benchmark Langmuir competitive adsorption model on a close packed platinum surface, whose parameters, estimated from quantum-scale computations, are correlated and are limited in size (small data). The proposed mathematical methodology employs gradient-based methods to compute sensitivity indices. We observe that ranking influential parameters depends critically on whether or not correlations between parameters are taken into account. The impact of uncertainty in the correlation and the necessity of the proposed non-parametric perspective are demonstrated.
49 CFR Appendix B to Part 236 - Risk Assessment Criteria
Code of Federal Regulations, 2012 CFR
2012-10-01
... availability calculations for subsystems and components, Fault Tree Analysis (FTA) of the subsystems, and... upper bound, as estimated with a sensitivity analysis, and the risk value selected must be demonstrated... interconnected subsystems/components? The risk assessment of each safety-critical system (product) must account...
49 CFR Appendix B to Part 236 - Risk Assessment Criteria
Code of Federal Regulations, 2014 CFR
2014-10-01
... availability calculations for subsystems and components, Fault Tree Analysis (FTA) of the subsystems, and... upper bound, as estimated with a sensitivity analysis, and the risk value selected must be demonstrated... interconnected subsystems/components? The risk assessment of each safety-critical system (product) must account...
Application of a sensitivity analysis technique to high-order digital flight control systems
NASA Technical Reports Server (NTRS)
Paduano, James D.; Downing, David R.
1987-01-01
A sensitivity analysis technique for multiloop flight control systems is studied. This technique uses the scaled singular values of the return difference matrix as a measure of the relative stability of a control system. It then uses the gradients of these singular values with respect to system and controller parameters to judge sensitivity. The sensitivity analysis technique is first reviewed; then it is extended to include digital systems, through the derivation of singular-value gradient equations. Gradients with respect to parameters which do not appear explicitly as control-system matrix elements are also derived, so that high-order systems can be studied. A complete review of the integrated technique is given by way of a simple example: the inverted pendulum problem. The technique is then demonstrated on the X-29 control laws. Results show linear models of real systems can be analyzed by this sensitivity technique, if it is applied with care. A computer program called SVA was written to accomplish the singular-value sensitivity analysis techniques. Thus computational methods and considerations form an integral part of many of the discussions. A user's guide to the program is included. The SVA is a fully public domain program, running on the NASA/Dryden Elxsi computer.
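A numerical sketch of the quantity at the heart of this technique: the minimum singular value of the return-difference matrix at one frequency and its gradient with respect to a model parameter. The two-state plant and feedback gains below are invented for illustration, and the gradient is approximated here by a central finite difference rather than the analytic singular-value gradient equations derived in the report.

```python
import numpy as np

def min_sv_return_difference(a, omega=1.0):
    """Minimum singular value of I + G(jw)K for a 2x2 plant G = C (jwI - A)^-1 B
    whose A[1, 0] entry is the uncertain model parameter a."""
    A = np.array([[-1.0, 0.5], [a, -2.0]])
    B = np.eye(2)
    C = np.eye(2)
    K = np.diag([2.0, 1.0])                     # fixed feedback gains
    G = C @ np.linalg.inv(1j * omega * np.eye(2) - A) @ B
    return np.linalg.svd(np.eye(2) + G @ K, compute_uv=False).min()

a0, h = 0.3, 1e-6
grad = (min_sv_return_difference(a0 + h) - min_sv_return_difference(a0 - h)) / (2 * h)
print("sigma_min:", min_sv_return_difference(a0), " d(sigma_min)/da:", grad)
# A large-magnitude gradient flags a model element whose uncertainty could
# severely erode the relative stability margin.
```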
Aerodynamic design optimization using sensitivity analysis and computational fluid dynamics
NASA Technical Reports Server (NTRS)
Baysal, Oktay; Eleshaky, Mohamed E.
1991-01-01
A new and efficient method is presented for aerodynamic design optimization, which is based on a computational fluid dynamics (CFD)-sensitivity analysis algorithm. The method is applied to design a scramjet-afterbody configuration for an optimized axial thrust. The Euler equations are solved for the inviscid analysis of the flow, which in turn provides the objective function and the constraints. The CFD analysis is then coupled with the optimization procedure that uses a constrained minimization method. The sensitivity coefficients, i.e. gradients of the objective function and the constraints, needed for the optimization are obtained using a quasi-analytical method rather than the traditional brute force method of finite difference approximations. During the one-dimensional search of the optimization procedure, an approximate flow analysis (predicted flow) based on a first-order Taylor series expansion is used to reduce the computational cost. Finally, the sensitivity of the optimum objective function to various design parameters, which are kept constant during the optimization, is computed to predict new optimum solutions. The flow analysis of the demonstrative example is compared with the experimental data. It is shown that the method is more efficient than the traditional methods.
Fafin-Lefevre, Mélanie; Morlais, Fabrice; Guittet, Lydia; Clin, Bénédicte; Launoy, Guy; Galateau-Sallé, Françoise; Plancoulaine, Benoît; Herlin, Paulette; Letourneux, Marc
2011-08-01
To identify which morphologic or densitometric parameters are modified in cell nuclei from bronchopulmonary cancer based on 18 parameters involving shape, intensity, chromatin, texture, and DNA content and develop a bronchopulmonary cancer screening method relying on analysis of sputum sample cell nuclei. A total of 25 sputum samples from controls and 22 bronchial aspiration samples from patients presenting with bronchopulmonary cancer who were professionally exposed to cancer were used. After Feulgen staining, 18 morphologic and DNA content parameters were measured on cell nuclei, via image cytometry. A method was developed for analyzing distribution quantiles, compared with simply interpreting mean values, to characterize morphologic modifications in cell nuclei. Distribution analysis of parameters enabled us to distinguish 13 of 18 parameters that demonstrated significant differences between controls and cancer cases. These parameters, used alone, enabled us to distinguish two population types, with both sensitivity and specificity > 70%. Three parameters offered 100% sensitivity and specificity. When mean values offered high sensitivity and specificity, comparable or higher sensitivity and specificity values were observed for at least one of the corresponding quantiles. Analysis of modification in morphologic parameters via distribution analysis proved promising for screening bronchopulmonary cancer from sputum.
A new framework for comprehensive, robust, and efficient global sensitivity analysis: 1. Theory
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin V.
2016-01-01
Computer simulation models are continually growing in complexity with increasingly more factors to be identified. Sensitivity Analysis (SA) provides an essential means for understanding the role and importance of these factors in producing model responses. However, conventional approaches to SA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we present a new and general sensitivity analysis framework (called VARS), based on an analogy to "variogram analysis," that provides an intuitive and comprehensive characterization of sensitivity across the full spectrum of scales in the factor space. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices can be computed as by-products of the VARS framework. Synthetic functions that resemble actual model response surfaces are used to illustrate the concepts, and show VARS to be as much as two orders of magnitude more computationally efficient than the state-of-the-art Sobol approach. In a companion paper, we propose a practical implementation strategy, and demonstrate the effectiveness, efficiency, and reliability (robustness) of the VARS framework on real-data case studies.
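The variogram concept underlying VARS can be sketched with a directional variogram of a toy response surface; the estimator below is a simple random-base-point version written for illustration, not the STAR-VARS sampling strategy of the companion paper.

```python
import numpy as np

def directional_variogram(model, n_base=500, hs=(0.05, 0.1, 0.2), k=2, seed=3):
    """gamma_i(h) = 0.5 * E[(y(x + h*e_i) - y(x))^2] for each factor i,
    estimated from random base points in the unit hypercube."""
    rng = np.random.default_rng(seed)
    gamma = np.zeros((k, len(hs)))
    for j, h in enumerate(hs):
        x = rng.uniform(0.0, 1.0 - h, size=(n_base, k))
        y0 = np.apply_along_axis(model, 1, x)
        for i in range(k):
            xh = x.copy()
            xh[:, i] += h                     # perturb only factor i by h
            yh = np.apply_along_axis(model, 1, xh)
            gamma[i, j] = 0.5 * np.mean((yh - y0) ** 2)
    return gamma

# Toy response surface: factor 1 varies on a much finer scale than factor 2.
toy = lambda p: np.sin(6.0 * p[0]) + 0.3 * p[1]
print(directional_variogram(toy))
# Rows are factors, columns are perturbation scales h. Factor 1's variogram grows
# much faster with h, i.e. it dominates sensitivity across these scales.
```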
Source apportionment and sensitivity analysis: two methodologies with two different purposes
NASA Astrophysics Data System (ADS)
Clappier, Alain; Belis, Claudio A.; Pernigotti, Denise; Thunis, Philippe
2017-11-01
This work reviews the existing methodologies for source apportionment and sensitivity analysis to identify key differences and stress their implicit limitations. The emphasis is laid on the differences between source impacts (sensitivity analysis) and contributions (source apportionment) obtained by using four different methodologies: brute-force top-down, brute-force bottom-up, tagged species and decoupled direct method (DDM). A simple theoretical example is used to compare these approaches, highlighting differences and potential implications for policy. When the relationships between concentration and emissions are linear, impacts and contributions are equivalent concepts. In this case, source apportionment and sensitivity analysis may be used indifferently for both air quality planning purposes and quantifying source contributions. However, this study demonstrates that when the relationship between emissions and concentrations is nonlinear, sensitivity approaches are not suitable to retrieve source contributions and source apportionment methods are not appropriate to evaluate the impact of abatement strategies. A quantification of the potential nonlinearities should therefore be the first step prior to source apportionment or planning applications, to prevent any limitations in their use. When nonlinearity is mild, these limitations may, however, be acceptable in the context of the other uncertainties inherent to complex models. Moreover, when using sensitivity analysis for planning, it is important to note that, under nonlinear circumstances, the calculated impacts will only provide information for the exact conditions (e.g. emission reduction share) that are simulated.
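A tiny numeric illustration of the nonlinearity caveat, using an invented two-source concentration function rather than a chemistry-transport model: when the response to a source is nonlinear, scaling an apportioned contribution mispredicts the effect of a partial emission reduction.

```python
# Hypothetical two-source concentration model with a nonlinear response to source 1.
def concentration(e1, e2):
    return 2.0 * e1 ** 2 + 1.0 * e2

e1, e2 = 1.0, 1.0
c_base = concentration(e1, e2)                        # 3.0

# Apportionment-style contribution of source 1 (mass formed by its term): 2.0 of 3.0.
contribution_1 = 2.0 * e1 ** 2

# True impact of halving source 1 vs. linear extrapolation of its contribution.
true_impact = c_base - concentration(0.5 * e1, e2)    # 3.0 - 1.5 = 1.5
linear_guess = 0.5 * contribution_1                   # 1.0
print(true_impact, linear_guess)   # 1.5 vs 1.0: the contribution mispredicts abatement
```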
Convergence Estimates for Multidisciplinary Analysis and Optimization
NASA Technical Reports Server (NTRS)
Arian, Eyal
1997-01-01
A quantitative analysis of coupling between systems of equations is introduced. This analysis is then applied to problems in multidisciplinary analysis, sensitivity, and optimization. For the sensitivity and optimization problems, both multidisciplinary and single-discipline feasibility schemes are considered. In all these cases a "convergence factor" is estimated in terms of the Jacobians and Hessians of the system; thus, it can also be approximated by existing disciplinary analysis and optimization codes. The convergence factor is identified with a measure of the "coupling" between the disciplines in the system. Applications to algorithm development are discussed. Demonstration of the convergence estimates and numerical results are given for a system composed of two nonlinear algebraic equations, and for a system composed of two PDEs modeling aeroelasticity.
General methods for sensitivity analysis of equilibrium dynamics in patch occupancy models
Miller, David A.W.
2012-01-01
Sensitivity analysis is a useful tool for the study of ecological models that has many potential applications for patch occupancy modeling. Drawing from the rich foundation of existing methods for Markov chain models, I demonstrate new methods for sensitivity analysis of the equilibrium state dynamics of occupancy models. Estimates from three previous studies are used to illustrate the utility of the sensitivity calculations: a joint occupancy model for a prey species, its predators, and habitat used by both; occurrence dynamics from a well-known metapopulation study of three butterfly species; and Golden Eagle occupancy and reproductive dynamics. I show how to deal efficiently with multistate models and how to calculate sensitivities involving derived state variables and lower-level parameters. In addition, I extend methods to incorporate environmental variation by allowing for spatial and temporal variability in transition probabilities. The approach used here is concise and general and can fully account for environmental variability in transition parameters. The methods can be used to improve inferences in occupancy studies by quantifying the effects of underlying parameters, aiding prediction of future system states, and identifying priorities for sampling effort.
Sensitivity of control-augmented structure obtained by a system decomposition method
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw; Bloebaum, Christina L.; Hajela, Prabhat
1988-01-01
The verification of a method for computing sensitivity derivatives of a coupled system is presented. The method deals with a system whose analysis can be partitioned into subsets that correspond to disciplines and/or physical subsystems that exchange input-output data with each other. The method uses the partial sensitivity derivatives of the output with respect to input obtained for each subset separately to assemble a set of linear, simultaneous, algebraic equations that are solved for the derivatives of the coupled system response. This sensitivity analysis is verified using an example of a cantilever beam augmented with an active control system to limit the beam's dynamic displacements under an excitation force. The verification shows good agreement of the method with reference data obtained by a finite difference technique involving entire system analysis. The usefulness of a system sensitivity method in optimization applications by employing a piecewise-linear approach to the same numerical example is demonstrated. The method's principal merits are its intrinsically superior accuracy in comparison with the finite difference technique, and its compatibility with the traditional division of work in complex engineering tasks among specialty groups.
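The assembly of subsystem partials into a linear system for the coupled derivatives can be sketched for two subsystems y1 = f1(y2, x) and y2 = f2(y1, x): differentiating with respect to a design variable x gives (I - J) dy/dx = df/dx, where J holds the cross-coupling partials, and the total derivatives follow from a linear solve. The numbers below are illustrative and this is a generic sketch of that step, not the paper's beam/control example.

```python
import numpy as np

# Partial sensitivities of each subsystem output with respect to the other
# subsystem's output and to the design variable x (illustrative values).
df1_dy2, df1_dx = 0.3, 1.2    # subsystem 1: y1 = f1(y2, x)
df2_dy1, df2_dx = 0.5, -0.4   # subsystem 2: y2 = f2(y1, x)

# (I - J) dy/dx = df/dx, with J holding the cross-coupling partials
J = np.array([[0.0,     df1_dy2],
              [df2_dy1, 0.0]])
rhs = np.array([df1_dx, df2_dx])

dy_dx = np.linalg.solve(np.eye(2) - J, rhs)
print("coupled system derivatives dy1/dx, dy2/dx:", dy_dx)
```

Solving this small simultaneous system replaces repeated finite differencing of the entire coupled analysis, which is the accuracy and cost advantage the abstract points to.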
Pain sensitivity profiles in patients with advanced knee osteoarthritis
Frey-Law, Laura A.; Bohr, Nicole L.; Sluka, Kathleen A.; Herr, Keela; Clark, Charles R.; Noiseux, Nicolas O.; Callaghan, John J.; Zimmerman, M. Bridget; Rakel, Barbara A.
2016-01-01
The development of patient profiles to subgroup individuals on a variety of variables has gained attention as a potential means to better inform clinical decision-making. Patterns of pain sensitivity response specific to quantitative sensory testing (QST) modality have been demonstrated in healthy subjects. It has not been determined if these patterns persist in a knee osteoarthritis population. In a sample of 218 participants, 19 QST measures along with pain, psychological factors, self-reported function, and quality of life were assessed prior to total knee arthroplasty. Component analysis was used to identify commonalities across the 19 QST assessments to produce standardized pain sensitivity factors. Cluster analysis then grouped individuals that exhibited similar patterns of standardized pain sensitivity component scores. The QST resulted in four pain sensitivity components: heat, punctate, temporal summation, and pressure. Cluster analysis resulted in five pain sensitivity profiles: a “low pressure pain” group, an “average pain” group, and three “high pain” sensitivity groups who were sensitive to different modalities (punctate, heat, and temporal summation). Pain and function differed between pain sensitivity profiles, along with sex distribution; however, no differences in OA grade, medication use, or psychological traits were found. Residualizing QST data by age and sex resulted in similar components and pain sensitivity profiles. Further, these profiles are surprisingly similar to those reported in healthy populations, suggesting that individual differences in pain sensitivity are a robust finding even in an older population with significant disease. PMID:27152688
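The two-step analysis described above (dimension reduction of the QST battery followed by clustering of component scores) can be sketched generically with scikit-learn. The data are random placeholders; the component and cluster counts simply mirror the abstract (four components, five profiles) and nothing here reproduces the study's actual pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
qst = rng.normal(size=(218, 19))          # placeholder: 218 participants x 19 QST measures

# Standardize, extract components ("pain sensitivity factors"), then cluster the scores
scores = PCA(n_components=4).fit_transform(StandardScaler().fit_transform(qst))
profiles = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(scores)

print(np.bincount(profiles))              # participants per pain-sensitivity profile
```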
Landscape sensitivity in a dynamic environment
NASA Astrophysics Data System (ADS)
Lin, Jiun-Chuan; Jen, Chia-Horn
2010-05-01
This study presents landscape sensitivity at different scales and for different topics, with an emphasis on methodology. According to environmental records from south-eastern Asia, environmental change is closely related to five factors: the scale of the affected area, the background environmental characteristics, the magnitude and frequency of events, the thresholds at which hazards occur, and the influence of time. The paper demonstrates these five points using historical and present-day data. Landscape sensitivity is found to be closely related to the degree of vulnerability of the land and to the processes acting on the ground, including human activities. The scale of sensitivity and the evaluation of sensitivities are demonstrated with data from across East Asia. The classification methods rest mainly on analysis of environmental data and hazard records. Trends in rainfall amount and intensity, temperature change, earthquake magnitude and frequency, dust storms, days of drought, and the number of hazards coincide in many respects with landscape sensitivities. In conclusion, landscape sensitivities can be classified into four groups: physically stable, physically unstable, unstable, and extremely unstable, and the paper explains the differences between them.
A Quad-Cantilevered Plate micro-sensor for intracranial pressure measurement.
Lalkov, Vasko; Qasaimeh, Mohammad A
2017-07-01
This paper proposes a new design for a pressure-sensing micro-plate platform that brings higher sensitivity to pressure sensors based on a piezoresistive MEMS sensing mechanism. The proposed design is composed of a suspended plate with four stepped cantilever beams connected to its corners, and is thus defined as the Quad-Cantilevered Plate (QCP). Finite element analysis was performed to determine the optimal design for sensitivity and structural stability under a range of applied forces. Furthermore, a piezoresistive analysis was performed to calculate sensor sensitivity. Both the maximum stress and the change in resistance of the piezoresistor associated with the QCP were found to be higher compared to previously published designs, and linearly related to the applied pressure as desired. Therefore, the QCP demonstrates greater sensitivity, and could potentially be used as an efficient pressure sensor for intracranial pressure measurement.
Complex-valued time-series correlation increases sensitivity in FMRI analysis.
Kociuba, Mary C; Rowe, Daniel B
2016-07-01
To develop a linear matrix representation of correlation between complex-valued (CV) time-series in the temporal Fourier frequency domain, and demonstrate its increased sensitivity over correlation between magnitude-only (MO) time-series in functional MRI (fMRI) analysis. The standard in fMRI is to discard the phase before the statistical analysis of the data, despite evidence of task related change in the phase time-series. With a real-valued isomorphism representation of Fourier reconstruction, correlation is computed in the temporal frequency domain with CV time-series data, rather than with the standard of MO data. A MATLAB simulation compares the Fisher-z transform of MO and CV correlations for varying degrees of task related magnitude and phase amplitude change in the time-series. The increased sensitivity of the complex-valued Fourier representation of correlation is also demonstrated with experimental human data. Since the correlation description in the temporal frequency domain is represented as a summation of second order temporal frequencies, the correlation is easily divided into experimentally relevant frequency bands for each voxel's temporal frequency spectrum. The MO and CV correlations for the experimental human data are analyzed for four voxels of interest (VOIs) to show the framework with high and low contrast-to-noise ratios in the motor cortex and the supplementary motor cortex. The simulation demonstrates the increased strength of CV correlations over MO correlations for low magnitude contrast-to-noise time-series. In the experimental human data, the MO correlation maps are noisier than the CV maps, and it is more difficult to distinguish the motor cortex in the MO correlation maps after spatial processing. Including both magnitude and phase in the spatial correlation computations more accurately defines the correlated left and right motor cortices. Sensitivity in correlation analysis is important to preserve the signal of interest in fMRI data sets with high noise variance, and avoid excessive processing induced correlation. Copyright © 2016 Elsevier Inc. All rights reserved.
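The magnitude-only versus complex-valued comparison can be sketched with synthetic time series (not the fMRI pipeline or data of the paper): correlate two series either after taking magnitudes or directly as complex data, then apply the Fisher z-transform. The signal model below, with a weak magnitude change but a coherent task-related phase change, is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
task = np.sin(2 * np.pi * np.arange(n) / 20)          # task-related reference signal

def voxel():
    """Synthetic complex voxel: small magnitude change, coherent phase change."""
    mag = 100 + 0.1 * task + rng.normal(0, 1, n)
    phase = 0.05 * task + rng.normal(0, 0.01, n)
    return mag * np.exp(1j * phase)

v1, v2 = voxel(), voxel()

def corr(a, b):
    """Magnitude of the (possibly complex) correlation coefficient between two series."""
    a = a - a.mean()
    b = b - b.mean()
    r = np.vdot(a, b) / np.sqrt(np.vdot(a, a).real * np.vdot(b, b).real)
    return np.abs(r)

r_mo = corr(np.abs(v1), np.abs(v2))      # magnitude-only correlation
r_cv = corr(v1, v2)                      # complex-valued correlation
print("Fisher-z, MO vs CV:", np.arctanh(r_mo), np.arctanh(r_cv))
```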
Technical Note: Asteroid Detection Demonstration from SkySat-3 - B612 Data Using Synthetic Tracking
NASA Technical Reports Server (NTRS)
Zhai, C.; Shao, M.; Lai, S.; Boerner, P.; Dyer, J.; Lu, E.; Reitsema, H.; Buie, M.
2018-01-01
We report results from analyzing the data taken by the sCMOS cameras on board SkySat-3 using the synthetic tracking technique. The analysis demonstrates the expected improvement in the signal-to-noise ratio of faint asteroids obtained by properly stacking the short-exposure images in post-processing.
Briggs, Andrew H; Ades, A E; Price, Martin J
2003-01-01
In structuring decision models of medical interventions, it is commonly recommended that only 2 branches be used for each chance node to avoid logical inconsistencies that can arise during sensitivity analyses if the branching probabilities do not sum to 1. However, information may be naturally available in an unconditional form, and structuring a tree in conditional form may complicate rather than simplify the sensitivity analysis of the unconditional probabilities. Current guidance emphasizes using probabilistic sensitivity analysis, and a method is required to provide probabilistic probabilities over multiple branches that appropriately represents uncertainty while satisfying the requirement that mutually exclusive event probabilities should sum to 1. The authors argue that the Dirichlet distribution, the multivariate equivalent of the beta distribution, is appropriate for this purpose and illustrate its use for generating a fully probabilistic transition matrix for a Markov model. Furthermore, they demonstrate that by adopting a Bayesian approach, the problem of observing zero counts for transitions of interest can be overcome.
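The Dirichlet approach can be sketched directly with NumPy. The transition counts and prior below are illustrative, not the authors' case study; the point is that each sampled row is drawn from a Dirichlet distribution, so the mutually exclusive branch probabilities sum to 1 by construction, and a flat prior copes with zero observed counts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Observed transition counts from each health state (rows), including zero counts
counts = np.array([[50,  5,  2],
                   [ 0, 60, 10],
                   [ 0,  0, 80]])

prior = 1.0        # uniform Dirichlet prior handles the zero-count transitions
n_draws = 1000

# One fully probabilistic transition matrix per draw; each row sums to 1 by construction
samples = np.stack([
    np.stack([rng.dirichlet(row + prior) for row in counts])
    for _ in range(n_draws)
])

print(samples.mean(axis=0))        # posterior mean transition matrix
print(samples[0].sum(axis=1))      # every sampled row sums to 1
```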
Desland, Fiona A; Afzal, Aqeela; Warraich, Zuha; Mocco, J
2014-01-01
Animal models of stroke have been crucial in advancing our understanding of the pathophysiology of cerebral ischemia. Currently, the standards for determining neurological deficit in rodents are the Bederson and Garcia scales, manual assessments scoring animals based on parameters ranked on a narrow scale of severity. Automated open field analysis of a live-video tracking system that analyzes animal behavior may provide a more sensitive test. Results obtained from the manual Bederson and Garcia scales did not show significant differences between pre- and post-stroke animals in a small cohort. When using the same cohort, however, post-stroke data obtained from automated open field analysis showed significant differences in several parameters. Furthermore, large cohort analysis also demonstrated increased sensitivity with automated open field analysis versus the Bederson and Garcia scales. These early data indicate use of automated open field analysis software may provide a more sensitive assessment when compared to traditional Bederson and Garcia scales.
Dresch, Jacqueline M; Liu, Xiaozhou; Arnosti, David N; Ay, Ahmet
2010-10-24
Quantitative models of gene expression generate parameter values that can shed light on biological features such as transcription factor activity, cooperativity, and local effects of repressors. An important element in such investigations is sensitivity analysis, which determines how strongly a model's output reacts to variations in parameter values. Parameters of low sensitivity may not be accurately estimated, leading to unwarranted conclusions. Low sensitivity may reflect the nature of the biological data, or it may be a result of the model structure. Here, we focus on the analysis of thermodynamic models, which have been used extensively to analyze gene transcription. Extracted parameter values have been interpreted biologically, but until now little attention has been given to parameter sensitivity in this context. We apply local and global sensitivity analyses to two recent transcriptional models to determine the sensitivity of individual parameters. We show that in one case, values for repressor efficiencies are very sensitive, while values for protein cooperativities are not, and provide insights on why these differential sensitivities stem from both biological effects and the structure of the applied models. In a second case, we demonstrate that parameters that were thought to prove the system's dependence on activator-activator cooperativity are relatively insensitive. We show that there are numerous parameter sets that do not satisfy the relationships proferred as the optimal solutions, indicating that structural differences between the two types of transcriptional enhancers analyzed may not be as simple as altered activator cooperativity. Our results emphasize the need for sensitivity analysis to examine model construction and forms of biological data used for modeling transcriptional processes, in order to determine the significance of estimated parameter values for thermodynamic models. Knowledge of parameter sensitivities can provide the necessary context to determine how modeling results should be interpreted in biological systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Chao Yang; Luo, Gang; Jiang, Fangming
2010-05-01
Current computational models for proton exchange membrane fuel cells (PEMFCs) include a large number of parameters such as boundary conditions, material properties, and numerous parameters used in sub-models for membrane transport, two-phase flow and electrochemistry. In order to successfully use a computational PEMFC model in design and optimization, it is important to identify critical parameters under a wide variety of operating conditions, such as relative humidity, current load, temperature, etc. Moreover, when experimental data is available in the form of polarization curves or local distribution of current and reactant/product species (e.g., O2, H2O concentrations), critical parameters can be estimated in order to enable the model to better fit the data. Sensitivity analysis and parameter estimation are typically performed using manual adjustment of parameters, which is also common in parameter studies. We present work to demonstrate a systematic approach based on using a widely available toolkit developed at Sandia called DAKOTA that supports many kinds of design studies, such as sensitivity analysis as well as optimization and uncertainty quantification. In the present work, we couple a multidimensional PEMFC model (which is being developed, tested and later validated in a joint effort by a team from Penn State Univ. and Sandia National Laboratories) with DAKOTA through the mapping of model parameters to system responses. Using this interface, we demonstrate the efficiency of performing simple parameter studies as well as identifying critical parameters using sensitivity analysis. Finally, we show examples of optimization and parameter estimation using the automated capability in DAKOTA.
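The kind of parameter study that such a toolkit automates can be sketched generically as a one-at-a-time scan over model parameters. The fuel-cell model is replaced here by a placeholder function, the parameter names and ranges are invented for illustration, and nothing below is DAKOTA's actual interface.

```python
import numpy as np

def cell_voltage(params):
    """Placeholder for a PEMFC model: maps a parameter dictionary to one response."""
    return (1.0
            - 0.05 * params["membrane_resistance"]
            - 0.02 / params["exchange_current"]
            + 0.01 * params["relative_humidity"])

baseline = {"membrane_resistance": 1.0, "exchange_current": 0.5, "relative_humidity": 0.8}
ranges = {k: (0.8 * v, 1.2 * v) for k, v in baseline.items()}   # +/-20 percent scan

v0 = cell_voltage(baseline)
for name, (lo, hi) in ranges.items():
    responses = []
    for val in np.linspace(lo, hi, 5):
        p = dict(baseline, **{name: val})       # perturb one parameter at a time
        responses.append(cell_voltage(p))
    # Spread of the response over the scan is a crude importance measure
    print(f"{name:20s} response range = {max(responses) - min(responses):.4f}")
print("baseline voltage:", v0)
```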
Brock Stewart; Chris J. Cieszewski; Michal Zasada
2005-01-01
This paper presents a sensitivity analysis of the impact of various definitions and inclusions of different variables in the Forest Inventory and Analysis (FIA) inventory on data compilation results. FIA manuals have been changing recently to make the inventory consistent between all the States. Our analysis demonstrates the importance (or insignificance) of different...
NASA Astrophysics Data System (ADS)
Önal, Orkun; Ozmenci, Cemre; Canadinc, Demircan
2014-09-01
A multi-scale modeling approach was applied to predict the impact response of a strain rate sensitive high-manganese austenitic steel. The roles of texture, geometry and strain rate sensitivity were successfully taken into account all at once by coupling crystal plasticity and finite element (FE) analysis. Specifically, crystal plasticity was utilized to obtain the multi-axial flow rule at different strain rates based on the experimental deformation response under uniaxial tensile loading. The equivalent stress - equivalent strain response was then incorporated into the FE model for the sake of a more representative hardening rule under impact loading. The current results demonstrate that reliable predictions can be obtained by proper coupling of crystal plasticity and FE analysis even if the experimental flow rule of the material is acquired under uniaxial loading and at moderate strain rates that are significantly slower than those attained during impact loading. Furthermore, the current findings also demonstrate the need for an experiment-based multi-scale modeling approach for the sake of reliable predictions of the impact response.
2010-01-01
Blocking oncogenic signaling induced by the BRAFV600E mutation is a promising approach for melanoma treatment. We tested the anti-tumor effects of a specific inhibitor of Raf protein kinases, PLX4032/RG7204, in melanoma cell lines. PLX4032 decreased signaling through the MAPK pathway only in cell lines with the BRAFV600E mutation. Seven out of 10 BRAFV600E mutant cell lines displayed sensitivity based on cell viability assays and three were resistant at concentrations up to 10 μM. Among the sensitive cell lines, four were highly sensitive with IC50 values below 1 μM, and three were moderately sensitive with IC50 values between 1 and 10 μM. There was evidence of MAPK pathway inhibition and cell cycle arrest in both sensitive and resistant cell lines. Genomic analysis by sequencing, genotyping of close to 400 oncogenic mutations by mass spectrometry, and SNP arrays demonstrated no major differences in BRAF locus amplification or in other oncogenic events between sensitive and resistant cell lines. However, metabolic tracer uptake studies demonstrated that sensitive cell lines had a more profound inhibition of FDG uptake upon exposure to PLX4032 than resistant cell lines. In conclusion, BRAFV600E mutant melanoma cell lines displayed a range of sensitivities to PLX4032 and metabolic imaging using PET probes can be used to assess sensitivity. PMID:20406486
A fiber-optic water flow sensor based on laser-heated silicon Fabry-Pérot cavity
NASA Astrophysics Data System (ADS)
Liu, Guigen; Sheng, Qiwen; Resende Lisboa Piassetta, Geraldo; Hou, Weilin; Han, Ming
2016-05-01
A hot-wire fiber-optic water flow sensor based on a laser-heated silicon Fabry-Pérot interferometer (FPI) is proposed and demonstrated in this paper. The operation of the sensor is based on the convective heat loss to water from a heated silicon FPI attached to the cleaved end face of a single-mode fiber. The flow-induced change in the temperature is demodulated by the spectral shifts of the reflection fringes. An analytical model based on the FPI theory and heat transfer analysis has been developed for performance analysis. Numerical simulations based on finite element analysis have been conducted. The analytical and numerical results agree with each other in predicting the behavior of the sensor. Experiments have also been carried out to demonstrate the sensing principle and verify the theoretical analysis. Investigations suggest that the sensitivity at low flow rates is much larger than that at high flow rates and that the sensitivity can be easily improved by increasing the heating laser power. Experimental results show that an average sensitivity of 52.4 nm/(m/s) for the flow speed range of 1.5 mm/s to 12 mm/s was obtained with a heating power of ~12 mW, suggesting a resolution of ~1 μm/s assuming a wavelength resolution of 0.05 pm.
NASA Astrophysics Data System (ADS)
Razavi, S.; Gupta, H. V.
2014-12-01
Sensitivity analysis (SA) is an important paradigm in the context of Earth System model development and application, and provides a powerful tool that serves several essential functions in modelling practice, including 1) Uncertainty Apportionment - attribution of total uncertainty to different uncertainty sources, 2) Assessment of Similarity - diagnostic testing and evaluation of similarities between the functioning of the model and the real system, 3) Factor and Model Reduction - identification of non-influential factors and/or insensitive components of model structure, and 4) Factor Interdependence - investigation of the nature and strength of interactions between the factors, and the degree to which factors intensify, cancel, or compensate for the effects of each other. A variety of sensitivity analysis approaches have been proposed, each of which formally characterizes a different "intuitive" understanding of what is meant by the "sensitivity" of one or more model responses to its dependent factors (such as model parameters or forcings). These approaches are based on different philosophies and theoretical definitions of sensitivity, and range from simple local derivatives and one-factor-at-a-time procedures to rigorous variance-based (Sobol-type) approaches. In general, each approach focuses on, and identifies, different features and properties of the model response and may therefore lead to different (even conflicting) conclusions about the underlying sensitivity. This presentation revisits the theoretical basis for sensitivity analysis, and critically evaluates existing approaches so as to demonstrate their flaws and shortcomings. With this background, we discuss several important properties of response surfaces that are associated with the understanding and interpretation of sensitivity. Finally, a new approach towards global sensitivity assessment is developed that is consistent with important properties of Earth System model response surfaces.
Optimization for minimum sensitivity to uncertain parameters
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.; Sobieszczanski-Sobieski, Jaroslaw
1994-01-01
A procedure to design a structure for minimum sensitivity to uncertainties in problem parameters is described. The approach is to minimize directly the sensitivity derivatives of the optimum design with respect to fixed design parameters using a nested optimization procedure. The procedure is demonstrated for the design of a bimetallic beam for minimum weight with insensitivity to uncertainties in structural properties. The beam is modeled with finite elements based on two dimensional beam analysis. A sequential quadratic programming procedure used as the optimizer supplies the Lagrange multipliers that are used to calculate the optimum sensitivity derivatives. The method was perceived to be successful from comparisons of the optimization results with parametric studies.
NASA Technical Reports Server (NTRS)
Kenny, Sean P.; Hou, Gene J. W.
1994-01-01
A method for eigenvalue and eigenvector approximate analysis for the case of repeated eigenvalues with distinct first derivatives is presented. The approximate analysis method developed involves a reparameterization of the multivariable structural eigenvalue problem in terms of a single positive-valued parameter. The resulting equations yield first-order approximations to changes in the eigenvalues and the eigenvectors associated with the repeated eigenvalue problem. This work also presents a numerical technique that facilitates the definition of an eigenvector derivative for the case of repeated eigenvalues with repeated eigenvalue derivatives (of all orders). Examples are given which demonstrate the application of such equations for sensitivity and approximate analysis. Emphasis is placed on the application of sensitivity analysis to large-scale structural and controls-structures optimization problems.
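For the standard symmetric eigenproblem A(p) v = λ v with a repeated eigenvalue, a classical result (not the paper's reparameterization) is that the first-order eigenvalue derivatives are the eigenvalues of the reduced matrix Vᵀ (dA/dp) V, where the columns of V span the repeated eigenvalue's eigenspace. The sketch below illustrates that result under the assumption of distinct first derivatives; the matrices are illustrative.

```python
import numpy as np

def eig_derivatives_repeated(A, dA, tol=1e-8):
    """First-order derivatives of the lowest (possibly repeated) eigenvalue of symmetric A."""
    lam, V = np.linalg.eigh(A)
    mask = np.abs(lam - lam[0]) < tol          # eigenspace of the repeated eigenvalue
    Vr = V[:, mask]
    # Derivatives are the eigenvalues of the reduced problem on that eigenspace
    return np.linalg.eigvalsh(Vr.T @ dA @ Vr)

A = np.diag([2.0, 2.0, 5.0])                   # eigenvalue 2 is repeated
dA = np.array([[1.0, 0.2, 0.0],
               [0.2, 3.0, 0.0],
               [0.0, 0.0, 1.0]])               # dA/dp for some parameter p
print(eig_derivatives_repeated(A, dA))         # distinct first derivatives of lambda = 2
```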
Performance evaluation of a lossy transmission lines based diode detector at cryogenic temperature.
Villa, E; Aja, B; de la Fuente, L; Artal, E
2016-01-01
This work is focused on the design, fabrication, and performance analysis of a square-law Schottky diode detector based on lossy transmission lines working at cryogenic temperature (15 K). The design analysis of a microwave detector, based on a planar gallium-arsenide low effective Schottky barrier height diode, is reported, aimed at achieving high input return loss as well as flat sensitivity versus frequency. The designed circuit demonstrates good sensitivity, as well as a good return loss in a wide bandwidth at Ka-band, at both room (300 K) and cryogenic (15 K) temperatures. A good sensitivity of 1000 mV/mW and input return loss better than 12 dB have been achieved when it works as a zero-bias Schottky diode detector at room temperature, with the sensitivity increasing to a minimum of 2200 mV/mW, at the cost of a DC bias current, at cryogenic temperature.
Nagel, Michael; Bolivar, Peter Haring; Brucherseifer, Martin; Kurz, Heinrich; Bosserhoff, Anja; Büttner, Reinhard
2002-04-01
A promising label-free approach for the analysis of genetic material by means of detecting the hybridization of polynucleotides with electromagnetic waves at terahertz (THz) frequencies is presented. Using an integrated waveguide approach, incorporating resonant THz structures as sample carriers and transducers for the analysis of the DNA molecules, we achieve a sensitivity down to femtomolar levels. The approach is demonstrated with time-domain ultrafast techniques based on femtosecond laser pulses for generating and electro-optically detecting broadband THz signals, although the principle can certainly be transferred to other THz technologies.
Monoallelic mutation analysis (MAMA) for identifying germline mutations.
Papadopoulos, N; Leach, F S; Kinzler, K W; Vogelstein, B
1995-09-01
Dissection of germline mutations in a sensitive and specific manner presents a continuing challenge. In dominantly inherited diseases, mutations occur in only one allele and are often masked by the normal allele. Here we report the development of a sensitive and specific diagnostic strategy based on somatic cell hybridization termed MAMA (monoallelic mutation analysis). We have demonstrated the utility of this strategy in two different hereditary colorectal cancer syndromes, one caused by a defective tumour suppressor gene on chromosome 5 (familial adenomatous polyposis, FAP) and the other caused by a defective mismatch repair gene on chromosome 2 (hereditary non-polyposis colorectal cancer, HNPCC).
Digression and Value Concatenation to Enable Privacy-Preserving Regression.
Li, Xiao-Bai; Sarkar, Sumit
2014-09-01
Regression techniques can be used not only for legitimate data analysis, but also to infer private information about individuals. In this paper, we demonstrate that regression trees, a popular data-analysis and data-mining technique, can be used to effectively reveal individuals' sensitive data. This problem, which we call a "regression attack," has not been addressed in the data privacy literature, and existing privacy-preserving techniques are not appropriate in coping with this problem. We propose a new approach to counter regression attacks. To protect against privacy disclosure, our approach introduces a novel measure, called digression, which assesses the sensitive value disclosure risk in the process of building a regression tree model. Specifically, we develop an algorithm that uses the measure for pruning the tree to limit disclosure of sensitive data. We also propose a dynamic value-concatenation method for anonymizing data, which better preserves data utility than a user-defined generalization scheme commonly used in existing approaches. Our approach can be used for anonymizing both numeric and categorical data. An experimental study is conducted using real-world financial, economic and healthcare data. The results of the experiments demonstrate that the proposed approach is very effective in protecting data privacy while preserving data quality for research and analysis.
Felhofer, Jessica L.; Scida, Karen; Penick, Mark; Willis, Peter A.; Garcia, Carlos D.
2013-01-01
To overcome the problem of poor sensitivity of capillary electrophoresis-UV absorbance for the detection of aliphatic amines, a solid phase extraction and derivatization scheme was developed. This work demonstrates successful coupling of amines to a chromophore immobilized on a solid phase and subsequent cleavage and analysis. Although the analysis of many types of amines is relevant for myriad applications, this paper focuses on the derivatization and separation of amines with environmental relevance. This work aims to provide the foundations for future developments of an integrated sample preparation microreactor capable of performing simultaneous derivatization, preconcentration, and sample cleanup for sensitive analysis of primary amines. PMID:24054648
Avis, Tyler J.; Michaud, Mélanie; Tweddell, Russell J.
2007-01-01
Aluminum chloride and sodium metabisulfite have shown high efficacy at low doses in controlling postharvest pathogens on potato tubers. Direct effects of these two salts included the loss of cell membrane integrity in exposed pathogens. In this work, four fungal potato pathogens were studied in order to elucidate the role of membrane lipids and lipid peroxidation in the relative sensitivity of microorganisms exposed to these salts. Inhibition of mycelial growth in these fungi varied considerably and revealed sensitivity groups within the tested fungi. Analysis of fatty acids in these fungi demonstrated that sensitivity was related to high intrinsic fatty acid unsaturation. When exposed to the antifungal salts, sensitive fungi demonstrated a loss of fatty acid unsaturation, which was accompanied by an elevation in malondialdehyde content (a biochemical marker of lipid peroxidation). Our data suggest that aluminum chloride and sodium metabisulfite could induce lipid peroxidation in sensitive fungi, which may promote the ensuing loss of integrity in the plasma membrane. This direct effect on fungal membranes may contribute, at least in part, to the observed antimicrobial effects of these two salts. PMID:17337539
An adjoint method of sensitivity analysis for residual vibrations of structures subject to impacts
NASA Astrophysics Data System (ADS)
Yan, Kun; Cheng, Gengdong
2018-03-01
For structures subject to impact loads, reducing residual vibration becomes increasingly important as machines become faster and lighter. An efficient sensitivity analysis of residual vibration with respect to structural or operational parameters is indispensable for using a gradient-based optimization algorithm, which reduces the residual vibration in either an active or a passive way. In this paper, an integrated quadratic performance index is used as the measure of the residual vibration, since it globally measures the residual vibration response and its calculation can be simplified greatly with the Lyapunov equation. Several sensitivity analysis approaches for the performance index were developed based on the assumption that the initial excitations of residual vibration were given and independent of structural design. Since the excitations resulting from the impact load often depend on structural design, this paper proposes a new efficient sensitivity analysis method for residual vibration of structures subject to impacts that accounts for this dependence. The new method is developed by combining two existing methods and using an adjoint variable approach. Three numerical examples are carried out and demonstrate the accuracy of the proposed method. The numerical results show that the dependence of initial excitations on structural design variables may strongly affect the accuracy of sensitivities.
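The Lyapunov simplification can be sketched for a linear system ẋ = A x with initial excitation x0: the residual-vibration measure J = ∫₀^∞ xᵀ Q x dt equals x0ᵀ P x0, where P solves Aᵀ P + P A = -Q. The block below is an illustrative two-degree-of-freedom check with a crude finite-difference sensitivity, not the paper's adjoint formulation.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def J(damping, x0, Q):
    """Residual-vibration index of a damped oscillator via the Lyapunov equation."""
    k = 4.0                                   # stiffness (illustrative)
    A = np.array([[0.0, 1.0],
                  [-k, -damping]])            # state matrix of x'' + c x' + k x = 0
    P = solve_continuous_lyapunov(A.T, -Q)    # solves A^T P + P A = -Q
    return x0 @ P @ x0

x0 = np.array([1.0, 0.0])                     # initial excitation from the impact
Q = np.eye(2)

# Crude sensitivity of J to the damping parameter by central differences
c, h = 0.5, 1e-5
dJ_dc = (J(c + h, x0, Q) - J(c - h, x0, Q)) / (2 * h)
print("J =", J(c, x0, Q), " dJ/dc ≈", dJ_dc)
```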
Initial Results: An Ultra-Low-Background Germanium Crystal Array
2010-09-01
…data (focused on γ-γ coincidence signatures) (Smith et al., 2004) and the Multi-Isotope Coincidence Analysis code (MICA) (Warren et al., 2006). … The follow-on "CASCADES" project aims to develop a multicoincidence data-analysis package and make robust fission-product demonstration measurements … sensitivity. This effort is focused on improving gamma analysis capabilities for nuclear detonation detection (NDD) applications, e.g., nuclear treaty…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, Haitao, E-mail: liaoht@cae.ac.cn
The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time averaged quantities for chaotic dynamical systems. The key idea is to recast the time averaged integration term in the form of differential equation before applying the sensitivity analysis method. An additional constraint-based equation which forms the augmented equations of motion is proposed to calculate the time averaged integration variable and the sensitivity coefficients are obtained as a result of solving the augmented differential equations. The application of the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient which is dependent on the final state of the Lagrange multipliers. The LU factorization technique to calculate the Lagrange multipliers leads to a better performance for the convergence problem and the computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches and some short impulsive sensitivity coefficients are observed by using the direct differentiation sensitivity analysis method.
Shape reanalysis and sensitivities utilizing preconditioned iterative boundary solvers
NASA Technical Reports Server (NTRS)
Guru Prasad, K.; Kane, J. H.
1992-01-01
The computational advantages associated with the utilization of preconditioned iterative equation solvers are quantified for the reanalysis of perturbed shapes using continuum structural boundary element analysis (BEA). Both single- and multi-zone three-dimensional problems are examined. Significant reductions in computer time are obtained by making use of previously computed solution vectors and preconditioners in subsequent analyses. The effectiveness of this technique is demonstrated for the computation of shape response sensitivities required in shape optimization. Computer times and accuracies achieved using the preconditioned iterative solvers are compared with those obtained via direct solvers and implicit differentiation of the boundary integral equations. It is concluded that this approach employing preconditioned iterative equation solvers in reanalysis and sensitivity analysis can be competitive with, if not superior to, approaches involving direct solvers.
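The reuse idea can be sketched with SciPy on toy sparse matrices (not a boundary element system): factorize the baseline system once, then reuse that factorization as a preconditioner for GMRES on each perturbed-shape system, warm-started from the previous solution.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
n = 500
A0 = sp.eye(n, format="csc") * 4 + sp.random(n, n, density=0.01, random_state=0)
b = rng.normal(size=n)

lu = spla.splu(A0.tocsc())                          # factorize the baseline system once
M = spla.LinearOperator((n, n), matvec=lu.solve)    # reuse the factorization as a preconditioner

x = lu.solve(b)                                     # baseline solution
for step in range(3):                               # sequence of perturbed shapes
    dA = sp.random(n, n, density=0.002, random_state=step + 1) * 0.1
    A = A0 + dA
    x, info = spla.gmres(A, b, x0=x, M=M)           # warm start + reused preconditioner
    print("reanalysis", step, "converged" if info == 0 else f"info={info}")
```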
Dai, Heng; Ye, Ming; Walker, Anthony P.; ...
2017-03-28
A hydrological model consists of multiple process level submodels, and each submodel represents a process key to the operation of the simulated system. Global sensitivity analysis methods have been widely used to identify important processes for system model development and improvement. The existing methods of global sensitivity analysis only consider parametric uncertainty, and are not capable of handling model uncertainty caused by multiple process models that arise from competing hypotheses about one or more processes. To address this problem, this study develops a new method to probe model output sensitivity to competing process models by integrating model averaging methods with variance-based global sensitivity analysis. A process sensitivity index is derived as a single summary measure of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and their parameters. Here, for demonstration, the new index is used to assign importance to the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models of hydraulic conductivity. Each process model has its own random parameters. Finally, the new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.
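A simplified Monte Carlo sketch of a process-level sensitivity measure is given below: competing process models are sampled according to model weights, each with its own random parameters, and the variance of the conditional mean output across the recharge-model choice is compared with the total output variance. The toy recharge/geology models, weights, and the simplified estimator are all assumptions for illustration and do not reproduce the paper's derivation or its groundwater case study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two competing recharge models and two competing geology (conductivity) models,
# each with its own random parameters (all toy choices).
recharge_models = [lambda: 0.20 * rng.normal(1.0, 0.1),
                   lambda: 0.10 + 0.05 * rng.normal(1.0, 0.3)]
geology_models = [lambda: rng.lognormal(0.0, 0.3),
                  lambda: rng.lognormal(0.5, 0.1)]
weights_r, weights_g = [0.5, 0.5], [0.5, 0.5]       # process-model probabilities

def output(recharge, conductivity):
    return recharge / conductivity                   # toy head-like response

N = 20000
samples = np.empty(N)
r_idx = np.empty(N, dtype=int)
for i in range(N):
    r, g = rng.choice(2, p=weights_r), rng.choice(2, p=weights_g)
    samples[i] = output(recharge_models[r](), geology_models[g]())
    r_idx[i] = r

total_var = samples.var()
cond_means = np.array([samples[r_idx == k].mean() for k in range(2)])
p_k = np.bincount(r_idx) / N
ps_recharge = np.sum(p_k * (cond_means - samples.mean()) ** 2) / total_var
print("process sensitivity index (recharge):", round(ps_recharge, 3))
```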
Feizizadeh, Bakhtiar; Jankowski, Piotr; Blaschke, Thomas
2014-03-01
GIS multicriteria decision analysis (MCDA) techniques are increasingly used in landslide susceptibility mapping for the prediction of future hazards, land use planning, as well as for hazard preparedness. However, the uncertainties associated with MCDA techniques are inevitable and model outcomes are open to multiple types of uncertainty. In this paper, we present a systematic approach to uncertainty and sensitivity analysis. We assess the uncertainty of landslide susceptibility maps produced with GIS-MCDA techniques. A new spatially-explicit approach and Dempster-Shafer Theory (DST) are employed to assess the uncertainties associated with two MCDA techniques, namely Analytical Hierarchical Process (AHP) and Ordered Weighted Averaging (OWA) implemented in GIS. The methodology is composed of three different phases. First, weights are computed to express the relative importance of factors (criteria) for landslide susceptibility. Next, the uncertainty and sensitivity of landslide susceptibility is analyzed as a function of weights using Monte Carlo Simulation and Global Sensitivity Analysis. Finally, the results are validated using a landslide inventory database and by applying DST. The comparisons of the obtained landslide susceptibility maps of both MCDA techniques with known landslides show that the AHP outperforms OWA. However, the OWA-generated landslide susceptibility map shows lower uncertainty than the AHP-generated map. The results demonstrate that further improvement in the accuracy of GIS-based MCDA can be achieved by employing an integrated uncertainty-sensitivity analysis approach, in which the uncertainty of landslide susceptibility model is decomposed and attributed to model's criteria weights.
Adkins, Daniel E.; McClay, Joseph L.; Vunck, Sarah A.; Batman, Angela M.; Vann, Robert E.; Clark, Shaunna L.; Souza, Renan P.; Crowley, James J.; Sullivan, Patrick F.; van den Oord, Edwin J.C.G.; Beardsley, Patrick M.
2014-01-01
Behavioral sensitization has been widely studied in animal models and is theorized to reflect neural modifications associated with human psychostimulant addiction. While the mesolimbic dopaminergic pathway is known to play a role, the neurochemical mechanisms underlying behavioral sensitization remain incompletely understood. In the present study, we conducted the first metabolomics analysis to globally characterize neurochemical differences associated with behavioral sensitization. Methamphetamine-induced sensitization measures were generated by statistically modeling longitudinal activity data for eight inbred strains of mice. Subsequent to behavioral testing, nontargeted liquid and gas chromatography-mass spectrometry profiling was performed on 48 brain samples, yielding 301 metabolite levels per sample after quality control. Association testing between metabolite levels and three primary dimensions of behavioral sensitization (total distance, stereotypy and margin time) showed four robust, significant associations at a stringent metabolome-wide significance threshold (false discovery rate < 0.05). Results implicated homocarnosine, a dipeptide of GABA and histidine, in total distance sensitization, GABA metabolite 4-guanidinobutanoate and pantothenate in stereotypy sensitization, and myo-inositol in margin time sensitization. Secondary analyses indicated that these associations were independent of concurrent methamphetamine levels and, with the exception of the myo-inositol association, suggest a mechanism whereby strain-based genetic variation produces specific baseline neurochemical differences that substantially influence the magnitude of MA-induced sensitization. These findings demonstrate the utility of mouse metabolomics for identifying novel biomarkers, and developing more comprehensive neurochemical models, of psychostimulant sensitization. PMID:24034544
Barrett, K. E.; Ellis, K. D.; Glass, C. R.; ...
2015-12-01
The goal of the Accident Tolerant Fuel (ATF) program is to develop the next generation of Light Water Reactor (LWR) fuels with improved performance, reliability, and safety characteristics during normal operations and accident conditions and with reduced waste generation. An irradiation test series has been defined to assess the performance of proposed ATF concepts under normal LWR operating conditions. The Phase I ATF irradiation test series is planned to be performed as a series of drop-in capsule tests to be irradiated in the Advanced Test Reactor (ATR) operated by the Idaho National Laboratory (INL). Design, analysis, and fabrication processes for ATR drop-in capsule experiment preparation are presented in this paper to demonstrate the importance of special design considerations, parameter sensitivity analysis, and precise fabrication and inspection techniques for the innovative materials used in ATF experiment assemblies. A Taylor Series Method sensitivity analysis approach was used to identify the most critical variables in cladding and rodlet stress, temperature, and pressure calculations for design analyses. The results showed that internal rodlet pressure calculations are most sensitive to the fission gas release rate uncertainty while temperature calculations are most sensitive to cladding I.D. and O.D. dimensional uncertainty. The analysis showed that stress calculations are most sensitive to rodlet internal pressure uncertainties; however, the results also indicated that uncertainties in the inside radius, outside radius, and internal pressure were all magnified as they propagate through the stress equation. This study demonstrates the importance for ATF concept development teams to provide the fabricators with as much information as possible about the material properties and behavior observed in prototype testing, mock-up fabrication and assembly, and chemical and mechanical testing of the materials that may have been performed in the concept development phase. Special handling, machining, welding, and inspection of materials, if known, should also be communicated to the experiment fabrication and inspection team.
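A Taylor Series sensitivity/uncertainty propagation of the kind referred to above works from first-order partials: Var(y) ≈ Σ (∂y/∂xᵢ)² Var(xᵢ), with the per-input terms indicating which uncertainty dominates. The sketch below uses an illustrative hoop-stress-like function and invented nominal values and uncertainties, not the ATF design numbers.

```python
import numpy as np

def response(x):
    """Placeholder response, e.g., a thin-wall hoop-stress-like quantity sigma = p * r / t."""
    p, r, t = x
    return p * r / t

x0 = np.array([2.0e6, 4.8e-3, 0.6e-3])         # nominal pressure [Pa], radius [m], thickness [m]
sigma_x = np.array([0.2e6, 0.05e-3, 0.02e-3])  # 1-sigma input uncertainties (illustrative)

# First-order partials by central differences
grad = np.empty_like(x0)
for i in range(len(x0)):
    h = 1e-6 * x0[i]
    xp, xm = x0.copy(), x0.copy()
    xp[i] += h
    xm[i] -= h
    grad[i] = (response(xp) - response(xm)) / (2 * h)

contrib = (grad * sigma_x) ** 2                # per-input variance contributions
print("std of response:", np.sqrt(contrib.sum()))
print("relative contributions:", contrib / contrib.sum())
```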
Reduction and Uncertainty Analysis of Chemical Mechanisms Based on Local and Global Sensitivities
NASA Astrophysics Data System (ADS)
Esposito, Gaetano
Numerical simulations of critical reacting flow phenomena in hypersonic propulsion devices require accurate representation of finite-rate chemical kinetics. The chemical kinetic models available for hydrocarbon fuel combustion are rather large, involving hundreds of species and thousands of reactions. As a consequence, they cannot be used in multi-dimensional computational fluid dynamic calculations in the foreseeable future due to the prohibitive computational cost. In addition to the computational difficulties, it is also known that some fundamental chemical kinetic parameters of detailed models have significant level of uncertainty due to limited experimental data available and to poor understanding of interactions among kinetic parameters. In the present investigation, local and global sensitivity analysis techniques are employed to develop a systematic approach of reducing and analyzing detailed chemical kinetic models. Unlike previous studies in which skeletal model reduction was based on the separate analysis of simple cases, in this work a novel strategy based on Principal Component Analysis of local sensitivity values is presented. This new approach is capable of simultaneously taking into account all the relevant canonical combustion configurations over different composition, temperature and pressure conditions. Moreover, the procedure developed in this work represents the first documented inclusion of non-premixed extinction phenomena, which is of great relevance in hypersonic combustors, in an automated reduction algorithm. The application of the skeletal reduction to a detailed kinetic model consisting of 111 species in 784 reactions is demonstrated. The resulting reduced skeletal model of 37–38 species showed that the global ignition/propagation/extinction phenomena of ethylene-air mixtures can be predicted within an accuracy of 2% of the full detailed model. The problems of both understanding non-linear interactions between kinetic parameters and identifying sources of uncertainty affecting relevant reaction pathways are usually addressed by resorting to Global Sensitivity Analysis (GSA) techniques. In particular, the most sensitive reactions controlling combustion phenomena are first identified using the Morris Method and then analyzed under the Random Sampling–High Dimensional Model Representation (RS-HDMR) framework. The HDMR decomposition shows that 10% of the variance seen in the extinction strain rate of non-premixed flames is due to second-order effects between parameters, whereas the maximum concentration of acetylene, a key soot precursor, is affected by mostly only first-order contributions. Moreover, the analysis of the global sensitivity indices demonstrates that improving the accuracy of the reaction rates including the vinyl radical, C2H3, can drastically reduce the uncertainty of predicting targeted flame properties. Finally, the back-propagation of the experimental uncertainty of the extinction strain rate to the parameter space is also performed. This exercise, achieved by recycling the numerical solutions of the RS-HDMR, shows that some regions of the parameter space have a high probability of reproducing the experimental value of the extinction strain rate between its own uncertainty bounds. Therefore this study demonstrates that the uncertainty analysis of bulk flame properties can effectively provide information on relevant chemical reactions.
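The Morris screening step mentioned above ranks parameters by elementary effects, EEᵢ = [f(x + Δ eᵢ) - f(x)] / Δ, averaged in absolute value over random trajectories (μ*). The sketch below uses a cheap test function standing in for an expensive flame calculation; the factor count, step size, and trajectory count are illustrative choices, not values from the dissertation.

```python
import numpy as np

def model(x):
    # Stand-in for an expensive ignition/extinction calculation
    return np.sin(x[0]) + 5.0 * x[1] ** 2 + 0.1 * x[2] + x[0] * x[1]

rng = np.random.default_rng(0)
k, r, delta = 3, 50, 0.1                 # number of factors, trajectories, step size
mu_star = np.zeros(k)

for _ in range(r):
    x = rng.uniform(0, 1 - delta, k)     # keep room for an upward step in [0, 1]
    fx = model(x)
    for i in rng.permutation(k):         # one-at-a-time steps along a trajectory
        x_new = x.copy()
        x_new[i] += delta
        f_new = model(x_new)
        mu_star[i] += abs((f_new - fx) / delta) / r   # mean absolute elementary effect
        x, fx = x_new, f_new

print("Morris mu* per factor:", mu_star)
```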
Development and evaluation of a microdevice for amino acid biomarker detection and analysis on Mars
Skelley, Alison M.; Scherer, James R.; Aubrey, Andrew D.; Grover, William H.; Ivester, Robin H. C.; Ehrenfreund, Pascale; Grunthaner, Frank J.; Bada, Jeffrey L.; Mathies, Richard A.
2005-01-01
The Mars Organic Analyzer (MOA), a microfabricated capillary electrophoresis (CE) instrument for sensitive amino acid biomarker analysis, has been developed and evaluated. The microdevice consists of a four-wafer sandwich combining glass CE separation channels, microfabricated pneumatic membrane valves and pumps, and a nanoliter fluidic network. The portable MOA instrument integrates high voltage CE power supplies, pneumatic controls, and fluorescence detection optics necessary for field operation. The amino acid concentration sensitivities range from micromolar to 0.1 nM, corresponding to part-per-trillion sensitivity. The MOA was first used in the lab to analyze soil extracts from the Atacama Desert, Chile, detecting amino acids ranging from 10–600 parts per billion. Field tests of the MOA in the Panoche Valley, CA, successfully detected amino acids at 70 parts per trillion to 100 parts per billion in jarosite, a sulfate-rich mineral associated with liquid water that was recently detected on Mars. These results demonstrate the feasibility of using the MOA to perform sensitive in situ amino acid biomarker analysis on soil samples representative of a Mars-like environment. PMID:15657130
Mixed kernel function support vector regression for global sensitivity analysis
NASA Astrophysics Data System (ADS)
Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng
2017-11-01
Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Among the wide range of sensitivity analysis methods in the literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. By the proposed derivation, the estimation of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF is constituted by the orthogonal polynomials kernel function and the Gaussian radial basis kernel function; thus the MKF possesses both the global characteristic advantage of the polynomial kernel function and the local characteristic advantage of the Gaussian radial basis kernel function. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated by various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
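The mixed-kernel idea can be sketched with scikit-learn by passing a callable kernel that blends a polynomial kernel (global behavior) with a Gaussian RBF kernel (local behavior) and fitting an SVR meta-model to model samples. The kernel weights, degree, and bandwidth below are placeholders; the paper's specific post-processing of the SVR coefficients into Sobol indices is not reproduced here.

```python
import numpy as np
from sklearn.svm import SVR

def mixed_kernel(X, Y, w=0.5, degree=3, gamma=2.0):
    """Weighted sum of a polynomial kernel (global) and a Gaussian RBF kernel (local)."""
    poly = (X @ Y.T + 1.0) ** degree
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    rbf = np.exp(-gamma * d2)
    return w * poly + (1.0 - w) * rbf

# Samples of an illustrative model y = g(x); an SVR meta-model is fit to them
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(300, 3))
y = np.sin(np.pi * X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * X[:, 2]

svr = SVR(kernel=mixed_kernel, C=10.0, epsilon=0.01).fit(X, y)
print("meta-model R^2 on training data:", svr.score(X, y))
```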
Perfetti, Christopher M.; Rearden, Bradley T.
2016-03-01
The sensitivity and uncertainty analysis tools of the ORNL SCALE nuclear modeling and simulation code system that have been developed over the last decade have proven indispensable for numerous application and design studies for nuclear criticality safety and reactor physics. SCALE contains tools for analyzing the uncertainty in the eigenvalue of critical systems, but cannot quantify uncertainty in important neutronic parameters such as multigroup cross sections, fuel fission rates, activation rates, and neutron fluence rates with realistic three-dimensional Monte Carlo simulations. A more complete understanding of the sources of uncertainty in these design-limiting parameters could lead to improvements in process optimization, reactor safety, and help inform regulators when setting operational safety margins. A novel approach for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was recently explored as academic research and has been found to accurately and rapidly calculate sensitivity coefficients in criticality safety applications. The work presented here describes a new method, known as the GEAR-MC method, which extends the CLUTCH theory for calculating eigenvalue sensitivity coefficients to enable sensitivity coefficient calculations and uncertainty analysis for a generalized set of neutronic responses using high-fidelity continuous-energy Monte Carlo calculations. Here, several criticality safety systems were examined to demonstrate proof of principle for the GEAR-MC method, and GEAR-MC was seen to produce response sensitivity coefficients that agreed well with reference direct perturbation sensitivity coefficients.
Bifocal Fresnel Lens Based on the Polarization-Sensitive Metasurface
NASA Astrophysics Data System (ADS)
Markovich, Hen; Filonov, Dmitrii; Shishkin, Ivan; Ginzburg, Pavel
2018-05-01
Thin structured surfaces allow flexible control over the propagation of electromagnetic waves. Focusing and polarization state analysis are among the functions required for effective manipulation of radiation. Here a polarization-sensitive Fresnel zone plate lens is proposed and experimentally demonstrated for the GHz spectral range. Two spatially separated focal spots for orthogonal polarizations are obtained by designing a metasurface pattern made of overlapping, tightly packed cross- and rod-shaped antennas with strong polarization selectivity. The optimized subwavelength pattern allows multiplexing of two different lenses with low polarization crosstalk on the same substrate and provides control over the focal spots of the lens simply by changing the polarization state of the incident wave. More than a wavelength of separation between the focal spots was demonstrated over a broad spectral range covering half a decade in frequency. The proposed concept could be straightforwardly extended to the THz and visible spectra, where polarization-sensitive elements utilize the localized plasmon resonance phenomenon.
Ashok, Praveen C.; Praveen, Bavishna B.; Bellini, Nicola; Riches, Andrew; Dholakia, Kishan; Herrington, C. Simon
2013-01-01
We report a multimodal optical approach using both Raman spectroscopy and optical coherence tomography (OCT) in tandem to discriminate between colonic adenocarcinoma and normal colon. Although both of these non-invasive techniques are capable of discriminating between normal and tumour tissues, they are unable individually to provide both the high specificity and high sensitivity required for disease diagnosis. We combine the chemical information derived from Raman spectroscopy with the texture parameters extracted from OCT images. The sensitivity obtained using Raman spectroscopy and OCT individually was 89% and 78% respectively, and the specificity was 77% and 74% respectively. Combining the information derived using the two techniques increased both sensitivity and specificity to 94%, demonstrating that combining complementary optical information enhances diagnostic accuracy. These data demonstrate that multimodal optical analysis has the potential to achieve accurate non-invasive cancer diagnosis. PMID:24156073
Application of design sensitivity analysis for greater improvement on machine structural dynamics
NASA Technical Reports Server (NTRS)
Yoshimura, Masataka
1987-01-01
Methodologies are presented for greatly improving machine structural dynamics by using design sensitivity analyses and evaluative parameters. First, design sensitivity coefficients and evaluative parameters of structural dynamics are described. Next, the relations between the design sensitivity coefficients and the evaluative parameters are clarified. Then, design improvement procedures of structural dynamics are proposed for the following three cases: (1) addition of elastic structural members, (2) addition of mass elements, and (3) substantial changes of joint design variables. Cases (1) and (2) correspond to changes of the initial framework or configuration, and case (3) corresponds to the alteration of poor initial design variables. Finally, numerical examples are given to demonstrate the applicability of the proposed methods.
Dynamic sensitivity analysis of biological systems
Wu, Wu Hsiung; Wang, Feng Sheng; Chang, Maw Shang
2008-01-01
Background A mathematical model to understand, predict, control, or even design a real biological system is a central theme in systems biology. A dynamic biological system is always modeled as a nonlinear ordinary differential equation (ODE) system. How to simulate the dynamic behavior and dynamic parameter sensitivities of systems described by ODEs efficiently and accurately is a critical task. In many practical applications, e.g., the fed-batch fermentation systems, the system admissible input (corresponding to independent variables of the system) can be time-dependent. The main difficulty for investigating the dynamic log gains of these systems is the infinite dimension due to the time-dependent input. The classical dynamic sensitivity analysis does not take into account this case for the dynamic log gains. Results We present an algorithm with an adaptive step size control that can be used for computing the solution and dynamic sensitivities of an autonomous ODE system simultaneously. Although our algorithm is one of the decoupled direct methods for computing dynamic sensitivities of an ODE system, the step size determined by the model equations can be used for computing the time profile and dynamic sensitivities with moderate accuracy even when the sensitivity equations are stiffer than the model equations. To show that this algorithm can perform dynamic sensitivity analysis on very stiff ODE systems with moderate accuracy, it is implemented and applied to two sets of chemical reactions: pyrolysis of ethane and oxidation of formaldehyde. The accuracy of this algorithm is demonstrated by comparing the dynamic parameter sensitivities obtained from this new algorithm and from the direct method with a Rosenbrock stiff integrator based on the indirect method. The same dynamic sensitivity analysis was performed on an ethanol fed-batch fermentation system with a time-varying feed rate to evaluate the applicability of the algorithm to realistic models with time-dependent admissible input. Conclusion By combining the accuracy we show with the efficiency of a decoupled direct method, our algorithm is an excellent method for computing dynamic parameter sensitivities in stiff problems. We extend the scope of classical dynamic sensitivity analysis to the investigation of dynamic log gains of models with time-dependent admissible input. PMID:19091016
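The core idea of forward (direct) sensitivity analysis is to integrate the sensitivity equations alongside the model states. A minimal sketch, assuming a one-parameter toy model dx/dt = -k x rather than the paper's decoupled direct method with adaptive step-size control, is shown below.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Augmented-system forward sensitivity: integrate the state x together with
# s = dx/dk, where ds/dt = (df/dx) s + df/dk for dx/dt = -k x.

k, x0 = 0.8, 2.0

def augmented(t, y):
    x, s = y                      # s is the sensitivity dx/dk
    dxdt = -k * x
    dsdt = -k * s - x             # Jacobian (-k) times s plus df/dk (-x)
    return [dxdt, dsdt]

sol = solve_ivp(augmented, (0.0, 5.0), [x0, 0.0], method="LSODA",
                dense_output=True, rtol=1e-8, atol=1e-10)

t = 3.0
x_num, s_num = sol.sol(t)
print(s_num, -t * x0 * np.exp(-k * t))   # numeric vs analytic sensitivity
```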
NASA Astrophysics Data System (ADS)
Erskine, David J.; Edelstein, Jerry; Wishnow, Edward H.; Sirk, Martin; Muirhead, Philip S.; Muterspaugh, Matthew W.; Lloyd, James P.; Ishikawa, Yuzo; McDonald, Eliza A.; Shourt, William V.; Vanderburg, Andrew M.
2016-04-01
High-resolution broadband spectroscopy at near-infrared wavelengths (950 to 2450 nm) has been performed using externally dispersed interferometry (EDI) at the Hale telescope at Mt. Palomar. Observations of stars were performed with the "TEDI" interferometer mounted within the central hole of the 200-in. primary mirror in series with the comounted TripleSpec near-infrared echelle spectrograph. These are the first multidelay EDI demonstrations on starlight, as earlier measurements used a single delay or laboratory sources. We demonstrate a very high (10×) resolution boost, from the original 2700 to 27,000 with the current set of delays (up to 3 cm), well beyond the classical limits enforced by the slit width and detector pixel Nyquist limit. Significantly, the EDI used with multiple delays rather than a single delay as used previously yields an order of magnitude or more improvement in the stability against native spectrograph point spread function (PSF) drifts along the dispersion direction. We observe a dramatic (20×) reduction in sensitivity to PSF shift using our standard processing. A recently realized method of further reducing the PSF shift sensitivity to zero is described theoretically and demonstrated in a simple simulation which produces a 350× reduction. We demonstrate superb rejection of fixed pattern noise due to bad detector pixels, since EDI only responds to changes in pixel intensity synchronous to the applied dithering. This part 1 describes data analysis, results, and instrument noise. A section on theoretical photon limited sensitivity is in a companion paper, part 2.
Influence of ECG sampling rate in fetal heart rate variability analysis.
De Jonckheere, J; Garabedian, C; Charlier, P; Champion, C; Servan-Schreiber, E; Storme, L; Debarge, V; Jeanne, M; Logier, R
2017-07-01
Fetal hypoxia results in fetal blood acidosis (pH < 7.10). In such a situation, the fetus develops several adaptation mechanisms regulated by the autonomic nervous system. Many studies have demonstrated significant changes in heart rate variability in hypoxic fetuses, so fetal heart rate variability analysis could be of great help for fetal hypoxia prediction. Commonly used fetal heart rate variability analysis methods have been shown to be sensitive to the ECG signal sampling rate. Indeed, a low sampling rate can induce variability in heart beat detection, which alters the heart rate variability estimation. In this paper, we introduce an original fetal heart rate variability analysis method. We hypothesize that this method will be less sensitive to ECG sampling frequency changes than common heart rate variability analysis methods. We then compared the results of this new heart rate variability analysis method at two different sampling frequencies (250 and 1000 Hz).
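A toy simulation can make the sampling-rate effect concrete: snapping R-peak times to the ECG sampling grid perturbs the beat-to-beat intervals and hence a variability statistic such as RMSSD. The RR-interval statistics below are invented for illustration; this is not the authors' analysis method.

```python
import numpy as np

# Illustration of how ECG sampling rate limits R-peak timing resolution
# and therefore heart rate variability estimates (hypothetical numbers).

rng = np.random.default_rng(1)
rr_true = 0.42 + 0.01 * rng.standard_normal(600)   # fetal RR intervals (s)
r_times = np.cumsum(rr_true)                        # true R-peak times

def rmssd(rr):
    return np.sqrt(np.mean(np.diff(rr) ** 2)) * 1e3  # in milliseconds

for fs in (250, 500, 1000):
    r_quant = np.round(r_times * fs) / fs           # peaks snapped to the grid
    rr_meas = np.diff(r_quant)
    print(fs, "Hz: RMSSD =", round(rmssd(rr_meas), 2), "ms")
print("true RMSSD =", round(rmssd(rr_true), 2), "ms")
```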
Dos Santos, Denise Takehana; Costa e Silva, Adriana Paula Andrade; Vannier, Michael Walter; Cavalcanti, Marcelo Gusmão Paraiso
2004-12-01
The purpose of this study was to demonstrate the sensitivity and specificity of multislice computerized tomography (CT) for diagnosis of maxillofacial fractures following specific protocols using an independent workstation. The study population consisted of 56 patients with maxillofacial fractures who underwent multislice CT. The original data were transferred to an independent workstation using volumetric imaging software to generate axial images and simultaneous multiplanar (MPR) and 3-dimensional (3D-CT) volume rendering reconstructed images. The images were then processed and interpreted by 2 examiners using the following protocols independently of each other: axial, MPR/axial, 3D-CT images, and the association of axial/MPR/3D images. The clinical/surgical findings were considered the gold standard corroborating the diagnosis of the fractures and their anatomic localization. The statistical analysis was carried out using validity and chi-squared tests. The association of axial/MPR/3D images showed higher sensitivity (95.8%) and specificity (99%) than the other methods for the analysis of all regions. CT imaging demonstrated high specificity and sensitivity for maxillofacial fractures. The association of axial/MPR/3D-CT images added important information relative to the other CT protocols.
Extension of the ADjoint Approach to a Laminar Navier-Stokes Solver
NASA Astrophysics Data System (ADS)
Paige, Cody
The use of adjoint methods is common in computational fluid dynamics to reduce the cost of the sensitivity analysis in an optimization cycle. The forward mode ADjoint combines an adjoint sensitivity analysis method with forward mode automatic differentiation (AD) and is a modification of the reverse mode ADjoint method proposed by Mader et al. [1]. A colouring acceleration technique is presented to reduce the computational cost increase associated with forward mode AD. The forward mode AD facilitates the implementation of the laminar Navier-Stokes (NS) equations. The forward mode ADjoint method is applied to a three-dimensional computational fluid dynamics solver. The resulting Euler and viscous ADjoint sensitivities are compared to the reverse mode Euler ADjoint derivatives and a complex-step method to demonstrate the reduced computational cost and accuracy. Both comparisons demonstrate the benefits of the colouring method and the practicality of using forward mode AD. [1] Mader, C.A., Martins, J.R.R.A., Alonso, J.J., and van der Weide, E. (2008) ADjoint: An approach for the rapid development of discrete adjoint solvers. AIAA Journal, 46(4):863-873. doi:10.2514/1.29123.
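Forward-mode AD propagates a directional derivative alongside each value, which is the ingredient the ADjoint formulation builds on. A minimal dual-number sketch follows; nothing here reproduces the colouring acceleration or the Navier-Stokes solver.

```python
import math
from dataclasses import dataclass

# Dual-number forward-mode AD: each value carries its derivative with respect
# to one seeded input direction.

@dataclass
class Dual:
    val: float
    dot: float = 0.0   # derivative along the seeded direction

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def sin(x: Dual) -> Dual:
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

# d/dx of f(x, y) = x*y + sin(x) at (1.5, 2.0): seed dx = 1, dy = 0
x, y = Dual(1.5, 1.0), Dual(2.0, 0.0)
f = x * y + sin(x)
print(f.val, f.dot)   # f.dot equals y + cos(x) = 2.0707...
```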
NASA Astrophysics Data System (ADS)
Wiesauer, Karin; Pircher, Michael; Goetzinger, Erich; Hitzenberger, Christoph K.; Engelke, Rainer; Ahrens, Gisela; Pfeiffer, Karl; Ostrzinski, Ute; Gruetzner, Gabi; Oster, Reinhold; Stifter, David
2006-02-01
Optical coherence tomography (OCT) is a contactless and non-invasive technique nearly exclusively applied for bio-medical imaging of tissues. In addition to the internal structure, strains within the sample can be mapped when OCT is performed in a polarization-sensitive (PS) manner. In this work, we demonstrate the benefits of PS-OCT imaging for non-biological applications. We have developed the OCT technique beyond the state-of-the-art: based on transversal ultra-high resolution (UHR-)OCT, where an axial resolution below 2 μm within materials is obtained using a femtosecond laser as light source, we have modified the setup for polarization sensitive measurements (transversal UHR-PS-OCT). We perform structural analysis and strain mapping for different types of samples: for a highly strained elastomer specimen we demonstrate the necessity of UHR-imaging. Furthermore, we investigate epoxy waveguide structures, photoresist moulds for the fabrication of micro-electromechanical parts (MEMS), and the glass-fibre composite outer shell of helicopter rotor blades where cracks are present. For these examples, transversal scanning UHR-PS-OCT is shown to provide important information about the structural properties and the strain distribution within the samples.
Emerging spectra of singular correlation matrices under small power-map deformations
NASA Astrophysics Data System (ADS)
Vinayak; Schäfer, Rudi; Seligman, Thomas H.
2013-09-01
Correlation matrices are a standard tool in the analysis of the time evolution of complex systems in general and financial markets in particular. Yet most analyses assume stationarity of the underlying time series. This tends to be an assumption of varying and often dubious validity. The validity of the assumption improves as shorter time series are used. If many time series are used, this implies an analysis of highly singular correlation matrices. We attack this problem by using the so-called power map, which was introduced to reduce noise. Its nonlinearity breaks the degeneracy of the zero eigenvalues, and we analyze the sensitivity of the emerging spectra to correlations. This sensitivity is demonstrated for uncorrelated and correlated Wishart ensembles.
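The power map itself is a simple elementwise deformation, c -> sign(c)|c|^q. The sketch below builds a highly singular correlation matrix from short uncorrelated time series and shows the formerly degenerate zero eigenvalues splitting after a small deformation; the dimensions and exponent are arbitrary choices for illustration.

```python
import numpy as np

# Power-map deformation of a singular (uncorrelated Wishart-type) correlation
# matrix: N short time series of length T < N give at least N - T + 1 zero
# eigenvalues, which split once the elementwise power map is applied.

rng = np.random.default_rng(2)
N, T, q = 200, 50, 1.02                 # N series, T samples, small deformation
X = rng.standard_normal((N, T))
X = (X - X.mean(1, keepdims=True)) / X.std(1, keepdims=True)
C = X @ X.T / T                         # rank-deficient correlation matrix

C_q = np.sign(C) * np.abs(C) ** q       # power map breaks the zero degeneracy
ev = np.linalg.eigvalsh(C_q)
print("smallest eigenvalues after map:", np.round(ev[:5], 4))
print("zero modes before map:", np.sum(np.abs(np.linalg.eigvalsh(C)) < 1e-8))
```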
Noise spectroscopy as an equilibrium analysis tool for highly sensitive electrical biosensing
NASA Astrophysics Data System (ADS)
Guo, Qiushi; Kong, Tao; Su, Ruigong; Zhang, Qi; Cheng, Guosheng
2012-08-01
We demonstrate an approach for highly sensitive bio-detection based on silicon nanowire field-effect transistors by employing low frequency noise spectroscopy analysis. The inverse of the noise amplitude of the device exhibits an enhanced gate coupling effect in the strong inversion regime when measured in buffer solution compared with air. The approach was further validated by the detection of cardiac troponin I at 0.23 ng/ml in fetal bovine serum, in which a change of 2 orders of magnitude in noise amplitude was characterized. The selectivity of the proposed approach was also assessed by the addition of a 10 μg/ml bovine serum albumin solution.
Tsai, Chung-Yu
2017-07-01
A refractive laser beam shaper comprising two free-form profiles is presented. The profiles are designed using a free-form profile construction method such that each incident ray is directed in a certain user-specified direction or to a particular point on the target surface so as to achieve the required illumination distribution of the output beam. The validity of the proposed design method is demonstrated by means of ZEMAX simulations. The method is mathematically straightforward and easily implemented in computer code. It thus provides a convenient tool for the design and sensitivity analysis of laser beam shapers and similar optical components.
Parallel human genome analysis: microarray-based expression monitoring of 1000 genes.
Schena, M; Shalon, D; Heller, R; Chai, A; Brown, P O; Davis, R W
1996-01-01
Microarrays containing 1046 human cDNAs of unknown sequence were printed on glass with high-speed robotics. These 1.0-cm2 DNA "chips" were used to quantitatively monitor differential expression of the cognate human genes using a highly sensitive two-color hybridization assay. Array elements that displayed differential expression patterns under given experimental conditions were characterized by sequencing. The identification of known and novel heat shock and phorbol ester-regulated genes in human T cells demonstrates the sensitivity of the assay. Parallel gene analysis with microarrays provides a rapid and efficient method for large-scale human gene discovery. PMID:8855227
NASA Astrophysics Data System (ADS)
Montero, C.; Orea, J. M.; Soledad Muñoz, M.; Lobo, R. F. M.; González Ureña, A.
A technique combining laser desorption (LD) with resonance-enhanced multiphoton ionisation (REMPI) and time-of-flight mass spectrometry (TOFMS) for the trace analysis of non-volatile compounds is presented. Essential features are: (a) an enhanced desorption yield due to the mixing of metal powder with the analyte in the sample preparation, and (b) high resolution, high sensitivity and a low detection limit due to laser resonant ionisation and mass spectrometry detection. Application to the determination of the resveratrol content in grapes demonstrated the capability of the analytical method, with a sensitivity of 0.2 pg per single laser shot and a detection limit of 5 ppb.
Comparison between two methodologies for urban drainage decision aid.
Moura, P M; Baptista, M B; Barraud, S
2006-01-01
The objective of the present work is to compare two methodologies based on multicriteria analysis for the evaluation of stormwater systems. The first methodology was developed in Brazil and is based on performance-cost analysis; the second one is ELECTRE III. Both methodologies were applied to a case study, and sensitivity and robustness analyses were then carried out. These analyses demonstrate that both methodologies give equivalent results and present low sensitivity and high robustness. These results show that the Brazilian methodology is consistent and can be used safely to select a good solution, or a small set of good solutions, that could be compared with more detailed methods afterwards.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, Stacy; English, Shawn; Briggs, Timothy
Fiber-reinforced composite materials offer light-weight solutions to many structural challenges. In the development of high-performance composite structures, a thorough understanding is required of the composite materials themselves as well as methods for the analysis and failure prediction of the relevant composite structures. However, the mechanical properties required for the complete constitutive definition of a composite material can be difficult to determine through experimentation. Therefore, efficient methods are necessary that can be used to determine which properties are relevant to the analysis of a specific structure and to establish a structure's response to a material parameter that can only be defined through estimation. The objectives of this paper deal with demonstrating the potential value of sensitivity and uncertainty quantification techniques during the failure analysis of loaded composite structures, and the proposed methods are applied to the simulation of the four-point flexural characterization of a carbon fiber composite material. Utilizing a recently implemented, phenomenological orthotropic material model that is capable of predicting progressive composite damage and failure, a sensitivity analysis is completed to establish which material parameters are truly relevant to a simulation's outcome. Then, a parameter study is completed to determine the effect of the relevant material properties' expected variations on the simulated four-point flexural behavior as well as to determine the value of an unknown material property. This process demonstrates the ability to formulate accurate predictions in the absence of a rigorous material characterization effort. Finally, the presented results indicate that a sensitivity analysis and parameter study can be used to streamline the material definition process, as the described flexural characterization was used for model validation.
Hoffmann, Max J.; Engelmann, Felix; Matera, Sebastian
2017-01-31
Lattice kinetic Monte Carlo simulations have become a vital tool for predictive quality atomistic understanding of complex surface chemical reaction kinetics over a wide range of reaction conditions. In order to expand their practical value in terms of giving guidelines for the atomic level design of catalytic systems, it is very desirable to readily evaluate a sensitivity analysis for a given model. The result of such a sensitivity analysis quantitatively expresses the dependency of the turnover frequency, being the main output variable, on the rate constants entering the model. In the past, the application of sensitivity analysis, such as the degree of rate control, has been hampered by the exuberant computational effort required to accurately sample numerical derivatives of a property that is obtained from a stochastic simulation method. In this study, we present an efficient and robust three-stage approach that is capable of reliably evaluating the sensitivity measures for stiff microkinetic models, as we demonstrate using CO oxidation on RuO2(110) as a prototypical reaction. In the first step, we utilize the Fisher information matrix for filtering out elementary processes which only yield negligible sensitivity. Then we employ an estimator based on linear response theory for calculating the sensitivity measure for non-critical conditions, which covers the majority of cases. Finally, we adapt a method for sampling coupled finite differences for evaluating the sensitivity measure for lattice based models. This allows for efficient evaluation even in critical regions near a second order phase transition that are hitherto difficult to control. The combined approach leads to significant computational savings over straightforward numerical derivatives and should aid in accelerating the nano-scale design of heterogeneous catalysts.
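The sensitivity measure in question is the degree of rate control, X_i = d ln(TOF)/d ln(k_i). The sketch below evaluates it by plain central differences on a deterministic two-step mean-field toy cycle, only to fix the definition; the paper's contribution is precisely the machinery needed to estimate this quantity efficiently from stochastic lattice kMC, which is not reproduced here.

```python
import numpy as np

# Degree of rate control on a two-step mean-field toy cycle
# (adsorption k1, surface reaction k2) via central differences in log space.

def tof(k):
    k1, k2 = k
    theta = k1 / (k1 + k2)          # steady-state coverage
    return k2 * theta               # turnover frequency

k0 = np.array([2.0, 0.5])
eps = 1e-4
for i, name in enumerate(["k1 (adsorption)", "k2 (reaction)"]):
    kp, km = k0.copy(), k0.copy()
    kp[i] *= np.exp(eps)
    km[i] *= np.exp(-eps)
    drc = (np.log(tof(kp)) - np.log(tof(km))) / (2 * eps)   # d ln TOF / d ln k_i
    print(name, "X =", round(drc, 3))
# analytic: X1 = k2/(k1+k2) = 0.2, X2 = k1/(k1+k2) = 0.8, summing to 1
```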
Harvey, John J; Chester, Stephanie; Burke, Stephen A; Ansbro, Marisela; Aden, Tricia; Gose, Remedios; Sciulli, Rebecca; Bai, Jing; DesJardin, Lucy; Benfer, Jeffrey L; Hall, Joshua; Smole, Sandra; Doan, Kimberly; Popowich, Michael D; St George, Kirsten; Quinlan, Tammy; Halse, Tanya A; Li, Zhen; Pérez-Osorio, Ailyn C; Glover, William A; Russell, Denny; Reisdorf, Erik; Whyte, Thomas; Whitaker, Brett; Hatcher, Cynthia; Srinivasan, Velusamy; Tatti, Kathleen; Tondella, Maria Lucia; Wang, Xin; Winchell, Jonas M; Mayer, Leonard W; Jernigan, Daniel; Mawle, Alison C
2016-02-01
In this study, a multicenter evaluation of the Life Technologies TaqMan® Array Card (TAC) with 21 custom viral and bacterial respiratory assays was performed on the Applied Biosystems ViiA™ 7 Real-Time PCR System. The goal of the study was to demonstrate the analytical performance of this platform when compared to identical individual pathogen specific laboratory developed tests (LDTs) designed at the Centers for Disease Control and Prevention (CDC), equivalent LDTs provided by state public health laboratories, or to three different commercial multi-respiratory panels. CDC and Association of Public Health Laboratories (APHL) LDTs had similar analytical sensitivities for viral pathogens, while several of the bacterial pathogen APHL LDTs demonstrated sensitivities one log higher than the corresponding CDC LDT. When compared to CDC LDTs, TAC assays were generally one to two logs less sensitive depending on the site performing the analysis. Finally, TAC assays were generally more sensitive than their counterparts in three different commercial multi-respiratory panels. TAC technology allows users to spot customized assays and design TAC layout, simplify assay setup, conserve specimen, dramatically reduce contamination potential, and as demonstrated in this study, analyze multiple samples in parallel with good reproducibility between instruments and operators.
Automated, Ultra-Sterile Solid Sample Handling and Analysis on a Chip
NASA Technical Reports Server (NTRS)
Mora, Maria F.; Stockton, Amanda M.; Willis, Peter A.
2013-01-01
There are no existing ultra-sterile lab-on-a-chip systems that can accept solid samples and perform complete chemical analyses without human intervention. The proposed solution is to demonstrate completely automated lab-on-a-chip manipulation of powdered solid samples, followed by on-chip liquid extraction and chemical analysis. This technology utilizes a newly invented glass micro-device for solid manipulation, which mates with existing lab-on-a-chip instrumentation. Devices are fabricated in a Class 10 cleanroom at the JPL MicroDevices Lab, and are plasma-cleaned before and after assembly. Solid samples enter the device through a drilled hole in the top. Existing micro-pumping technology is used to transfer milligrams of powdered sample into an extraction chamber where it is mixed with liquids to extract organic material. Subsequent chemical analysis is performed using portable microchip capillary electrophoresis systems (CE). These instruments have been used for ultra-sensitive (parts-per-trillion, pptr) analysis of organic compounds including amines, amino acids, aldehydes, ketones, carboxylic acids, and thiols. Fully autonomous amino acid analyses in liquids were demonstrated; however, to date there have been no reports of completely automated analysis of solid samples on chip. This approach utilizes an existing portable instrument that houses optics, high-voltage power supplies, and solenoids for fully autonomous microfluidic sample processing and CE analysis with laser-induced fluorescence (LIF) detection. Furthermore, the entire system can be sterilized and placed in a cleanroom environment for analyzing samples returned from extraterrestrial targets, if desired. This is an entirely new capability never demonstrated before. The ability to manipulate solid samples, coupled with lab-on-a-chip analysis technology, will enable ultraclean and ultrasensitive end-to-end analysis of samples that is orders of magnitude more sensitive than the ppb goal given in the Science Instruments.
Direct magnetic field estimation based on echo planar raw data.
Testud, Frederik; Splitthoff, Daniel Nicolas; Speck, Oliver; Hennig, Jürgen; Zaitsev, Maxim
2010-07-01
Gradient recalled echo echo planar imaging is widely used in functional magnetic resonance imaging. The fast data acquisition is, however, very sensitive to field inhomogeneities, which manifest themselves as artifacts in the images. Typically used correction methods have the common deficit that the data for the correction are acquired only once at the beginning of the experiment, assuming the field inhomogeneity distribution B0 does not change over the course of the experiment. In this paper, methods to extract the magnetic field distribution from the acquired k-space data or from the reconstructed phase image of a gradient echo planar sequence are compared and extended. A common derivation for the presented approaches provides a solid theoretical basis, enables a fair comparison and demonstrates the equivalence of the k-space and the image phase based approaches. The image phase analysis is extended here to calculate the local gradient in the readout direction, and improvements are introduced to the echo shift analysis, referred to here as "k-space filtering analysis." The described methods are compared to experimentally acquired B0 maps in phantoms and in vivo. The k-space filtering analysis presented in this work was shown to be the most sensitive method for detecting field inhomogeneities.
Sensitivity of surface meteorological analyses to observation networks
NASA Astrophysics Data System (ADS)
Tyndall, Daniel Paul
A computationally efficient variational analysis system for two-dimensional meteorological fields is developed and described. This analysis approach is most efficient when the number of analysis grid points is much larger than the number of available observations, such as for large domain mesoscale analyses. The analysis system is developed using MATLAB software and can take advantage of multiple processors or processor cores. A version of the analysis system has been exported as a platform independent application (i.e., can be run on Windows, Linux, or Macintosh OS X desktop computers without a MATLAB license) with input/output operations handled by commonly available internet software combined with data archives at the University of Utah. The impact of observation networks on the meteorological analyses is assessed by utilizing a percentile ranking of individual observation sensitivity and impact, which is computed by using the adjoint of the variational surface assimilation system. This methodology is demonstrated using a case study of the analysis from 1400 UTC 27 October 2010 over the entire contiguous United States domain. The sensitivity of this approach to the dependence of the background error covariance on observation density is examined. Observation sensitivity and impact provide insight on the influence of observations from heterogeneous observing networks as well as serve as objective metrics for quality control procedures that may help to identify stations with significant siting, reporting, or representativeness issues.
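For a linear analysis the adjoint-based observation sensitivity has a closed form: with gain K = B Hᵀ (H B Hᵀ + R)⁻¹, the sensitivity of a scalar analysis functional J = gᵀ x_a to the observations is Kᵀ g, and multiplying by the innovations gives per-observation impact. The sketch below, with an invented 1D grid, covariances and observation set, illustrates that calculation rather than the dissertation's actual analysis system.

```python
import numpy as np

# Linear analysis x_a = x_b + K (y - H x_b) and adjoint-based observation
# sensitivity/impact for a scalar functional of the analysis.

n_grid = 50
obs_idx = np.array([5, 12, 30, 44])
xg = np.arange(n_grid, dtype=float)

# Gaussian background error covariance with an 8-gridpoint length scale
B = np.exp(-0.5 * ((xg[:, None] - xg[None, :]) / 8.0) ** 2)
H = np.zeros((len(obs_idx), n_grid))
H[np.arange(len(obs_idx)), obs_idx] = 1.0
R = 0.25 * np.eye(len(obs_idx))                     # observation error variance

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)        # gain matrix

x_b = np.zeros(n_grid)
y = np.array([1.0, 0.5, -0.3, 0.8])                 # synthetic observations
x_a = x_b + K @ (y - H @ x_b)

g = np.ones(n_grid) / n_grid                        # J = domain-mean analysis
obs_sensitivity = K.T @ g                           # dJ/dy, one value per obs
obs_impact = obs_sensitivity * (y - H @ x_b)        # contribution of each obs
print(np.round(obs_sensitivity, 4), np.round(obs_impact, 4))
```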
North Atlantic storm driving of extreme wave heights in the North Sea
NASA Astrophysics Data System (ADS)
Bell, R. J.; Gray, S. L.; Jones, O. P.
2017-04-01
The relationship between storms and extreme ocean waves in the North Sea is assessed using a long-period wave data set and storms identified in the Interim ECMWF Re-Analysis (ERA-Interim). An ensemble sensitivity analysis is used to provide information on the spatial and temporal forcing from mean sea-level pressure and surface wind associated with extreme ocean wave height responses. Extreme ocean waves in the central North Sea arise due to intense extratropical cyclone winds from either the cold conveyor belt (northerly-wind events) or the warm conveyor belt (southerly-wind events). The largest wave heights are associated with northerly-wind events which tend to have stronger wind speeds and occur as the cold conveyor belt wraps rearward round the cyclone to the cold side of the warm front. The northerly-wind events provide a larger fetch to the central North Sea to aid wave growth. Southerly-wind events are associated with the warm conveyor belts of intense extratropical cyclones that develop in the left upper tropospheric jet exit region. Ensemble sensitivity analysis can provide early warning of extreme wave events by demonstrating a relationship between wave height and high pressure to the west of the British Isles for northerly-wind events 48 h prior. Southerly-wind extreme events demonstrate sensitivity to low pressure to the west of the British Isles 36 h prior.
Alam, Maksudul; Deng, Xinwei; Philipson, Casandra; Bassaganya-Riera, Josep; Bisset, Keith; Carbo, Adria; Eubank, Stephen; Hontecillas, Raquel; Hoops, Stefan; Mei, Yongguo; Abedi, Vida; Marathe, Madhav
2015-01-01
Agent-based models (ABM) are widely used to study immune systems, providing a procedural and interactive view of the underlying system. The interaction of components and the behavior of individual objects is described procedurally as a function of the internal states and the local interactions, which are often stochastic in nature. Such models typically have complex structures and consist of a large number of modeling parameters. Determining the key modeling parameters which govern the outcomes of the system is very challenging. Sensitivity analysis plays a vital role in quantifying the impact of modeling parameters in massively interacting systems, including large complex ABM. The high computational cost of executing simulations impedes running experiments with exhaustive parameter settings. Existing techniques of analyzing such a complex system typically focus on local sensitivity analysis, i.e. one parameter at a time, or a close “neighborhood” of particular parameter settings. However, such methods are not adequate to measure the uncertainty and sensitivity of parameters accurately because they overlook the global impacts of parameters on the system. In this article, we develop novel experimental design and analysis techniques to perform both global and local sensitivity analysis of large-scale ABMs. The proposed method can efficiently identify the most significant parameters and quantify their contributions to outcomes of the system. We demonstrate the proposed methodology for ENteric Immune SImulator (ENISI), a large-scale ABM environment, using a computational model of immune responses to Helicobacter pylori colonization of the gastric mucosa. PMID:26327290
Feizizadeh, Bakhtiar; Jankowski, Piotr; Blaschke, Thomas
2014-01-01
GIS multicriteria decision analysis (MCDA) techniques are increasingly used in landslide susceptibility mapping for the prediction of future hazards, land use planning, as well as for hazard preparedness. However, the uncertainties associated with MCDA techniques are inevitable and model outcomes are open to multiple types of uncertainty. In this paper, we present a systematic approach to uncertainty and sensitivity analysis. We assess the uncertainty of landslide susceptibility maps produced with GIS-MCDA techniques. A new spatially-explicit approach and Dempster–Shafer Theory (DST) are employed to assess the uncertainties associated with two MCDA techniques, namely Analytical Hierarchical Process (AHP) and Ordered Weighted Averaging (OWA) implemented in GIS. The methodology is composed of three different phases. First, weights are computed to express the relative importance of factors (criteria) for landslide susceptibility. Next, the uncertainty and sensitivity of landslide susceptibility is analyzed as a function of weights using Monte Carlo Simulation and Global Sensitivity Analysis. Finally, the results are validated using a landslide inventory database and by applying DST. The comparisons of the obtained landslide susceptibility maps of both MCDA techniques with known landslides show that the AHP outperforms OWA. However, the OWA-generated landslide susceptibility map shows lower uncertainty than the AHP-generated map. The results demonstrate that further improvement in the accuracy of GIS-based MCDA can be achieved by employing an integrated uncertainty–sensitivity analysis approach, in which the uncertainty of the landslide susceptibility model is decomposed and attributed to the model's criteria weights. PMID:25843987
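One common way to propagate criterion-weight uncertainty through a weighted-linear-combination susceptibility score is Monte Carlo perturbation of the weights. The sketch below uses a Dirichlet distribution centred on hypothetical AHP weights and synthetic factor maps; it illustrates the general idea rather than the paper's spatially explicit DST-based workflow.

```python
import numpy as np

# Monte Carlo propagation of criterion-weight uncertainty through a
# weighted-linear-combination susceptibility score (hypothetical inputs).

rng = np.random.default_rng(3)
n_cells, n_criteria, n_draws = 10_000, 5, 500
criteria = rng.random((n_cells, n_criteria))       # standardized factor maps
w_ahp = np.array([0.35, 0.25, 0.20, 0.12, 0.08])   # nominal AHP weights

scores = np.empty((n_draws, n_cells))
for k in range(n_draws):
    w = rng.dirichlet(w_ahp * 100)                 # weights sum to 1, centred on w_ahp
    scores[k] = criteria @ w

mean_score = scores.mean(axis=0)
uncertainty = scores.std(axis=0)                   # per-cell score uncertainty
print(round(uncertainty.mean(), 4), round(uncertainty.max(), 4))
```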
Global Sensitivity Analysis for Process Identification under Model Uncertainty
NASA Astrophysics Data System (ADS)
Ye, M.; Dai, H.; Walker, A. P.; Shi, L.; Yang, J.
2015-12-01
The environmental system consists of various physical, chemical, and biological processes, and environmental models are always built to simulate these processes and their interactions. For model building, improvement, and validation, it is necessary to identify important processes so that limited resources can be used to better characterize the processes. While global sensitivity analysis has been widely used to identify important processes, the process identification is always based on deterministic process conceptualization that uses a single model for representing a process. However, environmental systems are complex, and it often happens that a single process may be simulated by multiple alternative models. Ignoring the model uncertainty in process identification may lead to biased identification, in that processes identified as important may not be so in the real world. This study addresses this problem by developing a new method of global sensitivity analysis for process identification. The new method is based on the concept of Sobol sensitivity analysis and model averaging. Similar to the Sobol sensitivity analysis used to identify important parameters, our new method evaluates the variance change when a process is fixed at its different conceptualizations. The variance considers both parametric and model uncertainty using the method of model averaging. The method is demonstrated using a synthetic study of groundwater modeling that considers a recharge process and a parameterization process. Each process has two alternative models. Important processes of groundwater flow and transport are evaluated using our new method. The method is mathematically general, and can be applied to a wide range of environmental problems.
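A minimal numerical sketch of the idea, with two invented conceptualizations for each of a recharge and a parameterization process, is to attribute output variance to the choice of a process's model while averaging over the other process and the parameters:

```python
import numpy as np

# Toy process-identification sensitivity under model uncertainty: the index
# for a process is the share of output variance explained by switching that
# process's conceptualization, averaged over the other process and parameters.
# Models, priors and the output are invented for illustration.

rng = np.random.default_rng(6)
n = 20_000

recharge_models = [lambda p: 100 + 20 * p, lambda p: 80 + 35 * p]   # mm/yr
param_models    = [lambda q: 1.0 + 0.10 * q, lambda q: 1.2 + 0.05 * q]

def simulate(r_idx, m_idx):
    p, q = rng.random(n), rng.random(n)        # parameters drawn from priors
    return recharge_models[r_idx](p) * param_models[m_idx](q)   # toy output

y = {(r, m): simulate(r, m) for r in (0, 1) for m in (0, 1)}
total_var = np.var(np.concatenate(list(y.values())))

# variance of the conditional means over the recharge conceptualizations
mean_by_recharge = [np.mean(np.concatenate([y[(r, 0)], y[(r, 1)]])) for r in (0, 1)]
PS_recharge = np.var(mean_by_recharge) / total_var
print("process sensitivity index, recharge:", round(PS_recharge, 3))
```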
John M. Johnston; Mahion C. Barber; Kurt Wolfe; Mike Galvin; Mike Cyterski; Rajbir Parmar; Luis Suarez
2016-01-01
We demonstrate a spatially-explicit regional assessment of current condition of aquatic ecoservices in the Coal River Basin (CRB), with limited sensitivity analysis for the atmospheric contaminant mercury. The integrated modeling framework (IMF) forecasts water quality and quantity, habitat suitability for aquatic biota, fish biomasses, population densities, ...
Millard, Daniel; Dang, Qianyu; Shi, Hong; Zhang, Xiaou; Strock, Chris; Kraushaar, Udo; Zeng, Haoyu; Levesque, Paul; Lu, Hua-Rong; Guillon, Jean-Michel; Wu, Joseph C; Li, Yingxin; Luerman, Greg; Anson, Blake; Guo, Liang; Clements, Mike; Abassi, Yama A; Ross, James; Pierson, Jennifer; Gintant, Gary
2018-04-27
Recent in vitro cardiac safety studies demonstrate the ability of human induced pluripotent stem cell-derived cardiomyocytes (hiPSC-CMs) to detect electrophysiologic effects of drugs. However, variability contributed by unique approaches, procedures, cell lines and reagents across laboratories makes comparisons of results difficult, leading to uncertainty about the role of hiPSC-CMs in defining proarrhythmic risk in drug discovery and regulatory submissions. A blinded pilot study was conducted to evaluate the electrophysiologic effects of eight well-characterized drugs on four cardiomyocyte lines using a standardized protocol across three microelectrode array (MEA) platforms (18 individual studies). Drugs were selected to define assay sensitivity of prominent repolarizing currents (E-4031 for IKr, JNJ303 for IKs) and depolarizing currents (nifedipine for ICaL, mexiletine for INa) as well as drugs affecting multi-channel block (flecainide, moxifloxacin, quinidine, and ranolazine). Inclusion criteria for the final analysis were based on demonstrated sensitivity to IKr block (20% prolongation with E-4031) and L-type calcium current block (20% shortening with nifedipine). Despite differences in baseline characteristics across cardiomyocyte lines, multiple sites and instrument platforms, 10 of 18 studies demonstrated adequate sensitivity to IKr block with E-4031 and ICaL block with nifedipine for inclusion in the final analysis. Concentration-dependent effects on repolarization were observed with this qualified dataset, consistent with known ionic mechanisms of single and multi-channel blocking drugs. hiPSC-CMs can detect repolarization effects elicited by single and multi-channel blocking drugs after defining pharmacologic sensitivity to IKr and ICaL block, supporting further validation efforts using hiPSC-CMs for cardiac safety studies.
NASA Astrophysics Data System (ADS)
Graham, Eleanor; Cuore Collaboration
2017-09-01
The CUORE experiment is a large-scale bolometric detector seeking to observe the never-before-seen process of neutrinoless double beta decay. Predictions of CUORE's sensitivity to neutrinoless double beta decay allow an understanding of the half-life ranges that the detector can probe and an evaluation of the relative importance of different detector parameters. Currently, CUORE uses a Bayesian analysis based on BAT, which uses Metropolis-Hastings Markov chain Monte Carlo, for its sensitivity studies. My work evaluates the viability and potential improvements of switching the Bayesian analysis to Hamiltonian Monte Carlo, realized through the program Stan and its Morpho interface. I demonstrate that the BAT study can be successfully recreated in Stan, and perform a detailed comparison between the results and computation times of the two methods.
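As a point of reference for the two samplers being compared, the following is a toy random-walk Metropolis-Hastings sampler for the posterior of a decay rate given a small Poisson count. The exposure, count, prior and proposal scale are all invented and bear no relation to CUORE's actual likelihood; Stan would replace the proposal/accept loop with Hamiltonian dynamics on the log-posterior gradient.

```python
import numpy as np

# Random-walk Metropolis-Hastings for a Poisson counting posterior
# (hypothetical numbers, flat prior on the rate).

rng = np.random.default_rng(4)
observed, exposure = 3, 10.0          # made-up count and exposure

def log_post(rate):
    if rate <= 0:
        return -np.inf
    mu = rate * exposure
    return observed * np.log(mu) - mu  # Poisson log-likelihood

samples, rate = [], 0.5
for _ in range(50_000):
    prop = rate + 0.1 * rng.standard_normal()
    if np.log(rng.random()) < log_post(prop) - log_post(rate):
        rate = prop                    # accept the proposal
    samples.append(rate)

samples = np.array(samples[5_000:])    # drop burn-in
print("posterior median", round(np.median(samples), 3),
      "90% upper limit", round(np.quantile(samples, 0.9), 3))
```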
Laithwaite, J E; Benn, S J; Marshall, W S; FitzGerald, D J; LaMarre, J
2001-09-01
Pseudomonas exotoxin A (PEA) is an extracellular virulence factor produced by the opportunistic human pathogen Pseudomonas aeruginosa. PEA intoxification begins when PEA binds to the low-density lipoprotein receptor-related protein (LRP). The liver is the primary target of systemic PEA, due largely to the high levels of functional LRP expressed by liver cells. Using a 3H-leucine incorporation assay to measure inhibition of protein synthesis, we have demonstrated that normal (BNL CL.2) and transformed (BNL 1ME A7R.1) liver cells exhibit divergent PEA sensitivity, with BNL 1ME A7R.1 cells demonstrating greater PEA sensitivity than their non-transformed counterparts. The receptor-associated protein, an LRP antagonist, decreased PEA toxicity in BNL 1ME A7R.1 cells, confirming the importance of the LRP in PEA intoxification in this cell type. Increased PEA sensitivity in BNL 1ME A7R.1 cells was associated with increased functional cell surface LRP expression, as measured by alpha2-macroglobulin binding and internalization studies, and increased LRP mRNA levels, as determined by Northern blot analysis. Interestingly, BNL CL.2 cells were more sensitive than BNL 1ME A7R.1 cells to conjugate and mutant PEA toxins that do not utilize the LRP for cellular entry. These data demonstrate that increased LRP expression is an important mechanism by which PEA sensitivity is increased in BNL 1ME A7R.1 transformed liver cells.
A Model for Analyzing Disability Policy
ERIC Educational Resources Information Center
Turnbull, Rud; Stowe, Matthew J.
2017-01-01
This article describes a 12-step model that can be used for policy analysis. The model encompasses policy development, implementation, and evaluation; takes into account structural foundations of policy; addresses both legal formalism and legal realism; demonstrates contextual sensitivity; and addresses application issues and different…
Zhang, Na; Zhang, Jian
2016-01-01
The moral hazards and poor public image of the insurance industry, arising from insurance agents' unethical behavior, both disrupt the normal operation of an insurance company and decrease applicants' confidence in the company. At the same time, these scandals may demonstrate that the organizations were "bad barrels" in which insurance agents' unethical decisions were supported or encouraged by the organization's leadership or climate. The present study brings two organization-level factors (ethical leadership and ethical climate) together and explores the role of ethical climate in the relationship between the ethical leadership and business ethical sensitivity of Chinese insurance agents. Through a multilevel analysis of 502 insurance agents from 56 organizations, it is found that organizational ethical leadership is positively related to the organizational ethical climate; organizational ethical climate is positively related to business ethical sensitivity; and organizational ethical climate fully mediates the relationship between organizational ethical leadership and business ethical sensitivity. Organizational ethical climate thus plays a completely mediating role in the relationship between organizational ethical leadership and business ethical sensitivity. The integrated model of ethical leadership, ethical climate and business ethical sensitivity makes several contributions to ethics theory, research and management.
Performance analysis of higher mode spoof surface plasmon polariton for terahertz sensing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yao, Haizi; Tu, Wanli; Zhong, Shuncong, E-mail: zhongshuncong@hotmail.com
2015-04-07
We investigated the spoof surface plasmon polaritons (SSPPs) on a 1D grooved metal surface for terahertz sensing of the refractive index of the filling analyte through a prism-coupling attenuated total reflection setup. From the dispersion relation analysis and the finite element method-based simulation, we revealed that the dispersion curve of the SSPP was suppressed as the filling refractive index increased, which causes the coupling resonance frequency to redshift in the reflection spectrum. The simulated results for various refractive indexes demonstrated that the incident angle of the terahertz radiation has a great effect on the sensing performance. A smaller incident angle results in more sensitive sensing with a narrower detection range. Likewise, higher-order-mode SSPP-based sensing has a higher sensitivity with a narrower detection range. The maximum sensitivity is 2.57 THz/RIU for the second-order mode sensing at a 45° internal incident angle. The proposed SSPP-based method has great potential for highly sensitive terahertz sensing.
Prevalence of potent skin sensitizers in oxidative hair dye products in Korea.
Kim, Hyunji; Kim, Kisok
2016-09-01
The objective of the present study was to elucidate the prevalence of potent skin sensitizers in oxidative hair dye products manufactured by Korean domestic companies. A database of hair dye products made by domestic companies and sold in the Korean market in 2013 was used to obtain information on company name, brand name, quantity of production, and ingredients. The prevalence of substances categorized as potent skin sensitizers was calculated using the hair dye ingredient database, and the pattern of concomitant presence of hair dye ingredients was analyzed using network analysis software. A total of 19 potent skin sensitizers were identified from a database that included 99 hair dye products manufactured by Korean domestic companies. Among the 19 potent skin sensitizers, the four most frequent were resorcinol, m-aminophenol, p-phenylenediamine (PPD), and p-aminophenol; these four skin-sensitizing ingredients were found in more than 50% of the products studied. Network analysis showed that resorcinol, m-aminophenol, and PPD existed together in many hair dye products. In the 99 products examined, the average product contained 4.4 potent sensitizers, and 82% of the products contained four or more skin sensitizers. The present results demonstrate that oxidative hair dye products made by Korean domestic manufacturers contain various numbers and types of potent skin sensitizers. Furthermore, these results suggest that some hair dye products should be used with caution to prevent adverse effects on the skin, including allergic contact dermatitis.
Wu, Yiping; Liu, Shuguang; Huang, Zhihong; Yan, Wende
2014-01-01
Ecosystem models are useful tools for understanding ecological processes and for sustainable management of resources. In the biogeochemical field, numerical models have been widely used for investigating carbon dynamics under global changes from site to regional and global scales. However, it is still challenging to optimize parameters and estimate parameterization uncertainty for complex process-based models such as the Erosion Deposition Carbon Model (EDCM), a modified version of CENTURY that considers the carbon, water, and nutrient cycles of ecosystems. This study was designed to conduct the parameter identifiability, optimization, sensitivity, and uncertainty analysis of EDCM using our developed EDCM-Auto, which incorporates a comprehensive R package, the Flexible Modeling Framework (FME), and the Shuffled Complex Evolution (SCE) algorithm. Using a forest flux tower site as a case study, we implemented a comprehensive modeling analysis involving nine parameters and four target variables (carbon and water fluxes) with their corresponding measurements based on the eddy covariance technique. The local sensitivity analysis shows that the model cost function is most sensitive to the plant production-related parameters (e.g., PPDF1 and PRDX). Both SCE and FME are comparable and performed well in deriving the optimal parameter set with satisfactory simulations of the target variables. The global sensitivity and uncertainty analysis indicates that the parameter uncertainty and the resulting output uncertainty can be quantified, and that the magnitude of parameter-uncertainty effects depends on variables and seasons. This study also demonstrates that using cutting-edge R packages such as FME can be feasible and attractive for conducting comprehensive parameter analysis for ecosystem modeling.
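A minimal sketch of the local, one-at-a-time sensitivity analysis described above, assuming a placeholder cost function rather than the EDCM misfit; the parameter labels reuse names mentioned in the abstract purely for illustration.

```python
import numpy as np

def cost(params):
    # Placeholder cost function standing in for the EDCM misfit between
    # simulated and observed carbon/water fluxes (not the real model).
    p1, p2, p3 = params
    return (p1 - 1.0) ** 2 + 0.1 * (p2 - 2.0) ** 2 + 0.01 * p3 ** 2

def local_sensitivity(params, rel_step=0.01):
    base = cost(params)
    sens = {}
    for i, name in enumerate(["PPDF1", "PRDX", "K_decomp"]):  # illustrative labels
        dp = rel_step * params[i]
        perturbed = params.copy()
        perturbed[i] += dp
        # Normalized sensitivity: relative change in cost per relative change in parameter.
        sens[name] = (cost(perturbed) - base) / base / rel_step
    return sens

print(local_sensitivity(np.array([1.2, 1.8, 0.5])))
```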
Surrogate models for efficient stability analysis of brake systems
NASA Astrophysics Data System (ADS)
Nechak, Lyes; Gillot, Frédéric; Besset, Sébastien; Sinou, Jean-Jacques
2015-07-01
This study assesses the capacity of global sensitivity analysis, combined with the kriging formalism, to support robust stability analysis of brake systems, which is too costly when performed with the classical complex eigenvalue analysis (CEA) based on finite element models (FEMs). Considering a simplified brake system, the global sensitivity analysis is first shown to be very helpful for understanding the effects of design parameters on the brake system's stability. This is enabled by the so-called Sobol indices, which discriminate design parameters with respect to their influence on the stability. Consequently, only the uncertainty of the influential parameters is taken into account in the following step, namely surrogate modelling based on kriging. The latter is then demonstrated to be an interesting alternative to FEMs, since it allows, at a lower cost, an accurate estimation of the system's proportions of instability corresponding to the influential parameters.
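For readers unfamiliar with Sobol indices, the sketch below estimates first- and total-order indices for a toy stability response using the open-source SALib package; SALib is not the tool used by the authors, and the parameter names, bounds, and response function are invented.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

def eigenvalue_real_part(x):
    # Toy stand-in for the real part of a critical eigenvalue of a brake model.
    friction, stiffness, damping = x
    return friction * stiffness - 2.0 * damping + 0.5 * friction * damping

problem = {
    "num_vars": 3,
    "names": ["friction", "stiffness", "damping"],   # hypothetical design parameters
    "bounds": [[0.2, 0.6], [0.8, 1.2], [0.0, 0.3]],
}

X = saltelli.sample(problem, 1024)                    # Saltelli sampling design
Y = np.array([eigenvalue_real_part(x) for x in X])
Si = sobol.analyze(problem, Y)
print(dict(zip(problem["names"], np.round(Si["S1"], 3))))  # first-order indices
print(dict(zip(problem["names"], np.round(Si["ST"], 3))))  # total-order indices
```

Parameters with negligible total-order indices could then be fixed, and the kriging surrogate built only over the influential ones, as the study describes.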
NASA Astrophysics Data System (ADS)
Andersen, T.; Jensen, R.; Christensen, M. K.; Pedersen, T.; Hansen, O.; Chorkendorff, I.
2012-07-01
We demonstrate a combined microreactor and time of flight system for testing and characterization of heterogeneous catalysts with high resolution mass spectrometry and high sensitivity. Catalyst testing is performed in silicon-based microreactors which have high sensitivity and fast thermal response. Gas analysis is performed with a time of flight mass spectrometer with a modified nude Bayard-Alpert ionization gauge as gas ionization source. The mass resolution of the time of flight mass spectrometer using the ion gauge as ionization source is estimated to m/Δm > 2500. The system design is superior to conventional batch and flow reactors with accompanying product detection by quadrupole mass spectrometry or gas chromatography, not only because of its high sensitivity, fast temperature response, high mass resolution, and fast acquisition of mass spectra, but also because it allows a wide mass range (0-5000 amu in the current configuration). As a demonstration of the system performance we present data from ammonia oxidation on a Pt thin film showing resolved spectra of OH and NH3.
Huang, Jiacong; Gao, Junfeng; Yan, Renhua
2016-08-15
Phosphorus (P) export from lowland polders has caused severe water pollution. Numerical models are an important resource that helps water managers control P export. This study coupled three models, i.e., the Phosphorus Dynamic model for Polders (PDP), the Integrated Catchments model of Phosphorus dynamics (INCA-P) and the Universal Soil Loss Equation (USLE), to describe the P dynamics in polders. Based on the coupled models and a dataset collected from Polder Jian in China, a sensitivity analysis was carried out to analyze the cause-effect relationships between environmental factors and P export from Polder Jian. The sensitivity analysis results showed that P export from Polder Jian was strongly affected by air temperature, precipitation and fertilization. Proper fertilization management should be a strategic priority for reducing P export from Polder Jian. This study demonstrated the success of the model coupling and its application in investigating potential strategies to support pollution control in polder systems. Copyright © 2016. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Stockton, T. B.; Black, P. K.; Catlett, K. M.; Tauxe, J. D.
2002-05-01
Environmental modeling is an essential component in the evaluation of regulatory compliance of radioactive waste management sites (RWMSs) at the Nevada Test Site in southern Nevada, USA. For those sites that are currently operating, further goals are to support integrated decision analysis for the development of acceptance criteria for future wastes, as well as site maintenance, closure, and monitoring. At these RWMSs, the principal pathways for release of contamination to the environment are upward towards the ground surface rather than downwards towards the deep water table. Biotic processes, such as burrow excavation and plant uptake and turnover, dominate this upward transport. A combined multi-pathway contaminant transport and risk assessment model was constructed using the GoldSim modeling platform. This platform facilitates probabilistic analysis of environmental systems, and is especially well suited for assessments involving radionuclide decay chains. The model employs probabilistic definitions of key parameters governing contaminant transport, with the goals of quantifying cumulative uncertainty in the estimation of performance measures and providing information necessary to perform sensitivity analyses. This modeling differs from previous radiological performance assessments (PAs) in that the modeling parameters are intended to be representative of the current knowledge, and the uncertainty in that knowledge, of parameter values rather than reflective of a conservative assessment approach. While a conservative PA may be sufficient to demonstrate regulatory compliance, a parametrically honest PA can also be used for more general site decision-making. In particular, a parametrically honest probabilistic modeling approach allows both uncertainty and sensitivity analyses to be explicitly coupled to the decision framework using a single set of model realizations. For example, sensitivity analysis provides a guide for analyzing the value of collecting more information by quantifying the relative importance of each input parameter in predicting the model response. However, in these complex, high dimensional eco-system models, represented by the RWMS model, the dynamics of the systems can act in a non-linear manner. Quantitatively assessing the importance of input variables becomes more difficult as the dimensionality, the non-linearities, and the non-monotonicities of the model increase. Methods from data mining such as Multivariate Adaptive Regression Splines (MARS) and the Fourier Amplitude Sensitivity Test (FAST) provide tools that can be used in global sensitivity analysis in these high dimensional, non-linear situations. The enhanced interpretability of model output provided by the quantitative measures estimated by these global sensitivity analysis tools will be demonstrated using the RWMS model.
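The Fourier Amplitude Sensitivity Test (FAST) named above is available in open-source tools; the sketch below uses SALib (which is not the GoldSim-based tooling of the study) to estimate first-order FAST indices for a toy non-linear, non-monotonic response, with invented parameter names standing in for the biotic transport processes.

```python
import numpy as np
from SALib.sample import fast_sampler
from SALib.analyze import fast

problem = {
    "num_vars": 3,
    "names": ["burrow_rate", "plant_uptake", "decay_const"],  # hypothetical factors
    "bounds": [[0.0, 1.0]] * 3,
}

def dose_surrogate(x):
    # Toy non-linear, non-monotonic response standing in for the RWMS model output.
    return np.sin(3.0 * x[0]) + x[1] ** 2 * x[2]

X = fast_sampler.sample(problem, 1025)    # FAST sample size per parameter
Y = np.array([dose_surrogate(x) for x in X])
Si = fast.analyze(problem, Y)
print(dict(zip(problem["names"], np.round(Si["S1"], 3))))
```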
NASA Technical Reports Server (NTRS)
Price J. M.; Ortega, R.
1998-01-01
Probabilistic methods are not a universally accepted approach for the design and analysis of aerospace structures. The validity of this approach must be demonstrated to encourage its acceptance as a viable design and analysis tool to estimate structural reliability. The objective of this study is to develop a well characterized finite population of similar aerospace structures that can be used to (1) validate probabilistic codes, (2) demonstrate the basic principles behind probabilistic methods, (3) formulate general guidelines for characterization of material drivers (such as elastic modulus) when limited data are available, and (4) investigate how the drivers affect the results of sensitivity analysis at the component/failure mode level.
Analyzing reflective narratives to assess the ethical reasoning of pediatric residents.
Moon, Margaret; Taylor, Holly A; McDonald, Erin L; Hughes, Mark T; Beach, Mary Catherine; Carrese, Joseph A
2013-01-01
A limiting factor in ethics education in medical training has been the difficulty of assessing competence in ethics. This study was conducted to test the concept that content analysis of pediatric residents' personal reflections about ethics experiences can identify changes in ethical sensitivity and reasoning over time. Analysis of written narratives focused on two of our ethics curriculum's goals: 1) to raise sensitivity to ethical issues in everyday clinical practice and 2) to enhance critical reflection on personal and professional values as they affect patient care. Content analysis of written reflections was guided by a tool developed to identify and assess the level of ethical reasoning in eight domains determined to be important aspects of ethical competence. Based on the assessment of narratives written at two times (12 to 16 months apart) during their training, residents showed significant progress in two specific domains: use of professional values, and use of personal values. Residents did not show decline in ethical reasoning in any domain. This study demonstrates that content analysis of personal narratives may provide a useful method for assessment of developing ethical sensitivity and reasoning.
Liu, Jianhua; Jiang, Hongbo; Zhang, Hao; Guo, Chun; Wang, Lei; Yang, Jing; Nie, Shaofa
2017-06-27
In the summer of 2014, an influenza A(H3N2) outbreak occurred in Yichang city, Hubei province, China. A retrospective study was conducted to collect and interpret hospital and epidemiological data on it using social network analysis and global sensitivity and uncertainty analyses. Results for degree (χ2=17.6619, P<0.0001) and betweenness (χ2=21.4186, P<0.0001) centrality suggested that the selection of sampling objects differed between traditional epidemiological methods and newer statistical approaches. Clique and network diagrams demonstrated that the outbreak actually consisted of two independent transmission networks. Sensitivity analysis showed that the contact coefficient (k) was the most important factor in the dynamic model. Using uncertainty analysis, we were able to better understand the properties of the outbreak and its variations over space and time. We concluded that the use of these newer approaches was significantly more efficient for managing and controlling infectious disease outbreaks, as well as saving time and public health resources, and could be widely applied to similar local outbreaks.
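A minimal sketch of the degree and betweenness centrality computation that underlies such a contact-network analysis, using NetworkX on a made-up case-contact list (not the outbreak data of the study).

```python
import networkx as nx

# Hypothetical case-contact edges; case identifiers are invented.
contacts = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"),
            ("D", "E"), ("E", "F"), ("F", "G"), ("D", "G")]

G = nx.Graph(contacts)
degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)

# Rank cases by betweenness to flag likely bridges between transmission clusters.
for case, b in sorted(betweenness.items(), key=lambda kv: -kv[1]):
    print(f"{case}: degree={degree[case]:.2f}, betweenness={b:.2f}")
```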
Chang, Yuqing; Yang, Bo; Zhao, Xue; Linhardt, Robert J.
2012-01-01
A quantitative and highly sensitive method for the analysis of glycosaminoglycan (GAG)-derived disaccharides is presented that relies on capillary electrophoresis (CE) with laser-induced fluorescence (LIF) detection. This method enables complete separation of seventeen GAG-derived disaccharides in a single run. Unsaturated disaccharides were derivatized with 2-aminoacridone (AMAC) to improve sensitivity. The limit of detection was at the attomole level and about 100-fold more sensitive than traditional CE-ultraviolet detection. A CE separation timetable was developed to achieve complete resolution and shorten analysis time. The RSD of migration time and peak areas at both low and high concentrations of unsaturated disaccharides are all less than 2.7% and 3.2%, respectively, demonstrating that this is a reproducible method. This analysis was successfully applied to cultured Chinese hamster ovary cell samples for determination of GAG disaccharides. The current method simplifies GAG extraction steps, and reduces inaccuracy in calculating ratios of heparin/heparan sulfate to chondroitin sulfate/dermatan sulfate, resulting from the separate analyses of a single sample. PMID:22609076
Stacked graphene nanofibers for electrochemical oxidation of DNA bases.
Ambrosi, Adriano; Pumera, Martin
2010-08-21
In this article, we show that stacked graphene nanofibers (SGNFs) demonstrate superior electrochemical performance for oxidation of DNA bases over carbon nanotubes (CNTs). This is due to an exceptionally high number of accessible graphene sheet edges on the surface of the nanofibers when compared to carbon nanotubes, as shown by transmission electron microscopy and Raman spectroscopy. The oxidation signals of adenine, guanine, cytosine, and thymine exhibit two to four times higher currents than on CNT-based electrodes. SGNFs also exhibit higher sensitivity than do edge-plane pyrolytic graphite, glassy carbon, or graphite microparticle-based electrodes. We also demonstrate that influenza A(H1N1)-related strands can be sensitively oxidized on SGNF-based electrodes, which could therefore be applied to label-free DNA analysis.
Smith, Matthew R.; Artz, Nathan S.; Koch, Kevin M.; Samsonov, Alexey; Reeder, Scott B.
2014-01-01
Purpose: To demonstrate feasibility of exploiting the spatial distribution of off-resonance surrounding metallic implants for accelerating multispectral imaging techniques. Theory: Multispectral imaging (MSI) techniques perform time-consuming independent 3D acquisitions with varying RF frequency offsets to address the extreme off-resonance from metallic implants. Each off-resonance bin provides a unique spatial sensitivity that is analogous to the sensitivity of a receiver coil, and therefore provides a unique opportunity for acceleration. Methods: Fully sampled MSI was performed to demonstrate retrospective acceleration. A uniform sampling pattern across off-resonance bins was compared to several adaptive sampling strategies using a total hip replacement phantom. Monte Carlo simulations were performed to compare noise propagation of two of these strategies. With a total knee replacement phantom, positive and negative off-resonance bins were strategically sampled with respect to the B0 field to minimize aliasing. Reconstructions were performed with a parallel imaging framework to demonstrate retrospective acceleration. Results: An adaptive sampling scheme dramatically improved reconstruction quality, which was supported by the noise propagation analysis. Independent acceleration of negative and positive off-resonance bins demonstrated reduced overlapping of aliased signal to improve the reconstruction. Conclusion: This work presents the feasibility of acceleration in the presence of metal by exploiting the spatial sensitivities of off-resonance bins. PMID:24431210
Lewis, Grace E. M.; Gross, Andrew J.; Kasprzyk‐Hordern, Barbara; Lubben, Anneke T.
2015-01-01
An electrochemical flow cell with a boron‐doped diamond dual‐plate microtrench electrode has been developed and demonstrated for hydroquinone flow injection electroanalysis in phosphate buffer pH 7. Using the electrochemical generator‐collector feedback detector improves the sensitivity by one order of magnitude (when compared to a single working electrode detector). The diffusion process is switched from an analyte consuming “external” process to an analyte regenerating “internal” process with benefits in selectivity and sensitivity. PMID:25735831
Sensitivity analysis for dose deposition in radiotherapy via a Fokker–Planck model
Barnard, Richard C.; Frank, Martin; Krycki, Kai
2016-02-09
In this paper, we study the sensitivities of electron dose calculations with respect to stopping power and transport coefficients, focusing on the application to radiotherapy simulations. We use a Fokker–Planck approximation to the Boltzmann transport equation. Equations for the sensitivities are derived by the adjoint method. The Fokker–Planck equation and its adjoint are solved numerically in slab geometry using the spherical harmonics expansion (P_N) and a Harten-Lax-van Leer finite volume method. Our method is verified by comparison to finite difference approximations of the sensitivities. Finally, we present numerical results of the sensitivities for the normalized average dose deposition depth with respect to the stopping power and the transport coefficients, demonstrating the increase in relative sensitivities as beam energy decreases. This in turn gives estimates of the uncertainty in the normalized average deposition depth, which we present.
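A toy illustration of the verification step described above (checking a computed sensitivity against a finite-difference approximation), using a simple exponential attenuation model in place of the Fokker–Planck solver; all names and values are illustrative.

```python
import numpy as np

def dose(depth, stopping_power):
    # Toy depth-dose stand-in: exponential attenuation controlled by the stopping power.
    return np.exp(-stopping_power * depth)

def dose_sensitivity_analytic(depth, stopping_power):
    # Analytic derivative of the toy model with respect to the stopping power,
    # playing the role of the adjoint-computed sensitivity.
    return -depth * np.exp(-stopping_power * depth)

def dose_sensitivity_fd(depth, stopping_power, h=1e-6):
    # Central finite-difference approximation used for verification.
    return (dose(depth, stopping_power + h) - dose(depth, stopping_power - h)) / (2 * h)

depth, sp = 2.0, 0.8
print("analytic sensitivity:  ", dose_sensitivity_analytic(depth, sp))
print("finite-difference check:", dose_sensitivity_fd(depth, sp))
```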
Kalmar, Alain F; Absalom, Anthony; Rombouts, Pieter; Roets, Jelle; Dewaele, Frank; Verdonck, Pascal; Stemerdink, Arjanne; Zijlstra, Jan G; Monsieurs, Koenraad G
2016-08-01
Unrecognised endotracheal tube misplacement in emergency intubations has a reported incidence of up to 17%. Current detection methods have many limitations restricting their reliability and availability in these circumstances. There is therefore a clinical need for a device that is small enough to be practical in emergency situations and that can detect oesophageal intubation within seconds. In a first reported evaluation, we demonstrated an algorithm based on pressure waveform analysis, able to determine tube location with high reliability in healthy patients. The aim of this study was to validate the specificity of the algorithm in patients with abnormal pulmonary compliance, and to demonstrate the reliability of a newly developed small device that incorporates the technology. Intubated patients with mild to moderate lung injury, admitted to intensive care, were included in the study. The device was connected to the endotracheal tube, and three test ventilations were performed in each patient. All diagnostic data were recorded on a PC for subsequent specificity/sensitivity analysis. A total of 105 ventilations in 35 patients with lung injury were analysed. With a threshold D-value of 0.1, the system showed 100% sensitivity and specificity for diagnosing tube location. The algorithm retained its specificity in patients with decreased pulmonary compliance. We also demonstrated the feasibility of integrating sensors and diagnostic hardware in a small, portable hand-held device for convenient use in emergency situations. Copyright © 2016 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
Chen, Y C; Tsai, M F
2000-01-01
Previous work has demonstrated that a combination of solid-phase extraction with surface-assisted laser desorption/ionization (SPE-SALDI) mass spectrometry can be applied to the determination of trace nitrophenols in water. An improved method to lower the detection limit of this hyphenated technique is described in this present study. Activated carbon powder is used as both the SPE adsorbent and the SALDI solid in the analysis by SPE-SALDI. The surface of the activated carbon is modified by passing an aqueous solution of a cationic surfactant through the SPE cartridge. The results demonstrate that the sensitivity for nitrophenols in the analysis by SPE-SALDI can be improved by using cationic surfactants to modify the surface of the activated carbon. The detection limit for nitrophenols is about 25 ppt based on a signal-to-noise ratio of 3 by sampling from 100 mL of solution. Copyright 2000 John Wiley & Sons, Ltd.
Vives, Alejandra; González, Francisca; Moncada, Salvador; Llorens, Clara; Benach, Joan
2015-01-01
This study examines the psychometric properties of the revised Employment Precariousness Scale (EPRES-2010) in a context of economic crisis and growing unemployment. Data correspond to salaried workers with a contract (n=4,750) from the second Psychosocial Work Environment Survey (Spain, 2010). Analyses included acceptability, scale score distributions, Cronbach's alpha coefficient and exploratory factor analysis. Response rates were 80% or above, and scores were widely distributed, with reductions in floor effects for temporariness among permanent workers and for vulnerability. Cronbach's alpha coefficients were 0.70 or above; exploratory factor analysis confirmed the theoretical allocation of 21 out of 22 items. The revised version of the EPRES demonstrated good metric properties and improved sensitivity to worker vulnerability and employment instability among permanent workers. Furthermore, it was sensitive to increased levels of precariousness in some dimensions despite decreases in others, demonstrating responsiveness to the context of the economic crisis affecting the Spanish labour market. Copyright © 2015 SESPAS. Published by Elsevier Espana. All rights reserved.
French, Michael T; Salomé, Helena J; Sindelar, Jody L; McLellan, A Thomas
2002-04-01
To provide detailed methodological guidelines for using the Drug Abuse Treatment Cost Analysis Program (DATCAP) and Addiction Severity Index (ASI) in a benefit-cost analysis of addiction treatment. A representative benefit-cost analysis of three outpatient programs was conducted to demonstrate the feasibility and value of the methodological guidelines. Procedures are outlined for using resource use and cost data collected with the DATCAP. Techniques are described for converting outcome measures from the ASI to economic (dollar) benefits of treatment. Finally, principles are advanced for conducting a benefit-cost analysis and a sensitivity analysis of the estimates. The DATCAP was administered at three outpatient drug-free programs in Philadelphia, PA, for 2 consecutive fiscal years (1996 and 1997). The ASI was administered to a sample of 178 treatment clients at treatment entry and at 7-months postadmission. The DATCAP and ASI appear to have significant potential for contributing to an economic evaluation of addiction treatment. The benefit-cost analysis and subsequent sensitivity analysis all showed that total economic benefit was greater than total economic cost at the three outpatient programs, but this representative application is meant to stimulate future economic research rather than justifying treatment per se. This study used previously validated, research-proven instruments and methods to perform a practical benefit-cost analysis of real-world treatment programs. The study demonstrates one way to combine economic and clinical data and offers a methodological foundation for future economic evaluations of addiction treatment.
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin
2015-04-01
Earth and Environmental Systems (EES) models are essential components of research, development, and decision-making in science and engineering disciplines. With continuous advances in understanding and computing power, such models are becoming more complex with increasingly more factors to be specified (model parameters, forcings, boundary conditions, etc.). To facilitate better understanding of the role and importance of different factors in producing the model responses, the procedure known as 'Sensitivity Analysis' (SA) can be very helpful. Despite the availability of a large body of literature on the development and application of various SA approaches, two issues continue to pose major challenges: (1) Ambiguous Definition of Sensitivity - Different SA methods are based in different philosophies and theoretical definitions of sensitivity, and can result in different, even conflicting, assessments of the underlying sensitivities for a given problem, (2) Computational Cost - The cost of carrying out SA can be large, even excessive, for high-dimensional problems and/or computationally intensive models. In this presentation, we propose a new approach to sensitivity analysis that addresses the dual aspects of 'effectiveness' and 'efficiency'. By effective, we mean achieving an assessment that is both meaningful and clearly reflective of the objective of the analysis (the first challenge above), while by efficiency we mean achieving statistically robust results with minimal computational cost (the second challenge above). Based on this approach, we develop a 'global' sensitivity analysis framework that efficiently generates a newly-defined set of sensitivity indices that characterize a range of important properties of metric 'response surfaces' encountered when performing SA on EES models. Further, we show how this framework embraces, and is consistent with, a spectrum of different concepts regarding 'sensitivity', and that commonly-used SA approaches (e.g., Sobol, Morris, etc.) are actually limiting cases of our approach under specific conditions. Multiple case studies are used to demonstrate the value of the new framework. The results show that the new framework provides a fundamental understanding of the underlying sensitivities for any given problem, while requiring orders of magnitude fewer model runs.
A global sensitivity analysis approach for morphogenesis models.
Boas, Sonja E M; Navarro Jimenez, Maria I; Merks, Roeland M H; Blom, Joke G
2015-11-21
Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such 'black-box' models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, provided also new insights in the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all 'black-box' models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arampatzis, Georgios, E-mail: garab@math.uoc.gr; Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003; Katsoulakis, Markos A., E-mail: markos@math.umass.edu
2014-03-28
In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by developing a strongly correlated ("coupled") stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc.; hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz–Kalos–Lebowitz algorithm's philosophy, where events are divided into classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples, including adsorption, desorption, and diffusion Kinetic Monte Carlo, that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary MATLAB source code.
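To illustrate the variance-reduction idea that coupling methods build on, the sketch below compares a finite-difference sensitivity estimator driven by independent samples against one driven by common random numbers, for a toy lattice-coverage observable; this is the generic Common Random Number baseline mentioned in the abstract, not the goal-oriented coupling itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sites, p, dp, trials = 100, 0.3, 0.01, 2000

def coverage(p_val, uniforms):
    # Toy observable: fraction of occupied sites on a lattice of n_sites sites.
    return np.mean(uniforms < p_val)

# Independent-sample finite-difference estimator of d(coverage)/dp.
indep = [(coverage(p + dp, rng.random(n_sites)) - coverage(p, rng.random(n_sites))) / dp
         for _ in range(trials)]

# Common-random-number estimator: the same uniforms drive the perturbed and
# unperturbed systems, so most of the noise cancels in the difference.
crn = []
for _ in range(trials):
    u = rng.random(n_sites)
    crn.append((coverage(p + dp, u) - coverage(p, u)) / dp)

print("true derivative: 1.0")
print("independent samples: mean %.2f, std %.2f" % (np.mean(indep), np.std(indep)))
print("common random numbers: mean %.2f, std %.2f" % (np.mean(crn), np.std(crn)))
```

The CRN estimator shows a much smaller standard deviation for the same number of trials, which is the effect the paper's goal-oriented coupling pushes further.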
Mosely, Jackie A; Stokes, Peter; Parker, David; Dyer, Philip W; Messinis, Antonis M
2018-02-01
A novel method has been developed that enables chemical compounds to be transferred from an inert atmosphere glove box into the atmospheric pressure ion source of a mass spectrometer whilst retaining a controlled chemical environment. This innovative method is simple and cheap to implement on some commercially available mass spectrometers. We have termed this approach the inert atmospheric pressure solids analysis probe (iASAP) and demonstrate the benefit of this methodology for two air-/moisture-sensitive chemical compounds whose characterisation by mass spectrometry is now possible and easily achieved. The simplicity of the design means that moving between iASAP and standard ASAP is straightforward and quick, providing a highly flexible platform with rapid sample turnaround.
Drosophila Nociceptive Sensitization Requires BMP Signaling via the Canonical SMAD Pathway.
Follansbee, Taylor L; Gjelsvik, Kayla J; Brann, Courtney L; McParland, Aidan L; Longhurst, Colin A; Galko, Michael J; Ganter, Geoffrey K
2017-08-30
Nociceptive sensitization is a common feature in chronic pain, but its basic cellular mechanisms are only partially understood. The present study used the Drosophila melanogaster model system and a candidate gene approach to identify novel components required for modulation of an injury-induced nociceptive sensitization pathway presumably downstream of Hedgehog. This study demonstrates that RNAi silencing of a member of the Bone Morphogenetic Protein (BMP) signaling pathway, Decapentaplegic (Dpp), specifically in the Class IV multidendritic nociceptive neuron, significantly attenuated ultraviolet injury-induced sensitization. Furthermore, overexpression of Dpp in Class IV neurons was sufficient to induce thermal hypersensitivity in the absence of injury. The requirement of various BMP receptors and members of the SMAD signal transduction pathway in nociceptive sensitization was also demonstrated. The effects of BMP signaling were shown to be largely specific to the sensitization pathway and not associated with changes in nociception in the absence of injury or with changes in dendritic morphology. Thus, the results demonstrate that Dpp and its pathway play a crucial and novel role in nociceptive sensitization. Because the BMP family is so strongly conserved between vertebrates and invertebrates, it seems likely that the components analyzed in this study represent potential therapeutic targets for the treatment of chronic pain in humans. SIGNIFICANCE STATEMENT This report provides a genetic analysis of primary nociceptive neuron mechanisms that promote sensitization in response to injury. Drosophila melanogaster larvae whose primary nociceptive neurons were reduced in levels of specific components of the BMP signaling pathway, were injured and then tested for nocifensive responses to a normally subnoxious stimulus. Results suggest that nociceptive neurons use the BMP2/4 ligand, along with identified receptors and intracellular transducers to transition to a sensitized state. These findings are consistent with the observation that BMP receptor hyperactivation correlates with bone abnormalities and pain sensitization in fibrodysplasia ossificans progressiva (Kitterman et al., 2012). Because nociceptive sensitization is associated with chronic pain, these findings indicate that human BMP pathway components may represent targets for novel pain-relieving drugs. Copyright © 2017 the authors 0270-6474/17/378524-10$15.00/0.
Modeling and Analysis of a Combined Stress-Vibration Fiber Bragg Grating Sensor
Yao, Kun; Lin, Qijing; Jiang, Zhuangde; Zhao, Na; Tian, Bian; Shi, Peng; Peng, Gang-Ding
2018-01-01
A combined stress-vibration sensor was developed to measure stress and vibration simultaneously based on fiber Bragg grating (FBG) technology. The sensor is composed of two FBGs and a stainless steel plate with a special design. The two FBGs sense vibration and stress, and the sensor can realize temperature compensation by itself. The stainless steel plate can significantly increase the sensitivity of vibration measurement. Theoretical analysis and the Finite Element Method (FEM) were used to analyze the sensor's working mechanism. As demonstrated by the analysis, the obtained sensor has a working range of 0–6000 Hz for vibration sensing and 0–100 MPa for stress sensing, respectively. The corresponding sensitivity for vibration is 0.46 pm/g and the resulting stress sensitivity is 5.94 pm/MPa, while the nonlinearity error for vibration and stress measurement is 0.77% and 1.02%, respectively. Compared to general FBGs, the vibration sensitivity of this sensor is 26.2 times higher. Therefore, the developed sensor can be used to concurrently detect vibration and stress. As this sensor has a height of 1 mm and a weight of 1.15 g, it is suitable for miniaturization and integration. PMID:29494544
NASA Astrophysics Data System (ADS)
Podgornova, O.; Leaney, S.; Liang, L.
2018-07-01
Extracting medium properties from seismic data faces some limitations due to the finite frequency content of the data and the restricted spatial positions of the sources and receivers. Some distributions of the medium properties have little or no impact on the data. If these properties are used as the inversion parameters, then the inverse problem becomes overparametrized, leading to ambiguous results. We present an analysis of multiparameter resolution for the linearized inverse problem in the framework of elastic full-waveform inversion. We show that the spatial and multiparameter sensitivities are intertwined and that the non-sensitive properties are spatial distributions of some non-trivial combinations of the conventional elastic parameters. The analysis accounts for the Hessian information and frequency content of the data; it is semi-analytical (in some scenarios analytical), easy to interpret, and enhances the results of the widely used radiation pattern analysis. Single-type scattering is shown to have limited sensitivity, even for full-aperture data. Finite-frequency data lose multiparameter sensitivity at smooth and fine spatial scales. We also establish ways to quantify the spatial-multiparameter coupling and demonstrate that the theoretical predictions agree well with the numerical results.
Wilde, Elisabeth A.; Moretti, Paolo; MacLeod, Marianne C.; Pedroza, Claudia; Drever, Pamala; Fourwinds, Sierra; Frisby, Melisa L.; Beers, Sue R.; Scott, James N.; Hunter, Jill V.; Traipe, Elfrides; Valadka, Alex B.; Okonkwo, David O.; Zygun, David A.; Puccio, Ava M.; Clifton, Guy L.
2013-01-01
The Neurological Outcome Scale for Traumatic Brain Injury (NOS-TBI) is a measure assessing neurological functioning in patients with TBI. We hypothesized that the NOS-TBI would exhibit adequate concurrent and predictive validity and demonstrate more sensitivity to change, compared with other well-established outcome measures. We analyzed data from the National Acute Brain Injury Study: Hypothermia-II clinical trial. Participants were 16–45 years of age with severe TBI assessed at 1, 3, 6, and 12 months postinjury. For analysis of criterion-related validity (concurrent and predictive), Spearman's rank-order correlations were calculated between the NOS-TBI and the Glasgow Outcome Scale (GOS), GOS-Extended (GOS-E), Disability Rating Scale (DRS), and Neurobehavioral Rating Scale-Revised (NRS-R). Concurrent validity was demonstrated through significant correlations between the NOS-TBI and GOS, GOS-E, DRS, and NRS-R measured contemporaneously at 3, 6, and 12 months postinjury (all p<0.0013). For prediction analyses, the multiplicity-adjusted p value using the false discovery rate was <0.015. The 1-month NOS-TBI score was a significant predictor of outcome in the GOS, GOS-E, and DRS at 3 and 6 months postinjury (all p<0.015). The 3-month NOS-TBI significantly predicted GOS, GOS-E, DRS, and NRS-R outcomes at 6 and 12 months postinjury (all p<0.0015). Sensitivity to change was analyzed using Wilcoxon's signed rank-sum test of subsamples demonstrating no change in the GOS or GOS-E between 3 and 6 months. The NOS-TBI demonstrated higher sensitivity to change, compared with the GOS (p<0.038) and GOS-E (p<0.016). In summary, the NOS-TBI demonstrated adequate concurrent and predictive validity as well as sensitivity to change, compared with gold-standard outcome measures. The NOS-TBI may enhance prediction of outcome in clinical practice and measurement of outcome in TBI research. PMID:23617608
Space system operations and support cost analysis using Markov chains
NASA Technical Reports Server (NTRS)
Unal, Resit; Dean, Edwin B.; Moore, Arlene A.; Fairbairn, Robert E.
1990-01-01
This paper evaluates the use of the Markov chain process in probabilistic life cycle cost analysis and suggests further uses of the process as a design aid tool. A methodology is developed for estimating operations and support cost and expected life for reusable space transportation systems. Application of the methodology is demonstrated for the case of a hypothetical space transportation vehicle. A sensitivity analysis is carried out to explore the effects of uncertainty in key model inputs.
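A minimal sketch of how a Markov chain can propagate expected operations-and-support costs and expected life for a reusable vehicle; the states, transition probabilities, and per-state costs below are invented for illustration and are not taken from the paper.

```python
import numpy as np

# States: 0 = operational, 1 = minor repair, 2 = major overhaul, 3 = retired (absorbing).
P = np.array([[0.85, 0.10, 0.04, 0.01],
              [0.70, 0.20, 0.08, 0.02],
              [0.50, 0.20, 0.20, 0.10],
              [0.00, 0.00, 0.00, 1.00]])
cost_per_state = np.array([1.0, 5.0, 20.0, 0.0])   # cost units incurred per mission cycle

# Expected cumulative cost before absorption, from the transient states,
# via the fundamental-matrix relation c = (I - Q)^-1 q.
Q = P[:3, :3]
q = cost_per_state[:3]
expected_cost = np.linalg.solve(np.eye(3) - Q, q)
print("expected life-cycle O&S cost starting 'operational':", round(expected_cost[0], 2))

# Expected number of mission cycles before retirement (expected life).
expected_life = np.linalg.solve(np.eye(3) - Q, np.ones(3))
print("expected missions before retirement:", round(expected_life[0], 2))
```

A sensitivity analysis of the kind described can then be carried out by perturbing individual transition probabilities or state costs and recomputing these expectations.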
Novel Image Encryption Scheme Based on Chebyshev Polynomial and Duffing Map
2014-01-01
We present a novel image encryption algorithm using a Chebyshev polynomial based on permutation and substitution and a Duffing map based on substitution. Comprehensive security analysis has been performed on the designed scheme using key space analysis, visual testing, histogram analysis, information entropy calculation, correlation coefficient analysis, differential analysis, key sensitivity testing, and speed testing. The study demonstrates that the proposed image encryption algorithm offers a key space of more than 10^113 and a desirable level of security based on the good statistical results and theoretical arguments. PMID:25143970
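Two of the security metrics named above, information entropy and adjacent-pixel correlation, can be computed directly from a cipher image; the sketch below applies them to a random 8-bit array standing in for an encrypted image (not output of the proposed scheme).

```python
import numpy as np

rng = np.random.default_rng(1)
cipher = rng.integers(0, 256, size=(256, 256))     # stand-in for an encrypted image

# Shannon entropy of the pixel histogram (ideal for an 8-bit cipher image: close to 8 bits).
counts = np.bincount(cipher.ravel(), minlength=256)
p = counts / counts.sum()
entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))

# Correlation coefficient between horizontally adjacent pixels (ideal: close to 0).
x = cipher[:, :-1].ravel().astype(float)
y = cipher[:, 1:].ravel().astype(float)
corr = np.corrcoef(x, y)[0, 1]

print(f"entropy = {entropy:.4f} bits, adjacent-pixel correlation = {corr:.4f}")
```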
van der Walt, Anita; Lopata, Andreas L; Nieuwenhuizen, Natalie E; Jeebhay, Mohamed F
2010-01-01
Three spice mill workers developed work-related allergy and asthma after prolonged exposure to high levels (>10 mg/m³) of inhalable spice dust. Patterns of sensitization to a variety of spices and putative allergens were identified. Work-related allergy and asthma were assessed on history, clinical evaluation, pulmonary function and fractional exhaled nitric oxide. Specific IgE reactivity to a range of common inhalant, food and spice allergens was evaluated using ImmunoCAP and allergen microarray. The presence of non-IgE-mediated reactions was determined by basophil stimulation (CAST-ELISA). Specific allergens were identified by immunoblotting to extracts of raw and dried processed garlic, onion and chili pepper. Asthma was confirmed in all 3 subjects, with work-related patterns prominent in workers 1 and 3. Sensitization to multiple spices and pollen was observed in both atopic workers 1 and 2, whereas garlic and chili pepper sensitization featured in all 3 workers. Microarray analysis demonstrated prominent profilin reactivity in atopic worker 2. Immunoblotting demonstrated a 50-kDa cross-reactive allergen in garlic and onion, and allergens of approximately 40 and 52 kDa in chili pepper. Dry powdered garlic and onion demonstrated greater IgE binding. This study demonstrated IgE reactivity to multiple spice allergens in workers exposed to high levels of inhalable spice dust. Processed garlic and onion powder demonstrated stronger IgE reactivity than the raw plant. Atopy and polysensitization to various plant profilins, suggesting pollen-food syndrome, represent additional risk factors for sensitizer-induced work-related asthma in spice mill workers. 2010 S. Karger AG, Basel.
High-Speed Real-Time Resting-State fMRI Using Multi-Slab Echo-Volumar Imaging
Posse, Stefan; Ackley, Elena; Mutihac, Radu; Zhang, Tongsheng; Hummatov, Ruslan; Akhtari, Massoud; Chohan, Muhammad; Fisch, Bruce; Yonas, Howard
2013-01-01
We recently demonstrated that ultra-high-speed real-time fMRI using multi-slab echo-volumar imaging (MEVI) significantly increases sensitivity for mapping task-related activation and resting-state networks (RSNs) compared to echo-planar imaging (Posse et al., 2012). In the present study we characterize the sensitivity of MEVI for mapping RSN connectivity dynamics, comparing independent component analysis (ICA) and a novel seed-based connectivity analysis (SBCA) that combines sliding-window correlation analysis with meta-statistics. This SBCA approach is shown to minimize the effects of confounds, such as movement, and CSF and white matter signal changes, and enables real-time monitoring of RSN dynamics at time scales of tens of seconds. We demonstrate highly sensitive mapping of eloquent cortex in the vicinity of brain tumors and arterio-venous malformations, and detection of abnormal resting-state connectivity in epilepsy. In patients with motor impairment, resting-state fMRI provided focal localization of sensorimotor cortex compared with more diffuse activation in task-based fMRI. The fast acquisition speed of MEVI enabled segregation of cardiac-related signal pulsation using ICA, which revealed distinct regional differences in pulsation amplitude and waveform, elevated signal pulsation in patients with arterio-venous malformations, and a trend toward reduced pulsatility in the gray matter of patients compared with healthy controls. Mapping cardiac pulsation in cortical gray matter may carry important functional information that distinguishes healthy from diseased tissue vasculature. This novel fMRI methodology is particularly promising for mapping eloquent cortex in patients with neurological disease, having variable degrees of cooperation in task-based fMRI. In conclusion, ultra-high-speed real-time fMRI enhances the sensitivity of mapping the dynamics of resting-state connectivity and cerebro-vascular pulsatility for clinical and neuroscience research applications. PMID:23986677
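A minimal sketch of the sliding-window correlation component of such a seed-based connectivity analysis, applied to two synthetic time series; the window length and signal model are illustrative, not the acquisition or analysis parameters of the study.

```python
import numpy as np

rng = np.random.default_rng(2)
n_vols, window = 600, 30            # number of volumes and window length (in samples)
shared = rng.normal(size=n_vols)    # common fluctuation shared by seed and target regions
seed = shared + 0.5 * rng.normal(size=n_vols)
target = shared + 0.5 * rng.normal(size=n_vols)

# Sliding-window Pearson correlation between the seed and target time courses.
dyn_corr = np.array([
    np.corrcoef(seed[t:t + window], target[t:t + window])[0, 1]
    for t in range(n_vols - window + 1)
])
print("connectivity range over time: %.2f to %.2f" % (dyn_corr.min(), dyn_corr.max()))
```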
Quantitative molecular analysis in mantle cell lymphoma.
Brízová, H; Hilská, I; Mrhalová, M; Kodet, R
2011-07-01
A molecular analysis has three major roles in modern oncopathology: as an aid in the differential diagnosis, in molecular monitoring of disease, and in estimation of the potential prognosis. In this report we review the application of molecular analysis in a group of patients with mantle cell lymphoma (MCL). We demonstrate that detection of the cyclin D1 mRNA level is a molecular marker in 98% of patients with MCL. Quantitative monitoring of cyclin D1 is specific and sensitive for the differential diagnosis and for molecular monitoring of the disease in the bone marrow. Moreover, the dynamics of cyclin D1 in bone marrow reflects the disease development and predicts the clinical course. We employed molecular analysis for a precise quantitative detection of the proliferation markers Ki-67, topoisomerase IIalpha, and TPX2, which are described as effective prognostic factors. Using the molecular approach it is possible to measure the proliferation rate in a reproducible, standard way, which is an essential prerequisite for using the proliferation activity as a routine clinical tool. Compared with immunophenotyping, we may conclude that the quantitative PCR-based analysis is a useful, reliable, rapid, reproducible, sensitive and specific method broadening our diagnostic tools in hematopathology. In comparison to interphase FISH in paraffin sections, quantitative PCR is less technically demanding and less time-consuming, and furthermore it is more sensitive in detecting small changes in the mRNA level. Moreover, quantitative PCR is the only technology which provides precise and reproducible quantitative information about the expression level. Therefore it may be used to demonstrate the decrease or increase of a tumor-specific marker in bone marrow in comparison with a previously aspirated specimen. Thus, it has a powerful potential to monitor the course of the disease in correlation with clinical data.
King, Carly J.; Woodward, Josha; Schwartzman, Jacob; Coleman, Daniel J.; Lisac, Robert; Wang, Nicholas J.; Van Hook, Kathryn; Gao, Lina; Urrutia, Joshua; Dane, Mark A.; Heiser, Laura M.; Alumkal, Joshi J.
2017-01-01
Recent work demonstrates that castration-resistant prostate cancer (CRPC) tumors harbor countless genomic aberrations that control many hallmarks of cancer. While some specific mutations in CRPC may be actionable, many others are not. We hypothesized that genomic aberrations in cancer may operate in concert to promote drug resistance and tumor progression, and that organization of these genomic aberrations into therapeutically targetable pathways may improve our ability to treat CRPC. To identify the molecular underpinnings of enzalutamide-resistant CRPC, we performed transcriptional and copy number profiling studies using paired enzalutamide-sensitive and resistant LNCaP prostate cancer cell lines. Gene networks associated with enzalutamide resistance were revealed by performing an integrative genomic analysis with the PAthway Representation and Analysis by Direct Reference on Graphical Models (PARADIGM) tool. Amongst the pathways enriched in the enzalutamide-resistant cells were those associated with MEK, EGFR, RAS, and NFKB. Functional validation studies of 64 genes identified 10 candidate genes whose suppression led to greater effects on cell viability in enzalutamide-resistant cells as compared to sensitive parental cells. Examination of a patient cohort demonstrated that several of our functionally-validated gene hits are deregulated in metastatic CRPC tumor samples, suggesting that they may be clinically relevant therapeutic targets for patients with enzalutamide-resistant CRPC. Altogether, our approach demonstrates the potential of integrative genomic analyses to clarify determinants of drug resistance and rational co-targeting strategies to overcome resistance. PMID:29340039
In this study, we introduced several modifications to the WAR (waste reduction) algorithm developed earlier. These modifications were made for systematically handling sensitivity analysis and various tasks of waste minimization. A design hierarchy was formulated to promote appro...
Granqvist, Niko; Hanning, Anders; Eng, Lars; Tuppurainen, Jussi; Viitala, Tapani
2013-01-01
Surface plasmon resonance (SPR) is a well-established optical biosensor technology with many proven applications in the study of molecular interactions as well as in surface and material science. SPR is usually applied in the label-free mode which may be advantageous in cases where the presence of a label may potentially interfere with the studied interactions per se. However, the fundamental challenges of label-free SPR in terms of limited sensitivity and specificity are well known. Here we present a new concept called label-enhanced SPR, which is based on utilizing strongly absorbing dye molecules in combination with the evaluation of the full shape of the SPR curve, whereby the sensitivity as well as the specificity of SPR is significantly improved. The performance of the new label-enhanced SPR method was demonstrated by two simple model assays: a small molecule assay and a DNA hybridization assay. The small molecule assay was used to demonstrate the sensitivity enhancement of the method, and how competitive assays can be used for relative affinity determination. The DNA assay was used to demonstrate the selectivity of the assay, and the capabilities in eliminating noise from bulk liquid composition variations. PMID:24217357
Pérez, Teresa; Makrestsov, Nikita; Garatt, John; Torlakovic, Emina; Gilks, C Blake; Mallett, Susan
The Canadian Immunohistochemistry Quality Control program monitors clinical laboratory performance for estrogen receptor and progesterone receptor tests used in breast cancer treatment management in Canada. Current methods assess sensitivity and specificity at each time point, compared with a reference standard. We investigate alternative performance analysis methods to enhance the quality assessment. We used 3 methods of analysis: meta-analysis of sensitivity and specificity of each laboratory across all time points; sensitivity and specificity at each time point for each laboratory; and fitting models for repeated measurements to examine differences between laboratories adjusted by test and time point. Results show 88 laboratories participated in quality control at up to 13 time points using typically 37 to 54 histology samples. In meta-analysis across all time points no laboratories have sensitivity or specificity below 80%. Current methods, presenting sensitivity and specificity separately for each run, result in wide 95% confidence intervals, typically spanning 15% to 30%. Models of a single diagnostic outcome demonstrated that 82% to 100% of laboratories had no difference to reference standard for estrogen receptor and 75% to 100% for progesterone receptor, with the exception of 1 progesterone receptor run. Laboratories with significant differences to reference standard identified with Generalized Estimating Equation modeling also have reduced performance by meta-analysis across all time points. The Canadian Immunohistochemistry Quality Control program has a good design, and with this modeling approach has sufficient precision to measure performance at each time point and allow laboratories with a significantly lower performance to be targeted for advice.
Identification of Proteus mirabilis Mutants with Increased Sensitivity to Antimicrobial Peptides
McCoy, Andrea J.; Liu, Hongjian; Falla, Timothy J.; Gunn, John S.
2001-01-01
Antimicrobial peptides (APs) are important components of the innate defenses of animals, plants, and microorganisms. However, some bacterial pathogens are resistant to the action of APs. For example, Proteus mirabilis is highly resistant to the action of APs, such as polymyxin B (PM), protegrin, and the synthetic protegrin analog IB-367. To better understand this resistance, a transposon mutagenesis approach was used to generate P. mirabilis mutants sensitive to APs. Four unique PM-sensitive mutants of P. mirabilis were identified (these mutants were >2 to >128 times more sensitive than the wild type). Two of these mutants were also sensitive to IB-367 (16 and 128 times more sensitive than the wild type). Lipopolysaccharide (LPS) profiles of the PM- and protegrin-sensitive mutants demonstrated marked differences in both the lipid A and O-antigen regions, while the PM-sensitive mutants appeared to have alterations of either lipid A or O antigen. Matrix-assisted laser desorption ionization–time of flight mass spectrometry analysis of the wild-type and PM-sensitive mutant lipid A showed species with one or two aminoarabinose groups, while lipid A from the PM- and protegrin-sensitive mutants was devoid of aminoarabinose. When the mutants were streaked on an agar-containing medium, the swarming motility of the PM- and protegrin-sensitive mutants was completely inhibited and the swarming motility of the mutants sensitive to only PM was markedly decreased. DNA sequence analysis of the mutagenized loci revealed similarities to an O-acetyltransferase (PM and protegrin sensitive) and ATP synthase and sap loci (PM sensitive). These data further support the role of LPS modifications as an elaborate mechanism in the resistance of certain bacterial species to APs and suggest that LPS surface charge alterations may play a role in P. mirabilis swarming motility. PMID:11408219
Flow immune photoacoustic sensor for real-time and fast sampling of trace gases
NASA Astrophysics Data System (ADS)
Petersen, Jan C.; Balslev-Harder, David; Pelevic, Nikola; Brusch, Anders; Persijn, Stefan; Lassen, Mikael
2018-02-01
A photoacoustic (PA) sensor for fast and real-time gas sensing is demonstrated. The PA cell has been designed for flow noise immunity using computational fluid dynamics (CFD) analysis. PA measurements were conducted at different flow rates by exciting molecular C-H stretch vibrational bands of hexane (C₆H₁₄) in clean air at 2950 cm⁻¹ (3.38 μm) with a custom-made mid-infrared interband cascade laser (ICL). The PA sensor will contribute to solving a major problem in a number of industries that use compressed air, by enabling detection of oil contaminants in high-purity compressed air. We observe a (1σ, standard deviation) sensitivity of 0.4 ± 0.1 ppb (nmol/mol) for hexane in clean air at flow rates up to 2 L/min, corresponding to a normalized noise equivalent absorption (NNEA) coefficient of 2.5×10⁻⁹ W cm⁻¹ Hz⁻¹/², thus demonstrating high sensitivity and fast, real-time gas analysis. The PA sensor is not limited to molecules with C-H stretching modes, but can be tailored to measure any trace gas by simply changing the excitation wavelength (i.e. the laser source), making it useful for many different applications where fast and sensitive trace gas measurements are needed.
NASA Astrophysics Data System (ADS)
Khoei, A. R.; Samimi, M.; Azami, A. R.
2007-02-01
In this paper, an application of the reproducing kernel particle method (RKPM) to the plasticity behavior of pressure-sensitive materials is presented. The RKPM technique is implemented in the large deformation analysis of the powder compaction process. The RKPM shape function and its derivatives are constructed by imposing the consistency conditions. The essential boundary conditions are enforced with a penalty approach. The support of the RKPM shape function covers the same set of particles during powder compaction, hence no instability is encountered in the large deformation computation. A double-surface plasticity model is developed for the numerical simulation of pressure-sensitive materials. The plasticity model includes a failure surface and an elliptical cap, which closes the open space between the failure surface and the hydrostatic axis. The moving cap expands in stress space according to a specified hardening rule. The cap model is presented within the framework of large deformation RKPM analysis in order to predict the non-uniform relative density distribution during powder die pressing. Numerical computations are performed to demonstrate the applicability of the algorithm in modeling powder forming processes, and the results are compared with those obtained from finite element simulation to demonstrate the accuracy of the proposed model.
Some Advanced Concepts in Discrete Aerodynamic Sensitivity Analysis
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Green, Lawrence L.; Newman, Perry A.; Putko, Michele M.
2003-01-01
An efficient incremental iterative approach for differentiating advanced flow codes is successfully demonstrated on a two-dimensional inviscid model problem. The method employs the reverse-mode capability of the automatic differentiation software tool ADIFOR 3.0 and is proven to yield accurate first-order aerodynamic sensitivity derivatives. A substantial reduction in CPU time and computer memory is demonstrated in comparison with results from a straightforward, black-box reverse-mode application of ADIFOR 3.0 to the same flow code. An ADIFOR-assisted procedure for accurate second-order aerodynamic sensitivity derivatives is successfully verified on an inviscid transonic lifting airfoil example problem. The method requires that first-order derivatives are calculated first using both the forward (direct) and reverse (adjoint) procedures; then, a very efficient noniterative calculation of all second-order derivatives can be accomplished. Accurate second derivatives (i.e., the complete Hessian matrices) of lift, wave drag, and pitching-moment coefficients are calculated with respect to geometric shape, angle of attack, and freestream Mach number.
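As a hedged illustration of the second-derivative strategy described above (first-order derivatives from both forward and reverse mode, then an inexpensive assembly of the full Hessian), the sketch below applies forward-over-reverse automatic differentiation to a toy lift-coefficient function in JAX. The function, its inputs (shape parameter, angle of attack, Mach number) and its coefficients are invented stand-ins, not the ADIFOR 3.0 flow-code setup of the paper.

```python
# Minimal sketch (not the ADIFOR/flow-code setup): forward-over-reverse
# differentiation of a toy "aerodynamic coefficient" model to obtain the
# full Hessian, assuming a hypothetical function of shape/angle/Mach inputs.
import jax
import jax.numpy as jnp

def lift_coefficient(x):
    # x = [shape parameter, angle of attack (rad), Mach number]; purely illustrative
    shape, alpha, mach = x
    return (2.0 * jnp.pi * alpha * (1.0 + 0.7 * shape)) / jnp.sqrt(jnp.abs(1.0 - mach**2) + 1e-6)

x0 = jnp.array([0.1, 0.05, 0.6])

grad_cl = jax.grad(lift_coefficient)(x0)              # first-order (reverse mode)
hess_cl = jax.jacfwd(jax.grad(lift_coefficient))(x0)  # second-order: forward over reverse

print("gradient:", grad_cl)
print("Hessian:\n", hess_cl)
```

Composing a forward-mode Jacobian over a reverse-mode gradient mirrors the idea that all second derivatives become cheap once both first-order modes are in hand.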
Some Advanced Concepts in Discrete Aerodynamic Sensitivity Analysis
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Green, Lawrence L.; Newman, Perry A.; Putko, Michele M.
2001-01-01
An efficient incremental-iterative approach for differentiating advanced flow codes is successfully demonstrated on a 2D inviscid model problem. The method employs the reverse-mode capability of the automatic-differentiation software tool ADIFOR 3.0, and is proven to yield accurate first-order aerodynamic sensitivity derivatives. A substantial reduction in CPU time and computer memory is demonstrated in comparison with results from a straightforward, black-box reverse-mode application of ADIFOR 3.0 to the same flow code. An ADIFOR-assisted procedure for accurate second-order aerodynamic sensitivity derivatives is successfully verified on an inviscid transonic lifting airfoil example problem. The method requires that first-order derivatives are calculated first using both the forward (direct) and reverse (adjoint) procedures; then, a very efficient non-iterative calculation of all second-order derivatives can be accomplished. Accurate second derivatives (i.e., the complete Hessian matrices) of lift, wave-drag, and pitching-moment coefficients are calculated with respect to geometric shape, angle of attack, and freestream Mach number.
Wilson, Rachel L; Simion, Cristian Eugen; Blackman, Christopher S; Carmalt, Claire J; Stanoiu, Adelina; Di Maggio, Francesco; Covington, James A
2018-03-01
Analyte sensitivity for gas sensors based on semiconducting metal oxides should be highly dependent on the film thickness, particularly when that thickness is on the order of the Debye length. This thickness dependence has previously been demonstrated for SnO₂ and inferred for TiO₂. In this paper, TiO₂ thin films have been prepared by Atomic Layer Deposition (ALD) using titanium isopropoxide and water as precursors. The deposition process was performed on standard alumina gas sensor platforms and microscope slides (for analysis purposes), at a temperature of 200 °C. The TiO₂ films were exposed to different concentrations of CO, CH₄, NO₂, NH₃ and SO₂ to evaluate their gas sensitivities. These experiments showed that the TiO₂ film thickness played a dominant role within the conduction mechanism and the pattern of response for the electrical resistance towards CH₄ and NH₃ exposure indicated typical n-type semiconducting behavior. The effect of relative humidity on the gas sensitivity has also been demonstrated.
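To put the Debye-length argument in concrete terms, the following back-of-the-envelope sketch evaluates L_D = sqrt(ε_r ε_0 k_B T / (q² n)); the permittivity, operating temperature and carrier density are assumed illustrative values, not measurements from the paper.

```python
# Back-of-the-envelope sketch: Debye length of a semiconducting metal oxide,
# L_D = sqrt(eps_r * eps0 * kB * T / (q**2 * n)).  The permittivity and carrier
# density below are assumed illustrative values, not data from the paper.
import math

eps0 = 8.854e-12      # F/m
kB   = 1.381e-23      # J/K
q    = 1.602e-19      # C

eps_r = 80.0          # assumed relative permittivity for a TiO2-like film (illustrative)
T     = 573.0         # K, a typical metal-oxide sensor operating temperature (assumed)
n     = 1e24          # 1/m^3, assumed free-carrier density

debye_length = math.sqrt(eps_r * eps0 * kB * T / (q**2 * n))
print(f"Debye length ~ {debye_length*1e9:.1f} nm")  # films of this order of thickness are most sensitive
```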
Zhou, Xixi; Cooper, Karen L.; Sun, Xi; Liu, Ke J.; Hudson, Laurie G.
2015-01-01
Cysteine oxidation induced by reactive oxygen species (ROS) on redox-sensitive targets such as zinc finger proteins plays a critical role in redox signaling and subsequent biological outcomes. We found that arsenic exposure led to oxidation of certain zinc finger proteins based on arsenic interaction with zinc finger motifs. Analysis of zinc finger proteins isolated from arsenic-exposed cells and zinc finger peptides by mass spectrometry demonstrated preferential oxidation of C3H1 and C4 zinc finger configurations. C2H2 zinc finger proteins that do not bind arsenic were not oxidized by arsenic-generated ROS in the cellular environment. The findings suggest that selectivity in arsenic binding to zinc fingers with three or more cysteines defines the target proteins for oxidation by ROS. This represents a novel mechanism of selective protein oxidation and demonstrates how an environmental factor may sensitize certain target proteins for oxidation, thus altering the oxidation profile and redox regulation. PMID:26063799
Multidisciplinary optimization of controlled space structures with global sensitivity equations
NASA Technical Reports Server (NTRS)
Padula, Sharon L.; James, Benjamin B.; Graves, Philip C.; Woodard, Stanley E.
1991-01-01
A new method for the preliminary design of controlled space structures is presented. The method coordinates standard finite element structural analysis, multivariable controls, and nonlinear programming codes and allows simultaneous optimization of the structures and control systems of a spacecraft. Global sensitivity equations are a key feature of this method. The preliminary design of a generic geostationary platform is used to demonstrate the multidisciplinary optimization method. Fifteen design variables are used to optimize truss member sizes and feedback gain values. The goal is to reduce the total mass of the structure and the vibration control system while satisfying constraints on vibration decay rate. Incorporating the nonnegligible mass of actuators causes an essential coupling between structural design variables and control design variables. The solution of the demonstration problem is an important step toward a comprehensive preliminary design capability for structures and control systems. Use of global sensitivity equations helps solve optimization problems that have a large number of design variables and a high degree of coupling between disciplines.
High-Sensitivity Fast Neutron Detector KNK-2-8M
NASA Astrophysics Data System (ADS)
Koshelev, A. S.; Dovbysh, L. Ye.; Ovchinnikov, M. A.; Pikulina, G. N.; Drozdov, Yu. M.; Chuklyaev, S. V.; Pepyolyshev, Yu. N.
2017-12-01
The design of the fast neutron detector KNK-2-8M is outlined. The results of the detector study in the pulse counting mode with pulses from ²³⁸U nuclei fission in the radiator of the neutron-sensitive section and in the current mode with separation of functional section currents are presented. The possibilities of determination of the effective number of ²³⁸U nuclei in the radiator of the neutron-sensitive section are considered. The diagnostic capabilities of the detector in the counting mode are demonstrated, as exemplified by the analysis of reference data on characteristics of neutron fields in the BR-1 reactor hall. The diagnostic capabilities of the detector in the current mode are demonstrated, as exemplified by the results of measurements of ²³⁸U fission intensity in the power startup of the BR-K1 reactor in the fission pulse generation mode with delayed neutrons and the detector placed in the reactor cavity in conditions of large-scale variation of the reactor radiation fields.
WANG, YUNYUN; LIU, YONG; LI, GUO; SU, ZHONGWU; REN, SHULING; TAN, PINGQING; ZHANG, XIN; QIU, YUANZHENG; TIAN, YONGQUAN
2015-01-01
Ephrin type-A receptor 2 (EphA2) is a receptor tyrosine kinase that is associated with cancer cell metastasis. There has been little investigation into its impact on the regulation of sensitivity to paclitaxel in nasopharyngeal carcinoma (NPC). In the present study, upregulation of EphA2 expression enhanced the survival of NPC 5-8F cells, compared with control cells exposed to the same concentrations of paclitaxel. Flow cytometry and western blot analysis demonstrated that over-expression of EphA2 decreased NPC cancer cell sensitivity to paclitaxel by regulating paclitaxel-mediated cell cycle progression but not apoptosis in vitro. This was accompanied by alterations in the expression of cyclin-dependent kinase inhibitors, p21 and p27, and of inactive phosphorylated-retinoblastoma protein. Furthermore, paclitaxel stimulation and EphA2 over-expression resulted in activation of the phosphoinositide 3-kinase (PI3K)/Akt signalling pathway in NPC cells. Inhibition of the PI3K/Akt signalling pathway restored sensitivity to paclitaxel in 5-8F cells over-expressing EphA2, which indicated that the PI3K/Akt pathway is involved in EphA2-mediated paclitaxel sensitivity. The current study demonstrated that EphA2 mediates sensitivity to paclitaxel via the regulation of the PI3K/Akt signalling pathway in NPC. PMID:25351620
NASA Astrophysics Data System (ADS)
Gao, Feng; Yang, Chuan-Lu; Wang, Mei-Shan; Ma, Xiao-Guang; Liu, Wen-Wang
2018-04-01
The feasibility of nanocomposites of cir-coronene graphene quantum dot (GQD) with phthalocyanine, tetrabenzoporphyrin, tetrabenzotriazaporphyrins, cis-tetrabenzodiazaporphyrins, tetrabenzomonoazaporphyrins and their Cu-metallated macrocycles as sensitizers of dye-sensitized solar cells (DSSC) is investigated. Based on first-principles density functional theory (DFT), the geometrical structures of the separate GQD and the 10 macrocycles, and of their hybridized nanocomposites, are fully optimized. The energy stabilities of the obtained structures are confirmed by harmonic frequency analysis. The optical absorptions of the optimized structures are calculated with time-dependent DFT. The feasibility of the nanocomposites as DSSC sensitizers is examined through the charge spatial separation, the electron transfer, the molecular orbital energy levels of the nanocomposites and the electrolyte, and the conduction band minimum of the TiO2 electrode. The results demonstrate that all the nanocomposites have enhanced absorption in the visible light range, and their molecular orbital energies satisfy the requirements of sensitizers. However, only two of the ten considered nanocomposites demonstrate significant charge spatial separation. GQD-Cu-TBP is identified as the most favorable candidate sensitizer for DSSC owing to its most enhanced optical absorption, obvious charge spatial separation, suitable LUMO energy levels and driving force for electron transfer, and low electron-hole recombination rate.
Hedgehog signaling regulates nociceptive sensitization.
Babcock, Daniel T; Shi, Shanping; Jo, Juyeon; Shaw, Michael; Gutstein, Howard B; Galko, Michael J
2011-09-27
Nociceptive sensitization is a tissue damage response whereby sensory neurons near damaged tissue enhance their responsiveness to external stimuli. This sensitization manifests as allodynia (aversive withdrawal to previously nonnoxious stimuli) and/or hyperalgesia (exaggerated responsiveness to noxious stimuli). Although some factors mediating nociceptive sensitization are known, inadequacies of current analgesic drugs have prompted a search for additional targets. Here we use a Drosophila model of thermal nociceptive sensitization to show that Hedgehog (Hh) signaling is required for both thermal allodynia and hyperalgesia following ultraviolet irradiation (UV)-induced tissue damage. Sensitization does not appear to result from developmental changes in the differentiation or arborization of nociceptive sensory neurons. Genetic analysis shows that Hh signaling acts in parallel to tumor necrosis factor (TNF) signaling to mediate allodynia and that distinct transient receptor potential (TRP) channels mediate allodynia and hyperalgesia downstream of these pathways. We also demonstrate a role for Hh in analgesic signaling in mammals. Intrathecal or peripheral administration of cyclopamine (CP), a specific inhibitor of Sonic Hedgehog signaling, blocked the development of analgesic tolerance to morphine (MS) or morphine antinociception in standard assays of inflammatory pain in rats and synergistically augmented and sustained morphine analgesia in assays of neuropathic pain. We demonstrate a novel physiological role for Hh signaling, which has not previously been implicated in nociception. Our results also identify new potential therapeutic targets for pain treatment. Copyright © 2011 Elsevier Ltd. All rights reserved.
Liu, Ting; He, Xiang-ge
2006-05-01
To evaluate the overall diagnostic capabilities of frequency-doubling technology (FDT) in patients with primary glaucoma, with standard automated perimetry (SAP) and/or optic disc appearance as the gold standard. A comprehensive electronic search in MEDLINE, EMBASE, Cochrane Library, BIOSIS Previews, HMIC, IPA, OVID, CNKI, CBMdisc, VIP information, CMCC, CCPD, SSreader and 21dmedia and a manual search in related textbooks, journals, congress articles and their references were performed to identify relevant English and Chinese language articles. Eligibility criteria were established according to validity criteria for diagnostic research published by the Cochrane Methods Group on Screening and Diagnostic Tests. Quality of the included articles was assessed and relevant data were extracted. Statistical analysis was performed with Meta Test version 0.6 software. Heterogeneity of the included articles was tested and used to select the appropriate effects model for calculating pooled weighted sensitivity and specificity. A summary receiver operating characteristic (SROC) curve was established and the area under the curve (AUC) was calculated. Finally, sensitivity analysis was performed. Fifteen English articles (21 studies) of 206 retrieved articles were included in the present study, with a total of 3172 patients. The reported sensitivity of FDT ranged from 0.51 to 1.00, and specificity from 0.58 to 1.00. The pooled weighted sensitivity and specificity for FDT with 95% confidence intervals (95% CI) after correction for standard error were 0.86 (0.80 - 0.90) and 0.87 (0.81 - 0.91), respectively. The AUC of the SROC was 93.01%. Sensitivity analysis demonstrated no disproportionate influence of any individual study. Based on this meta-analysis, the included articles are of good quality and FDT can be a highly efficient diagnostic test for primary glaucoma. However, a high-quality prospective study is still required for further analysis.
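A minimal sketch of the pooling step is given below, assuming fixed-effect inverse-variance weighting of logit-transformed sensitivities; the per-study counts are invented and the code is not an attempt to reproduce the Meta Test 0.6 calculations.

```python
# Minimal sketch of inverse-variance pooling of sensitivity on the logit scale,
# in the spirit of the meta-analysis described above.  The per-study counts are
# invented for illustration; they are not the 21 studies from the paper.
import math

# (true positives, false negatives) per hypothetical study
studies = [(45, 5), (30, 10), (60, 4), (25, 8)]

weights_sum = 0.0
weighted_logit_sum = 0.0
for tp, fn in studies:
    sens = tp / (tp + fn)
    logit = math.log(sens / (1 - sens))
    var = 1.0 / tp + 1.0 / fn          # variance of the logit (delta method)
    w = 1.0 / var                      # inverse-variance (fixed-effect) weight
    weights_sum += w
    weighted_logit_sum += w * logit

pooled_logit = weighted_logit_sum / weights_sum
se = math.sqrt(1.0 / weights_sum)
pooled_sens = 1.0 / (1.0 + math.exp(-pooled_logit))
lo = 1.0 / (1.0 + math.exp(-(pooled_logit - 1.96 * se)))
hi = 1.0 / (1.0 + math.exp(-(pooled_logit + 1.96 * se)))
print(f"pooled sensitivity = {pooled_sens:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```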
Surface Desorption Dielectric-Barrier Discharge Ionization Mass Spectrometry.
Zhang, Hong; Jiang, Jie; Li, Na; Li, Ming; Wang, Yingying; He, Jing; You, Hong
2017-07-18
A variant of dielectric-barrier discharge named surface desorption dielectric-barrier discharge ionization (SDDBDI) mass spectrometry was developed for high-efficiency ion transmission and high spatial resolution imaging. In SDDBDI, a tungsten nanotip and the inlet of the mass spectrometer are used as electrodes, and a coverslip is used as a sample plate as well as an insulating dielectric barrier, which simplifies the configuration of the instrument and thus the operation. Different from volume dielectric-barrier discharge (VDBD), the microdischarges in SDDBDI are generated on the surface, and therefore the plasma density is extremely high. Analyte ions are guided directly into the MS inlet without any deflection. This configuration significantly improves the ion transmission efficiency and thus the sensitivity. The dependence of the sensitivity and spatial resolution of the SDDBDI on the operation parameters was systematically investigated. The application of SDDBDI was successfully demonstrated by analysis of multiple species including amino acids, pharmaceuticals, putative cancer biomarkers, and mixtures of both fatty acids and hormones. Limits of detection (S/N = 3) were determined to be 0.84 and 0.18 pmol, respectively, for the analysis of l-alanine and metronidazole. A spatial resolution of 22 μm was obtained for the analysis of an imprinted cyclophosphamide pattern, and imaging of a "T" character was successfully demonstrated under ambient conditions. These results indicate that SDDBDI has high-efficiency ion transmission, high sensitivity, and high spatial resolution, which render it a potential tool for mass spectrometry imaging.
Bellanger, Martine; Demeneix, Barbara; Grandjean, Philippe; Zoeller, R Thomas; Trasande, Leonardo
2015-04-01
Epidemiological studies and animal models demonstrate that endocrine-disrupting chemicals (EDCs) contribute to cognitive deficits and neurodevelopmental disabilities. The objective was to estimate neurodevelopmental disability and associated costs that can be reasonably attributed to EDC exposure in the European Union. An expert panel applied a weight-of-evidence characterization adapted from the Intergovernmental Panel on Climate Change. Exposure-response relationships and reference levels were evaluated for relevant EDCs, and biomarker data were organized from peer-reviewed studies to represent European exposure and approximate burden of disease. Cost estimation as of 2010 utilized lifetime economic productivity estimates, lifetime cost estimates for autism spectrum disorder, and annual costs for attention-deficit hyperactivity disorder. Cost estimation was carried out from a societal perspective, i.e., including direct costs (e.g., treatment costs) and indirect costs such as productivity loss. The panel identified a 70-100% probability that polybrominated diphenyl ether and organophosphate exposures contribute to IQ loss in the European population. Polybrominated diphenyl ether exposures were associated with 873,000 (sensitivity analysis, 148,000 to 2.02 million) lost IQ points and 3290 (sensitivity analysis, 3290 to 8080) cases of intellectual disability, at costs of €9.59 billion (sensitivity analysis, €1.58 billion to €22.4 billion). Organophosphate exposures were associated with 13.0 million (sensitivity analysis, 4.24 million to 17.1 million) lost IQ points and 59,300 (sensitivity analysis, 16,500 to 84,400) cases of intellectual disability, at costs of €146 billion (sensitivity analysis, €46.8 billion to €194 billion). Autism spectrum disorder causation by multiple EDCs was assigned a 20-39% probability, with 316 (sensitivity analysis, 126-631) attributable cases at a cost of €199 million (sensitivity analysis, €79.7 million to €399 million). Attention-deficit hyperactivity disorder causation by multiple EDCs was assigned a 20-69% probability, with 19,300 to 31,200 attributable cases at a cost of €1.21 billion to €2.86 billion. EDC exposures in Europe contribute substantially to neurobehavioral deficits and disease, with a high probability of >€150 billion costs/year. These results emphasize the advantages of controlling EDC exposure.
NASA Technical Reports Server (NTRS)
Hill, Geoffrey A.; Olson, Erik D.
2004-01-01
Due to the growing problem of noise in today's air transportation system, there have arisen needs to incorporate noise considerations in the conceptual design of revolutionary aircraft. Through the use of response surfaces, complex noise models may be converted into polynomial equations for rapid and simplified evaluation. This conversion allows many of the commonly used response surface-based trade space exploration methods to be applied to noise analysis. This methodology is demonstrated using a noise model of a notional 300 passenger Blended-Wing-Body (BWB) transport. Response surfaces are created relating source noise levels of the BWB vehicle to its corresponding FAR-36 certification noise levels and the resulting trade space is explored. Methods demonstrated include: single point analysis, parametric study, an optimization technique for inverse analysis, sensitivity studies, and probabilistic analysis. Extended applications of response surface-based methods in noise analysis are also discussed.
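The response-surface idea can be sketched as follows: an expensive noise analysis is sampled, a quadratic polynomial is fitted by least squares, and the polynomial then stands in for the full model during trade-space exploration. The noise_model function and the design variables below are hypothetical placeholders, not the BWB source-noise model.

```python
# Sketch of the response-surface idea: fit a quadratic polynomial surrogate to a
# (hypothetical, expensive) noise model so certification noise can be evaluated
# cheaply.  `noise_model` is a stand-in, not the BWB source-noise code.
import numpy as np

rng = np.random.default_rng(0)

def noise_model(x1, x2):
    # placeholder for an expensive analysis: x1, x2 are notional source-noise levels
    return 90.0 + 0.8 * x1 + 0.5 * x2 + 0.05 * x1 * x2 - 0.02 * x1**2

# sample the design space
X = rng.uniform(-5.0, 5.0, size=(50, 2))
y = noise_model(X[:, 0], X[:, 1])

# quadratic basis: 1, x1, x2, x1^2, x1*x2, x2^2
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0]**2, X[:, 0] * X[:, 1], X[:, 1]**2])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

# the fitted polynomial can now stand in for the full model in trade studies
x_new = np.array([1.0, 2.0, -1.0, 4.0, -2.0, 1.0])   # basis evaluated at (2, -1)
print("surrogate prediction:", x_new @ coeffs, "model:", noise_model(2.0, -1.0))
```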
Raman spectral analysis for rapid screening of dengue infection
NASA Astrophysics Data System (ADS)
Mahmood, T.; Nawaz, H.; Ditta, A.; Majeed, M. I.; Hanif, M. A.; Rashid, N.; Bhatti, H. N.; Nargis, H. F.; Saleem, M.; Bonnier, F.; Byrne, H. J.
2018-07-01
Infection with the dengue virus is currently detected clinically according to different biomarkers in human blood plasma, commonly measured by enzyme-linked immunosorbent assays, including non-structural protein (Ns1), immunoglobulin M (IgM) and immunoglobulin G (IgG). However, there is little or no mutual correlation between the biomarkers, as demonstrated in this study by a comparison of their levels in samples from 17 patients. As an alternative, the label-free, rapid screening technique Raman spectroscopy has been used for the characterisation/diagnosis of healthy and dengue-infected human blood plasma samples. In dengue-positive samples, changes in specific Raman spectral bands associated with lipidic and amino acid/protein content are observed and assigned based on the literature, and these features can be considered markers associated with dengue development. Based on the spectroscopic analysis of the current, albeit limited, cohort of samples, Principal Components Analysis coupled with Factorial Discriminant Analysis (PCA-FDA) yielded values of 97.95% sensitivity and 95.40% specificity for identification of dengue infection. Furthermore, in a comparison of the normal samples with the patient samples which scored low for only one of the biomarker tests, but high or medium for either or both of the other two, PCA-FDA demonstrated a sensitivity of 97.38% and specificity of 86.18%, thus providing an unambiguous screening technology.
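A hedged sketch of such a classification pipeline is shown below, using scikit-learn's linear discriminant analysis as a stand-in for the factorial discriminant analysis step and entirely synthetic "spectra"; it is meant only to illustrate the PCA-plus-discriminant workflow, not to reproduce the reported figures.

```python
# Sketch of a PCA + linear discriminant pipeline for spectral classification,
# with scikit-learn's LDA standing in for the factorial discriminant analysis
# used in the paper; the "spectra" below are synthetic, not patient data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_per_class, n_wavenumbers = 40, 300

healthy = rng.normal(0.0, 1.0, size=(n_per_class, n_wavenumbers))
dengue = rng.normal(0.0, 1.0, size=(n_per_class, n_wavenumbers))
dengue[:, 50:60] += 1.5          # a synthetic "marker band" shift

X = np.vstack([healthy, dengue])
y = np.array([0] * n_per_class + [1] * n_per_class)

clf = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```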
Lassen, Mikael; Balslev-Harder, David; Brusch, Anders; Pelevic, Nikola; Persijn, Stefan; Petersen, Jan C
2018-02-01
A photoacoustic (PA) sensor for fast and real-time gas sensing is demonstrated. The PA sensor is a stand-alone system controlled by a field-programmable gate array. The PA cell has been designed for flow noise immunity using computational fluid dynamics (CFD) analysis. The aim of the CFD analysis was to investigate and minimize the influence of the gas distribution and flow noise on the PA signal. PA measurements were conducted at different flow rates by exciting molecular C-H stretch vibrational bands of hexane (C₆H₁₄) and decane (C₁₀H₂₂) molecules in clean air at 2950 cm⁻¹ (3.38 μm) with a custom-made mid-infrared interband cascade laser. We observe a (1σ, standard deviation) sensitivity of 0.4±0.1 ppb (nmol/mol) for hexane in clean air at flow rates up to 1.7 L/min, corresponding to a normalized noise equivalent absorption coefficient of 2.5×10⁻⁹ W cm⁻¹ Hz⁻¹/², demonstrating high sensitivity and fast real-time gas analysis. An Allan deviation analysis for decane shows that the detection limit at optimum integration time is 0.25 ppbV (nmol/mol).
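The Allan-deviation step can be sketched as below on a synthetic concentration time series (white noise plus a slow drift); the sampling interval and noise level are assumptions, not the paper's data.

```python
# Sketch of the Allan-deviation analysis mentioned above, applied to a synthetic
# concentration time series (white noise plus a slow drift); not the paper's data.
import numpy as np

def allan_deviation(y, taus, dt=1.0):
    """Non-overlapping Allan deviation of samples y for averaging times taus (s)."""
    devs = []
    for tau in taus:
        m = int(round(tau / dt))            # samples per averaging bin
        n_bins = len(y) // m
        bins = y[: n_bins * m].reshape(n_bins, m).mean(axis=1)
        avar = 0.5 * np.mean(np.diff(bins) ** 2)
        devs.append(np.sqrt(avar))
    return np.array(devs)

rng = np.random.default_rng(2)
dt = 1.0                                     # 1 s between samples (assumed)
signal = 1.0 + 0.4 * rng.standard_normal(20000) + 1e-5 * np.arange(20000)  # ppb

taus = np.array([1, 2, 5, 10, 20, 50, 100, 200])
for tau, dev in zip(taus, allan_deviation(signal, taus, dt)):
    print(f"tau = {tau:4d} s  Allan deviation = {dev:.3f} ppb")
```

On such a trace the deviation first falls with averaging time (white noise) and then flattens or rises once drift dominates, which is how the optimum integration time is read off.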
NASA Astrophysics Data System (ADS)
Chirvi, Sajal
Biomolecular interaction analysis (BIA) plays vital role in wide variety of fields, which include biomedical research, pharmaceutical industry, medical diagnostics, and biotechnology industry. Study and quantification of interactions between natural biomolecules (proteins, enzymes, DNA) and artificially synthesized molecules (drugs) is routinely done using various labeled and label-free BIA techniques. Labeled BIA (Chemiluminescence, Fluorescence, Radioactive) techniques suffer from steric hindrance of labels on interaction site, difficulty of attaching labels to molecules, higher cost and time of assay development. Label free techniques with real time detection capabilities have demonstrated advantages over traditional labeled techniques. The gold standard for label free BIA is surface Plasmon resonance (SPR) that detects and quantifies the changes in refractive index of the ligand-analyte complex molecule with high sensitivity. Although SPR is a highly sensitive BIA technique, it requires custom-made sensor chips and is not well suited for highly multiplexed BIA required in high throughput applications. Moreover implementation of SPR on various biosensing platforms is limited. In this research work spectral domain phase sensitive interferometry (SD-PSI) has been developed for label-free BIA and biosensing applications to address limitations of SPR and other label free techniques. One distinct advantage of SD-PSI compared to other label-free techniques is that it does not require use of custom fabricated biosensor substrates. Laboratory grade, off-the-shelf glass or plastic substrates of suitable thickness with proper surface functionalization are used as biosensor chips. SD-PSI is tested on four separate BIA and biosensing platforms, which include multi-well plate, flow cell, fiber probe with integrated optics and fiber tip biosensor. Sensitivity of 33 ng/ml for anti-IgG is achieved using multi-well platform. Principle of coherence multiplexing for multi-channel label-free biosensing applications is introduced. Simultaneous interrogation of multiple biosensors is achievable with a single spectral domain phase sensitive interferometer by coding the individual sensograms in coherence-multiplexed channels. Experimental results demonstrating multiplexed quantitative biomolecular interaction analysis of antibodies binding to antigen coated functionalized biosensor chip surfaces on different platforms are presented.
Xu, Zhida; Jiang, Jing; Wang, Xinhao; Han, Kevin; Ameen, Abid; Khan, Ibrahim; Chang, Te-Wei; Liu, Gang Logan
2016-03-21
We demonstrated a highly-sensitive, wafer-scale, highly-uniform plasmonic nano-mushroom substrate based on plastic for naked-eye plasmonic colorimetry and surface-enhanced Raman spectroscopy (SERS). We gave it the name FlexBrite. The dual-mode functionality of FlexBrite allows for label-free qualitative analysis by SERS with an enhancement factor (EF) of 10⁸ and label-free quantitative analysis by naked-eye colorimetry with a sensitivity of 611 nm RIU⁻¹. The SERS EF of FlexBrite in the wet state was found to be 4.81 × 10⁸, 7 times stronger than in the dry state, making FlexBrite suitable for aqueous environments such as microfluidic systems. The label-free detection of biotin-streptavidin interaction by both SERS and colorimetry was demonstrated with FlexBrite. The detection of trace amounts of the narcotic drug methamphetamine in drinking water by SERS was implemented with a handheld Raman spectrometer and FlexBrite. This plastic-based dual-mode nano-mushroom substrate has the potential to be used as a sensing platform for easy and fast analysis in chemical and biological assays.
The Cluster Sensitivity Index: A Basic Measure of Classification Robustness
ERIC Educational Resources Information Center
Hom, Willard C.
2010-01-01
Analysts of institutional performance have occasionally used a peer grouping approach in which they compared institutions only to other institutions with similar characteristics. Because analysts historically have used cluster analysis to define peer groups (i.e., the group of comparable institutions), the author proposes and demonstrates with…
We demonstrate a novel, spatially explicit assessment of the current condition of aquatic ecosystem services, with limited sensitivity analysis for the atmospheric contaminant mercury. The Integrated Ecological Modeling System (IEMS) forecasts water quality and quantity, habitat ...
Theodore, M Jordan; Anderson, Raydel D; Wang, Xin; Katz, Lee S; Vuong, Jeni T; Bell, Melissa E; Juni, Billie A; Lowther, Sara A; Lynfield, Ruth; MacNeil, Jessica R; Mayer, Leonard W
2012-04-01
PCR detecting the protein D (hpd) and fuculose kinase (fucK) genes showed high sensitivity and specificity for identifying Haemophilus influenzae and differentiating it from H. haemolyticus. Phylogenetic analysis using the 16S rRNA gene demonstrated two distinct groups for H. influenzae and H. haemolyticus.
Sentence Context Affects the Brain Response to Masked Words
ERIC Educational Resources Information Center
Coulson, Seana; Brang, David
2010-01-01
Historically, language researchers have assumed that lexical, or word-level processing is fast and automatic, while slower, more controlled post-lexical processes are sensitive to contextual information from higher levels of linguistic analysis. Here we demonstrate the impact of sentence context on the processing of words not available for…
Detection of proteolytic activity by covalent tethering of fluorogenic substrates in zymogram gels.
Deshmukh, Ameya A; Weist, Jessica L; Leight, Jennifer L
2018-05-01
Current zymographic techniques detect only a subset of known proteases due to the limited number of native proteins that have been optimized for incorporation into polyacrylamide gels. To address this limitation, we have developed a technique to covalently incorporate fluorescently labeled, protease-sensitive peptides using an azido-PEG3-maleimide crosslinker. Peptides incorporated into gels enabled measurement of MMP-2, -9, -14, and bacterial collagenase. Sensitivity analysis demonstrated that use of peptide functionalized gels could surpass detection limits of current techniques. Finally, electrophoresis of conditioned media from cultured cells resulted in the appearance of several proteolytic bands, some of which were undetectable by gelatin zymography. Taken together, these results demonstrate that covalent incorporation of fluorescent substrates can greatly expand the library of detectable proteases using zymographic techniques.
Nuclear magnetic resonance detection and spectroscopy of single proteins using quantum logic
NASA Astrophysics Data System (ADS)
Lovchinsky, I.; Sushkov, A. O.; Urbach, E.; de Leon, N. P.; Choi, S.; De Greve, K.; Evans, R.; Gertner, R.; Bersin, E.; Müller, C.; McGuinness, L.; Jelezko, F.; Walsworth, R. L.; Park, H.; Lukin, M. D.
2016-02-01
Nuclear magnetic resonance spectroscopy is a powerful tool for the structural analysis of organic compounds and biomolecules but typically requires macroscopic sample quantities. We use a sensor, which consists of two quantum bits corresponding to an electronic spin and an ancillary nuclear spin, to demonstrate room temperature magnetic resonance detection and spectroscopy of multiple nuclear species within individual ubiquitin proteins attached to the diamond surface. Using quantum logic to improve readout fidelity and a surface-treatment technique to extend the spin coherence time of shallow nitrogen-vacancy centers, we demonstrate magnetic field sensitivity sufficient to detect individual proton spins within 1 second of integration. This gain in sensitivity enables high-confidence detection of individual proteins and allows us to observe spectral features that reveal information about their chemical composition.
Analysis of the stochastic excitability in the flow chemical reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bashkirtseva, Irina
2015-11-30
A dynamic model of the thermochemical process in a flow reactor is considered. We study the influence of random disturbances on the stationary regime of this model. A phenomenon of noise-induced excitability is demonstrated. For the analysis of this phenomenon, a constructive technique based on stochastic sensitivity functions and confidence domains is applied. It is shown how the elaborated technique can be used for the probabilistic analysis of the generation of mixed-mode stochastic oscillations in the flow chemical reactor.
Analysis of the stochastic excitability in the flow chemical reactor
NASA Astrophysics Data System (ADS)
Bashkirtseva, Irina
2015-11-01
A dynamic model of the thermochemical process in a flow reactor is considered. We study the influence of random disturbances on the stationary regime of this model. A phenomenon of noise-induced excitability is demonstrated. For the analysis of this phenomenon, a constructive technique based on stochastic sensitivity functions and confidence domains is applied. It is shown how the elaborated technique can be used for the probabilistic analysis of the generation of mixed-mode stochastic oscillations in the flow chemical reactor.
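A rough feel for noise-induced excitability can be obtained from an Euler-Maruyama simulation of a generic excitable system. The sketch below uses a FitzHugh-Nagumo-type model as a stand-in, since the abstract does not give the thermochemical equations; parameter values and the noise level are assumptions.

```python
# Illustrative sketch of noise-induced excitability via Euler-Maruyama, using a
# FitzHugh-Nagumo-type excitable system as a stand-in for the reactor model
# (the actual thermochemical equations are not given in the abstract).
import numpy as np

rng = np.random.default_rng(3)
eps, a, sigma = 0.05, 1.05, 0.15      # a > 1: deterministically stable equilibrium
dt, n_steps = 1e-3, 200_000

v, w = -a, -a + a**3 / 3.0            # start near the stable fixed point
excited = 0
for _ in range(n_steps):
    dv = (v - v**3 / 3.0 - w) / eps * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    dw = (v + a) * dt
    v, w = v + dv, w + dw
    if v > 1.0:                        # count time spent in noise-triggered excursions
        excited += 1

# with sigma = 0 the trajectory stays near the fixed point; increasing sigma
# triggers large excursions even though the deterministic system is stable
print("time steps spent in the excited state:", excited)
```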
Fast computation of derivative based sensitivities of PSHA models via algorithmic differentiation
NASA Astrophysics Data System (ADS)
Leövey, Hernan; Molkenthin, Christian; Scherbaum, Frank; Griewank, Andreas; Kuehn, Nicolas; Stafford, Peter
2015-04-01
Probabilistic seismic hazard analysis (PSHA) is the preferred tool for estimating the potential ground-shaking hazard due to future earthquakes at a site of interest. A modern PSHA represents a complex framework which combines different models with possibly many inputs. Sensitivity analysis is a valuable tool for quantifying changes of a model output as inputs are perturbed, identifying critical input parameters and obtaining insight into the model behavior. Differential sensitivity analysis relies on calculating first-order partial derivatives of the model output with respect to its inputs. Moreover, derivative-based global sensitivity measures (Sobol' & Kucherenko '09) can be used in practice to detect non-essential inputs of the models, thus restricting the focus of attention to a possibly much smaller set of inputs. Nevertheless, obtaining first-order partial derivatives of complex models with traditional approaches can be very challenging, and usually increases the computational complexity linearly with the number of inputs appearing in the models. In this study we show how Algorithmic Differentiation (AD) tools can be used in a complex framework such as PSHA to successfully estimate derivative-based sensitivities, as is the case in various other domains such as meteorology or aerodynamics, with no significant increase in the computational complexity required for the original computations. First we demonstrate the feasibility of the AD methodology by comparing AD-derived sensitivities to analytically derived sensitivities for a basic case of PSHA using a simple ground-motion prediction equation. In a second step, we derive sensitivities via AD for a more complex PSHA study using a ground-motion attenuation relation based on a stochastic method to simulate strong motion. The presented approach is general enough to accommodate more advanced PSHA studies of higher complexity.
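A minimal sketch of the first step, differentiating a simple ground-motion prediction equation with an AD tool, is given below using JAX; the functional form and coefficients are invented for illustration and are not the GMPE used in the study.

```python
# Minimal sketch of derivative-based sensitivities for a simple, hypothetical
# ground-motion prediction equation, differentiated with JAX (as a stand-in for
# the AD tools used in the study); the coefficients below are invented.
import jax
import jax.numpy as jnp

def ln_pga(params):
    # params = [magnitude M, distance R in km, site term vs30 in m/s]
    M, R, vs30 = params
    c1, c2, c3, c4, c5 = -1.0, 0.9, -1.3, 10.0, -0.35   # illustrative coefficients
    return c1 + c2 * M + c3 * jnp.log(R + c4) + c5 * jnp.log(vs30 / 760.0)

x = jnp.array([6.5, 30.0, 400.0])
sens = jax.grad(ln_pga)(x)
print("d ln(PGA) / d[M, R, vs30] =", sens)
```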
Kalanda, Gertrude C; Hill, Jenny; Verhoeff, Francine H; Brabin, Bernard J
2006-05-01
To compare the efficacy of chloroquine and sulphadoxine-pyrimethamine against Plasmodium falciparum infection in pregnant women and in children from the same endemic areas of Africa, with the aim of determining the level of correspondence in efficacy determinations in these two risk groups. Meta-analysis of nine published and unpublished in vivo antimalarial efficacy studies in pregnant women and in children across five African countries. Pregnant women (all gravidae) were more likely to be sensitive than children to both chloroquine (odds ratio: 2.07; 95% confidence interval: 1.5, 2.9) and sulphadoxine-pyrimethamine (odds ratio: 2.66; 95% confidence interval: 11.1, 6.7). Pregnant women demonstrated an almost uniformly increased sensitivity for peripheral parasite clearance at day 14 compared with children. This finding was consistent across a wide range of drug sensitivities. Primigravidae at day 14 showed lower clearance to antimalarial drugs than multigravidae (P<0.05). There was no significant difference between parasite clearance in primigravidae and in children. The greater drug sensitivity in pregnant women probably indicates differences in host susceptibility rather than parasite resistance. Parasite sensitivity patterns in children may be a suitable guide to antimalarial policy in pregnant women.
NASA Astrophysics Data System (ADS)
Newman, James Charles, III
1997-10-01
The first two steps in the development of an integrated multidisciplinary design optimization procedure capable of analyzing the nonlinear fluid flow about geometrically complex aeroelastic configurations have been accomplished in the present work. For the first step, a three-dimensional unstructured grid approach to aerodynamic shape sensitivity analysis and design optimization has been developed. The advantage of unstructured grids, when compared with a structured-grid approach, is their inherent ability to discretize irregularly shaped domains with greater efficiency and less effort. Hence, this approach is ideally suited for geometrically complex configurations of practical interest. In this work the time-dependent, nonlinear Euler equations are solved using an upwind, cell-centered, finite-volume scheme. The discrete, linearized systems which result from this scheme are solved iteratively by a preconditioned conjugate-gradient-like algorithm known as GMRES for the two-dimensional cases and a Gauss-Seidel algorithm for the three-dimensional; at steady-state, similar procedures are used to solve the accompanying linear aerodynamic sensitivity equations in incremental iterative form. As shown, this particular form of the sensitivity equation makes large-scale gradient-based aerodynamic optimization possible by taking advantage of memory-efficient methods to construct exact Jacobian matrix-vector products. Various surface parameterization techniques have been employed in the current study to control the shape of the design surface. Once this surface has been deformed, the interior volume of the unstructured grid is adapted by considering the mesh as a system of interconnected tension springs. Grid sensitivities are obtained by differentiating the surface parameterization and the grid adaptation algorithms with ADIFOR, an advanced automatic-differentiation software tool. To demonstrate the ability of this procedure to analyze and design complex configurations of practical interest, the sensitivity analysis and shape optimization have been performed for several two- and three-dimensional cases. In two dimensions, an initially symmetric NACA-0012 airfoil and a high-lift multielement airfoil were examined. For the three-dimensional configurations, an initially rectangular wing with uniform NACA-0012 cross-sections was optimized; in addition, a complete Boeing 747-200 aircraft was studied. Furthermore, the current study also examines the effect of inconsistency in the order of spatial accuracy between the nonlinear fluid and linear shape sensitivity equations. The second step was to develop a computationally efficient, high-fidelity, integrated static aeroelastic analysis procedure. To accomplish this, a structural analysis code was coupled with the aforementioned unstructured grid aerodynamic analysis solver. The use of an unstructured grid scheme for the aerodynamic analysis enhances the interaction compatibility with the wing structure. The structural analysis utilizes finite elements to model the wing so that accurate structural deflections may be obtained. In the current work, parameters have been introduced to control the interaction of the computational fluid dynamics and structural analyses; these control parameters permit extremely efficient static aeroelastic computations.
To demonstrate and evaluate this procedure, static aeroelastic analysis results for a flexible wing in low subsonic, high subsonic (subcritical), transonic (supercritical), and supersonic flow conditions are presented.
NASA Astrophysics Data System (ADS)
Liu, Haixing; Savić, Dragan; Kapelan, Zoran; Zhao, Ming; Yuan, Yixing; Zhao, Hongbin
2014-07-01
Flow entropy is a measure of the uniformity of pipe flows in water distribution systems (WDSs). By maximizing flow entropy one can identify reliable layouts or connectivity in networks. In order to overcome the disadvantage of the common definition of flow entropy, which does not consider the impact of pipe diameter on reliability, an extended definition of flow entropy, termed diameter-sensitive flow entropy, is proposed. This new methodology is then assessed by using other reliability methods, including Monte Carlo simulation, a pipe failure probability model, and a surrogate measure (resilience index) integrated with water demand and pipe failure uncertainty. The reliability assessment is based on a sample of WDS designs derived from an optimization process for each of two benchmark networks. Correlation analysis is used to evaluate quantitatively the relationship between entropy and reliability. To ensure reliability, a comparative analysis between the flow entropy and the new method is conducted. The results demonstrate that the diameter-sensitive flow entropy shows consistently much stronger correlation with the three reliability measures than simple flow entropy. Therefore, the new flow entropy method can be taken as a better surrogate measure for reliability and could potentially be integrated into the optimal design problem of WDSs. Sensitivity analysis results show that the velocity parameters used in the new flow entropy have no significant impact on the relationship between diameter-sensitive flow entropy and reliability.
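The basic flow-entropy calculation, together with a purely hypothetical diameter weighting meant only to convey the "diameter-sensitive" idea, can be sketched as follows; the flows, diameters, and weighting scheme are illustrative assumptions, not the paper's extended definition.

```python
# Sketch of a flow-entropy calculation for the outflows of a single node, plus a
# purely hypothetical diameter weighting to convey the "diameter-sensitive" idea;
# this is not the paper's exact extended definition.
import numpy as np

pipe_flows = np.array([12.0, 8.0, 5.0])       # L/s through each outgoing pipe (assumed)
pipe_diams = np.array([0.20, 0.15, 0.10])     # m (assumed)

# classical flow entropy: S = -sum(p_i * ln p_i), p_i = q_i / Q
p = pipe_flows / pipe_flows.sum()
flow_entropy = -np.sum(p * np.log(p))

# illustrative diameter-sensitive variant: weight each flow by pipe cross-section
weighted = pipe_flows * (np.pi * pipe_diams**2 / 4.0)
pw = weighted / weighted.sum()
diameter_sensitive_entropy = -np.sum(pw * np.log(pw))

print(f"flow entropy = {flow_entropy:.3f}, diameter-weighted variant = {diameter_sensitive_entropy:.3f}")
```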
First- and second-order sensitivity analysis of linear and nonlinear structures
NASA Technical Reports Server (NTRS)
Haftka, R. T.; Mroz, Z.
1986-01-01
This paper employs the principle of virtual work to derive sensitivity derivatives of structural response with respect to stiffness parameters using both direct and adjoint approaches. The computations required are based on additional load conditions characterized by imposed initial strains, body forces, or surface tractions. As such, they are equally applicable to numerical or analytical solution techniques. The relative efficiency of various approaches for calculating first and second derivatives is assessed. It is shown that for the evaluation of second derivatives the most efficient approach is one that makes use of both the first-order sensitivities and adjoint vectors. Two example problems are used for demonstrating the various approaches.
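A toy numeric illustration of the direct versus adjoint routes to first-order sensitivities is given below for a two-degree-of-freedom spring system with response g = cᵀu and K(k)u = f; all values are invented, and the example only shows that the two routes return the same derivative.

```python
# Toy illustration of direct vs adjoint first-order sensitivities for a static
# 2-DOF spring system K(k) u = f with response g = c^T u; values are invented.
import numpy as np

k1, k2 = 100.0, 50.0                      # spring stiffnesses (assumed)
f = np.array([0.0, 10.0])                 # applied loads
c = np.array([0.0, 1.0])                  # response = tip displacement

def K(k1, k2):
    return np.array([[k1 + k2, -k2],
                     [-k2,      k2]])

u = np.linalg.solve(K(k1, k2), f)
dK_dk2 = np.array([[1.0, -1.0],
                   [-1.0, 1.0]])          # partial of K with respect to k2

# direct method: solve for du/dk2, then dg/dk2 = c^T du/dk2
du = np.linalg.solve(K(k1, k2), -dK_dk2 @ u)
dg_direct = c @ du

# adjoint method: solve K^T lam = c, then dg/dk2 = -lam^T (dK/dk2) u
lam = np.linalg.solve(K(k1, k2).T, c)
dg_adjoint = -lam @ (dK_dk2 @ u)

print(dg_direct, dg_adjoint)              # both equal -f2/k2**2 = -0.004 here
```

The adjoint route needs one extra solve per response regardless of the number of parameters, which is why combining first-order adjoint information is attractive when many design variables are present.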
Vignati, A. M.; Aguirre, C. P.; Artusa, D. R.; ...
2015-03-24
CUORE-0 is an experiment built to test and demonstrate the performance of the upcoming CUORE experiment. Composed of 52 TeO₂ bolometers of 750 g each, it is expected to reach a sensitivity to the 0νββ half-life of ¹³⁰Te around 3 · 10²⁴ y in one year of live time. We present the first data, corresponding to an exposure of 7.1 kg y. An analysis of the background indicates that the CUORE sensitivity goal is within reach, validating our techniques to reduce the α radioactivity of the detector.
NASA Astrophysics Data System (ADS)
Vignati, A. M.; Aguirre, C. P.; Artusa, D. R.; Avignone, F. T., III; Azzolini, O.; Balata, M.; Banks, T. I.; Bari, G.; Beeman, J.; Bellini, F.; Bersani, A.; Biassoni, M.; Brofferio, C.; Bucci, C.; Cai, X. Z.; Camacho, A.; Canonica, L.; Cao, X.; Capelli, S.; Carbone, L.; Cardani, L.; Carrettoni, M.; Casali, N.; Chiesa, D.; Chott, N.; Clemenza, M.; Cosmelli, C.; Cremonesi, O.; Creswick, R. J.; Dafinei, I.; Dally, A.; Datskov, V.; De Biasi, A.; Deninno, M. M.; Di Domizio, S.; di Vacri, M. L.; Ejzak, L.; Fang, D. Q.; Farach, H. A.; Faverzani, M.; Fernandes, G.; Ferri, E.; Ferroni, F.; Fiorini, E.; Franceschi, M. A.; Freedman, S. J.; Fujikawa, B. K.; Giachero, A.; Gironi, L.; Giuliani, A.; Goett, J.; Gorla, P.; Gotti, C.; Gutierrez, T. D.; Haller, E. E.; Han, K.; Heeger, K. M.; Hennings-Yeomans, R.; Huang, H. Z.; Kadel, R.; Kazkaz, K.; Keppel, G.; Kolomensky, Yu. G.; Li, Y. L.; Ligi, C.; Lim, K. E.; Liu, X.; Ma, Y. G.; Maiano, C.; Maino, M.; Martinez, M.; Maruyama, R. H.; Mei, Y.; Moggi, N.; Morganti, S.; Napolitano, T.; Nisi, S.; Nones, C.; Norman, E. B.; Nucciotti, A.; O'Donnell, T.; Orio, F.; Orlandi, D.; Ouellet, J. L.; Pallavicini, M.; Palmieri, V.; Pattavina, L.; Pavan, M.; Pedretti; Pessina, G.; Piperno, G.; Pira, C.; Pirro, S.; Previtali, E.; Rampazzo, V.; Rosenfeld, C.; Rusconi, C.; Sala, E.; Sangiorgio, S.; Scielzo, N. D.; Sisti, M.; Smith, A. R.; Taffarello, L.; Tenconi, M.; Terranova, F.; Tian, W. D.; Tomei, C.; Trentalange, S.; Ventura, G.; Wang, B. S.; Wang, H. W.; Wielgus, L.; Wilson, J.; Winslow, L. A.; Wise, T.; Woodcraft, A.; Zanotti, L.; Zarra, C.; Zhu, B. X.; Zucchelli, S.
CUORE-0 is an experiment built to test and demonstrate the performance of the upcoming CUORE experiment. Composed of 52 TeO₂ bolometers of 750 g each, it is expected to reach a sensitivity to the 0νββ half-life of ¹³⁰Te around 3 · 10²⁴ y in one year of live time. We present the first data, corresponding to an exposure of 7.1 kg y. An analysis of the background indicates that the CUORE sensitivity goal is within reach, validating our techniques to reduce the α radioactivity of the detector.
The clinical evaluation of the CADence device in the acoustic detection of coronary artery disease.
Thomas, Joseph L; Ridner, Michael; Cole, Jason H; Chambers, Jeffrey W; Bokhari, Sabahat; Yannopoulos, Demetris; Kern, Morton; Wilson, Robert F; Budoff, Matthew J
2018-06-23
The noninvasive detection of turbulent coronary flow may enable diagnosis of significant coronary artery disease (CAD) using novel sensor and analytic technology. Eligible patients (n = 1013) with chest pain and CAD risk factors undergoing nuclear stress testing were studied using the CADence (AUM Cardiovascular Inc., Northfield MN) acoustic detection (AD) system. The trial was designed to demonstrate non-inferiority of AD for diagnostic accuracy in detecting significant CAD as compared to an objective performance criterion (OPC; sensitivity 83% and specificity 80%, with 15% non-inferiority margins) for nuclear stress testing. AD analysis was blinded to clinical, core lab-adjudicated angiographic, and nuclear data. The presence of significant CAD was determined by computed tomographic angiography (CCTA) or invasive angiography. A total of 1013 subjects without prior coronary revascularization or Q-wave myocardial infarction were enrolled. Primary analysis was performed on subjects with complete angiographic and AD data (n = 763) including 111 subjects (15%) with severe CAD based on CCTA (n = 34) and invasive angiography (n = 77). The sensitivity and specificity of AD were 78% (p = 0.012 for non-inferiority) and 35% (p < 0.001 for failure to demonstrate non-inferiority), respectively. AD results had a high 91% negative predictive value for the presence of significant CAD. AD testing failed to demonstrate non-inferior diagnostic accuracy as compared to the historical performance of a nuclear stress OPC due to low specificity. AD sensitivity was non-inferior in detecting significant CAD, with a high negative predictive value supporting a potential value in excluding CAD.
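The reported negative predictive value follows directly from sensitivity, specificity, and prevalence; the quick check below plugs in the abstract's numbers and lands near the quoted 91% (the exact value depends on rounding in the reported inputs).

```python
# Quick check of how the reported ~91% negative predictive value follows from
# sensitivity, specificity, and disease prevalence in the analyzed cohort.
sensitivity = 0.78
specificity = 0.35
prevalence = 111 / 763          # subjects with severe CAD in the primary analysis

true_neg = specificity * (1 - prevalence)
false_neg = (1 - sensitivity) * prevalence
npv = true_neg / (true_neg + false_neg)
print(f"negative predictive value ~ {npv:.2%}")   # roughly 90%, consistent with the abstract
```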
NASA Astrophysics Data System (ADS)
Kasprzyk, J. R.; Reed, P. M.; Characklis, G. W.; Kirsch, B. R.
2010-12-01
This paper proposes and demonstrates a new interactive framework for sensitivity-informed de Novo programming, in which a learning approach to formulating decision problems can confront the deep uncertainty within water management problems. The framework couples global sensitivity analysis using Sobol’ variance decomposition with multiobjective evolutionary algorithms (MOEAs) to generate planning alternatives and test their robustness to new modeling assumptions and scenarios. We explore these issues within the context of a risk-based water supply management problem, where a city seeks the most efficient use of a water market. The case study examines a single city’s water supply in the Lower Rio Grande Valley (LRGV) in Texas, using both a 10-year planning horizon and an extreme single-year drought scenario. The city’s water supply portfolio comprises a volume of permanent rights to reservoir inflows and use of a water market through anticipatory thresholds for acquiring transfers of water through optioning and spot leases. Diagnostic information from the Sobol’ variance decomposition is used to create a sensitivity-informed problem formulation testing different decision variable configurations, with tradeoffs for the formulation solved using a MOEA. Subsequent analysis uses the drought scenario to expose tradeoffs between long-term and short-term planning and illustrate the impact of deeply uncertain assumptions on water availability in droughts. The results demonstrate water supply portfolios’ efficiency, reliability, and utilization of transfers in the water supply market and show how to adaptively improve the value and robustness of our problem formulations by evolving our definition of optimality to discover key tradeoffs.
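The variance-decomposition idea behind the Sobol' analysis can be sketched with a brute-force double-loop estimate of a first-order index, S_i = Var(E[Y|X_i]) / Var(Y). The toy model below stands in for the LRGV water-portfolio simulator, which is not reproduced here.

```python
# Brute-force sketch of a first-order Sobol' index, S_i = Var(E[Y|X_i]) / Var(Y),
# for a toy model standing in for the water-portfolio simulator (which is not
# reproduced here).  Inefficient, but it makes the variance decomposition explicit.
import numpy as np

rng = np.random.default_rng(4)

def model(x1, x2, x3):
    # toy response: x1 matters most, x2 less, x3 only through an interaction
    return 3.0 * x1 + 1.0 * x2 + 2.0 * x1 * x3

def first_order_index(i, n_outer=200, n_inner=2000):
    cond_means = []
    for _ in range(n_outer):
        xi = rng.uniform(0, 1)                         # fix X_i
        xs = rng.uniform(0, 1, size=(n_inner, 3))      # resample the others
        xs[:, i] = xi
        cond_means.append(model(xs[:, 0], xs[:, 1], xs[:, 2]).mean())
    x_all = rng.uniform(0, 1, size=(n_outer * n_inner, 3))
    var_y = model(x_all[:, 0], x_all[:, 1], x_all[:, 2]).var()
    return np.var(cond_means) / var_y

for i in range(3):
    print(f"S_{i+1} ~ {first_order_index(i):.2f}")
```

Inputs whose first-order (and total-order) indices stay near zero are candidates for removal from the problem formulation, which is the "sensitivity-informed" step the abstract describes.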
The Design and Operation of Ultra-Sensitive and Tunable Radio-Frequency Interferometers.
Cui, Yan; Wang, Pingshan
2014-12-01
Dielectric spectroscopy (DS) is an important technique for scientific and technological investigations in various areas. DS sensitivity and operating frequency ranges are critical for many applications, including lab-on-chip development where sample volumes are small with a wide range of dynamic processes to probe. In this work, we present the design and operation considerations of radio-frequency (RF) interferometers that are based on power-dividers (PDs) and quadrature-hybrids (QHs). Such interferometers are proposed to address the sensitivity and frequency tuning challenges of current DS techniques. Verified algorithms together with mathematical models are presented to quantify material properties from scattering parameters for three common transmission line sensing structures, i.e., coplanar waveguides (CPWs), conductor-backed CPWs, and microstrip lines. A high-sensitivity and stable QH-based interferometer is demonstrated by measuring glucose-water solution at a concentration level that is ten times lower than some recent RF sensors while our sample volume is ~1 nL. Composition analysis of ternary mixture solutions is also demonstrated with a PD-based interferometer. Further work is needed to address issues like system automation, model improvement at high frequencies, and interferometer scaling.
Zhang, Nan; Li, Kaiwei; Cui, Ying; Wu, Zhifang; Shum, Perry Ping; Auguste, Jean-Louis; Dinh, Xuan Quyen; Humbert, Georges; Wei, Lei
2018-02-13
All-in-fiber optofluidics is an analytical tool that provides enhanced sensing performance with simplified analyzing system design. Currently, its advance is limited either by complicated liquid manipulation and light injection configuration or by low sensitivity resulting from inadequate light-matter interaction. In this work, we design and fabricate a side-channel photonic crystal fiber (SC-PCF) and exploit its versatile sensing capabilities in in-line optofluidic configurations. The built-in microfluidic channel of the SC-PCF enables strong light-matter interaction and easy lateral access of liquid samples in these analytical systems. In addition, the sensing performance of the SC-PCF is demonstrated with methylene blue for absorptive molecular detection and with human cardiac troponin T protein by utilizing a Sagnac interferometry configuration for ultra-sensitive and specific biomolecular specimen detection. Owing to the features of great flexibility and compactness, high-sensitivity to the analyte variation, and efficient liquid manipulation/replacement, the demonstrated SC-PCF offers a generic solution to be adapted to various fiber-waveguide sensors to detect a wide range of analytes in real time, especially for applications from environmental monitoring to biological diagnosis.
Loganathan, Muthukumaran; Bristow, Douglas A
2014-04-01
This paper presents a method and cantilever design for improving the mechanical measurement sensitivity in the atomic force microscopy (AFM) tapping mode. The method uses two harmonics in the drive signal to generate a bi-harmonic tapping trajectory. Mathematical analysis demonstrates that the wide-valley bi-harmonic tapping trajectory is as much as 70% more sensitive to changes in the sample topography than the standard single-harmonic trajectory typically used. Although standard AFM cantilevers can be driven in the bi-harmonic tapping trajectory, they require large forcing at the second harmonic. A design is presented for a bi-harmonic cantilever that has a second resonant mode at twice its first resonant mode, thereby capable of generating bi-harmonic trajectories with small forcing signals. Bi-harmonic cantilevers are fabricated by milling a small cantilever on the interior of a standard cantilever probe using a focused ion beam. Bi-harmonic drive signals are derived for standard cantilevers and bi-harmonic cantilevers. Experimental results demonstrate better than 30% improvement in measurement sensitivity using the bi-harmonic cantilever. Images obtained through bi-harmonic tapping exhibit improved sharpness and surface tracking, especially at high scan speeds and low force fields.
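A minimal sketch of the idea behind a bi-harmonic drive: superposing a second harmonic on the fundamental flattens and widens the trajectory near its lower turning point, which is what increases topographic sensitivity. The frequency, amplitudes, and phase below are illustrative assumptions, not the paper's optimized "wide-valley" parameters.

```python
import numpy as np

f0 = 70e3                                   # fundamental drive frequency (Hz), assumed
t = np.linspace(0, 2 / f0, 2000)            # two drive periods
single = np.cos(2 * np.pi * f0 * t)         # standard single-harmonic trajectory
bi = (0.8 * np.cos(2 * np.pi * f0 * t)
      + 0.3 * np.cos(2 * np.pi * 2 * f0 * t + np.pi))   # add a second harmonic

# The bi-harmonic waveform spends more time near its minimum (a wider "valley"),
# so small topography changes modulate the tip-sample interaction more strongly.
print("single-harmonic min:", single.min(), " bi-harmonic min:", bi.min())
```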
Data challenges in estimating the capacity value of solar photovoltaics
Gami, Dhruv; Sioshansi, Ramteen; Denholm, Paul
2017-04-30
We examine the robustness of solar capacity-value estimates to three important data issues. The first is the sensitivity to using hourly averaged as opposed to subhourly solar-insolation data. The second is the sensitivity to errors in recording and interpreting load data. The third is the sensitivity to using modeled as opposed to measured solar-insolation data. We demonstrate that capacity-value estimates of solar are sensitive to all three of these factors, with potentially large errors in the capacity-value estimate in a particular year. If multiple years of data are available, the biases introduced by using hourly averaged solar-insolation can be smoothed out. Multiple years of data will not necessarily address the other data-related issues that we examine. Our analysis calls into question the accuracy of a number of solar capacity-value estimates relying exclusively on modeled solar-insolation data that are reported in the literature (including our own previous works). Lastly, our analysis also suggests that multiple years' historical data should be used for remunerating solar generators for their capacity value in organized wholesale electricity markets.
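A minimal sketch of the hourly-averaging issue: averaging subhourly insolation preserves the mean but damps short cloud transients, so statistics that depend on within-hour variability during peak-load hours shift. The synthetic clear-sky and cloud series below are assumptions; no NREL, utility, or study data are used.

```python
import numpy as np

rng = np.random.default_rng(1)
minutes = 24 * 60
clear_sky = np.clip(np.sin(np.linspace(-np.pi / 2, 3 * np.pi / 2, minutes)), 0, None)
clouds = rng.uniform(0.4, 1.0, minutes)          # minute-scale cloud transients
subhourly = clear_sky * clouds                    # "measured" subhourly insolation
hourly = subhourly.reshape(24, 60).mean(axis=1).repeat(60)  # hourly-averaged series

peak = slice(13 * 60, 18 * 60)                    # assumed peak-load window, 13:00-18:00
# The mean over whole hours is unchanged, but the variability is smoothed away,
# which is what can bias capacity-value proxies built from coincident peak hours.
print("mean over peak:", subhourly[peak].mean(), hourly[peak].mean())
print("std over peak :", subhourly[peak].std(), hourly[peak].std())
```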
Cox, Jonathan T.; Kronewitter, Scott R.; Shukla, Anil K.; ...
2014-09-15
Subambient pressure ionization with nanoelectrospray (SPIN) has proven to be effective in producing ions with high efficiency and transmitting them to low pressures for high sensitivity mass spectrometry (MS) analysis. Here we present evidence that not only does the SPIN source improve MS sensitivity but also allows for gentler ionization conditions. The gentleness of a conventional heated capillary electrospray ionization (ESI) source and the SPIN source was compared by the liquid chromatography mass spectrometry (LC-MS) analysis of colominic acid. Colominic acid is a mixture of sialic acid polymers of different lengths containing labile glycosidic linkages between monomer units necessitating a gentle ion source. By coupling the SPIN source with high resolution mass spectrometry and using advanced data processing tools, we demonstrate much extended coverage of sialic acid polymer chains as compared to using the conventional ESI source. Additionally we show that SPIN-LC-MS is effective in elucidating polymer features with high efficiency and high sensitivity previously unattainable by the conventional ESI-LC-MS methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karaulanov, Todor; Savukov, Igor; Kim, Young Jin
We constructed a spin-exchange relaxation-free (SERF) magnetometer with a small angle between the pump and probe beams facilitating a multi-channel design with a flat pancake cell. This configuration provides almost complete overlap of the beams in the cell, and prevents the pump beam from entering the probe detection channel. By coupling the lasers in multi-mode fibers, without an optical isolator or field modulation, we demonstrate a sensitivity of 10 fT/√Hz for frequencies between 10 Hz and 100 Hz. In addition to the experimental study of sensitivity, we present a theoretical analysis of SERF magnetometer response to magnetic fields for small-angle and parallel-beam configurations, and show that at optimal DC offset fields the magnetometer response is comparable to that in the orthogonal-beam configuration. Based on the analysis, we also derive fundamental and probe-limited sensitivities for the arbitrary non-orthogonal geometry. The expected practical and fundamental sensitivities are of the same order as those in the orthogonal geometry. As a result, we anticipate that our design will be useful for magnetoencephalography (MEG) and magnetocardiography (MCG) applications.
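A minimal sketch of how a field-noise floor in fT/√Hz is typically quoted: take the amplitude spectral density (square root of the power spectral density) of the magnetometer output over the band of interest. The time series here is synthetic white noise at an assumed 10 fT/√Hz level, not data from the instrument above.

```python
import numpy as np
from scipy.signal import welch

fs = 1000.0                              # sample rate (Hz), assumed
noise_asd = 10e-15                       # target white-noise level, 10 fT/sqrt(Hz)
# White noise with one-sided PSD a^2 has standard deviation a*sqrt(fs/2).
x = np.random.default_rng(0).normal(0.0, noise_asd * np.sqrt(fs / 2), 60 * int(fs))

f, psd = welch(x, fs=fs, nperseg=4096)   # one-sided power spectral density, T^2/Hz
asd = np.sqrt(psd)                       # amplitude spectral density, T/sqrt(Hz)
band = (f >= 10) & (f <= 100)
print("median ASD in 10-100 Hz: %.1f fT/sqrt(Hz)" % (np.median(asd[band]) * 1e15))
```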
Computational simulation and aerodynamic sensitivity analysis of film-cooled turbines
NASA Astrophysics Data System (ADS)
Massa, Luca
A computational tool is developed for the time accurate sensitivity analysis of the stage performance of hot gas, unsteady turbine components. An existing turbomachinery internal flow solver is adapted to the high temperature environment typical of the hot section of jet engines. A real gas model and film cooling capabilities are successfully incorporated in the software. The modifications to the existing algorithm are described; both the theoretical model and the numerical implementation are validated. The accuracy of the code in evaluating turbine stage performance is tested using a turbine geometry typical of the last stage of aeronautical jet engines. The results of the performance analysis show that the predictions differ from the experimental data by less than 3%. A reliable grid generator, applicable to the domain discretization of the internal flow field of axial flow turbines, is developed. A sensitivity analysis capability is added to the flow solver, by rendering it able to accurately evaluate the derivatives of the time varying output functions. The complex Taylor series expansion (CTSE) technique is reviewed, and two variants of it are used to demonstrate the accuracy and time dependency of the differentiation process. The results are compared with finite difference (FD) approximations. The CTSE is more accurate than the FD, but less efficient. A "black box" differentiation of the source code, resulting from the automated application of the CTSE, generates high fidelity sensitivity algorithms, but with low computational efficiency and high memory requirements. New formulations of the CTSE are proposed and applied. Selective differentiation of the method for solving the non-linear implicit residual equation leads to sensitivity algorithms with the same accuracy but improved run time. The time dependent sensitivity derivatives are computed in run times comparable to the ones required by the FD approach.
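A minimal sketch of the complex-step (CTSE) derivative against a central finite difference; it shows why the complex step avoids subtractive cancellation and stays accurate for extremely small step sizes. The test function is the standard Squire-Trapp example, not anything taken from the turbine solver.

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-30):
    # f'(x) ~= Im[f(x + i*h)] / h; no subtraction, so h can be made tiny.
    return np.imag(f(x + 1j * h)) / h

def central_finite_difference(f, x, h=1e-6):
    # Classic second-order finite difference; limited by cancellation error.
    return (f(x + h) - f(x - h)) / (2 * h)

# Squire & Trapp test function (works for both real and complex arguments).
f = lambda x: np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)
x0 = 1.5
print("complex-step:", complex_step_derivative(f, x0))
print("central FD  :", central_finite_difference(f, x0))
```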
A sub-sampled approach to extremely low-dose STEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevens, A.; Luzi, L.; Yang, H.
The inpainting of randomly sub-sampled images acquired by scanning transmission electron microscopy (STEM) is an attractive method for imaging under low-dose conditions (≤1 e⁻/Å²) without changing either the operation of the microscope or the physics of the imaging process. We show that 1) adaptive sub-sampling increases acquisition speed, resolution, and sensitivity; and 2) random (non-adaptive) sub-sampling is equivalent to, but faster than, traditional low-dose techniques. Adaptive sub-sampling opens numerous possibilities for the analysis of beam sensitive materials and in-situ dynamic processes at the resolution limit of the aberration corrected microscope and is demonstrated here for the analysis of the node distribution in metal-organic frameworks (MOFs).
Designing novel cellulase systems through agent-based modeling and global sensitivity analysis.
Apte, Advait A; Senger, Ryan S; Fong, Stephen S
2014-01-01
Experimental techniques allow engineering of biological systems to modify functionality; however, there still remains a need to develop tools to prioritize targets for modification. In this study, agent-based modeling (ABM) was used to build stochastic models of complexed and non-complexed cellulose hydrolysis, including enzymatic mechanisms for endoglucanase, exoglucanase, and β-glucosidase activity. Modeling results were consistent with experimental observations of higher efficiency in complexed systems than non-complexed systems and established relationships between specific cellulolytic mechanisms and overall efficiency. Global sensitivity analysis (GSA) of model results identified key parameters for improving overall cellulose hydrolysis efficiency including: (1) the cellulase half-life, (2) the exoglucanase activity, and (3) the cellulase composition. Overall, the following parameters were found to significantly influence cellulose consumption in a consolidated bioprocess (CBP): (1) the glucose uptake rate of the culture, (2) the bacterial cell concentration, and (3) the nature of the cellulase enzyme system (complexed or non-complexed). Broadly, these results demonstrate the utility of combining modeling and sensitivity analysis to identify key parameters and/or targets for experimental improvement.
Mohapatra, Gayatry; Engler, David A; Starbuck, Kristen D; Kim, James C; Bernay, Derek C; Scangas, George A; Rousseau, Audrey; Batchelor, Tracy T; Betensky, Rebecca A; Louis, David N
2011-04-01
Array comparative genomic hybridization (aCGH) is a powerful tool for detecting DNA copy number alterations (CNA). Because diffuse malignant gliomas are often sampled by small biopsies, formalin-fixed paraffin-embedded (FFPE) blocks are often the only tissue available for genetic analysis; FFPE tissues are also needed to study the intratumoral heterogeneity that characterizes these neoplasms. In this paper, we present a combination of evaluations and technical advances that provide strong support for the ready use of oligonucleotide aCGH on FFPE diffuse gliomas. We first compared aCGH using bacterial artificial chromosome (BAC) arrays in 45 paired frozen and FFPE gliomas, and demonstrate a high concordance rate between FFPE and frozen DNA in an individual clone-level analysis of sensitivity and specificity, assuring that under certain array conditions, frozen and FFPE DNA can perform nearly identically. However, because oligonucleotide arrays offer advantages to BAC arrays in genomic coverage and practical availability, we next developed a method of labeling DNA from FFPE tissue that allows efficient hybridization to oligonucleotide arrays. To demonstrate utility in FFPE tissues, we applied this approach to biphasic anaplastic oligoastrocytomas and demonstrate CNA differences between DNA obtained from the two components. Therefore, BAC and oligonucleotide aCGH can be sensitive and specific tools for detecting CNAs in FFPE DNA, and novel labeling techniques enable the routine use of oligonucleotide arrays for FFPE DNA. In combination, these advances should facilitate genome-wide analysis of rare, small and/or histologically heterogeneous gliomas from FFPE tissues.
From web search to healthcare utilization: privacy-sensitive studies from mobile data.
White, Ryen; Horvitz, Eric
2013-01-01
We explore relationships between health information seeking activities and engagement with healthcare professionals via a privacy-sensitive analysis of geo-tagged data from mobile devices. We analyze logs of mobile interaction data stripped of individually identifiable information and location data. The data analyzed consist of time-stamped search queries and distances to medical care centers. We examine search activity that precedes the observation of salient evidence of healthcare utilization (EHU) (i.e., data suggesting that the searcher is using healthcare resources), in our case taken as queries occurring at or near medical facilities. We show that the time between symptom searches and observation of salient evidence of healthcare utilization depends on the acuity of symptoms. We construct statistical models that make predictions of forthcoming EHU based on observations about the current search session, prior medical search activities, and prior EHU. The predictive accuracy of the models varies (65%-90%) depending on the features used and the timeframe of the analysis, which we explore via a sensitivity analysis. We provide a privacy-sensitive analysis that can be used to generate insights about the pursuit of health information and healthcare. The findings demonstrate how large-scale studies of mobile devices can provide insights on how concerns about symptomatology lead to the pursuit of professional care. We present new methods for the analysis of mobile logs and describe a study that provides evidence about how people transition from mobile searches on symptoms and diseases to the pursuit of healthcare in the world.
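A minimal sketch of the kind of session-level classifier described above: predict forthcoming evidence of healthcare utilization (EHU) from a few search-session features. The feature names, coefficients, and synthetic data are illustrative assumptions, not the study's feature set or logs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.poisson(3, n),              # symptom queries in the current session (assumed feature)
    rng.integers(0, 2, n),          # any prior medical search activity (assumed feature)
    rng.exponential(5, n),          # distance to nearest medical facility, km (assumed feature)
])
# Synthetic ground truth generated from an assumed logistic relationship.
logit = -1.0 + 0.4 * X[:, 0] + 0.8 * X[:, 1] - 0.1 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression(max_iter=1000)
print("5-fold CV accuracy: %.2f" % cross_val_score(model, X, y, cv=5).mean())
```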
Commercial test kits for detection of Lyme borreliosis: a meta-analysis of test accuracy
Cook, Michael J; Puri, Basant K
2016-01-01
The clinical diagnosis of Lyme borreliosis can be supported by various test methodologies; test kits are available from many manufacturers. Literature searches were carried out to identify studies that reported characteristics of the test kits. Of 50 searched studies, 18 were included where the tests were commercially available and samples were proven to be positive using serology testing, evidence of an erythema migrans rash, and/or culture. Additional requirements were a test specificity of ≥85% and publication in the last 20 years. The weighted mean sensitivity for all tests and for all samples was 59.5%. Individual study means varied from 30.6% to 86.2%. Sensitivity for each test technology varied from 62.4% for Western blot kits, and 62.3% for enzyme-linked immunosorbent assay tests, to 53.9% for synthetic C6 peptide ELISA tests and 53.7% when the two-tier methodology was used. Test sensitivity increased as dissemination of the pathogen affected different organs; however, the absence of data on the time from infection to serological testing and the lack of standard definitions for “early” and “late” disease prevented analysis of test sensitivity versus time of infection. The lack of standardization of the definitions of disease stage and the possibility of retrospective selection bias prevented clear evaluation of test sensitivity by “stage”. The sensitivity for samples classified as acute disease was 35.4%, with a corresponding sensitivity of 64.5% for samples from patients defined as convalescent. Regression analysis demonstrated an improvement of 4% in test sensitivity over the 20-year study period. The studies did not provide data to indicate the sensitivity of tests used in a clinical setting since the effect of recent use of antibiotics or steroids or other factors affecting antibody response was not factored in. The tests were developed for only specific Borrelia species; sensitivities for other species could not be calculated. PMID:27920571
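A minimal sketch of a sample-size-weighted mean sensitivity across studies, the kind of pooling reported above; the study counts listed are invented placeholders, not the meta-analysis data.

```python
# Each tuple is (true positives detected, total confirmed-positive samples) per study.
studies = [
    (52, 80),
    (110, 140),
    (31, 95),
]
tp = sum(s[0] for s in studies)
pos = sum(s[1] for s in studies)
weighted_sensitivity = tp / pos   # pooling by sample size weights larger studies more
print("pooled (weighted) sensitivity: %.1f%%" % (100 * weighted_sensitivity))
```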
A new image encryption algorithm based on the fractional-order hyperchaotic Lorenz system
NASA Astrophysics Data System (ADS)
Wang, Zhen; Huang, Xia; Li, Yu-Xia; Song, Xiao-Na
2013-01-01
We propose a new image encryption algorithm on the basis of the fractional-order hyperchaotic Lorenz system. In the process of generating the key stream, the system parameters and the derivative order are embedded in the proposed algorithm to enhance security. The algorithm is detailed in terms of security analyses, including correlation analysis, information entropy analysis, run statistic analysis, mean-variance gray value analysis, and key sensitivity analysis. The experimental results demonstrate that the proposed image encryption scheme has the advantages of large key space and high security for practical image encryption.
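A minimal sketch of a key-sensitivity check using NPCR/UACI-style metrics: encrypt the same image with two keys that differ by a tiny perturbation and compare the ciphertexts. A simple keyed pseudo-random XOR stands in for the fractional-order hyperchaotic Lorenz keystream, which is not reproduced here.

```python
import numpy as np

def toy_encrypt(img, key):
    # Seed the keystream generator from the exact bit pattern of the float key,
    # so any perturbation of the key, however small, yields a different stream.
    seed = int(np.array(key, dtype=np.float64).view(np.uint64).item())
    keystream = np.random.default_rng(seed).integers(0, 256, img.shape, dtype=np.uint8)
    return img ^ keystream

img = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
key1 = 3.141592653589793
key2 = key1 + 1e-15                                  # perturb the key by ~1 part in 1e15
c1, c2 = toy_encrypt(img, key1), toy_encrypt(img, key2)

npcr = np.mean(c1 != c2) * 100                       # % of ciphertext pixels that changed
uaci = np.mean(np.abs(c1.astype(int) - c2.astype(int))) / 255 * 100
print("NPCR: %.2f%%  UACI: %.2f%%" % (npcr, uaci))   # strong key sensitivity -> NPCR near 99.6%
```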
NASA Astrophysics Data System (ADS)
Feng, Shangyuan; Lin, Juqiang; Huang, Zufang; Chen, Guannan; Chen, Weisheng; Wang, Yue; Chen, Rong; Zeng, Haishan
2013-01-01
The capability of using silver nanoparticle based near-infrared surface enhanced Raman scattering (SERS) spectroscopy combined with principal component analysis (PCA) and linear discriminate analysis (LDA) to differentiate esophageal cancer tissue from normal tissue was presented. Significant differences in Raman intensities of prominent SERS bands were observed between normal and cancer tissues. PCA-LDA multivariate analysis of the measured tissue SERS spectra achieved diagnostic sensitivity of 90.9% and specificity of 97.8%. This exploratory study demonstrated great potential for developing label-free tissue SERS analysis into a clinical tool for esophageal cancer detection.
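A minimal sketch of a PCA-LDA spectral classifier with cross-validated sensitivity and specificity, the analysis pattern described above; the spectra here are synthetic stand-ins (with an artificial band-intensity difference injected for the demo), not SERS measurements.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 600))                      # rows = spectra, columns = wavenumber bins
y = np.r_[np.zeros(45, dtype=int), np.ones(45, dtype=int)]   # 0 = normal, 1 = cancer
X[y == 1, 100:110] += 0.8                           # fake band difference for illustration

clf = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
pred = cross_val_predict(clf, X, y, cv=10)
sens = np.mean(pred[y == 1] == 1)                   # fraction of cancer spectra called cancer
spec = np.mean(pred[y == 0] == 0)                   # fraction of normal spectra called normal
print("sensitivity %.2f  specificity %.2f" % (sens, spec))
```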
European Multicenter Study on Analytical Performance of DxN Veris System HCV Assay.
Braun, Patrick; Delgado, Rafael; Drago, Monica; Fanti, Diana; Fleury, Hervé; Gismondo, Maria Rita; Hofmann, Jörg; Izopet, Jacques; Kühn, Sebastian; Lombardi, Alessandra; Marcos, Maria Angeles; Sauné, Karine; O'Shea, Siobhan; Pérez-Rivilla, Alfredo; Ramble, John; Trimoulet, Pascale; Vila, Jordi; Whittaker, Duncan; Artus, Alain; Rhodes, Daniel W
2017-04-01
The analytical performance of the Veris HCV Assay for use on the new and fully automated Beckman Coulter DxN Veris Molecular Diagnostics System (DxN Veris System) was evaluated at 10 European virology laboratories. Precision, analytical sensitivity, specificity, performance with negative samples, linearity, and performance with hepatitis C virus (HCV) genotypes were evaluated. Precision for all sites showed a standard deviation (SD) of 0.22 log10 IU/ml or lower for each level tested. Analytical sensitivity determined by probit analysis was between 6.2 and 9.0 IU/ml. Specificity on 94 unique patient samples was 100%, and performance with 1,089 negative samples demonstrated 100% not-detected results. Linearity using patient samples was shown from 1.34 to 6.94 log10 IU/ml. The assay demonstrated linearity upon dilution with all HCV genotypes. The Veris HCV Assay demonstrated an analytical performance comparable to that of currently marketed HCV assays when tested across multiple European sites. Copyright © 2017 American Society for Microbiology.
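A minimal sketch of probit analysis for analytical sensitivity: fit P(detect) = Φ(a + b·log10(concentration)) to replicate hit rates and report the concentration detected with 95% probability. The replicate counts below are hypothetical, not data from this evaluation.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

conc = np.array([1.0, 2.5, 5.0, 10.0, 25.0])   # IU/ml, assumed dilution panel
hits = np.array([4, 10, 16, 20, 21])            # detected replicates (hypothetical)
n = np.array([21, 21, 21, 21, 21])              # replicates tested per level

def negloglik(theta):
    a, b = theta
    p = np.clip(norm.cdf(a + b * np.log10(conc)), 1e-9, 1 - 1e-9)
    return -np.sum(hits * np.log(p) + (n - hits) * np.log(1 - p))

a, b = minimize(negloglik, x0=[0.0, 1.0], method="Nelder-Mead").x
lod95 = 10 ** ((norm.ppf(0.95) - a) / b)        # concentration with 95% detection probability
print("95%% detection concentration: %.1f IU/ml" % lod95)
```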
Grande, Antonio Jose; Reid, Hamish; Thomas, Emma; Foster, Charlie; Darton, Thomas C
2016-08-01
Dengue fever is a ubiquitous arboviral infection in tropical and sub-tropical regions, whose incidence has increased over recent decades. In the absence of a rapid point of care test, the clinical diagnosis of dengue is complex. The World Health Organisation has outlined diagnostic criteria for making the diagnosis of dengue infection, which includes the use of the tourniquet test (TT). We aimed to assess the quality of the evidence supporting the use of the TT and to perform a diagnostic accuracy meta-analysis comparing the TT to antibody response measured by ELISA. A comprehensive literature search was conducted in the following databases to April, 2016: MEDLINE (PubMed), EMBASE, Cochrane Central Register of Controlled Trials, BIOSIS, Web of Science, SCOPUS. Studies comparing the diagnostic accuracy of the tourniquet test with ELISA for the diagnosis of dengue were included. Two independent authors extracted data using a standardized form. A total of 16 studies with 28,739 participants were included in the meta-analysis. Pooled sensitivity for dengue diagnosis by TT was 58% (95% Confidence Interval (CI), 43%-71%) and the specificity was 71% (95% CI, 60%-80%). In the subgroup analysis, sensitivity for non-severe dengue diagnosis was 55% (95% CI, 52%-59%) and the specificity was 63% (95% CI, 60%-66%), whilst sensitivity for dengue hemorrhagic fever diagnosis was 62% (95% CI, 53%-71%) and the specificity was 60% (95% CI, 48%-70%). Receiver operating characteristic analysis demonstrated a test accuracy (AUC) of 0.70 (95% CI, 0.66-0.74). The tourniquet test is widely used in resource poor settings despite currently available evidence demonstrating only a marginal benefit in making a diagnosis of dengue infection alone. The protocol for this systematic review was registered at CRD42015020323.
Mommen, Geert P M; Meiring, Hugo D; Heck, Albert J R; de Jong, Ad P J M
2013-07-16
In proteomics, comprehensive analysis of peptide mixtures necessitates multiple dimensions of separation prior to mass spectrometry analysis to reduce sample complexity and increase the dynamic range of analysis. The main goal of this work was to improve the performance of (online) multidimensional protein identification technology (MudPIT) in terms of sensitivity, compatibility and recovery. The method employs weak anion and strong cation mixed-bed ion exchange chromatography (ACE) in the first separation dimension and reversed phase chromatography (RP) in the second separation dimension (Motoyama et al., Anal. Chem. 2007, 79, 3623-34). We demonstrated that the chromatographic behavior of peptides in ACE chromatography depends on both the WAX/SCX mixing ratio and the ionic strength of the mobile phase system. This property allowed us to replace the conventional salt gradient by a (discontinuous) salt-free, pH gradient. First dimensional separation of peptides was accomplished with mixtures of aqueous formic acid and dimethylsulfoxide with increasing concentrations. The overall performance of this mobile phase system was found comparable to ammonium acetate buffers in application to ACE chromatography, but clearly outperformed strong cation exchange for use in first dimensional peptide separation. The dramatically improved compatibility between (salt-free) ion exchange chromatography and reversed phase chromatography-mass spectrometry allowed us to downscale the dimensions of the RP analytical column down to 25 μm i.d. for an additional 2- to 3-fold improvement in performance compared to current technology. The achieved levels of sensitivity, orthogonality, and compatibility demonstrate the potential of salt-free ACE MudPIT for the ultrasensitive, multidimensional analysis of very modest amounts of sample material.
Nemkov, Travis; D'Alessandro, Angelo; Hansen, Kirk C.
2015-01-01
Amino acid analysis is a powerful bioanalytical technique for many biomedical research endeavors, including cancer, emergency medicine, nutrition and neuroscience research. In the present study, we present a three minute analytical method for underivatized amino acid analysis that employs ultra-high performance liquid chromatography and high resolution quadrupole orbitrap mass spectrometry. This method has demonstrated linearity (mM to nM range), reproducibility (intra-day <5%, inter-day <20%), sensitivity (low fmol) and selectivity. Here, we illustrate the rapidity and accuracy of the method through comparison with conventional liquid chromatography-mass spectrometry methods. We further demonstrate the robustness and sensitivity of this method on a diverse range of biological matrices. Using this method we were able to selectively discriminate murine pancreatic cancer cells with and without knocked down expression of Hypoxia Inducible Factor 1α; plasma, lymph and bronchioalveolar lavage fluid samples from control versus hemorrhaged rats; and muscle tissue samples harvested from rats subjected to both low fat and high fat diets. Furthermore, we were able to exploit the sensitivity of the method to detect and quantify the release of glutamate from sparsely isolated murine taste buds. Spiked-in light or heavy standards (13C6-arginine, 13C6-lysine, 13C5,15N2-glutamine) or xenometabolites were used to determine coefficients of variation, confirm linearity of relative quantitation in four different matrices, and overcome matrix effects for absolute quantitation. The presented method enables high-throughput analysis of low abundance samples requiring only one percent of the material extracted from 100,000 cells, 10 μl of biological fluid, or 2 mg of muscle tissue. PMID:26058356
Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines.
Teodoro, George; Kurç, Tahsin M; Taveira, Luís F R; Melo, Alba C M A; Gao, Yi; Kong, Jun; Saltz, Joel H
2017-04-01
Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Source code: https://github.com/SBU-BMI/region-templates/. Contact: teodoro@unb.br. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
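A minimal sketch of the Dice and Jaccard overlap metrics used above to score a segmentation output against a reference mask; the binary masks here are synthetic rectangles for illustration.

```python
import numpy as np

def dice(a, b):
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def jaccard(a, b):
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union

ref = np.zeros((100, 100), dtype=bool)
ref[20:60, 20:60] = True                 # reference mask
out = np.zeros((100, 100), dtype=bool)
out[25:65, 22:62] = True                 # segmentation output, slightly shifted
print("Dice %.3f  Jaccard %.3f" % (dice(ref, out), jaccard(ref, out)))
```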
NASA Astrophysics Data System (ADS)
Muldoon, Timothy J.; Thekkek, Nadhi; Roblyer, Darren; Maru, Dipen; Harpaz, Noam; Potack, Jonathan; Anandasabapathy, Sharmila; Richards-Kortum, Rebecca
2010-03-01
Early detection of neoplasia in patients with Barrett's esophagus is essential to improve outcomes. The aim of this ex vivo study was to evaluate the ability of high-resolution microendoscopic imaging and quantitative image analysis to identify neoplastic lesions in patients with Barrett's esophagus. Nine patients with pathologically confirmed Barrett's esophagus underwent endoscopic examination with biopsies or endoscopic mucosal resection. Resected fresh tissue was imaged with fiber bundle microendoscopy; images were analyzed by visual interpretation or by quantitative image analysis to predict whether the imaged sites were non-neoplastic or neoplastic. The best performing pair of quantitative features was chosen based on its ability to correctly classify the data into the two groups. Predictions were compared to the gold standard of histopathology. Subjective analysis of the images by expert clinicians achieved average sensitivity and specificity of 87% and 61%, respectively. The best performing quantitative classification algorithm relied on two image textural features and achieved a sensitivity and specificity of 87% and 85%, respectively. This ex vivo pilot trial demonstrates that quantitative analysis of images obtained with a simple microendoscope system can distinguish neoplasia in Barrett's esophagus with good sensitivity and specificity when compared to histopathology and to subjective image interpretation.
NASA Astrophysics Data System (ADS)
Kasprzyk, J. R.; Reed, P. M.; Kirsch, B. R.; Characklis, G. W.
2009-12-01
Risk-based water supply management presents severe cognitive, computational, and social challenges to planning in a changing world. Decision aiding frameworks must confront the cognitive biases implicit to risk, the severe uncertainties associated with long term planning horizons, and the consequent ambiguities that shape how we define and solve water resources planning and management problems. This paper proposes and demonstrates a new interactive framework for sensitivity informed de novo programming. The theoretical focus of our many-objective de novo programming is to promote learning and evolving problem formulations to enhance risk-based decision making. We have demonstrated our proposed de novo programming framework using a case study for a single city’s water supply in the Lower Rio Grande Valley (LRGV) in Texas. Key decisions in this case study include the purchase of permanent rights to reservoir inflows and anticipatory thresholds for acquiring transfers of water through optioning and spot leases. A 10-year Monte Carlo simulation driven by historical data is used to provide performance metrics for the supply portfolios. The three major components of our methodology include Sobol' global sensitivity analysis, many-objective evolutionary optimization, and interactive tradeoff visualization. The interplay between these components allows us to evaluate alternative design metrics, their decision variable controls and the consequent system vulnerabilities. Our LRGV case study measures water supply portfolios’ efficiency, reliability, and utilization of transfers in the water supply market. The sensitivity analysis is used interactively over interannual, annual, and monthly time scales to indicate how the problem controls change as a function of the timescale of interest. These results have then been used to improve our exploration and understanding of LRGV costs, vulnerabilities, and the water portfolios’ critical reliability constraints. These results demonstrate how we can adaptively improve the value and robustness of our problem formulations by evolving our definition of optimality to discover key tradeoffs.
Using the Polymerase Chain Reaction in an Undergraduate Laboratory to Produce "DNA Fingerprints."
ERIC Educational Resources Information Center
Phelps, Tara L.; And Others
1996-01-01
Presents a laboratory exercise that demonstrates the sensitivity of the Polymerase Chain Reaction as well as its potential application to forensic analysis during a criminal investigation. Can also be used to introduce, review, and integrate population and molecular genetics topics such as genotypes, multiple alleles, allelic and genotypic…
Anderson, Raydel D.; Wang, Xin; Katz, Lee S.; Vuong, Jeni T.; Bell, Melissa E.; Juni, Billie A.; Lowther, Sara A.; Lynfield, Ruth; MacNeil, Jessica R.; Mayer, Leonard W.
2012-01-01
PCR detecting the protein D (hpd) and fuculose kinase (fucK) genes showed high sensitivity and specificity for identifying Haemophilus influenzae and differentiating it from H. haemolyticus. Phylogenetic analysis using the 16S rRNA gene demonstrated two distinct groups for H. influenzae and H. haemolyticus. PMID:22301020
Identifying and counting point defects in carbon nanotubes.
Fan, Yuwei; Goldsmith, Brett R; Collins, Philip G
2005-12-01
The prevailing conception of carbon nanotubes and particularly single-walled carbon nanotubes (SWNTs) continues to be one of perfectly crystalline wires. Here, we demonstrate a selective electrochemical method that labels point defects and makes them easily visible for quantitative analysis. High-quality SWNTs are confirmed to contain one defect per 4 microm on average, with a distribution weighted towards areas of SWNT curvature. Although this defect density compares favourably to high-quality, silicon single-crystals, the presence of a single defect can have tremendous electronic effects in one-dimensional conductors such as SWNTs. We demonstrate a one-to-one correspondence between chemically active point defects and sites of local electronic sensitivity in SWNT circuits, confirming the expectation that individual defects may be critical to understanding and controlling variability, noise and chemical sensitivity in SWNT electronic devices. By varying the SWNT synthesis technique, we further show that the defect spacing can be varied over orders of magnitude. The ability to detect and analyse point defects, especially at very low concentrations, indicates the promise of this technique for quantitative process analysis, especially in nanoelectronics development.
Ivanov, Yuri D; Pleshakova, Tatyana; Malsagova, Krystina; Kozlov, Andrey; Kaysheva, Anna; Kopylov, Arthur; Izotov, Alexander; Andreeva, Elena; Kanashenko, Sergey; Usanov, Sergey; Archakov, Alexander
2014-10-01
An approach combining atomic force microscopy (AFM) fishing and mass spectrometry (MS) analysis to detect proteins at ultra-low concentrations is proposed. Fishing out protein molecules onto a highly oriented pyrolytic graphite surface coated with polytetrafluoroethylene film was carried out with and without application of an external electric field. After that, they were visualized by AFM and identified by MS. It was found that injection of solution leads to charge generation in the solution, and an electric potential within the measuring cell is induced. It was demonstrated that, without an external electric field, fishing is efficient when the diluted protein solution is injected rapidly, as opposed to slow fluid input. The high sensitivity of this method was demonstrated by detection of human serum albumin and human cytochrome b5 in 10⁻¹⁷-10⁻¹⁸ M water solutions. It was shown that an external negative voltage applied to highly oriented pyrolytic graphite hinders the protein fishing. The efficiency of fishing with an external positive voltage was similar to that obtained without applying any voltage. © 2014 FEBS.
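A back-of-the-envelope sketch of what such detection limits imply: the expected number of protein molecules present at 1e-17 M in a small sample. The 1 mL volume is an assumption for illustration, not a figure from the paper.

```python
AVOGADRO = 6.022e23          # molecules per mole
conc_molar = 1e-17           # mol/L, lower end of the reported range
volume_l = 1e-3              # 1 mL sample, assumed
molecules = conc_molar * volume_l * AVOGADRO
print("expected molecules per mL at 1e-17 M: %.0f" % molecules)   # on the order of 6e3
```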
A lab-on-chip for biothreat detection using single-molecule DNA mapping.
Meltzer, Robert H; Krogmeier, Jeffrey R; Kwok, Lisa W; Allen, Richard; Crane, Bryan; Griffis, Joshua W; Knaian, Linda; Kojanian, Nanor; Malkin, Gene; Nahas, Michelle K; Papkov, Vyacheslav; Shaikh, Saad; Vyavahare, Kedar; Zhong, Qun; Zhou, Yi; Larson, Jonathan W; Gilmanshin, Rudolf
2011-03-07
Rapid, specific, and sensitive detection of airborne bacteria, viruses, and toxins is critical for biodefense, yet the diverse nature of the threats poses a challenge for integrated surveillance, as each class of pathogens typically requires different detection strategies. Here, we present a laboratory-on-a-chip microfluidic device (LOC-DLA) that integrates two unique assays for the detection of airborne pathogens: direct linear analysis (DLA) with unsurpassed specificity for bacterial threats and Digital DNA for toxins and viruses. The LOC-DLA device also prepares samples for analysis, incorporating upstream functions for concentrating and fractionating DNA. Both DLA and Digital DNA assays are single molecule detection technologies, therefore the assay sensitivities depend on the throughput of individual molecules. The microfluidic device and its accompanying operation protocols have been heavily optimized to maximize throughput and minimize the loss of analyzable DNA. We present here the design and operation of the LOC-DLA device, demonstrate multiplex detection of rare bacterial targets in the presence of 100-fold excess complex bacterial mixture, and demonstrate detection of picogram quantities of botulinum toxoid.
Henry, Francis P; Wang, Yan; Rodriguez, Carissa L R; Randolph, Mark A; Rust, Esther A Z; Winograd, Jonathan M; de Boer, Johannes F; Park, B Hyle
2015-04-01
Assessing nerve integrity and myelination after injury is necessary to provide insight for treatment strategies aimed at restoring neuromuscular function. Currently, this is largely done with electrical analysis, which lacks direct quantitative information. In vivo optical imaging with sufficient imaging depth and resolution could be used to assess the nerve microarchitecture. In this study, we examine the use of polarization sensitive-optical coherence tomography (PS-OCT) to quantitatively assess the sciatic nerve microenvironment through measurements of birefringence after applying a nerve crush injury in a rat model. Initial loss of function and subsequent recovery were demonstrated by calculating the sciatic function index (SFI). We found that the PS-OCT phase retardation slope, which is proportional to birefringence, increased monotonically with the SFI. Additionally, histomorphometric analysis of the myelin thickness and g-ratio shows that the PS-OCT slope is a good indicator of myelin health and recovery after injury. These results demonstrate that PS-OCT is capable of providing nondestructive and quantitative assessment of nerve health after injury and shows promise for continued use both clinically and experimentally in neuroscience.
Which Measures of Online Control Are Least Sensitive to Offline Processes?
de Grosbois, John; Tremblay, Luc
2018-02-28
A major challenge to the measurement of online control is the contamination by offline, planning-based processes. The current study examined the sensitivity of four measures of online control to offline changes in reaching performance induced by prism adaptation and terminal feedback. These measures included the squared Z scores (Z²) of correlations of limb position at 75% movement time versus movement end, variable error, time after peak velocity, and a frequency-domain analysis (pPower). The results indicated that variable error and time after peak velocity were sensitive to the prism adaptation. Furthermore, only the Z² values were biased by the terminal feedback. Ultimately, the current study has demonstrated the sensitivity of limb kinematic measures to offline control processes and that pPower analyses may yield the most suitable measure of online control.
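A minimal sketch of three of the measures above computed over a set of reaching trials, assuming the Z in Z² refers to a Fisher r-to-z transform of the position correlation; the trial data are synthetic stand-ins, not the study's kinematics.

```python
import numpy as np

rng = np.random.default_rng(0)
endpoints = 30 + rng.normal(0, 0.4, 50)              # cm, movement endpoints across trials
pos75 = 0.9 * endpoints + rng.normal(0, 0.3, 50)     # limb position at 75% movement time

r = np.corrcoef(pos75, endpoints)[0, 1]
z_squared = np.arctanh(r) ** 2                       # squared Fisher z of the correlation
variable_error = endpoints.std(ddof=1)               # dispersion of endpoints

t = np.linspace(0, 0.6, 200)                         # s, one trial's time base (toy)
vel = np.gradient(np.sin(np.linspace(0, np.pi, 200)))  # toy bell-shaped velocity profile
time_after_peak_velocity = t[-1] - t[np.argmax(vel)]

print("Z^2: %.2f  variable error: %.2f cm  time after peak velocity: %.2f s"
      % (z_squared, variable_error, time_after_peak_velocity))
```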
Lorkova, Lucie; Scigelova, Michaela; Arrey, Tabiwang Ndipanquang; Vit, Ondrej; Pospisilova, Jana; Doktorova, Eliska; Klanova, Magdalena; Alam, Mahmudul; Vockova, Petra; Maswabi, Bokang
2015-01-01
Mantle cell lymphoma (MCL) is a chronically relapsing aggressive type of B-cell non-Hodgkin lymphoma considered incurable by currently used treatment approaches. Fludarabine is a purine analog clinically still widely used in the therapy of relapsed MCL. Molecular mechanisms of fludarabine resistance have not, however, been studied in the setting of MCL so far. We therefore derived fludarabine-resistant MCL cells (Mino/FR) and performed their detailed functional and proteomic characterization compared to the original fludarabine-sensitive cells (Mino). We demonstrated that Mino/FR were highly cross-resistant to other antinucleosides (cytarabine, cladribine, gemcitabine) and to an inhibitor of Bruton tyrosine kinase (BTK), ibrutinib. Sensitivity to other types of anti-lymphoma agents was altered only mildly (methotrexate, doxorubicin, bortezomib) or remained unaffected (cisplatin, bendamustine). The detailed proteomic analysis of Mino/FR compared to Mino cells unveiled over 300 differentially expressed proteins. Mino/FR were characterized by the marked downregulation of deoxycytidine kinase (dCK) and BTK (thus explaining the observed cross-resistance to antinucleosides and ibrutinib), but also by the upregulation of several enzymes of de novo nucleotide synthesis, as well as the upregulation of the numerous proteins of DNA repair and replication. The significant upregulation of the key antiapoptotic protein Bcl-2 in Mino/FR cells was associated with the markedly increased sensitivity of the fludarabine-resistant MCL cells to the Bcl-2-specific inhibitor ABT199 compared to fludarabine-sensitive cells. Our data thus demonstrate that a detailed molecular analysis of drug-resistant tumor cells can indeed open a way to personalized therapy of resistant malignancies. PMID:26285204
Verly, Iedan R N; van Kuilenburg, André B P; Abeling, Nico G G M; Goorden, Susan M I; Fiocco, Marta; Vaz, Frédéric M; van Noesel, Max M; Zwaan, C Michel; Kaspers, GertJan L; Merks, Johannes H M; Caron, Huib N; Tytgat, Godelieve A M
2017-02-01
Neuroblastoma (NBL) accounts for 10% of the paediatric malignancies and is responsible for 15% of the paediatric cancer-related deaths. Vanillylmandelic acid (VMA) and homovanillic acid (HVA) are most commonly analysed in urine of NBL patients. However, their diagnostic sensitivity is suboptimal (82%). Therefore, we performed in-depth analysis of the diagnostic sensitivity of a panel of urinary catecholamine metabolites. Retrospective study of a panel of 8 urinary catecholamine metabolites (VMA, HVA, 3-methoxytyramine [3MT], dopamine, epinephrine, metanephrine, norepinephrine and normetanephrine [NMN]) from 301 NBL patients at diagnosis. Special attention was given to subgroups, metaiodobenzylguanidine (MIBG) non-avid tumours and VMA/HVA negative patients. Elevated catecholamine metabolites, especially 3MT, correlated with nine out of 12 NBL characteristics such as stage, age, MYCN amplification, loss of heterozygosity for 1p and bone-marrow invasion. The combination of the classical markers VMA and HVA had a diagnostic sensitivity of 84%. NMN was the most sensitive single diagnostic metabolite with an overall sensitivity of 89%. When all 8 metabolites were combined, a diagnostic sensitivity of 95% was achieved. The VMA- and HVA-negative group also included patients with stage 4 disease (29%), who usually (93%) had elevation of other catecholamine metabolites. Diagnostic sensitivity for patients with MIBG non-avid tumours was improved from 33% (VMA and/or HVA) to 89% by measuring the panel. Our study demonstrates that analysis of a urinary catecholamine metabolite panel, comprising 8 metabolites, ensures the highest sensitivity to diagnose NBL patients. Copyright © 2016 Elsevier Ltd. All rights reserved.
Gold leaf counter electrodes for dye-sensitized solar cells
NASA Astrophysics Data System (ADS)
Shimada, Kazuhiro; Toyoda, Takeshi
2018-03-01
In this study, a gold leaf 100 nm thin film is used as the counter electrode in dye-sensitized solar cells. The traditional method of hammering gold foil to obtain a thin gold leaf, which requires only small amounts of gold, was employed. The gold leaf was then attached to the substrate using an adhesive to produce the gold electrode. The proposed approach for fabricating counter electrodes is demonstrated to be facile and cost-effective, as opposed to existing techniques. Compared with electrodes prepared with gold foil and sputtered gold, the gold leaf counter electrode demonstrates higher catalytic activity with a cobalt-complex electrolyte and higher cell efficiency. The origin of the improved performance was investigated by surface morphology examination (scanning electron microscopy), various electrochemical analyses (cyclic voltammetry, linear sweep voltammetry, and electrochemical impedance spectroscopy), and crystalline analysis (X-ray diffractometry).
Nuclear magnetic resonance detection and spectroscopy of single proteins using quantum logic.
Lovchinsky, I; Sushkov, A O; Urbach, E; de Leon, N P; Choi, S; De Greve, K; Evans, R; Gertner, R; Bersin, E; Müller, C; McGuinness, L; Jelezko, F; Walsworth, R L; Park, H; Lukin, M D
2016-02-19
Nuclear magnetic resonance spectroscopy is a powerful tool for the structural analysis of organic compounds and biomolecules but typically requires macroscopic sample quantities. We use a sensor, which consists of two quantum bits corresponding to an electronic spin and an ancillary nuclear spin, to demonstrate room temperature magnetic resonance detection and spectroscopy of multiple nuclear species within individual ubiquitin proteins attached to the diamond surface. Using quantum logic to improve readout fidelity and a surface-treatment technique to extend the spin coherence time of shallow nitrogen-vacancy centers, we demonstrate magnetic field sensitivity sufficient to detect individual proton spins within 1 second of integration. This gain in sensitivity enables high-confidence detection of individual proteins and allows us to observe spectral features that reveal information about their chemical composition. Copyright © 2016, American Association for the Advancement of Science.
ON THE USE OF SHOT NOISE FOR PHOTON COUNTING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zmuidzinas, Jonas, E-mail: jonas@caltech.edu
Lieu et al. have recently claimed that it is possible to substantially improve the sensitivity of radio-astronomical observations. In essence, their proposal is to make use of the intensity of the photon shot noise as a measure of the photon arrival rate. Lieu et al. provide a detailed quantum-mechanical calculation of a proposed measurement scheme that uses two detectors and conclude that this scheme avoids the sensitivity degradation that is associated with photon bunching. If correct, this result could have a profound impact on radio astronomy. Here I present a detailed analysis of the sensitivity attainable using shot-noise measurement schemes that use either one or two detectors, and demonstrate that neither scheme can avoid the photon bunching penalty. I perform both semiclassical and fully quantum calculations of the sensitivity, obtaining consistent results, and provide a formal proof of the equivalence of these two approaches. These direct calculations are furthermore shown to be consistent with an indirect argument based on a correlation method that establishes an independent limit to the sensitivity of shot-noise measurement schemes. Furthermore, these calculations are directly applicable to the regime of interest identified by Lieu et al. Collectively, these results conclusively demonstrate that the photon-bunching sensitivity penalty applies to shot-noise measurement schemes just as it does to ordinary photon counting, in contradiction to the fundamental claim made by Lieu et al. The source of this contradiction is traced to a logical fallacy in their argument.
Novel optical gyroscope: proof of principle demonstration and future scope
Srivastava, Shailesh; Rao D. S., Shreesha; Nandakumar, Hari
2016-01-01
We report the first proof-of-principle demonstration of the resonant optical gyroscope with reflector that we have recently proposed. The device is very different from traditional optical gyroscopes since it uses the inherent coupling between the clockwise and counterclockwise propagating waves to sense the rotation. Our demonstration confirms our theoretical analysis and simulations. We also demonstrate a novel method of biasing the gyroscope using orthogonal polarization states. The simplicity of the structure and the readout method, the theoretically predicted high sensitivities (better than 0.001 deg/hr), and the possibility of further performance enhancement using a related laser based active device, all have immense potential for attracting fresh research and technological initiatives. PMID:27694987
Validity, sensitivity and specificity of the mentation, behavior and mood subscale of the UPDRS.
Holroyd, Suzanne; Currie, Lillian J; Wooten, G Frederick
2008-06-01
The unified Parkinson's disease rating scale (UPDRS) is the most widely used tool to rate the severity and the stage of Parkinson's disease (PD). However, the mentation, behavior and mood (MBM) subscale of the UPDRS has received little investigation regarding its validity and sensitivity. Three items of this subscale were compared to criterion tests to examine validity, sensitivity and specificity. Ninety-seven patients with idiopathic PD were assessed on the UPDRS. Scores on three items of the MBM subscale, intellectual impairment, thought disorder and depression, were compared to criterion tests, the telephone interview for cognition status (TICS), psychiatric assessment for psychosis and the geriatric depression scale (GDS). Non-parametric tests of association were performed to examine concurrent validity of the MBM items. The sensitivities, specificities and optimal cutoff scores for each MBM item were estimated by receiver operating characteristic (ROC) curve analysis. The MBM items demonstrated low to moderate correlation with the criterion tests, and the sensitivity and specificity were not strong. Even using a score of 7.0 on the items of the MBM demonstrated a sensitivity/specificity of only 0.19/0.48 for intellectual impairment, 0.60/0.72 for thought disorder and 0.61/0.87 for depression. Using a more appropriate cutoff of 2.0 revealed sensitivities of 0.01, 0.38 and 0.13 respectively. The MBM subscale items of intellectual impairment, thought disorder and depression are not appropriate for screening or diagnostic purposes. Tools such as the TICS and the GDS should be considered instead.
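Where ROC-based cutoff selection is unfamiliar, the minimal sketch below shows how sensitivity and specificity can be tabulated across candidate cutoffs for a single rating-scale item against a binary criterion test, with the Youden index as one simple summary. All scores, criterion labels, and cutoffs are hypothetical and are not taken from the study above.

```python
import numpy as np

def roc_cutoff_table(scores, criterion, cutoffs):
    """Sensitivity/specificity of the rule 'score >= cutoff' against a binary criterion."""
    rows = []
    for c in cutoffs:
        pred = scores >= c
        tp = np.sum(pred & (criterion == 1))
        fn = np.sum(~pred & (criterion == 1))
        tn = np.sum(~pred & (criterion == 0))
        fp = np.sum(pred & (criterion == 0))
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        rows.append((c, sens, spec, sens + spec - 1.0))  # last column: Youden index
    return rows

# Hypothetical data: an item scored 0-4 and a binary criterion test for 97 patients
rng = np.random.default_rng(0)
criterion = rng.integers(0, 2, size=97)
scores = np.clip(criterion * 1.2 + rng.normal(1.0, 1.0, size=97), 0, 4).round()

for cutoff, sens, spec, youden in roc_cutoff_table(scores, criterion, cutoffs=[1, 2, 3, 4]):
    print(f"cutoff >= {cutoff}: sens={sens:.2f} spec={spec:.2f} Youden={youden:.2f}")
```

In practice the cutoff would be chosen by inspecting the whole table (or the full ROC curve), not only the Youden maximum, depending on whether screening or diagnosis is the goal.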
NASA Astrophysics Data System (ADS)
Wang, Guoqing; Bu, Tong; Zako, Tamotsu; Watanabe-Tamaki, Ryoko; Tanaka, Takuo; Maeda, Mizuo
2017-09-01
Due to the potential of gold nanoparticle (AuNP)-based trace analysis, the discrimination of small AuNP clusters with different assembling stoichiometry is a subject of fundamental and technological importance. Here we prepare oligomerized AuNPs with controlled stoichiometry through DNA-directed assembly, and demonstrate that AuNP monomers, dimers and trimers can be clearly distinguished using dark field microscopy (DFM). The scattering intensity of AuNP structures with stoichiometry ranging from 1 to 3 agrees well with our theoretical calculations. This study demonstrates the potential of utilizing the DFM approach in ultra-sensitive detection as well as the use of DNA-directed assembly for plasmonic nano-architectures.
Sensitivity Enhancement of FBG-Based Strain Sensor.
Li, Ruiya; Chen, Yiyang; Tan, Yuegang; Zhou, Zude; Li, Tianliang; Mao, Jian
2018-05-17
A novel fiber Bragg grating (FBG)-based strain sensor with high sensitivity is presented in this paper. The proposed FBG-based strain sensor enhances sensitivity by pasting the FBG on a substrate with a lever structure. This mechanical configuration amplifies the strain of the FBG to enhance overall sensitivity. As this mechanical configuration has a high stiffness, the proposed sensor can achieve a high resonant frequency and a wide dynamic working range. The sensing principle is presented, and the corresponding theoretical model is derived and validated. Experimental results demonstrate that the developed FBG-based strain sensor achieves an enhanced strain sensitivity of 6.2 pm/με, which is consistent with the theoretical analysis result. The strain sensitivity of the developed sensor is 5.2 times the strain sensitivity of a bare fiber Bragg grating strain sensor. The dynamic characteristics of this sensor are investigated through the finite element method (FEM) and experimental tests. The developed sensor exhibits an excellent strain-sensitivity-enhancing property in a wide frequency range. The proposed high-sensitivity FBG-based strain sensor can be used for small-amplitude micro-strain measurement in harsh industrial environments.
Sensitivity Enhancement of FBG-Based Strain Sensor
Chen, Yiyang; Tan, Yuegang; Zhou, Zude; Mao, Jian
2018-01-01
A novel fiber Bragg grating (FBG)-based strain sensor with high sensitivity is presented in this paper. The proposed FBG-based strain sensor enhances sensitivity by pasting the FBG on a substrate with a lever structure. This mechanical configuration amplifies the strain of the FBG to enhance overall sensitivity. As this mechanical configuration has a high stiffness, the proposed sensor can achieve a high resonant frequency and a wide dynamic working range. The sensing principle is presented, and the corresponding theoretical model is derived and validated. Experimental results demonstrate that the developed FBG-based strain sensor achieves an enhanced strain sensitivity of 6.2 pm/με, which is consistent with the theoretical analysis result. The strain sensitivity of the developed sensor is 5.2 times the strain sensitivity of a bare fiber Bragg grating strain sensor. The dynamic characteristics of this sensor are investigated through the finite element method (FEM) and experimental tests. The developed sensor exhibits an excellent strain-sensitivity-enhancing property in a wide frequency range. The proposed high-sensitivity FBG-based strain sensor can be used for small-amplitude micro-strain measurement in harsh industrial environments. PMID:29772826
Mehta, Milap; Tserentsoodol, Nomingerel; Postlethwait, John H.; Rebrik, Tatiana I.
2013-01-01
The ligand sensitivity of cGMP-gated (CNG) ion channels in cone photoreceptors is modulated by CNG-modulin, a Ca2+-binding protein. We investigated the functional role of CNG-modulin in phototransduction in vivo in morpholino-mediated gene knockdown zebrafish. Through comparative genomic analysis, we identified the orthologue gene of CNG-modulin in zebrafish, eml1, an ancient gene present in the genome of all vertebrates sequenced to date. We compare the photoresponses of wild-type cones with those of cones that do not express the EML1 protein. In the absence of EML1, dark-adapted cones are ∼5.3-fold more light sensitive than wild-type cones. Previous qualitative studies in several nonmammalian species have shown that immediately after the onset of continuous illumination, cones are less light sensitive than in darkness, but sensitivity then recovers over the following 15–20 s. We characterize light sensitivity recovery in continuously illuminated wild-type zebrafish cones and demonstrate that sensitivity recovery does not occur in the absence of EML1. PMID:24198367
Korenbrot, Juan I; Mehta, Milap; Tserentsoodol, Nomingerel; Postlethwait, John H; Rebrik, Tatiana I
2013-11-06
The ligand sensitivity of cGMP-gated (CNG) ion channels in cone photoreceptors is modulated by CNG-modulin, a Ca(2+)-binding protein. We investigated the functional role of CNG-modulin in phototransduction in vivo in morpholino-mediated gene knockdown zebrafish. Through comparative genomic analysis, we identified the orthologue gene of CNG-modulin in zebrafish, eml1, an ancient gene present in the genome of all vertebrates sequenced to date. We compare the photoresponses of wild-type cones with those of cones that do not express the EML1 protein. In the absence of EML1, dark-adapted cones are ∼5.3-fold more light sensitive than wild-type cones. Previous qualitative studies in several nonmammalian species have shown that immediately after the onset of continuous illumination, cones are less light sensitive than in darkness, but sensitivity then recovers over the following 15-20 s. We characterize light sensitivity recovery in continuously illuminated wild-type zebrafish cones and demonstrate that sensitivity recovery does not occur in the absence of EML1.
A retrospective analysis of preoperative staging modalities for oral squamous cell carcinoma.
Kähling, Ch; Langguth, T; Roller, F; Kroll, T; Krombach, G; Knitschke, M; Streckbein, Ph; Howaldt, H P; Wilbrand, J-F
2016-12-01
An accurate preoperative assessment of cervical lymph node status is a prerequisite for individually tailored cancer therapies in patients with oral squamous cell carcinoma. The detection of malignant spread and its treatment crucially influence the prognosis. The aim of the present study was to analyze the different staging modalities used among patients with a diagnosis of primary oral squamous cell carcinoma between 2008 and 2015. An analysis of preoperative staging findings, collected by clinical palpation, ultrasound, and computed tomography (CT), was performed. The results obtained were compared with the results of the final histopathological findings of the neck dissection specimens. A statistical analysis using McNemar's test was performed. The sensitivity of CT for the detection of malignant cervical tumor spread was 74.5%. Ultrasound achieved a sensitivity of 60.8%. Both CT and ultrasound demonstrated significantly higher sensitivity than clinical palpation, which had a sensitivity of 37.1%. No significant difference was observed between CT and ultrasound. A combination of different staging modalities increased the sensitivity significantly compared with ultrasound staging alone. No significant difference in sensitivity was found between the combined use of different staging modalities and CT staging alone. The highest sensitivity, of 80.0%, was obtained by a combination of all three staging modalities: clinical palpation, ultrasound and CT. The present study indicates that CT has an essential role in the preoperative staging of patients with oral squamous cell carcinoma. Its use not only significantly increases the sensitivity of cervical lymph node metastasis detection but also offers a preoperative assessment of local tumor spread and resection borders. An additional non-invasive cervical lymph node examination increases the sensitivity of the tumor staging process and reduces the risk of occult metastasis. Copyright © 2016 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Fu, Rongxin; Li, Qi; Zhang, Junqi; Wang, Ruliang; Lin, Xue; Xue, Ning; Su, Ya; Jiang, Kai; Huang, Guoliang
2016-10-01
Label-free point mutation detection is particularly important in biomedical research and clinical diagnosis, since gene mutations occur naturally and can give rise to highly fatal diseases. In this paper, a label-free and highly sensitive approach for point mutation detection based on hyperspectral interferometry is proposed. A hybridization strategy is designed to discriminate a single-base substitution using a sequence-specific DNA ligase. Double-strand structures form only if the added oligonucleotides are perfectly paired to the probe sequence. The proposed approach makes full use of the inherent conformation of double-strand DNA molecules on the substrate, and a spectrum analysis method is established to resolve the sub-nanoscale thickness variation, which benefits highly sensitive mutation detection. The limit of detection reaches 4 pg/mm2 according to the experimental results. Detection of a lung cancer gene point mutation was demonstrated, proving the high selectivity and multiplex analysis capability of the proposed biosensor.
Bashyam, Ashvin; Li, Matthew; Cima, Michael J
2018-07-01
Single-sided NMR has the potential for broad utility and has found applications in healthcare, materials analysis, food quality assurance, and the oil and gas industry. These sensors require a remote, strong, uniform magnetic field to perform high sensitivity measurements. We demonstrate a new permanent magnet geometry, the Unilateral Linear Halbach, that combines design principles from "sweet-spot" and linear Halbach magnets to achieve this goal through more efficient use of magnetic flux. We perform sensitivity analysis using numerical simulations to produce a framework for Unilateral Linear Halbach design and assess tradeoffs between design parameters. Additionally, the use of hundreds of small, discrete magnets within the assembly allows for a tunable design, improved robustness to variability in magnetization strength, and increased safety during construction. Experimental validation using a prototype magnet shows close agreement with the simulated magnetic field. The Unilateral Linear Halbach magnet increases the sensitivity, portability, and versatility of single-sided NMR. Copyright © 2018 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Bashyam, Ashvin; Li, Matthew; Cima, Michael J.
2018-07-01
Single-sided NMR has the potential for broad utility and has found applications in healthcare, materials analysis, food quality assurance, and the oil and gas industry. These sensors require a remote, strong, uniform magnetic field to perform high sensitivity measurements. We demonstrate a new permanent magnet geometry, the Unilateral Linear Halbach, that combines design principles from "sweet-spot" and linear Halbach magnets to achieve this goal through more efficient use of magnetic flux. We perform sensitivity analysis using numerical simulations to produce a framework for Unilateral Linear Halbach design and assess tradeoffs between design parameters. Additionally, the use of hundreds of small, discrete magnets within the assembly allows for a tunable design, improved robustness to variability in magnetization strength, and increased safety during construction. Experimental validation using a prototype magnet shows close agreement with the simulated magnetic field. The Unilateral Linear Halbach magnet increases the sensitivity, portability, and versatility of single-sided NMR.
An evaluation of computer-aided disproportionality analysis for post-marketing signal detection.
Lehman, H P; Chen, J; Gould, A L; Kassekert, R; Beninger, P R; Carney, R; Goldberg, M; Goss, M A; Kidos, K; Sharrar, R G; Shields, K; Sweet, A; Wiholm, B E; Honig, P K
2007-08-01
To understand the value of computer-aided disproportionality analysis (DA) in relation to current pharmacovigilance signal detection methods, four products were retrospectively evaluated by applying an empirical Bayes method to Merck's post-marketing safety database. Findings were compared with the prior detection of labeled post-marketing adverse events. Disproportionality ratios (empirical Bayes geometric mean lower 95% bounds for the posterior distribution (EBGM05)) were generated for product-event pairs. Overall (1993-2004 data, EBGM05 ≥ 2, individual terms) results of signal detection using DA compared to standard methods were sensitivity, 31.1%; specificity, 95.3%; and positive predictive value, 19.9%. Using groupings of synonymous labeled terms, sensitivity improved (40.9%). More of the adverse events detected by both methods were detected earlier using DA and grouped (versus individual) terms. With 1939-2004 data, diagnostic properties were similar to those from 1993 to 2004. DA methods using Merck's safety database demonstrate sufficient sensitivity and specificity to be considered for use as an adjunct to conventional signal detection methods.
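As a generic illustration of how such operating characteristics are tallied against a reference standard, the sketch below computes sensitivity, specificity, and positive predictive value for a thresholded screening rule over a set of product-event pairs. The pair sets and counts are invented for illustration and do not reproduce the study's data.

```python
def screening_performance(flagged, reference, universe):
    """Sensitivity, specificity, and PPV of a screening rule versus a reference standard.

    flagged   : set of product-event pairs flagged by the rule (e.g. score >= threshold)
    reference : set of pairs considered true signals by conventional methods
    universe  : set of all evaluated product-event pairs
    """
    tp = len(flagged & reference)
    fp = len(flagged - reference)
    fn = len(reference - flagged)
    tn = len(universe - flagged - reference)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    return sensitivity, specificity, ppv

# Hypothetical pairs, for illustration only
universe = {f"pair{i}" for i in range(200)}
reference = {f"pair{i}" for i in range(45)}                               # 45 "true" signals
flagged = {f"pair{i}" for i in range(14)} | {f"pair{i}" for i in range(150, 190)}

print(screening_performance(flagged, reference, universe))
```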
Kook, Sung-Ho; Son, Young-Ok; Han, Seong-Kyu; Lee, Hyung-Soon; Kim, Beom-Tae; Jang, Yong-Suk; Choi, Ki-Choon; Lee, Keun-Soo; Kim, So-Soon; Lim, Ji-Young; Jeon, Young-Mi; Kim, Jong-Ghee; Lee, Jeong-Chae
2005-11-30
Epstein-Barr virus (EBV) infects more than 90% of the world's population and has a potential oncogenic nature. A histone deacetylase (HDAC) inhibitor, trichostatin A (TSA), has shown potential ability in cancer chemoprevention and treatment, but its effect on EBV-infected Akata cells has not been examined. This study investigated the effect of TSA on the proliferation and apoptosis of the cells. TSA inhibited cell growth and induced cytotoxicity in the EBV-infected Akata cells. TSA treatment sensitively induced apoptosis in the cells, as demonstrated by the increased number of positively stained cells in the TUNEL assay, the migration of many cells to the sub-G0/G1 phase in flow cytometric analysis, and the ladder formation of genomic DNA. Western blot analysis showed that caspase-dependent pathways are involved in the TSA-induced apoptosis of EBV-infected Akata cells. Overall, this study shows that EBV-infected B lymphomas are quite sensitive to TSA-provoked apoptosis.
Taki, M; Signorini, A; Oton, C J; Nannipieri, T; Di Pasquale, F
2013-10-15
We experimentally demonstrate the use of cyclic pulse coding for distributed strain and temperature measurements in hybrid Raman/Brillouin optical time-domain analysis (BOTDA) optical fiber sensors. The highly integrated proposed solution effectively addresses the strain/temperature cross-sensitivity issue affecting standard BOTDA sensors, allowing for simultaneous meter-scale strain and temperature measurements over 10 km of standard single mode fiber using a single narrowband laser source only.
Homšak, Matjaž; Silar, Mira; Berce, Vojko; Tomazin, Maja; Skerbinjek-Kavalar, Maja; Celesnik, Nina; Košnik, Mitja; Korošec, Peter
2013-01-01
Peanut sensitization is common in children. However, it is difficult to assess which children will react mildly and which severely. This study evaluated the relevance of basophil allergen sensitivity testing to distinguish the severity of peanut allergy in children. Twenty-seven peanut-sensitized children with symptoms varying from mild symptoms to severe anaphylaxis underwent peanut CD63 dose-response curve analysis with the inclusion of basophil allergen sensitivity calculation (CD-sens) and peanut component immunoglobulin E (IgE) testing. Eleven children who had experienced anaphylaxis to peanuts showed a markedly higher peanut CD63 response at submaximal allergen concentrations and CD-sens (median 1,667 vs. 0.5; p < 0.0001) than 16 children who experienced a milder reaction. Furthermore, a negative or low CD-sens to peanuts unambiguously excluded anaphylactic peanut allergy. Children with anaphylaxis have higher levels of Ara h 1, 2, 3 and 9 IgE, but comparable levels of IgE to Ara h 8 and whole-peanut extract. The diagnostic specificity calculated with a receiver operating characteristic analysis reached 100% for CD-sens and 73% for Ara h 2. We demonstrated that severe peanut allergy is significantly associated with higher basophil allergen sensitivity. This cellular test should facilitate a more accurate diagnosis of peanut allergy. © 2013 S. Karger AG, Basel.
The Validity of Conscientiousness Is Overestimated in the Prediction of Job Performance.
Kepes, Sven; McDaniel, Michael A
2015-01-01
Sensitivity analyses refer to investigations of the degree to which the results of a meta-analysis remain stable when conditions of the data or the analysis change. To the extent that results remain stable, one can refer to them as robust. Sensitivity analyses are rarely conducted in the organizational science literature. Despite conscientiousness being a valued predictor in employment selection, sensitivity analyses have not been conducted with respect to meta-analytic estimates of the correlation (i.e., validity) between conscientiousness and job performance. To address this deficiency, we reanalyzed the largest collection of conscientiousness validity data in the personnel selection literature and conducted a variety of sensitivity analyses. Publication bias analyses demonstrated that the validity of conscientiousness is moderately overestimated (by around 30%; a correlation difference of about .06). The misestimation of the validity appears to be due primarily to suppression of small effect sizes in the journal literature. These inflated validity estimates result in an overestimate of the dollar utility of personnel selection by millions of dollars and should be of considerable concern for organizations. The fields of management and applied psychology seldom conduct sensitivity analyses. Through the use of sensitivity analyses, this paper documents that the existing literature overestimates the validity of conscientiousness in the prediction of job performance. Our data show that effect sizes from journal articles are largely responsible for this overestimation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Heng; Ye, Ming; Walker, Anthony P.
Hydrological models are always composed of multiple components that represent processes key to intended model applications. When a process can be simulated by multiple conceptual-mathematical models (process models), model uncertainty in representing the process arises. While global sensitivity analysis methods have been widely used for identifying important processes in hydrologic modeling, the existing methods consider only parametric uncertainty but ignore the model uncertainty for process representation. To address this problem, this study develops a new method to probe multimodel process sensitivity by integrating the model averaging methods into the framework of variance-based global sensitivity analysis, given that the model averaging methods quantify both parametric and model uncertainty. A new process sensitivity index is derived as a metric of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and model parameters. For demonstration, the new index is used to evaluate the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is also simulated by two models of different parameterizations of hydraulic conductivity; each process model has its own random parameters. The new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.
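A minimal sketch of the general idea, computing a variance-based process sensitivity index by nested Monte Carlo where a "process" realization bundles a model choice and that model's random parameters, is given below. The two toy recharge and geology models, their parameter distributions, and the output function are hypothetical stand-ins, not the study's models.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_recharge():
    """Draw a recharge-process realization: (model index, its random parameter)."""
    m = rng.integers(0, 2)                       # equal prior weight on the two models
    theta = rng.normal(0.2, 0.03) if m == 0 else rng.normal(1.3, 0.1)
    return m, theta

def sample_geology():
    """Draw a geology-process realization: (model index, log10 hydraulic conductivity)."""
    m = rng.integers(0, 2)
    logk = rng.normal(-4.0, 0.3) if m == 0 else rng.uniform(-4.5, -3.5)
    return m, logk

def model_output(recharge, geology, precipitation=2.0):
    """Toy surrogate for a transport prediction driven by recharge and conductivity."""
    m_r, theta = recharge
    m_g, logk = geology
    r = theta * precipitation if m_r == 0 else 0.05 * precipitation ** theta
    return r / 10 ** logk

def process_sensitivity(sample_A, sample_B, combine, n_outer=300, n_inner=300):
    """Var_A( E_B[y | A] ) / Var(y), where A bundles model choice and parameters."""
    cond_means, all_y = [], []
    for _ in range(n_outer):
        a = sample_A()
        ys = [combine(a, sample_B()) for _ in range(n_inner)]
        cond_means.append(np.mean(ys))
        all_y.extend(ys)
    return np.var(cond_means) / np.var(all_y)

s_recharge = process_sensitivity(sample_recharge, sample_geology, model_output)
s_geology = process_sensitivity(sample_geology, sample_recharge,
                                lambda g, r: model_output(r, g))
print(f"process sensitivity: recharge={s_recharge:.2f}, geology={s_geology:.2f}")
```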
The Validity of Conscientiousness Is Overestimated in the Prediction of Job Performance
2015-01-01
Introduction Sensitivity analyses refer to investigations of the degree to which the results of a meta-analysis remain stable when conditions of the data or the analysis change. To the extent that results remain stable, one can refer to them as robust. Sensitivity analyses are rarely conducted in the organizational science literature. Despite conscientiousness being a valued predictor in employment selection, sensitivity analyses have not been conducted with respect to meta-analytic estimates of the correlation (i.e., validity) between conscientiousness and job performance. Methods To address this deficiency, we reanalyzed the largest collection of conscientiousness validity data in the personnel selection literature and conducted a variety of sensitivity analyses. Results Publication bias analyses demonstrated that the validity of conscientiousness is moderately overestimated (by around 30%; a correlation difference of about .06). The misestimation of the validity appears to be due primarily to suppression of small effect sizes in the journal literature. These inflated validity estimates result in an overestimate of the dollar utility of personnel selection by millions of dollars and should be of considerable concern for organizations. Conclusion The fields of management and applied psychology seldom conduct sensitivity analyses. Through the use of sensitivity analyses, this paper documents that the existing literature overestimates the validity of conscientiousness in the prediction of job performance. Our data show that effect sizes from journal articles are largely responsible for this overestimation. PMID:26517553
Sensitivity Analysis in Sequential Decision Models.
Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet
2017-02-01
Sequential decision problems are frequently encountered in medical decision making, which are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness to pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
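The sketch below illustrates one way such a probabilistic sensitivity analysis could look for a toy two-state MDP: solve for the base-case optimal policy by value iteration, resample the uncertain parameters jointly, re-solve, and report the fraction of samples in which the base-case policy remains optimal (roughly the quantity a policy acceptability curve would summarize). The MDP structure, state and action names, rewards, and parameter distributions are all hypothetical and are not taken from the paper's case study.

```python
import numpy as np

rng = np.random.default_rng(0)

ACTIONS = ("treat", "wait")
STATES = (0, 1)          # 0 = moderate disease, 1 = severe disease (hypothetical)
GAMMA = 0.97

def solve_policy(p_progress, reward_treat, reward_wait, n_iter=200):
    """Value iteration for a toy 2-state MDP; returns the greedy policy per state."""
    V = np.zeros(2)
    for _ in range(n_iter):
        Q = np.zeros((2, 2))
        for s in STATES:
            # action 0: treat, stays in the current state
            Q[s, 0] = reward_treat[s] + GAMMA * V[s]
            # action 1: wait, may progress from state 0 to state 1
            nxt = p_progress if s == 0 else 1.0
            Q[s, 1] = reward_wait[s] + GAMMA * ((1 - nxt) * V[0] + nxt * V[1])
        V = Q.max(axis=1)
    return tuple(ACTIONS[a] for a in Q.argmax(axis=1))

# Base-case (point-estimate) parameters: all values are hypothetical
base_policy = solve_policy(p_progress=0.15,
                           reward_treat=np.array([0.7, 0.4]),
                           reward_wait=np.array([0.9, 0.2]))

# Probabilistic sensitivity analysis: sample parameters jointly and re-solve
n, agree = 1000, 0
for _ in range(n):
    policy = solve_policy(p_progress=rng.beta(15, 85),
                          reward_treat=rng.normal([0.7, 0.4], 0.05),
                          reward_wait=rng.normal([0.9, 0.2], 0.05))
    agree += policy == base_policy
print(f"base policy: {base_policy}; probability it stays optimal: {agree / n:.2f}")
```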
Blackmore, C Craig; Terasawa, Teruhiko
2006-02-01
Error in radiology can be reduced by standardizing the interpretation of imaging studies to the optimum sensitivity and specificity. In this report, the authors demonstrate how the optimal interpretation of appendiceal computed tomography (CT) can be determined and how it varies in different clinical scenarios. Utility analysis and receiver operating characteristic (ROC) curve modeling were used to determine the trade-off between false-positive and false-negative test results to determine the optimal operating point on the ROC curve for the interpretation of appendicitis CT. Modeling was based on a previous meta-analysis for the accuracy of CT and on literature estimates of the utilities of various health states. The posttest probability of appendicitis was derived using Bayes's theorem. At a low prevalence of disease (screening), appendicitis CT should be interpreted at high specificity (97.7%), even at the expense of lower sensitivity (75%). Conversely, at a high probability of disease, high sensitivity (97.4%) is preferred (specificity 77.8%). When the clinical diagnosis of appendicitis is equivocal, CT interpretation should emphasize both sensitivity and specificity (sensitivity 92.3%, specificity 91.5%). Radiologists can potentially decrease medical error and improve patient health by varying the interpretation of appendiceal CT on the basis of the clinical probability of appendicitis. This report is an example of how utility analysis can be used to guide radiologists in the interpretation of imaging studies and provide guidance on appropriate targets for the standardization of interpretation.
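To make the decision-analytic machinery concrete, the sketch below combines Bayes' theorem for post-test probability with a utility-weighted sweep over operating points on a binormal ROC model; with these assumed utilities, the expected-utility maximum lands at a high-specificity operating point when prevalence is low and a high-sensitivity one when prevalence is high, mirroring the qualitative conclusion above. The utilities, prevalences, test characteristics, and ROC separability parameter are assumptions for illustration, not the paper's inputs.

```python
import numpy as np
from scipy.stats import norm

def posttest_probability(pretest, sensitivity, specificity, positive=True):
    """Bayes' theorem for a binary test result."""
    if positive:
        return (sensitivity * pretest) / (sensitivity * pretest + (1 - specificity) * (1 - pretest))
    return ((1 - sensitivity) * pretest) / ((1 - sensitivity) * pretest + specificity * (1 - pretest))

def optimal_operating_point(prevalence, u_tp, u_fp, u_tn, u_fn, d_prime=2.5):
    """Sweep thresholds on a binormal ROC model and maximize expected utility."""
    thresholds = np.linspace(-4, 6, 2001)
    sens = 1 - norm.cdf(thresholds, loc=d_prime, scale=1.0)   # diseased score distribution
    spec = norm.cdf(thresholds, loc=0.0, scale=1.0)           # non-diseased score distribution
    eu = (prevalence * (sens * u_tp + (1 - sens) * u_fn)
          + (1 - prevalence) * (spec * u_tn + (1 - spec) * u_fp))
    i = int(np.argmax(eu))
    return sens[i], spec[i]

# Hypothetical utilities (0 = worst outcome, 1 = best) and clinical scenarios
u = dict(u_tp=0.95, u_fp=0.85, u_tn=1.0, u_fn=0.60)
for label, prevalence in [("screening", 0.05), ("equivocal", 0.5), ("high suspicion", 0.9)]:
    s, sp = optimal_operating_point(prevalence, **u)
    print(f"{label:>14}: operate at sensitivity={s:.2f}, specificity={sp:.2f}")

# Hypothetical test characteristics for the post-test probability example
print("post-test P(disease | positive CT, pretest 0.5):",
      round(posttest_probability(0.5, 0.94, 0.95), 3))
```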
Hutsell, Blake A; Negus, S Stevens; Banks, Matthew L
2015-01-01
We have previously demonstrated reductions in cocaine choice produced by either continuous 14-day phendimetrazine and d-amphetamine treatment or removing cocaine availability under a cocaine vs. food choice procedure in rhesus monkeys. The aim of the present investigation was to apply the concatenated generalized matching law (GML) to cocaine vs. food choice dose-effect functions incorporating sensitivity to both the relative magnitude and price of each reinforcer. Our goal was to determine potential behavioral mechanisms underlying pharmacological treatment efficacy to decrease cocaine choice. A multi-model comparison approach was used to characterize dose- and time-course effects of both pharmacological and environmental manipulations on sensitivity to reinforcement. GML models provided an excellent fit of the cocaine choice dose-effect functions in individual monkeys. Reductions in cocaine choice by both pharmacological and environmental manipulations were principally produced by systematic decreases in sensitivity to reinforcer price and non-systematic changes in sensitivity to reinforcer magnitude. The modeling approach used provides a theoretical link between the experimental analysis of choice and pharmacological treatments being evaluated as candidate 'agonist-based' medications for cocaine addiction. The analysis suggests that monoamine releaser treatment efficacy to decrease cocaine choice was mediated by selectively increasing the relative price of cocaine. Overall, the net behavioral effect of these pharmacological treatments was to increase substitutability of food pellets, a nondrug reinforcer, for cocaine. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
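A rough sketch of fitting one common form of the concatenated generalized matching law by least squares is shown below; the functional form (log behavior ratio as a weighted sum of a log magnitude ratio and a log price ratio plus a bias term), the variable names, and the choice data are assumptions for illustration rather than the authors' exact model or data.

```python
import numpy as np

def fit_gml(choice_ratio, magnitude_ratio, price_ratio):
    """Least-squares fit of an assumed concatenated generalized matching law:
       log(B_drug/B_food) = s_M*log(M_drug/M_food) - s_P*log(P_drug/P_food) + log(bias)."""
    X = np.column_stack([np.log(magnitude_ratio),
                         -np.log(price_ratio),
                         np.ones(len(choice_ratio))])
    y = np.log(choice_ratio)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    s_magnitude, s_price, log_bias = coef
    return s_magnitude, s_price, log_bias

# Hypothetical choice data across a dose-effect function (illustration only)
magnitude_ratio = np.array([0.1, 0.3, 1.0, 3.0, 10.0])   # relative reinforcer magnitude
price_ratio = np.array([2.0, 2.0, 1.0, 1.0, 0.5])        # relative response requirement
choice_ratio = np.array([0.02, 0.1, 1.0, 3.5, 25.0])     # drug/food response allocation

print(fit_gml(choice_ratio, magnitude_ratio, price_ratio))
```

In a multi-model comparison, variants of this form (for example, dropping the price term or the bias term) would be fit to each condition and compared with an information criterion.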
The clinical relevance of birch pollen profilin cross-reactivity in sensitized patients.
Wölbing, F; Kunz, J; Kempf, W E; Grimmel, C; Fischer, J; Biedermann, T
2017-04-01
Overlapping seasons and cross-reactivity, especially to grass pollen profilin, can hamper the diagnosis of birch pollen allergy. To identify the primary sensitizing allergen and the clinical relevance of cross-sensitization, we correlated sensitization profiles with in vitro and in vivo tests, symptom scores, and pollen counts. A total of 433 patients with positive skin prick test (SPT) to birch pollen were analyzed regarding IgE to major birch and grass pollen allergens Bet v 1 and Phl p 1/p 5 and the profilins Bet v 2 and Phl p 12. Subgroups were analyzed by basophil activation test (BAT) and CAP-FEIA-based cross- and self-inhibition tests. A total of 349 patients were sensitized to Bet v 1, 44 patients to both Bet v 1 and Bet v 2, and 15 patients to Bet v 2 only. From Bet v 2-sensitized patients, 40 were also sensitized to Phl p 12. Ex vivo, Bet v 2 and Phl p 12 induced dose-dependent activation in basophils of these patients. Cross- and self-inhibition tests with both allergens confirmed cross-reactivity. However, semiquantitative analysis of SPTs demonstrated markedly increased reactivity to grass compared to birch pollen extract in Bet v 2 only sensitized patients. Accordingly, in most of those patients, clinical symptoms precisely correlated with grass pollen counts. Identification of the clinically relevant and sensitizing allergen needs correlation of actual pollen counts with clinical symptoms and sensitization status to major allergens. Semiquantitative analysis of SPT or BAT and determining profilin-specific IgE can contribute to making the diagnosis. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Wilson, Rachel L.; Blackman, Christopher S.; Carmalt, Claire J.; Stanoiu, Adelina; Di Maggio, Francesco
2018-01-01
Analyte sensitivity for gas sensors based on semiconducting metal oxides should be highly dependent on the film thickness, particularly when that thickness is on the order of the Debye length. This thickness dependence has previously been demonstrated for SnO2 and inferred for TiO2. In this paper, TiO2 thin films have been prepared by Atomic Layer Deposition (ALD) using titanium isopropoxide and water as precursors. The deposition process was performed on standard alumina gas sensor platforms and microscope slides (for analysis purposes), at a temperature of 200 °C. The TiO2 films were exposed to different concentrations of CO, CH4, NO2, NH3 and SO2 to evaluate their gas sensitivities. These experiments showed that the TiO2 film thickness played a dominant role within the conduction mechanism and the pattern of response for the electrical resistance towards CH4 and NH3 exposure indicated typical n-type semiconducting behavior. The effect of relative humidity on the gas sensitivity has also been demonstrated. PMID:29494504
NASA Astrophysics Data System (ADS)
Zhao, Yang; Dai, Rui-Na; Xiao, Xiang; Zhang, Zong; Duan, Lian; Li, Zheng; Zhu, Chao-Zhe
2017-02-01
Two-person neuroscience, a perspective in understanding human social cognition and interaction, involves designing immersive social interaction experiments as well as simultaneously recording brain activity of two or more subjects, a process termed "hyperscanning." Using newly developed imaging techniques, the interbrain connectivity or hyperlink of various types of social interaction has been revealed. Functional near-infrared spectroscopy (fNIRS)-hyperscanning provides a more naturalistic environment for experimental paradigms of social interaction and has recently drawn much attention. However, most fNIRS-hyperscanning studies have computed hyperlinks using sensor data directly while ignoring the fact that the sensor-level signals contain confounding noises, which may lead to a loss of sensitivity and specificity in hyperlink analysis. In this study, on the basis of independent component analysis (ICA), a source-level analysis framework is proposed to investigate the hyperlinks in a fNIRS two-person neuroscience study. The performance of five widely used ICA algorithms in extracting sources of interaction was compared in simulative datasets, and increased sensitivity and specificity of hyperlink analysis by our proposed method were demonstrated in both simulative and real two-person experiments.
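As a loose illustration of source-level (rather than sensor-level) hyperlink analysis, the sketch below simulates two subjects whose channel signals mix a shared interaction source with subject-specific noise, unmixes each subject's channels with ICA, and compares interbrain correlations computed on sensors versus recovered sources. The signal model, mixing matrices, and the use of scikit-learn's FastICA are assumptions for illustration only, not the authors' pipeline.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
n_t = 5000
shared = np.sin(np.linspace(0, 60, n_t)) + 0.3 * rng.normal(size=n_t)   # shared interaction source

def simulate_subject(seed):
    """Three channels: a mixture of the shared source and two subject-specific sources."""
    r = np.random.default_rng(seed)
    own = r.normal(size=(2, n_t))
    sources = np.vstack([shared, own])            # shape (3, n_t)
    mixing = r.uniform(0.2, 1.0, size=(3, 3))     # hypothetical channel mixing
    return (mixing @ sources).T                   # shape (n_t, 3) channel data

X_a, X_b = simulate_subject(1), simulate_subject(2)

def max_abs_corr(U, V):
    """Largest absolute correlation over all cross pairs of columns."""
    return max(abs(np.corrcoef(U[:, i], V[:, j])[0, 1])
               for i in range(U.shape[1]) for j in range(V.shape[1]))

S_a = FastICA(n_components=3, random_state=0).fit_transform(X_a)
S_b = FastICA(n_components=3, random_state=0).fit_transform(X_b)

print("max interbrain correlation, sensor level:", round(max_abs_corr(X_a, X_b), 2))
print("max interbrain correlation, source level:", round(max_abs_corr(S_a, S_b), 2))
```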
Koenig, Helen C; Finkel, Barbara B; Khalsa, Satjeet S; Lanken, Paul N; Prasad, Meeta; Urbani, Richard; Fuchs, Barry D
2011-01-01
Lung protective ventilation reduces mortality in patients with acute lung injury, but underrecognition of acute lung injury has limited its use. We recently validated an automated electronic acute lung injury surveillance system in patients with major trauma in a single intensive care unit. In this study, we assessed the system's performance as a prospective acute lung injury screening tool in a diverse population of intensive care unit patients. Patients were screened prospectively for acute lung injury over 21 wks by the automated system and by an experienced research coordinator who manually screened subjects for enrollment in Acute Respiratory Distress Syndrome Clinical Trials Network (ARDSNet) trials. Performance of the automated system was assessed by comparing its results with the manual screening process. Discordant results were adjudicated blindly by two physician reviewers. In addition, a sensitivity analysis using a range of assumptions was conducted to better estimate the system's performance. The Hospital of the University of Pennsylvania, an academic medical center and ARDSNet center (1994-2006). Intubated patients in medical and surgical intensive care units. None. Of 1270 patients screened, 84 were identified with acute lung injury (incidence of 6.6%). The automated screening system had a sensitivity of 97.6% (95% confidence interval, 96.8-98.4%) and a specificity of 97.6% (95% confidence interval, 96.8-98.4%). The manual screening algorithm had a sensitivity of 57.1% (95% confidence interval, 54.5-59.8%) and a specificity of 99.7% (95% confidence interval, 99.4-100%). Sensitivity analysis demonstrated a range for sensitivity of 75.0-97.6% of the automated system under varying assumptions. Under all assumptions, the automated system demonstrated higher sensitivity than and comparable specificity to the manual screening method. An automated electronic system identified patients with acute lung injury with high sensitivity and specificity in diverse intensive care units of a large academic medical center. Further studies are needed to evaluate the effect of automated prompts that such a system can initiate on the use of lung protective ventilation in patients with acute lung injury.
Interventional MRI: tapering improves the distal sensitivity of the loopless antenna.
Qian, Di; El-Sharkawy, AbdEl-Monem M; Atalar, Ergin; Bottomley, Paul A
2010-03-01
The "loopless antenna" is an interventional MRI detector consisting of a tuned coaxial cable and an extended inner conductor or "whip". A limitation is the poor sensitivity afforded at, and immediately proximal to, its distal end, which is exacerbated by the extended whip length when the whip is uniformly insulated. It is shown here that tapered insulation dramatically improves the distal sensitivity of the loopless antenna by pushing the current sensitivity toward the tip. The absolute signal-to-noise ratio is numerically computed by the electromagnetic method-of-moments for three resonant 3-T antennae with no insulation, uniform insulation, and with linearly tapered insulation. The analysis shows that tapered insulation provides an approximately 400% increase in signal-to-noise ratio in trans-axial planes 1 cm from the tip and a 16-fold increase in the sensitive area as compared to an equivalent, uniformly insulated antenna. These findings are directly confirmed by phantom experiments and by MRI of an aorta specimen. The results demonstrate that numerical electromagnetic signal-to-noise ratio analysis can accurately predict the loopless detector's signal-to-noise ratio and play a central role in optimizing its design. The manifold improvement in distal signal-to-noise ratio afforded by redistributing the insulation should improve the loopless antenna's utility for interventional MRI. (c) 2010 Wiley-Liss, Inc.
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.; Cohen, Gerald A.; Mroz, Zenon
1990-01-01
A uniform variational approach to sensitivity analysis of vibration frequencies and bifurcation loads of nonlinear structures is developed. Two methods of calculating the sensitivities of bifurcation buckling loads and vibration frequencies of nonlinear structures, with respect to stiffness and initial strain parameters, are presented. A direct method requires calculation of derivatives of the prebuckling state with respect to these parameters. An adjoint method bypasses the need for these derivatives by using instead the strain field associated with the second-order postbuckling state. An operator notation is used and the derivation is based on the principle of virtual work. The derivative computations are easily implemented in structural analysis programs. This is demonstrated by examples using a general purpose, finite element program and a shell-of-revolution program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hilfiker, James N.; Stadermann, Michael; Sun, Jianing
It is a well-known challenge to determine refractive index (n) from ultra-thin films where the thickness is less than about 10 nm. In this paper, we discovered an interesting exception to this issue while characterizing spectroscopic ellipsometry (SE) data from isotropic, free-standing polymer films. Ellipsometry analysis shows that both thickness and refractive index can be independently determined for free-standing films as thin as 5 nm. Simulations further confirm an orthogonal separation between thickness and index effects on the experimental SE data. Effects of angle of incidence and wavelength on the data and sensitivity are discussed. Finally, while others have demonstrated methods to determine refractive index from ultra-thin films, our analysis provides the first results to demonstrate high sensitivity to the refractive index from ultra-thin layers.
Schnabel, Thomas; Musso, Maurizio; Tondi, Gianluca
2014-01-01
Vibrational spectroscopy is one of the most powerful tools in polymer science. Three main techniques can also be applied to wood science: Fourier transform infrared spectroscopy (FT-IR), FT-Raman spectroscopy, and FT near-infrared (NIR) spectroscopy. Here, these three techniques were used to investigate the chemical modification occurring in wood after impregnation with tannin-hexamine preservatives. These spectroscopic techniques have the capacity to detect the externally added tannin. FT-IR has very strong sensitivity to the aromatic peak at around 1610 cm⁻¹ in the tannin-treated samples, whereas FT-Raman reflects the peak at around 1600 cm⁻¹ for the externally added tannin. This high efficacy in distinguishing chemical features was demonstrated in univariate analysis and confirmed via cluster analysis. Conversely, the results of the NIR measurements show noticeable sensitivity for small differences. For this technique, multivariate analysis is required and with this chemometric tool, it is also possible to predict the concentration of tannin on the surface.
Giménez-Arnau, Ana; Silvestre, Juan Francisco; Mercader, Pedro; De la Cuadra, Jesus; Ballester, Isabel; Gallardo, Fernando; Pujol, Ramón M; Zimerson, Erik; Bruze, Magnus
2009-11-01
The methyl ester form of fumaric acid named dimethyl fumarate (DMF) is an effective mould-growth inhibitor. Its irritating and sensitizing properties were demonstrated in animal models. Recently, DMF has been identified as responsible for furniture contact dermatitis in Europe. To describe the clinical manifestations, patch test results, shoe chemical analysis, and source of exposure to DMF-induced shoe contact dermatitis. Patients with suspected shoe contact dermatitis were studied in compliance with the Declaration of Helsinki. Patch test results obtained with their own shoe and the European baseline series, acrylates and fumaric acid esters (FAE), were recorded according to international guidelines. The content of DMF in shoes was analysed with gas chromatography and mass spectrometry. Acute, immediate irritant contact dermatitis and non-immunological contact urticaria were observed in eight adults and two children, respectively. All the adult patients studied developed a delayed sensitization demonstrated by a positive patch testing to DMF ≤ 0.1% in pet. Cross-reactivity with other FAEs and acrylates was observed. At least 12 different shoe brands were investigated. The chemical analysis from the available shoes showed the presence of DMF. DMF in shoes was responsible for severe contact dermatitis. Global preventive measures for avoiding contact with DMF are necessary.
Salmon, Stefanie J; Adriaanse, Marieke A; De Vet, Emely; Fennis, Bob M; De Ridder, Denise T D
2014-01-01
Self-control relies on a limited resource that can get depleted, a phenomenon that has been labeled ego-depletion. We argue that individuals may differ in their sensitivity to depleting tasks, and that consequently some people deplete their self-control resource at a faster rate than others. In three studies, we assessed individual differences in depletion sensitivity, and demonstrate that depletion sensitivity moderates ego-depletion effects. The Depletion Sensitivity Scale (DSS) was employed to assess depletion sensitivity. Study 1 employs the DSS to demonstrate that individual differences in sensitivity to ego-depletion exist. Study 2 shows moderate correlations of depletion sensitivity with related self-control concepts, indicating that these scales measure conceptually distinct constructs. Study 3 demonstrates that depletion sensitivity moderates the ego-depletion effect. Specifically, participants who are sensitive to depletion performed worse on a second self-control task, indicating a stronger ego-depletion effect, compared to participants less sensitive to depletion.
Salmon, Stefanie J.; Adriaanse, Marieke A.; De Vet, Emely; Fennis, Bob M.; De Ridder, Denise T. D.
2014-01-01
Self-control relies on a limited resource that can get depleted, a phenomenon that has been labeled ego-depletion. We argue that individuals may differ in their sensitivity to depleting tasks, and that consequently some people deplete their self-control resource at a faster rate than others. In three studies, we assessed individual differences in depletion sensitivity, and demonstrate that depletion sensitivity moderates ego-depletion effects. The Depletion Sensitivity Scale (DSS) was employed to assess depletion sensitivity. Study 1 employs the DSS to demonstrate that individual differences in sensitivity to ego-depletion exist. Study 2 shows moderate correlations of depletion sensitivity with related self-control concepts, indicating that these scales measure conceptually distinct constructs. Study 3 demonstrates that depletion sensitivity moderates the ego-depletion effect. Specifically, participants who are sensitive to depletion performed worse on a second self-control task, indicating a stronger ego-depletion effect, compared to participants less sensitive to depletion. PMID:25009523
Quantitative polarized light microscopy using spectral multiplexing interferometry.
Li, Chengshuai; Zhu, Yizheng
2015-06-01
We propose an interferometric spectral multiplexing method for measuring birefringent specimens with simple configuration and high sensitivity. The retardation and orientation of sample birefringence are simultaneously encoded onto two spectral carrier waves, generated interferometrically by a birefringent crystal through polarization mixing. A single interference spectrum hence contains sufficient information for birefringence determination, eliminating the need for mechanical rotation or electrical modulation. The technique is analyzed theoretically and validated experimentally on cellulose film. System simplicity permits the possibility of mitigating system birefringence background. Further analysis demonstrates the technique's exquisite sensitivity as high as ∼20 pm for retardation measurement.
Wu, Chuang; Tse, Ming-Leung Vincent; Liu, Zhengyong; Guan, Bai-Ou; Lu, Chao; Tam, Hwa-Yaw
2013-09-01
We propose and demonstrate a highly sensitive in-line photonic crystal fiber (PCF) microfluidic refractometer. Ultrathin C-shaped fibers are spliced in-between the PCF and standard single-mode fibers. The C-shaped fibers provide openings for liquid to flow in and out of the PCF. Based on a Sagnac interferometer, the refractive index (RI) response of the device is investigated theoretically and experimentally. A high sensitivity of 6621 nm/RIU for liquid RI from 1.330 to 1.333 is achieved in the experiment, which agrees well with the theoretical analysis.
Stewart, James A.; Kohnert, Aaron A.; Capolungo, Laurent; ...
2018-03-06
The complexity of radiation effects in a material’s microstructure makes developing predictive models a difficult task. In principle, a complete list of all possible reactions between defect species being considered can be used to elucidate damage evolution mechanisms and its associated impact on microstructure evolution. However, a central limitation is that many models use a limited and incomplete catalog of defect energetics and associated reactions. Even for a given model, estimating its input parameters remains a challenge, especially for complex material systems. Here, we present a computational analysis to identify the extent to which defect accumulation, energetics, and irradiation conditions can be determined via forward and reverse regression models constructed and trained from large data sets produced by cluster dynamics simulations. A global sensitivity analysis, via Sobol’ indices, concisely characterizes parameter sensitivity and demonstrates how this can be connected to variability in defect evolution. Based on this analysis and depending on the definition of what constitutes the input and output spaces, forward and reverse regression models are constructed and allow for the direct calculation of defect accumulation, defect energetics, and irradiation conditions. Here, this computational analysis, exercised on a simplified cluster dynamics model, demonstrates the ability to design predictive surrogate and reduced-order models, and provides guidelines for improving model predictions within the context of forward and reverse engineering of mathematical models for radiation effects in a materials’ microstructure.
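The lines below give a minimal, self-contained sketch of the kind of variance-based (Sobol') first-order sensitivity estimate referred to above, using a pick-freeze (Saltelli-type) estimator in plain NumPy. The three-parameter toy "defect model" is a made-up stand-in for a cluster dynamics simulator, and the uniform parameter ranges are assumptions.

```python
import numpy as np

def sobol_first_order(model, n_params, n_samples=4096, rng=None):
    """Rough pick-freeze (Saltelli-style) estimator of first-order Sobol' indices
    for a model whose parameters are independent and uniform on [0, 1]."""
    rng = rng or np.random.default_rng(0)
    A = rng.random((n_samples, n_params))
    B = rng.random((n_samples, n_params))
    yA, yB = model(A), model(B)
    var_y = np.var(np.concatenate([yA, yB]))
    S = np.empty(n_params)
    for i in range(n_params):
        ABi = A.copy()
        ABi[:, i] = B[:, i]               # resample only parameter i from the second matrix
        yABi = model(ABi)
        S[i] = np.mean(yB * (yABi - yA)) / var_y
    return S

# Toy stand-in for a cluster-dynamics output, e.g. final defect density as a
# function of (dose rate, temperature, sink strength): purely illustrative
def toy_defect_model(X):
    dose, temp, sink = X[:, 0], X[:, 1], X[:, 2]
    return dose * np.exp(-1.5 * temp) + 0.1 * sink + 0.2 * dose * sink

print(sobol_first_order(toy_defect_model, n_params=3))
```

In a study like the one above, the "model" evaluated here would be replaced by the simulator (or a regression surrogate of it), with parameters rescaled to their physical ranges.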
Roine, Antti; Saviauk, Taavi; Kumpulainen, Pekka; Karjalainen, Markus; Tuokko, Antti; Aittoniemi, Janne; Vuento, Risto; Lekkala, Jukka; Lehtimäki, Terho; Tammela, Teuvo L; Oksala, Niku K J
2014-01-01
Urinary tract infection (UTI) is a common disease with significant morbidity and economic burden, accounting for a significant part of the workload in clinical microbiology laboratories. Current clinical chemistry point-of-care diagnostics rely on imperfect dipstick analysis, which only provides indirect and insensitive evidence of urinary bacterial pathogens. An electronic nose (eNose) is a handheld device mimicking mammalian olfaction that potentially offers affordable and rapid analysis of samples without preparation at atmospheric pressure. In this study we demonstrate the applicability of an ion mobility spectrometry (IMS)-based eNose to discriminate the most common UTI pathogens from the gaseous headspace of culture plates rapidly and without sample preparation. We gathered a total of 101 culture samples containing the four most common UTI bacteria: E. coli, S. saprophyticus, E. faecalis, and Klebsiella spp., as well as sterile culture plates. The samples were analyzed using the ChemPro 100i device, consisting of an IMS cell and six semiconductor sensors. Data analysis was conducted by linear discriminant analysis (LDA) and logistic regression (LR). The results were validated by leave-one-out and 5-fold cross-validation analysis. In discriminating sterile from bacterial samples, a sensitivity of 95% and specificity of 97% were achieved. The bacterial species were identified with a sensitivity of 95% and specificity of 96% using the eNose as compared to urine bacterial cultures. These findings strongly demonstrate the ability of our eNose to discriminate bacterial cultures and provide a proof of principle for the use of this method in urinalysis of UTI.
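For readers who want to see the shape of such an analysis, the sketch below runs leave-one-out cross-validated linear discriminant analysis on synthetic "sensor response" vectors with five classes (four bacteria plus sterile) and reports sensitivity and specificity for the sterile-versus-bacterial discrimination. The feature dimensionality, class structure, and scikit-learn usage are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(5)
classes = ["sterile", "E. coli", "S. saprophyticus", "E. faecalis", "Klebsiella"]

# Hypothetical 7-dimensional sensor responses (IMS cell plus 6 semiconductor sensors)
X, y = [], []
for label, cls in enumerate(classes):
    center = rng.normal(0, 1, size=7)
    X.append(center + 0.4 * rng.normal(size=(20, 7)))   # 20 samples per class
    y.extend([label] * 20)
X, y = np.vstack(X), np.array(y)

pred = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())

# Sterile (class 0) versus bacterial (classes 1-4) screening performance
is_bacterial = y != 0
pred_bacterial = pred != 0
tp = np.sum(pred_bacterial & is_bacterial)
tn = np.sum(~pred_bacterial & ~is_bacterial)
print("species accuracy:", round(np.mean(pred == y), 2))
print("sensitivity (bacterial):", round(tp / is_bacterial.sum(), 2))
print("specificity (sterile):  ", round(tn / (~is_bacterial).sum(), 2))
```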
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stewart, James A.; Kohnert, Aaron A.; Capolungo, Laurent
The complexity of radiation effects in a material’s microstructure makes developing predictive models a difficult task. In principle, a complete list of all possible reactions between defect species being considered can be used to elucidate damage evolution mechanisms and its associated impact on microstructure evolution. However, a central limitation is that many models use a limited and incomplete catalog of defect energetics and associated reactions. Even for a given model, estimating its input parameters remains a challenge, especially for complex material systems. Here, we present a computational analysis to identify the extent to which defect accumulation, energetics, and irradiation conditions can be determined via forward and reverse regression models constructed and trained from large data sets produced by cluster dynamics simulations. A global sensitivity analysis, via Sobol’ indices, concisely characterizes parameter sensitivity and demonstrates how this can be connected to variability in defect evolution. Based on this analysis and depending on the definition of what constitutes the input and output spaces, forward and reverse regression models are constructed and allow for the direct calculation of defect accumulation, defect energetics, and irradiation conditions. Here, this computational analysis, exercised on a simplified cluster dynamics model, demonstrates the ability to design predictive surrogate and reduced-order models, and provides guidelines for improving model predictions within the context of forward and reverse engineering of mathematical models for radiation effects in a materials’ microstructure.
Tao, Weijing; Shen, Yang; Guo, Lili; Bo, Genji
2014-01-01
Balanced steady-state free precession MR angiography (b-SSFP MRA) has shown great promise in diagnosing renal artery stenosis (RAS) as a non-contrast MR angiography (NC-MRA) method. However, results from related studies are inconsistent. The purpose of this meta-analysis was to assess the accuracy of b-SSFP MRA compared to contrast-enhanced MR angiography (CE-MRA) in diagnosing RAS. English and Chinese studies that were published prior to September 4, 2013 and that assessed b-SSFP MRA diagnostic performance in RAS patients were reviewed. Quality of the literature was assessed independently by two observers. The statistical analysis was performed with the software Meta-DiSc version 1.4. Using the heterogeneity test, a statistical effect model was chosen to calculate the different pooled weighted values. The receiver operator characteristic (ROC) space and the Spearman correlation coefficient were used to explore the threshold effect. Sensitivity analysis and publication bias assessment were performed to determine whether the pooled estimates were stable and reliable. We produced forest plots to calculate the pooled values and corresponding 95% confidence interval (CI) of sensitivity, specificity, positive likelihood ratio (PLR), negative likelihood ratio (NLR), diagnostic odds ratio (DOR), and constructed a summary receiver operating characteristic curve (SROC) to calculate the area under the curve (AUC). A total of 10 high quality articles were used in this meta-analysis. The studies showed a high degree of heterogeneity. The "shoulder-arm" shape in the ROC plot and the Spearman correlation coefficient between the log(SEN) and log(1-SPE) suggested that there was a threshold effect. Sensitivity analysis demonstrated that the actual combined effect size was equal to the theoretical combined effect size. The publication bias was low after quality evaluation of the literature and the construction of a funnel plot. The pooled sensitivity was 0.88 (95% CI, 0.83-0.91) and pooled specificity was 0.94 (95% CI, 0.93-0.95); pooled PLR was 14.57 (95% CI, 9.78-21.71) and pooled NLR was 0.15 (95% CI, 0.11-0.20). The AUC was 0.9643. In contrast to CE-MRA, b-SSFP MRA is more accurate in diagnosing RAS, and may be able to replace other diagnostic methods in patients with renal insufficiency.
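For orientation, the sketch below shows the simplest possible pooling of per-study 2x2 tables into overall sensitivity, specificity, and a diagnostic odds ratio. It pools raw counts rather than using the weighted fixed- or random-effects models and SROC fitting that meta-analysis software such as Meta-DiSc applies, and the study counts are invented.

```python
def pooled_accuracy(tables):
    """Crude pooled sensitivity, specificity, and diagnostic odds ratio from per-study
    2x2 tables (tp, fp, fn, tn), pooling raw counts; real meta-analyses typically use
    weighted fixed- or random-effects models instead."""
    tp, fp, fn, tn = (sum(t[i] for t in tables) for i in range(4))
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    plr = sens / (1 - spec)
    nlr = (1 - sens) / spec
    return sens, spec, plr / nlr          # DOR = PLR / NLR

# Hypothetical per-study counts (tp, fp, fn, tn), for illustration only
studies = [(40, 3, 6, 60), (25, 2, 4, 70), (55, 5, 7, 120), (18, 1, 2, 35)]
sens, spec, dor = pooled_accuracy(studies)
print(f"pooled sensitivity={sens:.2f}, specificity={spec:.2f}, DOR={dor:.1f}")
```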
Ruff, Kristin R; Puetter, Adriane; Levy, Laura S
2007-01-01
Background AIDS-related non-Hodgkin's lymphoma (AIDS-NHL) is the second most frequent cancer associated with AIDS, and is a frequent cause of death in HIV-infected individuals. Experimental analysis of AIDS-NHL has been facilitated by the availability of an excellent animal model, i.e., simian Acquired Immunodeficiency Syndrome (SAIDS) in the rhesus macaque consequent to infection with simian immunodeficiency virus. A recent study of SAIDS-NHL demonstrated a lymphoma-derived cell line to be sensitive to the growth inhibitory effects of the ubiquitous cytokine, transforming growth factor-beta (TGF-beta). The authors concluded that TGF-beta acts as a negative growth regulator of the lymphoma-derived cell line and, potentially, as an inhibitory factor in the regulatory network of AIDS-related lymphomagenesis. The present study was conducted to assess whether other SAIDS-NHL and AIDS-NHL cell lines are similarly sensitive to the growth inhibitory effects of TGF-beta, and to test the hypothesis that interleukin-6 (IL-6) may represent a counteracting positive influence in their growth regulation. Methods Growth stimulation or inhibition in response to cytokine treatment was quantified using trypan blue exclusion or colorimetric MTT assay. Intracellular flow cytometry was used to analyze the activation of signaling pathways and to examine the expression of anti-apoptotic proteins and distinguishing hallmarks of AIDS-NHL subclass. Apoptosis was quantified by flow cytometric analysis of cell populations with sub-G1 DNA content and by measuring activated caspase-3. Results Results confirmed the sensitivity of LCL8664, an immunoblastic SAIDS-NHL cell line, to TGF-beta1-mediated growth inhibition, and further demonstrated the partial rescue by simultaneous treatment with IL-6. IL-6 was shown to activate STAT3, even in the presence of TGF-beta1, and thereby to activate proliferative and anti-apoptotic pathways. By comparison, human AIDS-NHL cell lines differed in their responsiveness to TGF-beta1 and IL-6. Analysis of a recently derived AIDS-NHL cell line, UMCL01-101, indicated that it represents immunoblastic AIDS-DLBCL. Like LCL8664, UMCL01-101 was sensitive to TGF-beta1-mediated inhibition, rescued partially by IL-6, and demonstrated rapid STAT3 activation following IL-6 treatment even in the presence of TGF-beta1. Conclusion These studies indicate that the sensitivity of immunoblastic AIDS- or SAIDS-DLBCL to TGF-beta1-mediated growth inhibition may be overcome through the stimulation of proliferative and anti-apoptotic signals by IL-6, particularly through the rapid activation of STAT3. PMID:17324269
Total protein analysis as a reliable loading control for quantitative fluorescent Western blotting.
Eaton, Samantha L; Roche, Sarah L; Llavero Hurtado, Maica; Oldknow, Karla J; Farquharson, Colin; Gillingwater, Thomas H; Wishart, Thomas M
2013-01-01
Western blotting has been a key technique for determining the relative expression of proteins within complex biological samples since the first publications in 1979. Recent developments in sensitive fluorescent labels, with truly quantifiable linear ranges and greater limits of detection, have allowed biologists to probe tissue specific pathways and processes with higher resolution than ever before. However, the application of quantitative Western blotting (QWB) to a range of healthy tissues and those from degenerative models has highlighted a problem with significant consequences for quantitative protein analysis: how can researchers conduct comparative expression analyses when many of the commonly used reference proteins (e.g. loading controls) are differentially expressed? Here we demonstrate that common controls, including actin and tubulin, are differentially expressed in tissues from a wide range of animal models of neurodegeneration. We highlight the prevalence of such alterations through examination of published "-omics" data, and demonstrate similar responses in sensitive QWB experiments. For example, QWB analysis of spinal cord from a murine model of Spinal Muscular Atrophy using an Odyssey scanner revealed that beta-actin expression was decreased by 19.3±2% compared to healthy littermate controls. Thus, normalising QWB data to β-actin in these circumstances could result in 'skewing' of all data by ∼20%. We further demonstrate that differential expression of commonly used loading controls was not restricted to the nervous system, but was also detectable across multiple tissues, including bone, fat and internal organs. Moreover, expression of these "control" proteins was not consistent between different portions of the same tissue, highlighting the importance of careful and consistent tissue sampling for QWB experiments. Finally, having illustrated the problem of selecting appropriate single protein loading controls, we demonstrate that normalisation using total protein analysis on samples run in parallel with stains such as Coomassie blue provides a more robust approach.
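The arithmetic behind the normalisation choice is simple, and the hypothetical sketch below shows how normalising a target band to a decreased "housekeeping" protein can mask a real change that total-protein normalisation preserves; all intensities are invented for illustration.

```python
import numpy as np

def normalize_bands(target, loading_control=None, total_protein=None):
    """Relative expression of a target band normalized either to a single loading-control
    band or to a total-protein stain (e.g. Coomassie) quantified per lane."""
    target = np.asarray(target, dtype=float)
    reference = np.asarray(total_protein if total_protein is not None else loading_control,
                           dtype=float)
    ratio = target / reference
    return ratio / ratio.mean()            # express each lane relative to the group mean

# Hypothetical band intensities for 4 control and 4 disease-model lanes
target = [1.00, 1.05, 0.98, 1.02, 0.80, 0.78, 0.82, 0.79]
actin = [1.00, 1.01, 0.99, 1.00, 0.81, 0.80, 0.82, 0.80]    # itself decreased ~20% in disease
total = [1.00, 1.02, 0.99, 1.01, 1.00, 0.98, 1.01, 0.99]

for name, norm in [("beta-actin", normalize_bands(target, loading_control=actin)),
                   ("total protein", normalize_bands(target, total_protein=total))]:
    print(f"{name:>13}: control mean={norm[:4].mean():.2f}, disease mean={norm[4:].mean():.2f}")
```

With the invented numbers above, normalising to the decreased actin hides the ~20% drop in the target, while total-protein normalisation recovers it, which is the point made in the abstract.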
Nonindependence and sensitivity analyses in ecological and evolutionary meta-analyses.
Noble, Daniel W A; Lagisz, Malgorzata; O'dea, Rose E; Nakagawa, Shinichi
2017-05-01
Meta-analysis is an important tool for synthesizing research on a variety of topics in ecology and evolution, including molecular ecology, but can be susceptible to nonindependence. Nonindependence can affect two major interrelated components of a meta-analysis: (i) the calculation of effect size statistics and (ii) the estimation of overall meta-analytic estimates and their uncertainty. While some solutions to nonindependence exist at the statistical analysis stages, there is little advice on what to do when complex analyses are not possible, or when studies with nonindependent experimental designs exist in the data. Here we argue that exploring the effects of procedural decisions in a meta-analysis (e.g. inclusion of different quality data, choice of effect size) and statistical assumptions (e.g. assuming no phylogenetic covariance) using sensitivity analyses is extremely important in assessing the impact of nonindependence. Sensitivity analyses can provide greater confidence in results and highlight important limitations of empirical work (e.g. impact of study design on overall effects). Despite their importance, sensitivity analyses are seldom applied to problems of nonindependence. To encourage better practice for dealing with nonindependence in meta-analytic studies, we present accessible examples demonstrating the impact that ignoring nonindependence can have on meta-analytic estimates. We also provide pragmatic solutions for dealing with nonindependent study designs, and for analysing dependent effect sizes. Additionally, we offer reporting guidelines that will facilitate disclosure of the sources of nonindependence in meta-analyses, leading to greater transparency and more robust conclusions. © 2017 John Wiley & Sons Ltd.
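As a concrete illustration of the kind of sensitivity analysis advocated above, the sketch below recomputes a fixed-effect meta-analytic mean with each study left out in turn. The effect sizes, variances, and the fixed-effect pooling are illustrative assumptions, not data or methods taken from the paper.

```python
import numpy as np

# Hypothetical effect sizes (e.g., Hedges' g) and their sampling variances
effects = np.array([0.42, 0.35, 0.58, 0.10, 0.47])
variances = np.array([0.02, 0.05, 0.03, 0.04, 0.02])

def fixed_effect_mean(y, v):
    """Inverse-variance weighted (fixed-effect) meta-analytic mean."""
    w = 1.0 / v
    return np.sum(w * y) / np.sum(w)

overall = fixed_effect_mean(effects, variances)

# Leave-one-out sensitivity analysis: recompute the pooled estimate with each
# study removed to see how strongly any single study drives the result.
loo = [fixed_effect_mean(np.delete(effects, i), np.delete(variances, i))
       for i in range(len(effects))]

print(f"Pooled estimate: {overall:.3f}")
for i, est in enumerate(loo):
    print(f"  without study {i + 1}: {est:.3f} (shift {est - overall:+.3f})")
```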
Mendes, Paula; Nunes, Luis Miguel; Teixeira, Margarida Ribau
2014-09-01
This article demonstrates how decision-makers can be guided in the process of defining performance target values in the balanced scorecard system. We apply a method based on sensitivity analysis with Monte Carlo simulation to the municipal solid waste management system in Loulé Municipality (Portugal). The method includes two steps: sensitivity analysis of performance indicators to identify those performance indicators with the highest impact on the balanced scorecard model outcomes; and sensitivity analysis of the target values for the previously identified performance indicators. Sensitivity analysis shows that four strategic objectives (IPP1: Comply with the national waste strategy; IPP4: Reduce nonrenewable resources and greenhouse gases; IPP5: Optimize the life-cycle of waste; and FP1: Meet and optimize the budget) alone contribute 99.7% of the variability in overall balanced scorecard value. Thus, these strategic objectives had a much stronger impact on the estimated balanced scorecard outcome than did others, with the IPP1 and the IPP4 accounting for over 55% and 22% of the variance in overall balanced scorecard value, respectively. The remaining performance indicators contribute only marginally. In addition, a change in the value of a single indicator's target value made the overall balanced scorecard value change by as much as 18%. This may lead to involuntarily biased decisions by organizations regarding performance target-setting, if not prevented with the help of methods such as that proposed and applied in this study. © The Author(s) 2014.
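A minimal sketch of the first step described above, Monte Carlo sampling of indicator scores and ranking indicators by their contribution to the variance of an aggregate scorecard value, is given below. The weighted-sum scorecard model, the indicator ranges, and the weights are assumptions for illustration and are not taken from the Loulé case study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical performance indicators with assumed score ranges (0-100)
# and weights in a simple weighted-sum balanced scorecard model.
names = ["IPP1", "IPP4", "IPP5", "FP1", "other"]
weights = np.array([0.30, 0.25, 0.20, 0.15, 0.10])

n = 10_000
scores = rng.uniform(0, 100, size=(n, len(names)))  # Monte Carlo samples
bsc = scores @ weights                               # overall scorecard value

# Rank indicators by squared correlation with the overall value,
# a simple proxy for their contribution to output variance.
contrib = np.array([np.corrcoef(scores[:, j], bsc)[0, 1] ** 2
                    for j in range(len(names))])
for name, c in sorted(zip(names, contrib), key=lambda t: -t[1]):
    print(f"{name}: {100 * c / contrib.sum():.1f}% of explained variance")
```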
Self-consistent adjoint analysis for topology optimization of electromagnetic waves
NASA Astrophysics Data System (ADS)
Deng, Yongbo; Korvink, Jan G.
2018-05-01
In topology optimization of electromagnetic waves, the Gâteaux differentiability of the conjugate operator of the complex field variable complicates the adjoint sensitivity: the originally real-valued design variable becomes complex during the iterative solution procedure, so the adjoint sensitivity is self-inconsistent. To enforce self-consistency, the real-part operator has been used to extract the real part of the sensitivity and preserve the real-valued design variable. However, this enforced self-consistency can make the derived structural topology depend unreasonably on the phase of the incident wave. To solve this problem, this article develops a self-consistent adjoint analysis for topology optimization problems of electromagnetic waves. The analysis splits the complex variables of the wave equations into their real and imaginary parts and substitutes the split variables back into the wave equations, deriving coupled equations equivalent to the original ones, with the infinite free space truncated by perfectly matched layers. The topology optimization problems are thereby posed on real rather than complex functional spaces; the adjoint analysis is carried out on real functional spaces, removing the variation of the conjugate operator; the self-consistent adjoint sensitivity is derived, and the phase-dependence problem of the derived structural topology is avoided. Several numerical examples demonstrate the robustness of the derived self-consistent adjoint analysis.
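The splitting idea described above can be illustrated on a generic scalar Helmholtz-type equation; this is a simplified sketch of the real/imaginary decomposition, not the authors' full vector formulation with perfectly matched layers.

```latex
% Splitting a scalar Helmholtz equation into coupled real equations.
% With u = u_r + i u_i and a complex material coefficient
% \epsilon = \epsilon_r + i \epsilon_i (e.g., from a lossy medium or PML):
\nabla^2 u + k^2 \epsilon\, u = f_r + i f_i
\quad\Longrightarrow\quad
\begin{cases}
\nabla^2 u_r + k^2\left(\epsilon_r u_r - \epsilon_i u_i\right) = f_r,\\[2pt]
\nabla^2 u_i + k^2\left(\epsilon_i u_r + \epsilon_r u_i\right) = f_i.
\end{cases}
```

The adjoint analysis then operates on the real-valued pair (u_r, u_i), so no differentiation of a conjugate operator is required.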
From web search to healthcare utilization: privacy-sensitive studies from mobile data
Horvitz, Eric
2013-01-01
Objective We explore relationships between health information seeking activities and engagement with healthcare professionals via a privacy-sensitive analysis of geo-tagged data from mobile devices. Materials and methods We analyze logs of mobile interaction data stripped of individually identifiable information and location data. The data analyzed consist of time-stamped search queries and distances to medical care centers. We examine search activity that precedes the observation of salient evidence of healthcare utilization (EHU) (i.e., data suggesting that the searcher is using healthcare resources), in our case taken as queries occurring at or near medical facilities. Results We show that the time between symptom searches and observation of salient evidence of healthcare utilization depends on the acuity of symptoms. We construct statistical models that make predictions of forthcoming EHU based on observations about the current search session, prior medical search activities, and prior EHU. The predictive accuracy of the models varies (65%–90%) depending on the features used and the timeframe of the analysis, which we explore via a sensitivity analysis. Discussion We provide a privacy-sensitive analysis that can be used to generate insights about the pursuit of health information and healthcare. The findings demonstrate how large-scale studies of mobile devices can provide insights on how concerns about symptomatology lead to the pursuit of professional care. Conclusion We present new methods for the analysis of mobile logs and describe a study that provides evidence about how people transition from mobile searches on symptoms and diseases to the pursuit of healthcare in the world. PMID:22661560
2014-01-01
Background Due to the recent European legislations posing a ban of animal tests for safety assessment within the cosmetic industry, development of in vitro alternatives for assessment of skin sensitization is highly prioritized. To date, proposed in vitro assays are mainly based on single biomarkers, which so far have not been able to classify and stratify chemicals into subgroups, related to risk or potency. Methods Recently, we presented the Genomic Allergen Rapid Detection (GARD) assay for assessment of chemical sensitizers. In this paper, we show how the genome wide readout of GARD can be expanded and used to identify differentially regulated pathways relating to individual chemical sensitizers. In this study, we investigated the mechanisms of action of a range of skin sensitizers through pathway identification, pathway classification and transcription factor analysis and related this to the reactive mechanisms and potency of the sensitizing agents. Results By transcriptional profiling of chemically stimulated MUTZ-3 cells, 33 canonical pathways intimately involved in sensitization to chemical substances were identified. The results showed that metabolic processes, cell cycling and oxidative stress responses are the key events activated during skin sensitization, and that these functions are engaged differently depending on the reactivity mechanisms of the sensitizing agent. Furthermore, the results indicate that the chemical reactivity groups seem to gradually engage more pathways and more molecules in each pathway with increasing sensitizing potency of the chemical used for stimulation. Also, a switch in gene regulation from up to down regulation, with increasing potency, was seen both in genes involved in metabolic functions and cell cycling. These observed pathway patterns were clearly reflected in the regulatory elements identified to drive these processes, where 33 regulatory elements have been proposed for further analysis. Conclusions This study demonstrates that functional analysis of biomarkers identified from our genomics study of human MUTZ-3 cells can be used to assess sensitizing potency of chemicals in vitro, by the identification of key cellular events, such as metabolic and cell cycling pathways. PMID:24517095
Developmental and hormonal regulation of thermosensitive neuron potential activity in rat brain.
Belugin, S; Akino, K; Takamura, N; Mine, M; Romanovsky, D; Fedoseev, V; Kubarko, A; Kosaka, M; Yamashita, S
1999-08-01
To understand the involvement of thyroid hormone in the postnatal development of hypothalamic thermosensitive neurons, we focused on the analysis of thermosensitive neuronal activity in the preoptic and anterior hypothalamic (PO/AH) regions of developing rats with and without hypothyroidism. In euthyroid rats, the distribution of thermosensitive neurons in PO/AH showed that in 3-week-old rats (46 neurons tested), 19.5% were warm-sensitive and 80.5% were nonsensitive. In 5- to 12-week-old euthyroid rats (122 neurons), 33.6% were warm-sensitive and 66.4% were nonsensitive. In 5- to 12-week-old hypothyroid rats (108 neurons), however, 18.5% were warm-sensitive and 81.5% were nonsensitive. Temperature thresholds of warm-sensitive neurons were lower in 12-week-old euthyroid rats (36.4+/-0.2 degrees C, n = 15, p<0.01) than in 3-week-old and in 5-week-old euthyroid rats (38.5+/-0.5 degrees C, n = 9 and 38.0+/-0.3 degrees C, n = 15, respectively). The temperature thresholds of warm-sensitive neurons in 12-week-old hypothyroid rats (39.5+/-0.3 degrees C, n = 8) were similar to those of warm-sensitive neurons of 3-week-old rats (euthyroid and hypothyroid). In contrast, there was no difference in the thresholds of warm-sensitive neurons between hypothyroid and euthyroid rats at the age of 3-5 weeks. In conclusion, electrophysiological monitoring of thermosensitive neuronal activity demonstrated that thyroid hormone regulates the maturation of warm-sensitive hypothalamic neurons in the developing rat brain.
Lee, Ho-Won; Muniyappa, Ranganath; Yan, Xu; Yue, Lilly Q.; Linden, Ellen H.; Chen, Hui; Hansen, Barbara C.
2011-01-01
The euglycemic glucose clamp is the reference method for assessing insulin sensitivity in humans and animals. However, clamps are ill-suited for large studies because of extensive requirements for cost, time, labor, and technical expertise. Simple surrogate indexes of insulin sensitivity/resistance including quantitative insulin-sensitivity check index (QUICKI) and homeostasis model assessment (HOMA) have been developed and validated in humans. However, validation studies of QUICKI and HOMA in both rats and mice suggest that differences in metabolic physiology between rodents and humans limit their value in rodents. Rhesus monkeys are a species more similar to humans than rodents. Therefore, in the present study, we evaluated data from 199 glucose clamp studies obtained from a large cohort of 86 monkeys with a broad range of insulin sensitivity. Data were used to evaluate simple surrogate indexes of insulin sensitivity/resistance (QUICKI, HOMA, Log HOMA, 1/HOMA, and 1/Fasting insulin) with respect to linear regression, predictive accuracy using a calibration model, and diagnostic performance using receiver operating characteristic. Most surrogates had modest linear correlations with SIClamp (r ≈ 0.4–0.64) with comparable correlation coefficients. Predictive accuracy determined by calibration model analysis demonstrated better predictive accuracy of QUICKI than HOMA and Log HOMA. Receiver operating characteristic analysis showed equivalent sensitivity and specificity of most surrogate indexes to detect insulin resistance. Thus, unlike in rodents but similar to humans, surrogate indexes of insulin sensitivity/resistance including QUICKI and log HOMA may be reasonable to use in large studies of rhesus monkeys where it may be impractical to conduct glucose clamp studies. PMID:21209021
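For reference, the surrogate indices evaluated above follow the standard published formulas; the sketch below assumes the usual units (glucose in mmol/L for HOMA and mg/dL for QUICKI, insulin in µU/mL) and uses hypothetical fasting values.

```python
import math

def homa_ir(glucose_mmol_l: float, insulin_uU_ml: float) -> float:
    """Homeostasis model assessment of insulin resistance (standard formula)."""
    return glucose_mmol_l * insulin_uU_ml / 22.5

def quicki(glucose_mg_dl: float, insulin_uU_ml: float) -> float:
    """Quantitative insulin-sensitivity check index (standard formula)."""
    return 1.0 / (math.log10(insulin_uU_ml) + math.log10(glucose_mg_dl))

# Example fasting values (hypothetical): glucose 90 mg/dL (= 5.0 mmol/L), insulin 10 uU/mL
g_mg, g_mmol, ins = 90.0, 5.0, 10.0
print(f"HOMA-IR  = {homa_ir(g_mmol, ins):.2f}")    # higher -> more insulin resistant
print(f"QUICKI   = {quicki(g_mg, ins):.3f}")       # lower  -> more insulin resistant
print(f"1/HOMA   = {1 / homa_ir(g_mmol, ins):.2f}")
print(f"log HOMA = {math.log10(homa_ir(g_mmol, ins)):.2f}")
```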
Antiresonant reflecting guidance mechanism in hollow-core fiber for gas pressure sensing.
Hou, Maoxiang; Zhu, Feng; Wang, Ying; Wang, Yiping; Liao, Changrui; Liu, Shen; Lu, Peixiang
2016-11-28
A gas pressure sensor based on an antiresonant reflecting guidance mechanism in a hollow-core fiber (HCF) with an open microchannel is experimentally demonstrated. The microchannel was created on the ring cladding of the HCF by femtosecond laser drilling to keep the air-core pressure equal to that of the external environment. The HCF cladding functions as an antiresonant reflecting waveguide, which induces sharp periodic lossy dips in the transmission spectrum. The proposed sensor exhibits a high pressure sensitivity of 3.592 nm/MPa and a low temperature cross-sensitivity of 7.5 kPa/°C. Theoretical analysis indicates that the observed high gas pressure sensitivity originates from the pressure-induced refractive index change of the air in the hollow core. The good operation durability and fabrication simplicity make the device an attractive candidate for reliable and highly sensitive gas pressure measurement in harsh environments.
Xiang, Mei-Hao; Liu, Jin-Wen; Li, Na; Tang, Hao; Yu, Ru-Qin; Jiang, Jian-Hui
2016-02-28
Graphitic C3N4 (g-C3N4) nanosheets provide an attractive option for bioprobes and bioimaging applications. Utilizing highly fluorescent and water-dispersible ultrathin g-C3N4 nanosheets, a highly sensitive, selective and label-free biosensor has been developed for the detection of alkaline phosphatase (ALP) for the first time. The developed approach utilizes a natural substrate of ALP in biological systems and thus affords very high catalytic efficiency. This novel biosensor is demonstrated to enable quantitative analysis of ALP over a wide range from 0.1 to 1000 U L(-1) with a low detection limit of 0.08 U L(-1), which is among the most sensitive assays for ALP. It is expected that the developed method may provide a low-cost, convenient, rapid and highly sensitive platform for ALP-based clinical diagnostics and biomedical applications.
Selection of optimal sensors for predicting performance of polymer electrolyte membrane fuel cell
NASA Astrophysics Data System (ADS)
Mao, Lei; Jackson, Lisa
2016-10-01
In this paper, sensor selection algorithms are investigated based on a sensitivity analysis, and the capability of optimal sensors in predicting PEM fuel cell performance is also studied using test data. The fuel cell model is developed for generating the sensitivity matrix relating sensor measurements and fuel cell health parameters. From the sensitivity matrix, two sensor selection approaches, including the largest gap method, and exhaustive brute force searching technique, are applied to find the optimal sensors providing reliable predictions. Based on the results, a sensor selection approach considering both sensor sensitivity and noise resistance is proposed to find the optimal sensor set with minimum size. Furthermore, the performance of the optimal sensor set is studied to predict fuel cell performance using test data from a PEM fuel cell system. Results demonstrate that with optimal sensors, the performance of PEM fuel cell can be predicted with good quality.
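A hedged sketch of the kind of search the abstract describes is shown below: an exhaustive search for the smallest sensor subset whose sensitivity sub-matrix still resolves all health parameters acceptably. The random sensitivity matrix, the smallest-singular-value score, and the 80% acceptance threshold are illustrative assumptions, not the paper's actual model or selection criterion.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sensitivity matrix S (n_sensors x n_health_parameters):
# S[i, j] = d(measurement_i) / d(parameter_j), e.g., from a fuel-cell model.
n_sensors, n_params = 8, 3
S = rng.normal(size=(n_sensors, n_params))

def subset_score(rows):
    """Smallest singular value of the sub-matrix: a simple measure of how
    well the chosen sensors jointly resolve all health parameters."""
    return np.linalg.svd(S[list(rows), :], compute_uv=False).min()

full_score = subset_score(range(n_sensors))
threshold = 0.8 * full_score  # assumed acceptance level (illustrative)

best = None
for k in range(n_params, n_sensors + 1):           # grow subset size
    candidates = [c for c in itertools.combinations(range(n_sensors), k)
                  if subset_score(c) >= threshold]
    if candidates:                                  # smallest size that suffices
        best = max(candidates, key=subset_score)
        break

print("selected sensors:", best, "score:", round(subset_score(best), 3))
```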
Tanko, Zita; Shab, Arna; Diepgen, Thomas Ludwig; Weisshaar, Elke
2009-06-01
Fragrances are very common in everyday products. A metalworker with chronic hand eczema and previously diagnosed type IV sensitizations to epoxy resin, balsam of Peru, fragrance mix and fragrance mix II was diagnosed with additional type IV sensitizations to geraniol, hydroxycitronellal, lilial, tree moss, oak moss absolute, citral, citronellol, farnesol, Lyral, fragrance mix II and fragrance mix (with sorbitan sesquioleate). In addition, a type IV sensitization to the skin protection cream containing geraniol and citronellol used at the workplace was detected, and deemed occupationally relevant in this case. The patient could have had contact to fragrances through private use of cosmetics and detergents. On the other hand, the fragrance-containing skin protection cream supports occupational exposure. This case report demonstrates that fragrance contact allergy has to be searched for and clarified individually, which requires a thorough history and a detailed analysis of the work place.
Laser ablation surface-enhanced Raman microspectroscopy.
Londero, Pablo S; Lombardi, John R; Leona, Marco
2013-06-04
Improved identification of trace organic compounds in complex matrixes is critical for a variety of fields such as material science, heritage science, and forensics. Surface-enhanced Raman scattering (SERS) is a vibrational spectroscopy technique that can attain single-molecule sensitivity and has been shown to complement mass spectrometry, but lacks widespread application without a robust method that utilizes the effect. We demonstrate a new, highly sensitive, and widely applicable approach to SERS analysis based on laser ablation in the presence of a tailored plasmonic substrate. We analyze several challenging compounds, including non-water-soluble pigments and dyed leather from an ancient Egyptian chariot, achieving sensitivity as high as 120 amol for a 1:1 signal-to-noise ratio and 5 μm spatial resolution. This represents orders of magnitude improvement in spatial resolution and sensitivity compared to those of other SERS approaches intended for widespread application, greatly increasing the applicability of SERS.
Strappini, Francesca; Gilboa, Elad; Pitzalis, Sabrina; Kay, Kendrick; McAvoy, Mark; Nehorai, Arye; Snyder, Abraham Z
2017-03-01
Temporal and spatial filtering of fMRI data is often used to improve statistical power. However, conventional methods, such as smoothing with fixed-width Gaussian filters, remove fine-scale structure in the data, necessitating a tradeoff between sensitivity and specificity. Specifically, smoothing may increase sensitivity (reduce noise and increase statistical power) but at the cost of a loss of specificity, in that fine-scale structure in neural activity patterns is lost. Here, we propose an alternative smoothing method based on Gaussian process (GP) regression for single-subject fMRI experiments. This method adapts the level of smoothing on a voxel-by-voxel basis according to the characteristics of the local neural activity patterns. GP-based fMRI analysis has heretofore been impractical owing to computational demands. Here, we demonstrate a new implementation of GP that makes it possible to handle the massive data dimensionality of the typical fMRI experiment. We demonstrate how GP can be used as a drop-in replacement for conventional preprocessing steps for temporal and spatial smoothing in a standard fMRI pipeline. We present simulated and experimental results that show the increased sensitivity and specificity compared to conventional smoothing strategies. Hum Brain Mapp 38:1438-1459, 2017. © 2016 Wiley Periodicals, Inc.
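A minimal sketch of GP regression used as a smoother, applied to a single hypothetical voxel time course, is shown below; it uses a squared-exponential kernel with fixed hyperparameters and a direct solve, and does not reflect the scalable implementation the authors developed.

```python
import numpy as np

def gp_smooth(t, y, length_scale=2.0, signal_var=1.0, noise_var=0.5):
    """Posterior mean of a GP with a squared-exponential kernel:
    an adaptive alternative to fixed-width Gaussian smoothing."""
    d2 = (t[:, None] - t[None, :]) ** 2
    K = signal_var * np.exp(-0.5 * d2 / length_scale**2)
    alpha = np.linalg.solve(K + noise_var * np.eye(len(t)), y)
    return K @ alpha  # smoothed estimate at the observed time points

# Hypothetical single-voxel BOLD time course: slow signal plus noise
rng = np.random.default_rng(0)
t = np.arange(100, dtype=float)
y = np.sin(2 * np.pi * t / 40) + 0.5 * rng.normal(size=t.size)
y_smooth = gp_smooth(t, y)
print(np.round(y_smooth[:5], 3))
```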
De Koster, J; Hostens, M; Hermans, K; Van den Broeck, W; Opsomer, G
2016-10-01
The aim of the present research was to compare different measures of insulin sensitivity in dairy cows at the end of the dry period. To do so, 10 clinically healthy dairy cows with a varying body condition score were selected. By performing hyperinsulinemic euglycemic clamp (HEC) tests, we previously demonstrated a negative association between the insulin sensitivity and insulin responsiveness of glucose metabolism and the body condition score of these animals. In the same animals, other measures of insulin sensitivity were determined and the correlation with the HEC test, which is considered as the gold standard, was calculated. Measures derived from the intravenous glucose tolerance test (IVGTT) are based on the disappearance of glucose after an intravenous glucose bolus. Glucose concentrations during the IVGTT were used to calculate the area under the curve of glucose and the clearance rate of glucose. In addition, glucose and insulin data from the IVGTT were fitted in the minimal model to derive the insulin sensitivity parameter, Si. Based on blood samples taken before the start of the IVGTT, basal concentrations of glucose, insulin, NEFA, and β-hydroxybutyrate were determined and used to calculate surrogate indices for insulin sensitivity, such as the homeostasis model of insulin resistance, the quantitative insulin sensitivity check index, the revised quantitative insulin sensitivity check index and the revised quantitative insulin sensitivity check index including β-hydroxybutyrate. Correlation analysis revealed no association between the results obtained by the HEC test and any of the surrogate indices for insulin sensitivity. For the measures derived from the IVGTT, the area under the curve for the first 60 min of the test and the Si derived from the minimal model demonstrated good correlation with the gold standard. Copyright © 2016 Elsevier Inc. All rights reserved.
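The sketch below illustrates two of the IVGTT-derived measures mentioned above: the trapezoidal area under the glucose curve and a fractional glucose clearance rate estimated from the slope of log glucose during the decay phase. The glucose values and the 10-45 min fitting window are hypothetical.

```python
import numpy as np

# Hypothetical IVGTT glucose curve: time (min) and glucose (mmol/L)
t = np.array([0, 5, 10, 15, 20, 30, 45, 60], dtype=float)
glucose = np.array([4.0, 12.0, 10.5, 9.2, 8.1, 6.5, 5.2, 4.5])

# Area under the curve for the first 60 min (trapezoidal rule)
auc_60 = np.trapz(glucose, t)

# Glucose clearance rate (fractional disappearance, %/min) from the slope
# of ln(glucose) over the decay phase (here 10-45 min, an assumed window).
decay = (t >= 10) & (t <= 45)
slope, _ = np.polyfit(t[decay], np.log(glucose[decay]), 1)
k_glucose = -slope * 100.0

print(f"AUC(0-60 min)     = {auc_60:.1f} mmol/L*min")
print(f"glucose clearance = {k_glucose:.2f} %/min")
```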
Sampling and sensitivity analyses tools (SaSAT) for computational modelling
Hoare, Alexander; Regan, David G; Wilson, David P
2008-01-01
SaSAT (Sampling and Sensitivity Analysis Tools) is a user-friendly software package for applying uncertainty and sensitivity analyses to mathematical and computational models of arbitrary complexity and context. The toolbox is built in Matlab®, a numerical mathematical software package, and utilises algorithms contained in the Matlab® Statistics Toolbox. However, Matlab® is not required to use SaSAT as the software package is provided as an executable file with all the necessary supplementary files. The SaSAT package is also designed to work seamlessly with Microsoft Excel but no functionality is forfeited if that software is not available. A comprehensive suite of tools is provided to enable the following tasks to be easily performed: efficient and equitable sampling of parameter space by various methodologies; calculation of correlation coefficients; regression analysis; factor prioritisation; and graphical output of results, including response surfaces, tornado plots, and scatterplots. Use of SaSAT is exemplified by application to a simple epidemic model. To our knowledge, a number of the methods available in SaSAT for performing sensitivity analyses have not previously been used in epidemiological modelling and their usefulness in this context is demonstrated. PMID:18304361
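A minimal sketch of two of the tasks listed above, Latin hypercube sampling of parameter space and ranking parameters by rank correlation with a model output, is given below; the three-parameter toy model stands in for an epidemic model and is not part of SaSAT itself.

```python
import numpy as np
from scipy.stats import qmc, spearmanr

rng = np.random.default_rng(0)

# Latin hypercube sample of a 3-parameter space (assumed ranges)
names = ["beta", "gamma", "contact_rate"]
lower, upper = np.array([0.1, 0.05, 1.0]), np.array([0.9, 0.5, 20.0])
sample = qmc.scale(qmc.LatinHypercube(d=3, seed=0).random(n=500), lower, upper)

# Toy stand-in for a model output (e.g., epidemic final size)
output = sample[:, 0] * sample[:, 2] / sample[:, 1] + rng.normal(0, 0.5, 500)

# Rank parameters by Spearman rank correlation with the output
for j, name in enumerate(names):
    rho, _ = spearmanr(sample[:, j], output)
    print(f"{name:>12}: rho = {rho:+.2f}")
```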
Spin-exchange relaxation-free magnetometer with nearly parallel pump and probe beams
Karaulanov, Todor; Savukov, Igor; Kim, Young Jin
2016-03-22
We constructed a spin-exchange relaxation-free (SERF) magnetometer with a small angle between the pump and probe beams facilitating a multi-channel design with a flat pancake cell. This configuration provides almost complete overlap of the beams in the cell, and prevents the pump beam from entering the probe detection channel. By coupling the lasers in multi-mode fibers, without an optical isolator or field modulation, we demonstrate a sensitivity of 10 fT/√Hz for frequencies between 10 Hz and 100 Hz. In addition to the experimental study of sensitivity, we present a theoretical analysis of SERF magnetometer response to magnetic fields for small-angle and parallel-beam configurations, and show that at optimal DC offset fields the magnetometer response is comparable to that in the orthogonal-beam configuration. Based on the analysis, we also derive fundamental and probe-limited sensitivities for the arbitrary non-orthogonal geometry. The expected practical and fundamental sensitivities are of the same order as those in the orthogonal geometry. As a result, we anticipate that our design will be useful for magnetoencephalography (MEG) and magnetocardiography (MCG) applications.
The enhanced cyan fluorescent protein: a sensitive pH sensor for fluorescence lifetime imaging.
Poëa-Guyon, Sandrine; Pasquier, Hélène; Mérola, Fabienne; Morel, Nicolas; Erard, Marie
2013-05-01
pH is an important parameter that affects many functions of live cells, from protein structure or function to several crucial steps of their metabolism. Genetically encoded pH sensors based on pH-sensitive fluorescent proteins have been developed and used to monitor the pH of intracellular compartments. The quantitative analysis of pH variations can be performed either by ratiometric or fluorescence lifetime detection. However, most available genetically encoded pH sensors are based on green and yellow fluorescent proteins and are not compatible with multicolor approaches. Taking advantage of the strong pH sensitivity of enhanced cyan fluorescent protein (ECFP), we demonstrate here its suitability as a sensitive pH sensor using fluorescence lifetime imaging. The intracellular ECFP lifetime undergoes large changes (32 %) in the pH 5 to pH 7 range, which allows accurate pH measurements to better than 0.2 pH units. By fusion of ECFP with the granular chromogranin A, we successfully measured the pH in secretory granules of PC12 cells, and we performed a kinetic analysis of intragranular pH variations in living cells exposed to ammonium chloride.
Leavesley, Silas J; Sweat, Brenner; Abbott, Caitlyn; Favreau, Peter; Rich, Thomas C
2018-01-01
Spectral imaging technologies have been used for many years by the remote sensing community. More recently, these approaches have been applied to biomedical problems, where they have shown great promise. However, biomedical spectral imaging has been complicated by the high variance of biological data and the reduced ability to construct test scenarios with fixed ground truths. Hence, it has been difficult to objectively assess and compare biomedical spectral imaging assays and technologies. Here, we present a standardized methodology that allows assessment of the performance of biomedical spectral imaging equipment, assays, and analysis algorithms. This methodology incorporates real experimental data and a theoretical sensitivity analysis, preserving the variability present in biomedical image data. We demonstrate that this approach can be applied in several ways: to compare the effectiveness of spectral analysis algorithms, to compare the response of different imaging platforms, and to assess the level of target signature required to achieve a desired performance. Results indicate that it is possible to compare even very different hardware platforms using this methodology. Future applications could include a range of optimization tasks, such as maximizing detection sensitivity or acquisition speed, providing high utility for investigators ranging from design engineers to biomedical scientists. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Post-Optimality Analysis In Aerospace Vehicle Design
NASA Technical Reports Server (NTRS)
Braun, Robert D.; Kroo, Ilan M.; Gage, Peter J.
1993-01-01
This analysis pertains to the applicability of optimal sensitivity information to aerospace vehicle design. An optimal sensitivity (or post-optimality) analysis refers to computations performed once the initial optimization problem is solved. These computations may be used to characterize the design space about the present solution and infer changes in this solution as a result of constraint or parameter variations, without reoptimizing the entire system. The present analysis demonstrates that post-optimality information generated through first-order computations can be used to accurately predict the effect of constraint and parameter perturbations on the optimal solution. This assessment is based on the solution of an aircraft design problem in which the post-optimality estimates are shown to be within a few percent of the true solution over the practical range of constraint and parameter variations. Through solution of a reusable, single-stage-to-orbit, launch vehicle design problem, this optimal sensitivity information is also shown to improve the efficiency of the design process. For a hierarchically decomposed problem, this computational efficiency is realized by estimating the main-problem objective gradient through optimal sensitivity calculations. By reducing the need for finite differentiation of a re-optimized subproblem, a significant decrease in the number of objective function evaluations required to reach the optimal solution is obtained.
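The post-optimality computation described above rests on the standard first-order optimal sensitivity relation, stated here for context (the paper's exact formulation is not quoted):

```latex
% First-order sensitivity of the optimal objective f^* with respect to a
% problem parameter p, evaluated at the optimum (x^*, \lambda^*):
\frac{d f^*}{d p}
  = \left.\frac{\partial f}{\partial p}\right|_{x^*}
  + \sum_{j \in \mathcal{A}} \lambda_j^* \,
    \left.\frac{\partial g_j}{\partial p}\right|_{x^*}
```

Here \mathcal{A} is the set of active constraints and \lambda_j^* are the corresponding Lagrange multipliers, so the estimate uses only quantities already available from the converged optimization, which is why no reoptimization is needed for small perturbations.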
Jin, H; Yuan, L; Li, C; Kan, Y; Hao, R; Yang, J
2014-03-01
The purpose of this study was to systematically review and perform a meta-analysis of published data regarding the diagnostic performance of positron emission tomography (PET) or PET/computed tomography (PET/CT) in prosthetic infection after arthroplasty. A comprehensive computer literature search of studies published through May 31, 2012 regarding PET or PET/CT in patients with suspected prosthetic infection was performed in PubMed/MEDLINE, Embase and Scopus databases. Pooled sensitivity and specificity of PET or PET/CT in patients with suspected prosthetic infection on a per prosthesis-based analysis were calculated. The area under the receiver-operating characteristic (ROC) curve was calculated to measure the accuracy of PET or PET/CT in patients with suspected prosthetic infection. Fourteen studies comprising 838 prostheses with suspected prosthetic infection after arthroplasty were included in this meta-analysis. The pooled sensitivity of PET or PET/CT in detecting prosthetic infection was 86% (95% confidence interval [CI] 82-90%) on a per prosthesis-based analysis. The pooled specificity of PET or PET/CT in detecting prosthetic infection was 86% (95% CI 83-89%) on a per prosthesis-based analysis. The area under the ROC curve was 0.93 on a per prosthesis-based analysis. In patients with suspected prosthetic infection, FDG PET or PET/CT demonstrated high sensitivity and specificity. FDG PET or PET/CT are accurate methods in this setting. Nevertheless, possible sources of false positive results and influencing factors should be kept in mind.
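The sketch below shows a simplified inverse-variance pooling of per-study sensitivities on the logit scale; the study counts are hypothetical, and published meta-analyses of diagnostic accuracy typically use more elaborate bivariate random-effects models than this fixed-effect illustration.

```python
import numpy as np

# Hypothetical per-study counts: true positives and false negatives
tp = np.array([25, 40, 18, 33])
fn = np.array([4, 6, 3, 5])

# Continuity-corrected per-study sensitivities on the logit scale
sens = (tp + 0.5) / (tp + fn + 1.0)
logit = np.log(sens / (1 - sens))
var = 1.0 / (tp + 0.5) + 1.0 / (fn + 0.5)  # approximate logit variance

w = 1.0 / var
pooled_logit = np.sum(w * logit) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))

expit = lambda z: 1.0 / (1.0 + np.exp(-z))
pooled = expit(pooled_logit)
lo, hi = expit(pooled_logit - 1.96 * se), expit(pooled_logit + 1.96 * se)
print(f"pooled sensitivity = {pooled:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```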
Liu, C Carrie; Jethwa, Ashok R; Khariwala, Samir S; Johnson, Jonas; Shin, Jennifer J
2016-01-01
(1) To analyze the sensitivity and specificity of fine-needle aspiration (FNA) in distinguishing benign from malignant parotid disease. (2) To determine the anticipated posttest probability of malignancy and probability of nondiagnostic and indeterminate cytology with parotid FNA. Independently corroborated computerized searches of PubMed, Embase, and Cochrane Central Register were performed. These were supplemented with manual searches and input from content experts. Inclusion/exclusion criteria specified diagnosis of parotid mass, intervention with both FNA and surgical excision, and enumeration of both cytologic and surgical histopathologic results. The primary outcomes were sensitivity, specificity, and posttest probability of malignancy. Heterogeneity was evaluated with the I(2) statistic. Meta-analysis was performed via a 2-level mixed logistic regression model. Bayesian nomograms were plotted via pooled likelihood ratios. The systematic review yielded 70 criterion-meeting studies, 63 of which contained data that allowed for computation of numerical outcomes (n = 5647 patients; level 2a) and consideration of meta-analysis. Subgroup analyses were performed in studies that were prospective, involved consecutive patients, described the FNA technique utilized, and used ultrasound guidance. The I(2) point estimate was >70% for all analyses, except within prospectively obtained and ultrasound-guided results. Among the prospective subgroup, the pooled analysis demonstrated a sensitivity of 0.882 (95% confidence interval [95% CI], 0.509-0.982) and a specificity of 0.995 (95% CI, 0.960-0.999). The probabilities of nondiagnostic and indeterminate cytology were 0.053 (95% CI, 0.030-0.075) and 0.147 (95% CI, 0.106-0.188), respectively. FNA has moderate sensitivity and high specificity in differentiating malignant from benign parotid lesions. Considerable heterogeneity is present among studies. © American Academy of Otolaryngology-Head and Neck Surgery Foundation 2015.
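The Bayesian nomograms mentioned above encode the likelihood-ratio form of Bayes' theorem; the sketch below performs that calculation directly, using the pooled prospective-subgroup estimates from the abstract and an assumed 20% pretest probability of malignancy.

```python
def posttest_probability(pretest_p, likelihood_ratio):
    """Posttest probability from pretest probability and a likelihood ratio
    (the calculation a Bayesian nomogram performs graphically)."""
    pre_odds = pretest_p / (1 - pretest_p)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Pooled estimates from the prospective subgroup reported above
sens, spec = 0.882, 0.995
lr_pos = sens / (1 - spec)   # positive likelihood ratio
lr_neg = (1 - sens) / spec   # negative likelihood ratio

pretest = 0.20               # assumed pretest probability of malignancy
print(f"LR+ = {lr_pos:.1f}, LR- = {lr_neg:.2f}")
print(f"P(malignant | FNA positive) = {posttest_probability(pretest, lr_pos):.2f}")
print(f"P(malignant | FNA negative) = {posttest_probability(pretest, lr_neg):.3f}")
```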
Mallinckrodt, C H; Lin, Q; Molenberghs, M
2013-01-01
The objective of this research was to demonstrate a framework for drawing inference from sensitivity analyses of incomplete longitudinal clinical trial data via a re-analysis of data from a confirmatory clinical trial in depression. A likelihood-based approach that assumed missing at random (MAR) was the primary analysis. Robustness to departure from MAR was assessed by comparing the primary result to those from a series of analyses that employed varying missing not at random (MNAR) assumptions (selection models, pattern mixture models and shared parameter models) and to MAR methods that used inclusive models. The key sensitivity analysis used multiple imputation assuming that after dropout the trajectory of drug-treated patients was that of placebo-treated patients with a similar outcome history (placebo multiple imputation). This result was used as the worst reasonable case to define the lower limit of plausible values for the treatment contrast. The endpoint contrast from the primary analysis was -2.79 (p = .013). In placebo multiple imputation, the result was -2.17. Results from the other sensitivity analyses ranged from -2.21 to -3.87 and were symmetrically distributed around the primary result. Hence, no clear evidence of bias from missing not at random data was found. In the worst reasonable case scenario, the treatment effect was 80% of the magnitude of the primary result. Therefore, it was concluded that a treatment effect existed. The structured sensitivity framework, in which a worst reasonable case result based on a controlled imputation approach with transparent and debatable assumptions was supplemented by a series of plausible alternative models under varying assumptions, was useful in this specific situation and holds promise as a generally useful framework. Copyright © 2012 John Wiley & Sons, Ltd.
Liu, C. Carrie; Jethwa, Ashok R.; Khariwala, Samir S.; Johnson, Jonas; Shin, Jennifer J.
2016-01-01
Objectives (1) To analyze the sensitivity and specificity of fine-needle aspiration (FNA) in distinguishing benign from malignant parotid disease. (2) To determine the anticipated posttest probability of malignancy and probability of non-diagnostic and indeterminate cytology with parotid FNA. Data Sources Independently corroborated computerized searches of PubMed, Embase, and Cochrane Central Register were performed. These were supplemented with manual searches and input from content experts. Review Methods Inclusion/exclusion criteria specified diagnosis of parotid mass, intervention with both FNA and surgical excision, and enumeration of both cytologic and surgical histopathologic results. The primary outcomes were sensitivity, specificity, and posttest probability of malignancy. Heterogeneity was evaluated with the I2 statistic. Meta-analysis was performed via a 2-level mixed logistic regression model. Bayesian nomograms were plotted via pooled likelihood ratios. Results The systematic review yielded 70 criterion-meeting studies, 63 of which contained data that allowed for computation of numerical outcomes (n = 5647 patients; level 2a) and consideration of meta-analysis. Subgroup analyses were performed in studies that were prospective, involved consecutive patients, described the FNA technique utilized, and used ultrasound guidance. The I2 point estimate was >70% for all analyses, except within prospectively obtained and ultrasound-guided results. Among the prospective subgroup, the pooled analysis demonstrated a sensitivity of 0.882 (95% confidence interval [95% CI], 0.509–0.982) and a specificity of 0.995 (95% CI, 0.960–0.999). The probabilities of nondiagnostic and indeterminate cytology were 0.053 (95% CI, 0.030–0.075) and 0.147 (95% CI, 0.106–0.188), respectively. Conclusion FNA has moderate sensitivity and high specificity in differentiating malignant from benign parotid lesions. Considerable heterogeneity is present among studies. PMID:26428476
van de Schoot, Rens; Broere, Joris J.; Perryck, Koen H.; Zondervan-Zwijnenburg, Mariëlle; van Loey, Nancy E.
2015-01-01
Background The analysis of small data sets in longitudinal studies can lead to power issues and often suffers from biased parameter values. These issues can be solved by using Bayesian estimation in conjunction with informative prior distributions. By means of a simulation study and an empirical example concerning posttraumatic stress symptoms (PTSS) following mechanical ventilation in burn survivors, we demonstrate the advantages and potential pitfalls of using Bayesian estimation. Methods First, we show how to specify prior distributions and by means of a sensitivity analysis we demonstrate how to check the exact influence of the prior (mis-)specification. Thereafter, we show by means of a simulation the situations in which the Bayesian approach outperforms the default maximum likelihood approach. Finally, we re-analyze empirical data on burn survivors which provided preliminary evidence of an aversive influence of a period of mechanical ventilation on the course of PTSS following burns. Results Not surprisingly, maximum likelihood estimation showed insufficient coverage as well as power with very small samples. Only when Bayesian analysis, in conjunction with informative priors, was used did power increase to acceptable levels. As expected, we showed that the smaller the sample size, the more the results rely on the prior specification. Conclusion We show that two issues often encountered during analysis of small samples, power and biased parameters, can be solved by including prior information into Bayesian analysis. We argue that the use of informative priors should always be reported together with a sensitivity analysis. PMID:25765534
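The dependence of small-sample results on the prior can be illustrated with a conjugate normal-normal sketch, in which the posterior mean is a precision-weighted compromise between the prior mean and the sample mean. The data-generating values, the informative (and deliberately off-center) prior, and the known data variance are assumptions for illustration, not the authors' PTSS model.

```python
import numpy as np

def posterior_normal(y, prior_mean, prior_var, data_var):
    """Conjugate normal-normal posterior with known data variance."""
    n = len(y)
    post_var = 1.0 / (1.0 / prior_var + n / data_var)
    post_mean = post_var * (prior_mean / prior_var + n * np.mean(y) / data_var)
    return post_mean, post_var

rng = np.random.default_rng(0)
true_mean, data_var = 2.0, 4.0
prior_mean, prior_var = 1.0, 0.5  # informative prior, centered away from the truth

for n in (5, 20, 200):
    y = rng.normal(true_mean, np.sqrt(data_var), size=n)
    m, v = posterior_normal(y, prior_mean, prior_var, data_var)
    print(f"n={n:3d}: sample mean={np.mean(y):.2f}, "
          f"posterior mean={m:.2f} (sd {np.sqrt(v):.2f})")
```

As n shrinks, the posterior mean is pulled toward the prior mean, which is exactly why a sensitivity analysis over plausible prior specifications should accompany small-sample Bayesian results.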
van de Schoot, Rens; Broere, Joris J; Perryck, Koen H; Zondervan-Zwijnenburg, Mariëlle; van Loey, Nancy E
2015-01-01
Background: The analysis of small data sets in longitudinal studies can lead to power issues and often suffers from biased parameter values. These issues can be solved by using Bayesian estimation in conjunction with informative prior distributions. By means of a simulation study and an empirical example concerning posttraumatic stress symptoms (PTSS) following mechanical ventilation in burn survivors, we demonstrate the advantages and potential pitfalls of using Bayesian estimation. Methods: First, we show how to specify prior distributions and by means of a sensitivity analysis we demonstrate how to check the exact influence of the prior (mis-)specification. Thereafter, we show by means of a simulation the situations in which the Bayesian approach outperforms the default maximum likelihood approach. Finally, we re-analyze empirical data on burn survivors which provided preliminary evidence of an aversive influence of a period of mechanical ventilation on the course of PTSS following burns. Results: Not surprisingly, maximum likelihood estimation showed insufficient coverage as well as power with very small samples. Only when Bayesian analysis, in conjunction with informative priors, was used did power increase to acceptable levels. As expected, we showed that the smaller the sample size, the more the results rely on the prior specification. Conclusion: We show that two issues often encountered during analysis of small samples, power and biased parameters, can be solved by including prior information into Bayesian analysis. We argue that the use of informative priors should always be reported together with a sensitivity analysis.
NASA Technical Reports Server (NTRS)
Smith, Andrew; LaVerde, Bruce; Fulcher, Clay; Hunt, Ron
2012-01-01
An approach for predicting the vibration, strain, and force responses of a flight-like vehicle panel assembly to acoustic pressures is presented. Important validation for the approach is provided by comparison to ground test measurements in a reverberant chamber. The test article and the corresponding analytical model were assembled in several configurations to demonstrate the suitability of the approach for response predictions when the vehicle panel is integrated with equipment. Critical choices in the analysis necessary for convergence of the predicted and measured responses are illustrated through sensitivity studies. The methodology includes representation of spatial correlation of the pressure field over the panel surface. Therefore, it is possible to demonstrate the effects of hydrodynamic coincidence in the response. The sensitivity to pressure patch density clearly illustrates the onset of coincidence effects on the panel response predictions.
NASA Astrophysics Data System (ADS)
Dutta, Tanoy; Chandra, Falguni; Koner, Apurba L.
2018-02-01
A "naked-eye" detection of health hazardous bisulfite (HSO3-) and hypochlorite (ClO-) using an indicator dye (Quinaldine Red, QR) over a wide range of pH is demonstrated. The molecule contains a quinoline moiety linked to an N,N-dimethylaniline moiety with a conjugated double bond. Treatment of QR with HSO3- and ClO-, in aqueous solution at near-neutral pH, resulted in a colorless product with high selectivity and sensitivity. The detection limit was 47.8 μM and 0.2 μM for HSO3- and ClO-, respectively. However, detection of ClO- was 50 times more sensitive and showed a 2 times faster response compared to HSO3-. The detailed characterization and related analysis demonstrate the potential of QR as a rapid, robust and highly efficient colorimetric sensor for the practical detection of hypochlorite in water samples.
Wang, Yiping; Ni, Xiaoqi; Wang, Ming; Cui, Yifeng; Shi, Qingyun
2017-01-23
In this paper, a demodulation method for an optical fiber micro-electromechanical systems (MEMS) extrinsic Fabry-Perot interferometer (EFPI) pressure sensor exploiting the microwave photonics filter technique is proposed and experimentally demonstrated for the first time. A single bandpass microwave photonic filter (MPF), which mainly consists of a spectrum-sliced light source, a pressurized optical fiber MEMS EFPI, a phase modulator (PM) and a length of dispersion compensating fiber (DCF), is demonstrated. The frequency response of the filter with respect to the pressure is studied. By detecting the resonance frequency shifts of the MPF, the pressure can be determined. The theoretical and experimental results show that the proposed EFPI pressure demodulation method has a higher resolution and higher speed than traditional methods based on optical spectrum analysis. The sensitivity of the sensor is measured to be as high as 86 MHz/MPa in the range of 0–4 MPa. Moreover, the sensitivity can be easily adjusted.
Gradient-Based Aerodynamic Shape Optimization Using ADI Method for Large-Scale Problems
NASA Technical Reports Server (NTRS)
Pandya, Mohagna J.; Baysal, Oktay
1997-01-01
A gradient-based shape optimization methodology, intended for practical three-dimensional aerodynamic applications, has been developed. It is based on quasi-analytical sensitivities. The flow analysis is rendered by a fully implicit, finite volume formulation of the Euler equations. The aerodynamic sensitivity equation is solved using the alternating-direction-implicit (ADI) algorithm for memory efficiency. A flexible wing geometry model, based on surface parameterization and planform schedules, is utilized. The present methodology and its components have been tested via several comparisons. Initially, the flow analysis for a wing is compared with those obtained using an unfactored, preconditioned conjugate gradient approach (PCG), and an extensively validated CFD code. Then, the sensitivities computed with the present method have been compared with those obtained using the finite-difference and the PCG approaches. Effects of grid refinement and convergence tolerance on the analysis and shape optimization have been explored. Finally, the new procedure has been demonstrated in the design of a cranked arrow wing at Mach 2.4. Despite the expected increase in computational time, the results indicate that shape optimization problems, which require large numbers of grid points, can be resolved with a gradient-based approach.
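The quasi-analytical sensitivities referred to above generally follow the standard discrete sensitivity relation sketched here (stated for context, not quoted from the paper); the ADI algorithm is presumably applied to the linear system on the left, which is what allows memory-efficient factored sweeps instead of storing and factoring the full Jacobian.

```latex
% Discrete quasi-analytical sensitivity of an aerodynamic objective F(Q, \beta)
% subject to the converged flow residual R(Q, \beta) = 0
% (Q: flow variables, \beta: shape design variables):
\frac{\partial R}{\partial Q}\,\frac{dQ}{d\beta} = -\,\frac{\partial R}{\partial \beta},
\qquad
\frac{dF}{d\beta} = \frac{\partial F}{\partial \beta}
  + \frac{\partial F}{\partial Q}\,\frac{dQ}{d\beta}
```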
Song, Sang-Hoon; Lee, Naeun; Kim, Dong-Joon; Lee, Sooyeun; Jeong, Chul-Ho
2017-01-01
Molecular and metabolic alterations in cancer cells are one of the leading causes of acquired resistance to chemotherapeutics. In this study, we explored an experimental strategy to identify which of these alterations can induce erlotinib resistance in human pancreatic cancer. Using genetically matched erlotinib-sensitive (BxPC-3) and erlotinib-resistant (BxPC-3ER) pancreatic cancer cells, we conducted a multi-omics analysis of metabolomes and transcriptomes in these cells. Untargeted and targeted metabolomic analyses revealed significant changes in metabolic pathways involved in the regulation of polyamines, amino acids, and fatty acids. Further transcriptomic analysis identified that ornithine decarboxylase (ODC) and its major metabolite, putrescine, contribute to the acquisition of erlotinib resistance in BxPC-3ER cells. Notably, either pharmacological or genetic blockage of ODC was able to restore erlotinib sensitivity, and this could be rescued by treatment with exogenous putrescine in erlotinib-resistant BxPC-3ER cells. Moreover, using a panel of cancer cells we demonstrated that ODC expression levels in cancer cells are inversely correlated with sensitivity to chemotherapeutics. Taken together, our findings will begin to uncover mechanisms of acquired drug resistance and ultimately help to identify potential therapeutic markers in cancer. PMID:29190951
Inheritance of Carboxin Resistance in a European Field Isolate of Ustilago nuda.
Newcombe, G; Thomas, P L
2000-02-01
Two carboxin-resistant field isolates of Ustilago nuda from Europe were crossed with a carboxin-sensitive field isolate from North America. Meiotic tetrads isolated from germinating F(1) teliospores of one of the hybrids were tested for carboxin resistance and mating type. Carboxin resistance was shown to be controlled by a single gene (CBX1R), because a 1:1 segregation of carboxin resistance was observed in all 27 tetrads. Tetrad analysis indicated that the loci for carboxin resistance (Cbx1) and mating type (MAT1) segregate independently but may be located on the same chromosome. Tetrad analysis was not possible with the F(1) hybrid of the other field isolate, and its resistance cannot yet be attributed to CBX1R. Carboxin resistance was qualitatively dominant to sensitivity in vitro, as demonstrated by triad analysis of germinating F(1) teliospores. Quantitative in planta infection percentages supported the conclusion that CBX1R is dominant, although incompletely, in the F(1) hybrid of one of the field isolates. Also, fewer than expected carboxin-sensitive F(2) individuals were observed in planta.
Identification of potential barriers to nurse-sensitive outcome demonstration.
Beckel, Jean; Wolf, Gail; Wilson, Roxanne; Hoolahan, Susan
2013-12-01
The objective of this study was to determine differences in chief nursing officer, Magnet(®) program director, nurse leader, and direct care RN perspectives of potential barriers to demonstration of nurse-sensitive outcomes. The Magnet Recognition Program(®) and other designations are focusing on patient outcomes. No evidence is available addressing barriers to demonstration of nursing outcomes at multiple levels of practice. A Likert scale tool was developed and administered to 526 attendees at the 2012 national Magnet conference. Questions related to available resources, benchmarks, outcome demonstration process understanding, perception of value, and competing priorities. Significant perception differences by role were demonstrated related to available resources, competing priorities, and process understanding supporting demonstration of nurse-sensitive outcomes. No significant differences were identified related to benchmarks or perception of process value to the organization. This study provides new information demonstrating potential barriers to demonstration of nurse-sensitive outcomes differing by role. Opportunity exists to develop systems and processes to reduce perceived barriers among the nursing workforce.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Python, Francois; Goebel, Carsten; Aeby, Pierre
2009-09-15
The number of studies involved in the development of in vitro skin sensitization tests has increased since the adoption of the EU 7th amendment to the cosmetics directive proposing to ban animal testing for cosmetic ingredients by 2013. Several studies have recently demonstrated that sensitizers induce a relevant up-regulation of activation markers such as CD86, CD54, IL-8 or IL-1β in human myeloid cell lines (e.g., U937, MUTZ-3, THP-1) or in human peripheral blood monocyte-derived dendritic cells (PBMDCs). The present study aimed at the identification of new dendritic cell activation markers in order to further improve the in vitro evaluation of the sensitizing potential of chemicals. We have compared the gene expression profiles of PBMDCs and the human cell line MUTZ-3 after a 24-h exposure to the moderate sensitizer cinnamaldehyde. A list of 80 genes modulated in both cell types was obtained and a set of candidate marker genes was selected for further analysis. Cells were exposed to selected sensitizers and non-sensitizers for 24 h and gene expression was analyzed by quantitative real-time reverse transcriptase-polymerase chain reaction. Results indicated that PIR, TRIM16 and two Nrf2-regulated genes, CES1 and NQO1, are modulated by most sensitizers. Up-regulation of these genes could also be observed in our recently published DC-activation test with U937 cells. Due to their role in DC activation, these new genes may help to further refine the in vitro approaches for the screening of the sensitizing properties of a chemical.
Apparatus and method for quantitative determination of materials contained in fluids
Radziemski, Leon J.; Cremers, David A.
1985-01-01
Apparatus and method for near real-time in-situ monitoring of particulates and vapors contained in fluids. Initial filtration of a known volume of the fluid sample is combined with laser-induced dielectric breakdown spectroscopy of the filter employed to obtain qualitative and quantitative information with high sensitivity. Application of the invention to monitoring of beryllium, beryllium oxide, or other beryllium-alloy dusts is demonstrated. Significant shortening of analysis time is achieved from those of the usual chemical techniques of analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, Kook In; Lee, In Gyu; Hwang, Wan Sik, E-mail: mhshin@kau.ac.kr, E-mail: whwang@kau.ac.kr
The oxidation properties of graphene oxide (GO) are systematically correlated with their chemical sensing properties. Based on an impedance analysis, the equivalent circuit models of the capacitive sensors are established, and it is demonstrated that capacitive operations are related to the degree of oxidation. This is also confirmed by X-ray diffraction and Raman analysis. Finally, highly sensitive stacked GO sensors are shown to detect humidity in capacitive mode, which can be useful in various applications requiring low power consumption.
Optimizing human activity patterns using global sensitivity analysis.
Fairchild, Geoffrey; Hickmann, Kyle S; Mniszewski, Susan M; Del Valle, Sara Y; Hyman, James M
2014-12-01
Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule's regularity for a population. We show how to tune an activity's regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
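A direct, illustrative implementation of the sample entropy statistic used above is sketched below; it follows the usual SampEn(m, r) definition with Chebyshev distance and a tolerance scaled by the signal's standard deviation. It is O(N^2), so it is meant for clarity rather than for the large schedules DASim generates.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) = -ln(A/B), where B counts template matches of length m
    and A counts matches of length m+1, within tolerance r (Chebyshev distance).
    Direct O(N^2) implementation for illustration."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        count = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d <= tol)
        return count

    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 20 * np.pi, 500))   # highly regular schedule proxy
irregular = rng.normal(size=500)                    # irregular schedule proxy
print("SampEn regular  :", round(sample_entropy(regular), 3))
print("SampEn irregular:", round(sample_entropy(irregular), 3))
```

Lower SampEn values correspond to more regular activity patterns, which is the quantity the tuning procedure above adjusts.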
Optimizing human activity patterns using global sensitivity analysis
Hickmann, Kyle S.; Mniszewski, Susan M.; Del Valle, Sara Y.; Hyman, James M.
2014-01-01
Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations. PMID:25580080
Optimizing human activity patterns using global sensitivity analysis
Fairchild, Geoffrey; Hickmann, Kyle S.; Mniszewski, Susan M.; ...
2013-12-10
Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. Here we use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Finally, though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
Purification of Derivatized Oligosaccharides by Solid Phase Extraction for Glycomic Analysis
Zhang, Qiwei; Li, Henghui; Feng, Xiaojun; Liu, Bi-Feng; Liu, Xin
2014-01-01
Profiling of glycans released from proteins is very complex and important. To enhance the detection sensitivity, chemical derivatization is required for the analysis of carbohydrates. Due to the interference of excess reagents, a simple and reliable purification method is usually necessary for the derivatized oligosaccharides. Various SPE-based methods have been applied for the clean-up process. To demonstrate the differences among these methods, seven types of self-packed SPE cartridges were systematically compared in this study. The optimized conditions were determined for each type of cartridge and it was found that microcrystalline cellulose was the most appropriate SPE material for the purification of derivatized oligosaccharides. Normal phase HPLC analysis of the derivatized maltoheptaose was realized with a detection limit of 0.12 pmol (S/N = 3) and a recovery over 70%. With the optimized SPE method, relative quantification analysis of N-glycans from model glycoproteins was carried out accurately and over 40 N-glycans from human serum samples were determined regardless of the isomers. Owing to its high stability and sensitivity, the microcrystalline cellulose cartridge showed potential applications in glycomics analysis. PMID:24705408
Holloway, Andrew J; Oshlack, Alicia; Diyagama, Dileepa S; Bowtell, David DL; Smyth, Gordon K
2006-01-01
Background Concerns are often raised about the accuracy of microarray technologies and the degree of cross-platform agreement, but there are yet no methods which can unambiguously evaluate precision and sensitivity for these technologies on a whole-array basis. Results A methodology is described for evaluating the precision and sensitivity of whole-genome gene expression technologies such as microarrays. The method consists of an easy-to-construct titration series of RNA samples and an associated statistical analysis using non-linear regression. The method evaluates the precision and responsiveness of each microarray platform on a whole-array basis, i.e., using all the probes, without the need to match probes across platforms. An experiment is conducted to assess and compare four widely used microarray platforms. All four platforms are shown to have satisfactory precision but the commercial platforms are superior for resolving differential expression for genes at lower expression levels. The effective precision of the two-color platforms is improved by allowing for probe-specific dye-effects in the statistical model. The methodology is used to compare three data extraction algorithms for the Affymetrix platforms, demonstrating poor performance for the commonly used proprietary algorithm relative to the other algorithms. For probes which can be matched across platforms, the cross-platform variability is decomposed into within-platform and between-platform components, showing that platform disagreement is almost entirely systematic rather than due to measurement variability. Conclusion The results demonstrate good precision and sensitivity for all the platforms, but highlight the need for improved probe annotation. They quantify the extent to which cross-platform measures can be expected to be less accurate than within-platform comparisons for predicting disease progression or outcome. PMID:17118209
Britz-McKibbin, Philip; Otsuka, Koji; Terabe, Shigeru
2002-08-01
Simple yet effective methods to enhance concentration sensitivity are needed for capillary electrophoresis (CE) to become a practical method to analyze trace levels of analytes in real samples. In this report, the development of a novel on-line preconcentration technique combining dynamic pH junction and sweeping modes of focusing is applied to the sensitive and selective analysis of three flavin derivatives: riboflavin, flavin mononucleotide (FMN) and flavin adenine dinucleotide (FAD). Picomolar (pM) detectability of flavins by CE with laser-induced fluorescence (LIF) detection is demonstrated through effective focusing of large sample volumes (up to 22% of capillary length) using a dual pH junction-sweeping focusing mode. This results in greater than a 1,200-fold improvement in sensitivity relative to conventional injection methods, giving a limit of detection (S/N = 3) of approximately 4.0 pM for FAD and FMN. Flavin focusing is examined in terms of analyte mobility dependence on buffer pH, borate complexation and SDS interaction. Dynamic pH junction-sweeping extends on-line focusing to both neutral (hydrophobic) and weakly acidic (hydrophilic) species and is considered useful in cases when either conventional sweeping or dynamic pH junction techniques used alone are less effective for certain classes of analytes. Enhanced focusing performance by this hyphenated method was demonstrated by greater than a 4-fold reduction in flavin bandwidth, as compared to either sweeping or dynamic pH junction, reflected by analyte detector bandwidths <0.20 cm. Novel on-line focusing strategies are required to improve sensitivity in CE, which may be applied toward more effective biochemical analysis methods for diverse types of analytes.
Mikhaylova, Lyudmila; Zhang, Yiming; Kobzik, Lester; Fedulov, Alexey V
2013-01-01
We investigated the link between epigenome-wide methylation aberrations at birth and genomic transcriptional changes upon allergen sensitization that occur in the neonatal dendritic cells (DC) due to maternal asthma. We previously demonstrated that neonates of asthmatic mothers are born with a functional skew in splenic DCs that can be seen even in allergen-naïve pups and can convey allergy responses to normal recipients. However, minimal-to-no transcriptional or phenotypic changes were found to explain this alteration. Here we provide in-depth analysis of genome-wide DNA methylation profiles and RNA transcriptional (microarray) profiles before and after allergen sensitization. We identified differentially methylated and differentially expressed loci and performed manually-curated matching of methylation status of the key regulatory sequences (promoters and CpG islands) to expression of their respective transcripts before and after sensitization. We found that while allergen-naive DCs from asthma-at-risk neonates have minimal transcriptional change compared to controls, the methylation changes are extensive. The substantial transcriptional change only becomes evident upon allergen sensitization, when it occurs in multiple genes with the pre-existing epigenetic alterations. We demonstrate that maternal asthma leads to both hyper- and hypomethylation in neonatal DCs, and that both types of events at various loci significantly overlap with transcriptional responses to allergen. Pathway analysis indicates that approximately 1/2 of differentially expressed and differentially methylated genes directly interact in known networks involved in allergy and asthma processes. We conclude that congenital epigenetic changes in DCs are strongly linked to altered transcriptional responses to allergen and to early-life asthma origin. The findings are consistent with the emerging paradigm that asthma is a disease with underlying epigenetic changes.
Arm Dominance Affects Feedforward Strategy more than Feedback Sensitivity during a Postural Task
Walker, Elise H. E.; Perreault, Eric J.
2015-01-01
Handedness is a feature of human motor control that is still not fully understood. Recent work has demonstrated that the dominant and nondominant arm each excel at different behaviors, and has proposed that this behavioral asymmetry arises from lateralization in the cerebral cortex: the dominant side specializes in predictive trajectory control, while the nondominant side is specialized for impedance control. Long-latency stretch reflexes are an automatic mechanism for regulating posture, and have been shown to contribute to limb impedance. To determine whether long-latency reflexes also contribute to asymmetric motor behavior in the upper limbs, we investigated the effect of arm dominance on stretch reflexes during a postural task that required varying degrees of impedance control. Our results demonstrated slightly but significantly larger reflex responses in the biarticular muscles of the nondominant arm, as would be consistent with increased impedance control. These differences were attributed solely to higher levels of voluntary background activity in the nondominant biarticular muscles, indicating that feedforward strategies for postural stability may differ between arms. Reflex sensitivity, which was defined as the magnitude of the reflex response for matched levels of background activity, was not significantly different between arms for a broad subject population ranging from 23–51 years of age. These results indicate that inter-arm differences in feedforward strategies are more influential during posture than differences in feedback sensitivity, in a broad subject population. Interestingly, restricting our analysis to subjects under 40 years of age revealed a small increase in long-latency reflex sensitivity in the nondominant arm relative to the dominant arm. Though our subject numbers were small for this secondary analysis, it suggests that further studies may be required to assess the influence of reflex lateralization throughout development. PMID:25850407
Arm dominance affects feedforward strategy more than feedback sensitivity during a postural task.
Walker, Elise H E; Perreault, Eric J
2015-07-01
Handedness is a feature of human motor control that is still not fully understood. Recent work has demonstrated that the dominant and nondominant arm each excel at different behaviors and has proposed that this behavioral asymmetry arises from lateralization in the cerebral cortex: the dominant side specializes in predictive trajectory control, while the nondominant side is specialized for impedance control. Long-latency stretch reflexes are an automatic mechanism for regulating posture and have been shown to contribute to limb impedance. To determine whether long-latency reflexes also contribute to asymmetric motor behavior in the upper limbs, we investigated the effect of arm dominance on stretch reflexes during a postural task that required varying degrees of impedance control. Our results demonstrated slightly but significantly larger reflex responses in the biarticular muscles of the nondominant arm, as would be consistent with increased impedance control. These differences were attributed solely to higher levels of voluntary background activity in the nondominant biarticular muscles, indicating that feedforward strategies for postural stability may differ between arms. Reflex sensitivity, which was defined as the magnitude of the reflex response for matched levels of background activity, was not significantly different between arms for a broad subject population ranging from 23 to 51 years of age. These results indicate that inter-arm differences in feedforward strategies are more influential during posture than differences in feedback sensitivity, in a broad subject population. Interestingly, restricting our analysis to subjects under 40 years of age revealed a small increase in long-latency reflex sensitivity in the nondominant arm relative to the dominant arm. Though our subject numbers were small for this secondary analysis, it suggests that further studies may be required to assess the influence of reflex lateralization throughout development.
Wei, Binnian; McGuffey, James E; Blount, Benjamin C; Wang, Lanqing
2016-01-01
Maternal exposure to marijuana during the lactation period, either active or passive, has prompted concerns about transmission of cannabinoids to breastfed infants and possible subsequent adverse health consequences. Assessing these health risks requires a sensitive analytical approach that is able to quantitatively measure trace-level cannabinoids in breast milk. Here, we describe a saponification-solid phase extraction approach combined with ultra-high-pressure liquid chromatography-tandem mass spectrometry for simultaneously quantifying Δ9-tetrahydrocannabinol (THC), cannabidiol (CBD), and cannabinol (CBN) in breast milk. We demonstrate for the first time that constraints on sensitivity can be overcome by utilizing alkaline saponification of the milk samples. After extensively optimizing the saponification procedure, the validated method exhibited limits of detection of 13, 4, and 66 pg/mL for THC, CBN, and CBD, respectively. Notably, the sensitivity achieved was significantly improved; for instance, the limit of detection for THC is at least 100-fold lower than that previously reported in the literature. This is essential for monitoring cannabinoids in breast milk resulting from passive or nonrecent active maternal exposure. Furthermore, we simultaneously acquired multiple reaction monitoring transitions for 12C- and 13C-analyte isotopes. This combined analysis largely facilitated data acquisition by reducing the repetitive analysis rate for samples exceeding the linear limits of 12C-analytes. In addition to high sensitivity and broad quantitation range, this method delivers excellent accuracy (relative error within ±10%), precision (relative standard deviation <10%), and efficient analysis. In future studies, we expect this method to play a critical role in assessing infant exposure to cannabinoids through breastfeeding.
AEP Ohio gridSMART Demonstration Project Real-Time Pricing Demonstration Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Widergren, Steven E.; Subbarao, Krishnappa; Fuller, Jason C.
2014-02-01
This report contributes initial findings from an analysis of significant aspects of the gridSMART® Real-Time Pricing (RTP) – Double Auction demonstration project. Over the course of four years, Pacific Northwest National Laboratory (PNNL) worked with American Electric Power (AEP) Ohio and Battelle Memorial Institute to design, build, and operate an innovative system to engage residential consumers and their end-use resources in a participatory approach to electric system operations, an incentive-based approach that has the promise of providing greater efficiency under normal operating conditions and greater flexibility to react under situations of system stress. The material contained in this report supplements the findings documented by AEP Ohio in the main body of the gridSMART report. It delves into three main areas: impacts on system operations, impacts on households, and observations about the sensitivity of load to price changes.
Advanced techniques for determining long term compatibility of materials with propellants
NASA Technical Reports Server (NTRS)
Green, R. L.
1972-01-01
The search for advanced measurement techniques for determining long term compatibility of materials with propellants was conducted in several parts. A comprehensive survey of the existing measurement and testing technology for determining material-propellant interactions was performed. Selections were made from those existing techniques that were determined to meet, or that could be made to meet, the requirements. Areas of refinement or change were recommended for the improvement of others. Investigations were also performed to determine the feasibility and advantages of developing and using new techniques to achieve significant improvements over existing ones. The most interesting demonstration was that of the new technique, the volatile metal chelate analysis. Rivaling the neutron activation analysis in terms of sensitivity and specificity, the volatile metal chelate technique was fully demonstrated.
Analysis of interior noise ground and flight test data for advanced turboprop aircraft applications
NASA Technical Reports Server (NTRS)
Simpson, M. A.; Tran, B. N.
1991-01-01
Interior noise ground tests conducted on a DC-9 aircraft test section are described. The objectives were to study ground test and analysis techniques for evaluating the effectiveness of interior noise control treatments for advanced turboprop aircraft, and to study the sensitivity of the ground test results to changes in various test conditions. Noise and vibration measurements were conducted under simulated advanced turboprop excitation, for two interior noise control treatment configurations. These ground measurement results were compared with results of earlier UHB (Ultra High Bypass) Demonstrator flight tests with comparable interior treatment configurations. The Demonstrator is an MD-80 test aircraft with the left JT8D engine replaced with a prototype UHB advanced turboprop engine.
Analysis of interior noise ground and flight test data for advanced turboprop aircraft applications
NASA Astrophysics Data System (ADS)
Simpson, M. A.; Tran, B. N.
1991-08-01
Interior noise ground tests conducted on a DC-9 aircraft test section are described. The objectives were to study ground test and analysis techniques for evaluating the effectiveness of interior noise control treatments for advanced turboprop aircraft, and to study the sensitivity of the ground test results to changes in various test conditions. Noise and vibration measurements were conducted under simulated advanced turboprop excitation, for two interior noise control treatment configurations. These ground measurement results were compared with results of earlier UHB (Ultra High Bypass) Demonstrator flight tests with comparable interior treatment configurations. The Demonstrator is an MD-80 test aircraft with the left JT8D engine replaced with a prototype UHB advanced turboprop engine.
Han, Liping; Zhao, Qingwei; Liang, Xianhong; Wang, Xiaoqing; Zhang, Zhen; Ma, Zhiguo; Zhao, Miaoqing; Wang, Aihua; Liu, Shuai
2017-07-11
Inhibition of Brd4 by JQ1 treatment has shown potential in the treatment of glioma; however, some cases show low sensitivity to JQ1. In addition, pre-clinical analysis revealed a limitation by demonstrating that transient treatment with JQ1 leads to aggressive tumor development. Thus, an improved understanding of the mechanisms underlying JQ1 is urgently required to design strategies that improve its efficiency and overcome this limitation. HEXIM1 has been confirmed to play an important role in regulating JQ1 sensitivity. In our study, ubenimex, a classical anti-cancer drug, showed potential in regulating the JQ1 sensitivity of glioma cells, as assessed with the WST-1 proliferation assay. Further studies demonstrated that ubenimex inhibited autophagy and downregulated the autophagic degradation of HEXIM1. The role of HEXIM1 in regulating JQ1 sensitivity was verified by HEXIM1 knockdown. Since ubenimex was verified as an Akt inhibitor, we further studied the role of Akt inhibition in regulating JQ1 sensitivity and migration of glioma cells. The data showed that ubenimex improved the efficiency of JQ1 treatment and suppressed migration in both in vitro models and in vivo xenograft models. An Akt agonist attenuated these effects, pointing to the role of Akt inhibition in JQ1 sensitivity and suppressed migration. Our findings suggest the potential of ubenimex adjuvant treatment to enhance JQ1 efficiency and attenuate part of its side effect (enhanced tumor aggressiveness) by regulating the autophagic degradation of HEXIM1 and Akt inhibition.
Banks, Rosamonde E; Craven, Rachel A; Harnden, Patricia A; Selby, Peter J
2003-04-01
Western blotting remains a central technique in confirming identities of proteins, their quantitation and analysis of various isoforms. The biotin-avidin/streptavidin system is often used as an amplification step to increase sensitivity but in some tissues such as kidney, "nonspecific" interactions may be a problem due to high levels of endogenous biotin-containing proteins. The EnVision system, developed for immunohistochemical applications, relies on binding of a polymeric conjugate consisting of up to 100 peroxidase molecules and 20 secondary antibody molecules linked directly to an activated dextran backbone, to the primary antibody. This study demonstrates that it is also a viable and sensitive alternative detection system in Western blotting applications.
Yasumitsu, Hidetaro; Ozeki, Yasuhiro; Kawsar, Sarkar M A; Toda, Tosifusa; Kanaly, Robert
2010-11-01
Coomassie Brilliant Blue (CBB) protein stains are inexpensive but detect proteins only at microgram levels. Because they contain acetic acid and methanol, they cause skin irritation, and their odor reduces work motivation. Recent mass spectrometric (MS) analyses demonstrated that nanogram-sensitive colloidal CBB staining resulted in in vitro methylation of proteins. We propose a rapid, inexpensive, sensitive, odorless, less harsh, and in vitro methylation-free CBB stain, CGP, which uses three components: citric acid, CBB G-250, and polyvinylpyrrolidone. CGP detects proteins at 12 ng within 45 min, and because it is alcohol-free, in vitro methylation should in principle be eliminated. Indeed, MS analysis of CGP-stained bands confirmed a lack of methylation. 2010 Elsevier Inc. All rights reserved.
Cavity-enhanced Faraday rotation measurement with auto-balanced photodetection.
Chang, Chia-Yu; Shy, Jow-Tsong
2015-10-01
Optical cavity enhancement for a tiny Faraday rotation is demonstrated with auto-balanced photodetection. This configuration is analyzed using the Jones matrix formalism. The resonant rotation signal is amplified, and thus, the angular sensitivity is improved. In the experiment, the air Faraday rotation is measured with an auto-balanced photoreceiver in single-pass and cavity geometries. The result shows that the measured Faraday rotation in the single-pass geometry is enhanced by a factor of 85 in the cavity geometry, and the sensitivity is improved to 7.54×10^(-10) rad Hz^(-1/2), which agrees well with the Jones matrix analysis. With this verification, we propose an AC magnetic sensor whose magnetic sensitivity is expected to achieve 10 pT Hz^(-1/2).
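The Jones-matrix reasoning behind balanced detection of a small rotation can be reproduced in a few lines. The sketch below is an illustrative model, not the authors' setup: it builds Jones matrices for an optical rotator and two analyzer channels at ±45°, and shows that the balanced difference signal is sin(2θ) and therefore grows linearly with the number of cavity passes for small angles.

```python
import numpy as np

def rotator(theta):
    """Jones matrix of an optical rotation by angle theta (radians)."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def polarizer(angle):
    """Jones matrix of a linear polarizer with transmission axis at 'angle'."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c * c, c * s],
                     [c * s, s * s]])

def balanced_signal(theta, passes=1):
    """Difference of the two analyzer-channel intensities for an x-polarized
    input field after 'passes' traversals of the rotating medium."""
    e_in = np.array([1.0, 0.0])                 # horizontally polarized input
    e_out = rotator(passes * theta) @ e_in      # rotation accumulates per pass
    i_plus = np.linalg.norm(polarizer(+np.pi / 4) @ e_out) ** 2
    i_minus = np.linalg.norm(polarizer(-np.pi / 4) @ e_out) ** 2
    return i_plus - i_minus                     # equals sin(2 * passes * theta)

theta = 1e-6                                    # tiny single-pass rotation (rad)
single = balanced_signal(theta, passes=1)
cavity = balanced_signal(theta, passes=85)      # enhancement factor from the abstract
print(single, cavity, cavity / single)          # ratio is ~85 for small angles
```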
Davenport, Tracey A; Burns, Jane M; Hickie, Ian B
2017-01-01
Background Web-based self-report surveying has increased in popularity, as it can rapidly yield large samples at a low cost. Despite this increase in popularity, in the area of youth mental health, there is a distinct lack of research comparing the results of Web-based self-report surveys with the more traditional and widely accepted computer-assisted telephone interviewing (CATI). Objective The Second Australian Young and Well National Survey 2014 sought to compare differences in respondent response patterns using matched items on CATI versus a Web-based self-report survey. The aim of this study was to examine whether responses varied as a result of item sensitivity, that is, the item’s susceptibility to exaggeration or underreporting, and to assess whether certain subgroups demonstrated this effect to a greater extent. Methods A subsample of young people aged 16 to 25 years (N=101), recruited through the Second Australian Young and Well National Survey 2014, completed the identical items on two occasions: via CATI and via Web-based self-report survey. Respondents also rated perceived item sensitivity. Results When comparing CATI with the Web-based self-report survey, a Wilcoxon signed-rank analysis showed that respondents answered 14 of the 42 matched items in a significantly different way. Significant variation in responses (CATI vs Web-based) was more frequent if the item was also rated by the respondents as highly sensitive in nature. Specifically, 63% (5/8) of the high sensitivity items, 43% (3/7) of the neutral sensitivity items, and 0% (0/4) of the low sensitivity items were answered in a significantly different manner by respondents when comparing their matched CATI and Web-based question responses. The items that were perceived as highly sensitive by respondents and demonstrated response variability included the following: sexting activities, body image concerns, experience of diagnosis, and suicidal ideation. For high sensitivity items, a regression analysis showed respondents who were male (beta=−.19, P=.048) or who were not in employment, education, or training (NEET; beta=−.32, P=.001) were significantly more likely to provide different responses on matched items when responding in the CATI as compared with the Web-based self-report survey. The Web-based self-report survey, however, demonstrated some evidence of avidity and attrition bias. Conclusions Compared with CATI, Web-based self-report surveys are highly cost-effective and yield higher rates of self-disclosure on sensitive items, particularly for respondents who identify as male and NEET. Drawbacks to Web-based surveying methodologies, however, include limited control over avidity bias and a greater incidence of attrition bias. These findings have important implications for further development of survey methods in the area of health and well-being, especially when considering research topics (in this case diagnosis, suicidal ideation, sexting, and body image) and groups that are being recruited (young people, males, and NEET). PMID:28951382
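The per-item comparison described above rests on a paired, non-parametric Wilcoxon signed-rank test of matched CATI versus Web responses. The short sketch below shows how such a test can be run with SciPy; the response vectors are invented, not the study's data.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical matched responses (same 10 respondents, one item, two modes).
cati = np.array([3, 4, 2, 5, 3, 4, 2, 3, 4, 5])
web = np.array([4, 3, 3, 5, 4, 5, 1, 4, 4, 5])

# Paired, non-parametric comparison of the two survey modes for this item.
stat, p_value = wilcoxon(cati, web)
print(f"W = {stat:.1f}, p = {p_value:.3f}")

# Repeating this over all matched items (42 in the study) and flagging items
# with p < 0.05 mirrors the style of analysis the abstract describes.
```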
Hurtado Rúa, Sandra M; Mazumdar, Madhu; Strawderman, Robert L
2015-12-30
Bayesian meta-analysis is an increasingly important component of clinical research, with multivariate meta-analysis a promising tool for studies with multiple endpoints. Model assumptions, including the choice of priors, are crucial aspects of multivariate Bayesian meta-analysis (MBMA) models. In a given model, two different prior distributions can lead to different inferences about a particular parameter. A simulation study was performed in which the impact of families of prior distributions for the covariance matrix of a multivariate normal random effects MBMA model was analyzed. Inferences about effect sizes were not particularly sensitive to prior choice, but the related covariance estimates were. A few families of prior distributions with small relative biases, tight mean squared errors, and close to nominal coverage for the effect size estimates were identified. Our results demonstrate the need for sensitivity analysis and suggest some guidelines for choosing prior distributions in this class of problems. The MBMA models proposed here are illustrated in a small meta-analysis example from the periodontal field and a medium meta-analysis from the study of stroke. Copyright © 2015 John Wiley & Sons, Ltd.
Marydasan, Betsy; Madhuri, Bollapalli; Cherukommu, Shirisha; Jose, Jedy; Viji, Mambattakkara; Karunakaran, Suneesh C; Chandrashekar, Tavarekere K; Rao, Kunchala Sridhar; Rao, Ch Mohan; Ramaiah, Danaboyina
2018-06-14
With the objective of developing efficient sensitizers for therapeutic applications, we synthesized a water-soluble 5,10,15,20-tetrakis(3,4-dihydroxyphenyl)chlorin (TDC) and investigated its in vitro and in vivo biological efficacy, comparing it with the commercially available sensitizers. TDC showed high water solubility (6-fold) when compared with that of Foscan and exhibited excellent triplet-excited-state (84%) and singlet-oxygen (80%) yields. In vitro photobiological investigations in human-ovarian-cancer cell lines SKOV-3 showed high photocytotoxicity, negligible dark toxicity, rapid cellular uptake, and specific localization of TDC in neoplastic cells as assessed by flow-cytometric cell-cycle and propidium iodide staining analysis. The photodynamic effects of TDC include confirmed reactive-oxygen-species-induced mitochondrial damage leading to necrosis in SKOV-3 cell lines. The in vivo photodynamic activity in nude-mouse models demonstrated abrogation of tumor growth without any detectable pathology in the skin, liver, spleen, or kidney, thereby demonstrating TDC application as an efficient and safe photosensitizer.
McCarthy, Samuel; Ai, Chenbing; Wheaton, Garrett; Tevatia, Rahul; Eckrich, Valerie; Kelly, Robert; Blum, Paul
2014-10-01
Thermoacidophilic archaea, such as Metallosphaera sedula, are lithoautotrophs that occupy metal-rich environments. In previous studies, an M. sedula mutant lacking the primary copper efflux transporter, CopA, became copper sensitive. In contrast, the basis for supranormal copper resistance remained unclear in the spontaneous M. sedula mutant, CuR1. Here, transcriptomic analysis of copper-shocked cultures indicated that CuR1 had a unique regulatory response to metal challenge corresponding to the upregulation of 55 genes. Genome resequencing identified 17 confirmed mutations unique to CuR1 that were likely to change gene function. Of these, 12 mapped to genes with annotated function associated with transcription, metabolism, or transport. These mutations included 7 nonsynonymous substitutions, 4 insertions, and 1 deletion. One of the insertion mutations mapped to pseudogene Msed_1517 and extended its reading frame an additional 209 amino acids. The extended mutant allele was identified as a homolog of Pho4, a family of phosphate symporters that includes the bacterial PitA proteins. Orthologs of this allele were apparent in related extremely thermoacidophilic species, suggesting M. sedula naturally lacked this gene. Phosphate transport studies combined with physiologic analysis demonstrated M. sedula PitA was a low-affinity, high-velocity secondary transporter implicated in copper resistance and arsenate sensitivity. Genetic analysis demonstrated that spontaneous arsenate-resistant mutants derived from CuR1 all underwent mutation in pitA and nonselectively became copper sensitive. Taken together, these results point to archaeal PitA as a key requirement for the increased metal resistance of strain CuR1 and its accelerated capacity for copper bioleaching. Copyright © 2014, American Society for Microbiology. All Rights Reserved.
Nacham, Omprakash; Ho, Tien D; Anderson, Jared L; Webster, Gregory K
2017-10-25
In this study, two ionic liquids (ILs), 1-butyl-3-methylimidazolium bis[(trifluoromethyl)sulfonyl]imide ([BMIM][NTf2]) and trihexyltetradecylphosphonium bis[(trifluoromethyl)sulfonyl]imide ([P66614][NTf2]) were examined as contemporary diluents for residual solvent analysis using static headspace gas chromatography (SHS-GC) coupled with flame ionization detection (FID). ILs are a class of non-molecular solvents featuring negligible vapor pressure and high thermal stabilities. Owing to these favorable properties, ILs have potential to enable superior sensitivity and reduced interference, compared to conventional organic diluents, at high headspace incubation temperatures. By employing the [BMIM][NTf2] IL as a diluent, a 25-fold improvement in limit of detection (LOD) was observed with respect to traditional HS-GC diluents, such as N-methylpyrrolidone (NMP). The established IL-based method demonstrated LODs ranging from 5.8 parts-per-million (ppm) to 20 ppm of residual solvents in drug substances. The optimization of headspace extraction conditions was performed prior to method validation. An incubation temperature of 140°C and a 15 min incubation time provided the best sensitivity for the analysis. Under optimized experimental conditions, the mass of residual solvents partitioned in the headspace was higher when using [BMIM][NTf2] than NMP as a diluent. The analytical performance was demonstrated by determining the repeatability, accuracy, and linearity of the method. Linear ranges of up to two orders of magnitude were obtained for class 3 solvents. Excellent analyte recoveries were obtained in the presence of three different active pharmaceutical ingredients. Owing to its robustness, high throughput, and superior sensitivity, the HS-GC IL-based method can be used as an alternative to existing residual solvent methods. Copyright © 2017 Elsevier B.V. All rights reserved.
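The abstract reports LODs without showing how they are estimated. A common convention for deriving an LOD from a calibration curve is the ICH-style relation LOD ≈ 3.3·σ/S (residual standard deviation over slope); the sketch below applies it to invented calibration points, not the paper's data.

```python
import numpy as np

# Hypothetical calibration data: residual-solvent concentration (ppm) vs. peak area.
conc = np.array([0.0, 5.0, 10.0, 25.0, 50.0, 100.0])
area = np.array([0.2, 6.1, 12.3, 29.8, 61.0, 119.5])

# Ordinary least-squares fit of the calibration line.
slope, intercept = np.polyfit(conc, area, 1)
residuals = area - (slope * conc + intercept)
sigma = residuals.std(ddof=2)        # standard error of the residuals

lod = 3.3 * sigma / slope            # ICH-style limit of detection
loq = 10.0 * sigma / slope           # corresponding limit of quantitation
print(f"slope = {slope:.3f}, LOD = {lod:.2f} ppm, LOQ = {loq:.2f} ppm")
```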
McCarthy, Samuel; Ai, Chenbing; Wheaton, Garrett; Tevatia, Rahul; Eckrich, Valerie; Kelly, Robert
2014-01-01
Thermoacidophilic archaea, such as Metallosphaera sedula, are lithoautotrophs that occupy metal-rich environments. In previous studies, an M. sedula mutant lacking the primary copper efflux transporter, CopA, became copper sensitive. In contrast, the basis for supranormal copper resistance remained unclear in the spontaneous M. sedula mutant, CuR1. Here, transcriptomic analysis of copper-shocked cultures indicated that CuR1 had a unique regulatory response to metal challenge corresponding to the upregulation of 55 genes. Genome resequencing identified 17 confirmed mutations unique to CuR1 that were likely to change gene function. Of these, 12 mapped to genes with annotated function associated with transcription, metabolism, or transport. These mutations included 7 nonsynonymous substitutions, 4 insertions, and 1 deletion. One of the insertion mutations mapped to pseudogene Msed_1517 and extended its reading frame an additional 209 amino acids. The extended mutant allele was identified as a homolog of Pho4, a family of phosphate symporters that includes the bacterial PitA proteins. Orthologs of this allele were apparent in related extremely thermoacidophilic species, suggesting M. sedula naturally lacked this gene. Phosphate transport studies combined with physiologic analysis demonstrated M. sedula PitA was a low-affinity, high-velocity secondary transporter implicated in copper resistance and arsenate sensitivity. Genetic analysis demonstrated that spontaneous arsenate-resistant mutants derived from CuR1 all underwent mutation in pitA and nonselectively became copper sensitive. Taken together, these results point to archaeal PitA as a key requirement for the increased metal resistance of strain CuR1 and its accelerated capacity for copper bioleaching. PMID:25092032
Chen, Lin; Zhu, Zhe; Gao, Wei; Jiang, Qixin; Yu, Jiangming; Fu, Chuangang
2017-09-05
Insulin-like growth factor 1 receptor (IGF-1R) has been shown to contribute to the development of many types of cancer, but little is known about its role in the radio-resistance of colorectal cancer (CRC). Here, we demonstrate that a low IGF-1R expression level was associated with better radiotherapy sensitivity in CRC. In addition, quantitative real-time PCR (qRT-PCR) showed elevated expression of epidermal growth factor receptor (EGFR) in CRC cell lines with high radio-sensitivity (HT29, RKO) compared with those with low sensitivity (SW480, LOVO). The irradiation-induced apoptosis rates of wild-type HT29 and SW480 cells and of cells treated with an EGFR agonist (EGF) or an IGF-1R inhibitor (NVP-ADW742) were quantified by flow cytometry. The apoptosis rate of EGF- and NVP-ADW742-treated HT29 cells was significantly higher than that of the wild-type cells, indicating that high EGFR and low IGF-1R expression levels in CRC were associated with high sensitivity to radiotherapy. We next conducted a systematic bioinformatics analysis of genome-wide expression profiles of CRC samples from the Cancer Genome Atlas (TCGA). Differential expression analysis between IGF-1R/EGFR-abnormal CRC samples, i.e., samples with higher IGF-1R and lower EGFR expression levels relative to their median expression values, and the remaining CRC samples identified potential genes contributing to radiotherapy sensitivity. Functional enrichment analysis of those differentially expressed genes (DEGs) in the Database for Annotation, Visualization and Integrated Discovery (DAVID) indicated the PPAR signaling pathway as an important pathway for the radio-resistance of CRC. Our study identified potential biomarkers for the rational selection of radiotherapy for CRC patients. Copyright © 2017 Elsevier B.V. All rights reserved.
Iverieli, M V; Abashidze, N O; Gogishvili, Kh B
2009-04-01
The aim of the research was to study the sensitivity of specific microorganisms from the periodontal pockets of patients with rapidly progressive periodontal disease to Taromentine. 95 patients aged 21 to 35 years (50 women (52.6±33.62) and 45 men (47.36±3.62)) with the rapidly progressive form of periodontal disease were observed. Porphyromonas gingivalis was identified in 83 out of 95 patients (87.36±2.06); Prevotella intermedia in 31 patients (32.6±2.750); Actinobacillus actinomycetemcomitans in 23 patients (24.2±2.050); Bacteroides forsythus in 19 patients (20.0±2.360); Treponema denticola in 16 patients (16.84±2.190); Candida in 11 patients (11.57±1.80). The sensitivity of all cultures to Taromentine was investigated: 134 (77.9±1.89) out of 183 identified markers demonstrated sensitivity to Taromentine. Sensitivity to Taromentine was demonstrated by 64 (37.2±1.06) of the 83 identified cultures of Porphyromonas gingivalis, 24 (13.95±1.85) of the 31 identified cultures of Prevotella intermedia, 18 (10.47±1.05) of the 23 identified cultures of Actinobacillus actinomycetemcomitans, 15 (8.7±1.86) of the 19 identified cultures of Bacteroides forsythus, and 13 (7.84±1.09) of the 16 identified cultures of Treponema denticola. In total, 38 (22.1±1.59) out of 172 identified periodontal markers demonstrated resistance to Taromentine. The results of the analysis showed that Taromentine could be recommended in the complex treatment of periodontal diseases.
Naunheim, Matthew R; Song, Phillip C; Franco, Ramon A; Alkire, Blake C; Shrime, Mark G
2017-03-01
Endoscopic management of bilateral vocal fold paralysis (BVFP) includes cordotomy and arytenoidectomy, and has become a well-accepted alternative to tracheostomy. However, the costs and quality-of-life benefits of endoscopic management have not been examined with formal economic analysis. This study undertakes a cost-effectiveness analysis of tracheostomy versus endoscopic management of BVFP. Cost-effectiveness analysis. A literature review identified a range of costs and outcomes associated with surgical options for BVFP. Additional costs were derived from Medicare reimbursement data; all were adjusted to 2014 dollars. Cost-effectiveness analysis evaluated both therapeutic strategies in short-term and long-term scenarios. Probabilistic sensitivity analysis was used to assess confidence levels regarding the economic evaluation. The incremental cost effectiveness ratio for endoscopic management versus tracheostomy is $31,600.06 per quality-adjusted life year (QALY), indicating that endoscopic management is the cost-effective short-term strategy at a willingness-to-pay (WTP) threshold of $50,000/QALY. The probability that endoscopic management is more cost-effective than tracheostomy at this WTP is 65.1%. Threshold analysis demonstrated that the model is sensitive to both utilities and cost in the short-term scenario. When costs of long-term care are included, tracheostomy is dominated by endoscopic management, indicating the cost-effectiveness of endoscopic management at any WTP. Endoscopic management of BVFP appears to be more cost-effective than tracheostomy. Though endoscopic cordotomy and arytenoidectomy require expertise and specialized equipment, this model demonstrates utility gains and long-term cost advantages to an endoscopic strategy. These findings are limited by the relative paucity of robust utility data and emphasize the need for further economic analysis in otolaryngology. Laryngoscope, 127:691-697, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.
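The decision rule behind this conclusion is the incremental cost-effectiveness ratio (ICER = incremental cost divided by incremental QALYs) compared against a willingness-to-pay threshold. The sketch below illustrates that rule with placeholder cost and utility numbers; only the $50,000/QALY threshold is taken from the abstract, and the inputs do not reproduce the study's model.

```python
# Illustrative ICER calculation for two strategies; the cost and QALY inputs
# below are placeholders, not values reported by the study.
wtp_threshold = 50_000                 # willingness to pay, $ per QALY

endoscopic = {"cost": 25_000.0, "qaly": 6.10}
tracheostomy = {"cost": 20_000.0, "qaly": 5.95}

delta_cost = endoscopic["cost"] - tracheostomy["cost"]
delta_qaly = endoscopic["qaly"] - tracheostomy["qaly"]

if delta_qaly > 0 and delta_cost <= 0:
    print("Endoscopic management dominates (cheaper and more effective).")
else:
    icer = delta_cost / delta_qaly     # $ per QALY gained
    print(f"ICER = ${icer:,.0f} per QALY")
    if delta_qaly > 0 and icer <= wtp_threshold:
        print("Cost-effective at the stated willingness-to-pay threshold.")
```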
Testing local Lorentz invariance with short-range gravity
Kostelecký, V. Alan; Mewes, Matthew
2017-01-10
The Newton limit of gravity is studied in the presence of Lorentz-violating gravitational operators of arbitrary mass dimension. The linearized modified Einstein equations are obtained and the perturbative solutions are constructed and characterized. We develop a formalism for data analysis in laboratory experiments testing gravity at short range and demonstrate that these tests provide unique sensitivity to deviations from local Lorentz invariance.
ERIC Educational Resources Information Center
Enkelaar, Lotte; Smulders, Ellen; Lantman-de Valk, Henny van Schrojenstein; Weerdesteyn, Vivian; Geurts, Alexander C. H.
2013-01-01
Mobility limitations are common in persons with Intellectual Disabilities (ID). Differences in balance and gait capacities between persons with ID and controls have mainly been demonstrated by instrumented assessments (e.g. posturography and gait analysis), which require sophisticated and expensive equipment such as force plates or a 3D motion…
Joint Services Electronics Program.
1985-12-31
year a comprehensive experimental study of the collision-enhanced Hanle-type resonances in Na vapor with various buffer gases has been completed...demonstrated theoretically that the collision-enhanced Hanle resonances are equivalent to the phenomenon of collision-induced transverse optical pumping. The...for the sensitivity of the mean sojourn times. We also developed a set of new equations based on perturbation analysis which calculates theoretically
Yin, Jian; Fenley, Andrew T.; Henriksen, Niel M.; Gilson, Michael K.
2015-01-01
Improving the capability of atomistic computer models to predict the thermodynamics of noncovalent binding is critical for successful structure-based drug design, and the accuracy of such calculations remains limited by non-optimal force field parameters. Ideally, one would incorporate protein-ligand affinity data into force field parametrization, but this would be inefficient and costly. We now demonstrate that sensitivity analysis can be used to efficiently tune Lennard-Jones parameters of aqueous host-guest systems for increasingly accurate calculations of binding enthalpy. These results highlight the promise of a comprehensive use of calorimetric host-guest binding data, along with existing validation data sets, to improve force field parameters for the simulation of noncovalent binding, with the ultimate goal of making protein-ligand modeling more accurate and hence speeding drug discovery. PMID:26181208
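The parameter tuning described above hinges on sensitivities of a computed observable to the Lennard-Jones parameters. The sketch below shows the analytic derivatives of the pair potential U(r) = 4·eps·[(sigma/r)^12 − (sigma/r)^6] with respect to eps and sigma for a single pair distance, which is the kind of per-parameter sensitivity such an analysis propagates; it is a toy example, not the authors' workflow, and the numerical values are arbitrary.

```python
import numpy as np

def lj_energy(r, epsilon, sigma):
    """Lennard-Jones pair energy U(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

def lj_sensitivities(r, epsilon, sigma):
    """Analytic dU/d(epsilon) and dU/d(sigma) for one pair distance r."""
    sr6 = (sigma / r) ** 6
    dU_deps = 4.0 * (sr6 ** 2 - sr6)
    dU_dsig = 4.0 * epsilon * (12.0 * sr6 ** 2 - 6.0 * sr6) / sigma
    return dU_deps, dU_dsig

r, eps, sig = 3.8, 0.15, 3.4          # toy values (Angstrom, kcal/mol, Angstrom)
dU_deps, dU_dsig = lj_sensitivities(r, eps, sig)

# Finite-difference check of the analytic derivatives.
h = 1e-6
fd_eps = (lj_energy(r, eps + h, sig) - lj_energy(r, eps - h, sig)) / (2 * h)
fd_sig = (lj_energy(r, eps, sig + h) - lj_energy(r, eps, sig - h)) / (2 * h)
print(dU_deps, fd_eps)
print(dU_dsig, fd_sig)
```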
Jin, Yulong; Huang, Yanyan; Liu, Guoquan; Zhao, Rui
2013-09-21
A novel quartz crystal microbalance (QCM) sensor for rapid, highly selective and sensitive detection of copper ions was developed. As a signal amplifier, gold nanoparticles (Au NPs) were self-assembled onto the surface of the sensor. A simple dip-and-dry method enabled the whole detection procedure to be accomplished within 20 min. High selectivity of the sensor towards copper ions is demonstrated by both individual and coexisting assays with interference ions. This gold nanoparticle mediated amplification allowed a detection limit down to 3.1 μM. Together with good repeatability and regeneration, the QCM sensor was also applied to the analysis of copper contamination in drinking water. This work provides a flexible method for fabricating QCM sensors for the analysis of important small molecules in environmental and biological samples.
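The abstract does not spell out the QCM transduction relation; a common assumption for a rigid, thin film on an AT-cut quartz sensor is the Sauerbrey relation Δf = −2·f0²·Δm / (A·sqrt(ρq·μq)). The sketch below evaluates it with illustrative sensor constants; the Au-nanoparticle/copper chemistry of the paper is not modeled.

```python
import math

# Standard material constants for AT-cut quartz.
RHO_Q = 2.648e3      # density of quartz, kg/m^3
MU_Q = 2.947e10      # shear modulus of quartz, Pa

def sauerbrey_freq_shift(delta_mass_kg, f0_hz, area_m2):
    """Frequency shift (Hz) predicted by the Sauerbrey equation for a rigid,
    thin, evenly distributed added mass on the sensor surface."""
    return -2.0 * f0_hz ** 2 * delta_mass_kg / (area_m2 * math.sqrt(RHO_Q * MU_Q))

# Example: 10 ng added on a 5 MHz crystal with 0.2 cm^2 active area
# gives a shift of roughly -2.8 Hz.
df = sauerbrey_freq_shift(delta_mass_kg=10e-12, f0_hz=5e6, area_m2=0.2e-4)
print(f"Predicted frequency shift: {df:.2f} Hz")
```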
Speed skills: measuring the visual speed analyzing properties of primate MT neurons.
Perrone, J A; Thiele, A
2001-05-01
Knowing the direction and speed of moving objects is often critical for survival. However, it is poorly understood how cortical neurons process the speed of image movement. Here we tested MT neurons using moving sine-wave gratings of different spatial and temporal frequencies, and mapped out the neurons' spatiotemporal frequency response profiles. The maps typically had oriented ridges of peak sensitivity as expected for speed-tuned neurons. The preferred speed estimate, derived from the orientation of the maps, corresponded well to the preferred speed when moving bars were presented. Thus, our data demonstrate that MT neurons are truly sensitive to the object speed. These findings indicate that MT is not only a key structure in the analysis of direction of motion and depth perception, but also in the analysis of object speed.
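For a speed-tuned neuron, the oriented ridge in the spatial-frequency/temporal-frequency plane corresponds to a fixed ratio speed = TF/SF. The sketch below builds a toy spatiotemporal frequency map for such a cell and recovers the preferred speed from the locations of the peak responses; it illustrates the analysis idea only and is not the authors' fitting procedure.

```python
import numpy as np

# Toy spatiotemporal frequency grid (cycles/deg and Hz, log-spaced).
sf = np.geomspace(0.25, 4.0, 25)        # spatial frequencies
tf = np.geomspace(0.5, 32.0, 25)        # temporal frequencies
SF, TF = np.meshgrid(sf, tf)

# Model a speed-tuned cell: response peaks where TF/SF equals the preferred
# speed (8 deg/s here), i.e., along an oriented ridge in log-log coordinates.
true_speed = 8.0
response = np.exp(-((np.log2(TF / SF) - np.log2(true_speed)) ** 2) / (2 * 0.5 ** 2))

# Estimate preferred speed: for each spatial frequency, find the temporal
# frequency that maximizes the response, then average TF/SF along the ridge.
best_tf = tf[np.argmax(response, axis=0)]
estimated_speed = np.exp(np.mean(np.log(best_tf / sf)))
print(f"estimated preferred speed ~ {estimated_speed:.1f} deg/s")  # close to 8
```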
Hilfiker, James N.; Stadermann, Michael; Sun, Jianing; ...
2016-08-27
It is a well-known challenge to determine refractive index (n) from ultra-thin films where the thickness is less than about 10 nm. In this paper, we discovered an interesting exception to this issue while characterizing spectroscopic ellipsometry (SE) data from isotropic, free-standing polymer films. Ellipsometry analysis shows that both thickness and refractive index can be independently determined for free-standing films as thin as 5 nm. Simulations further confirm an orthogonal separation between thickness and index effects on the experimental SE data. Effects of angle of incidence and wavelength on the data and sensitivity are discussed. Finally, while others have demonstrated methods to determine refractive index from ultra-thin films, our analysis provides the first results to demonstrate high sensitivity to the refractive index from ultra-thin layers.
Sequence investigation of 34 forensic autosomal STRs with massively parallel sequencing.
Zhang, Suhua; Niu, Yong; Bian, Yingnan; Dong, Rixia; Liu, Xiling; Bao, Yun; Jin, Chao; Zheng, Hancheng; Li, Chengtao
2018-05-01
STRs vary not only in the length of the repeat units and the number of repeats but also in the region within which they conform to an incremental repeat pattern. Massively parallel sequencing (MPS) offers new possibilities in the analysis of STRs since it can simultaneously sequence multiple targets in a single reaction and capture potential internal sequence variations. Here, we sequenced 34 STRs applied in the forensic community of China with a custom-designed panel. MPS performance was evaluated through sequencing read analysis, a concordance study, and sensitivity testing. High-coverage sequencing data were obtained to determine the constituent ratios and heterozygote balance. No actual inconsistent genotypes were observed between capillary electrophoresis (CE) and MPS, demonstrating the reliability of the panel and the MPS technology. With the sequencing data from the 200 investigated individuals, 346 and 418 alleles were obtained via CE and MPS, respectively, at the 34 STRs, indicating that MPS technology provides higher discrimination than CE detection. The whole study demonstrated that STR genotyping with the custom panel and MPS technology has the potential not only to reveal length and sequence variations but also to satisfy the demands of high throughput and high multiplexing with acceptable sensitivity.
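Two of the quality metrics mentioned, per-allele read share and heterozygote balance, are simple functions of per-allele read depth. The sketch below computes them from a hypothetical read-count table for one STR locus; it is illustrative only and does not reproduce the custom panel's pipeline.

```python
# Hypothetical MPS read counts for one STR locus of a heterozygous individual.
# Keys are repeat-number alleles; values are reads assigned to each allele.
reads = {"13": 812, "15": 744, "stutter_12": 61, "stutter_14": 55}

allele_reads = {a: n for a, n in reads.items() if not a.startswith("stutter")}
total = sum(reads.values())

# Share of locus coverage carried by each called allele.
for allele, n in allele_reads.items():
    print(f"allele {allele}: {n} reads ({n / total:.1%} of locus coverage)")

# Heterozygote balance: lower-coverage allele over higher-coverage allele.
low, high = sorted(allele_reads.values())
print(f"heterozygote balance = {low / high:.2f}")
```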
Neurons as sensors: individual and cascaded chemical sensing.
Prasad, Shalini; Zhang, Xuan; Yang, Mo; Ozkan, Cengiz S; Ozkan, Mihrimah
2004-07-15
A single neuron sensor has been developed based on the interaction of gradient electric fields and the cell membrane. Single neurons are rapidly positioned over individual microelectrodes using positive dielectrophoretic traps. This enables the continuous extracellular electrophysiological measurements from individual neurons. The sensor developed using this technique provides the first experimental method for determining single cell sensitivity; the speed of response and the associated physiological changes to a broad spectrum of chemical agents. Binding of specific chemical agents to a specific combination of receptors induces changes to the extracellular membrane potential of a single neuron, which can be translated into unique "signature patterns" (SP), which function as identification tags. Signature patterns are derived using Fast Fourier Transformation (FFT) analysis and Wavelet Transformation (WT) analysis of the modified extracellular action potential. The validity and the sensitivity of the system are demonstrated for a variety of chemical agents ranging from behavior altering chemicals (ethanol), environmentally hazardous agents (hydrogen peroxide, EDTA) to physiologically harmful agents (pyrethroids) at pico- and femto-molar concentrations. The ability of a single neuron to selectively identify specific chemical agents when injected in a serial manner is demonstrated in "cascaded sensing".
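The signature patterns described are derived from frequency-domain transforms of the extracellular potential. The sketch below shows the FFT half of that idea, turning a recorded trace into a normalized power-spectrum "signature" vector that can be compared across exposures; the waveforms are synthetic, not neuron recordings, and the wavelet half is not shown.

```python
import numpy as np

def spectral_signature(trace, n_bands=16):
    """Normalized power in n_bands frequency bands of an extracellular trace."""
    spectrum = np.abs(np.fft.rfft(trace - trace.mean())) ** 2
    bands = np.array_split(spectrum, n_bands)
    power = np.array([band.sum() for band in bands])
    return power / power.sum()          # normalize so signatures are comparable

# Synthetic "baseline" vs. "after exposure" traces: the exposure shifts energy
# toward a lower-frequency component, which shows up in the signature.
fs = 10_000.0
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(1)
baseline = np.sin(2 * np.pi * 300 * t) + 0.3 * rng.standard_normal(t.size)
exposed = (0.4 * np.sin(2 * np.pi * 300 * t) + np.sin(2 * np.pi * 60 * t)
           + 0.3 * rng.standard_normal(t.size))

sig_a = spectral_signature(baseline)
sig_b = spectral_signature(exposed)
print("L1 distance between signatures:", np.abs(sig_a - sig_b).sum())
```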
Reyes, Maria M; Schneekloth, Terry D; Hitschfeld, Mario J; Geske, Jennifer R; Atkinson, David L; Karpyak, Victor M
2016-05-02
The objective was to assess the clinical utility of the Adult ADHD Self-Report Scale (ASRS-v1.1) in identifying ADHD in alcoholics using the Psychiatric Research Interview for Substance and Mental Disorders (PRISM) as the diagnostic "gold standard." We performed a secondary analysis of data from 379 treatment-seeking alcoholics who completed the ASRS-v1.1 and the ADHD module of the PRISM. Data analysis included descriptive statistics. The prevalence of ADHD was 7.7% (95% CI = [5.4, 10.8]). The positive predictive value (PPV) of the ASRS-v1.1 was 18.1% (95% CI = [12.4, 25.7]) and the negative predictive value (NPV) was 97.6% (95% CI = [94.9, 98.9]). The ASRS-v1.1 demonstrated a sensitivity of 79.3% (95% CI = [61.6, 90.2]) and a specificity of 70.3% (95% CI = [65.3, 74.8]). The ASRS-v1.1 demonstrated acceptable sensitivity and specificity in a sample of treatment-seeking alcoholics when compared with the PRISM as the reference standard for ADHD diagnosis. © The Author(s) 2016.
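The PPV and NPV quoted above follow from sensitivity, specificity, and prevalence via Bayes' rule. The short check below uses only the abstract's own point estimates and recovers values close to the reported 18.1% and 97.6%.

```python
prevalence = 0.077       # ADHD prevalence in the sample
sensitivity = 0.793      # ASRS-v1.1 sensitivity vs. the PRISM reference
specificity = 0.703      # ASRS-v1.1 specificity vs. the PRISM reference

ppv = (sensitivity * prevalence) / (
    sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
npv = (specificity * (1 - prevalence)) / (
    specificity * (1 - prevalence) + (1 - sensitivity) * prevalence)

print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")   # approximately 18% and 98%
```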
NASA Astrophysics Data System (ADS)
Hopcroft, Peter O.; Valdes, Paul J.
2015-07-01
Previous work demonstrated a significant correlation between tropical surface air temperature and equilibrium climate sensitivity (ECS) in PMIP (Paleoclimate Modelling Intercomparison Project) phase 2 model simulations of the last glacial maximum (LGM). This implies that reconstructed LGM cooling in this region could provide information about the climate system ECS value. We analyze results from new simulations of the LGM performed as part of Coupled Model Intercomparison Project (CMIP5) and PMIP phase 3. These results show no consistent relationship between the LGM tropical cooling and ECS. A radiative forcing and feedback analysis shows that a number of factors are responsible for this decoupling, some of which are related to vegetation and aerosol feedbacks. While several of the processes identified are LGM specific and do not impact on elevated CO2 simulations, this analysis demonstrates one area where the newer CMIP5 models behave in a qualitatively different manner compared with the older ensemble. The results imply that so-called Earth System components such as vegetation and aerosols can have a significant impact on the climate response in LGM simulations, and this should be taken into account in future analyses.
The Case for Intelligent Propulsion Control for Fast Engine Response
NASA Technical Reports Server (NTRS)
Litt, Jonathan S.; Frederick, Dean K.; Guo, Ten-Huei
2009-01-01
Damaged aircraft have occasionally had to rely solely on thrust to maneuver as a consequence of losing hydraulic power needed to operate flight control surfaces. The lack of successful landings in these cases inspired research into more effective methods of utilizing propulsion-only control. That research demonstrated that one of the major contributors to the difficulty in landing is the slow response of the engines as compared to using traditional flight control. To address this, research is being conducted into ways of making the engine more responsive under emergency conditions. This can be achieved by relaxing controller limits, adjusting schedules, and/or redesigning the regulators to increase bandwidth. Any of these methods can enable faster response at the potential expense of engine life and increased likelihood of stall. However, an example sensitivity analysis revealed a complex interaction of the limits and the difficulty in predicting the way to achieve the fastest response. The sensitivity analysis was performed on a realistic engine model, and demonstrated that significantly faster engine response can be achieved compared to standard Bill of Material control. However, the example indicates the need for an intelligent approach to controller limit adjustment in order for the potential to be fulfilled.
Park, Albert H; Mann, David; Error, Marc E; Miller, Matthew; Firpo, Matthew A; Wang, Yong; Alder, Stephen C; Schleiss, Mark R
2013-01-01
To assess the validity of the guinea pig as a model for congenital cytomegalovirus (CMV) infection by comparing the effectiveness of detecting the virus by real-time polymerase chain reaction (PCR) in blood, urine, and saliva. Case-control study. Academic research. Eleven pregnant Hartley guinea pigs. Blood, urine, and saliva samples were collected from guinea pig pups delivered from pregnant dams inoculated with guinea pig CMV. These samples were then evaluated for the presence of guinea pig CMV by real-time PCR assuming 100% transmission. Thirty-one pups delivered from 9 inoculated pregnant dams and 8 uninfected control pups underwent testing for guinea pig CMV and for auditory brainstem response hearing loss. Repeated-measures analysis of variance demonstrated no statistically significant reduction in weight for the infected pups compared with the noninfected control pups. Six infected pups demonstrated auditory brainstem response hearing loss. The sensitivity and specificity of the real-time PCR assay on saliva samples were 74.2% and 100.0%, respectively. The sensitivity of the real-time PCR on blood and urine samples was significantly lower than that on saliva samples. Real-time PCR assays of blood, urine, and saliva revealed that saliva samples show high sensitivity and specificity for detecting congenital CMV infection in guinea pigs. This finding is consistent with recent screening studies in human newborns. The guinea pig may be a good animal model in which to compare different diagnostic assays for congenital CMV infection.
Role of Reward Sensitivity and Processing in Major Depressive and Bipolar Spectrum Disorders
Alloy, Lauren B.; Olino, Thomas; Freed, Rachel D.; Nusslock, Robin
2016-01-01
Since Costello’s (1972) seminal Behavior Therapy article on loss of reinforcers or reinforcer effectiveness in depression, the role of reward sensitivity and processing in both depression and bipolar disorder has become a central area of investigation. In this article, we review the evidence for a model of reward sensitivity in mood disorders, with unipolar depression characterized by reward hyposensitivity and bipolar disorders by reward hypersensitivity. We address whether aberrant reward sensitivity and processing are correlates of, mood-independent traits of, vulnerabilities for, and/or predictors of the course of depression and bipolar spectrum disorders, covering evidence from self-report, behavioral, neurophysiological, and neural levels of analysis. We conclude that substantial evidence documents that blunted reward sensitivity and processing are involved in unipolar depression and heightened reward sensitivity and processing are characteristic of hypomania/mania. We further conclude that aberrant reward sensitivity has a trait component, but more research is needed to clearly demonstrate that reward hyposensitivity and hypersensitivity are vulnerabilities for depression and bipolar disorder, respectively. Moreover, additional research is needed to determine whether bipolar depression is similar to unipolar depression and characterized by reward hyposensitivity, or whether like bipolar hypomania/mania, it involves reward hypersensitivity. PMID:27816074
Influence of Primary Gage Sensitivities on the Convergence of Balance Load Iterations
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert Manfred
2012-01-01
The connection between the convergence of wind tunnel balance load iterations and the existence of the primary gage sensitivities of a balance is discussed. First, basic elements of two load iteration equations that the iterative method uses in combination with results of a calibration data analysis for the prediction of balance loads are reviewed. Then, the connection between the primary gage sensitivities, the load format, the gage output format, and the convergence characteristics of the load iteration equation choices is investigated. A new criterion is also introduced that may be used to objectively determine if the primary gage sensitivity of a balance gage exists. Then, it is shown that both load iteration equations will converge as long as a suitable regression model is used for the analysis of the balance calibration data, the combined influence of nonlinear terms of the regression model is very small, and the primary gage sensitivities of all balance gages exist. The last requirement is fulfilled, e.g., if force balance calibration data is analyzed in force balance format. Finally, it is demonstrated that only one of the two load iteration equation choices, i.e., the iteration equation used by the primary load iteration method, converges if one or more primary gage sensitivities are missing. This situation may occur, e.g., if force balance calibration data is analyzed in direct read format using the original gage outputs. Data from the calibration of a six-component force balance is used to illustrate the connection between the convergence of the load iteration equation choices and the existence of the primary gage sensitivities.
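The convergence argument can be made concrete with a generic fixed-point sketch (not the specific iteration equations analyzed in the report): gage outputs are modeled as a primary-sensitivity matrix times the loads plus a small nonlinear part, and loads are recovered by repeatedly inverting the linear part. The iteration can only be formed when that matrix is invertible, i.e., when every primary gage sensitivity exists; the two-gage model below is a toy assumption, not a real calibration.

```python
import numpy as np

# Toy 2-gage / 2-load "calibration": outputs = S @ loads + small nonlinear term.
S = np.array([[2.0, 0.1],
              [0.2, 1.5]])                      # primary gage sensitivities

def gage_outputs(loads):
    nonlinear = 0.01 * np.array([loads[0] * loads[1], loads[1] ** 2])
    return S @ loads + nonlinear

def iterate_loads(measured, n_iter=20):
    """Fixed-point load iteration: invert the linear part, feed back the
    nonlinear part evaluated at the previous load estimate."""
    S_inv = np.linalg.inv(S)                    # requires all primary sensitivities
    loads = S_inv @ measured                    # first guess: linear solution
    for _ in range(n_iter):
        nonlinear = gage_outputs(loads) - S @ loads
        loads = S_inv @ (measured - nonlinear)
    return loads

true_loads = np.array([3.0, -2.0])
recovered = iterate_loads(gage_outputs(true_loads))
print(true_loads, recovered)                    # converges because S is invertible
# If a primary sensitivity were missing (making S singular), np.linalg.inv(S)
# would fail and this form of the iteration could not be constructed.
```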
Jha, Ashwini Kumar; Tang, Wen Hao; Bai, Zhi Bin; Xiao, Jia Quan
2014-01-01
To perform a meta-analysis to review the sensitivity and specificity of computed tomography (CT) and different known CT signs for the diagnosis of strangulation in patients with acute small bowel obstruction (SBO). A comprehensive PubMed search was performed for all reports that evaluated the use of CT and discussed different CT criteria for the diagnosis of acute SBO. Articles published in English from January 1978 to June 2008 were included. Review articles, case reports, pictorial essays and articles without original data were excluded. The bivariate random effects model was used to obtain pooled sensitivity and pooled specificity. A summary receiver operating characteristic curve was calculated using Meta-DiSc. The software OpenBUGS 3.0.3 was used to summarize the data. A total of 12 studies fulfilled the inclusion criteria. The pooled sensitivity and specificity of CT in the diagnosis of strangulation were 0.720 (95% CI 0.674 to 0.763) and 0.866 (95% CI 0.837 to 0.892), respectively. Among the different CT signs, mesenteric edema had the highest pooled sensitivity of 0.741 and lack of bowel wall enhancement had the highest pooled specificity of 0.991. This review demonstrates that CT is highly sensitive as well as specific in the preoperative diagnosis of strangulated SBO, which is in accordance with the published studies. Our analysis also shows that "presence of mesenteric fluid" is the most sensitive, and "lack of bowel wall enhancement" the most specific, CT sign of strangulation, and also justifies the need for large-scale prospective studies to validate these results and to determine a clinical protocol.
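For readers unfamiliar with meta-analytic pooling, the sketch below shows a simplified fixed-effect pooling of study-level sensitivities on the logit scale with inverse-variance weights. It is only a stand-in for the bivariate random-effects model actually used in the review, and the per-study counts are hypothetical.

```python
import math

# Hypothetical per-study counts: (true positives, false negatives).
studies = [(18, 6), (25, 11), (40, 14), (12, 5)]

# Logit-transform each study's sensitivity and weight by inverse variance.
logits, weights = [], []
for tp, fn in studies:
    sens = tp / (tp + fn)
    var = 1.0 / tp + 1.0 / fn            # delta-method variance of the logit
    logits.append(math.log(sens / (1 - sens)))
    weights.append(1.0 / var)

pooled_logit = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
pooled_sens = 1.0 / (1.0 + math.exp(-pooled_logit))
print(f"pooled sensitivity (fixed-effect approximation) = {pooled_sens:.3f}")
```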
DOE Office of Scientific and Technical Information (OSTI.GOV)
Metz, Peter; Koch, Robert; Cladek, Bernadette
Ion-exchanged Aurivillius materials form perovskite nanosheet booklets wherein well-defined bi-periodic sheets, with ~11.5 Å thickness, exhibit extensive stacking disorder. The perovskite layer contents were defined initially using combined synchrotron X-ray and neutron Rietveld refinement of the parent Aurivillius structure. The structure of the subsequently ion-exchanged material, which is disordered in its stacking sequence, is analyzed using both pair distribution function (PDF) analysis and recursive method simulations of the scattered intensity. Combined X-ray and neutron PDF refinement of supercell stacking models demonstrates sensitivity of the PDF to both perpendicular and transverse stacking vector components. Further, hierarchical ensembles of stacking models weighted by a standard normal distribution are demonstrated to improve PDF fit over 1–25 Å. Recursive method simulations of the X-ray scattering profile demonstrate agreement between the real space stacking analysis and more conventional reciprocal space methods. The local structure of the perovskite sheet is demonstrated to relax only slightly from the Aurivillius structure after ion exchange.
Study of polarization properties of fiber-optics probes with use of a binary phase plate.
Alferov, S V; Khonina, S N; Karpeev, S V
2014-04-01
We conduct a theoretical and experimental study of the distribution of the electric field components in the sharp focal domain when rotating a zone plate with a π-phase jump placed in the focused beam. By comparing the theoretical and experimental results for several kinds of near-field probes, an analysis of the polarization sensitivity of different types of metal-coated aperture probes is conducted. It is demonstrated that, with increasing diameter of the non-metal-coated tip part, a substantial redistribution of sensitivity occurs in favor of the transverse electric field components, together with an increase in the probe's energy throughput.
Irvine, S E; Dombi, P; Farkas, Gy; Elezzabi, A Y
2006-10-06
Control over basic processes through the electric field of a light wave can lead to new knowledge of fundamental light-matter interaction phenomena. We demonstrate, for the first time, that surface-plasmon (SP) electron acceleration can be coherently controlled through the carrier-envelope phase (CEP) of an excitation optical pulse. Analysis indicates that the physical origin of the CEP sensitivity arises from the electron's ponderomotive interaction with the oscillating electromagnetic field of the SP wave. The ponderomotive electron acceleration mechanism provides sensitive (nJ energies), high-contrast, single-shot CEP measurement capability of few-cycle laser pulses.
Analyzing cost-effectiveness of ulnar and median nerve transfers to regain forearm flexion.
Wali, Arvin R; Park, Charlie C; Brown, Justin M; Mandeville, Ross
2017-03-01
OBJECTIVE Peripheral nerve transfers to regain elbow flexion via the ulnar nerve (Oberlin nerve transfer) and median nerves are surgical options that benefit patients. Prior studies have assessed the comparative effectiveness of ulnar and median nerve transfers for upper trunk brachial plexus injury, yet no study has examined the cost-effectiveness of this surgery to improve quality-adjusted life years (QALYs). The authors present a cost-effectiveness model of the Oberlin nerve transfer and median nerve transfer to restore elbow flexion in the adult population with upper brachial plexus injury. METHODS Using a Markov model, the authors simulated ulnar and median nerve transfers and conservative measures in terms of neurological recovery and improvements in quality of life (QOL) for patients with upper brachial plexus injury. Transition probabilities were collected from previous studies that assessed the surgical efficacy of ulnar and median nerve transfers, complication rates associated with comparable surgical interventions, and the natural history of conservative measures. Incremental cost-effectiveness ratios (ICERs), defined as cost in dollars per QALY, were calculated. Incremental cost-effectiveness ratios less than $50,000/QALY were considered cost-effective. One-way and 2-way sensitivity analyses were used to assess parameter uncertainty. Probabilistic sampling was used to assess ranges of outcomes across 100,000 trials. RESULTS The authors' base-case model demonstrated that ulnar and median nerve transfers, with an estimated cost of $5066.19, improved effectiveness by 0.79 QALY over a lifetime compared with conservative management. Without modeling the indirect cost due to loss of income over lifetime associated with elbow function loss, surgical treatment had an ICER of $6453.41/QALY gained. Factoring in the loss of income as indirect cost, surgical treatment had an ICER of -$96,755.42/QALY gained, demonstrating an overall lifetime cost savings due to increased probability of returning to work. One-way sensitivity analysis demonstrated that the model was most sensitive to assumptions about cost of surgery, probability of good surgical outcome, and spontaneous recovery of neurological function with conservative treatment. Two-way sensitivity analysis demonstrated that surgical intervention was cost-effective with an ICER of $18,828.06/QALY even with the authors' most conservative parameters with surgical costs at $50,000 and probability of success of 50% when considering the potential income recovered through returning to work. Probabilistic sampling demonstrated that surgical intervention was cost-effective in 76% of cases at a willingness-to-pay threshold of $50,000/QALY gained. CONCLUSIONS The authors' model demonstrates that ulnar and median nerve transfers for upper brachial plexus injury improve QALYs in a cost-effective manner.
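The ICER arithmetic reported above reduces to a cost difference divided by a QALY difference. The sketch below reproduces the base-case figure approximately from the rounded numbers in the abstract; because the inputs are rounded, the result differs slightly from the published $6,453.41/QALY.

```python
# Hypothetical inputs echoing the rounded magnitudes reported in the abstract.
cost_surgery = 5066.19        # lifetime cost of the nerve transfer strategy (USD)
cost_conservative = 0.0       # conservative management assumed cost-free here
qaly_surgery = 0.79           # incremental QALYs gained over a lifetime
qaly_conservative = 0.0

delta_cost = cost_surgery - cost_conservative
delta_qaly = qaly_surgery - qaly_conservative
icer = delta_cost / delta_qaly
print(f"ICER = ${icer:,.2f} per QALY gained")   # about $6,413/QALY with these rounded inputs

willingness_to_pay = 50_000.0
print("cost-effective" if icer < willingness_to_pay else "not cost-effective")
```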
Lo, Yuan Hung; Peachey, Tom; Abramson, David; McCulloch, Andrew
2013-01-01
Little is known about how small variations in ionic currents and Ca2+ and Na+ diffusion coefficients impact action potential and Ca2+ dynamics in rabbit ventricular myocytes. We applied sensitivity analysis to quantify the sensitivity of the Shannon et al. model (Biophys. J., 2004) to 5%–10% changes in current conductances, channel distribution, and ion diffusion in rabbit ventricular cells. We found that action potential duration and Ca2+ peaks are highly sensitive to a 10% increase in L-type Ca2+ current; moderately influenced by 10% increases in the Na+-Ca2+ exchanger, Na+-K+ pump, rapid delayed and slow transient outward K+ currents, and Cl− background current; and insensitive to 10% increases in all other ionic currents and sarcoplasmic reticulum Ca2+ fluxes. Cell electrical activity is strongly affected by a 5% shift of L-type Ca2+ channels and Na+-Ca2+ exchanger between the junctional and submembrane spaces, while Ca2+-activated Cl−-channel redistribution has a modest effect. Small changes in submembrane and cytosolic diffusion coefficients for Ca2+, but not in Na+ transfer, may notably alter myocyte contraction. Our studies highlight the need for more precise measurements and further extending and testing of the Shannon et al. model. Our results demonstrate the usefulness of sensitivity analysis to identify specific knowledge gaps and controversies related to ventricular cell electrophysiology and Ca2+ signaling. PMID:24222910
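A minimal sketch of the one-at-a-time relative-sensitivity idea used in such perturbation studies is given below. The surrogate APD function and baseline conductance are entirely hypothetical placeholders, not the Shannon et al. rabbit myocyte model.

```python
# One-at-a-time relative sensitivity, in the spirit of the parameter
# perturbations described above (hypothetical numbers, not the Shannon model).
def action_potential_duration(g_CaL):
    """Placeholder surrogate for a full myocyte model: APD (ms) as a function
    of the L-type Ca2+ conductance g_CaL. Purely illustrative."""
    return 180.0 + 450.0 * g_CaL

g0 = 0.1                                       # baseline conductance (arbitrary units)
apd0 = action_potential_duration(g0)
apd1 = action_potential_duration(1.10 * g0)    # +10% perturbation of the parameter

relative_sensitivity = ((apd1 - apd0) / apd0) / 0.10
print(f"relative sensitivity of APD to g_CaL ≈ {relative_sensitivity:.2f}")
```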
Prevalence of and risk factors for latex sensitization in patients with spina bifida.
Bernardini, R; Novembre, E; Lombardi, E; Mezzetti, P; Cianferoni, A; Danti, A D; Mercurella, A; Vierucci, A
1998-11-01
We determined the prevalence of and risk factors for latex sensitization in patients with spina bifida. A total of 59 consecutive subjects 2 to 40 years old with spina bifida answered a questionnaire, and underwent a latex skin prick test and determination of serum IgE specific for latex by RAST CAP radioimmunoassay. We also noted total serum IgE and the results of skin prick tests to common air and food allergens. In addition, skin prick plus prick tests were also done with fresh foods, including kiwi, pear, orange, almond, pineapple, apple, tomato and banana. Latex sensitization was present in 15 patients (25%) according to the presence of IgE specific to latex, as detected by a skin prick test in 9 and/or RAST CAP in 13. Five latex sensitized patients (33.3%) had clinical manifestations, such as urticaria, conjunctivitis, angioedema, rhinitis and bronchial asthma, while using a latex glove or inflating a latex balloon. Atopy was present in 21 patients (35.6%). In 14 patients (23%) 1 or more skin tests were positive for fresh foods using a prick plus prick technique. Tomato, kiwi, and pear were the most common skin test positive foods. Univariate analysis revealed that a history of 5 or more operations, atopy and positive prick plus prick test results for pear and kiwi were significantly associated with latex sensitization. Multivariate analysis demonstrated that only atopy and a history of 5 or more operations were significantly and independently associated with latex sensitization. A fourth of the patients with spina bifida were sensitized to latex. Atopy and an elevated number of operations were significant and independent predictors of latex sensitization in these cases.
Youland, Ryan S; Pafundi, Deanna H; Brinkmann, Debra H; Lowe, Val J; Morris, Jonathan M; Kemp, Bradley J; Hunt, Christopher H; Giannini, Caterina; Parney, Ian F; Laack, Nadia N
2018-05-01
Treatment-related changes can be difficult to differentiate from progressive glioma using MRI with contrast (CE). The purpose of this study is to compare the sensitivity and specificity of 18F-DOPA-PET and MRI in patients with recurrent glioma. Thirteen patients with MRI findings suspicious for recurrent glioma were prospectively enrolled and underwent 18F-DOPA-PET and MRI for neurosurgical planning. Stereotactic biopsies were obtained from regions of concordant and discordant PET and MRI CE, all within regions of T2/FLAIR signal hyperintensity. The sensitivity and specificity of 18F-DOPA-PET and CE were calculated based on histopathologic analysis. Receiver operating characteristic curve analysis revealed optimal tumor to normal (T/N) and SUVmax thresholds. In the 37 specimens obtained, 51% exhibited MRI contrast enhancement (M+) and 78% demonstrated 18F-DOPA-PET avidity (P+). Imaging characteristics included M-P- in 16%, M-P+ in 32%, M+P+ in 46% and M+P- in 5%. Histopathologic review of biopsies revealed grade II components in 16%, grade III in 43%, grade IV in 30% and no tumor in 11%. MRI CE sensitivity for recurrent tumor was 52% and specificity was 50%. PET sensitivity for tumor was 82% and specificity was 50%. A T/N threshold > 2.0 altered sensitivity to 76% and specificity to 100% and SUVmax > 1.36 improved sensitivity and specificity to 94 and 75%, respectively. 18F-DOPA-PET can provide increased sensitivity and specificity compared with MRI CE for visualizing the spatial distribution of recurrent gliomas. Future studies will incorporate 18F-DOPA-PET into re-irradiation target volume delineation for RT planning.
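The threshold optimization described above can be illustrated with a short receiver operating characteristic sweep that picks the cutoff maximizing the Youden index (sensitivity + specificity - 1). The T/N ratios and histopathology labels below are invented for illustration and are not the study's biopsy data.

```python
import numpy as np

# Hypothetical per-biopsy PET tumor-to-normal (T/N) ratios and histopathology
# labels (1 = tumor, 0 = no tumor); not the study's actual specimens.
t_n_ratio = np.array([0.8, 1.1, 1.4, 1.7, 1.9, 2.1, 2.3, 2.6, 3.0, 3.4])
is_tumor  = np.array([0,   0,   1,   0,   1,   1,   1,   1,   1,   1  ])

best = None
for threshold in np.unique(t_n_ratio):
    predicted = t_n_ratio >= threshold
    sens = (predicted & (is_tumor == 1)).sum() / (is_tumor == 1).sum()
    spec = (~predicted & (is_tumor == 0)).sum() / (is_tumor == 0).sum()
    youden = sens + spec - 1
    if best is None or youden > best[0]:
        best = (youden, threshold, sens, spec)

print(f"optimal threshold T/N >= {best[1]:.2f}: "
      f"sensitivity {best[2]:.0%}, specificity {best[3]:.0%}")
```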
Micropollutants throughout an integrated urban drainage model: Sensitivity and uncertainty analysis
NASA Astrophysics Data System (ADS)
Mannina, Giorgio; Cosenza, Alida; Viviani, Gaspare
2017-11-01
The paper presents the sensitivity and uncertainty analysis of an integrated urban drainage model which includes micropollutants. Specifically, a bespoke integrated model developed in previous studies has been modified in order to include the micropollutant assessment (namely, sulfamethoxazole - SMX). The model also takes into account the interactions between the three components of the system: sewer system (SS), wastewater treatment plant (WWTP) and receiving water body (RWB). The analysis has been applied to an experimental catchment near Palermo (Italy): the Nocella catchment. Overall, five scenarios, each characterized by different uncertainty combinations of sub-systems (i.e., SS, WWTP and RWB), have been considered, applying the Extended-FAST method for the sensitivity analysis in order to select the key factors affecting the RWB quality and to design a reliable/useful experimental campaign. Results have demonstrated that sensitivity analysis is a powerful tool for increasing operator confidence in the modelling results. The approach adopted here can be used to fix some non-identifiable factors, thus wisely modifying the structure of the model and reducing the related uncertainty. The model factors related to the SS have been found to be the most relevant factors affecting the SMX modelling in the RWB when all model factors (scenario 1) or the model factors of the SS (scenarios 2 and 3) are varied. If only the factors related to the WWTP are changed (scenarios 4 and 5), the SMX concentration in the RWB is mainly influenced (up to 95% of the total variance for S_SMX,max) by the aerobic sorption coefficient. A progressive uncertainty reduction from upstream to downstream was found for the soluble fraction of SMX in the RWB.
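As a rough illustration of the variance-based reasoning behind Extended-FAST (though not the FAST algorithm itself), the sketch below estimates first-order Sobol indices for a toy three-factor function using a Monte Carlo pick-and-freeze estimator; the model and factor ranges are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x1, x2, x3):
    """Toy stand-in for an integrated drainage model output (illustrative only)."""
    return 2.0 * x1 + 0.5 * x2 ** 2 + 0.1 * x1 * x3

n = 100_000
a = rng.uniform(0.0, 1.0, size=(n, 3))
b = rng.uniform(0.0, 1.0, size=(n, 3))
y_a, y_b = model(*a.T), model(*b.T)
var_y = y_a.var()

# First-order index of factor i: re-evaluate matrix B with column i taken
# from A, then apply the pick-and-freeze covariance estimator.
for i, name in enumerate(["x1", "x2", "x3"]):
    b_ai = b.copy()
    b_ai[:, i] = a[:, i]
    s_i = np.mean(y_a * (model(*b_ai.T) - y_b)) / var_y
    print(f"S_{name} ≈ {s_i:.2f}")
```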
Radomyski, Artur; Giubilato, Elisa; Ciffroy, Philippe; Critto, Andrea; Brochot, Céline; Marcomini, Antonio
2016-11-01
The study is focused on applying uncertainty and sensitivity analysis to support the application and evaluation of large exposure models where a significant number of parameters and complex exposure scenarios might be involved. The recently developed MERLIN-Expo exposure modelling tool was applied to probabilistically assess the ecological and human exposure to PCB 126 and 2,3,7,8-TCDD in the Venice lagoon (Italy). The 'Phytoplankton', 'Aquatic Invertebrate', 'Fish', 'Human intake' and PBPK models available in the MERLIN-Expo library were integrated to create a specific food web to dynamically simulate bioaccumulation in various aquatic species and in the human body over individual lifetimes from 1932 until 1998. MERLIN-Expo is a high tier exposure modelling tool allowing propagation of uncertainty on the model predictions through Monte Carlo simulation. Uncertainty in model output can be further apportioned between parameters by applying built-in sensitivity analysis tools. In this study, uncertainty has been extensively addressed through the distribution functions describing the input data, and its effect on model results has been assessed by applying sensitivity analysis techniques (the screening Morris method, regression analysis, and the variance-based EFAST method). In the exposure scenario developed for the Lagoon of Venice, the concentrations of 2,3,7,8-TCDD and PCB 126 in human blood turned out to be mainly influenced by a combination of parameters (half-lives of the chemicals, body weight variability, lipid fraction, food assimilation efficiency), physiological processes (uptake/elimination rates), environmental exposure concentrations (sediment, water, food) and eating behaviours (amount of food eaten). In conclusion, this case study demonstrated the feasibility of successfully employing MERLIN-Expo in integrated, high tier exposure assessment. Copyright © 2016 Elsevier B.V. All rights reserved.
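A compact sketch of elementary-effects screening in the spirit of the Morris method is shown below (a simplified radial one-at-a-time variant rather than the full trajectory design). The exposure surrogate and factor names are hypothetical, not MERLIN-Expo outputs.

```python
import numpy as np

rng = np.random.default_rng(1)

def exposure_model(x):
    """Toy placeholder for an exposure-model output (not the real MERLIN-Expo model)."""
    half_life, body_weight, lipid_fraction = x
    return 0.8 * half_life + lipid_fraction ** 2 + 0.05 * body_weight

delta = 0.1
n_base_points = 50
factor_names = ["half_life", "body_weight", "lipid_fraction"]
effects = {name: [] for name in factor_names}

for _ in range(n_base_points):
    x = rng.uniform(0.0, 1.0 - delta, size=3)        # random base point in [0, 1 - delta]^3
    y0 = exposure_model(x)
    for i, name in enumerate(factor_names):          # perturb one factor at a time
        x_step = x.copy()
        x_step[i] += delta
        effects[name].append((exposure_model(x_step) - y0) / delta)

for name in factor_names:
    ee = np.array(effects[name])
    print(f"{name}: mu* = {np.abs(ee).mean():.2f}, sigma = {ee.std():.2f}")
```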
Wali, Arvin R; Park, Charlie C; Santiago-Dieppa, David R; Vaida, Florin; Murphy, James D; Khalessi, Alexander A
2017-06-01
OBJECTIVE Rupture of large or giant intracranial aneurysms leads to significant morbidity, mortality, and health care costs. Both coiling and the Pipeline embolization device (PED) have been shown to be safe and clinically effective for the treatment of unruptured large and giant intracranial aneurysms; however, the relative cost-to-outcome ratio is unknown. The authors present the first cost-effectiveness analysis to compare the economic impact of the PED compared with coiling or no treatment for the endovascular management of large or giant intracranial aneurysms. METHODS A Markov model was constructed to simulate a 60-year-old woman with a large or giant intracranial aneurysm considering a PED, endovascular coiling, or no treatment in terms of neurological outcome, angiographic outcome, retreatment rates, procedural and rehabilitation costs, and rupture rates. Transition probabilities were derived from prior literature reporting outcomes and costs of PED, coiling, and no treatment for the management of aneurysms. Cost-effectiveness was assessed using incremental cost-effectiveness ratios (ICERs), defined as the difference in costs divided by the difference in quality-adjusted life years (QALYs). ICERs < $50,000/QALY gained were considered cost-effective. To study parameter uncertainty, 1-way, 2-way, and probabilistic sensitivity analyses were performed. RESULTS The base-case model demonstrated lifetime QALYs of 12.72 for patients in the PED cohort, 12.89 for the endovascular coiling cohort, and 9.7 for patients in the no-treatment cohort. Lifetime rehabilitation and treatment costs were $59,837.52 for PED; $79,025.42 for endovascular coiling; and $193,531.29 in the no-treatment cohort. Patients who did not undergo elective treatment were subject to increased rates of aneurysm rupture and high treatment and rehabilitation costs. One-way sensitivity analysis demonstrated that the model was most sensitive to assumptions about the costs and mortality risks for PED and coiling. Probabilistic sampling demonstrated that PED was the cost-effective strategy in 58.4% of iterations, coiling was the cost-effective strategy in 41.4% of iterations, and the no-treatment option was the cost-effective strategy in only 0.2% of iterations. CONCLUSIONS The authors' cost-effective model demonstrated that elective endovascular techniques such as PED and endovascular coiling are cost-effective strategies for improving health outcomes and lifetime quality of life measures in patients with large or giant unruptured intracranial aneurysms.
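The mechanics of such a Markov cohort model amount to repeatedly multiplying a state-occupancy vector by a transition matrix and accumulating discounted utilities. The sketch below uses a generic three-state example with made-up probabilities and utilities, not the parameters of the aneurysm model.

```python
import numpy as np

# Hypothetical annual transition matrix for a three-state Markov cohort model
# (well / disabled / dead); probabilities are illustrative, not the paper's.
P = np.array([
    [0.90, 0.07, 0.03],
    [0.00, 0.85, 0.15],
    [0.00, 0.00, 1.00],
])
utilities = np.array([0.85, 0.40, 0.0])   # QALY weight accrued per state per year
discount = 0.03

state = np.array([1.0, 0.0, 0.0])          # whole cohort starts in the "well" state
total_qalys = 0.0
for year in range(40):                      # 40-year horizon
    total_qalys += (state @ utilities) / (1 + discount) ** year
    state = state @ P                       # advance the cohort one annual cycle

print(f"expected discounted QALYs per patient ≈ {total_qalys:.2f}")
```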
Global analysis of the yeast lipidome by quantitative shotgun mass spectrometry.
Ejsing, Christer S; Sampaio, Julio L; Surendranath, Vineeth; Duchoslav, Eva; Ekroos, Kim; Klemm, Robin W; Simons, Kai; Shevchenko, Andrej
2009-02-17
Although the transcriptome, proteome, and interactome of several eukaryotic model organisms have been described in detail, lipidomes remain relatively uncharacterized. Using Saccharomyces cerevisiae as an example, we demonstrate that automated shotgun lipidomics analysis enabled lipidome-wide absolute quantification of individual molecular lipid species by streamlined processing of a single sample of only 2 million yeast cells. By comparative lipidomics, we achieved the absolute quantification of 250 molecular lipid species covering 21 major lipid classes. This analysis provided approximately 95% coverage of the yeast lipidome, achieved with a 125-fold improvement in sensitivity compared with previous approaches. Comparative lipidomics demonstrated that growth temperature and defects in lipid biosynthesis induce ripple effects throughout the molecular composition of the yeast lipidome. This work serves as a resource for molecular characterization of eukaryotic lipidomes, and establishes shotgun lipidomics as a powerful platform for complementing biochemical studies and other systems-level approaches.
Signatures of mountain building: Detrital zircon U/Pb ages from northeast Tibet
Lease, Richard O.; Burbank, Douglas W.; Gehrels, George E.; Wang, Zhicai; Yuan, Daoyang
2007-01-01
Although detrital zircon has proven to be a powerful tool for determining provenance, past work has focused primarily on delimiting regional source terranes. Here we explore the limits of spatial resolution and stratigraphic sensitivity of detrital zircon in ascertaining provenance, and we demonstrate its ability to detect source changes for terranes separated by only a few tens of kilometers. For such an analysis to succeed for a given mountain, discrete intrarange source terranes must have unique U/Pb zircon age signatures and sediments eroded from the range must have well-defined depositional ages. Here we use ∼1400 single-grain U/Pb zircon ages from northeastern Tibet to identify and analyze an area that satisfies these conditions. This analysis shows that the edges of intermontane basins are stratigraphically sensitive to discrete, punctuated changes in local source terranes. By tracking eroding rock units chronologically through the stratigraphic record, this sensitivity permits the detection of the differential rock uplift and progressive erosion that began ca. 8 Ma in the Laji Shan, a 10-25-km-wide range in northeastern Tibet with a unique U/Pb age signature.
NASA Astrophysics Data System (ADS)
Yahya, W. N. W.; Zaini, S. S.; Ismail, M. A.; Majid, T. A.; Deraman, S. N. C.; Abdullah, J.
2018-04-01
Damage due to wind-related disasters is increasing due to global climate change. Many studies have been conducted to study the wind effects surrounding low-rise buildings using wind tunnel tests or numerical simulations. The use of numerical simulation is relatively cheap but requires very good command in handling the software, acquiring the correct input parameters and obtaining the optimum grid or mesh. However, before a study can be conducted, a grid sensitivity test must be carried out to determine a suitable cell count for the final mesh, so as to ensure accurate results with less computing time. This study demonstrates the numerical procedures for conducting a grid sensitivity analysis using five models with different grid schemes. The pressure coefficients (CP) were observed along the wall and roof profile and compared between the models. The results showed that the medium grid scheme can be used and is able to produce results of comparable accuracy to the finer grid scheme, as the difference in the CP values was found to be insignificant.
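In practice, the grid-independence judgment described above often comes down to comparing pressure coefficients between successive grid refinements and checking that the differences are negligible. The sketch below shows that comparison with invented CP values rather than the study's CFD results.

```python
import numpy as np

# Hypothetical pressure coefficients sampled along the roof profile for two
# grid schemes (values are illustrative, not the study's CFD results).
cp_medium = np.array([-0.62, -0.85, -1.10, -0.95, -0.70, -0.45])
cp_fine   = np.array([-0.63, -0.86, -1.12, -0.96, -0.71, -0.46])

abs_diff = np.abs(cp_fine - cp_medium)
rel_diff = abs_diff / np.abs(cp_fine)
print(f"max |ΔCP| = {abs_diff.max():.3f}, "
      f"max relative difference = {rel_diff.max():.1%}")
# A small maximum difference supports using the medium grid for production runs.
```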
High order statistical signatures from source-driven measurements of subcritical fissile systems
NASA Astrophysics Data System (ADS)
Mattingly, John Kelly
1998-11-01
This research focuses on the development and application of high order statistical analyses applied to measurements performed with subcritical fissile systems driven by an introduced neutron source. The signatures presented are derived from counting statistics of the introduced source and radiation detectors that observe the response of the fissile system. It is demonstrated that successively higher order counting statistics possess progressively higher sensitivity to reactivity. Consequently, these signatures are more sensitive to changes in the composition, fissile mass, and configuration of the fissile assembly. Furthermore, it is shown that these techniques are capable of distinguishing the response of the fissile system to the introduced source from its response to any internal or inherent sources. This ability combined with the enhanced sensitivity of higher order signatures indicates that these techniques will be of significant utility in a variety of applications. Potential applications include enhanced radiation signature identification of weapons components for nuclear disarmament and safeguards applications and augmented nondestructive analysis of spent nuclear fuel. In general, these techniques expand present capabilities in the analysis of subcritical measurements.
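One concrete example of the higher-order counting statistics alluded to above is the family of reduced factorial moments of detector counts per time gate, whose excess over the Poisson expectation grows with neutron multiplication. The sketch below uses a pure Poisson source, so both statistics come out near zero; it is illustrative only and is not the dissertation's specific signature set.

```python
import numpy as np

def reduced_factorial_moments(counts):
    """Second- and third-order reduced factorial moments of counts per time gate.
    Their excess over the Poisson value (zero) is a common multiplication-sensitive
    signature; this is an illustrative statistic, not the dissertation's exact one."""
    c = np.asarray(counts, dtype=float)
    m1 = c.mean()
    m2 = (c * (c - 1)).mean()
    m3 = (c * (c - 1) * (c - 2)).mean()
    y2 = m2 / m1 ** 2 - 1.0       # = 0 for a pure Poisson source
    y3 = m3 / m1 ** 3 - 1.0       # = 0 for a pure Poisson source
    return y2, y3

rng = np.random.default_rng(2)
poisson_counts = rng.poisson(4.0, size=100_000)
print(reduced_factorial_moments(poisson_counts))   # both values ≈ 0
```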
Uncertainty Analysis of the Grazing Flow Impedance Tube
NASA Technical Reports Server (NTRS)
Brown, Martha C.; Jones, Michael G.; Watson, Willie R.
2012-01-01
This paper outlines a methodology to identify the measurement uncertainty of NASA Langley's Grazing Flow Impedance Tube (GFIT) over its operating range, and to identify the parameters that most significantly contribute to the acoustic impedance prediction. Two acoustic liners are used for this study. The first is a single-layer, perforate-over-honeycomb liner that is nonlinear with respect to sound pressure level. The second consists of a wire-mesh facesheet and a honeycomb core, and is linear with respect to sound pressure level. These liners allow for evaluation of the effects of measurement uncertainty on impedances educed with linear and nonlinear liners. In general, the measurement uncertainty is observed to be larger for the nonlinear liners, with the largest uncertainty occurring near anti-resonance. A sensitivity analysis of the aerodynamic parameters (Mach number, static temperature, and static pressure) used in the impedance eduction process is also conducted using a Monte-Carlo approach. This sensitivity analysis demonstrates that the impedance eduction process is virtually insensitive to each of these parameters.
Advances in on-chip photodetection for applications in miniaturized genetic analysis systems
NASA Astrophysics Data System (ADS)
Namasivayam, Vijay; Lin, Rongsheng; Johnson, Brian; Brahmasandra, Sundaresh; Razzacki, Zafar; Burke, David T.; Burns, Mark A.
2004-01-01
Microfabrication techniques have become increasingly popular in the development of next generation DNA analysis devices. Improved on-chip fluorescence detection systems may have applications in developing portable hand-held instruments for point-of-care diagnostics. Miniaturization of fluorescence detection involves construction of ultra-sensitive photodetectors that can be integrated onto a fluidic platform combined with the appropriate optical emission filters. We have previously demonstrated integration of PIN photodiodes onto a microfabricated electrophoresis channel for separation and detection of DNA fragments. In this work, we present an improved detector structure that uses a PINN+ photodiode with an on-chip interference filter and a robust liquid barrier layer. This new design yields high sensitivity (detection limit of 0.9 ng/µl of DNA), low noise (S/N ~ 100/1) and enhanced quantum efficiencies (>80%) over the entire visible spectrum. Applications of these photodiodes in various areas of DNA analysis such as microreactions (PCR), separations (electrophoresis) and microfluidics (drop sensing) are presented.
Parallel traveling-wave MRI: a feasibility study.
Pang, Yong; Vigneron, Daniel B; Zhang, Xiaoliang
2012-04-01
Traveling-wave magnetic resonance imaging utilizes far fields of a single-piece patch antenna in the magnet bore to generate radio frequency fields for imaging large-size samples, such as the human body. In this work, the feasibility of applying the "traveling-wave" technique to parallel imaging is studied using microstrip patch antenna arrays with both numerical analysis and experimental tests. A specific patch array model is built in which each array element is a microstrip patch antenna. Bench tests show that decoupling between two adjacent elements is better than -26 dB while matching of each element reaches -36 dB, demonstrating excellent isolation performance and impedance match capability. The sensitivity patterns are simulated and g-factors are calculated for both unloaded and loaded cases. The results on B1 sensitivity patterns and g-factors demonstrate the feasibility of traveling-wave parallel imaging. Simulations also suggest that different array configurations, such as patch shape, position and orientation, lead to different sensitivity patterns and g-factor maps, which provides a way to manipulate B1 fields and improve the parallel imaging performance. The proposed method is also validated by 7T MR imaging experiments. Copyright © 2011 Wiley-Liss, Inc.
SIG-VISA: Signal-based Vertically Integrated Seismic Monitoring
NASA Astrophysics Data System (ADS)
Moore, D.; Mayeda, K. M.; Myers, S. C.; Russell, S.
2013-12-01
Traditional seismic monitoring systems rely on discrete detections produced by station processing software; however, while such detections may constitute a useful summary of station activity, they discard large amounts of information present in the original recorded signal. We present SIG-VISA (Signal-based Vertically Integrated Seismic Analysis), a system for seismic monitoring through Bayesian inference on seismic signals. By directly modeling the recorded signal, our approach incorporates additional information unavailable to detection-based methods, enabling higher sensitivity and more accurate localization using techniques such as waveform matching. SIG-VISA's Bayesian forward model of seismic signal envelopes includes physically-derived models of travel times and source characteristics as well as Gaussian process (kriging) statistical models of signal properties that combine interpolation of historical data with extrapolation of learned physical trends. Applying Bayesian inference, we evaluate the model on earthquakes as well as the 2009 DPRK test event, demonstrating a waveform matching effect as part of the probabilistic inference, along with results on event localization and sensitivity. In particular, we demonstrate increased sensitivity from signal-based modeling, in which the SIG-VISA signal model finds statistical evidence for arrivals even at stations for which the IMS station processing failed to register any detection.
García-Arribas, Alfredo; Gutiérrez, Jon; Kurlyandskaya, Galina V.; Barandiarán, José M.; Svalov, Andrey; Fernández, Eduardo; Lasheras, Andoni; de Cos, David; Bravo-Imaz, Iñaki
2014-01-01
The outstanding properties of selected soft magnetic materials make them successful candidates for building high performance sensors. In this paper we present our recent work regarding different sensing technologies based on the coupling of the magnetic properties of soft magnetic materials with their electric or elastic properties. First, we report the influence on the magneto-impedance response of the thickness of Permalloy films in multilayer-sandwiched structures. An impedance change of 270% was found in the best conditions upon the application of a magnetic field, with a low-field sensitivity of 140%/Oe. Second, the magneto-elastic resonance of amorphous ribbons is used to demonstrate the possibility of sensitively measuring the viscosity of fluids, with the aim of developing an on-line, real-time sensor capable of assessing the state of degradation of lubricant oils in machinery. A novel analysis method is shown to sensitively reveal the changes of the damping parameter of the magnetoelastic oscillations at resonance as a function of the oil viscosity. Finally, the properties and performance of magneto-electric laminated composites of amorphous magnetic ribbons and piezoelectric polymer films are investigated, demonstrating magnetic field detection capabilities below 2.7 nT. PMID:24776934
Bueno, Ana María; Marín, Miguel Ángel; Contento, Ana María; Ríos, Ángel
2016-02-01
A chromatographic method, using amperometric detection, for the sensitive determination of six representative mutagenic amines was developed. A glassy carbon electrode (GCE), modified with multiwall carbon nanotubes (GCE-CNTs), was prepared and its response compared to that of a conventional glassy carbon electrode. The chromatographic method (HPLC-GCE-CNTs) allowed the separation and determination of heterocyclic aromatic amines (HAAs) classified as mutagenic amines by the International Agency for Research on Cancer. The new electrode was systematically studied in terms of stability, sensitivity, and reproducibility. Statistical analysis of the obtained data demonstrated that the modified electrode provided better sensitivity than the conventional unmodified one. Detection limits were in the 3.0–7.5 ng/mL range, whereas quantification limits ranged between 9.5 and 25.0 ng/mL. The applicability of the method was demonstrated by the determination of the amines in several types of samples (water and food samples). Recoveries indicate very good agreement between the amounts added and those found for all HAAs (recoveries in the 92–105% range). Copyright © 2015 Elsevier Ltd. All rights reserved.
Lu, Yun; Jin, Xiuze; Duan, Cheng-A-Xin; Chang, Feng
2018-01-01
Hepatitis C is the second fastest growing infectious disease in China. The standard-of-care for chronic hepatitis C in China is Pegylated interferon plus ribavirin (PR), which is associated with tolerability and efficacy issues. An interferon- and ribavirin-free, all-oral regimen comprising daclatasvir (DCV) and asunaprevir (ASV), which displays higher efficacy and tolerability, has recently been approved in China. This study aims to estimate the cost-effectiveness of DCV+ASV (24 weeks) for chronic hepatitis C genotype 1b treatment-naïve patients compared with the PR regimen (48 weeks) in China. A cohort-based Markov model was developed from a Chinese payer perspective to project the lifetime outcomes of treating 10,000 patients with an average age of 44.5 years with two hypothetical regimens, DCV+ASV and PR. Chinese-specific health state costs and efficacy data were used. The annual discount rate was 5%. Base-case analysis and sensitivity analysis were conducted. For HCV Genotype 1b treatment-naïve patients, DCV+ASV proved to be dominant over PR, with a cost saving of ¥33,480 (5,096 USD) and gains in QALYs and life years of 1.29 and 0.85, respectively. The lifetime risk of compensated cirrhosis, decompensated cirrhosis, hepatocellular carcinoma and liver-related death was greatly reduced with DCV+ASV. Univariate sensitivity analysis demonstrated that key influencers were the discount rate, time horizon, initial disease severity and sustained virological response rate of DCV+ASV, with all scenarios resulting in additional benefit. Probabilistic sensitivity analysis demonstrated that DCV+ASV has a high likelihood (100%) of being cost-effective. DCV+ASV is not only an effective and well-tolerated regimen to treat treatment-naïve patients with chronic HCV genotype 1b infection, but is also more cost-effective than the PR regimen. DCV+ASV can benefit both the public health and reimbursement system in China.
Pumping tests in non-uniform aquifers - the linear strip case
Butler, J.J.; Liu, W.Z.
1991-01-01
Many pumping tests are performed in geologic settings that can be conceptualized as a linear infinite strip of one material embedded in a matrix of differing flow properties. A semi-analytical solution is presented to aid the analysis of drawdown data obtained from pumping tests performed in settings that can be represented by such a conceptual model. Integral transform techniques are employed to obtain a solution in transform space that can be numerically inverted to real space. Examination of the numerically transformed solution reveals several interesting features of flow in this configuration. If the transmissivity of the strip is much higher than that of the matrix, linear and bilinear flow are the primary flow regimes during a pumping test. If the contrast between matrix and strip properties is not as extreme, then radial flow should be the primary flow mechanism. Sensitivity analysis is employed to develop insight into the controls on drawdown in this conceptual model and to demonstrate the importance of temporal and spatial placement of observations. Changes in drawdown are sensitive to the transmissivity of the strip for a limited time duration. After that time, only the total drawdown remains a function of strip transmissivity. In the case of storativity, both the total drawdown and changes in drawdown are sensitive to the storativity of the strip for a time of quite limited duration. After that time, essentially no information can be gained about the storage properties of the strip from drawdown data. An example analysis is performed using data previously presented in the literature to demonstrate the viability of the semi-analytical solution and to illustrate a general procedure for analysis of drawdown data in complex geologic settings. This example reinforces the importance of observation well placement and the time of data collection in constraining parameter correlation, a major source of the uncertainty that arises in the parameter estimation procedure. © 1991.
Lu, Yun; Jin, Xiuze; Duan, Cheng-a-xin
2018-01-01
Background Hepatitis C is the second fastest growing infectious disease in China. The standard-of-care for chronic hepatitis C in China is Pegylated interferon plus ribavirin (PR), which is associated with tolerability and efficacy issues. An interferon- and ribavirin-free, all-oral regimen comprising daclatasvir (DCV) and asunaprevir (ASV), which displays higher efficacy and tolerability, has recently been approved in China. Objectives This study aims to estimate the cost-effectiveness of DCV+ASV (24 weeks) for chronic hepatitis C genotype 1b treatment-naïve patients compared with the PR regimen (48 weeks) in China. Methods A cohort-based Markov model was developed from a Chinese payer perspective to project the lifetime outcomes of treating 10,000 patients with an average age of 44.5 years with two hypothetical regimens, DCV+ASV and PR. Chinese-specific health state costs and efficacy data were used. The annual discount rate was 5%. Base-case analysis and sensitivity analysis were conducted. Results For HCV Genotype 1b treatment-naïve patients, DCV+ASV proved to be dominant over PR, with a cost saving of ¥33,480 (5,096 USD) and gains in QALYs and life years of 1.29 and 0.85, respectively. The lifetime risk of compensated cirrhosis, decompensated cirrhosis, hepatocellular carcinoma and liver-related death was greatly reduced with DCV+ASV. Univariate sensitivity analysis demonstrated that key influencers were the discount rate, time horizon, initial disease severity and sustained virological response rate of DCV+ASV, with all scenarios resulting in additional benefit. Probabilistic sensitivity analysis demonstrated that DCV+ASV has a high likelihood (100%) of being cost-effective. Conclusion DCV+ASV is not only an effective and well-tolerated regimen to treat treatment-naïve patients with chronic HCV genotype 1b infection, but is also more cost-effective than the PR regimen. DCV+ASV can benefit both the public health and reimbursement system in China. PMID:29634736
Knopman, Debra S.; Voss, Clifford I.
1988-01-01
Sensitivities of solute concentration to parameters associated with first-order chemical decay, boundary conditions, initial conditions, and multilayer transport are examined in one-dimensional analytical models of transient solute transport in porous media. A sensitivity is a change in solute concentration resulting from a change in a model parameter. Sensitivity analysis is important because the minimum information required for the estimation of model parameters by regression on chemical data is expressed in terms of sensitivities. Nonlinear regression models of solute transport were tested on sets of noiseless observations from known models that exceeded the minimum sensitivity information requirements. Results demonstrate that the regression models consistently converged to the correct parameters even when the initial sets of parameter values substantially deviated from the correct parameters. On the basis of the sensitivity analysis, several statements may be made about the design of sampling for parameter estimation for the models examined: (1) estimation of parameters associated with solute transport in the individual layers of a multilayer system is possible even when solute concentrations in the individual layers are mixed in an observation well; (2) when estimating parameters in a decaying upstream boundary condition, observations are best made late in the passage of the front, near a time chosen by adding the inverse of a hypothesized value of the source decay parameter to the estimated mean travel time at a given downstream location; (3) estimation of a first-order chemical decay parameter requires observations to be made late in the passage of the front, preferably near a location corresponding to a travel time of √2 times the half-life of the solute; and (4) estimation of a parameter relating to spatial variability in an initial condition requires observations to be made early in time relative to passage of the solute front.
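Guidelines (2) and (3) above translate into simple arithmetic once trial parameter values are hypothesized. The numbers below are invented solely to show the calculation.

```python
import math

# Illustrative application of the sampling guidelines above, with hypothetical
# transport parameters (not values from the study).
mean_travel_time = 120.0      # days, estimated mean travel time to the observation well
source_decay_rate = 0.02      # 1/day, hypothesized upstream-boundary decay parameter
solute_half_life = 30.0       # days, hypothesized first-order decay half-life
velocity = 0.5                # m/day, mean pore-water velocity

# Guideline (2): observe late in the front's passage, near
# t = mean travel time + 1 / (source decay parameter).
t_boundary = mean_travel_time + 1.0 / source_decay_rate
print(f"suggested sampling time for the boundary-decay parameter: {t_boundary:.0f} d")

# Guideline (3): observe near a location whose travel time is sqrt(2) * half-life.
t_decay = math.sqrt(2.0) * solute_half_life
print(f"suggested location for the decay parameter: {velocity * t_decay:.1f} m downstream")
```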
Li, Changjun; Chang, Qinghua; Zhang, Jia; Chai, Wenshu
2018-05-01
This study aims to investigate the effects of slow breathing on heart rate variability (HRV) and arterial baroreflex sensitivity in essential hypertension. We studied 60 patients with essential hypertension and 60 healthy controls. All subjects underwent controlled breathing at 8 and 16 breaths per minute. Electrocardiogram, respiratory, and blood pressure signals were recorded simultaneously. We studied the effects of slow breathing on heart rate, blood pressure, the respiratory peak, high-frequency (HF) power, low-frequency (LF) power, and the LF/HF ratio of HRV with traditional and corrected spectral analysis. In addition, we tested whether slow breathing was capable of modifying baroreflex sensitivity in hypertensive subjects. Slow breathing, compared with 16 breaths per minute, decreased the heart rate and blood pressure (all P < .05), and shifted the respiratory peak toward the left (P < .05). Compared to 16 breaths per minute, traditional spectral analysis showed increased LF power and LF/HF ratio, and decreased HF power of HRV at 8 breaths per minute (P < .05). As breathing rate decreased, corrected spectral analysis showed increased HF power and decreased LF power and LF/HF ratio of HRV (P < .05). Compared to controls, resting baroreflex sensitivity was decreased in hypertensive subjects. Slow breathing increased baroreflex sensitivity in hypertensive subjects (from 59.48 ± 6.39 to 78.93 ± 5.04 ms/mm Hg, P < .05) and controls (from 88.49 ± 6.01 to 112.91 ± 7.29 ms/mm Hg, P < .05). Slow breathing can increase HF power and decrease LF power and the LF/HF ratio in essential hypertension. In addition, slow breathing increased baroreflex sensitivity in hypertensive subjects. These findings demonstrate that slow breathing is indeed capable of shifting sympatho-vagal balance toward vagal activities and increasing baroreflex sensitivity, suggesting a safe, therapeutic approach for essential hypertension.
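The LF and HF powers discussed above are band integrals of the power spectral density of the (evenly resampled) RR-interval series; note that at 8 breaths per minute the respiratory peak (about 0.13 Hz) falls inside the conventional LF band, which is why the corrected analysis matters. The sketch below computes the band powers for a synthetic tachogram; it is illustrative and does not use patient data.

```python
import numpy as np
from scipy.signal import welch

# Synthetic, evenly resampled RR-interval series (4 Hz) with a respiratory
# oscillation near 0.13 Hz; purely illustrative, not patient data.
fs = 4.0
t = np.arange(0, 300, 1 / fs)
rng = np.random.default_rng(3)
rr = 0.8 + 0.03 * np.sin(2 * np.pi * 0.13 * t) + 0.01 * rng.standard_normal(t.size)

freqs, psd = welch(rr - rr.mean(), fs=fs, nperseg=512)
df = freqs[1] - freqs[0]

def band_power(lo, hi):
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum() * df            # rectangle-rule integral of the PSD

lf = band_power(0.04, 0.15)                 # conventional low-frequency band
hf = band_power(0.15, 0.40)                 # conventional high-frequency band
print(f"LF = {lf:.2e}, HF = {hf:.2e}, LF/HF = {lf / hf:.2f}")
```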
NASA Astrophysics Data System (ADS)
Daaboul, George
Label-free optical biosensors have been established as proven tools for monitoring specific biomolecular interactions. However, compact and robust embodiments of such instruments have yet to be introduced in order to provide sensitive, quantitative, and high-throughput biosensing for low-cost research and clinical applications. Here we present the interferometric reflectance-imaging sensor (IRIS). IRIS allows sensitive label-free analysis using an inexpensive and durable multi-color LED illumination source on a silicon-based surface. IRIS monitors biomolecular interaction through measurement of biomass addition to the sensor's surface. We demonstrate the capability of this system to dynamically monitor antigen-antibody interactions with a noise floor of 5.2 pg/mm^2 and DNA single mismatch detection under isothermal melting conditions in an array format. Ensemble detection of binding events using IRIS did not provide the sensitivity needed for detection of infectious disease and biomarkers at clinically relevant concentrations. Therefore, a new approach was adapted to the IRIS platform that allowed the detection and identification of individual nanoparticles on the sensor's surface. The new detection method was termed single-particle IRIS (SP-IRIS). We developed two detection modalities for SP-IRIS. The first modality is when the target is a nanoparticle such as a virus. We verified that SP-IRIS can accurately detect and size individual viral particles. Then we demonstrated that single nanoparticle counting and sizing methodology on SP-IRIS leads to a specific and sensitive virus sensor that can be multiplexed. Finally, we developed an assay for the detection of Ebola and Marburg. A detection limit of 3 x 10^3 PFU/ml was demonstrated for vesicular stomatitis virus (VSV) pseudotyped with Ebola or Marburg virus glycoprotein. We have demonstrated that virus detection can be done in human whole blood directly without the need for sample preparation. The second modality of SP-IRIS we developed was single molecule counting of biomarkers utilizing a sandwich assay with detection probes labeled with gold nanoparticles. We demonstrated the use of single molecule counting in a nucleic acid assay for melanoma biomarker detection. We showed that a single molecule counting assay can lead to detection limits in the attomolar range. The improved sensitivity of IRIS utilizing single nanoparticle detection holds promise for a simple and low-cost technology for rapid virus detection and multiplexed molecular screening for clinical applications.
NASA Astrophysics Data System (ADS)
Kumar, Rajeev; Kushwaha, Angad S.; Srivastava, Monika; Mishra, H.; Srivastava, S. K.
2018-03-01
In the present communication, a highly sensitive surface plasmon resonance (SPR) biosensor with Kretschmann configuration having alternate layers, prism/zinc oxide/silver/gold/graphene/biomolecules (ss-DNA), is presented. The optimization of the proposed configuration has been accomplished by keeping constant the thicknesses of the zinc oxide (32 nm), silver (32 nm) and graphene (0.34 nm) layers and the biomolecules (100 nm) for different values of gold layer thickness (1, 3 and 5 nm). The sensitivity of the proposed SPR biosensor has been demonstrated for a number of design parameters such as gold layer thickness, number of graphene layers, refractive index of biomolecules and the thickness of the biomolecules layer. The SPR biosensor with optimized geometry has greater sensitivity (66 deg/RIU) than the conventional (52 deg/RIU) as well as other graphene-based (53.2 deg/RIU) SPR biosensors. The effect of zinc oxide layer thickness on the sensitivity of the SPR biosensor has also been analysed. From the analysis, it is found that the sensitivity increases significantly by increasing the thickness of the zinc oxide layer. This means the zinc oxide intermediate layer plays an important role in improving the sensitivity of the biosensor. The sensitivity of the SPR biosensor also increases by increasing the number of graphene layers (up to nine layers).
NASA Astrophysics Data System (ADS)
Yan, Guofeng; Zhang, Liang; He, Sailing
2016-04-01
In this paper, a dual-parameter measurement scheme based on an etched thin-core fiber modal interferometer (TCFMI) cascaded with a fiber Bragg grating (FBG) is proposed and experimentally demonstrated for simultaneous measurement of magnetic field and temperature. The magnetic field and temperature responses of the packaged TCFMI were first investigated, showing that the magnetic field sensitivity could be strongly enhanced by decreasing the TCF diameter and that the temperature cross-sensitivities were 3-7 Oe/°C at 1550 nm. Then, the theoretical analysis and experimental demonstration of the proposed dual-parameter sensing scheme were conducted. Experimental results show that the reflected intensity of the FBG has magnetic field and temperature sensitivities of -0.017 dB/Oe and 0.133 dB/°C, respectively, while the Bragg wavelength of the FBG is insensitive to the magnetic field and has a temperature sensitivity of 13.23 pm/°C. Thus, by using the sensing matrix method, the magnetic field intensity and the temperature variation can be measured, which enables magnetic field sensing under strict temperature environments. In the on-off time response test, the fabricated sensor exhibited high repeatability and a short response time of ∼19.4 s. Meanwhile, the reflective sensing probe type is more compact and practical for applications in hard-to-reach conditions.
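The sensing-matrix step mentioned above amounts to solving a 2x2 linear system built from the reported sensitivities. The sketch below uses those published coefficients but invented measurement changes, so the recovered ΔH and ΔT are purely illustrative.

```python
import numpy as np

# Sensing-matrix demodulation sketch using the sensitivities reported above.
# Rows: FBG reflected intensity (dB) and Bragg wavelength (pm);
# columns: response per Oe and per °C.
K = np.array([
    [-0.017,  0.133],
    [ 0.0,   13.23 ],
])

delta_intensity_dB = -0.60      # hypothetical observed change in reflected intensity
delta_wavelength_pm = 26.5      # hypothetical observed Bragg wavelength shift

delta_H, delta_T = np.linalg.solve(K, [delta_intensity_dB, delta_wavelength_pm])
print(f"ΔH ≈ {delta_H:.1f} Oe, ΔT ≈ {delta_T:.2f} °C")
```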
Fernández-de-Las-Peñas, César; Coppieters, Michel W; Cuadrado, María Luz; Pareja, Juan A
2008-04-01
This study aimed to establish whether increased sensitivity to mechanical stimuli is present in neural tissues in chronic tension-type headache (CTTH). Muscle hyperalgesia is a common finding in CTTH. No previous studies have investigated the sensitivity of peripheral nerves in patients with CTTH. A blinded controlled study. Pressure pain thresholds (PPT) and pain intensity following palpation of the supra-orbital nerve (V1) were compared between 20 patients with CTTH and 20 healthy matched subjects. A pressure algometer and numerical pain rate scale were used to quantify PPT and pain to palpation. A headache diary was kept for 4 weeks to substantiate the diagnosis and record the pain history. The analysis of variance demonstrated significantly lower PPT for patients (0.86+/-0.13 kg/cm2) than controls (1.50+/-0.19 kg/cm2) (P<.001). Pain to palpation was also higher for patients (2.73+/-1.58) than controls (0.15+/-0.28) (P<.001). Within the CTTH group, intensity, frequency, and duration of the headaches were negatively correlated with PPT (rs
Space station integrated wall design and penetration damage control
NASA Technical Reports Server (NTRS)
Coronado, A. R.; Gibbins, M. N.; Wright, M. A.; Stern, P. H.
1987-01-01
The analysis code BUMPER executes a numerical solution to the problem of calculating the probability of no penetration (PNP) of a spacecraft subject to man-made orbital debris or meteoroid impact. The code was developed on a DEC VAX 11/780 computer running the Virtual Memory System (VMS) operating system and is written in FORTRAN 77 with no VAX extensions. To help illustrate the steps involved, a single sample analysis is performed. The example used is the space station reference configuration. The finite element model (FEM) of this configuration is relatively complex but demonstrates many BUMPER features. The computer tools and guidelines are described for constructing a FEM for the space station under consideration. The methods used to analyze the sensitivity of PNP to variations in design are described. Ways are suggested for developing contour plots of the sensitivity study data. Additional BUMPER analysis examples are provided, including FEMs, command inputs, and data outputs. The mathematical theory used as the basis for the code is described, and the data flow within the analysis is illustrated.
David, Frank; Tienpont, Bart; Devos, Christophe; Lerch, Oliver; Sandra, Pat
2013-10-25
Laboratories focusing on residue analysis in food are continuously seeking to increase sample throughput by minimizing sample preparation. Generic sample extraction methods such as QuEChERS lack selectivity and consequently extracts are not free from non-volatile material that contaminates the analytical system. Co-extracted matrix constituents interfere with target analytes, even if highly sensitive and selective GC-MS/MS is used. A number of GC approaches are described that can be used to increase laboratory productivity. These techniques include automated inlet liner exchange and column backflushing for preservation of the performance of the analytical system, and heart-cutting two-dimensional GC for increasing sensitivity and selectivity. The application of these tools is illustrated by the analysis of pesticides in vegetables and fruits, PCBs in milk powder and coplanar PCBs in fish. It is demonstrated that a considerable increase in productivity can be achieved by decreasing instrument down-time, while analytical performance is equal to or better than that of conventional trace contaminant analysis. Copyright © 2013 Elsevier B.V. All rights reserved.
Extended Testability Analysis Tool
NASA Technical Reports Server (NTRS)
Melcher, Kevin; Maul, William A.; Fulton, Christopher
2012-01-01
The Extended Testability Analysis (ETA) Tool is a software application that supports fault management (FM) by performing testability analyses on the fault propagation model of a given system. Fault management includes the prevention of faults through robust design margins and quality assurance methods, or the mitigation of system failures. Fault management requires an understanding of the system design and operation, potential failure mechanisms within the system, and the propagation of those potential failures through the system. The purpose of the ETA Tool software is to process the testability analysis results from a commercial software program called TEAMS Designer in order to provide a detailed set of diagnostic assessment reports. The ETA Tool is a command-line process with several user-selectable report output options. The ETA Tool also extends the COTS testability analysis and enables variation studies with sensor sensitivity impacts on system diagnostics and component isolation using a single testability output. The ETA Tool can also provide extended analyses from a single set of testability output files. The following analysis reports are available to the user: (1) the Detectability Report provides a breakdown of how each tested failure mode was detected, (2) the Test Utilization Report identifies all the failure modes that each test detects, (3) the Failure Mode Isolation Report demonstrates the system's ability to discriminate between failure modes, (4) the Component Isolation Report demonstrates the system's ability to discriminate between failure modes relative to the components containing the failure modes, (5) the Sensor Sensitivity Analysis Report shows the diagnostic impact due to loss of sensor information, and (6) the Effect Mapping Report identifies failure modes that result in specified system-level effects.
Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics.
Arampatzis, Georgios; Katsoulakis, Markos A; Rey-Bellet, Luc
2016-03-14
We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.
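To make the estimator family concrete, the sketch below applies a plain and a centered likelihood-ratio (score-function) estimator to a toy exponential model, where the exact parametric sensitivity is known in closed form. This is only an illustration of the general idea; it is not the paper's estimator for reaction networks or Langevin dynamics.

```python
import numpy as np

rng = np.random.default_rng(4)

# Likelihood-ratio (score-function) sensitivity estimate for a toy model:
# X ~ Exponential(rate theta), observable f(X) = X, target d/dtheta E[X] = -1/theta^2.
theta = 2.0
n = 200_000
x = rng.exponential(scale=1.0 / theta, size=n)   # numpy uses scale = 1/rate

f = x
score = 1.0 / theta - x                  # d/dtheta log p(x; theta) for Exp(theta)

plain = np.mean(f * score)
centered = np.mean((f - f.mean()) * score)   # centering the observable reduces variance

print(f"plain LR estimate    : {plain:.4f}")
print(f"centered LR estimate : {centered:.4f}")
print(f"exact d/dtheta E[X]  : {-1.0 / theta ** 2:.4f}")   # = -0.25
```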
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Fang; Wang, Kaihua; Lin, Yuehe
2005-10-10
A novel, sensitive immunochromatographic electrochemical biosensor (IEB), which combines an immunochromatographic strip technique with an electrochemical detection technique, is demonstrated. The IEB takes advantage of the speed and low cost of conventional immunochromatographic test kits and the high sensitivity of stripping voltammetry. Bismuth ions (Bi3+) have been coupled with the antibody through the bifunctional chelating agent diethylenetriamine pentaacetic acid (DTPA). After immunoreactions, Bi3+ was released and quantified by anodic stripping voltammetry at a built-in single-use screen-printed electrode. As an example of the applications of such a novel device, the detection of human chorionic gonadotropin (HCG) in a specimen was performed. This biosensor provides a more user-friendly, rapid, clinically accurate, and less expensive immunoassay for such analysis in specimens than currently available test kits.
Bhardwaj, Jyoti; Mahajan, Monika; Yadav, Sudesh Kumar
2013-08-01
DNA methylation is known as an epigenetic modification that affects gene expression in plants. Variation in CpG methylation behavior was studied in two natural horse gram (Macrotyloma uniflorum [Lam.] Verdc.) genotypes, HPKC2 (drought-sensitive) and HPK4 (drought-tolerant). The methylation pattern in both genotypes was studied through methylation-sensitive amplified polymorphism. The results revealed that methylation was higher in HPKC2 (10.1%) than in HPK4 (8.6%). Sequencing demonstrated sequence homology with the DRE binding factor (cbf1), the POZ/BTB protein, and the Ty1-copia retrotransposon among some of the polymorphic fragments showing alteration in methylation behavior. Differences in DNA methylation patterns could explain the differential drought tolerance and the epigenetic signature of these two horse gram genotypes.
Marzulli, F; Maguire, H C
1982-02-01
Several guinea-pig predictive test methods were evaluated by comparison of results with those obtained with human predictive tests, using ten compounds that have been used in cosmetics. The method involves the statistical analysis of the frequency with which guinea-pig tests agree with the findings of tests in humans. In addition, the frequencies of false positive and false negative predictive findings are considered and statistically analysed. The results clearly demonstrate the superiority of adjuvant tests (complete Freund's adjuvant) in determining skin sensitizers and the overall superiority of the guinea-pig maximization test in providing results similar to those obtained by human testing. A procedure is suggested for utilizing adjuvant and non-adjuvant test methods for characterizing compounds as of weak, moderate or strong sensitizing potential.
Rapid analysis of controlled substances using desorption electrospray ionization mass spectrometry.
Rodriguez-Cruz, Sandra E
2006-01-01
The recently developed technique of desorption electrospray ionization (DESI) has been applied to the rapid analysis of controlled substances. Experiments have been performed using a commercial ThermoFinnigan LCQ Advantage MAX ion-trap mass spectrometer with limited modifications. Results from the ambient sampling of licit and illicit tablets demonstrate the ability of the DESI technique to detect the main active ingredient(s) or controlled substance(s), even in the presence of other higher-concentration components. Full-scan mass spectrometry data provide preliminary identification by molecular weight determination, while rapid analysis using the tandem mass spectrometry (MS/MS) mode provides fragmentation data which, when compared to the laboratory-generated ESI-MS/MS spectral library, provide structural information and final identification of the active ingredient(s). The consecutive analysis of tablets containing different active components indicates there is no cross-contamination or interference from tablet to tablet, demonstrating the reliability of the DESI technique for rapid sampling (one tablet/min or better). Active ingredients have been detected for tablets in which the active component represents less than 1% of the total tablet weight, demonstrating the sensitivity of the technique. The real-time sampling of cannabis plant material is also presented.
Comparison of the efficiency control of mycotoxins by some optical immune biosensors
NASA Astrophysics Data System (ADS)
Slyshyk, N. F.; Starodub, N. F.
2013-11-01
The efficiency of patulin control was compared for optical biosensors based on surface plasmon resonance (SPR) and on nano-porous silicon (nPS). In the latter case, the intensity of the immune reaction was registered by measuring the level of chemiluminescence (ChL) or the photocurrent of the nPS. The sensitivity of mycotoxin determination by the first type of immune biosensor was 0.05-10 mg/L. Approximately the same sensitivity, as well as overall analysis time, was demonstrated by the immune biosensor based on nPS. Nevertheless, the latter type of biosensor was simpler in technical terms and the cost of analysis was lower. The nPS-based immune biosensor is therefore recommended for wide screening applications, and the SPR-based biosensor for additional control or verification of preliminary results. In this article special attention was given to the conditions of sample preparation for analysis, in particular mycotoxin extraction from potato and some juices. Moreover, the efficiency of the above-mentioned immune biosensors was compared with the traditional ELISA method of mycotoxin determination. From the investigation and discussion of the obtained data, it was concluded that both types of immune biosensor are able to fulfil the demands of modern practice with respect to the sensitivity, rapidity, simplicity and low cost of analysis.
Finite Element Model Calibration Approach for Ares I-X
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Buehrle, Ralph D.; Templeton, Justin D.; Gaspar, James L.; Lazor, Daniel R.; Parks, Russell A.; Bartolotta, Paul A.
2010-01-01
Ares I-X is a pathfinder vehicle concept under development by NASA to demonstrate a new class of launch vehicles. Although this vehicle is essentially a shell of what the Ares I vehicle will be, efforts are underway to model and calibrate the analytical models before its maiden flight. Work reported in this document will summarize the model calibration approach used including uncertainty quantification of vehicle responses and the use of non-conventional boundary conditions during component testing. Since finite element modeling is the primary modeling tool, the calibration process uses these models, often developed by different groups, to assess model deficiencies and to update parameters to reconcile test with predictions. Data for two major component tests and the flight vehicle are presented along with the calibration results. For calibration, sensitivity analysis is conducted using Analysis of Variance (ANOVA). To reduce the computational burden associated with ANOVA calculations, response surface models are used in lieu of computationally intensive finite element solutions. From the sensitivity studies, parameter importance is assessed as a function of frequency. In addition, the work presents an approach to evaluate the probability that a parameter set exists to reconcile test with analysis. Comparisons of pretest predictions of frequency response uncertainty bounds with measured data, results from the variance-based sensitivity analysis, and results from component test models with calibrated boundary stiffness models are all presented.
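A minimal sketch of the ANOVA-style variance decomposition used in such sensitivity studies is shown below (Python); the surrogate frequency function and the three coded stiffness parameters are hypothetical stand-ins for a response-surface model fit to finite element results, not the calibration models used in the report.

```python
import itertools
import numpy as np

# Hypothetical surrogate standing in for an expensive finite element frequency
# prediction: three coded boundary-stiffness parameters in [-1, 1].
def surrogate_freq(k1, k2, k3):
    return 12.0 + 1.8 * k1 + 0.4 * k2 + 0.1 * k3 + 0.3 * k1 * k2

# Two-level full-factorial design (2**3 runs) over the coded parameters.
design = np.array(list(itertools.product([-1.0, 1.0], repeat=3)))
y = np.array([surrogate_freq(*row) for row in design])

# Main-effect ANOVA for an orthogonal two-level design:
# effect_i = mean(y | x_i = +1) - mean(y | x_i = -1)
n = len(y)
ss_total = np.sum((y - y.mean()) ** 2)
for i, name in enumerate(["k1", "k2", "k3"]):
    effect = y[design[:, i] > 0].mean() - y[design[:, i] < 0].mean()
    ss_i = n * (effect / 2.0) ** 2          # sum of squares attributed to factor i
    print(f"{name}: effect={effect:+.2f}, share of variance={ss_i / ss_total:.2%}")
```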
Sensitive Analysis of Protein Adsorption to Colloidal Gold by Differential Centrifugal Sedimentation
2017-01-01
It is demonstrated that the adsorption of bovine serum albumin (BSA) to aqueous gold colloids can be quantified with molecular resolution by differential centrifugal sedimentation (DCS). This method separates colloidal particles of comparable density by mass. When proteins adsorb to the nanoparticles, both their mass and their effective density change, which strongly affects the sedimentation time. A straightforward analysis allows quantification of the adsorbed layer. Most importantly, unlike many other methods, DCS can be used to detect chemisorbed proteins (“hard corona”) as well as physisorbed proteins (“soft corona”). The results for BSA on gold colloid nanoparticles can be modeled in terms of Langmuir-type adsorption isotherms (Hill model). The effects of surface modification with small thiol-PEG ligands on protein adsorption are also demonstrated. PMID:28513153
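A minimal sketch of fitting a Hill-type (cooperative Langmuir) isotherm to adsorption data is shown below (Python, SciPy); the concentrations and coverage values are hypothetical placeholders, not the BSA/gold data from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hill (cooperative Langmuir-type) adsorption isotherm: fractional coverage
# as a function of free protein concentration c. theta_max, K and n are fit parameters.
def hill(c, theta_max, K, n):
    return theta_max * c**n / (K**n + c**n)

# Hypothetical data: BSA concentration (mg/mL) vs. adsorbed amount inferred
# from the DCS sedimentation-time shift (arbitrary units).
c = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])
adsorbed = np.array([0.05, 0.14, 0.38, 0.70, 0.92, 0.99, 1.00])

popt, pcov = curve_fit(hill, c, adsorbed, p0=[1.0, 0.3, 1.0])
theta_max, K, n = popt
print(f"theta_max={theta_max:.2f}, K={K:.2f} mg/mL, Hill coefficient n={n:.2f}")
```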
PDB-NMA of a protein homodimer reproduces distinct experimental motility asymmetry.
Tirion, Monique M; Ben-Avraham, Daniel
2018-01-16
We have extended our analytically derived PDB-NMA formulation, Atomic Torsional Modal Analysis or ATMAN (Tirion and ben-Avraham 2015 Phys. Rev. E 91 032712), to include protein dimers using mixed internal and Cartesian coordinates. A test case on a 1.3 Å resolution model of a small homodimer, ActVA-ORF6, consisting of two 112-residue subunits identically folded in a compact 50 Å sphere, reproduces the distinct experimental Debye-Waller motility asymmetry for the two chains, demonstrating that structure sensitively selects vibrational signatures. The vibrational analysis of this PDB entry, together with biochemical and crystallographic data, demonstrates the cooperative nature of the dimeric interaction of the two subunits and suggests a mechanical model for subunit interconversion during the catalytic cycle.
Gold nanoparticle-based optical microfluidic sensors for analysis of environmental pollutants.
Lafleur, Josiane P; Senkbeil, Silja; Jensen, Thomas G; Kutter, Jörg P
2012-11-21
Conventional methods of environmental analysis can be significantly improved by the development of portable microscale technologies for direct in-field sensing at remote locations. This report demonstrates the vast potential of gold nanoparticle-based microfluidic sensors for the rapid, in-field, detection of two important classes of environmental contaminants - heavy metals and pesticides. Using gold nanoparticle-based microfluidic sensors linked to a simple digital camera as the detector, detection limits as low as 0.6 μg L(-1) and 16 μg L(-1) could be obtained for the heavy metal mercury and the dithiocarbamate pesticide ziram, respectively. These results demonstrate that the attractive optical properties of gold nanoparticle probes combine synergistically with the inherent qualities of microfluidic platforms to offer simple, portable and sensitive sensors for environmental contaminants.
The impact of missing trauma data on predicting massive transfusion
Trickey, Amber W.; Fox, Erin E.; del Junco, Deborah J.; Ning, Jing; Holcomb, John B.; Brasel, Karen J.; Cohen, Mitchell J.; Schreiber, Martin A.; Bulger, Eileen M.; Phelan, Herb A.; Alarcon, Louis H.; Myers, John G.; Muskat, Peter; Cotton, Bryan A.; Wade, Charles E.; Rahbar, Mohammad H.
2013-01-01
INTRODUCTION Missing data are inherent in clinical research and may be especially problematic for trauma studies. This study describes a sensitivity analysis to evaluate the impact of missing data on clinical risk prediction algorithms. Three blood transfusion prediction models were evaluated utilizing an observational trauma dataset with valid missing data. METHODS The PRospective Observational Multi-center Major Trauma Transfusion (PROMMTT) study included patients requiring ≥ 1 unit of red blood cells (RBC) at 10 participating U.S. Level I trauma centers from July 2009 – October 2010. Physiologic, laboratory, and treatment data were collected prospectively up to 24h after hospital admission. Subjects who received ≥ 10 RBC units within 24h of admission were classified as massive transfusion (MT) patients. Correct classification percentages for three MT prediction models were evaluated using complete case analysis and multiple imputation. A sensitivity analysis for missing data was conducted to determine the upper and lower bounds for correct classification percentages. RESULTS PROMMTT enrolled 1,245 subjects. MT was received by 297 patients (24%). Missing data percentages ranged from 2.2% (heart rate) to 45% (respiratory rate). Proportions of complete cases utilized in the MT prediction models ranged from 41% to 88%. All models demonstrated similar correct classification percentages using complete case analysis and multiple imputation. In the sensitivity analysis, correct classification upper-lower bound ranges per model were 4%, 10%, and 12%. Predictive accuracy for all models using PROMMTT data was lower than reported in the original datasets. CONCLUSIONS Evaluating the accuracy of clinical prediction models with missing data can be misleading, especially with many predictor variables and moderate levels of missingness per variable. The proposed sensitivity analysis describes the influence of missing data on risk prediction algorithms. Reporting upper/lower bounds for percent correct classification may be more informative than multiple imputation, which provided similar results to complete case analysis in this study. PMID:23778514
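The bounding idea behind this kind of missing-data sensitivity analysis can be sketched as follows (Python); the cohort size, prediction accuracy, and missingness rate are hypothetical, and the bounds simply assume every subject with missing inputs was classified correctly (upper bound) or incorrectly (lower bound).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical cohort: mt = massive transfusion outcome, pred = model's binary
# prediction, some predictions unavailable because an input variable is missing.
n = 1000
mt = rng.random(n) < 0.24
pred = np.where(rng.random(n) < 0.8, mt, ~mt)          # imperfect predictions
missing = rng.random(n) < 0.30                          # 30% missing inputs

correct = (pred == mt)

# Complete-case analysis: drop subjects with missing inputs.
cc = correct[~missing].mean()

# Sensitivity-analysis bounds: assume every missing case was classified
# correctly (upper bound) or incorrectly (lower bound).
upper = (correct & ~missing).sum() + missing.sum()
lower = (correct & ~missing).sum()
print(f"complete-case accuracy: {cc:.1%}")
print(f"bounds with missing data: [{lower / n:.1%}, {upper / n:.1%}]")
```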
Sensitivity Analysis in Engineering
NASA Technical Reports Server (NTRS)
Adelman, Howard M. (Compiler); Haftka, Raphael T. (Compiler)
1987-01-01
The symposium proceedings presented focused primarily on sensitivity analysis of structural response. However, the first session, entitled, General and Multidisciplinary Sensitivity, focused on areas such as physics, chemistry, controls, and aerodynamics. The other four sessions were concerned with the sensitivity of structural systems modeled by finite elements. Session 2 dealt with Static Sensitivity Analysis and Applications; Session 3 with Eigenproblem Sensitivity Methods; Session 4 with Transient Sensitivity Analysis; and Session 5 with Shape Sensitivity Analysis.
Accelerated Insertion of Materials - Composites
2001-08-28
[Report excerpt, recoverable only as fragments: topic bullets on damage tolerance, repair, validation of analysis methodology, fatigue, static and acoustic testing, and configuration details; sensitivity items covering fatigue, adhesion, damage tolerance, and all critical modes and environments; products listed as material specifications and B-basis design allowables; AIM-C, DARPA Workshop, Annapolis, August 27-28, 2001; requalification of polymer/composite parts after material changes.]
Janisse, Kevyn; Doucet, Stéphanie M.
2017-01-01
Perceptual models of animal vision have greatly contributed to our understanding of animal-animal and plant-animal communication. The receptor-noise model of color contrasts has been central to this research as it quantifies the difference between two colors for any visual system of interest. However, if the properties of the visual system are unknown, assumptions regarding parameter values must be made, generally with unknown consequences. In this study, we conduct a sensitivity analysis of the receptor-noise model using avian visual system parameters to systematically investigate the influence of variation in light environment, photoreceptor sensitivities, photoreceptor densities, and light transmission properties of the ocular media and the oil droplets. We calculated the chromatic contrast of 15 plumage patches to quantify a dichromatism score for 70 species of Galliformes, a group of birds that display a wide range of sexual dimorphism. We found that the photoreceptor densities and the wavelength of maximum sensitivity of the short-wavelength-sensitive photoreceptor 1 (SWS1) can change dichromatism scores by 50% to 100%. In contrast, the light environment, transmission properties of the oil droplets, transmission properties of the ocular media, and the peak sensitivities of the cone photoreceptors had a smaller impact on the scores. By investigating the effect of varying two or more parameters simultaneously, we further demonstrate that improper parameterization could lead to differences between calculated and actual contrasts of more than 650%. Our findings demonstrate that improper parameterization of tetrachromatic visual models can have very large effects on measures of dichromatism scores, potentially leading to erroneous inferences. We urge more complete characterization of avian retinal properties and recommend that researchers either determine whether their species of interest possess an ultraviolet or near-ultraviolet sensitive SWS1 photoreceptor, or present models for both. PMID:28076391
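A sketch of the receptor-noise-limited chromatic contrast calculation that such analyses perturb is given below (Python), following the commonly used tetrachromatic form of Vorobyev and Osorio (1998); the quantum catches, Weber fraction, and relative receptor densities are hypothetical, and the density-based noise scaling shown is one common convention rather than the exact parameterization used in the study.

```python
import numpy as np

# Receptor-noise-limited chromatic contrast for a tetrachromat
# (form following Vorobyev & Osorio 1998). qa, qb are quantum catches of the
# four cone classes for two colours; e are per-receptor noise terms, here
# derived from a Weber fraction w and relative receptor densities dens
# (noise assumed to scale with 1/sqrt(relative density) -- an assumption).
def delta_s(qa, qb, dens, w=0.1):
    e = w / np.sqrt(dens / dens.max())
    df = np.log(qa / qb)                     # receptor signals (log quantum-catch ratios)
    e1, e2, e3, e4 = e
    f1, f2, f3, f4 = df
    num = ((e1*e2)**2*(f4-f3)**2 + (e1*e3)**2*(f4-f2)**2 + (e1*e4)**2*(f3-f2)**2 +
           (e2*e3)**2*(f4-f1)**2 + (e2*e4)**2*(f3-f1)**2 + (e3*e4)**2*(f2-f1)**2)
    den = (e1*e2*e3)**2 + (e1*e2*e4)**2 + (e1*e3*e4)**2 + (e2*e3*e4)**2
    return np.sqrt(num / den)                # contrast in just-noticeable-difference units

# One-at-a-time sensitivity check: perturb the SWS1 (UV/violet) relative density.
qa = np.array([0.8, 1.2, 1.5, 1.1])          # hypothetical quantum catches, plumage patch A
qb = np.array([0.6, 1.3, 1.4, 1.2])          # hypothetical quantum catches, plumage patch B
for sws1_density in (1.0, 2.0, 4.0):
    dens = np.array([sws1_density, 2.0, 3.0, 3.0])
    print(sws1_density, round(delta_s(qa, qb, dens), 3))
```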
Yang, Xiupei; Huo, Feng; Yuan, Hongyan; Zhang, Bo; Xiao, Dan; Choi, Martin M F
2011-01-01
This paper reports the enhancement of detection sensitivity for an in-column fiber optic-induced fluorescence detection system in CE by a tapered optical fiber (TOF). Two types of optical fiber, TOF and conventional cylindrical optical fiber (COF), were employed to construct CE systems (TOF-CE and COF-CE), which were compared for sensitivity to riboflavin (RF). The fluorescence intensities from a RF sample with excitation light sources and fibers at various coupling angles were investigated. The fluorescence signal from TOF-CE was ca. ten times that of COF-CE. In addition, the detection performance of four excitation light source-fiber configurations, including Laser-TOF, Laser-COF, LED-TOF, and LED-COF, was compared. The LODs for RF were 0.21, 0.82, 0.80, and 7.5 nM, respectively, for the four excitation light source-fiber configurations. The results demonstrate that the sensitivity obtained by LED-TOF is close to that of Laser-COF. Both Laser-TOF and LED-TOF can greatly improve the sensitivity of detection in CE. TOF has the major attribute of collecting and focusing the excitation light intensity. Thus, the sensitivity obtained by LED-TOF without a focusing lens is essentially the same as that of LED-COF with a focusing lens. This demonstrates that the CE system can be further simplified by eliminating the focusing lens for excitation light. The LED-TOF-CE and LED-COF-CE systems were applied to the separation and determination of RF in a real sample (green tea). The tapered fiber optic-induced fluorescence detection system in CE is an ideal tool for trace analysis. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Pool, Jan J. M.; van Tulder, Maurits W.; Riphagen, Ingrid I.; de Vet, Henrica C. W.
2006-01-01
Clinical provocative tests of the neck, which position the neck and arm in order to aggravate or relieve arm symptoms, are commonly used in clinical practice in patients with a suspected cervical radiculopathy. Their diagnostic accuracy, however, has never been examined in a systematic review. A comprehensive search was conducted in order to identify all possible studies fulfilling the inclusion criteria. A study was included if: (1) any provocative test of the neck for diagnosing cervical radiculopathy was identified; (2) any reference standard was used; (3) sensitivity and specificity were reported or could be (re-)calculated; and, (4) the publication was a full report. Two reviewers independently selected studies, and assessed methodological quality. Only six studies met the inclusion criteria, which evaluated five provocative tests. In general, Spurling's test demonstrated low to moderate sensitivity and high specificity, as did traction/neck distraction, and Valsalva's maneuver. The upper limb tension test (ULTT) demonstrated high sensitivity and low specificity, while the shoulder abduction test demonstrated low to moderate sensitivity and moderate to high specificity. Common methodological flaws included lack of an optimal reference standard, disease progression bias, spectrum bias, and review bias. Limitations include few primary studies, substantial heterogeneity, and numerous methodological flaws among the studies; therefore, a meta-analysis was not conducted. This review suggests that, when consistent with the history and other physical findings, a positive Spurling's, traction/neck distraction, and Valsalva's might be indicative of a cervical radiculopathy, while a negative ULTT might be used to rule it out. However, the lack of evidence precludes any firm conclusions regarding their diagnostic value, especially when used in primary care. More high quality studies are necessary in order to resolve this issue. PMID:17013656
FLUXNET to MODIS: Connecting the dots to capture heterogeneous biosphere metabolism
NASA Astrophysics Data System (ADS)
Woods, K. D.; Schwalm, C.; Huntzinger, D. N.; Massey, R.; Poulter, B.; Kolb, T.
2015-12-01
Eddy covariance flux towers provide our most widely distributed network of direct observations for land-atmosphere carbon exchange. Carbon flux sensitivity analysis is a method that uses in situ networks to understand how ecosystems respond to changes in climatic variables. Flux towers concurrently observe key ecosystem metabolic processes (e.g., gross primary productivity) and micrometeorological variation, but only over small footprints. Remotely sensed vegetation indices from MODIS offer continuous observations of the vegetated land surface, but are less direct, as they are based on light use efficiency algorithms rather than on-the-ground observations. The marriage of these two data products offers an opportunity to validate remotely sensed indices with in situ observations and translate information derived from tower sites to globally gridded products. Here we provide correlations between the Enhanced Vegetation Index (EVI), Leaf Area Index (LAI) and MODIS gross primary production with FLUXNET-derived estimates of gross primary production, respiration and net ecosystem exchange. We demonstrate remotely sensed vegetation products which have been transformed to gridded estimates of terrestrial biosphere metabolism on a regional-to-global scale. We demonstrate anomalies in gross primary production, respiration, and net ecosystem exchange as predicted by both MODIS-carbon flux sensitivities and meteorological driver-carbon flux sensitivities. We apply these sensitivities to recent extreme climatic events and demonstrate both our ability to capture changes in biosphere metabolism and differences in the calculation of carbon flux anomalies based on method. The quantification of co-variation in these two methods of observation is important, as it informs both how remotely sensed vegetation indices are correlated with on-the-ground tower observations and with what certainty we can expand these observations and relationships.
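A simplified sketch of what a "carbon flux sensitivity" can look like computationally is given below (Python): sensitivities are taken as least-squares slopes of GPP anomalies against EVI and temperature anomalies and then applied to a hypothetical extreme month. The series and slopes are synthetic, and the study's actual methodology is not reproduced here.

```python
import numpy as np

# Hypothetical monthly series at one flux-tower pixel: tower GPP, MODIS EVI,
# and air temperature. "Sensitivities" here are simply least-squares slopes of
# GPP anomalies against each driver's anomalies.
rng = np.random.default_rng(2)
months = 120
evi = 0.4 + 0.15 * np.sin(2 * np.pi * np.arange(months) / 12) + 0.02 * rng.standard_normal(months)
temp = 10 + 12 * np.sin(2 * np.pi * np.arange(months) / 12) + 1.5 * rng.standard_normal(months)
gpp = 3.0 + 9.0 * (evi - 0.4) + 0.12 * (temp - 10) + 0.3 * rng.standard_normal(months)

def anomaly(x):
    # remove the mean seasonal cycle (month-of-year climatology)
    clim = np.array([x[m::12].mean() for m in range(12)])
    return x - np.tile(clim, months // 12)

g, e, t = anomaly(gpp), anomaly(evi), anomaly(temp)
sens_evi = np.polyfit(e, g, 1)[0]    # GPP anomaly per unit EVI anomaly
sens_temp = np.polyfit(t, g, 1)[0]   # GPP anomaly per degree of temperature anomaly
print(f"dGPP/dEVI ~ {sens_evi:.1f}, dGPP/dT ~ {sens_temp:.2f}")

# Predicted GPP anomaly during a hypothetical extreme month (EVI down, 3 degrees warmer):
print("predicted anomaly:", sens_evi * (-0.05) + sens_temp * 3.0)
```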
NASA Astrophysics Data System (ADS)
Sahoo, Amaresh Kumar; Sharma, Shilpa; Chattopadhyay, Arun; Ghosh, Siddhartha Sankar
2012-02-01
Rapid, simple and sensitive detection of bacterial contamination is critical for safeguarding public health and the environment. Herein, we report an easy method of detection as well as enumeration of the bacterial cell number on the basis of fluorescence quenching of a non-antibacterial fluorescent nanocomposite, consisting of paracetamol dimer (PD) and Au nanoparticles (NPs), in the presence of bacteria. The composite was synthesized by reaction of paracetamol (p-hydroxyacetanilide) with HAuCl4. The Au NPs of the composite were characterized using UV-Vis spectroscopy, transmission electron microscopy (TEM), X-ray diffraction and selected area electron diffraction analysis. The paracetamol dimer in the composite showed emission peak at 435 nm when excited at 320 nm. The method successfully detected six bacterial strains with a sensitivity of 100 CFU mL-1. The Gram-positive and Gram-negative bacteria quenched the fluorescence of the composite differently, making it possible to distinguish between the two. The TEM analysis showed interaction of the composite with bacteria without any apparent damage to the bacteria. The chi-square test established the accuracy of the method. Quick, non-specific and highly sensitive detection of bacteria over a broad range of logarithmic dilutions within a short span of time demonstrates the potential of this method as an alternative to conventional methods. Electronic supplementary information (ESI) available. See DOI: 10.1039/c2nr11837h
Jarujamrus, Purim; Meelapsom, Rattapol; Pencharee, Somkid; Obma, Apinya; Amatatongchai, Maliwan; Ditcharoen, Nadh; Chairam, Sanoe; Tamuang, Suparb
2018-01-01
A smartphone application, called CAnal, was developed as a colorimetric analyzer in paper-based devices for sensitive and selective determination of mercury(II) in water samples. Measurements on a double-layer microfluidic paper-based analytical device (μPAD), fabricated by an alkyl ketene dimer (AKD) inkjet printing technique with a special design and doped with unmodified silver nanoparticles (AgNPs) on the detection zones, were performed by monitoring the gray intensity in the blue channel of the AgNPs, which disintegrate when exposed to mercury(II) on the μPAD. Under the optimized conditions, the developed approach showed high sensitivity, a low limit of detection (0.003 mg L(-1), calculated as 3SD of the blank/slope of the calibration curve), small sample volume uptake (2 x 2 μL), and short analysis time. The linear range of this technique was 0.01 to 10 mg L(-1) (r(2) = 0.993). Furthermore, practical analysis of various water samples was also demonstrated to have acceptable performance, in agreement with data from cold vapor atomic absorption spectrophotometry (CV-AAS), a conventional method. The proposed technique allows for rapid, simple (instant report of the final mercury(II) concentration in water samples via the smartphone display), sensitive, selective, and on-site analysis with high sample throughput (48 samples h(-1), n = 3) of trace mercury(II) in water samples, which is suitable for end users who are unskilled in analyzing mercury(II) in water samples.
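The detection-limit criterion quoted above (3SD of the blank divided by the calibration slope) can be sketched as a short calculation (Python); the calibration points and blank readings below are hypothetical placeholders, not the paper's data.

```python
import numpy as np

# Calibration of gray-intensity response vs. mercury(II) concentration and the
# LOD = 3*SD(blank)/slope criterion. All values are hypothetical placeholders.
conc = np.array([0.01, 0.05, 0.1, 0.5, 1.0, 5.0, 10.0])            # mg/L
signal = np.array([2.1, 9.8, 20.5, 101.0, 198.0, 1005.0, 1990.0])  # delta gray intensity (a.u.)

slope, intercept = np.polyfit(conc, signal, 1)
blank_sd = np.std([0.9, 1.3, 1.1, 0.8, 1.2, 1.0], ddof=1)          # replicate blank readings

lod = 3.0 * blank_sd / slope
print(f"slope={slope:.1f} a.u. per mg/L, LOD={lod:.4f} mg/L")
```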
Classification of Phase Transitions by Microcanonical Inflection-Point Analysis
NASA Astrophysics Data System (ADS)
Qi, Kai; Bachmann, Michael
2018-05-01
By means of the principle of minimal sensitivity we generalize the microcanonical inflection-point analysis method by probing derivatives of the microcanonical entropy for signals of transitions in complex systems. A strategy of systematically identifying and locating independent and dependent phase transitions of any order is proposed. The power of the generalized method is demonstrated in applications to the ferromagnetic Ising model and a coarse-grained model for polymer adsorption onto a substrate. The results shed new light on the intrinsic phase structure of systems with cooperative behavior.
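The basic numerical step behind microcanonical inflection-point analysis, differentiating the entropy S(E) and locating sign changes in its curvature, can be sketched as follows (Python); the entropy curve is a hypothetical toy, and the paper's least-sensitivity generalization to higher-order derivatives is not reproduced.

```python
import numpy as np

# Toy microcanonical entropy S(E) with a weak transition signal: an inflection
# point of S(E), i.e. an extremum of beta(E) = dS/dE, marks a transition region.
E = np.linspace(-2.0, 2.0, 2001)
S = 2.0 * E - 0.4 * np.tanh(4.0 * E) + 0.05 * E**2      # hypothetical smooth entropy curve

beta = np.gradient(S, E)          # first derivative dS/dE (inverse microcanonical temperature)
gamma = np.gradient(beta, E)      # second derivative d2S/dE2

# Inflection points of S appear as extrema of beta, i.e. zero crossings of gamma.
crossings = np.where(np.diff(np.sign(gamma)) != 0)[0]
print("candidate transition energies:", E[crossings])
```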
DOSY Analysis of Micromolar Analytes: Resolving Dilute Mixtures by SABRE Hyperpolarization.
Reile, Indrek; Aspers, Ruud L E G; Tyburn, Jean-Max; Kempf, James G; Feiters, Martin C; Rutjes, Floris P J T; Tessari, Marco
2017-07-24
DOSY is an NMR spectroscopy technique that resolves resonances according to the analytes' diffusion coefficients. It has found use in correlating NMR signals and estimating the number of components in mixtures. Applications of DOSY in dilute mixtures are, however, held back by excessively long measurement times. We demonstrate herein, how the enhanced NMR sensitivity provided by SABRE hyperpolarization allows DOSY analysis of low-micromolar mixtures, thus reducing the concentration requirements by at least 100-fold. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Kiseleva, Irina; Larionova, Natalie; Fedorova, Ekaterina; Bazhenova, Ekaterina; Dubrovina, Irina; Isakova-Sivak, Irina; Rudenko, Larisa
2014-01-01
Live attenuated influenza vaccines (LAIV) represent reassortant viruses with hemagglutinin (HA) and neuraminidase (NA) gene segments inherited from circulating wild-type (WT) parental influenza viruses recommended for inclusion into the seasonal vaccine formulation, and the 6 internal protein-encoding gene segments from cold-adapted attenuated master donor viruses (genome composition 6:2). In this study, we describe the obstacles in developing LAIV strains while taking into account the phenotypic peculiarities of WT viruses used for reassortment. Genomic composition analysis of 849 seasonal LAIV reassortants revealed that over 80% of reassortants based on inhibitor-resistant WT viruses inherited WT NA, compared to 26% of LAIV reassortants based on inhibitor-sensitive WT viruses. In addition, the highest percentage of LAIV genotype reassortants was achieved when WT parental viruses were resistant to non-specific serum inhibitors. We demonstrate that NA may play a role in influenza virus sensitivity to non-specific serum inhibitors. Replacing the NA of an inhibitor-sensitive WT virus with the NA of an inhibitor-resistant master donor virus significantly decreased the sensitivity of the resulting reassortant virus to serum heat-stable inhibitors. PMID:25132869
Zhu, Shan; Pang, Fufei; Huang, Sujuan; Zou, Fang; Dong, Yanhua; Wang, Tingyun
2015-06-01
Atomic layer deposition (ALD) technology is introduced to fabricate a high-sensitivity refractive index sensor based on an adiabatic tapered optical fiber. Different thicknesses of Al2O3 nanofilm are coated around the fiber taper precisely and uniformly under different deposition cycles. Owing to the high refractive index of the Al2O3 nanofilm, an asymmetric Fabry-Perot-like interferometer is constructed along the fiber taper. Based on ray-optic analysis, total internal reflection happens at the nanofilm-surrounding interface. As the ambient refractive index changes, the phase delay induced by the Goos-Hänchen shift changes; correspondingly, the transmission resonant spectrum shifts, which can be utilized for realizing a high-sensitivity sensor. A high sensitivity of 6008 nm/RIU is demonstrated by depositing 3000 layers of Al2O3 nanofilm when the ambient refractive index is close to 1.33. This high-sensitivity refractive index sensor is expected to have wide applications in biochemical sensing.
Suh, Chong Hyun; Choi, Young Jun; Baek, Jung Hwan; Lee, Jeong Hyun
2017-01-01
To evaluate the diagnostic performance of shear wave elastography for malignant cervical lymph nodes, we searched the Ovid-MEDLINE and EMBASE databases for published studies regarding the use of shear wave elastography for diagnosing malignant cervical lymph nodes. The diagnostic performance of shear wave elastography was assessed using bivariate modelling and hierarchical summary receiver operating characteristic modelling. Meta-regression analysis and subgroup analysis according to acoustic radiation force impulse imaging (ARFI) and Supersonic shear imaging (SSI) were also performed. Eight eligible studies, which included a total sample size of 481 patients with 647 cervical lymph nodes, were included. Shear wave elastography showed a summary sensitivity of 81 % (95 % CI: 72-88 %) and specificity of 85 % (95 % CI: 70-93 %). The results of meta-regression analysis revealed that the prevalence of malignant lymph nodes was a significant factor affecting study heterogeneity (p < .01). According to the subgroup analysis, the summary estimates of sensitivity and specificity did not differ between ARFI and SSI (p = .93). Shear wave elastography is an acceptable imaging modality for diagnosing malignant cervical lymph nodes. We believe that both ARFI and SSI may have a complementary role for diagnosing malignant cervical lymph nodes. • Shear wave elastography is an acceptable modality for diagnosing malignant cervical lymph nodes. • Shear wave elastography demonstrated a summary sensitivity of 81 % and specificity of 85 %. • ARFI and SSI have complementary roles for diagnosing malignant cervical lymph nodes.
Hioki, Yusaku; Tanimura, Ritsuko; Iwamoto, Shinichi; Tanaka, Koichi
2014-03-04
Nanoflow liquid chromatography (nano-LC) is an essential technique for highly sensitive analysis of complex biological samples, and matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) is advantageous for rapid identification of proteins and in-depth analysis of post-translational modifications (PTMs). A combination of nano-LC and MALDI-MS (nano-LC/MALDI-MS) is useful for highly sensitive and detailed analysis in life sciences. However, the existing system does not fully utilize the advantages of each technique, especially in the interface of eluate transfer from nano-LC to a MALDI plate. To effectively combine nano-LC with MALDI-MS, we integrated a nano-LC column and a deposition probe for the first time (column probe) and incorporated it into a nano-LC/MALDI-MS system. Spotting nanoliter eluate droplets directly from the column onto the MALDI plate prevents postcolumn diffusion and preserves the chromatographic resolution. A DHB prespotted plate was prepared to suit the fabricated column probe to concentrate the droplets of nano-LC eluate. The performance of the advanced nano-LC/MALDI-MS system was substantiated by analyzing protein digests. When the system was coupled with multidimensional liquid chromatography (MDLC), trace amounts of glycopeptides that spiked into complex samples were successfully detected. Thus, a nano-LC/MALDI-MS direct-spotting system that eliminates postcolumn diffusion was constructed, and the efficacy of the system was demonstrated through highly sensitive analysis of the protein digests or spiked glycopeptides.
Skariyachan, Sinosh; Acharya, Archana B; Subramaniyan, Saumya; Babu, Sumangala; Kulkarni, Shruthi; Narayanappa, Rajeswari
2016-09-01
The current study explores the therapeutic potential of metabolites extracted from marine sponge (Cliona sp.)-associated bacteria against MDR pathogens and predicts the binding prospects of probable lead molecules against the VP40 target of Ebola virus. The metabolite-producing bacteria were characterized by agar overlay assay and as per the protocols in Bergey's Manual of Determinative Bacteriology. The antibacterial activities of extracted metabolites were tested against clinical pathogens by well-diffusion assay. The selected metabolite producers were characterized by 16S rDNA sequencing. Chemical screening and Fourier transform infrared (FTIR) analysis for selected compounds were performed. The probable lead molecules present in the metabolites were hypothesized based on proximate analysis, FTIR data, and a literature survey. The drug-like properties and binding potential of lead molecules against the VP40 target of Ebola virus were hypothesized by computational virtual screening and molecular docking. The current study demonstrated clear zones around bacterial colonies in the agar overlay assay. Antibiotic sensitivity profiling demonstrated that the clinical isolates were multi-drug resistant; however, most of them showed sensitivity to the secondary metabolites (MIC 15 μl/well). The proximate and FTIR analyses suggested that the probable metabolites belonged to alkaloids with O-H, C-H, C=O, and N-H groups. 16S rDNA characterization of the selected metabolite producers demonstrated 96% and 99% sequence identity to Comamonas testosteroni and Citrobacter freundii, respectively. The docking studies suggested that molecules such as Gymnastatin, Sorbicillactone, Marizomib, and Daryamide can be designed as probable lead candidates against the VP40 target of Ebola virus.
Regier, Nicole; Baerlocher, Loïc; Münsterkötter, Martin; Farinelli, Laurent; Cosio, Claudia
2013-08-06
Toxic metals polluting aquatic ecosystems are taken up by inhabitants and accumulate in the food web, affecting species at all trophic levels. It is therefore important to have good tools to assess the level of risk represented by toxic metals in the environment. Macrophytes are potential organisms for the identification of metal-responsive biomarkers but are still underrepresented in ecotoxicology. In the present study, we used next-generation sequencing to investigate the transcriptomic response of Elodea nuttallii exposed to enhanced concentrations of Hg and Cd. We de novo assembled more than 60 000 contigs, of which we found 170 to be regulated dose-dependently by Hg and 212 by Cd. Functional analysis showed that these genes were notably related to energy and metal homeostasis. Expression analysis using nCounter of a subset of genes showed that the gene expression pattern was able to assess toxic metal exposure in complex environmental samples and was more sensitive than other end points (e.g., bioaccumulation, photosynthesis, etc.). In conclusion, we demonstrate the feasibility of using gene expression signatures for the assessment of environmental contamination, using an organism without previous genetic information. This is of interest to ecotoxicology in a wider sense given the possibility to develop specific and sensitive bioassays.
Pd/Ag coated fiber Bragg grating sensor for hydrogen monitoring in power transformers.
Ma, G M; Jiang, J; Li, C R; Song, H T; Luo, Y T; Wang, H B
2015-04-01
Compared with the conventional DGA (dissolved gas analysis) method for on-line monitoring of power transformers, an FBG (fiber Bragg grating) hydrogen sensor offers marked advantages, including immunity to electromagnetic fields, time savings, and convenient defect location. Thus, a novel FBG hydrogen sensor based on a Pd/Ag (palladium/silver) and polyimide composite film to measure dissolved hydrogen concentration in large power transformers is proposed in this article. With the help of the Pd/Ag composite coating, enhanced mechanical strength and sensitivity are demonstrated; moreover, the influence of oil temperature on response time and sensitivity is handled with correction lines. Sensitivity measurement and temperature calibration of the hydrogen sensor were carried out in the laboratory. Experimental results show a high sensitivity of 0.055 pm/(μl/l) with a response time of about 0.4 h under the typical operating temperature of power transformers, which demonstrates its potential for monitoring the health status of power transformers by detecting the dissolved hydrogen concentration.
Chen, Qinghua; Raghavan, Prashant; Mukherjee, Sugoto; Jameson, Mark J; Patrie, James; Xin, Wenjun; Xian, Junfang; Wang, Zhenchang; Levine, Paul A; Wintermark, Max
2015-10-01
The aim of this study was to systematically compare a comprehensive array of magnetic resonance (MR) imaging features in terms of their sensitivity and specificity for diagnosing cervical lymph node metastases in patients with thyroid cancer. The study included 41 patients with thyroid malignancy who underwent surgical excision of cervical lymph nodes and had preoperative MR imaging ≤4 weeks prior to surgery. Three head and neck neuroradiologists independently evaluated all the MR images. Using the pathology results as reference, the sensitivity, specificity and interobserver agreement of each MR imaging characteristic were calculated. On multivariate analysis, no single imaging feature was significantly correlated with metastasis. In general, imaging features demonstrated high specificity, but poor sensitivity and moderate interobserver agreement at best. Commonly used MR imaging features have limited sensitivity for correctly identifying cervical lymph node metastases in patients with thyroid cancer. A negative neck MR scan should not dissuade a surgeon from performing a neck dissection in patients with thyroid carcinomas.
NASA Astrophysics Data System (ADS)
Starodub, N. F.; Ogorodniichuk, J.; Lebedeva, T.; Shpylovyy, P.
2013-11-01
In this work we have designed highly specific biosensors for Salmonella typhimurium detection based on surface plasmon resonance (SPR) and total internal reflection ellipsometry (TIRE). High selectivity and sensitivity of analysis have been demonstrated. The Spreeta (USA) and "Plasmonotest" (Ukraine) SPR devices with flow cells were applied as the registering part of our experiments. Previous research confirmed the efficiency of using SPR biosensors for detecting specific antigen-antibody interactions; therefore, this type of reaction, with appropriate preparation of the surface binding layer, was used as the reactive part. With the Spreeta device, the sensitivity was on the level of 10^3-10^7 cells/ml. The other SPR-based biosensor showed a sensitivity within 10^1-10^6 cells/ml. The maximal sensitivity, on the level of several cells (fewer than 5) in 10 ml, was obtained using the biosensor based on TIRE.
Kobayashi, Tsuneo
2018-03-01
Diagnosis using a specific tumor marker is difficult because the sensitivity of this detection method is under 20%. Herein, a tumor marker combination assay, combining a growth-related tumor marker and an associated tumor marker (Cancer, 73(7), 1994), was employed. This double-blind tumor marker combination assay (TMCA) showed 87.5% sensitivity, but a low specificity, ranging from 30 to 76%. To overcome this low specificity, we exploited complex markers, a multivariate analysis and serum fractionation by biochemical biopsy. Thus, in this study, a combination of new techniques was used to re-evaluate these serum samples. Three serum panels, containing 90, 120, and 97 samples, were obtained from the Mayo Clinic. The final results showed 80-90% sensitivity, 84-85% specificity, and 83-88% accuracy. We demonstrated a notable tumor marker combination assay with high accuracy. This TMCA should be applicable to primary cancer detection and recurrence prevention. © 2018 The Author. Cancer Medicine published by John Wiley & Sons Ltd.
Zhang, Wei; Jin, Xin; Li, Heng; Zhang, Run-Run; Wu, Cheng-Wei
2018-04-15
Hydrogels based on chitosan/hyaluronic acid/β-sodium glycerophosphate demonstrate injectability, body-temperature sensitivity, pH-sensitive drug release and adhesion to cancer cells. The drug (doxorubicin)-loaded hydrogel precursor solutions are injectable and turn into hydrogels when the temperature is increased to body temperature. Acidic conditions (pH 4.00) can trigger the release of the drug, and cancer cells (HeLa) can adhere to the surface of the hydrogels, which will be beneficial for tumor site-specific administration of the drug. The mechanical strength, the gelation temperature, and the drug release behavior can be tuned by varying the hyaluronic acid content. The mechanisms were characterized using dynamic mechanical analysis, Fourier transform infrared spectroscopy, scanning electron microscopy and fluorescence microscopy. The carboxyl groups in hyaluronic acid can form hydrogen bonds with the protonated amines in chitosan, which promotes the increase in mechanical strength of the hydrogels and depresses the initial burst release of drug from the hydrogel. Copyright © 2018 Elsevier Ltd. All rights reserved.
Han, Xin-Yu; Yang, Huang; Rao, Shi-Tao; Liu, Guang-Yu; Hu, Meng-Jun; Zeng, Bin-Chang; Cao, Min-Jie; Liu, Guang-Ming
2018-03-21
The Maillard reaction was employed to reduce the sensitization of tropomyosin (TM) and arginine kinase (AK) from Scylla paramamosain, and the mechanism of the attenuated sensitization was investigated. In the present study, the Maillard reaction conditions were optimized as heating at 100 °C for 60 min (pH 8.5) with arabinose. A lower level of allergenicity in mice was shown by the levels of allergen-specific antibodies and by the production of more Th1 and fewer Th2 cytokines and associated transcription factors with the Maillard-reacted allergen (mAllergen). The tolerance potency in mice was demonstrated by the increased ratio of Th1/Th2 cytokines. Moreover, mass spectrometry analysis showed that some key amino acids of the IgE-binding epitopes (K112, R125, R133 of TM; K33, K118, R202 of AK) were modified by the Maillard reaction. The Maillard reaction with arabinose reduced the sensitization of TM and AK, which may be due to the masked epitopes.
Low level detection of Cs-135 and Cs-137 in environmental samples by ICP-MS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liezers, Martin; Farmer, Orville T.; Thomas, Linda MP
2009-10-01
The measurement of the fission product cesium isotopes 135Cs and 137Cs at low femtogram (fg, 10^-15 g) levels in ground water by Inductively Coupled Plasma-Mass Spectrometry (ICP-MS) is reported. To eliminate the potential natural barium isobaric interference on the cesium isotopes, in-line chromatographic separation of the cesium from barium was performed, followed by high-sensitivity ICP-MS analysis. A high-efficiency desolvating nebulizer system was employed to maximize ICP-MS sensitivity (~10 cps/femtogram). The three-sigma detection limit measured for 135Cs was 2 fg/ml (0.1 μBq/ml) and for 137Cs 0.9 fg/ml (0.0027 Bq/ml), with an analysis time of less than 30 minutes/sample. Cesium detection and 135/137 isotope ratio measurement at very low femtogram levels using this method in a ground water matrix is also demonstrated.
NASA Astrophysics Data System (ADS)
Hewer, Micah J.; Gough, William A.
2016-11-01
Based on a case study of the Toronto Zoo (Canada), multivariate regression analysis, involving both climatic and social variables, was employed to assess the relationship between daily weather and visitation. Zoo visitation was most sensitive to weather variability during the shoulder season, followed by the off-season and, then, the peak season. Temperature was the most influential weather variable in relation to zoo visitation, followed by precipitation and, then, wind speed. The intensity and direction of the social and climatic variables varied between seasons. Temperatures exceeding 26 °C during the shoulder season and 28 °C during the peak season suggested a behavioural threshold associated with zoo visitation, with conditions becoming too warm for certain segments of the zoo visitor market, causing visitor numbers to decline. Even light amounts of precipitation caused average visitor numbers to decline by nearly 50 %. Increasing wind speeds also demonstrated a negative influence on zoo visitation.
Economic assessments of small-scale drinking-water interventions in pursuit of MDG target 7C.
Cameron, John; Jagals, Paul; Hunter, Paul R; Pedley, Steve; Pond, Katherine
2011-12-01
This paper uses an applied rural case study of a safer water intervention in South Africa to illustrate how three levels of economic assessment can be used to understand the impact of the intervention on people's well-being. It is set in the context of Millennium Development Goal 7 which sets a target (7C) for safe drinking-water provision and the challenges of reaching people in remote rural areas with relatively small-scale schemes. The assessment moves from cost efficiency to cost effectiveness to a full social cost-benefit analysis (SCBA) with an associated sensitivity test. In addition to demonstrating techniques of analysis, the paper brings out many of the challenges in understanding how safer drinking-water impacts on people's livelihoods. The SCBA shows the case study intervention is justified economically, though the sensitivity test suggests 'downside' vulnerability. Copyright © 2011 Elsevier B.V. All rights reserved.
Bringing gender sensitivity into healthcare practice: a systematic review.
Celik, Halime; Lagro-Janssen, Toine A L M; Widdershoven, Guy G A M; Abma, Tineke A
2011-08-01
Despite the body of literature on gender dimensions and disparities between the sexes in health, practical improvements will not be realized effectively as long as we lack an overview of the ways how to implement these ideas. This systematic review provides a content analysis of literature on the implementation of gender sensitivity in health care. Literature was identified from CINAHL, PsycINFO, Medline, EBSCO and Cochrane (1998-2008) and the reference lists of relevant articles. The quality and relevance of 752 articles were assessed and finally 11 original studies were included. Our results demonstrate that the implementation of gender sensitivity includes tailoring opportunities and barriers related to the professional, organizational and the policy level. As gender disparities are embedded in healthcare, a multiple track approach to implement gender sensitivity is needed to change gendered healthcare systems. Conventional approaches, taking into account one barrier and/or opportunity, fail to prevent gender inequality in health care. For gender-sensitive health care we need to change systems and structures, but also to enhance understanding, raise awareness and develop skills among health professionals. To bring gender sensitivity into healthcare practice, interventions should address a range of factors. Copyright © 2010. Published by Elsevier Ireland Ltd.
PharmacoGx: an R package for analysis of large pharmacogenomic datasets.
Smirnov, Petr; Safikhani, Zhaleh; El-Hachem, Nehme; Wang, Dong; She, Adrian; Olsen, Catharina; Freeman, Mark; Selby, Heather; Gendoo, Deena M A; Grossmann, Patrick; Beck, Andrew H; Aerts, Hugo J W L; Lupien, Mathieu; Goldenberg, Anna; Haibe-Kains, Benjamin
2016-04-15
Pharmacogenomics holds great promise for the development of biomarkers of drug response and the design of new therapeutic options, which are key challenges in precision medicine. However, such data are scattered and lack standards for efficient access and analysis, consequently preventing the realization of the full potential of pharmacogenomics. To address these issues, we implemented PharmacoGx, an easy-to-use, open source package for integrative analysis of multiple pharmacogenomic datasets. We demonstrate the utility of our package in comparing large drug sensitivity datasets, such as the Genomics of Drug Sensitivity in Cancer and the Cancer Cell Line Encyclopedia. Moreover, we show how to use our package to easily perform Connectivity Map analysis. With increasing availability of drug-related data, our package will open new avenues of research for meta-analysis of pharmacogenomic data. PharmacoGx is implemented in R and can be easily installed on any system. The package is available from CRAN and its source code is available from GitHub. bhaibeka@uhnresearch.ca or benjamin.haibe.kains@utoronto.ca Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Integrated on-chip derivatization and electrophoresis for the rapid analysis of biogenic amines.
Beard, Nigel P; Edel, Joshua B; deMello, Andrew J
2004-07-01
We demonstrate the monolithic integration of a chemical reactor with a capillary electrophoresis device for the rapid and sensitive analysis of biogenic amines. Fluorescein isothiocyanate (FITC) is widely employed for the analysis of amino-group containing analytes. However, the slow reaction kinetics hinders the use of this dye for on-chip labeling applications. Other alternatives are available such as o-phthaldehyde (OPA), however, the inferior photophysical properties and the UV lambdamax present difficulties when using common excitation sources leading to a disparity in sensitivity. Consequently, we present for the first time the use of dichlorotriazine fluorescein (DTAF) as a superior in situ derivatizing agent for biogenic amines in microfluidic devices. The developed microdevice employs both hydrodynamic and electroosmotic flow, facilitating the creation of a polymeric microchip to perform both precolumn derivatization and electrophoretic analysis. The favorable photophysical properties of the DTAF and its fast reaction kinetics provide detection limits down to 1 nM and total analysis times (including on-chip mixing and reaction) of <60 s. The detection limits are two orders of magnitude lower than current limits obtained with both FITC and OPA. The optimized microdevice is also employed to probe biogenic amines in real samples.
Treatment strategies for pelvic organ prolapse: a cost-effectiveness analysis.
Hullfish, Kathie L; Trowbridge, Elisa R; Stukenborg, George J
2011-05-01
To compare the relative cost-effectiveness of treatment decision alternatives for post-hysterectomy pelvic organ prolapse (POP). A Markov decision analysis model was used to assess and compare the relative cost-effectiveness of expectant management, use of a pessary, and surgery for obtaining months of quality-adjusted life over 1 year. Sensitivity analysis was conducted to determine whether the results depended on specific estimates of patient utilities for pessary use, probabilities for complications and other events, and estimated costs. Only two treatment alternatives were found to be efficient choices: initial pessary use and vaginal reconstructive surgery (VRS). Pessary use (including patients who eventually transitioned to surgery) achieved 10.4 quality-adjusted months, at a cost of $10,000 per patient, while VRS obtained 11.4 quality-adjusted months, at $15,000 per patient. Sensitivity analysis demonstrated that these baseline results depended on several key estimates in the model. This analysis indicates that pessary use and VRS are the most cost-effective treatment alternatives for treating post-hysterectomy vaginal prolapse. Additional research is needed to standardize POP outcomes and complications, so that healthcare providers can best utilize cost information in balancing the risks and benefits of their treatment decisions.
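As a rough illustration of how a Markov-style cost-utility comparison accumulates quality-adjusted months and costs over a one-year horizon, the Python sketch below uses entirely hypothetical monthly utilities, costs, and failure probabilities; it is not the published model and will not reproduce the reported figures.

    # Minimal sketch of a two-strategy cost-utility comparison over 12 monthly
    # cycles. All utilities, costs, and transition probabilities are hypothetical.
    def strategy_outcome(monthly_utility, monthly_cost, p_fail, fail_utility, fail_cost, months=12):
        """Expected quality-adjusted months and cost for a simple two-state chain:
        'treated' patients may transition each month to a 'failure/retreatment' state."""
        p_treated = 1.0
        qam = cost = 0.0
        for _ in range(months):
            qam += p_treated * monthly_utility + (1 - p_treated) * fail_utility
            cost += p_treated * monthly_cost + (1 - p_treated) * fail_cost
            p_treated *= (1 - p_fail)  # fraction remaining in the treated state
        return qam, cost

    pessary = strategy_outcome(0.87, 80, 0.02, 0.80, 1200)
    surgery = strategy_outcome(0.95, 1250, 0.005, 0.75, 2000)
    for name, (qam, cost) in {"pessary": pessary, "VRS": surgery}.items():
        print(f"{name}: {qam:.1f} quality-adjusted months, ${cost:,.0f}")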
NASA Astrophysics Data System (ADS)
Chen, Long; Wang, Yue; Liu, Nenrong; Lin, Duo; Weng, Cuncheng; Zhang, Jixue; Zhu, Lihuan; Chen, Weisheng; Chen, Rong; Feng, Shangyuan
2013-06-01
The diagnostic capability of using tissue-intrinsic micro-Raman signals to obtain biochemical information from human esophageal tissue is presented in this paper. Near-infrared micro-Raman spectroscopy combined with multivariate analysis was applied for discrimination of esophageal cancer tissue from normal tissue samples. Micro-Raman spectroscopy measurements were performed on 54 esophageal cancer tissues and 55 normal tissues in the 400-1750 cm⁻¹ range. The mean Raman spectra showed significant differences between the two groups. Tentative assignments of the Raman bands in the measured tissue spectra suggested some changes in protein structure, a decrease in the relative amount of lactose, and increases in the percentages of tryptophan, collagen and phenylalanine content in esophageal cancer tissue compared with normal tissue. The diagnostic algorithms based on principal component analysis (PCA) and linear discriminant analysis (LDA) achieved a diagnostic sensitivity of 87.0% and specificity of 70.9% for separating cancer from normal esophageal tissue samples. The results demonstrated that near-infrared micro-Raman spectroscopy combined with PCA-LDA analysis could be an effective and sensitive tool for identification of esophageal cancer.
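The diagnostic pipeline described above can be sketched as a PCA-to-LDA chain. The Python example below (scikit-learn) runs on synthetic "spectra" with an injected class difference; the sample sizes echo the study, but the data, band positions, and resulting scores are illustrative only.

    # Sketch of a PCA-LDA diagnostic pipeline on synthetic spectra.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_predict
    from sklearn.metrics import confusion_matrix

    rng = np.random.default_rng(0)
    X = rng.normal(size=(109, 500))        # 109 synthetic spectra x 500 Raman shifts
    y = np.array([1] * 54 + [0] * 55)      # 1 = cancer, 0 = normal (hypothetical labels)
    X[y == 1, 100:110] += 0.5              # inject a weak class difference

    model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
    y_pred = cross_val_predict(model, X, y, cv=5)
    tn, fp, fn, tp = confusion_matrix(y, y_pred).ravel()
    print(f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")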
Hofmann, Matthias J.; Koelsch, Patrick
2015-01-01
Vibrational sum-frequency generation (SFG) spectroscopy has become an established technique for in situ surface analysis. While spectral recording procedures and hardware have been optimized, unique data analysis routines have yet to be established. The SFG intensity is related to probing geometries and properties of the system under investigation, such as the absolute square of the second-order susceptibility, |χ(2)|². A conventional SFG intensity measurement does not grant access to the complex parts of χ(2) unless further assumptions have been made. It is therefore difficult, sometimes impossible, to establish a unique fitting solution for SFG intensity spectra. Recently, interferometric phase-sensitive SFG or heterodyne detection methods have been introduced to measure real and imaginary parts of χ(2) experimentally. Here, we demonstrate that iterative phase-matching between complex spectra retrieved from maximum entropy method analysis and fitting of intensity SFG spectra (iMEMfit) leads to a unique solution for the complex parts of χ(2) and enables quantitative analysis of SFG intensity spectra. A comparison between complex parts retrieved by iMEMfit applied to intensity spectra and phase sensitive experimental data shows excellent agreement between the two methods. PMID:26450297
Sato, Harumi; Higashi, Noboru; Ikehata, Akifumi; Koide, Noriko; Ozaki, Yukihiro
2007-07-01
The aim of the present study is to propose a totally new technique for the utilization of far-ultraviolet (UV) spectroscopy in polymer thin film analysis. Far-UV spectra in the 120-300 nm region have been measured in situ for six kinds of commercial polymer wrap films by use of a novel type of far-UV spectrometer that does not need vacuum evaporation. These films can be straightforwardly classified into three groups, polyethylene (PE) films, polyvinyl chloride (PVC) films, and polyvinylidene chloride (PVDC) films, by using the raw spectra. The differences in the wavelength of the absorption band due to the sigma-sigma* transition of the C-C bond have been used for the classification of the six kinds of films. Using this method, it was easy to distinguish the three kinds of PE films and to separate the two kinds of PVDC films. Compared with other spectroscopic methods, the advantages of this technique include nondestructive analysis, easy spectral measurement, high sensitivity, and simple spectral analysis. The present study has demonstrated that far-UV spectroscopy is a very promising technique for polymer film analysis.
Meng, X Wei; Koh, Brian D; Zhang, Jin-San; Flatten, Karen S; Schneider, Paula A; Billadeau, Daniel D; Hess, Allan D; Smith, B Douglas; Karp, Judith E; Kaufmann, Scott H
2014-07-25
Recombinant human tumor necrosis factor-α-related apoptosis-inducing ligand (TRAIL), agonistic monoclonal antibodies to TRAIL receptors, and small-molecule TRAIL receptor agonists are in various stages of preclinical and early-phase clinical testing as potential anticancer drugs. Accordingly, there is substantial interest in understanding factors that affect sensitivity to these agents. In the present study we observed that the poly(ADP-ribose) polymerase (PARP) inhibitors olaparib and veliparib sensitize the myeloid leukemia cell lines ML-1 and K562, the ovarian cancer line PEO1, the non-small cell lung cancer line A549, and a majority of clinical AML isolates, but not normal marrow, to TRAIL. Further analysis demonstrated that PARP inhibitor treatment results in activation of the FAS and TNFRSF10B (death receptor 5 (DR5)) promoters, increased Fas and DR5 mRNA, and elevated cell surface expression of these receptors in sensitized cells. Chromatin immunoprecipitation demonstrated enhanced binding of the transcription factor Sp1 to the TNFRSF10B promoter in the presence of PARP inhibitor. Knockdown of PARP1 or PARP2 (but not PARP3 or PARP4) not only increased expression of Fas and DR5 at the mRNA and protein level, but also recapitulated the sensitizing effects of PARP inhibition. Conversely, Sp1 knockdown diminished the PARP inhibitor effects. In view of the fact that TRAIL is part of the armamentarium of natural killer cells, these observations identify a new facet of PARP inhibitor action while simultaneously providing the mechanistic underpinnings of a novel therapeutic combination that warrants further investigation.
NASA Astrophysics Data System (ADS)
Ishii, Yoshitaka; Wickramasinghe, Ayesha; Matsuda, Isamu; Endo, Yuki; Ishii, Yuji; Nishiyama, Yusuke; Nemoto, Takahiro; Kamihara, Takayuki
2018-01-01
Proton-detected solid-state NMR (SSNMR) spectroscopy has attracted much attention due to its excellent sensitivity and effectiveness in the analysis of trace amounts of amyloid proteins and other important biological systems. In this perspective article, we present the recent sensitivity limit of 1H-detected SSNMR using "ultra-fast" magic-angle spinning (MAS) at a spinning rate (νR) of 80-100 kHz. It was demonstrated that the high sensitivity of 1H-detected SSNMR at νR of 100 kHz and fast recycling using the paramagnetic-assisted condensed data collection (PACC) approach permitted "super-fast" collection of 1H-detected 2D protein SSNMR. A 1H-detected 2D 1H-15N correlation SSNMR spectrum for ∼27 nmol of a uniformly 13C- and 15N-labeled GB1 protein sample in microcrystalline form was acquired in only 9 s with 50% non-uniform sampling and short recycle delays of 100 ms. Additional data suggests that it is now feasible to detect as little as 1 nmol of the protein in 5.9 h by 1H-detected 2D 1H-15N SSNMR at a nominal signal-to-noise ratio of five. The demonstrated sensitivity is comparable to that of modern solution protein NMR. Moreover, this article summarizes the influence of ultra-fast MAS and 1H-detection on the spectral resolution and sensitivity of protein SSNMR. Recent progress in signal assignment and structural elucidation by 1H-detected protein SSNMR is outlined with both theoretical and experimental aspects.
Du, Jiangbing; He, Zuyuan
2013-11-04
In this work, highly sensitive measurements of strain and temperature are demonstrated using a fiber Bragg grating (FBG) sensor whose sensitivity is significantly enhanced by all-optical signal processing. The sensitivity enhancement is achieved by degenerate four-wave mixing (FWM) for frequency chirp magnification (FCM), which magnifies the wavelength drift of the FBG sensor induced by strain and temperature changes. Highly sensitive measurements of static strain and temperature are experimentally demonstrated, with a strain sensitivity of 5.36 pm/με and a temperature sensitivity of 54.09 pm/°C. The sensitivity is enhanced by a factor of five based on fourth-order FWM in a highly nonlinear fiber (HNLF).
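Taking the quoted enhancement factor at face value, the reported numbers are consistent with a simple multiplicative picture of chirp magnification: if the FWM stage magnifies any FBG wavelength drift by a factor M, the measured sensitivities are M times the intrinsic ones. Back-calculating from the abstract's own figures (notation added here):

    S_\varepsilon \approx \frac{5.36\ \text{pm}/\mu\varepsilon}{5} \approx 1.07\ \text{pm}/\mu\varepsilon,
    \qquad
    S_T \approx \frac{54.09\ \text{pm}/^{\circ}\mathrm{C}}{5} \approx 10.8\ \text{pm}/^{\circ}\mathrm{C},

values typical of a bare FBG near 1550 nm, which is consistent with the stated factor-of-five magnification.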
Wang, Zhen; Kwok, Kevin W H; Lui, Gilbert C S; Zhou, Guang-Jie; Lee, Jae-Seong; Lam, Michael H W; Leung, Kenneth M Y
2014-06-01
Due to a lack of saltwater toxicity data in tropical regions, toxicity data generated from temperate or cold-water species endemic to North America and Europe are often adopted to derive water quality guidelines (WQG) for protecting tropical saltwater species. If chemical toxicity to most saltwater organisms increases with water temperature, the use of temperate species data and associated WQG may result in under-protection of tropical species. Given the differences in species composition and environmental attributes between tropical and temperate saltwater ecosystems, there are conceivable uncertainties in such 'temperate-to-tropic' extrapolations. This study aims to compare temperate and tropical saltwater species' acute sensitivity to 11 chemicals through a comprehensive meta-analysis, by comparing species sensitivity distributions (SSDs) between the two groups. A 10th-percentile hazardous concentration (HC10) is derived from each SSD, and then a temperate-to-tropic HC10 ratio is computed for each chemical. Our results demonstrate that temperate and tropical saltwater species display significantly different sensitivity towards all test chemicals except cadmium, although such differences are small, with HC10 ratios ranging only from 0.094 (un-ionised ammonia) to 2.190 (pentachlorophenol). Temperate species are more sensitive to un-ionised ammonia, chromium, lead, nickel and tributyltin, whereas tropical species are more sensitive to copper, mercury, zinc, phenol and pentachlorophenol. Through comparison of a limited number of taxon-specific SSDs, we observe that there is a general decline in chemical sensitivity from algae to crustaceans, molluscs and then fishes. Following a statistical analysis of the results, we recommend an extrapolation factor of two for deriving tropical WQG from temperate information. Copyright © 2013 Elsevier Ltd. All rights reserved.
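The HC10 computation described above can be sketched by fitting a log-normal species sensitivity distribution to each group's acute toxicity values and taking its 10th percentile. In the Python sketch below the toxicity values, units, and resulting ratio are hypothetical; only the procedure mirrors the abstract.

    # Sketch: log-normal SSD fit and 10th-percentile hazardous concentration (HC10).
    import numpy as np
    from scipy import stats

    def hc10(toxicity_values):
        """HC10 from a log-normal SSD fitted to acute toxicity values."""
        log_vals = np.log10(toxicity_values)
        mu, sigma = log_vals.mean(), log_vals.std(ddof=1)
        return 10 ** stats.norm.ppf(0.10, loc=mu, scale=sigma)

    temperate = np.array([12.0, 35.0, 8.0, 90.0, 22.0, 50.0])   # µg/L, hypothetical
    tropical  = np.array([20.0, 28.0, 15.0, 60.0, 40.0, 75.0])  # µg/L, hypothetical

    ratio = hc10(temperate) / hc10(tropical)
    print(f"HC10 temperate = {hc10(temperate):.1f}, tropical = {hc10(tropical):.1f}, ratio = {ratio:.2f}")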
Biased and less sensitive: A gamified approach to delay discounting in heroin addiction.
Scherbaum, Stefan; Haber, Paul; Morley, Kirsten; Underhill, Dylan; Moustafa, Ahmed A
2018-03-01
People with addiction will continue to use drugs despite adverse long-term consequences. We hypothesized (a) that this deficit persists during substitution treatment, and (b) that this deficit might be related not only to a desire for immediate gratification, but also to a lower sensitivity to optimal decision making. We investigated how individuals with a history of heroin addiction perform (compared to healthy controls) in a virtual reality delay discounting task. This novel task adds to established measures of delay discounting an assessment of the optimality of decisions, in particular the extent to which decisions are influenced by a general choice bias and/or a reduced sensitivity to the relative value of the two alternative rewards. We used this measure of optimality to apply diffusion model analysis to the behavioral data and to analyze the interaction between decision optimality and reaction time. The addiction group consisted of 25 patients with a history of heroin dependency currently participating in a methadone maintenance program; the control group consisted of 25 healthy participants with no history of substance abuse, who were recruited from the Western Sydney community. The patient group demonstrated greater levels of delay discounting compared to the control group, which is broadly in line with previous observations. Diffusion model analysis yielded a reduced sensitivity to the optimality of a decision in the patient group compared to the control group. This reduced sensitivity was reflected in lower rates of information accumulation and higher decision criteria. Increased discounting in individuals with heroin addiction is related not only to a generally increased bias toward immediate gratification, but also to reduced sensitivity to the optimality of a decision. This finding is in line with other findings about the sensitivity of addicts in distinguishing optimal from nonoptimal choice options.
Milton, Alyssa C; Ellis, Louise A; Davenport, Tracey A; Burns, Jane M; Hickie, Ian B
2017-09-26
Web-based self-report surveying has increased in popularity, as it can rapidly yield large samples at a low cost. Despite this increase in popularity, in the area of youth mental health, there is a distinct lack of research comparing the results of Web-based self-report surveys with the more traditional and widely accepted computer-assisted telephone interviewing (CATI). The Second Australian Young and Well National Survey 2014 sought to compare differences in respondent response patterns using matched items on CATI versus a Web-based self-report survey. The aim of this study was to examine whether responses varied as a result of item sensitivity, that is, the item's susceptibility to exaggeration or underreporting, and to assess whether certain subgroups demonstrated this effect to a greater extent. A subsample of young people aged 16 to 25 years (N=101), recruited through the Second Australian Young and Well National Survey 2014, completed the identical items on two occasions: via CATI and via Web-based self-report survey. Respondents also rated perceived item sensitivity. When comparing CATI with the Web-based self-report survey, a Wilcoxon signed-rank analysis showed that respondents answered 14 of the 42 matched items in a significantly different way. Significant variation in responses (CATI vs Web-based) was more frequent if the item was also rated by the respondents as highly sensitive in nature. Specifically, 63% (5/8) of the high-sensitivity items, 43% (3/7) of the neutral-sensitivity items, and 0% (0/4) of the low-sensitivity items were answered in a significantly different manner by respondents when comparing their matched CATI and Web-based question responses. The items that were perceived as highly sensitive by respondents and demonstrated response variability included the following: sexting activities, body image concerns, experience of diagnosis, and suicidal ideation. For high-sensitivity items, a regression analysis showed respondents who were male (beta=-.19, P=.048) or who were not in employment, education, or training (NEET; beta=-.32, P=.001) were significantly more likely to provide different responses on matched items when responding in the CATI as compared with the Web-based self-report survey. The Web-based self-report survey, however, demonstrated some evidence of avidity and attrition bias. Compared with CATI, Web-based self-report surveys are highly cost-effective and had higher rates of self-disclosure on sensitive items, particularly for respondents who identify as male and NEET. A drawback to Web-based surveying methodologies, however, includes the limited control over avidity bias and the greater incidence of attrition bias. These findings have important implications for further development of survey methods in the area of health and well-being, especially when considering research topics (in this case diagnosis, suicidal ideation, sexting, and body image) and groups that are being recruited (young people, males, and NEET). © Alyssa C Milton, Louise A Ellis, Tracey A Davenport, Jane M Burns, Ian B Hickie. Originally published in JMIR Mental Health (http://mental.jmir.org), 26.09.2017.
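For reference, the matched-item comparison described above amounts to a paired, nonparametric test per item. A minimal Python sketch with scipy follows (the Likert-style responses are hypothetical, not the study data):

    # Sketch: Wilcoxon signed-rank test on matched CATI vs Web responses for one item.
    import numpy as np
    from scipy.stats import wilcoxon

    cati = np.array([3, 2, 4, 1, 5, 2, 3, 4, 2, 3])   # hypothetical responses, CATI mode
    web  = np.array([4, 2, 5, 2, 5, 3, 3, 5, 3, 3])   # same respondents, Web mode

    stat, p = wilcoxon(cati, web)                      # zero differences are dropped by default
    print(f"Wilcoxon statistic = {stat}, p = {p:.3f}")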
Osterhoff, Georg; O'Hara, Nathan N; D'Cruz, Jennifer; Sprague, Sheila A; Bansback, Nick; Evaniew, Nathan; Slobogean, Gerard P
2017-03-01
There is ongoing debate regarding the optimal surgical treatment of complex proximal humeral fractures in elderly patients. To evaluate the cost-effectiveness of reverse total shoulder arthroplasty (RTSA) compared with hemiarthroplasty (HA) in the management of complex proximal humeral fractures, using a cost-utility analysis. On the basis of data from published literature, a cost-utility analysis was conducted using decision tree and Markov modeling. A single-payer perspective, with a willingness-to-pay (WTP) threshold of Can$50,000 (Canadian dollars), and a lifetime time horizon were used. The incremental cost-effectiveness ratio (ICER) was used as the study's primary outcome measure. In comparison with HA, the incremental cost per quality-adjusted life-year gained for RTSA was Can$13,679. One-way sensitivity analysis revealed the model to be sensitive to the RTSA implant cost and the RTSA procedural cost. The ICER of Can$13,679 is well below the WTP threshold of Can$50,000, and probabilistic sensitivity analysis demonstrated that 92.6% of model simulations favored RTSA. Our economic analysis found that RTSA for the treatment of complex proximal humeral fractures in the elderly is the preferred economic strategy when compared with HA. The ICER of RTSA is well below standard WTP thresholds, and its estimate of cost-effectiveness is similar to other highly successful orthopedic strategies such as total hip arthroplasty for the treatment of hip arthritis. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
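For reference, the decision rule underlying such an analysis is the standard incremental cost-effectiveness ratio compared against the willingness-to-pay threshold (notation added here; the component costs and utilities are not given in the abstract):

    \text{ICER} = \frac{C_{\text{RTSA}} - C_{\text{HA}}}{E_{\text{RTSA}} - E_{\text{HA}}},
    \qquad
    \text{favor RTSA if } \text{ICER} < \lambda_{\text{WTP}} = \text{Can\$}50{,}000\ \text{per QALY}.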
A Quantitative Approach to Scar Analysis
Khorasani, Hooman; Zheng, Zhong; Nguyen, Calvin; Zara, Janette; Zhang, Xinli; Wang, Joyce; Ting, Kang; Soo, Chia
2011-01-01
Analysis of collagen architecture is essential to wound healing research. However, to date no consistent methodologies exist for quantitatively assessing dermal collagen architecture in scars. In this study, we developed a standardized approach for quantitative analysis of scar collagen morphology by confocal microscopy using fractal dimension and lacunarity analysis. Full-thickness wounds were created on adult mice, closed by primary intention, and harvested at 14 days after wounding for morphometrics and standard Fourier transform-based scar analysis as well as fractal dimension and lacunarity analysis. In addition, transmission electron microscopy was used to evaluate collagen ultrastructure. We demonstrated that fractal dimension and lacunarity analysis were superior to Fourier transform analysis in discriminating scar versus unwounded tissue in a wild-type mouse model. To fully test the robustness of this scar analysis approach, a fibromodulin-null mouse model that heals with increased scar was also used. Fractal dimension and lacunarity analysis effectively discriminated unwounded fibromodulin-null versus wild-type skin as well as healing fibromodulin-null versus wild-type wounds, whereas Fourier transform analysis failed to do so. Furthermore, fractal dimension and lacunarity data also correlated well with transmission electron microscopy collagen ultrastructure analysis, adding to their validity. These results demonstrate that fractal dimension and lacunarity are more sensitive than Fourier transform analysis for quantification of scar morphology. PMID:21281794
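Box-counting is a common way to estimate a fractal dimension from a binarized collagen image: count occupied boxes at several box sizes and regress log(count) on log(1/size). The Python sketch below runs on a random synthetic mask; it illustrates the general technique under stated assumptions, not the authors' confocal image pipeline.

    # Sketch: box-counting fractal dimension of a binary mask.
    import numpy as np

    def box_count(mask, box_size):
        """Number of box_size x box_size boxes containing any foreground pixel."""
        h, w = mask.shape
        count = 0
        for i in range(0, h, box_size):
            for j in range(0, w, box_size):
                if mask[i:i + box_size, j:j + box_size].any():
                    count += 1
        return count

    def fractal_dimension(mask, box_sizes=(2, 4, 8, 16, 32)):
        counts = [box_count(mask, s) for s in box_sizes]
        # count ~ size^(-D), so the slope of log(count) vs log(1/size) estimates D
        slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
        return slope

    rng = np.random.default_rng(1)
    mask = rng.random((256, 256)) > 0.7      # synthetic binary "fiber" mask
    print(f"estimated fractal dimension: {fractal_dimension(mask):.2f}")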
Bauman, Cathy A; Jones-Bitton, Andria; Jansen, Jocelyn; Kelton, David; Menzies, Paula
2016-09-20
The study's objective was to evaluate the ability of fecal culture (FCUL) and fecal PCR (FPCR) to identify dairy goat and dairy sheep shedding Mycobacterium avium ssp. paratuberculosis. A cross-sectional study of the small ruminant populations was performed in Ontario, Canada between October 2010 and August 2011. Twenty-nine dairy goat herds and 21 dairy sheep flocks were visited, and 20 lactating females > two years of age were randomly selected from each farm resulting in 580 goats and 397 sheep participating in the study. Feces were collected per rectum and cultured using the BD BACTEC™ MGIT™ 960 system using a standard (49 days) and an extended (240 days) incubation time, and underwent RT-PCR based on the hsp-X gene (Tetracore®). Statistical analysis was performed using a 2-test latent class Bayesian hierarchical model for each species fitted in WinBUGS. Extending the fecal culture incubation time statistically improved FCUL sensitivity from 23.1 % (95 % PI: 15.9-34.1) to 42.7 % (95 % PI: 33.0-54.5) in dairy goats and from 5.8 % (95 % PI: 2.3-12.4) to 19.0 % (95 % PI: 11.9-28.9) in dairy sheep. FPCR demonstrated statistically higher sensitivity than FCUL (49 day incubation) with a sensitivity of 31.9 % (95 % PI: 22.4-43.1) in goats and 42.6 % (95 % PI: 28.8-63.3) in sheep. Fecal culture demonstrates such low sensitivity at the standard incubation time it cannot be recommended as a screening test to detect shedding of MAP in either goats or sheep. Extending the incubation time resulted in improved sensitivity; however, it is still disappointingly low for screening purposes. Fecal PCR should be the screening test of choice in both species; however, it is important to recognize that control programs should not be based on testing alone when they demonstrate such low sensitivity.
Combined Raman spectroscopy and autofluorescence imaging method for in vivo skin tumor diagnosis
NASA Astrophysics Data System (ADS)
Zakharov, V. P.; Bratchenko, I. A.; Myakinin, O. O.; Artemyev, D. N.; Khristoforova, Y. A.; Kozlov, S. V.; Moryatov, A. A.
2014-09-01
A combined fluorescence and Raman spectroscopy (RS) method for the in vivo detection of malignant human skin tumors is demonstrated. Fluorescence analysis was used to detect abnormalities during fast scanning of large tissue areas. In suspected cases of malignancy, Raman spectral analysis of the tissue was performed to determine the type of neoplasm. A special RS phase method was proposed for in vivo identification of skin tumors, with quadratic discriminant analysis used for tumor-type classification on phase planes. It was shown that the phase method provides a diagnosis of malignant melanoma with a sensitivity of 89% and a specificity of 87%.
Van Dessel, E; Fierens, K; Pattyn, P; Van Nieuwenhove, Y; Berrevoet, F; Troisi, R; Ceelen, W
2009-01-01
Approximately 5%-20% of colorectal cancer (CRC) patients present with synchronous, potentially resectable liver metastatic disease. Preclinical and clinical studies suggest a benefit of the 'liver first' approach, i.e. resection of the liver metastasis followed by resection of the primary tumour. A formal decision analysis may support a rational choice between several therapy options. Survival and morbidity data were retrieved from relevant clinical studies identified by a Web of Science search. Data were entered into decision analysis software (TreeAge Pro 2009, Williamstown, MA, USA). Transition probabilities, including the risk of death from complications or disease progression associated with individual therapy options, were entered into the model. Sensitivity analysis was performed to evaluate the model's validity under a variety of assumptions. The result of the decision analysis confirms the superiority of the 'liver first' approach. Sensitivity analysis demonstrated that this conclusion holds on condition that the mortality associated with performing the hepatectomy first is < 4.5%, and that the mortality of colectomy performed after hepatectomy is < 3.2%. The results of this decision analysis suggest that, in patients with synchronous resectable colorectal liver metastases, the 'liver first' approach is to be preferred. Randomized trials will be needed to confirm the results of this simulation-based analysis.
NASA Astrophysics Data System (ADS)
Hirai, Toshiro; Yoshioka, Yasuo; Izumi, Natsumi; Ichihashi, Ko-Ichi; Handa, Takayuki; Nishijima, Nobuo; Uemura, Eiichiro; Sagami, Ko-Ichi; Takahashi, Hideki; Yamaguchi, Manami; Nagano, Kazuya; Mukai, Yohei; Kamada, Haruhiko; Tsunoda, Shin-Ichi; Ishii, Ken J.; Higashisaka, Kazuma; Tsutsumi, Yasuo
2016-09-01
Many people suffer from metal allergy, and the recently demonstrated presence of naturally occurring metal nanoparticles in our environment could present a new candidate for inducing metal allergy. Here, we show that mice pretreated with silver nanoparticles (nAg) and lipopolysaccharides, but not with the silver ions that are thought to cause allergies, developed allergic inflammation in response to the silver. nAg-induced acquired immune responses depended on CD4+ T cells and elicited IL-17A-mediated inflammation, similar to that observed in human metal allergy. Nickel nanoparticles also caused sensitization in the mice, whereas gold and silica nanoparticles, which are minimally ionizable, did not. Quantitative analysis of the silver distribution suggested that small nAg (≤10 nm) transferred to the draining lymph node and released ions more readily than large nAg (>10 nm). These results suggest that metal nanoparticles served as ion carriers to enable metal sensitization. Our data demonstrate a potentially new trigger for metal allergy.
Predictive factors of the nursing diagnosis sedentary lifestyle in people with high blood pressure.
Guedes, Nirla Gomes; Lopes, Marcos Venícios de Oliveira; Araujo, Thelma Leite de; Moreira, Rafaella Pessoa; Martins, Larissa Castelo Guedes
2011-01-01
To verify the reproducibility of the defining characteristics and related factors used to identify a sedentary lifestyle in patients with high blood pressure. A cross-sectional study of 310 patients diagnosed with high blood pressure. Variables included socio-demographics and variables related to the defining characteristics and related factors of a sedentary lifestyle. The kappa coefficient was used to analyze reproducibility. The sensitivity, specificity, and predictive value of the defining characteristics were also analyzed. Logistic regression was applied in the analysis of possible predictors. The defining characteristic with the greatest sensitivity was 'demonstrates physical deconditioning' (98.92%). The characteristics 'chooses a daily routine lacking physical exercise' and 'verbalizes preference for activities low in physical activity' presented the highest specificity values (99.21% and 95.97%, respectively). The following indicators were identified as powerful predictors (85.2%) of a sedentary lifestyle: 'demonstrates physical deconditioning', 'verbalizes preference for activities low in physical activity', and 'lack of training for accomplishment of physical exercise'. © 2010 Wiley Periodicals, Inc.
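For clarity, the screening measures reported for each defining characteristic follow the standard definitions (notation added here; TP, FP, TN, FN denote true/false positives/negatives against the nursing diagnosis):

    \text{sensitivity} = \frac{TP}{TP + FN},
    \qquad
    \text{specificity} = \frac{TN}{TN + FP},
    \qquad
    \text{PPV} = \frac{TP}{TP + FP}.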
Amadasi, Alberto; Borgonovo, Simone; Brandone, Alberto; Di Giancamillo, Mauro; Cattaneo, Cristina
2014-05-01
The radiological search for gunshot residue (GSR) in burnt material is crucial, although it has rarely been tested. In this study, thirty-one bovine ribs were shot at near-contact range and burnt to calcination in an oven simulating a real combustion. Computed tomography (CT) and magnetic resonance (MR) imaging were performed before and after carbonization and compared with previous digital radiography (DR) analyses, thereby comparing the assistance these radiological methods can provide in the search for GSR in fresh and burnt bone. DR demonstrated the greatest ability to detect metallic residues, CT showed lower ability, and MR showed high sensitivity only in soft tissues. Thus, DR can be considered the most sensitive method for detecting GSR in charred bones, whereas CT and MR demonstrated much less reliability. Nonetheless, MR improves the analysis of gunshot wounds in other types of remains with large quantities of soft tissue. © 2013 American Academy of Forensic Sciences.
NASA Astrophysics Data System (ADS)
Zhao, Ting Ting; Peng, Zhe Wei; Yuan, Dan; Zhen, Shu Jun; Huang, Cheng Zhi; Li, Yuan Fang
2018-03-01
In this contribution, we demonstrate for the first time that a Cu-based metal-organic gel (Cu-MOG) can serve as a novel amplification platform for fluorescence anisotropy (FA) assays, as confirmed by the sensitive detection of a common cancer biomarker, prostate-specific antigen (PSA). The dye-labeled probe aptamer (PA) was adsorbed onto the benzimidazole-derivative-containing Cu-MOG via electrostatic incorporation and strong π-π stacking interactions, which significantly increased the FA value owing to the enlarged molecular volume of the PA/Cu-MOG complex. Upon introduction of the target PSA, the FA value decreased markedly because specific recognition between PSA and PA detached the PA from the MOG surface. The linear range was 0.5-8 ng/mL, with a detection limit of 0.33 ng/mL. Our work thus demonstrates the promising application of MOG materials in the fields of biomolecule analysis and disease diagnosis.
Optimal design of solidification processes
NASA Technical Reports Server (NTRS)
Dantzig, Jonathan A.; Tortorelli, Daniel A.
1991-01-01
An optimal design algorithm is presented for the analysis of general solidification processes, and is demonstrated for the growth of GaAs crystals in a Bridgman furnace. The system is optimal in the sense that the prespecified temperature distribution in the solidifying materials is obtained to maximize product quality. The optimization uses traditional numerical programming techniques which require the evaluation of cost and constraint functions and their sensitivities. The finite element method is incorporated to analyze the crystal solidification problem, evaluate the cost and constraint functions, and compute the sensitivities. These techniques are demonstrated in the crystal growth application by determining an optimal furnace wall temperature distribution to obtain the desired temperature profile in the crystal, and hence to maximize the crystal's quality. Several numerical optimization algorithms are studied to determine the proper convergence criteria, effective 1-D search strategies, appropriate forms of the cost and constraint functions, etc. In particular, we incorporate the conjugate gradient and quasi-Newton methods for unconstrained problems. The efficiency and effectiveness of each algorithm is presented in the example problem.
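The optimization loop described above pairs a cost function with its analytically computed sensitivities and hands both to a gradient-based optimizer. The Python sketch below poses a toy least-squares design problem and compares conjugate-gradient and quasi-Newton (BFGS) solvers; the response matrix, target profile, and problem sizes are hypothetical and unrelated to the actual Bridgman furnace model.

    # Sketch: gradient-based design optimization on a toy inverse problem.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)
    A = rng.normal(size=(20, 5))             # toy "design variables -> profile" response
    t_target = rng.normal(size=20)           # prescribed temperature profile (hypothetical)

    def cost(x):
        r = A @ x - t_target
        return 0.5 * r @ r                   # least-squares mismatch with the target

    def grad(x):
        return A.T @ (A @ x - t_target)      # analytical sensitivity of the cost

    x0 = np.zeros(5)
    for method in ("CG", "BFGS"):            # conjugate gradient vs quasi-Newton
        res = minimize(cost, x0, jac=grad, method=method)
        print(f"{method}: cost = {res.fun:.3e} after {res.nit} iterations")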
A Bacterial Glycoengineered Antigen for Improved Serodiagnosis of Porcine Brucellosis
Cortina, María E.; Balzano, Rodrigo E.; Rey Serantes, Diego A.; Caillava, Ana J.; Elena, Sebastián; Ferreira, A. C.; Nicola, Ana M.; Ugalde, Juan E.
2016-01-01
Brucellosis is a highly zoonotic disease that affects animals and human beings. Brucella suis is the etiological agent of porcine brucellosis and one of the major human brucellosis pathogens. Laboratory diagnosis of porcine brucellosis mainly relies on serological tests, and it has been widely demonstrated that serological assays based on the detection of anti O-polysaccharide antibodies are the most sensitive tests. Here, we validate a recombinant glycoprotein antigen, an N-formylperosamine O-polysaccharide–protein conjugate (OAg-AcrA), for diagnosis of porcine brucellosis. An indirect immunoassay based on the detection of anti-O-polysaccharide IgG antibodies was developed coupling OAg-AcrA to enzyme-linked immunosorbent assay plates (glyco-iELISA). To validate the assay, 563 serum samples obtained from experimentally infected and immunized pigs, as well as animals naturally infected with B. suis biovar 1 or 2, were tested. A receiver operating characteristic (ROC) analysis was performed, and based on this analysis, the optimum cutoff value was 0.56 (relative reactivity), which resulted in a diagnostic sensitivity and specificity of 100% and 99.7%, respectively. A cutoff value of 0.78 resulted in a test sensitivity of 98.4% and a test specificity of 100%. Overall, our results demonstrate that the glyco-iELISA is highly accurate for diagnosis of porcine brucellosis, improving the diagnostic performance of current serological tests. The recombinant glycoprotein OAg-AcrA can be produced in large homogeneous batches in a standardized way, making it an ideal candidate for further validation as a universal antigen for diagnosis of “smooth” brucellosis in animals and humans. PMID:26984975
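The ROC-based cutoff selection described above can be sketched as follows in Python (scikit-learn), using hypothetical relative-reactivity scores rather than the study's sera; the Youden index is one common way to choose the operating point, and the printed values will not match the reported ones.

    # Sketch: ROC analysis and cutoff selection for a serological assay.
    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    rng = np.random.default_rng(3)
    y = np.concatenate([np.ones(50), np.zeros(150)])          # 1 = infected, 0 = non-infected
    scores = np.concatenate([rng.normal(1.2, 0.3, 50),        # infected reactivity (hypothetical)
                             rng.normal(0.3, 0.15, 150)])     # non-infected reactivity (hypothetical)

    fpr, tpr, thresholds = roc_curve(y, scores)
    best = np.argmax(tpr - fpr)                                # Youden index J = sens + spec - 1
    print(f"AUC = {roc_auc_score(y, scores):.3f}")
    print(f"cutoff = {thresholds[best]:.2f} "
          f"(sensitivity {tpr[best]:.2%}, specificity {1 - fpr[best]:.2%})")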
Sensitivity Analysis of Nuclide Importance to One-Group Neutron Cross Sections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sekimoto, Hiroshi; Nemoto, Atsushi; Yoshimura, Yoshikane
The importance of nuclides is useful when investigating nuclide characteristics in a given neutron spectrum. However, it is derived using one-group microscopic cross sections, which may contain large errors or uncertainties. The sensitivity coefficient shows the effect of these errors or uncertainties on the importance. The equations for calculating sensitivity coefficients of importance to one-group nuclear constants are derived using the perturbation method. Numerical values are also evaluated for some important cases for fast and thermal reactor systems. Many characteristics of the sensitivity coefficients are derived from the derived equations and numerical results. The matrix of sensitivity coefficients seems diagonally dominant. However, it is not always satisfied in a detailed structure. The detailed structure of the matrix and the characteristics of coefficients are given. By using the obtained sensitivity coefficients, some demonstration calculations have been performed. The effects of error and uncertainty of nuclear data and of the change of one-group cross-section input caused by fuel design changes through the neutron spectrum are investigated. These calculations show that the sensitivity coefficient is useful when evaluating error or uncertainty of nuclide importance caused by the cross-section data error or uncertainty and when checking effectiveness of fuel cell or core design change for improving neutron economy.
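For reference, the relative sensitivity coefficient of a nuclide importance I_i with respect to a one-group cross section σ_j takes the standard form below (notation added here; the abstract does not reproduce the explicit equations):

    S_{ij} = \frac{\sigma_j}{I_i}\,\frac{\partial I_i}{\partial \sigma_j},
    \qquad
    \frac{\Delta I_i}{I_i} \approx \sum_j S_{ij}\,\frac{\Delta \sigma_j}{\sigma_j},

so that, to first order, a fractional change or uncertainty in a cross section propagates to the importance in proportion to the corresponding coefficient.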
Peng, Zhiyong; Young, Brandon; Baird, Alison E; Soper, Steven A
2013-08-20
Expression analysis of mRNAs transcribed from certain genes can serve as an important source of biomarkers for in vitro diagnostics. While the use of reverse transcription quantitative PCR (RT-qPCR) can provide excellent analytical sensitivity for monitoring transcript numbers, more sensitive approaches for expression analysis that can report results in near real-time are needed for many critical applications. We report a novel assay that provides exquisite limits of quantitation and consists of reverse transcription (RT) followed by a ligase detection reaction (LDR) with single-pair fluorescence resonance energy transfer (spFRET) to provide digital readout through molecular counting. For this assay, no PCR was employed, which enabled short assay turnaround times. To facilitate implementation of the assay, a cyclic olefin copolymer (COC) microchip, which was fabricated using hot embossing, was employed to carry out the LDR in a continuous-flow format with online single-molecule detection following the LDR. To demonstrate the assay's utility, MMP-7 mRNA was expression-profiled in several colorectal cancer cell lines. It was found that the RT-LDR/spFRET assay produced highly linear calibration plots even in the low copy number regime. Comparison to RT-qPCR indicated better linearity over the low copy number range investigated (10-10,000 copies), with R² = 0.9995 for RT-LDR/spFRET and R² = 0.98 for RT-qPCR. In addition, differentiating between copy numbers of 10 and 50 could be performed with higher confidence using RT-LDR/spFRET. To demonstrate the short assay turnaround times obtainable using the RT-LDR/spFRET assay, a two-thermal-cycle LDR was carried out on amphiphysin gene transcripts that can serve as important diagnostic markers for ischemic stroke. The ability to supply diagnostic information on possible stroke events in short turnaround times using RT-LDR/spFRET will enable clinicians to treat patients effectively with appropriate time-sensitive therapeutics.
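The calibration-linearity comparison described above boils down to regressing measured counts against input copy number and reporting R². A minimal Python sketch with hypothetical readout values follows (log-log fit, as is common for digital assays spanning several decades; the numbers are illustrative only).

    # Sketch: calibration linearity (R^2) over a wide copy-number range.
    import numpy as np

    copies = np.array([10, 50, 100, 500, 1000, 5000, 10000])   # input copy numbers
    counts = np.array([12, 55, 104, 520, 980, 5100, 9900])     # hypothetical molecule counts

    x, y = np.log10(copies), np.log10(counts)
    slope, intercept = np.polyfit(x, y, 1)
    r2 = np.corrcoef(x, y)[0, 1] ** 2
    print(f"slope = {slope:.3f}, intercept = {intercept:.3f}, R^2 = {r2:.4f}")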