Multidisciplinary design optimization using multiobjective formulation techniques
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Pagaldipti, Narayanan S.
1995-01-01
This report addresses the development of a multidisciplinary optimization procedure using an efficient semi-analytical sensitivity analysis technique and multilevel decomposition for the design of aerospace vehicles. A semi-analytical sensitivity analysis procedure is developed for calculating computational grid sensitivities and aerodynamic design sensitivities. The accuracy and efficiency of the sensitivity analysis procedure are established through comparison of the results with those obtained using a finite difference technique. The developed sensitivity analysis technique is then used within a multidisciplinary optimization procedure for designing aerospace vehicles. The optimization problem, with the integration of aerodynamics and structures, is decomposed into two levels. Optimization is performed for improved aerodynamic performance at the first level and improved structural performance at the second level. Aerodynamic analysis is performed by solving the three-dimensional parabolized Navier-Stokes equations. A nonlinear programming technique and an approximate analysis procedure are used for optimization. The procedure developed is applied to design the wing of a high speed aircraft. Results obtained show significant improvements in the aircraft aerodynamic and structural performance when compared to a reference or baseline configuration. The use of the semi-analytical sensitivity technique provides significant computational savings.
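The contrast drawn above between semi-analytical (direct differentiation) sensitivities and finite differences can be illustrated on a small discretized system. The sketch below uses an assumed 2x2 parameterized linear system as a stand-in for the discretized governing equations; it is not the report's grid or flow solver.

```python
# Minimal sketch (not the report's code): semi-analytical (direct) sensitivity of a
# discretized solution u(b) satisfying A(b) u = f, compared with a finite-difference
# estimate. The 2x2 system and the parameter dependence are purely illustrative.
import numpy as np

def assemble(b):
    """Toy 'discretized governing equations': A(b) u = f."""
    A = np.array([[2.0 + b, -1.0],
                  [-1.0, 2.0 + b**2]])
    f = np.array([1.0, 0.0])
    return A, f

def solve(b):
    A, f = assemble(b)
    return np.linalg.solve(A, f)

def direct_sensitivity(b):
    # Differentiate A(b) u = f:  A du/db = -(dA/db) u   (f is independent of b here)
    A, f = assemble(b)
    u = np.linalg.solve(A, f)
    dA_db = np.array([[1.0, 0.0],
                      [0.0, 2.0 * b]])
    return np.linalg.solve(A, -dA_db @ u)

b0, h = 0.5, 1e-6
du_semi = direct_sensitivity(b0)
du_fd = (solve(b0 + h) - solve(b0 - h)) / (2 * h)   # central finite difference
print(du_semi, du_fd)                                # the two gradients agree closely
```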
Development of Multiobjective Optimization Techniques for Sonic Boom Minimization
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Rajadas, John Narayan; Pagaldipti, Naryanan S.
1996-01-01
A discrete, semi-analytical sensitivity analysis procedure has been developed for calculating aerodynamic design sensitivities. The sensitivities of the flow variables and the grid coordinates are numerically calculated using direct differentiation of the respective discretized governing equations. The sensitivity analysis techniques are adapted within a parabolized Navier-Stokes equations solver. Aerodynamic design sensitivities for high speed wing-body configurations are calculated using the semi-analytical sensitivity analysis procedures. Representative results obtained compare well with those obtained using the finite difference approach and establish the computational efficiency and accuracy of the semi-analytical procedures. Multidisciplinary design optimization procedures have been developed for aerospace applications, namely gas turbine blades and high speed wing-body configurations. In complex applications, the coupled optimization problems are decomposed into sublevels using multilevel decomposition techniques. In cases with multiple objective functions, formal multiobjective formulations such as the Kreisselmeier-Steinhauser function approach and the modified global criteria approach have been used. Nonlinear programming techniques for continuous design variables and a hybrid optimization technique, based on a simulated annealing algorithm, for discrete design variables have been used for solving the optimization problems. The optimization procedure for gas turbine blades improves the aerodynamic and heat transfer characteristics of the blades. The two-dimensional, blade-to-blade aerodynamic analysis is performed using a panel code. The blade heat transfer analysis is performed using a finite element procedure developed in-house. The optimization procedure yields blade shapes with significantly improved velocity and temperature distributions. The multidisciplinary design optimization procedures for high speed wing-body configurations simultaneously improve the aerodynamic, sonic boom and structural characteristics of the aircraft. The flow solution is obtained using a comprehensive parabolized Navier-Stokes solver. Sonic boom analysis is performed using an extrapolation procedure. The aircraft wing load-carrying member is modeled as either an isotropic or a composite box beam. The isotropic box beam is analyzed using thin wall theory. The composite box beam is analyzed using a finite element procedure. The developed optimization procedures yield significant improvements in all the performance criteria and provide interesting design trade-offs. The semi-analytical sensitivity analysis techniques offer significant computational savings and allow the use of comprehensive analysis procedures within design optimization studies.
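One of the multiobjective devices named above, the Kreisselmeier-Steinhauser (KS) function, folds several objectives or constraints into a single smooth envelope. The sketch below shows the usual numerically stable form with illustrative objectives and draw-down factor; it is not code from this work.

```python
# Minimal sketch of the Kreisselmeier-Steinhauser (KS) envelope function. The two
# quadratic objectives and the rho value are illustrative assumptions.
import numpy as np

def ks(f_values, rho=50.0):
    """Smooth, differentiable approximation to max(f_values)."""
    f = np.asarray(f_values, dtype=float)
    fmax = f.max()                      # subtract the max for numerical stability
    return fmax + np.log(np.sum(np.exp(rho * (f - fmax)))) / rho

f1 = lambda x: (x - 1.0) ** 2           # objective 1 (illustrative)
f2 = lambda x: 0.5 * (x + 2.0) ** 2     # objective 2 (illustrative)

x = 0.3
print(ks([f1(x), f2(x)]))               # close to max(f1, f2), but smooth in x
```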
A sensitive procedure is described for trace analysis of hydrogen peroxide in water. The process involves the peroxide-catalyzed oxidation of the leuco forms of two dyes, crystal violet and malachite green. The sensitivity of this procedure, as well as of another procedure based ...
Strickland, Justin C.; Feinstein, Max A.; Lacy, Ryan T.; Smith, Mark A.
2016-01-01
Impulsive choice is a diagnostic feature and/or complicating factor for several psychological disorders and may be examined in the laboratory using delay-discounting procedures. Recent investigators have proposed using quantitative measures of analysis to examine the behavioral processes contributing to impulsive choice. The purpose of this study was to examine the effects of physical activity (i.e., wheel running) on impulsive choice in a single-response, discrete-trial procedure using two quantitative methods of analysis. To this end, rats were assigned to physical activity or sedentary groups and trained to respond in a delay-discounting procedure. In this procedure, one lever always produced one food pellet immediately, whereas a second lever produced three food pellets after a 0, 10, 20, 40, or 80-second delay. Estimates of sensitivity to reinforcement amount and sensitivity to reinforcement delay were determined using (1) a simple linear analysis and (2) an analysis of logarithmically transformed response ratios. Both analyses revealed that physical activity decreased sensitivity to reinforcement amount and sensitivity to reinforcement delay. These findings indicate that (1) physical activity has significant but functionally opposing effects on the behavioral processes that contribute to impulsive choice and (2) both quantitative methods of analysis are appropriate for use in single-response, discrete-trial procedures. PMID:26964905
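As a hedged illustration of the second, log-ratio style of analysis mentioned above, the sketch below regresses hypothetical log-transformed choice ratios on delay; the data values and the exact regression form are illustrative assumptions, not the authors' fitted model or their parameter estimates.

```python
# Illustrative sketch only: separating a delay-sensitivity slope from an
# amount-sensitivity intercept by regressing log-transformed choice ratios on delay.
# The proportions below are invented for demonstration.
import numpy as np

delays = np.array([0.0, 10.0, 20.0, 40.0, 80.0])          # seconds
p_large = np.array([0.95, 0.80, 0.62, 0.40, 0.18])        # hypothetical large-lever choice proportions

log_ratio = np.log(p_large / (1.0 - p_large))              # log(large/small response ratio)
slope, intercept = np.polyfit(delays, log_ratio, 1)        # simple linear fit

print(f"sensitivity to delay  (slope)    : {slope:.4f}")
print(f"sensitivity to amount (intercept): {intercept:.4f}")
```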
Optimization for minimum sensitivity to uncertain parameters
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.; Sobieszczanski-Sobieski, Jaroslaw
1994-01-01
A procedure to design a structure for minimum sensitivity to uncertainties in problem parameters is described. The approach is to minimize directly the sensitivity derivatives of the optimum design with respect to fixed design parameters using a nested optimization procedure. The procedure is demonstrated for the design of a bimetallic beam for minimum weight with insensitivity to uncertainties in structural properties. The beam is modeled with finite elements based on two-dimensional beam analysis. A sequential quadratic programming procedure used as the optimizer supplies the Lagrange multipliers that are used to calculate the optimum sensitivity derivatives. The method was judged successful based on comparisons of the optimization results with parametric studies.
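A minimal sketch of the nested idea follows; the functions and weights are invented toys, not the bimetallic-beam model, and finite differences stand in for the Lagrange-multiplier-based optimum sensitivity derivatives described in the report.

```python
# Hedged sketch of a nested ("bilevel") procedure in the spirit described above: an
# inner problem optimizes the sizing for a fixed uncertain parameter p, and the outer
# problem picks a design setting that trades weight against the sensitivity of the
# inner optimum to p. All functions and numbers are illustrative assumptions.
from scipy.optimize import minimize_scalar

def inner_optimum(d, p):
    """Inner problem: best sizing x* for design parameter d and uncertain p."""
    res = minimize_scalar(lambda x: (x - p * d) ** 2 + 0.1 * x ** 2)
    return res.fun

def optimum_sensitivity(d, p0=1.0, h=1e-5):
    """Finite-difference sensitivity of the inner optimum to p (the report instead
    uses Lagrange multipliers supplied by SQP)."""
    return (inner_optimum(d, p0 + h) - inner_optimum(d, p0 - h)) / (2 * h)

def outer_objective(d, weight=0.5):
    mass_like = (d - 2.0) ** 2                 # stand-in for structural weight, lowest at d = 2
    return mass_like + weight * abs(optimum_sensitivity(d))

outer = minimize_scalar(outer_objective, bounds=(0.1, 5.0), method="bounded")
print("design with reduced optimum sensitivity:", outer.x)   # pulled below 2 by the penalty
```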
Hyperspectral data analysis procedures with reduced sensitivity to noise
NASA Technical Reports Server (NTRS)
Landgrebe, David A.
1993-01-01
Multispectral sensor systems have steadily improved over the years in their ability to deliver increased spectral detail. With the advent of hyperspectral sensors, including imaging spectrometers, this technology is in the process of taking a large leap forward, thus enabling the delivery of much more detailed information. However, this direction of development has drawn even more attention to the matter of noise and other deleterious effects in the data, because as the fundamental limitations that spectral detail places on information collection are reduced, the limitations presented by noise become even more important. Much current effort in remote sensing research is thus being devoted to adjusting the data to mitigate the effects of noise and other deleterious effects. A parallel approach to the problem is to look for analysis approaches and procedures which have reduced sensitivity to such effects. We discuss some of the fundamental principles which define analysis algorithm characteristics providing such reduced sensitivity. One such analysis procedure is described, including an example analysis of a data set that illustrates this effect.
NASA Technical Reports Server (NTRS)
Martin, Carl J., Jr.
1996-01-01
This report describes a structural optimization procedure developed for use with the Engineering Analysis Language (EAL) finite element analysis system. The procedure is written primarily in the EAL command language. Three external processors written in FORTRAN generate equivalent stiffnesses and evaluate stress and local buckling constraints for the sections. Several built-up structural sections were coded into the design procedures. These structural sections were selected for use in aircraft design, but are suitable for other applications. Sensitivity calculations use the semi-analytic method, and an extensive effort has been made to increase the execution speed and reduce the storage requirements. An approximate sensitivity update method is also included, which can significantly reduce computational time. The optimization is performed by an implementation of the MINOS V5.4 linear programming routine in a sequential linear programming procedure.
Sensitivity analysis for large-scale problems
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Whitworth, Sandra L.
1987-01-01
The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.
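The free-vibration eigenvalue problems mentioned above admit a standard analytical sensitivity result: for K(p) phi = lambda M phi with mass-normalized modes, d(lambda)/dp = phi^T (dK/dp - lambda dM/dp) phi. The sketch below illustrates this on an assumed 2-DOF spring-mass toy, not the framed structures of the report.

```python
# Minimal sketch (assumed toy model, not the report's structures) of free-vibration
# eigenvalue sensitivities, checked against a finite-difference estimate.
import numpy as np
from scipy.linalg import eigh

def stiffness(p):
    return np.array([[2.0 * p, -p],
                     [-p,       p]])

M = np.diag([1.0, 2.0])
p0 = 3.0

lam, phi = eigh(stiffness(p0), M)        # generalized eigenproblem; modes are mass-normalized
dK_dp = np.array([[2.0, -1.0],
                  [-1.0, 1.0]])           # d(stiffness)/dp for this toy model (dM/dp = 0)

analytic = np.array([phi[:, i] @ dK_dp @ phi[:, i] for i in range(2)])

h = 1e-6                                  # finite-difference check
lam_p, _ = eigh(stiffness(p0 + h), M)
lam_m, _ = eigh(stiffness(p0 - h), M)
print(analytic, (lam_p - lam_m) / (2 * h))
```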
NASA Technical Reports Server (NTRS)
Hornberger, G. M.; Rastetter, E. B.
1982-01-01
A literature review of the use of sensitivity analyses in modelling nonlinear, ill-defined systems, such as ecological interactions, is presented. Discussions of previous work and a proposed scheme for generalized sensitivity analysis applicable to ill-defined systems are included. This scheme considers classes of mathematical models, problem-defining behavior, analysis procedures (especially the use of Monte Carlo methods), sensitivity ranking of parameters, and extension to control system design.
NASA Technical Reports Server (NTRS)
Nguyen, Duc T.; Storaasli, Olaf O.; Qin, Jiangning; Qamar, Ramzi
1994-01-01
An automatic differentiation tool (ADIFOR) is incorporated into a finite element based structural analysis program for shape and non-shape design sensitivity analysis of structural systems. The entire analysis and sensitivity procedures are parallelized and vectorized for high performance computation. Small scale examples to verify the accuracy of the proposed program and a medium scale example to demonstrate the parallel vector performance on multiple CRAY C90 processors are included.
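ADIFOR applies source-to-source automatic differentiation to Fortran code; the snippet below is only a conceptual Python illustration of the underlying forward-mode idea (propagating values together with derivatives), not ADIFOR itself and not the structural analysis program described.

```python
# Toy forward-mode automatic differentiation via dual numbers, as a hedged
# illustration of the concept behind tools like ADIFOR.
class Dual:
    def __init__(self, value, deriv=0.0):
        self.v, self.d = value, deriv
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.v + other.v, self.d + other.d)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.v * other.v, self.d * other.v + self.v * other.d)  # product rule
    __rmul__ = __mul__

def element_response(x):
    return 3 * x * x + 2 * x + 1           # stand-in for some structural response

x = Dual(2.0, 1.0)                          # seed derivative d(x)/dx = 1
out = element_response(x)
print(out.v, out.d)                         # value 17.0 and exact derivative 14.0
```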
Spatial Analysis for Monitoring Forest Health
Francis A. Roesch
1994-01-01
A plan for the spatial analysis of the sample design for the detection monitoring phase of the joint USDA Forest Service/EPA Forest Health Monitoring Program (FHM) in the United States is discussed. The spatial analysis procedure is intended to identify changes in forest health more quickly by providing increased sensitivity to localized changes. The procedure is...
Efficient sensitivity analysis and optimization of a helicopter rotor
NASA Technical Reports Server (NTRS)
Lim, Joon W.; Chopra, Inderjit
1989-01-01
Aeroelastic optimization of a system essentially consists of the determination of the optimum values of design variables which minimize the objective function and satisfy certain aeroelastic and geometric constraints. The process of aeroelastic optimization analysis is illustrated. To carry out aeroelastic optimization effectively, one needs a reliable analysis procedure to determine steady response and stability of a rotor system in forward flight. The rotor dynamic analysis used in the present study, developed in-house at the University of Maryland, is based on finite elements in space and time. The analysis consists of two major phases: vehicle trim and rotor steady response (coupled trim analysis), and aeroelastic stability of the blade. For a reduction of helicopter vibration, the optimization process requires the sensitivity derivatives of the objective function and aeroelastic stability constraints. For this, the derivatives of steady response, hub loads and blade stability roots are calculated using a direct analytical approach. An automated optimization procedure is developed by coupling the rotor dynamic analysis, design sensitivity analysis and the constrained optimization code CONMIN.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ionescu-Bujor, Mihaela; Jin Xuezhou; Cacuci, Dan G.
2005-09-15
The adjoint sensitivity analysis procedure for augmented systems for application to the RELAP5/MOD3.2 code system is illustrated. Specifically, the adjoint sensitivity model corresponding to the heat structure models in RELAP5/MOD3.2 is derived and subsequently augmented to the two-fluid adjoint sensitivity model (ASM-REL/TF). The end product, called ASM-REL/TFH, comprises the complete adjoint sensitivity model for the coupled fluid dynamics/heat structure packages of the large-scale simulation code RELAP5/MOD3.2. The ASM-REL/TFH model is validated by computing sensitivities to the initial conditions for various time-dependent temperatures in the test bundle of the Quench-04 reactor safety experiment. This experiment simulates the reflooding with water of uncovered, degraded fuel rods, clad with material (Zircaloy-4) that has the same composition and size as that used in typical pressurized water reactors. The most important response for the Quench-04 experiment is the time evolution of the cladding temperature of heated fuel rods. The ASM-REL/TFH model is subsequently used to perform an illustrative sensitivity analysis of this and other time-dependent temperatures within the bundle. The results computed by using the augmented adjoint sensitivity system, ASM-REL/TFH, highlight the reliability, efficiency, and usefulness of the adjoint sensitivity analysis procedure for computing time-dependent sensitivities.
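The practical appeal of an adjoint formulation, one extra (adjoint) solve giving sensitivities to arbitrarily many parameters, can be shown on a small algebraic toy. The sketch below assumes a linear discretized system and a linear response; it is not drawn from the ASM-REL/TFH model.

```python
# Generic adjoint sensitivity sketch (illustrative, not RELAP5/MOD3.2): for a
# discretized system R(u, p) = A(p) u - f = 0 and a scalar response J(u) = g.u,
# one adjoint solve gives dJ/dp for every parameter p_k.
import numpy as np

p = np.array([1.5, 0.7])
A = np.array([[3.0 + p[0], -1.0],
              [-1.0, 2.0 + p[1]]])
f = np.array([1.0, 2.0])
u = np.linalg.solve(A, f)                 # forward solve

g = np.array([1.0, 1.0])                  # J(u) = g . u  (illustrative response)
lam = np.linalg.solve(A.T, g)             # adjoint solve: A^T lam = dJ/du

# dR/dp_k = (dA/dp_k) u ; here dA/dp_0 and dA/dp_1 each touch one diagonal entry
dR_dp = np.column_stack(([u[0], 0.0], [0.0, u[1]]))
dJ_dp = -lam @ dR_dp                       # dJ/dp = -lam^T dR/dp (no explicit p-dependence in J)

print(dJ_dp)
```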
Evaluation of microarray data normalization procedures using spike-in experiments
Rydén, Patrik; Andersson, Henrik; Landfors, Mattias; Näslund, Linda; Hartmanová, Blanka; Noppa, Laila; Sjöstedt, Anders
2006-01-01
Background: Recently, a large number of methods for the analysis of microarray data have been proposed, but there are few comparisons of their relative performances. By using so-called spike-in experiments, it is possible to characterize the analyzed data and thereby enable comparisons of different analysis methods. Results: A spike-in experiment using eight in-house produced arrays was used to evaluate established and novel methods for filtration, background adjustment, scanning, channel adjustment, and censoring. The S-plus package EDMA, a stand-alone tool providing characterization of analyzed cDNA-microarray data obtained from spike-in experiments, was developed and used to evaluate 252 normalization methods. For all analyses, the sensitivities at low false positive rates were observed together with estimates of the overall bias and the standard deviation. In general, there was a trade-off between the ability of the analyses to identify differentially expressed genes (i.e. the analyses' sensitivities) and their ability to provide unbiased estimators of the desired ratios. Virtually all analyses underestimated the magnitude of the regulations; often less than 50% of the true regulations were observed. Moreover, the bias depended on the underlying mRNA concentration; low concentration resulted in high bias. Many of the analyses had relatively low sensitivities, but analyses that used either the constrained model (i.e. a procedure that combines data from several scans) or partial filtration (a novel method for treating data from so-called not-found spots) had, with few exceptions, high sensitivities. These methods gave considerably higher sensitivities than some commonly used analysis methods. Conclusion: The use of spike-in experiments is a powerful approach for evaluating microarray preprocessing procedures. Analyzed data are characterized by properties of the observed log-ratios and the analysis' ability to detect differentially expressed genes. If bias is not a major problem, we recommend the use of either the CM-procedure or partial filtration. PMID:16774679
Aerodynamic design optimization using sensitivity analysis and computational fluid dynamics
NASA Technical Reports Server (NTRS)
Baysal, Oktay; Eleshaky, Mohamed E.
1991-01-01
A new and efficient method is presented for aerodynamic design optimization, which is based on a computational fluid dynamics (CFD)-sensitivity analysis algorithm. The method is applied to design a scramjet-afterbody configuration for an optimized axial thrust. The Euler equations are solved for the inviscid analysis of the flow, which in turn provides the objective function and the constraints. The CFD analysis is then coupled with the optimization procedure that uses a constrained minimization method. The sensitivity coefficients, i.e. gradients of the objective function and the constraints, needed for the optimization are obtained using a quasi-analytical method rather than the traditional brute force method of finite difference approximations. During the one-dimensional search of the optimization procedure, an approximate flow analysis (predicted flow) based on a first-order Taylor series expansion is used to reduce the computational cost. Finally, the sensitivity of the optimum objective function to various design parameters, which are kept constant during the optimization, is computed to predict new optimum solutions. The flow analysis of the demonstrative example is compared with the experimental data. It is shown that the method is more efficient than the traditional methods.
Design sensitivity analysis of rotorcraft airframe structures for vibration reduction
NASA Technical Reports Server (NTRS)
Murthy, T. Sreekanta
1987-01-01
Optimization of rotorcraft structures for vibration reduction was studied. The objective of this study is to develop practical computational procedures for structural optimization of airframes subject to steady-state vibration response constraints. One of the key elements of any such computational procedure is design sensitivity analysis. A method for design sensitivity analysis of airframes under vibration response constraints is presented. The mathematical formulation of the method and its implementation as a new solution sequence in MSC/NASTRAN are described. The results of the application of the method to a simple finite element stick model of the AH-1G helicopter airframe are presented and discussed. Selection of design variables that are most likely to bring about changes in the response at specified locations in the airframe is based on consideration of forced response strain energy. Sensitivity coefficients are determined for the selected design variable set. Constraints on the natural frequencies are also included in addition to the constraints on the steady-state response. Sensitivity coefficients for these constraints are determined. Results of the analysis and insights gained in applying the method to the airframe model are discussed. The general nature of future work to be conducted is described.
NASA Astrophysics Data System (ADS)
Zong, Yali; Hu, Naigang; Duan, Baoyan; Yang, Guigeng; Cao, Hongjun; Xu, Wanye
2016-03-01
Inevitable manufacturing errors and inconsistency between assumed and actual boundary conditions can affect the shape precision and cable tensions of a cable-network antenna, and even result in failure of the structure in service. In this paper, an analytical method for the sensitivity analysis of the shape precision and cable tensions with respect to the parameters carrying uncertainty was studied. Based on the sensitivity analysis, an optimal design procedure was proposed to alleviate the effects of the parameters that carry uncertainty. The validity of the calculated sensitivities is examined by comparison with those computed by a finite difference method. Comparison with a traditional design method shows that the presented design procedure can remarkably reduce the influence of the uncertainties on the antenna performance. Moreover, the results suggest that especially slender front net cables, thick tension ties, relatively slender boundary cables and a high tension level can improve the ability of cable-network antenna structures to resist the effects of the uncertainties on the antenna performance.
Atmospheric model development in support of SEASAT. Volume 1: Summary of findings
NASA Technical Reports Server (NTRS)
Kesel, P. G.
1977-01-01
Atmospheric analysis and prediction models of varying (grid) resolution were developed. The models were tested using real observational data for the purpose of assessing the impact of grid resolution on short range numerical weather prediction. The discretionary model procedures were examined so that the computational viability of SEASAT data might be enhanced during the conduct of (future) sensitivity tests. The analysis effort covers: (1) examining the procedures for allowing data to influence the analysis; (2) examining the effects of varying the weights in the analysis procedure; (3) testing and implementing procedures for solving the minimization equation in an optimal way; (4) describing the impact of grid resolution on analysis; and (5) devising and implementing numerous practical solutions to analysis problems, generally.
Gough, H; Luke, G A; Beeley, J A; Geddes, D A
1996-02-01
The aim of this project was to develop an analytical procedure with the required level of sensitivity for the determination of glucose concentrations in small volumes of unstimulated fasting whole saliva. The technique involves high-performance ion-exchange chromatography at high pH and pulsed amperometric detection. It has a high level of reproducibility, a sensitivity as low as 0.1 μmol/l and requires only 50-μl samples (sensitivity = 0.002 pmol). Inhibition of glucose metabolism, by procedures such as collection into 0.1% (w/v) sodium fluoride, was shown to be essential if accurate results are to be obtained. Collection onto ice followed by storage at -20 °C was shown to be unsuitable and resulted in glucose loss by degradation. There were inter- and intraindividual variations in the glucose concentration in unstimulated mixed saliva (range: 0.02-0.4 mmol/l). The procedure can be used for the analysis of other salivary carbohydrates and for monitoring the clearance of dietary carbohydrates from the mouth.
Design and analysis for detection monitoring of forest health
F. A. Roesch
1995-01-01
An analysis procedure is proposed for the sample design of the Forest Health Monitoring Program (FHM) in the United States. The procedure is intended to provide increased sensitivity to localized but potentially important changes in forest health by explicitly accounting for the spatial relationships between plots in the FHM design. After a series of median sweeps...
DOT National Transportation Integrated Search
2002-05-01
Two procedures for adjusting as-measured test-day spectra to reference-day conditions: the Society of Automotive Engineers (SAE) Aerospace Recommended Practice (ARP) No. 866A (866A) and a procedure utilizing pure-tone absorption equations, ...
DOT National Transportation Integrated Search
2002-04-01
The Society of Automotive Engineers (SAE) Aerospace Recommended Practice (ARP) No. 866A (866A), and a procedure utilizing pure-tone absorption equations developed in support of the International Organization for Standardization's (ISO) 9613-...
Partial pressure analysis in space testing
NASA Technical Reports Server (NTRS)
Tilford, Charles R.
1994-01-01
For vacuum-system or test-article analysis it is often desirable to know the species and partial pressures of the vacuum gases. Residual gas or Partial Pressure Analyzers (PPA's) are commonly used for this purpose. These are mass spectrometer-type instruments, most commonly employing quadrupole filters. These instruments can be extremely useful, but they should be used with caution. Depending on the instrument design, calibration procedures, and conditions of use, measurements made with these instruments can be accurate to within a few percent, or in error by two or more orders of magnitude. Significant sources of error can include relative gas sensitivities that differ from handbook values by an order of magnitude, changes in sensitivity with pressure by as much as two orders of magnitude, changes in sensitivity with time after exposure to chemically active gases, and the dependence of the sensitivity for one gas on the pressures of other gases. However, for most instruments, these errors can be greatly reduced with proper operating procedures and conditions of use. In this paper, data are presented illustrating performance characteristics for different instruments and gases, operating parameters are recommended to minimize some errors, and calibrations procedures are described that can detect and/or correct other errors.
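As a concrete, hedged illustration of the relative-sensitivity issue described above, the snippet below converts hypothetical ion currents to partial pressures using assumed gas-specific relative sensitivity factors; the numbers are placeholders, not calibration data from this paper or any handbook.

```python
# Illustrative correction sketch: partial pressure = ion current divided by the
# instrument sensitivity for N2 times the gas's relative sensitivity factor.
# All numerical values are assumed for demonstration only.
relative_sensitivity = {"N2": 1.00, "Ar": 1.2, "He": 0.18, "H2O": 0.9}   # assumed factors
base_sensitivity_A_per_Pa = 1.0e-6        # assumed instrument sensitivity to N2

ion_currents_A = {"N2": 2.0e-9, "Ar": 3.5e-10, "He": 6.0e-11, "H2O": 8.0e-10}

for gas, current in ion_currents_A.items():
    pressure = current / (base_sensitivity_A_per_Pa * relative_sensitivity[gas])
    print(f"{gas}: {pressure:.3e} Pa")
```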
NASA Astrophysics Data System (ADS)
Newman, James Charles, III
1997-10-01
The first two steps in the development of an integrated multidisciplinary design optimization procedure capable of analyzing the nonlinear fluid flow about geometrically complex aeroelastic configurations have been accomplished in the present work. For the first step, a three-dimensional unstructured grid approach to aerodynamic shape sensitivity analysis and design optimization has been developed. The advantage of unstructured grids, when compared with a structured-grid approach, is their inherent ability to discretize irregularly shaped domains with greater efficiency and less effort. Hence, this approach is ideally suited for geometrically complex configurations of practical interest. In this work the time-dependent, nonlinear Euler equations are solved using an upwind, cell-centered, finite-volume scheme. The discrete, linearized systems which result from this scheme are solved iteratively by a preconditioned conjugate-gradient-like algorithm known as GMRES for the two-dimensional cases and a Gauss-Seidel algorithm for the three-dimensional cases; at steady-state, similar procedures are used to solve the accompanying linear aerodynamic sensitivity equations in incremental iterative form. As shown, this particular form of the sensitivity equation makes large-scale gradient-based aerodynamic optimization possible by taking advantage of memory efficient methods to construct exact Jacobian matrix-vector products. Various surface parameterization techniques have been employed in the current study to control the shape of the design surface. Once this surface has been deformed, the interior volume of the unstructured grid is adapted by considering the mesh as a system of interconnected tension springs. Grid sensitivities are obtained by differentiating the surface parameterization and the grid adaptation algorithms with ADIFOR, an advanced automatic-differentiation software tool. To demonstrate the ability of this procedure to analyze and design complex configurations of practical interest, the sensitivity analysis and shape optimization has been performed for several two- and three-dimensional cases. In two dimensions, an initially symmetric NACA-0012 airfoil and a high-lift multielement airfoil were examined. For the three-dimensional configurations, an initially rectangular wing with uniform NACA-0012 cross-sections was optimized; in addition, a complete Boeing 747-200 aircraft was studied. Furthermore, the current study also examines the effect of inconsistency in the order of spatial accuracy between the nonlinear fluid and linear shape sensitivity equations. The second step was to develop a computationally efficient, high-fidelity, integrated static aeroelastic analysis procedure. To accomplish this, a structural analysis code was coupled with the aforementioned unstructured grid aerodynamic analysis solver. The use of an unstructured grid scheme for the aerodynamic analysis enhances the interaction compatibility with the wing structure. The structural analysis utilizes finite elements to model the wing so that accurate structural deflections may be obtained. In the current work, parameters have been introduced to control the interaction of the computational fluid dynamics and structural analyses; these control parameters permit extremely efficient static aeroelastic computations.
To demonstrate and evaluate this procedure, static aeroelastic analysis results for a flexible wing in low subsonic, high subsonic (subcritical), transonic (supercritical), and supersonic flow conditions are presented.
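The tension-spring grid adaptation mentioned above can be illustrated with a hedged toy sketch: boundary (design-surface) nodes are displaced and interior nodes relax to the equilibrium of an equal-stiffness spring network. The node set, connectivity, and stiffness assumptions below are purely illustrative, not the dissertation's mesh-deformation code.

```python
# Hedged spring-analogy mesh deformation sketch on a tiny chain of nodes: with equal
# spring stiffnesses, each interior node relaxes to the average of its neighbours,
# smoothly blending the imposed surface displacement into the interior.
import numpy as np

n_nodes = 4
edges = [(0, 1), (1, 2), (2, 3)]
boundary = {0: np.array([0.0, 0.0]),           # fixed far-field node
            3: np.array([0.0, 0.2])}           # displaced design-surface node
interior = [1, 2]

disp = {i: np.zeros(2) for i in range(n_nodes)}
disp.update(boundary)

for _ in range(200):                            # fixed-point relaxation of the spring network
    for i in interior:
        neigh = [j for (a, b) in edges for j in (a, b) if i in (a, b) and j != i]
        disp[i] = np.mean([disp[j] for j in neigh], axis=0)

print({i: disp[i] for i in interior})           # interior motion blends the boundary displacement
```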
Benoit, Gaëlle; Heinkélé, Christophe; Gourdon, Emmanuel
2013-12-01
This paper deals with a numerical procedure to identify the acoustical parameters of road pavement from surface impedance measurements. This procedure comprises three steps. First, a suitable equivalent fluid model for the acoustical properties of porous media is chosen, the variation ranges for the model parameters are set, and a sensitivity analysis for this model is performed. Second, this model is used in the parameter inversion process, which is performed with simulated annealing in a selected frequency range. Third, the sensitivity analysis and inversion process are repeated to estimate each parameter in turn. This approach is tested on data obtained for porous bituminous concrete and using the Zwikker and Kosten equivalent fluid model. This work provides a good foundation for the development of non-destructive in situ methods for the acoustical characterization of road pavements.
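A hedged sketch of the simulated-annealing inversion step follows; the "impedance model" and the synthetic measurement are toy stand-ins, not the Zwikker and Kosten model or the paper's data.

```python
# Hedged simulated-annealing inversion sketch: search for model parameters that
# minimize the misfit between a measured quantity and a model prediction.
# The toy model, noise level, and cooling schedule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
freqs = np.linspace(200.0, 2000.0, 30)

def model(params, f):
    a, b = params                           # stand-in acoustical parameters
    return a + b / f                        # toy frequency-dependent model (assumed)

true = np.array([2.0, 500.0])
measured = model(true, freqs) + rng.normal(0, 0.02, freqs.size)

def misfit(params):
    return np.mean((model(params, freqs) - measured) ** 2)

x = np.array([1.0, 100.0])                  # initial guess
best, best_cost = x.copy(), misfit(x)
T = 1.0
for step in range(5000):                    # simple Metropolis-style annealing loop
    cand = x + rng.normal(0, [0.05, 10.0])
    d = misfit(cand) - misfit(x)
    if d < 0 or rng.random() < np.exp(-d / T):
        x = cand
    if misfit(x) < best_cost:
        best, best_cost = x.copy(), misfit(x)
    T *= 0.999                              # geometric cooling
print(best)                                 # should approach the "true" parameters
```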
Analysis of Sensitivity Experiments - An Expanded Primer
2017-03-08
diehard practitioners. The difficulty associated with mastering statistical inference presents a true dilemma. Statistics is an extremely applied...lost, perhaps forever. In other words, when on this safari, you need a guide. This report is designed to be a guide, of sorts. It focuses on analytical...estimated accurately if our analysis is to have real meaning. For this reason, the sensitivity test procedure is designed to concentrate measurements
ERIC Educational Resources Information Center
Angrist, Joshua; Pischke, Jorn-Steffen
2010-01-01
This essay reviews progress in empirical economics since Leamer's (1983) critique. Leamer highlighted the benefits of sensitivity analysis, a procedure in which researchers show how their results change with changes in specification or functional form. Sensitivity analysis has had a salutary but not a revolutionary effect on econometric practice…
Alam, Maksudul; Deng, Xinwei; Philipson, Casandra; Bassaganya-Riera, Josep; Bisset, Keith; Carbo, Adria; Eubank, Stephen; Hontecillas, Raquel; Hoops, Stefan; Mei, Yongguo; Abedi, Vida; Marathe, Madhav
2015-01-01
Agent-based models (ABM) are widely used to study immune systems, providing a procedural and interactive view of the underlying system. The interaction of components and the behavior of individual objects is described procedurally as a function of the internal states and the local interactions, which are often stochastic in nature. Such models typically have complex structures and consist of a large number of modeling parameters. Determining the key modeling parameters which govern the outcomes of the system is very challenging. Sensitivity analysis plays a vital role in quantifying the impact of modeling parameters in massively interacting systems, including large complex ABM. The high computational cost of executing simulations impedes running experiments with exhaustive parameter settings. Existing techniques of analyzing such a complex system typically focus on local sensitivity analysis, i.e. one parameter at a time, or a close “neighborhood” of particular parameter settings. However, such methods are not adequate to measure the uncertainty and sensitivity of parameters accurately because they overlook the global impacts of parameters on the system. In this article, we develop novel experimental design and analysis techniques to perform both global and local sensitivity analysis of large-scale ABMs. The proposed method can efficiently identify the most significant parameters and quantify their contributions to outcomes of the system. We demonstrate the proposed methodology for ENteric Immune SImulator (ENISI), a large-scale ABM environment, using a computational model of immune responses to Helicobacter pylori colonization of the gastric mucosa. PMID:26327290
SASS wind ambiguity removal by direct minimization. II - Use of smoothness and dynamical constraints
NASA Technical Reports Server (NTRS)
Hoffman, R. N.
1984-01-01
A variational analysis method (VAM) is used to remove the ambiguity of the Seasat-A Satellite Scatterometer (SASS) winds. The VAM yields the best fit to the data by minimizing an objective function S which is a measure of the lack of fit. The SASS data are described and the function S and the analysis procedure are defined. Analyses of a single ship report which are analogous to Green's functions are presented. The analysis procedure is tuned and its sensitivity is described using the QE II storm. The procedure is then applied to a case study of September 6, 1978, south of Japan.
Critical analysis of radiologist-patient interaction.
Morris, K J; Tarico, V S; Smith, W L; Altmaier, E M; Franken, E A
1987-05-01
A critical incident interview technique was used to identify features of radiologist-patient interactions considered effective and ineffective by patients. During structured interviews with 35 radiology patients and five patients' parents, three general categories of physician behavior were described: attention to patient comfort, explanation of procedure and results, and interpersonal sensitivity. The findings indicated that patients are sensitive to physicians' interpersonal styles and that they want physicians to explain procedures and results in an understandable manner and to monitor their well-being during procedures. The sample size of the study is small; thus further confirmation is needed. However, the implications for training residents and practicing radiologists in these behaviors are important in the current competitive medical milieu.
Peter, Jochen F; Otto, Angela M
2010-02-01
The effective isolation and purification of proteins from biological fluids is the most crucial step for a successful protein analysis when only minute amounts are available. While conventional purification methods such as dialysis, ultrafiltration or protein precipitation often lead to a marked loss of protein, SPE with small-sized particles is a powerful alternative. The implementation of particles with superparamagnetic cores facilitates the handling of those particles and allows the application of particles in the nanometer to low micrometer range. Due to the small diameters, magnetic particles are advantageous for increasing sensitivity when using subsequent MS analysis or gel electrophoresis. In the last years, different types of magnetic particles were developed for specific protein purification purposes followed by analysis or screening procedures using MS or SDS gel electrophoresis. In this review, the use of magnetic particles for different applications, such as, the extraction and analysis of DNA/RNA, peptides and proteins, is described.
Sensitivity Analysis of the Integrated Medical Model for ISS Programs
NASA Technical Reports Server (NTRS)
Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.
2016-01-01
Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods of conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates the sensitivity using partial correlation of the ranks of the generated input values to each generated output value. The partial part is so named because adjustments are made for the linear effects of all the other input values in the calculation of correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each input, adjusted for all the other inputs, on each output. Because the relative ranking of each of the inputs and outputs is used, as opposed to the values themselves, both methods accommodate the nonlinear relationship of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number occurrences of each of the one hundred IMM medical conditions generated over the simulations and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of loss of crew lives (LOCL). The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities to each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. These efforts are an integral part of the overall verification, validation, and credibility review of IMM v4.0.
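A generic sketch of the two measures named above follows; these are assumed textbook-style implementations applied to a toy model, not the IMM team's code or data.

```python
# Minimal PRCC and SRRC sketch: rank-transform inputs and output, then compute
# partial correlations of residuals (PRCC) and standardized regression coefficients
# on the ranks (SRRC). The three-input toy model is illustrative.
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(1)
X = rng.uniform(size=(500, 3))                                # three input parameters
y = 4 * X[:, 0] - 2 * X[:, 1] + 0.1 * rng.normal(size=500)    # toy model output

Rx = np.column_stack([rankdata(X[:, j]) for j in range(X.shape[1])])
ry = rankdata(y)

def residual(target, predictors):
    A = np.column_stack([np.ones(len(target)), predictors])
    beta, *_ = np.linalg.lstsq(A, target, rcond=None)
    return target - A @ beta

prcc = []
for j in range(X.shape[1]):
    others = np.delete(Rx, j, axis=1)                         # adjust for the other inputs
    prcc.append(np.corrcoef(residual(Rx[:, j], others), residual(ry, others))[0, 1])

A = np.column_stack([np.ones(len(ry)), Rx])
beta, *_ = np.linalg.lstsq(A, ry, rcond=None)
srrc = beta[1:] * Rx.std(axis=0) / ry.std()                   # standardized rank-regression coefficients

print("PRCC:", np.round(prcc, 3))
print("SRRC:", np.round(srrc, 3))
```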
Reanalysis, compatibility and correlation in analysis of modified antenna structures
NASA Technical Reports Server (NTRS)
Levy, R.
1989-01-01
A simple computational procedure is synthesized to process changes in the microwave-antenna pathlength-error measure when there are changes in the antenna structure model. The procedure employs structural modification reanalysis methods combined with new extensions of correlation analysis to provide the revised rms pathlength error. Mainframe finite-element-method processing of the structure model is required only for the initial unmodified structure, and elementary postprocessor computations develop and deal with the effects of the changes. Several illustrative computational examples are included. The procedure adapts readily to processing spectra of changes for parameter studies or sensitivity analyses.
Three-dimensional aerodynamic shape optimization of supersonic delta wings
NASA Technical Reports Server (NTRS)
Burgreen, Greg W.; Baysal, Oktay
1994-01-01
A recently developed three-dimensional aerodynamic shape optimization procedure, AeSOP3D, is described. This procedure incorporates some of the most promising concepts from the area of computational aerodynamic analysis and design, specifically, discrete sensitivity analysis, a fully implicit 3D Computational Fluid Dynamics (CFD) methodology, and 3D Bezier-Bernstein surface parameterizations. The new procedure is demonstrated in the preliminary design of supersonic delta wings. Starting from a symmetric clipped delta wing geometry, a Mach 1.62 asymmetric delta wing and two Mach 1.5 cranked delta wings were designed subject to various aerodynamic and geometric constraints.
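A minimal sketch of the Bezier-Bernstein parameterization named above is given below; the control points describe an invented planform edge for illustration and are not design data from the paper.

```python
# Bezier curve evaluation with the Bernstein basis, as a hedged illustration of a
# Bezier-Bernstein shape parameterization (illustrative control points only).
import numpy as np
from math import comb

def bezier(control_points, t):
    """Evaluate a Bezier curve defined by control_points at parameter values t."""
    P = np.asarray(control_points, dtype=float)    # shape (n+1, dim)
    n = len(P) - 1
    t = np.atleast_1d(t)
    curve = np.zeros((t.size, P.shape[1]))
    for i in range(n + 1):
        B = comb(n, i) * t**i * (1 - t)**(n - i)   # Bernstein polynomial B_{i,n}(t)
        curve += B[:, None] * P[i]
    return curve

# an assumed design edge controlled by four points; moving a control point reshapes the curve
ctrl = [[0.0, 0.0], [0.3, 0.05], [0.7, 0.08], [1.0, 0.02]]
print(bezier(ctrl, np.linspace(0.0, 1.0, 5)))
```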
Generic Hypersonic Inlet Module Analysis
NASA Technical Reports Server (NTRS)
Cockrell, Charles E., Jr.; Huebner, Lawrence D.
2004-01-01
A computational study associated with an internal inlet drag analysis was performed for a generic hypersonic inlet module. The purpose of this study was to determine the feasibility of computing the internal drag force for a generic scramjet engine module using computational methods. The computational study consisted of obtaining two-dimensional (2D) and three-dimensional (3D) computational fluid dynamics (CFD) solutions using the Euler and parabolized Navier-Stokes (PNS) equations. The solution accuracy was assessed by comparisons with experimental pitot pressure data. The CFD analysis indicates that the 3D PNS solutions show the best agreement with experimental pitot pressure data. The internal inlet drag analysis consisted of obtaining drag force predictions based on experimental data and 3D CFD solutions. A comparative assessment of each of the drag prediction methods is made and the sensitivity of CFD drag values to computational procedures is documented. The analysis indicates that the CFD drag predictions are highly sensitive to the computational procedure used.
Sensitivity of wildlife habitat models to uncertainties in GIS data
NASA Technical Reports Server (NTRS)
Stoms, David M.; Davis, Frank W.; Cogan, Christopher B.
1992-01-01
Decision makers need to know the reliability of output products from GIS analysis. For many GIS applications, it is not possible to compare these products to an independent measure of 'truth'. Sensitivity analysis offers an alternative means of estimating reliability. In this paper, we present a GIS-based statistical procedure for estimating the sensitivity of wildlife habitat models to uncertainties in input data and model assumptions. The approach is demonstrated in an analysis of habitat associations derived from a GIS database for the endangered California condor. Alternative data sets were generated to compare results over a reasonable range of assumptions about several sources of uncertainty. Sensitivity analysis indicated that condor habitat associations are relatively robust, and the results have increased our confidence in our initial findings. Uncertainties and methods described in the paper have general relevance for many GIS applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Welch, B. T., E-mail: Welch.brian@mayo.edu; Eiken, P. W.; Atwell, T. D.
Purpose: Mesothelioma has been considered a difficult pathologic diagnosis to achieve via image-guided core needle biopsy. The purpose of this study was to assess the diagnostic sensitivity of percutaneous image-guided biopsy for diagnosis of pleural mesothelioma. Materials and Methods: Retrospective review was performed to identify patients with a confirmed diagnosis of pleural mesothelioma who underwent image-guided needle biopsy between January 1, 2002, and January 1, 2016. Thirty-two patients with pleural mesothelioma were identified and included for analysis in 33 image-guided biopsy procedures. Patient, procedural, and pathologic characteristics were recorded. Complications were characterized via standardized nomenclature [Common Terminology Criteria for Adverse Events (CTCAE)]. Results: Percutaneous image-guided biopsy was associated with an overall sensitivity of 81%. No clinically significant CTCAE complications were observed. No image-guided procedures were complicated by pneumothorax or necessitated chest tube placement. No patients had tumor seeding of the biopsy tract. Conclusion: Percutaneous image-guided biopsy can achieve high sensitivity for pathologic diagnosis of pleural mesothelioma with a low procedural complication rate, potentially obviating the need for surgical biopsy.
ERIC Educational Resources Information Center
Kleppinger, E. W.; And Others
1984-01-01
Although determination of phosphorus is important in biology, physiology, and environmental science, traditional gravimetric and colorimetric methods are cumbersome and lack the requisite sensitivity. Therefore, a derivative activation analysis method is suggested. Background information, procedures, and results are provided. (JN)
Separation and Analysis of Citral Isomers.
ERIC Educational Resources Information Center
Sacks, Jeff; And Others
1983-01-01
Provides background information, procedures, and results of an experiment designed to introduce undergraduates to the technique of steam distillation as a means of isolating thermally sensitive compounds. Chromatographic techniques (HPLC) and mass spectrometric analysis are used in the experiment, which requires three laboratory periods. (JN)
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan
1994-01-01
LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 1 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 1 derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved. The accuracy and efficiency of LSENS are examined by means of various test problems, and comparisons with other methods and codes are presented. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as a static system; steady, one-dimensional, inviscid flow; reaction behind an incident shock wave, including boundary layer correction; and a perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions.
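As a hedged illustration of the kind of sensitivity coefficient described above, the sketch below integrates the augmented sensitivity equation for a single first-order reaction dy/dt = -k*y (a toy case, not an LSENS problem) and checks the result against the analytic derivative.

```python
# Toy kinetic sensitivity sketch: the sensitivity s = dy/dk of the species
# concentration to the rate coefficient obeys ds/dt = -y - k*s and is integrated
# alongside y itself. Analytic check: y = y0*exp(-k*t), dy/dk = -t*y0*exp(-k*t).
import numpy as np
from scipy.integrate import solve_ivp

k, y0 = 2.0, 1.0

def rhs(t, state):
    y, s = state
    return [-k * y, -y - k * s]            # species equation plus sensitivity equation

sol = solve_ivp(rhs, (0.0, 2.0), [y0, 0.0],
                t_eval=np.linspace(0.0, 2.0, 5), rtol=1e-9, atol=1e-12)

y, s = sol.y
exact = -sol.t * y0 * np.exp(-k * sol.t)   # analytic dy/dk for this toy case
print(np.allclose(s, exact, atol=1e-6))    # computed sensitivities match the analytic result
```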
Unger, Jakob; Schuster, Maria; Hecker, Dietmar J; Schick, Bernhard; Lohscheller, Jörg
2016-01-01
This work presents a computer-based approach to analyze the two-dimensional vocal fold dynamics of endoscopic high-speed videos, and constitutes an extension and generalization of a previously proposed wavelet-based procedure. While most approaches aim for analyzing sustained phonation conditions, the proposed method allows for a clinically adequate analysis of both dynamic as well as sustained phonation paradigms. The analysis procedure is based on a spatio-temporal visualization technique, the phonovibrogram, which facilitates the documentation of the visible laryngeal dynamics. From the phonovibrogram, a low-dimensional set of features is computed using a principal component analysis strategy that quantifies the type of vibration patterns, irregularity, lateral symmetry and synchronicity, as a function of time. Two different test bench data sets are used to validate the approach: (I) 150 healthy and pathologic subjects examined during sustained phonation. (II) 20 healthy and pathologic subjects that were examined twice: during sustained phonation and a glissando from a low to a higher fundamental frequency. In order to assess the discriminative power of the extracted features, a Support Vector Machine is trained to distinguish between physiologic and pathologic vibrations. The results for sustained phonation sequences are compared to the previous approach. Finally, the classification performance of the stationary analyzing procedure is compared to the transient analysis of the glissando maneuver. For the first test bench the proposed procedure outperformed the previous approach (proposed feature set: accuracy: 91.3%, sensitivity: 80%, specificity: 97%; previous approach: accuracy: 89.3%, sensitivity: 76%, specificity: 96%). Comparing the classification performance of the second test bench further corroborates that analyzing transient paradigms provides clear additional diagnostic value (glissando maneuver: accuracy: 90%, sensitivity: 100%, specificity: 80%; sustained phonation: accuracy: 75%, sensitivity: 80%, specificity: 70%). The incorporation of parameters describing the temporal evolvement of vocal fold vibration clearly improves the automatic identification of pathologic vibration patterns. Furthermore, incorporating a dynamic phonation paradigm provides additional valuable information about the underlying laryngeal dynamics that cannot be derived from sustained conditions. The proposed generalized approach provides a better overall classification performance than the previous approach, and hence constitutes a new advantageous tool for an improved clinical diagnosis of voice disorders. Copyright © 2015 Elsevier B.V. All rights reserved.
Soil-contact decay tests using small blocks : a procedural analysis
Rodney C. De Groot; James W. Evans; Paul G. Forsyth; Camille M. Freitag; Jeffrey J. Morrell
Much discussion has been held regarding the merits of laboratory decay tests compared with field tests to evaluate wood preservatives. In this study, procedural aspects of soil jar decay tests with 1 cm 3 blocks were critically examined. Differences among individual bottles were a major source of variation in this method. The reproducibility and sensitivity of the soil...
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Newman, James C., III; Barnwell, Richard W.
1997-01-01
A three-dimensional unstructured grid approach to aerodynamic shape sensitivity analysis and design optimization has been developed and is extended to model geometrically complex configurations. The advantage of unstructured grids (when compared with a structured-grid approach) is their inherent ability to discretize irregularly shaped domains with greater efficiency and less effort. Hence, this approach is ideally suited for geometrically complex configurations of practical interest. In this work the nonlinear Euler equations are solved using an upwind, cell-centered, finite-volume scheme. The discrete, linearized systems which result from this scheme are solved iteratively by a preconditioned conjugate-gradient-like algorithm known as GMRES for the two-dimensional geometry and a Gauss-Seidel algorithm for the three-dimensional; similar procedures are used to solve the accompanying linear aerodynamic sensitivity equations in incremental iterative form. As shown, this particular form of the sensitivity equation makes large-scale gradient-based aerodynamic optimization possible by taking advantage of memory efficient methods to construct exact Jacobian matrix-vector products. Simple parameterization techniques are utilized for demonstrative purposes. Once the surface has been deformed, the unstructured grid is adapted by considering the mesh as a system of interconnected springs. Grid sensitivities are obtained by differentiating the surface parameterization and the grid adaptation algorithms with ADIFOR (which is an advanced automatic-differentiation software tool). To demonstrate the ability of this procedure to analyze and design complex configurations of practical interest, the sensitivity analysis and shape optimization has been performed for a two-dimensional high-lift multielement airfoil and for a three-dimensional Boeing 747-200 aircraft.
Zhao, Zhiyong; Liu, Na; Yang, Lingchen; Deng, Yifeng; Wang, Jianhua; Song, Suquan; Lin, Shanhai; Wu, Aibo; Zhou, Zhenlei; Hou, Jiafa
2015-09-01
Mycotoxins have the potential to enter the human food chain through carry-over of contaminants from feed into animal-derived products. The objective of the study was to develop a reliable and sensitive method for the analysis of 30 mycotoxins in animal feed and animal-derived food (meat, edible animal tissues, and milk) using liquid chromatography-tandem mass spectrometry (LC-MS/MS). In the study, three extraction procedures, as well as various cleanup procedures, were evaluated to select the most suitable sample preparation procedure for different sample matrices. In addition, timed and highly selective reaction monitoring on LC-MS/MS was used to filter out isobaric matrix interferences. The performance characteristics (linearity, sensitivity, recovery, precision, and specificity) of the method were determined according to Commission Decision 2002/657/EC and 401/2006/EC. The established method was successfully applied to screening of mycotoxins in animal feed and animal-derived food. The results indicated that mycotoxin contamination in feed directly influenced the presence of mycotoxin in animal-derived food. Graphical abstract Multi-mycotoxin analysis of animal feed and animal-derived food using LC-MS/MS.
Caballero, Gerardo M; D'Angelo, Carlos; Fraguío, Mariá Sol; Centurión, Osvaldo Teme
2004-01-01
The purpose of this study is to develop a sensitive and specific alternative to current gas chromatography (GC)-mass spectrometry (MS) selected ion monitoring confirmation methods for 11-nor-delta9-tetrahydrocannabinol-9-carboxylic acid (cTHC) in human urine samples, in the context of doping analysis. An identification procedure based on the comparison, among suspicious and control samples, of the relative abundances of cTHC selected product ions obtained by GC-tandem MS in an ion trap is presented. The method complies with the identification criteria for qualitative assays established by sports authorities; the comparison procedure is precise, reproducible, specific, and sensitive, thus indicating that it is fit for the purpose of identification according to World Anti-Doping Agency requirements.
System parameter identification from projection of inverse analysis
NASA Astrophysics Data System (ADS)
Liu, K.; Law, S. S.; Zhu, X. Q.
2017-05-01
The output of a system due to a change of its parameters is often approximated with the sensitivity matrix from the first-order Taylor series. The system output can be measured in practice, but the perturbation in the system parameters is usually not available. Inverse sensitivity analysis can be adopted to estimate the unknown system parameter perturbation from the difference between the observed output data and the corresponding analytical output data calculated from the original system model. The inverse sensitivity analysis is re-visited in this paper with improvements based on Principal Component Analysis of the analytical data calculated from the known system model. The identification equation is projected into a subspace of principal components of the system output, and the sensitivity of the inverse analysis is improved with an iterative model updating procedure. The proposed method is numerically validated with a planar truss structure and dynamic experiments with a seven-storey planar steel frame. Results show that it is robust to measurement noise, and that the location and extent of stiffness perturbation can be identified with better accuracy than with the conventional response sensitivity-based method.
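As an illustrative aside, the following minimal Python sketch shows the basic inverse-sensitivity idea described above: a first-order sensitivity matrix maps parameter perturbations to output changes, the identification equation is (optionally) projected onto leading principal components, and the perturbation is recovered by least squares. The toy system, variable names, and the SVD-based projection basis are assumptions made for demonstration, not material from the paper.

```python
import numpy as np

def estimate_perturbation(S, d_obs, d_model, n_components=None):
    """Estimate the parameter perturbation dp from output residuals.

    S            : (m, p) first-order sensitivity matrix d(output)/d(parameter)
    d_obs        : (m,) measured system output
    d_model      : (m,) output computed from the current analytical model
    n_components : if given, project the identification equation onto this many
                   leading principal components before solving (a stand-in for
                   the paper's PCA-based projection).
    """
    residual = d_obs - d_model
    if n_components is not None:
        U, _, _ = np.linalg.svd(S, full_matrices=False)
        P = U[:, :n_components]                # (m, k) projection basis
        S, residual = P.T @ S, P.T @ residual
    dp, *_ = np.linalg.lstsq(S, residual, rcond=None)
    return dp

# Toy linear system: output = S_true @ p, with a small shift in p to recover.
rng = np.random.default_rng(0)
S_true = rng.normal(size=(50, 3))
dp_true = np.array([0.05, -0.02, 0.01])
d_model = S_true @ np.ones(3)
d_obs = S_true @ (np.ones(3) + dp_true) + rng.normal(scale=1e-3, size=50)
print(estimate_perturbation(S_true, d_obs, d_model, n_components=3))
```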
Bayesian Sensitivity Analysis of Statistical Models with Missing Data
ZHU, HONGTU; IBRAHIM, JOSEPH G.; TANG, NIANSHENG
2013-01-01
Methods for handling missing data depend strongly on the mechanism that generated the missing values, such as missing completely at random (MCAR) or missing at random (MAR), as well as other distributional and modeling assumptions at various stages. It is well known that the resulting estimates and tests may be sensitive to these assumptions as well as to outlying observations. In this paper, we introduce various perturbations to modeling assumptions and individual observations, and then develop a formal sensitivity analysis to assess these perturbations in the Bayesian analysis of statistical models with missing data. We develop a geometric framework, called the Bayesian perturbation manifold, to characterize the intrinsic structure of these perturbations. We propose several intrinsic influence measures to perform sensitivity analysis and quantify the effect of various perturbations to statistical models. We use the proposed sensitivity analysis procedure to systematically investigate the tenability of the non-ignorable missing at random (NMAR) assumption. Simulation studies are conducted to evaluate our methods, and a dataset is analyzed to illustrate the use of our diagnostic measures. PMID:24753718
Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation
NASA Astrophysics Data System (ADS)
Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten
2015-04-01
Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. There is little guidance available for these two steps in environmental modelling, though. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing level of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the values of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the values of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
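A hedged sketch of the bootstrap-based convergence check described above: resample the model runs, recompute a (deliberately simple) sensitivity measure for each input factor, and use the width of the bootstrap confidence intervals as the convergence statistic. The correlation-based index and the toy model below are stand-ins for the EET, RSA, or variance-based indices actually studied.

```python
import numpy as np

def corr_sensitivity(x, y):
    """Crude sensitivity proxy: |correlation| between each input factor and the output.
    (Stands in for a real index such as Morris elementary effects or Sobol' indices.)"""
    return np.abs([np.corrcoef(x[:, j], y)[0, 1] for j in range(x.shape[1])])

def bootstrap_ci_width(x, y, n_boot=500, alpha=0.05, seed=1):
    """Width of bootstrap confidence intervals of the sensitivity indices; narrow,
    stable widths for increasing sample size indicate convergence."""
    rng = np.random.default_rng(seed)
    n = len(y)
    stats = np.array([corr_sensitivity(x[idx], y[idx])
                      for idx in (rng.integers(0, n, n) for _ in range(n_boot))])
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)], axis=0)
    return hi - lo

# Toy model: the output depends strongly on x1, weakly on x2, not at all on x3.
rng = np.random.default_rng(0)
x = rng.uniform(size=(400, 3))
y = 5 * x[:, 0] + 0.5 * x[:, 1] + rng.normal(scale=0.1, size=400)
print("sensitivity proxy :", corr_sensitivity(x, y).round(3))
print("bootstrap CI width:", bootstrap_ci_width(x, y).round(3))
```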
Determination of T-2 and HT-2 toxins from maize by direct analysis in real time mass spectrometry
USDA-ARS?s Scientific Manuscript database
Direct analysis in real time (DART) ionization coupled to mass spectrometry (MS) was used for the rapid quantitative analysis of T-2 toxin, and the related HT-2 toxin, extracted from corn. Sample preparation procedures and instrument parameters were optimized to obtain sensitive and accurate determi...
Ramilo, Andrea; Navas, J Ignacio; Villalba, Antonio; Abollo, Elvira
2013-05-27
Bonamia ostreae and B. exitiosa have caused mass mortalities of various oyster species around the world and co-occur in some European areas. The World Organisation for Animal Health (OIE) has included infections with both species in the list of notifiable diseases. However, official methods for species-specific diagnosis of either parasite have certain limitations. In this study, new species-specific conventional PCR (cPCR) and real-time PCR techniques were developed to diagnose each parasite species. Moreover, a multiplex PCR method was designed to detect both parasites in a single assay. The analytical sensitivity and specificity of each new method were evaluated. These new procedures were compared with 2 OIE-recommended methods, viz. standard histology and PCR-RFLP. The new procedures showed higher sensitivity than the OIE recommended ones for the diagnosis of both species. The sensitivity of tests with the new primers was higher using oyster gills and gonad tissue, rather than gills alone. The lack of a 'gold standard' prevented accurate estimation of sensitivity and specificity of the new methods. The implementation of statistical tools (maximum likelihood method) for the comparison of the diagnostic tests showed the possibility of false positives with the new procedures, although the absence of a gold standard precluded certainty. Nevertheless, all procedures showed negative results when used for the analysis of oysters from a Bonamia-free area.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1988-12-01
This document contains twelve papers on various aspects of low-level radioactive waste management. Topics of this volume include: performance assessment methodology; remedial action alternatives; site selection and site characterization procedures; intruder scenarios; sensitivity analysis procedures; mathematical models for mixed waste environmental transport; and risk assessment methodology. Individual papers were processed separately for the database. (TEM)
Kumar, Keshav; Mishra, Ashok Kumar
2015-07-01
Fluorescence characteristics of 8-anilinonaphthalene-1-sulfonic acid (ANS) in ethanol-water mixtures, in combination with partial least squares (PLS) analysis, were used to propose a simple and sensitive analytical procedure for monitoring the adulteration of ethanol by water. The proposed analytical procedure was found to be capable of detecting even small adulteration levels of ethanol by water. The robustness of the procedure is evident from statistical parameters such as the square of the correlation coefficient (R(2)), the root mean square error of calibration (RMSEC) and the root mean square error of prediction (RMSEP), which were found to be well within the acceptable limits.
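For illustration, the sketch below builds a PLS calibration on synthetic "spectra" whose shape drifts with water content and reports R2, RMSEC, and RMSEP; the spectral model and all values are assumptions for demonstration, not the ANS data from the study.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Synthetic "fluorescence spectra": peak position shifts with water content (illustrative only).
rng = np.random.default_rng(0)
water_frac = rng.uniform(0, 0.2, size=60)                 # adulteration level
wavelengths = np.linspace(400, 600, 150)
spectra = np.array([np.exp(-((wavelengths - (470 + 100 * w)) / 40) ** 2)
                    + rng.normal(scale=0.01, size=wavelengths.size)
                    for w in water_frac])

X_cal, X_val, y_cal, y_val = train_test_split(spectra, water_frac, random_state=0)
pls = PLSRegression(n_components=3).fit(X_cal, y_cal)

rmsec = np.sqrt(np.mean((pls.predict(X_cal).ravel() - y_cal) ** 2))
rmsep = np.sqrt(np.mean((pls.predict(X_val).ravel() - y_val) ** 2))
r2 = pls.score(X_cal, y_cal)
print(f"R2={r2:.3f}  RMSEC={rmsec:.4f}  RMSEP={rmsep:.4f}")
```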
Performance optimization of helicopter rotor blades
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.
1991-01-01
As part of a center-wide activity at NASA Langley Research Center to develop multidisciplinary design procedures by accounting for discipline interactions, a performance design optimization procedure is developed. The procedure optimizes the aerodynamic performance of rotor blades by selecting the point of taper initiation, root chord, taper ratio, and maximum twist which minimize hover horsepower while not degrading forward flight performance. The procedure uses HOVT (a strip theory momentum analysis) to compute the horsepower required for hover and the comprehensive helicopter analysis program CAMRAD to compute the horsepower required for forward flight and maneuver. The optimization algorithm consists of the general purpose optimization program CONMIN and approximate analyses. Sensitivity analyses, consisting of derivatives of the objective function and constraints, are carried out by forward finite differences. The procedure is applied to a test problem which is an analytical model of a wind tunnel model of a utility rotor blade.
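A minimal sketch of the forward finite-difference sensitivity step that feeds a gradient-based optimizer such as CONMIN; the quadratic "hover power" function and the design-variable values below are placeholders for the actual HOVT/CAMRAD analyses.

```python
import numpy as np

def forward_fd_gradient(func, x, rel_step=1e-3):
    """Forward finite-difference gradient of func at design point x."""
    f0 = func(x)
    grad = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        h = rel_step * max(abs(x[i]), 1.0)
        xp = x.copy()
        xp[i] += h
        grad[i] = (func(xp) - f0) / h
    return grad

# Toy stand-in for the hover-power objective (the real procedure would call HOVT/CAMRAD here).
hover_power = lambda d: d[0] ** 2 + 2.0 * d[1] ** 2 + 0.5 * d[0] * d[2]
design = np.array([0.8, 6.0, 0.6])   # e.g. taper ratio, root chord, twist (illustrative)
print(forward_fd_gradient(hover_power, design))
```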
Dahling, Daniel R
2002-01-01
Large-scale virus studies of groundwater systems require practical and sensitive procedures for both sample processing and viral assay. Filter adsorption-elution procedures have traditionally been used to process large-volume water samples for viruses. In this study, five filter elution procedures using cartridge filters were evaluated for their effectiveness in processing samples. Of the five procedures tested, the third method, which incorporated two separate beef extract elutions (one being an overnight filter immersion in beef extract), recovered 95% of seeded poliovirus compared with recoveries of 36 to 70% for the other methods. For viral enumeration, an expanded roller bottle quantal assay was evaluated using seeded poliovirus. This cytopathic-based method was considerably more sensitive than the standard plaque assay method. The roller bottle system was more economical than the plaque assay for the evaluation of comparable samples. Using roller bottles required less time and manipulation than the plaque procedure and greatly facilitated the examination of large numbers of samples. The combination of the improved filter elution procedure and the roller bottle assay for viral analysis makes large-scale virus studies of groundwater systems practical. This procedure was subsequently field tested during a groundwater study in which large-volume samples (exceeding 800 L) were processed through the filters.
Disclosure of sensitive behaviors across self-administered survey modes: a meta-analysis.
Gnambs, Timo; Kaspar, Kai
2015-12-01
In surveys, individuals tend to misreport behaviors that are in contrast to prevalent social norms or regulations. Several design features of the survey procedure have been suggested to counteract this problem; particularly, computerized surveys are supposed to elicit more truthful responding. This assumption was tested in a meta-analysis of survey experiments reporting 460 effect sizes (total N =125,672). Self-reported prevalence rates of several sensitive behaviors for which motivated misreporting has been frequently observed were compared across self-administered paper-and-pencil versus computerized surveys. The results revealed that computerized surveys led to significantly more reporting of socially undesirable behaviors than comparable surveys administered on paper. This effect was strongest for highly sensitive behaviors and surveys administered individually to respondents. Moderator analyses did not identify interviewer effects or benefits of audio-enhanced computer surveys. The meta-analysis highlighted the advantages of computerized survey modes for the assessment of sensitive topics.
A value-based medicine cost-utility analysis of idiopathic epiretinal membrane surgery.
Gupta, Omesh P; Brown, Gary C; Brown, Melissa M
2008-05-01
To perform a reference case, cost-utility analysis of epiretinal membrane (ERM) surgery using current literature on outcomes and complications. Computer-based, value-based medicine analysis. Decision analyses were performed under two scenarios: ERM surgery in better-seeing eye and ERM surgery in worse-seeing eye. The models applied long-term published data primarily from the Blue Mountains Eye Study and the Beaver Dam Eye Study. Visual acuity and major complications were derived from 25-gauge pars plana vitrectomy studies. Patient-based, time trade-off utility values, Markov modeling, sensitivity analysis, and net present value adjustments were used in the design and calculation of results. Main outcome measures included the number of discounted quality-adjusted-life-years (QALYs) gained and dollars spent per QALY gained. ERM surgery in the better-seeing eye compared with observation resulted in a mean gain of 0.755 discounted QALYs (3% annual rate) per patient treated. This model resulted in $4,680 per QALY for this procedure. When sensitivity analysis was performed, utility values varied from $6,245 to $3,746/QALY gained, medical costs varied from $3,510 to $5,850/QALY gained, and ERM recurrence rate increased to $5,524/QALY. ERM surgery in the worse-seeing eye compared with observation resulted in a mean gain of 0.27 discounted QALYs per patient treated. The $/QALY was $16,146 with a range of $20,183 to $12,110 based on sensitivity analyses. Utility values ranged from $21,520 to $12,916/QALY and ERM recurrence rate increased to $16,846/QALY based on sensitivity analysis. ERM surgery is a very cost-effective procedure when compared with other interventions across medical subspecialties.
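By way of illustration, the discounting arithmetic behind a cost-utility ratio of this kind can be sketched in a few lines of Python; the utility gain, time horizon, discount rate, and cost below are made-up values for demonstration, not figures from the study.

```python
def discounted_qalys(annual_utility_gain, years, rate=0.03):
    """Sum of yearly utility gains discounted at `rate` (net-present-value style)."""
    return sum(annual_utility_gain / (1 + rate) ** t for t in range(1, years + 1))

def cost_per_qaly(total_cost, annual_utility_gain, years, rate=0.03):
    """Dollars spent per discounted quality-adjusted life-year gained."""
    return total_cost / discounted_qalys(annual_utility_gain, years, rate)

# Illustrative numbers only (not the values from the study):
qalys = discounted_qalys(annual_utility_gain=0.06, years=15)
print(f"discounted QALYs gained: {qalys:.3f}")
print(f"$/QALY: {cost_per_qaly(3500, 0.06, 15):,.0f}")
```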
Ronco, Guglielmo; Giorgi-Rossi, Paolo; Carozzi, Francesca; Dalla Palma, Paolo; Del Mistro, Annarosa; De Marco, Laura; De Lillo, Margherita; Naldoni, Carlo; Pierotti, Paola; Rizzolo, Raffaella; Segnan, Nereo; Schincaglia, Patrizia; Zorzi, Manuel; Confortini, Massimo; Cuzick, Jack
2006-07-01
Testing for human papillomavirus (HPV) DNA is more sensitive but less specific than cytological analysis. Loss in specificity is most relevant in women younger than 35 years because of increased HPV prevalence. We aimed to compare conventional screening with an experimental strategy in women aged 25-34 years, and investigate the effect of different criteria of referral to define the best methods of HPV screening. Women were randomly assigned to the conventional procedure (standard cytology, with referral to colposcopy if cytology showed atypical squamous cells of undetermined significance or more [ASCUS+]) or an experimental procedure (liquid-based cytology and testing for high-risk HPV types, with referral to colposcopy with ASCUS+ cytology). Women positive for HPV (cutoff > or = 1 pg/mL) but with normal cytology were retested after 1 year. The main endpoint was the presence of cervical intraepithelial neoplasia at grade 2 or more (CIN2+) in reviewed histology. The main analysis was by intention to screen. This trial is registered as an International Standard Randomised Controlled Trial, number ISRCTN81678807. We randomly assigned 5808 women aged 25-34 years to the conventional group and 6002 to the experimental group. The experimental procedure was significantly more sensitive than the conventional procedure (55 vs 33 CIN2+ lesions detected; relative sensitivity 1.61 [95% CI 1.05-2.48]), but had a lower positive predictive value (PPV; relative PPV 0.55 [0.37-0.82]). HPV testing (> or = 1 pg/mL) with cytology triage was also more sensitive than conventional cytology (relative sensitivity 1.58 [1.03-2.44], relative PPV 0.78 [0.52-1.16]). Relative PPV could be improved, with minimum loss in sensitivity, by use of a 2 pg/mL cutoff for HPV testing. Compared with conventional cytology, liquid-based cytology had a relative sensitivity of 1.32 (0.84-2.06), relative PPV 0.58 [0.38-0.89]). HPV testing alone with cytology triage could be a feasible alternative to conventional cytology for screening women younger than 35 years. Follow-up will provide data on possible overdiagnosis and on the feasibility of extended intervals.
Brock Stewart; Chris J. Cieszewski; Michal Zasada
2005-01-01
This paper presents a sensitivity analysis of the impact of various definitions and inclusions of different variables in the Forest Inventory and Analysis (FIA) inventory on data compilation results. FIA manuals have been changing recently to make the inventory consistent between all the States. Our analysis demonstrates the importance (or insignificance) of different...
Comparing five alternative methods of breast reconstruction surgery: a cost-effectiveness analysis.
Grover, Ritwik; Padula, William V; Van Vliet, Michael; Ridgway, Emily B
2013-11-01
The purpose of this study was to assess the cost-effectiveness of five standardized procedures for breast reconstruction to delineate the best reconstructive approach in postmastectomy patients in the settings of nonirradiated and irradiated chest walls. A decision tree was used to model five breast reconstruction procedures from the provider perspective to evaluate cost-effectiveness. Procedures included autologous flaps with pedicled tissue, autologous flaps with free tissue, latissimus dorsi flaps with breast implants, expanders with implant exchange, and immediate implant placement. All methods were compared with a "do-nothing" alternative. Data for model parameters were collected through a systematic review, and patient health utilities were calculated from an ad hoc survey of reconstructive surgeons. Results were measured in cost (2011 U.S. dollars) per quality-adjusted life-year. Univariate sensitivity analyses and Bayesian multivariate probabilistic sensitivity analysis were conducted. Pedicled autologous tissue and free autologous tissue reconstruction were cost-effective compared with the do-nothing alternative. Pedicled autologous tissue was the slightly more cost-effective of the two. The other procedures were not found to be cost-effective. The results were robust to a number of sensitivity analyses, although the margin between pedicled and free autologous tissue reconstruction is small and affected by some parameter values. Autologous pedicled tissue was slightly more cost-effective than free tissue reconstruction in irradiated and nonirradiated patients. Implant-based techniques were not cost-effective. This is in agreement with the growing trend at academic institutions to encourage autologous tissue reconstruction because of its natural recreation of the breast contour, suppleness, and resiliency in the setting of irradiated recipient beds.
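To illustrate the probabilistic (multivariate) sensitivity analysis step in general terms, the sketch below draws costs and QALYs for two hypothetical strategies from assumed distributions and reports how often one strategy has the higher net monetary benefit; every distribution and threshold is an illustrative assumption, not a parameter from the study.

```python
import numpy as np

def psa(n=10_000, wtp=50_000, seed=0):
    """Monte Carlo probabilistic sensitivity analysis comparing two strategies."""
    rng = np.random.default_rng(seed)
    # Illustrative distributions (not the study's parameters):
    cost_a = rng.gamma(shape=100, scale=120, size=n)   # strategy A, e.g. pedicled flap
    cost_b = rng.gamma(shape=100, scale=140, size=n)   # strategy B, e.g. free flap
    qaly_a = rng.normal(1.95, 0.15, size=n)
    qaly_b = rng.normal(2.00, 0.15, size=n)
    # Net monetary benefit at the willingness-to-pay threshold
    nmb_a = wtp * qaly_a - cost_a
    nmb_b = wtp * qaly_b - cost_b
    return (nmb_a > nmb_b).mean()

print(f"P(strategy A cost-effective at $50k/QALY) = {psa():.2f}")
```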
The SCALE Verified, Archived Library of Inputs and Data - VALID
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marshall, William BJ J; Rearden, Bradley T
The Verified, Archived Library of Inputs and Data (VALID) at ORNL contains high quality, independently reviewed models and results that improve confidence in analysis. VALID is developed and maintained according to a procedure of the SCALE quality assurance (QA) plan. This paper reviews the origins of the procedure and its intended purpose, the philosophy of the procedure, some highlights of its implementation, and the future of the procedure and associated VALID library. The original focus of the procedure was the generation of high-quality models that could be archived at ORNL and applied to many studies. The review process associated with model generation minimized the chances of errors in these archived models. Subsequently, the scope of the library and procedure was expanded to provide high quality, reviewed sensitivity data files for deployment through the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE). Sensitivity data files for approximately 400 such models are currently available. The VALID procedure and library continue fulfilling these multiple roles. The VALID procedure is based on the quality assurance principles of ISO 9001 and nuclear safety analysis. Some of these key concepts include: independent generation and review of information, generation and review by qualified individuals, use of appropriate references for design data and documentation, and retrievability of the models, results, and documentation associated with entries in the library. Some highlights of the detailed procedure are discussed to provide background on its implementation and to indicate limitations of data extracted from VALID for use by the broader community. Specifically, external users of data generated within VALID must take responsibility for ensuring that the files are used within the QA framework of their organization and that use is appropriate. The future plans for the VALID library include expansion to include additional experiments from the IHECSBE, to include experiments from areas beyond criticality safety, such as reactor physics and shielding, and to include application models. In the future, external SCALE users may also obtain qualification under the VALID procedure and be involved in expanding the library. The VALID library provides a pathway for the criticality safety community to leverage modeling and analysis expertise at ORNL.
NASA Technical Reports Server (NTRS)
1971-01-01
The findings, conclusions, and recommendations relative to the investigations conducted to evaluate tests for classifying pyrotechnic materials and end items as to their hazard potential are presented. Information required to establish an applicable means of determining the potential hazards of pyrotechnics is described. Hazard evaluations are based on the peak overpressure or impulse resulting from the explosion as a function of distance from the source. Other hazard classification tests include dust ignition sensitivity, impact ignition sensitivity, spark ignition sensitivity, and differential thermal analysis.
Queiroz, R H; Lanchote, V L; Bonato, P S; Tozato, E; de Carvalho, D; Gomes, M A; Cerdeira, A L
1999-06-01
A simple, rapid and quantitative bioassay method was compared to a gas chromatography/mass spectrometry (GC/MS) procedure for the analysis of ametryn in surface and groundwater. This method was based on the activity of ametryn in inhibiting the growth of the primary root and shoot of germinating lettuce, Lactuca sativa L., seed. The procedure was sensitive to 0.01 microgram/l and was applicable from this concentration up to 0.6 microgram/l. Initial surface sterilization of the seed, selection of pregerminated seed of certain root lengths and special equipment are not necessary. We therefore concluded that the sensitivity of the bioassay method is compatible with the chromatographic method (GC/MS). However, the study of the correlation between the methods suggests that the bioassay should be used only as a screening technique for the evaluation of ametryn residues in water.
Bermudo, R; Abia, D; Mozos, A; García-Cruz, E; Alcaraz, A; Ortiz, Á R; Thomson, T M; Fernández, P L
2011-01-01
Introduction: Currently, final diagnosis of prostate cancer (PCa) is based on histopathological analysis of needle biopsies, but this process often bears uncertainties due to small sample size, tumour focality and pathologist's subjective assessment. Methods: Prostate cancer diagnostic signatures were generated by applying linear discriminant analysis to microarray and real-time RT–PCR (qRT–PCR) data from normal and tumoural prostate tissue samples. Additionally, after removal of biopsy tissues, material washed off from transrectal biopsy needles was used for molecular profiling and discriminant analysis. Results: Linear discriminant analysis applied to microarray data for a set of 318 genes differentially expressed between non-tumoural and tumoural prostate samples produced 26 gene signatures, which classified the 84 samples used with 100% accuracy. To identify signatures potentially useful for the diagnosis of prostate biopsies, surplus material washed off from routine biopsy needles from 53 patients was used to generate qRT–PCR data for a subset of 11 genes. This analysis identified a six-gene signature that correctly assigned the biopsies as benign or tumoural in 92.6% of the cases, with 88.8% sensitivity and 96.1% specificity. Conclusion: Surplus material from prostate needle biopsies can be used for minimal-size gene signature analysis for sensitive and accurate discrimination between non-tumoural and tumoural prostates, without interference with current diagnostic procedures. This approach could be a useful adjunct to current procedures in PCa diagnosis. PMID:22009027
Design sensitivity analysis and optimization tool (DSO) for sizing design applications
NASA Technical Reports Server (NTRS)
Chang, Kuang-Hua; Choi, Kyung K.; Perng, Jyh-Hwa
1992-01-01
The DSO tool, a structural design software system that provides the designer with a graphics-based menu-driven design environment to perform easy design optimization for general applications, is presented. Three design stages, preprocessing, design sensitivity analysis, and postprocessing, are implemented in the DSO to allow the designer to carry out the design process systematically. A framework, including data base, user interface, foundation class, and remote module, has been designed and implemented to facilitate software development for the DSO. A number of dedicated commercial software/packages have been integrated in the DSO to support the design procedures. Instead of parameterizing an FEM, design parameters are defined on a geometric model associated with physical quantities, and the continuum design sensitivity analysis theory is implemented to compute design sensitivity coefficients using postprocessing data from the analysis codes. A tracked vehicle road wheel is given as a sizing design application to demonstrate the DSO's easy and convenient design optimization process.
Aerial Radiological Measuring System (ARMS): systems, procedures and sensitivity (1976)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boyns, P K
1976-07-01
This report describes the Aerial Radiological Measuring System (ARMS) designed and operated by EG and G, Inc., for the Energy Research and Development Administration's (ERDA) Division of Operational Safety with the cooperation of the Nuclear Regulatory Commission. Designed to rapidly survey large areas for low-level man-made radiation, the ARMS has also proven extremely useful in locating lost radioactive sources of relatively low activity. The system consists of sodium iodide scintillation detectors, data formatting and recording equipment, positioning equipment, meteorological instruments, direct readout hardware, and data analysis equipment. The instrumentation, operational procedures, data reduction techniques and system sensitivities are described, together with their applications and sample results.
NASA Astrophysics Data System (ADS)
Safaei, S.; Haghnegahdar, A.; Razavi, S.
2016-12-01
Complex environmental models are now the primary tool to inform decision makers for the current or future management of environmental resources under climate and environmental changes. These complex models often contain a large number of parameters that need to be determined by a computationally intensive calibration procedure. Sensitivity analysis (SA) is a very useful tool that not only allows for understanding the model behavior, but also helps in reducing the number of calibration parameters by identifying unimportant ones. The issue is that most global sensitivity techniques are themselves highly computationally demanding when generating robust and stable sensitivity metrics over the entire model response surface. Recently, a novel global sensitivity analysis method, Variogram Analysis of Response Surfaces (VARS), has been introduced that can efficiently provide a comprehensive assessment of global sensitivity using the variogram concept. In this work, we aim to evaluate the effectiveness of this highly efficient GSA method in saving computational burden when applied to systems with an extra-large number of input factors (on the order of 100). We use a test function and a hydrological modelling case study to demonstrate the capability of the VARS method in reducing problem dimensionality by identifying important vs. unimportant input factors.
NASA Astrophysics Data System (ADS)
Zayed, M. A.; El-Rasheedy, El-Gazy A.
2012-03-01
Two simple, sensitive, cheap and reliable spectrophotometric methods are suggested for the micro-determination of pseudoephedrine in its pure form and in a pharmaceutical preparation (Sinofree Tablets). The first depends on the reaction of the drug with an inorganic sensitive reagent, the molybdate anion, in aqueous media via an ion-pair formation mechanism. The second depends on the reaction of the drug with a π-acceptor reagent, DDQ, in non-aqueous media via formation of a charge transfer complex. These reactions were studied under various conditions and the optimum parameters were selected. Under the proper conditions the suggested procedures were successfully applied for the micro-determination of pseudoephedrine in pure form and in Sinofree Tablets without interference from excipients. The values of SD, RSD, recovery %, LOD, LOQ and Sandell sensitivity indicate the high accuracy and precision of the applied procedures. The results obtained were compared with data obtained by an official method, showing agreement with the DDQ procedure results, while indicating greater accuracy for the molybdate data. Therefore, the suggested procedures are now being applied successfully in the routine analysis of this drug in its pharmaceutical formulation (Sinofree) at the Saudi Arabian Pharmaceutical Company (SPIMACO) in Boridah El-Qaseem, Saudi Arabia, instead of the imported kits previously used.
Scale/TSUNAMI Sensitivity Data for ICSBEP Evaluations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rearden, Bradley T; Reed, Davis Allan; Lefebvre, Robert A
2011-01-01
The Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) software developed at Oak Ridge National Laboratory (ORNL) as part of the Scale code system provides unique methods for code validation, gap analysis, and experiment design. For TSUNAMI analysis, sensitivity data are generated for each application and each existing or proposed experiment used in the assessment. The validation of diverse sets of applications requires potentially thousands of data files to be maintained and organized by the user, and a growing number of these files are available through the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE) distributed through the International Criticality Safety Benchmark Evaluation Program (ICSBEP). To facilitate the use of the IHECSBE benchmarks in rigorous TSUNAMI validation and gap analysis techniques, ORNL generated SCALE/TSUNAMI sensitivity data files (SDFs) for several hundred benchmarks for distribution with the IHECSBE. For the 2010 edition of IHECSBE, the sensitivity data were generated using 238-group cross-section data based on ENDF/B-VII.0 for 494 benchmark experiments. Additionally, ORNL has developed a quality assurance procedure to guide the generation of Scale inputs and sensitivity data, as well as a graphical user interface to facilitate the use of sensitivity data in identifying experiments and applying them in validation studies.
Analysis of Nonvolatile Residue (NVR) from Spacecraft Systems
NASA Technical Reports Server (NTRS)
Colony, J. A.
1985-01-01
Organic contamination on critical spacecraft surfaces can cause electronic problems, serious attenuation of various optical signals, thermal control changes, and adhesion problems. Such contaminants can be detected early by the controlled use of witness mirrors, witness plates, wipe sampling, or direct solvent extraction. Each method requires careful control of variables of technique and materials to attain the ultimate sensitivities inherent to that procedure. Subsequent chemical analysis of the contaminant sample by infrared and mass spectrometry identifies the components, gives semiquantitative estimates of contaminant thickness, indicates possible sources of the nonvolatile residue (NVR), and provides guidance for effective cleanup procedures.
The NBS Energy Model Assessment project: Summary and overview
NASA Astrophysics Data System (ADS)
Gass, S. I.; Hoffman, K. L.; Jackson, R. H. F.; Joel, L. S.; Saunders, P. B.
1980-09-01
The activities and technical reports for the project are summarized. The reports cover: assessment of the documentation of Midterm Oil and Gas Supply Modeling System; analysis of the model methodology characteristics of the input and other supporting data; statistical procedures undergirding construction of the model and sensitivity of the outputs to variations in input, as well as guidelines and recommendations for the role of these in model building and developing procedures for their evaluation.
Financial analysis of technology acquisition using fractionated lasers as a model.
Jutkowitz, Eric; Carniol, Paul J; Carniol, Alan R
2010-08-01
Ablative fractional lasers are among the most advanced and costly devices on the market. Yet, there is a dearth of published literature on the cost and potential return on investment (ROI) of such devices. The objective of this study was to provide a methodological framework for physicians to evaluate ROI. To facilitate this analysis, we conducted a case study on the potential ROI of eight ablative fractional lasers. In the base case analysis, a 5-year lease and a 3-year lease were assumed as the purchase option with a $0 down payment and 3-month payment deferral. In addition to lease payments, service contracts, labor cost, and disposables were included in the total cost estimate. Revenue was estimated as price per procedure multiplied by total number of procedures in a year. Sensitivity analyses were performed to account for variability in model assumptions. Based on the assumptions of the model, all lasers had higher ROI under the 5-year lease agreement compared with that for the 3-year lease agreement. When comparing results between lasers, those with lower operating and purchase cost delivered a higher ROI. Sensitivity analysis indicates the model is most sensitive to purchase method. If physicians opt to purchase the device rather than lease, they can significantly enhance ROI. ROI analysis is an important tool for physicians who are considering making an expensive device acquisition. However, physicians should not rely solely on ROI and must also consider the clinical benefits of a laser. (c) Thieme Medical Publishers.
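The ROI framework described above is essentially arithmetic: annual revenue is the price per procedure times the procedure volume, and annual cost combines lease payments, service contracts, labor, and disposables. A hedged Python sketch with made-up inputs (not the study's figures) follows.

```python
def annual_roi(price_per_procedure, procedures_per_year,
               monthly_lease, annual_service, labor_per_procedure,
               disposable_per_procedure):
    """Simple annual return on investment: (revenue - cost) / cost."""
    revenue = price_per_procedure * procedures_per_year
    cost = (12 * monthly_lease + annual_service
            + (labor_per_procedure + disposable_per_procedure) * procedures_per_year)
    return (revenue - cost) / cost

# Illustrative inputs only (not figures from the study):
roi = annual_roi(price_per_procedure=900, procedures_per_year=120,
                 monthly_lease=2200, annual_service=8000,
                 labor_per_procedure=60, disposable_per_procedure=100)
print(f"ROI = {roi:.1%}")
```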
Lucassen, Nicole; Tharner, Anne; Van Ijzendoorn, Marinus H; Bakermans-Kranenburg, Marian J; Volling, Brenda L; Verhulst, Frank C; Lambregtse-Van den Berg, Mijke P; Tiemeier, Henning
2011-12-01
For almost three decades, the association between paternal sensitivity and infant-father attachment security has been studied. The first wave of studies on the correlates of infant-father attachment showed a weak association between paternal sensitivity and infant-father attachment security (r = .13, p < .001, k = 8, N = 546). In the current paper, a meta-analysis of the association between paternal sensitivity and infant-father attachment based on all studies currently available is presented, and the change over time of the association between paternal sensitivity and infant-father attachment is investigated. Studies using an observational measure of paternal interactive behavior with the infant, and the Strange Situation Procedure to observe the attachment relationship were included. Paternal sensitivity is differentiated from paternal sensitivity combined with stimulation in the interaction with the infant. Higher levels of paternal sensitivity were associated with more infant-father attachment security (r = .12, p < .001, k = 16, N = 1,355). Fathers' sensitive play combined with stimulation was not more strongly associated with attachment security than sensitive interactions without stimulation of play. Despite possible changes in paternal role patterns, we did not find stronger associations between paternal sensitivity and infant attachment in more recent years.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fang, Y; Huang, H; Su, T
Purpose: Texture-based quantification of image heterogeneity has been a popular topic for imaging studies in recent years. As previous studies mainly focus on oncological applications, we report our recent efforts of applying such techniques to cardiac perfusion imaging. A fully automated procedure has been developed to perform texture analysis for measuring the image heterogeneity. Clinical data were used to evaluate the preliminary performance of such methods. Methods: Myocardial perfusion images of Thallium-201 scans were collected from 293 patients with suspected coronary artery disease. Each subject underwent a Tl-201 scan and a percutaneous coronary intervention (PCI) within three months. The PCI result was used as the gold standard of coronary ischemia of more than 70% stenosis. Each Tl-201 scan was spatially normalized to an image template for fully automatic segmentation of the LV. The segmented voxel intensities were then carried into the texture analysis with our open-source software Chang Gung Image Texture Analysis toolbox (CGITA). To evaluate the clinical performance of the image heterogeneity for detecting coronary stenosis, receiver operating characteristic (ROC) analysis was used to compute the overall accuracy, sensitivity and specificity as well as the area under curve (AUC). Those indices were compared to those obtained from the commercially available semi-automatic software QPS. Results: With the fully automatic procedure to quantify heterogeneity from Tl-201 scans, we were able to achieve good discrimination with good accuracy (74%), sensitivity (73%), specificity (77%) and an AUC of 0.82. Such performance is similar to that obtained from the semi-automatic QPS software, which gives a sensitivity of 71% and specificity of 77%. Conclusion: Based on fully automatic procedures of data processing, our preliminary data indicate that the image heterogeneity of myocardial perfusion imaging can provide useful information for automatic determination of myocardial ischemia.
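As a generic illustration of the ROC evaluation step (not the CGITA implementation), the sketch below scores a synthetic heterogeneity index against a binary gold standard and reports the AUC plus sensitivity and specificity at the Youden-optimal cutoff; all data are invented.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Synthetic heterogeneity scores: higher in "ischemic" cases (illustrative only).
rng = np.random.default_rng(0)
y_true = np.r_[np.zeros(150), np.ones(143)]                  # gold standard (e.g. PCI result)
scores = np.r_[rng.normal(0.40, 0.10, 150), rng.normal(0.55, 0.10, 143)]

auc = roc_auc_score(y_true, scores)
fpr, tpr, thr = roc_curve(y_true, scores)
best = np.argmax(tpr - fpr)                                   # Youden index cutoff
print(f"AUC={auc:.2f}  sensitivity={tpr[best]:.2f}  "
      f"specificity={1 - fpr[best]:.2f}  cutoff={thr[best]:.3f}")
```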
Sensitivity assessment of sea lice to chemotherapeutants: Current bioassays and best practices.
Marín, S L; Mancilla, J; Hausdorf, M A; Bouchard, D; Tudor, M S; Kane, F
2017-12-18
Traditional bioassays are still necessary to test sensitivity of sea lice species to chemotherapeutants, but the methodology applied by the different scientists has varied over time in respect to that proposed in "Sea lice resistance to chemotherapeutants: A handbook in resistance management" (2006). These divergences motivated the organization of a workshop during the Sea Lice 2016 conference "Standardization of traditional bioassay process by sharing best practices." There was an agreement by the attendants to update the handbook. The objective of this article is to provide a baseline analysis of the methodology for traditional bioassays and to identify procedures that need to be addressed to standardize the protocol. The methodology was divided into the following steps: bioassay design; material and equipment; sea lice collection, transportation and laboratory reception; preparation of dilution; parasite exposure; response evaluation; data analysis; and reporting. Information from the presentations of the workshop, and also from other studies, allowed for the identification of procedures inside a given step that need to be standardized as they were reported to be performed differently by the different working groups. Bioassay design and response evaluation were the targeted steps where more procedures need to be analysed and agreed upon. © 2017 John Wiley & Sons Ltd.
Cunningham, Charles E; Kostrzewa, Linda; Rimas, Heather; Chen, Yvonne; Deal, Ken; Blatz, Susan; Bowman, Alida; Buchanan, Don H; Calvert, Randy; Jennings, Barbara
2013-01-01
Patients value health service teams that function effectively. Organizational justice is linked to the performance, health, and emotional adjustment of the members of these teams. We used a discrete-choice conjoint experiment to study the organizational justice improvement preferences of pediatric health service providers. Using themes from a focus group with 22 staff, we composed 14 four-level organizational justice improvement attributes. A sample of 652 staff (76 % return) completed 30 choice tasks, each presenting three hospitals defined by experimentally varying the attribute levels. Latent class analysis yielded three segments. Procedural justice attributes were more important to the Decision Sensitive segment, 50.6 % of the sample. They preferred to contribute to and understand how all decisions were made and expected management to act promptly on more staff suggestions. Interactional justice attributes were more important to the Conduct Sensitive segment (38.5 %). A universal code of respectful conduct, consequences encouraging respectful interaction, and management's response when staff disagreed with them were more important to this segment. Distributive justice attributes were more important to the Benefit Sensitive segment, 10.9 % of the sample. Simulations predicted that, while Decision Sensitive (74.9 %) participants preferred procedural justice improvements, Conduct (74.6 %) and Benefit Sensitive (50.3 %) participants preferred interactional justice improvements. Overall, 97.4 % of participants would prefer an approach combining procedural and interactional justice improvements. Efforts to create the health service environments that patients value need to be comprehensive enough to address the preferences of segments of staff who are sensitive to different dimensions of organizational justice.
Examining the accuracy of the infinite order sudden approximation using sensitivity analysis
NASA Astrophysics Data System (ADS)
Eno, Larry; Rabitz, Herschel
1981-08-01
A method is developed for assessing the accuracy of scattering observables calculated within the framework of the infinite order sudden (IOS) approximation. In particular, we focus on the energy sudden assumption of the IOS method, and our approach involves the determination of the sensitivity of the IOS scattering matrix S^IOS with respect to a parameter which reintroduces the internal energy operator h0 into the IOS Hamiltonian. This procedure is an example of sensitivity analysis of missing model components (h0 in this case) in the reference Hamiltonian. In contrast to simple first-order perturbation theory, a finite result is obtained for the effect of h0 on S^IOS. As an illustration, our method of analysis is applied to integral state-to-state cross sections for the scattering of an atom and a rigid rotor. Results are generated within the He+H2 system and a comparison is made between IOS and coupled states cross sections and the corresponding IOS sensitivities. It is found that the sensitivity coefficients are very useful indicators of the accuracy of the IOS results. Finally, further developments and applications are discussed.
Janus, Tomasz; Jasionowicz, Ewa; Potocka-Banaś, Barbara; Borowiak, Krzysztof
Routine toxicological analysis is mostly focused on the identification of inorganic and organic, chemically different compounds, generally with low mass, usually not greater than 500–600 Da. Peptide compounds with atomic mass higher than 900 Da are a specific analytical group. Several dozen of them are highly toxic substances well known in toxicological practice, for example mushroom toxins and animal venoms. In the paper the authors present the example of alpha-amanitin to explain the analytical problems and different original solutions in identifying peptides in urine samples with the use of a universal LC-MS/MS procedure. The analyzed material was urine samples collected from patients with potential mushroom intoxication, routinely diagnosed for amanitin determination. Ultrafiltration with centrifuge filter tubes (mass cutoff 3 kDa) was used. The filtrate was directly injected onto the chromatographic column and analyzed with a mass detector (MS/MS). The separation of peptides, as organic, amphoteric compounds, from biological material with the use of the SPE technique is well known but requires dedicated, specific columns. The presented paper proved that with the fast and simple ultrafiltration technique amanitin can be effectively isolated from urine, and the procedure offers satisfactory sensitivity of detection and eliminates the influence of the biological matrix on analytical results. Another problem which had to be solved was the non-characteristic fragmentation of peptides in the MS/MS procedure, which produces non-selective chromatograms. It is possible to use higher collision energies in the analytical procedure, which results in more characteristic mass spectra, although it offers lower sensitivity. The ultrafiltration technique as a sample preparation procedure is effective for the isolation of amanitin from the biological matrix. The monitoring of a selected mass corresponding to the transition with the loss of a water molecule offers satisfactory sensitivity of determination.
Saladino, R; Crestini, C; Mincione, E; Costanzo, G; Di Mauro, E; Negri, R
1997-11-01
We describe the reaction of formamide with 2'-deoxycytidine to give pyrimidine ring opening by nucleophilic addition on the electrophilic C(6) and C(4) positions. This information is confirmed by the analysis of the products of formamide attack on 2'-deoxycytidine, 5-methyl-2'-deoxycytidine, and 5-bromo-2'-deoxycytidine, residues when the latter are incorporated into oligonucleotides by DNA polymerase-driven polymerization and solid-phase phosphoramidite procedure. The increased sensitivity of 5-bromo-2'-deoxycytidine relative to that of 2'-deoxycytidine is pivotal for the improvement of the one-lane chemical DNA sequencing procedure based on the base-selective reaction of formamide with DNA. In many DNA sequencing cases it will in fact be possible to incorporate this base analogue into the DNA to be sequenced, thus providing a complete discrimination between its UV absorption signal and that of the thymidine residues. The wide spectrum of different sensitivities to formamide displayed by the 2'-deoxycytidine analogues solves, in the DNA single-lane chemical sequencing procedure, the possible source of errors due to low discrimination between C and T residues.
Mannelli, Ilaria; Minunni, Maria; Tombelli, Sara; Mascini, Marco
2003-03-01
A DNA piezoelectric sensor has been developed for the detection of genetically modified organisms (GMOs). Single stranded DNA (ssDNA) probes were immobilised on the sensor surface of a quartz crystal microbalance (QCM) device and the hybridisation between the immobilised probe and the target complementary sequence in solution was monitored. The probe sequences were internal to the sequence of the 35S promoter (P) and Nos terminator (T), which are inserted sequences in the genome of GMOs regulating the transgene expression. Two different probe immobilisation procedures were applied: (a) a thiol-dextran procedure and (b) a thiol-derivatised probe and blocking thiol procedure. The system has been optimised using synthetic oligonucleotides, which were then applied to samples of plasmidic and genomic DNA isolated from the pBI121 plasmid, certified reference materials (CRM), and real samples amplified by the polymerase chain reaction (PCR). The analytical parameters of the sensor have been investigated (sensitivity, reproducibility, lifetime etc.). The results obtained showed that both immobilisation procedures enabled sensitive and specific detection of GMOs, providing a useful tool for screening analysis in food samples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alkalay, D.; Khemani, L.; Bartlett, M.F.
1978-01-01
Sensitive radioimmunoassay procedures are described for measuring p-hydroxyphenformin and apparent phenformin in human plasma or serum. The p-hydroxyphenformin procedure, which requires advance phenformin analysis, offers excellent accuracy. The determinations of apparent phenformin are influenced by the phenformin and p-hydroxyphenformin contents of the samples. Over a period of four months, the methods showed a precision associated with relative standard deviations of 12% for apparent phenformin and 13% for p-hydroxyphenformin.
Modal Analysis for Grid Operation
DOE Office of Scientific and Technical Information (OSTI.GOV)
The MANGO software provides a solution for improving small signal stability of power systems by adjusting operator-controllable variables using PMU measurements. System oscillation problems are one of the major threats to grid stability and reliability in California and the Western Interconnection. These problems result in power fluctuations, lower grid operation efficiency, and may even lead to large-scale grid breakup and outages. The MANGO software aims to solve this problem by automatically generating recommended operation procedures, termed Modal Analysis for Grid Operation (MANGO), to improve damping of inter-area oscillation modes. The MANGO procedure includes three steps: recognizing small signal stability problems, implementing operating point adjustment using modal sensitivity, and evaluating the effectiveness of the adjustment. The MANGO software package is designed to help implement the MANGO procedure.
HLA-typing analysis following allogeneic bone grafting for sinus lifting.
Piaia, Marcelo; Bub, Carolina Bonet; Succi, Guilherme de Menezes; Torres, Margareth; Costa, Thiago Henrique; Pinheiro, Fabricio Costa; Napimoga, Marcelo Henrique
2017-03-01
According to the Brazilian Association of Organ Transplants, in 2015, 19,408 bone transplants were performed in Brazil, over 90% by dental surgeons. The surgical technique itself has a respectable number of reports regarding its clinical efficacy, as measured by long-term survival of dental implants in grafted areas. Uncertainty remains, however, as to whether fresh frozen grafts from human bone donors remain immunologically innocuous in the body of the host. Six male patients with no previous medical history of note, including systemic diseases, surgery or blood transfusion, were selected. These patients underwent reconstructive procedures (sinus lifting) using fresh frozen human bone from a tissue bank. All patients had venous blood samples collected prior to surgery and 6 months after the procedure. Anti-HLA analysis for the detection of HLA (human leukocyte antigen) antibodies was performed using the LABScreen PRA Class I and Class II and LABScreen Single Antigen Class I and Class II assays on the Luminex platform. Individuals reactive to the screening tests (LABScreen PRA) were further investigated to determine the specificity of the antibodies detected (LABScreen Single Antigen), with a cutoff value of median fluorescence intensity ≥500. As a result, it was observed that two patients (33%) were positive in the screening tests, one presenting with anti-HLA class I and II sensitization and the other with anti-HLA class II. The specificity analysis showed that the patient sensitized to HLA class II presented 4 specificities, 3 of which were immunologically relevant. In the second individual, 23 specificities were identified, 6 of which were immunologically important for HLA class I, and 4 specificities for HLA class II, 3 of these immunologically important. All specificities detected had average fluorescence. These findings suggest that sinus-lifting procedures with allogeneic bone can induce immunological sensitization.
NASA Astrophysics Data System (ADS)
Romano, N.; Petroselli, A.; Grimaldi, S.
2012-04-01
With the aim of combining the practical advantages of the Soil Conservation Service - Curve Number (SCS-CN) method and Green-Ampt (GA) infiltration model, we have developed a mixed procedure, which is referred to as CN4GA (Curve Number for Green-Ampt). The basic concept is that, for a given storm, the computed SCS-CN total net rainfall amount is used to calibrate the soil hydraulic conductivity parameter of the Green-Ampt model so as to distribute in time the information provided by the SCS-CN method. In a previous contribution, the proposed mixed procedure was evaluated on 100 observed events showing encouraging results. In this study, a sensitivity analysis is carried out to further explore the feasibility of applying the CN4GA tool in small ungauged catchments. The proposed mixed procedure constrains the GA model with boundary and initial conditions so that the GA soil hydraulic parameters are expected to be insensitive toward the net hyetograph peak. To verify and evaluate this behaviour, synthetic design hyetograph and synthetic rainfall time series are selected and used in a Monte Carlo analysis. The results are encouraging and confirm that the parameter variability makes the proposed method an appropriate tool for hydrologic predictions in ungauged catchments. Keywords: SCS-CN method, Green-Ampt method, rainfall excess, ungauged basins, design hydrograph, rainfall-runoff modelling.
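A minimal sketch of the mixed idea, under simplifying assumptions (uniform soil, an explicit Green-Ampt infiltration-capacity loop, and an invented triangular hyetograph): compute the SCS-CN net rainfall for the storm, then calibrate the Green-Ampt saturated conductivity so the infiltration-excess runoff matches that total. All parameter values are illustrative, not those used in the study.

```python
import numpy as np
from scipy.optimize import brentq

def scs_cn_runoff(P, CN):
    """SCS-CN total net rainfall (runoff) in mm for storm depth P (mm)."""
    S = 25400.0 / CN - 254.0          # potential retention (mm)
    Ia = 0.2 * S                      # initial abstraction (mm)
    return 0.0 if P <= Ia else (P - Ia) ** 2 / (P - Ia + S)

def green_ampt_runoff(rain_mm_h, dt_h, Ks, psi=110.0, dtheta=0.3):
    """Total runoff (mm) from an explicit Green-Ampt loop over a hyetograph."""
    F, runoff = 1e-6, 0.0             # cumulative infiltration (mm), runoff (mm)
    for i_rain in rain_mm_h:
        f_cap = Ks * (1.0 + psi * dtheta / F)     # infiltration capacity (mm/h)
        f = min(f_cap, i_rain)
        F += f * dt_h
        runoff += (i_rain - f) * dt_h
    return runoff

# Illustrative storm: 3 h triangular hyetograph at 10-min steps, CN = 80.
dt = 10.0 / 60.0
rain = np.interp(np.arange(0, 3, dt), [0, 1.5, 3], [0, 40, 0])   # mm/h
target = scs_cn_runoff(P=np.sum(rain) * dt, CN=80)

# Calibrate Ks so the Green-Ampt storm runoff matches the SCS-CN total.
Ks_cal = brentq(lambda Ks: green_ampt_runoff(rain, dt, Ks) - target, 1e-3, 200.0)
print(f"SCS-CN runoff = {target:.1f} mm, calibrated Ks = {Ks_cal:.2f} mm/h")
```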
Determination of colonoscopy indication from administrative claims data.
Ko, Cynthia W; Dominitz, Jason A; Neradilek, Moni; Polissar, Nayak; Green, Pam; Kreuter, William; Baldwin, Laura-Mae
2014-04-01
Colonoscopy outcomes, such as polyp detection or complication rates, may differ by procedure indication. To develop methods to classify colonoscopy indications from administrative data, facilitating study of colonoscopy quality and outcomes. We linked 14,844 colonoscopy reports from the Clinical Outcomes Research Initiative, a national repository of endoscopic reports, to the corresponding Medicare Carrier and Outpatient File claims. Colonoscopy indication was determined from the procedure reports. We developed algorithms using classification and regression trees and linear discriminant analysis (LDA) to classify colonoscopy indication. Predictor variables included ICD-9CM and CPT/HCPCS codes present on the colonoscopy claim or in the 12 months prior, patient demographics, and site of colonoscopy service. Algorithms were developed on a training set of 7515 procedures, then validated using a test set of 7329 procedures. Sensitivity was lowest for identifying average-risk screening colonoscopies, varying between 55% and 86% for the different algorithms, but specificity for this indication was consistently over 95%. Sensitivity for diagnostic colonoscopy varied between 77% and 89%, with specificity between 55% and 87%. Algorithms with classification and regression trees with 7 variables or LDA with 10 variables had similar overall accuracy, and generally lower accuracy than the algorithm using LDA with 30 variables. Algorithms using Medicare claims data have moderate sensitivity and specificity for colonoscopy indication, and will be useful for studying colonoscopy quality in this population. Further validation may be needed before use in alternative populations.
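As an illustration of the classification step (with synthetic stand-in features, not the actual Medicare claims variables), the sketch below trains a classification tree and an LDA model and reports sensitivity and specificity on a held-out test set.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# Synthetic stand-ins for claim-derived predictors (diagnosis-code flags, age).
rng = np.random.default_rng(0)
n = 5000
X = rng.integers(0, 2, size=(n, 10)).astype(float)      # binary code indicators
X[:, 0] = rng.uniform(50, 90, n)                         # age
y = (X[:, 1] + X[:, 2] + rng.normal(0, 0.5, n) > 1).astype(int)  # 1 = diagnostic, 0 = screening

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for name, clf in [("CART", DecisionTreeClassifier(max_depth=4, random_state=0)),
                  ("LDA", LinearDiscriminantAnalysis())]:
    y_hat = clf.fit(X_tr, y_tr).predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, y_hat).ravel()
    print(f"{name}: sensitivity={tp/(tp+fn):.2f} specificity={tn/(tn+fp):.2f}")
```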
Theoretical considerations of some nonlinear aspects of hypersonic panel flutter
NASA Technical Reports Server (NTRS)
Mcintosh, S. C., Jr.
1974-01-01
A research project to analyze the effects of hypersonic nonlinear aerodynamic loading on panel flutter is reported. The test equipment and procedures for conducting the tests are explained. The effects of aerodynamic nonlinearities on stability were evaluated by determining constant-initial-energy, amplitude-sensitive stability boundaries and comparing them with the corresponding linear stability boundaries. An attempt to develop an alternative method of analysis for systems in which amplitude-sensitive instability is possible is presented.
Discrete analysis of spatial-sensitivity models
NASA Technical Reports Server (NTRS)
Nielsen, Kenneth R. K.; Wandell, Brian A.
1988-01-01
Procedures for reducing the computational burden of current models of spatial vision are described, the simplifications being consistent with the predictions of the complete model. A method for using pattern-sensitivity measurements to estimate the initial linear transformation is also proposed, based on the assumption that detection performance is monotonic with the vector length of the sensor responses. It is shown how contrast-threshold data can be used to estimate the linear transformation needed to characterize threshold performance.
Online and offline tools for head movement compensation in MEG.
Stolk, Arjen; Todorovic, Ana; Schoffelen, Jan-Mathijs; Oostenveld, Robert
2013-03-01
Magnetoencephalography (MEG) is measured above the head, which makes it sensitive to variations of the head position with respect to the sensors. Head movements blur the topography of the neuronal sources of the MEG signal, increase localization errors, and reduce statistical sensitivity. Here we describe two novel and readily applicable methods that compensate for the detrimental effects of head motion on the statistical sensitivity of MEG experiments. First, we introduce an online procedure that continuously monitors head position. Second, we describe an offline analysis method that takes into account the head position time-series. We quantify the performance of these methods in the context of three different experimental settings, involving somatosensory, visual and auditory stimuli, assessing both individual and group-level statistics. The online head localization procedure allowed for optimal repositioning of the subjects over multiple sessions, resulting in a 28% reduction of the variance in dipole position and an improvement of up to 15% in statistical sensitivity. Offline incorporation of the head position time-series into the general linear model resulted in improvements of group-level statistical sensitivity between 15% and 29%. These tools can substantially reduce the influence of head movement within and between sessions, increasing the sensitivity of many cognitive neuroscience experiments. Copyright © 2012 Elsevier Inc. All rights reserved.
Currency arbitrage detection using a binary integer programming model
NASA Astrophysics Data System (ADS)
Soon, Wanmei; Ye, Heng-Qing
2011-04-01
In this article, we examine the use of a new binary integer programming (BIP) model to detect arbitrage opportunities in currency exchanges. This model showcases an excellent application of mathematics to the real world. The concepts involved are easily accessible to undergraduate students with basic knowledge in Operations Research. Through this work, students can learn to link several types of basic optimization models, namely linear programming, integer programming and network models, and apply the well-known sensitivity analysis procedure to accommodate realistic changes in the exchange rates. Beginning with a BIP model, we discuss how it can be reduced to an equivalent but considerably simpler model, where an efficient algorithm can be applied to find the arbitrages and incorporate the sensitivity analysis procedure. A simple comparison is then made with a different arbitrage detection model. This exercise helps students learn to apply basic Operations Research concepts to a practical real-life example, and provides insights into the processes involved in Operations Research model formulations.
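The BIP formulation itself is not reproduced here, but the underlying idea can be sketched with an equivalent graph view that is common in teaching: an arbitrage is a cycle whose exchange-rate product exceeds 1, i.e. a negative cycle under edge weights -log(rate). The rates below are invented for illustration.

```python
# Equivalent graph view (not the article's BIP model); rates are invented.
import math

rates = {("USD", "EUR"): 0.90, ("EUR", "GBP"): 0.87,
         ("GBP", "USD"): 1.29, ("USD", "GBP"): 0.77}
nodes = {c for pair in rates for c in pair}
edges = [(u, v, -math.log(r)) for (u, v), r in rates.items()]

# Bellman-Ford with all distances started at 0 (a virtual source); if an edge
# can still be relaxed after |V|-1 passes, a negative cycle (arbitrage) exists.
dist = {n: 0.0 for n in nodes}
for _ in range(len(nodes) - 1):
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            dist[v] = dist[u] + w
arbitrage = any(dist[u] + w < dist[v] - 1e-12 for u, v, w in edges)
print("Arbitrage opportunity detected:", arbitrage)   # True: USD->EUR->GBP->USD gains ~1%
```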
Probabilistic analysis of a materially nonlinear structure
NASA Technical Reports Server (NTRS)
Millwater, H. R.; Wu, Y.-T.; Fossum, A. F.
1990-01-01
A probabilistic finite element program is used to perform probabilistic analysis of a materially nonlinear structure. The program used in this study is NESSUS (Numerical Evaluation of Stochastic Structure Under Stress), under development at Southwest Research Institute. The cumulative distribution function (CDF) of the radial stress of a thick-walled cylinder under internal pressure is computed and compared with the analytical solution. In addition, sensitivity factors showing the relative importance of the input random variables are calculated. Significant plasticity is present in this problem and has a pronounced effect on the probabilistic results. The random input variables are the material yield stress and internal pressure with Weibull and normal distributions, respectively. The results verify the ability of NESSUS to compute the CDF and sensitivity factors of a materially nonlinear structure. In addition, the ability of the Advanced Mean Value (AMV) procedure to assess the probabilistic behavior of structures which exhibit a highly nonlinear response is shown. Thus, the AMV procedure can be applied with confidence to other structures which exhibit nonlinear behavior.
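A crude stand-in for this workflow (not the AMV procedure used by NESSUS) is a plain Monte Carlo estimate of the CDF and of correlation-based importance measures; the distribution parameters and the response function below are hypothetical placeholders for the nonlinear finite element model.

```python
# Hypothetical placeholders throughout: distributions, response, and threshold.
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
yield_stress = 300.0 * rng.weibull(8.0, n)       # MPa, hypothetical Weibull
pressure = rng.normal(150.0, 15.0, n)            # MPa, hypothetical normal

def radial_stress(sy, p):
    # Placeholder response; the real quantity comes from a nonlinear FE analysis.
    return -p * (1.0 + 0.2 * np.maximum(p / sy - 0.4, 0.0))

s = radial_stress(yield_stress, pressure)
threshold = -180.0                               # MPa, hypothetical level
print("Estimated P(radial stress <= threshold):", np.mean(s <= threshold))
for name, x in [("yield stress", yield_stress), ("pressure", pressure)]:
    print(f"importance of {name}: |corr| = {abs(np.corrcoef(x, s)[0, 1]):.2f}")
```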
Schoenfeld, Andrew J; Serrano, Jose A; Waterman, Brian R; Bader, Julia O; Belmont, Philip J
2013-11-01
Few studies have addressed the role of residents' participation in morbidity and mortality after orthopaedic surgery. The present study utilized the 2005-2010 National Surgical Quality Improvement Program (NSQIP) dataset to assess the risk of 30-day post-operative complications and mortality associated with resident participation in orthopaedic procedures. The NSQIP dataset was queried using codes for 12 common orthopaedic procedures. Patients identified as having received one of the procedures had their records abstracted to obtain demographic data, medical history, operative time, and resident involvement in their surgical care. Thirty-day post-operative outcomes, including complications and mortality, were assessed for all patients. A step-wise multivariate logistic regression model was constructed to evaluate the impact of resident participation on mortality- and complication-risk while controlling for other factors in the model. Primary analyses were performed comparing cases where the attending surgeon operated alone to all other case designations, while a subsequent sensitivity analysis limited inclusion to cases where resident participation was reported by post-graduate year. In the NSQIP dataset, 43,343 patients had received one of the 12 orthopaedic procedures queried. Thirty-five percent of cases were performed with resident participation. The overall mortality rate was 2.5 %, and 10 % of patients sustained one or more complications. Multivariate analysis demonstrated a significant association between resident participation and the risk of one or more complications [OR 1.3 (95 % CI 1.1, 1.4); p < 0.001] as well as major systemic complications [OR 1.6 (95 % CI 1.3, 2.0); p < 0.001] for primary joint arthroplasty procedures only. These findings persisted even after sensitivity testing. A mild to moderate risk for complications was noted following resident involvement in joint arthroplasty procedures. No significant risk of post-operative morbidity or mortality was appreciated for the other orthopaedic procedures studied. Level of evidence: II (Prognostic).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mestrovic, Ante; Chitsazzadeh, Shadi; Wells, Derek
2016-08-15
Purpose: To develop a highly sensitive patient-specific QA procedure for gated VMAT stereotactic ablative radiotherapy (SABR) treatments. Methods: A platform was constructed to attach the translational stage of a Quasar respiratory motion phantom to a pinpoint ion chamber insert and move the ion chamber inside the ArcCheck. The Quasar phantom controller uses a patient-specific breathing pattern to translate the ion chamber in a superior-inferior direction inside the ArcCheck. With this system the ion chamber is used to QA the correct phase of the gated delivery and the ArcCheck diodes are used to QA the overall dose distribution. This novel approach requires a single plan delivery for a complete QA of a gated plan. The sensitivity of the gating QA procedure was investigated with respect to the following parameters: PTV size, exhale duration, baseline drift, and gating window size. Results: The difference between the measured dose to a point in the penumbra and the Eclipse-calculated dose was under 2% for small residual motions. The QA procedure was independent of PTV size and duration of exhale. Baseline drift and gating window size, however, significantly affected the penumbral dose measurement, with differences of up to 30% compared to Eclipse. Conclusion: This study described a highly sensitive QA procedure for gated VMAT SABR treatments. The QA outcome was dependent on the gating window size and baseline drift. Analysis of additional patient breathing patterns is currently under way to determine a clinically relevant gating window size and an appropriate tolerance level for this procedure.
Application of design sensitivity analysis for greater improvement on machine structural dynamics
NASA Technical Reports Server (NTRS)
Yoshimura, Masataka
1987-01-01
Methodologies are presented for greatly improving machine structural dynamics by using design sensitivity analyses and evaluative parameters. First, design sensitivity coefficients and evaluative parameters of structural dynamics are described. Next, the relations between the design sensitivity coefficients and the evaluative parameters are clarified. Then, design improvement procedures for structural dynamics are proposed for the following three cases: (1) addition of elastic structural members, (2) addition of mass elements, and (3) substantial changes of joint design variables. Cases (1) and (2) correspond to changes of the initial framework or configuration, and case (3) corresponds to the alteration of poor initial design variables. Finally, numerical examples are given demonstrating the applicability of the proposed methods.
Moore, Eileen M.; Forrest, Robert D.; Boehm, Stephen L.
2012-01-01
Adolescent individuals display altered behavioral sensitivity to ethanol, which may contribute to the increased ethanol consumption seen in this age-group. However, genetics also exert considerable influence on both ethanol intake and sensitivity. Thus far there is little research assessing the combined influence of developmental and genetic alcohol sensitivities. Sensitivity to the aversive effects of ethanol using a conditioned taste aversion (CTA) procedure was measured during both adolescence (P30) and adulthood (P75) in 8 inbred mouse strains (C57BL/6J, DBA/2J, 129S1/SvImJ, A/J, BALB/cByJ, BTBR T+tf/J, C3H/HeJ, and FVB/NJ). Adolescent and adult mice were water deprived, and subsequently provided with access to 0.9% (v/v) NaCl solution for 1h. Immediately following access mice were administered ethanol (0, 1.5, 2.25, 3g/kg, ip). This procedure was repeated in 72h intervals for a total of 5 CTA trials. Sensitivity to the aversive effects of ethanol was highly dependent upon both strain and age. Within an inbred strain, adolescent animals were consistently less sensitive to the aversive effects of ethanol than their adult counterparts. However, the dose of ethanol required to produce an aversion response differed as a function of both age and strain. PMID:23171343
Two-layer convective heating prediction procedures and sensitivities for blunt body reentry vehicles
NASA Technical Reports Server (NTRS)
Bouslog, Stanley A.; An, Michael Y.; Wang, K. C.; Tam, Luen T.; Caram, Jose M.
1993-01-01
This paper provides a description of procedures typically used to predict convective heating rates to hypersonic reentry vehicles using the two-layer method. These procedures were used to compute the pitch-plane heating distributions for the Apollo geometry for a wind tunnel test case and for three flight cases. Both simple engineering methods and coupled inviscid/boundary layer solutions were used to predict the heating rates. The sensitivity of the heating results to the choice of metrics, pressure distributions, boundary layer edge conditions, and wall catalycity used in the heating analysis was evaluated. Streamline metrics, pressure distributions, and boundary layer edge properties were defined from perfect gas (wind tunnel case) and chemical equilibrium and nonequilibrium (flight cases) inviscid flow-field solutions. The results of this study indicated that the use of CFD-derived metrics and pressures provided better predictions of heating when compared to wind tunnel test data. The study also showed that modeling entropy layer swallowing and ionization had little effect on the heating predictions.
NASA Technical Reports Server (NTRS)
Giles, G. L.; Rogers, J. L., Jr.
1982-01-01
The methodology used to implement structural sensitivity calculations into a major, general-purpose finite-element analysis system (SPAR) is described. This implementation includes a generalized method for specifying element cross-sectional dimensions as design variables that can be used in analytically calculating derivatives of output quantities from static stress, vibration, and buckling analyses for both membrane and bending elements. Limited sample results for static displacements and stresses are presented to indicate the advantages of analytically calculating response derivatives compared to finite difference methods. Continuing developments to implement these procedures into an enhanced version of SPAR are also discussed.
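The advantage of analytic over finite-difference derivatives is easy to illustrate on a one-element stand-in (this is not SPAR): the derivative of a cantilever tip deflection with respect to a cross-sectional thickness, computed both ways with made-up numbers.

```python
# One-element illustration (not SPAR): tip deflection of a cantilever,
# delta = P L^3 / (3 E I) with I = b t^3 / 12, so d(delta)/dt = -3 delta / t.
P, L, E, b = 1000.0, 2.0, 70e9, 0.05      # load [N], length [m], modulus [Pa], width [m]

def tip_deflection(t):
    I = b * t**3 / 12.0                   # second moment of area of the rectangular section
    return P * L**3 / (3.0 * E * I)

t0, h = 0.004, 1e-6                       # thickness [m] and finite-difference step
analytic = -3.0 * tip_deflection(t0) / t0
finite_difference = (tip_deflection(t0 + h) - tip_deflection(t0)) / h
print(f"analytic: {analytic:.4e}   finite difference: {finite_difference:.4e}")
```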
Makhija, D; Rock, M; Xiong, Y; Epstein, J D; Arnold, M R; Lattouf, O M; Calcaterra, D
2017-06-01
A recent retrospective comparative effectiveness study found that use of the FLOSEAL Hemostatic Matrix in cardiac surgery was associated with significantly lower risks of complications, blood transfusions, and surgical revisions, as well as shorter length of surgery, than use of SURGIFLO Hemostatic Matrix. These outcome improvements in cardiac surgery procedures may translate to economic savings for hospitals and payers. The objective of this study was to estimate the cost-consequence of two flowable hemostatic matrices (FLOSEAL or SURGIFLO) in cardiac surgeries for US hospitals. A cost-consequence model was constructed using clinical outcomes from a previously published retrospective comparative effectiveness study of FLOSEAL vs SURGIFLO in adult cardiac surgeries. The model accounted for the reported differences between these products in length of surgery, rates of major and minor complications, surgical revisions, and blood product transfusions. Costs were derived from Healthcare Cost and Utilization Project's National Inpatient Sample (NIS) 2012 database and converted to 2015 US dollars. Savings were modeled for a hospital performing 245 cardiac surgeries annually, identified as the average for hospitals in the NIS dataset. One-way sensitivity analysis and probabilistic sensitivity analysis were performed to test model robustness. The results suggest that if FLOSEAL is utilized in a hospital that performs 245 mixed cardiac surgery procedures annually, 11 major complications, 31 minor complications, nine surgical revisions, 79 blood product transfusions, and 260.3 h of cumulative operating time could be avoided. These improved outcomes correspond to a net annualized saving of $1,532,896. Cost savings remained between $1.3m and $1.8m in the one-way sensitivity analysis and between $911k and $2.4m in the probabilistic sensitivity analysis, even after accounting for the uncertainty around clinical and cost inputs. Outcome differences associated with FLOSEAL vs SURGIFLO that were previously reported in a comparative effectiveness study may result in substantial cost savings for US hospitals.
Jahn, I; Foraita, R
2008-01-01
In Germany, gender-sensitive approaches are part of the guidelines for good epidemiological practice as well as of health reporting, and they are increasingly demanded in order to realize the gender mainstreaming strategy in research funding by the federation and the federal states. This paper focuses on methodological aspects of data analysis; the health report of Bremen, a population-based cross-sectional study, serves as the empirical example. Health reporting requires analysis and reporting methods that can, on the one hand, uncover sex/gender aspects of a question and, on the other hand, consider how results can be communicated adequately. The core question is: what consequences does the way the category sex is included in different statistical analyses have on the results when identifying potential target groups? Logistic regressions and a two-stage procedure were applied exploratively as evaluation methods; the latter combines graphical models with CHAID decision trees and allows complex results to be visualised. Both methods were run stratified by sex/gender as well as adjusted for sex/gender, and the results were compared. Only stratified analyses are able to detect differences between the sexes and within the sex/gender groups when no prior knowledge is available; adjusted analyses can detect sex/gender differences only if interaction terms are included in the model. Results are discussed from a statistical-epidemiological perspective as well as in the context of health reporting. In conclusion, whether a statistical method is gender-sensitive can only be judged for a concrete research question under known conditions. Often an appropriate statistical procedure can be chosen after conducting separate analyses for women and men. Future gender studies require innovative study designs as well as conceptual clarity with regard to the biological and the sociocultural elements of the category sex/gender.
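The stratified-versus-adjusted point can be illustrated with simulated data (this is not the Bremen data set): an exposure effect present only among women is invisible in a sex-adjusted logistic model without an interaction term, but appears once the interaction is added or the analysis is stratified.

```python
# Simulated data; effect sizes and sample size are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 4000
sex = rng.integers(0, 2, n)                       # 0 = men, 1 = women
exposure = rng.integers(0, 2, n)
logit = -1.0 + 1.2 * exposure * sex               # exposure acts only among women
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))
df = pd.DataFrame({"y": y, "sex": sex, "exposure": exposure})

adjusted = smf.logit("y ~ exposure + sex", df).fit(disp=0)          # no interaction
interaction = smf.logit("y ~ exposure * sex", df).fit(disp=0)       # with interaction
print("adjusted exposure coefficient:", round(adjusted.params["exposure"], 2))
print("interaction coefficient:", round(interaction.params["exposure:sex"], 2))
for s, grp in df.groupby("sex"):                                    # stratified analyses
    fit = smf.logit("y ~ exposure", grp).fit(disp=0)
    print(f"sex={s}: exposure odds ratio = {np.exp(fit.params['exposure']):.2f}")
```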
Some Advanced Concepts in Discrete Aerodynamic Sensitivity Analysis
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Green, Lawrence L.; Newman, Perry A.; Putko, Michele M.
2003-01-01
An efficient incremental iterative approach for differentiating advanced flow codes is successfully demonstrated on a two-dimensional inviscid model problem. The method employs the reverse-mode capability of the automatic differentiation software tool ADIFOR 3.0 and is proven to yield accurate first-order aerodynamic sensitivity derivatives. A substantial reduction in CPU time and computer memory is demonstrated in comparison with results from a straightforward, black-box reverse-mode application of ADIFOR 3.0 to the same flow code. An ADIFOR-assisted procedure for accurate second-order aerodynamic sensitivity derivatives is successfully verified on an inviscid transonic lifting airfoil example problem. The method requires that first-order derivatives are calculated first using both the forward (direct) and reverse (adjoint) procedures; then, a very efficient noniterative calculation of all second-order derivatives can be accomplished. Accurate second derivatives (i.e., the complete Hessian matrices) of lift, wave drag, and pitching-moment coefficients are calculated with respect to geometric shape, angle of attack, and freestream Mach number.
Some Advanced Concepts in Discrete Aerodynamic Sensitivity Analysis
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Green, Lawrence L.; Newman, Perry A.; Putko, Michele M.
2001-01-01
An efficient incremental-iterative approach for differentiating advanced flow codes is successfully demonstrated on a 2D inviscid model problem. The method employs the reverse-mode capability of the automatic-differentiation software tool ADIFOR 3.0, and is proven to yield accurate first-order aerodynamic sensitivity derivatives. A substantial reduction in CPU time and computer memory is demonstrated in comparison with results from a straightforward, black-box reverse-mode application of ADIFOR 3.0 to the same flow code. An ADIFOR-assisted procedure for accurate second-order aerodynamic sensitivity derivatives is successfully verified on an inviscid transonic lifting airfoil example problem. The method requires that first-order derivatives are calculated first using both the forward (direct) and reverse (adjoint) procedures; then, a very efficient non-iterative calculation of all second-order derivatives can be accomplished. Accurate second derivatives (i.e., the complete Hessian matrices) of lift, wave-drag, and pitching-moment coefficients are calculated with respect to geometric shape, angle of attack, and freestream Mach number.
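Modern automatic-differentiation tools expose the same forward and reverse building blocks as ADIFOR; the toy sketch below (with an invented stand-in function, not the flow solver) shows reverse mode producing the gradient and a forward-over-reverse composition producing the complete Hessian noniteratively, analogous to the strategy described above.

```python
# Toy stand-in for the flow-solver output; jax.grad is reverse (adjoint) mode,
# and jax.jacfwd over that gradient gives the full Hessian without iteration.
import jax
import jax.numpy as jnp

def f(x):                                  # x = (shape parameter, alpha, Mach), all invented
    shape, alpha, mach = x
    return jnp.sin(alpha) * (1.0 + 0.5 * shape) / jnp.sqrt(1.0 - mach**2)

x0 = jnp.array([0.1, 0.05, 0.8])
gradient = jax.grad(f)(x0)                 # first-order sensitivities (reverse mode)
hessian = jax.jacfwd(jax.grad(f))(x0)      # second-order sensitivities (forward over reverse)
print(gradient)
print(hessian)
```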
Wegelin, Olivier; Bartels, Diny W M; Tromp, Ellen; Kuypers, Karel C; van Melick, Harm H E
2015-10-01
To evaluate the effects of cystoscopy on urine cytology and additional cytokeratin-20 (CK-20) staining in patients presenting with gross hematuria. For 83 patients presenting with gross hematuria, spontaneous and instrumented paired urine samples were analyzed. Three patients were excluded. Spontaneous samples were collected within 1 hour before cystoscopy, and the instrumented samples were tapped through the cystoscope. Subsequently, patients underwent cystoscopic evaluation and imaging of the urinary tract. If tumor suspicious lesions were found on cystoscopy or imaging, subjects underwent transurethral resection or ureterorenoscopy. Two blinded uropathological reviewers (DB, KK) evaluated 160 urine samples. Reference standards were results of cystoscopy, imaging, or histopathology. Thirty-seven patients (46.3%) underwent transurethral resection or ureterorenoscopy procedures. In 30 patients (37.5%) tumor presence was confirmed by histopathology. The specificity of urine analysis was significantly higher for spontaneous samples than instrumented samples for both cytology alone (94% vs 72%, P = .01) and for cytology combined with CK-20 analysis (98% vs 84%, P = .02). The difference in sensitivity between spontaneous and instrumented samples was not significant for both cytology alone (40% vs 53%) and combined with CK-20 analysis (67% vs 67%). The addition of CK-20 analysis to cytology significantly increases test sensitivity in spontaneous urine cytology (67% vs 40%, P = .03). Instrumentation significantly decreases specificity of urine cytology. This may lead to unnecessary diagnostic procedures. Additional CK-20 staining in spontaneous urine cytology significantly increases sensitivity but did not improve the already high specificity. We suggest performing urine cytology and CK-20 analysis on spontaneously voided urine. Copyright © 2015 Elsevier Inc. All rights reserved.
Sergé, Arnauld; Bernard, Anne-Marie; Phélipot, Marie-Claire; Bertaux, Nicolas; Fallet, Mathieu; Grenot, Pierre; Marguet, Didier; He, Hai-Tao; Hamon, Yannick
2013-01-01
We introduce a series of experimental procedures enabling sensitive calcium monitoring in T cell populations by confocal video-microscopy. Tracking and post-acquisition analysis was performed using Methods for Automated and Accurate Analysis of Cell Signals (MAAACS), a fully customized program that associates a high throughput tracking algorithm, an intuitive reconnection routine and a statistical platform to provide, at a glance, the calcium barcode of a population of individual T-cells. Combined with a sensitive calcium probe, this method allowed us to unravel the heterogeneity in shape and intensity of the calcium response in T cell populations and especially in naive T cells, which display intracellular calcium oscillations upon stimulation by antigen presenting cells. PMID:24086124
Extension of latin hypercube samples with correlated variables.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hora, Stephen Curtis; Helton, Jon Craig; Sallaberry, Cedric J. PhD.
2006-11-01
A procedure for extending the size of a Latin hypercube sample (LHS) with rank correlated variables is described and illustrated. The extension procedure starts with an LHS of size m and associated rank correlation matrix C and constructs a new LHS of size 2m that contains the elements of the original LHS and has a rank correlation matrix that is close to the original rank correlation matrix C. The procedure is intended for use in conjunction with uncertainty and sensitivity analysis of computationally demanding models in which it is important to make efficient use of a necessarily limited number of model evaluations.
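The extension algorithm itself is not reproduced here, but the ingredients can be sketched: a Latin hypercube sample on which an approximate target rank correlation is imposed by an Iman-Conover-style reordering. The target correlation matrix is invented for illustration.

```python
# Simplified Iman-Conover-style reordering; the induced rank correlation is approximate.
import numpy as np
from scipy.stats import qmc, spearmanr

def impose_rank_correlation(lhs, target_corr, rng):
    z = rng.standard_normal(lhs.shape) @ np.linalg.cholesky(target_corr).T
    out = np.empty_like(lhs)
    for j in range(lhs.shape[1]):
        ranks = np.argsort(np.argsort(z[:, j]))      # rank of each correlated score
        out[:, j] = np.sort(lhs[:, j])[ranks]        # reorder the column by those ranks
    return out

rng = np.random.default_rng(0)
m, C = 200, np.array([[1.0, 0.7], [0.7, 1.0]])       # invented target rank correlation
lhs = qmc.LatinHypercube(d=2, seed=0).random(m)
correlated = impose_rank_correlation(lhs, C, rng)
rho, _ = spearmanr(correlated)
print("achieved rank correlation:", round(float(rho), 2))
```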
ERIC Educational Resources Information Center
Geri, George A.; Hubbard, David C.
Two adaptive psychophysical procedures (tracking and "yes-no" staircase) for obtaining human visual contrast sensitivity functions (CSF) were evaluated. The procedures were chosen based on their proven validity and the desire to evaluate the practical effects of stimulus transients, since tracking procedures traditionally employ gradual…
Vo, Elaine; Davila, Jessica A; Hou, Jason; Hodge, Krystle; Li, Linda T; Suliburk, James W; Kao, Lillian S; Berger, David H; Liang, Mike K
2013-08-01
Large databases provide a wealth of information for researchers, but identifying patient cohorts often relies on the use of current procedural terminology (CPT) codes. In particular, studies of stoma surgery have been limited by the accuracy of CPT codes in identifying and differentiating ileostomy procedures from colostomy procedures. It is important to make this distinction because the prevalence of complications associated with stoma formation and reversal differs dramatically between types of stoma. Natural language processing (NLP) is a process that allows text-based searching. The Automated Retrieval Console is NLP-based software that allows investigators to design and perform NLP-assisted document classification. In this study, we evaluated the role of CPT codes and NLP in differentiating ileostomy from colostomy procedures. Using CPT codes, we conducted a retrospective study that identified all patients undergoing a stoma-related procedure at a single institution between January 2005 and December 2011. All operative reports during this time were reviewed manually to abstract the following variables: formation or reversal and ileostomy or colostomy. Sensitivity and specificity for validation of the CPT codes against the master surgery schedule were calculated. Operative reports were evaluated by use of NLP to differentiate ileostomy- from colostomy-related procedures. Sensitivity and specificity for identifying patients with ileostomy or colostomy procedures were calculated for CPT codes and NLP for the entire cohort. CPT codes performed well in identifying stoma procedures (sensitivity 87.4%, specificity 97.5%). A total of 664 stoma procedures were identified by CPT codes between 2005 and 2011. The CPT codes were adequate in identifying stoma formation (sensitivity 97.7%, specificity 72.4%) and stoma reversal (sensitivity 74.1%, specificity 98.7%), but they were inadequate in identifying ileostomy (sensitivity 35.0%, specificity 88.1%) and colostomy (75.2% and 80.9%). NLP performed with greater sensitivity, specificity, and accuracy than CPT codes in identifying stoma procedures and stoma types. Major differences where NLP outperformed CPT included identifying ileostomy (specificity 95.8%, sensitivity 88.3%, and accuracy 91.5%) and colostomy (97.6%, 90.5%, and 92.8%, respectively). CPT codes can effectively identify patients who have had stoma procedures and are adequate in distinguishing between formation and reversal; however, CPT codes cannot differentiate ileostomy from colostomy. NLP can be used to differentiate between ileostomy- and colostomy-related procedures. The role of NLP in conjunction with electronic medical records in data retrieval warrants further investigation. Published by Mosby, Inc.
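A toy illustration of NLP-assisted classification of operative reports (not the Automated Retrieval Console) is a simple keyword rule evaluated against hand labels; the four miniature "reports" below are invented.

```python
# Invented miniature reports with hand labels; the rule and metrics are illustrative.
reports = [
    ("diverting loop ileostomy created in the right lower quadrant", "ileostomy"),
    ("end colostomy matured at the left abdominal wall", "colostomy"),
    ("takedown of ileostomy with small bowel anastomosis", "ileostomy"),
    ("hartmann procedure with end colostomy formation", "colostomy"),
]

def classify(text):
    return "ileostomy" if "ileostomy" in text.lower() else "colostomy"

tp = sum(y == "ileostomy" and classify(t) == "ileostomy" for t, y in reports)
fn = sum(y == "ileostomy" and classify(t) == "colostomy" for t, y in reports)
tn = sum(y == "colostomy" and classify(t) == "colostomy" for t, y in reports)
fp = sum(y == "colostomy" and classify(t) == "ileostomy" for t, y in reports)
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```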
40 CFR 63.786 - Test methods and procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
... by Gas Chromatography (incorporation by reference—see § 63.14). In determining the sensitivity, the... Practice for Gas Chromatography [incorporation by reference—see § 63.14].) (c) A coating manufacturer or... mixture under analysis are not known. In such cases a single column gas chromatograph (GC) may not be...
40 CFR 63.786 - Test methods and procedures.
Code of Federal Regulations, 2011 CFR
2011-07-01
... by Gas Chromatography (incorporation by reference—see § 63.14). In determining the sensitivity, the... Practice for Gas Chromatography [incorporation by reference—see § 63.14].) (c) A coating manufacturer or... mixture under analysis are not known. In such cases a single column gas chromatograph (GC) may not be...
ERIC Educational Resources Information Center
Robinson-Cimpian, Joseph P.
2014-01-01
This article introduces novel sensitivity-analysis procedures for investigating and reducing the bias that mischievous responders (i.e., youths who provide extreme, and potentially untruthful, responses to multiple questions) often introduce in adolescent disparity estimates based on data from self-administered questionnaires (SAQs). Mischievous…
DOT National Transportation Integrated Search
2009-11-01
The new Mechanistic-Empirical Pavement Design Guide (NCHRP 1-37A and 1-40D) is based on fundamental engineering principles and is far more comprehensive than the current empirical AASHTO Design Guide developed for conditions more than 40 years previo...
Examining the accuracy of the infinite order sudden approximation using sensitivity analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eno, L.; Rabitz, H.
1981-08-15
A method is developed for assessing the accuracy of scattering observables calculated within the framework of the infinite order sudden (IOS) approximation. In particular, we focus on the energy sudden assumption of the IOS method, and our approach involves the determination of the sensitivity of the IOS scattering matrix S^IOS with respect to a parameter which reintroduces the internal energy operator h_0 into the IOS Hamiltonian. This procedure is an example of sensitivity analysis of missing model components (h_0 in this case) in the reference Hamiltonian. In contrast to simple first-order perturbation theory, a finite result is obtained for the effect of h_0 on S^IOS. As an illustration, our method of analysis is applied to integral state-to-state cross sections for the scattering of an atom and a rigid rotor. Results are generated for the He+H2 system and a comparison is made between IOS and coupled states cross sections and the corresponding IOS sensitivities. It is found that the sensitivity coefficients are very useful indicators of the accuracy of the IOS results. Finally, further developments and applications are discussed.
NASA Astrophysics Data System (ADS)
Prideaux, Brendan; Atkinson, Sally J.; Carolan, Vikki A.; Morton, Jacqueline; Clench, Malcolm R.
2007-02-01
Aspects of the indirect examination of xenobiotic distribution on the surface of and within skin sections by imaging matrix assisted laser desorption ionisation mass spectrometry (MALDI-MS) have been examined. A solvent assisted blotting technique previously developed for the examination of the absorption of agrochemicals into leaves has been examined for the analysis of the distribution of hydrocortisone on the surface of skin. It was found that by careful control of the extraction and blotting procedure an 80-fold sensitivity improvement could be obtained over dry blotting with only 10% lateral diffusion of the image. However, in contrast it was found that the use of a hydrophobic blotting membrane was more suitable for the examination of the transdermal absorption of the pesticide chlorpyrifos. The potential of incorporating a derivatisation step into the solvent assisted blotting procedure was investigated by blotting isocyanate treated skin onto a methanol soaked blotting membrane. This served the dual purpose of derivatising the isocyanate to a stable substituted urea derivative and extracting it from the skin. Preliminary data indicate that this approach may have some merit for field sampling for such compounds, and clearly derivatisation also offers the potential for sensitivity enhancements. Finally, the use of principal components analysis with an ion species specific normalisation procedure is proposed to identify regions of drug treated skin where the ion abundance of the compound of interest is low.
Efficiency of endoscopy units can be improved with use of discrete event simulation modeling.
Sauer, Bryan G; Singh, Kanwar P; Wagner, Barry L; Vanden Hoek, Matthew S; Twilley, Katherine; Cohn, Steven M; Shami, Vanessa M; Wang, Andrew Y
2016-11-01
Background and study aims: The projected increased demand for health services obligates healthcare organizations to operate efficiently. Discrete event simulation (DES) is a modeling method that allows for optimization of systems through virtual testing of different configurations before implementation. The objective of this study was to identify strategies to improve the daily efficiencies of an endoscopy center with the use of DES. Methods: We built a DES model of a five procedure room endoscopy unit at a tertiary-care university medical center. After validating the baseline model, we tested alternate configurations to run the endoscopy suite and evaluated outcomes associated with each change. The main outcome measures included adequate number of preparation and recovery rooms, blocked inflow, delay times, blocked outflows, and patient cycle time. Results: Based on a sensitivity analysis, the adequate number of preparation rooms is eight and recovery rooms is nine for a five procedure room unit (total 3.4 preparation and recovery rooms per procedure room). Simple changes to procedure scheduling and patient arrival times led to a modest improvement in efficiency. Increasing the preparation/recovery rooms based on the sensitivity analysis led to significant improvements in efficiency. Conclusions: By applying tools such as DES, we can model changes in an environment with complex interactions and find ways to improve the medical care we provide. DES is applicable to any endoscopy unit and would be particularly valuable to those who are trying to improve on the efficiency of care and patient experience.
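A minimal discrete event simulation of such a unit can be sketched with the SimPy library (this is not the authors' model): patients flow through prep, procedure, and recovery rooms, with the procedure room held until a recovery room opens. Room counts follow the sensitivity-analysis result quoted above; arrival and service times are invented.

```python
# Room counts follow the quoted sensitivity analysis; times are invented.
import simpy

PREP_ROOMS, PROC_ROOMS, RECOVERY_ROOMS = 8, 5, 9
PREP_MIN, PROC_MIN, RECOVER_MIN = 20, 30, 40          # minutes, hypothetical

def patient(env, prep, proc, recovery):
    with prep.request() as p:
        yield p
        yield env.timeout(PREP_MIN)
        proc_req = proc.request()
        yield proc_req                                 # prep room held until a procedure room opens
    yield env.timeout(PROC_MIN)
    rec_req = recovery.request()
    yield rec_req                                      # procedure room held until recovery opens
    proc.release(proc_req)
    yield env.timeout(RECOVER_MIN)
    recovery.release(rec_req)

def arrivals(env, prep, proc, recovery):
    for _ in range(30):                                # one scheduled arrival every 12 minutes
        env.process(patient(env, prep, proc, recovery))
        yield env.timeout(12)

env = simpy.Environment()
prep, proc, recovery = (simpy.Resource(env, c)
                        for c in (PREP_ROOMS, PROC_ROOMS, RECOVERY_ROOMS))
env.process(arrivals(env, prep, proc, recovery))
env.run(until=600)
print("simulated minutes elapsed:", env.now)
```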
NASA Astrophysics Data System (ADS)
Melchiorre, C.; Castellanos Abella, E. A.; van Westen, C. J.; Matteucci, M.
2011-04-01
This paper describes a procedure for landslide susceptibility assessment based on artificial neural networks, and focuses on the estimation of the prediction capability, robustness, and sensitivity of susceptibility models. The study is carried out in the Guantanamo Province of Cuba, where 186 landslides were mapped using photo-interpretation. Twelve conditioning factors were mapped including geomorphology, geology, soils, landuse, slope angle, slope direction, internal relief, drainage density, distance from roads and faults, rainfall intensity, and ground peak acceleration. A methodology was used that subdivided the database in 3 subsets. A training set was used for updating the weights. A validation set was used to stop the training procedure when the network started losing generalization capability, and a test set was used to calculate the performance of the network. A 10-fold cross-validation was performed in order to show that the results are repeatable. The prediction capability, the robustness analysis, and the sensitivity analysis were tested on 10 mutually exclusive datasets. The results show that by means of artificial neural networks it is possible to obtain models with high prediction capability and high robustness, and that an exploration of the effect of the individual variables is possible, even if they are considered as a black-box model.
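A schematic stand-in for this setup (not the study's network or data) is a small neural network with validation-based early stopping plus a 10-fold cross-validation of predictive accuracy, here on synthetic "conditioning factor" data.

```python
# Synthetic stand-in data for 12 conditioning factors; network size is arbitrary.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score, train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))
y = (X[:, 0] + 0.5 * X[:, 3] - X[:, 7] + rng.normal(0, 0.5, 1000) > 0).astype(int)

clf = MLPClassifier(hidden_layer_sizes=(16,), early_stopping=True,
                    validation_fraction=0.2, max_iter=1000, random_state=0)
X_trval, X_test, y_trval, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf.fit(X_trval, y_trval)                      # early stopping uses an internal validation split
print("held-out test accuracy:", round(clf.score(X_test, y_test), 2))
print("10-fold CV accuracy:", round(cross_val_score(clf, X, y, cv=10).mean(), 2))
```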
Austin, P D; Hand, K S; Elia, M
2016-06-01
Diagnosis of intravascular catheter infection may be affected by the definition and procedures applied in the absence of blood culture data. To examine the extent to which different definitions of catheter infection and procedures for handling absent blood culture data can affect reported catheter infection rates. Catheter infection rates were established in a cohort of hospitalized patients administered parenteral nutrition according to three clinical and four published definitions. Paired and unpaired comparisons were made using available case analyses, sensitivity analyses and intention-to-categorize analyses. Complete data were available for each clinical definition (N = 193), and there were missing data (4.1-26.9%) for the published definitions. In an available case analysis, the catheter infection rate was 13.0-36.8% for the clinical definitions and 2.1-12.4% for the published definitions. For the published definitions, the rate was 1.6-32.1% in a sensitivity analysis and 11.4-16.9% in an intention-to-categorize analysis, with suggestion of bias towards a higher catheter infection rate in those with missing data, in keeping with the analyses of the clinical definitions. For paired comparisons, the strength of agreement between definitions varied from 'poor' (Cohen's kappa <0.21) to 'very good' (Cohen's kappa ≥0.81). The use of different definitions of catheter infection and procedures applied in the absence of blood culture data produced widely different catheter infection rates, which could compromise measurements or comparisons of service quality or study outcome. As such, there is a need to establish and use a valid, consistent and practical definition. Copyright © 2016 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.
LSENS, The NASA Lewis Kinetics and Sensitivity Analysis Code
NASA Technical Reports Server (NTRS)
Radhakrishnan, K.
2000-01-01
A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS (the NASA Lewis kinetics and sensitivity analysis code), are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include: static system; steady, one-dimensional, inviscid flow; incident-shock initiated reaction in a shock tube; and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method (LSODE, the Livermore Solver for Ordinary Differential Equations), which works efficiently for the extremes of very fast and very slow reactions, is used to solve the "stiff" ordinary differential equation systems that arise in chemical kinetics. For static reactions, the code uses the decoupled direct method to calculate sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of dependent variables and/or the rate coefficient parameters. Solution methods for the equilibrium and post-shock conditions and for perfectly stirred reactor problems are either adapted from or based on the procedures built into the NASA code CEA (Chemical Equilibrium and Applications).
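The static-reactor case can be miniaturized to show the ingredients (this is not LSENS): a stiff integrator plus, in the spirit of the direct method, an appended sensitivity equation so that dc/dk is integrated alongside the concentration for a one-reaction model A -> B with an invented rate coefficient.

```python
# One-reaction model A -> B with rate coefficient k; s = dc/dk obeys ds/dt = -c - k*s.
import numpy as np
from scipy.integrate import solve_ivp

k = 50.0                                   # 1/s, hypothetical rate coefficient

def rhs(t, y):
    c, s = y                               # concentration and its sensitivity dc/dk
    return [-k * c, -c - k * s]

sol = solve_ivp(rhs, (0.0, 0.2), [1.0, 0.0], method="BDF", dense_output=True)
t = 0.05
c, s = sol.sol(t)
print("c(t) =", c, "  dc/dk =", s, "  analytic dc/dk =", -t * np.exp(-k * t))
```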
Inter-laboratory comparison of the in vivo comet assay including three image analysis systems.
Plappert-Helbig, Ulla; Guérard, Melanie
2015-12-01
To compare the extent of potential inter-laboratory variability and the influence of different comet image analysis systems, in vivo comet experiments were conducted using the genotoxicants ethyl methanesulfonate and methyl methanesulfonate. Tissue samples from the same animals were processed and analyzed, including independent slide evaluation by image analysis, in two laboratories with extensive experience in performing the comet assay. The analysis revealed low inter-laboratory experimental variability. Neither the use of different image analysis systems nor the DNA staining procedure (propidium iodide vs. SYBR® Gold) considerably impacted the results or the sensitivity of the assay. In addition, relatively high stability of the staining intensity of propidium iodide-stained slides was found in slides that were refrigerated for over 3 months. In conclusion, following a thoroughly defined protocol and standardized routine procedures ensures that the comet assay is robust and generates comparable results between different laboratories. © 2015 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Razavi, S.; Gupta, H. V.
2014-12-01
Sensitivity analysis (SA) is an important paradigm in the context of Earth System model development and application, and provides a powerful tool that serves several essential functions in modelling practice, including 1) Uncertainty Apportionment - attribution of total uncertainty to different uncertainty sources, 2) Assessment of Similarity - diagnostic testing and evaluation of similarities between the functioning of the model and the real system, 3) Factor and Model Reduction - identification of non-influential factors and/or insensitive components of model structure, and 4) Factor Interdependence - investigation of the nature and strength of interactions between the factors, and the degree to which factors intensify, cancel, or compensate for the effects of each other. A variety of sensitivity analysis approaches have been proposed, each of which formally characterizes a different "intuitive" understanding of what is meant by the "sensitivity" of one or more model responses to its dependent factors (such as model parameters or forcings). These approaches are based on different philosophies and theoretical definitions of sensitivity, and range from simple local derivatives and one-factor-at-a-time procedures to rigorous variance-based (Sobol-type) approaches. In general, each approach focuses on, and identifies, different features and properties of the model response and may therefore lead to different (even conflicting) conclusions about the underlying sensitivity. This presentation revisits the theoretical basis for sensitivity analysis, and critically evaluates existing approaches so as to demonstrate their flaws and shortcomings. With this background, we discuss several important properties of response surfaces that are associated with the understanding and interpretation of sensitivity. Finally, a new approach towards global sensitivity assessment is developed that is consistent with important properties of Earth System model response surfaces.
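As a concrete reference point for the variance-based end of that spectrum, the sketch below estimates first-order Sobol indices with the standard pick-freeze estimator in plain NumPy, using the Ishigami function as a benchmark (the function and sample size are illustrative choices, not tied to any Earth System model).

```python
# Pick-freeze first-order estimator: S_i = mean(f(B) * (f(AB_i) - f(A))) / Var(f),
# where AB_i is A with its i-th column taken from B. Ishigami is a standard benchmark.
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2 + b * x[:, 2]**4 * np.sin(x[:, 0])

rng = np.random.default_rng(0)
n, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = ishigami(A), ishigami(B)
variance = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                    # vary only factor i relative to A
    S_i = np.mean(fB * (ishigami(ABi) - fA)) / variance
    print(f"first-order index S_{i + 1} = {S_i:.2f}")   # roughly 0.31, 0.44, 0.00
```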
Hasan, Nazim; Gopal, Judy; Wu, Hui-Fen
2011-11-01
Biofilm studies have extensive significance since their results can provide insights into the behavior of bacteria on material surfaces when exposed to natural water. This is the first attempt to use matrix-assisted laser desorption/ionization-mass spectrometry (MALDI-MS) for detecting the polysaccharides formed in a complex biofilm consisting of a mixed consortium of marine microbes. MALDI-MS has been applied to directly analyze exopolysaccharides (EPS) in the biofilm formed on aluminum surfaces exposed to seawater. The optimal conditions for MALDI-MS applied to EPS analysis of biofilm have been described. In addition, microbiologically influenced corrosion of aluminum exposed to sea water by a marine fungus was also observed and the fungus identity established using MALDI-MS analysis of EPS. Rapid, sensitive and direct MALDI-MS analysis of biofilm would dramatically speed up and provide new insights into biofilm studies due to its excellent advantages such as simplicity, high sensitivity, high selectivity and high speed. This study introduces a novel, fast, sensitive and selective platform for biofilm study from natural water without the need for tedious culturing steps or complicated sample pretreatment procedures. Copyright © 2011 John Wiley & Sons, Ltd.
Rapid Method for the Radioisotopic Analysis of Gaseous End Products of Anaerobic Metabolism
Nelson, David R.; Zeikus, J. G.
1974-01-01
A gas chromatographic procedure for the simultaneous analysis of 14C-labeled and unlabeled metabolic gases from microbial methanogenic systems is described. H2, CH4, and CO2 were separated within 2.5 min on a Carbosieve B column and were detected by thermal conductivity. Detector effluents were channeled into a gas proportional counter for measurement of radioactivity. This method was more rapid, sensitive, and convenient than gas chromatography-liquid scintillation techniques. The gas chromatography-gas proportional counting procedure was used to characterize the microbial decomposition of organic matter in anaerobic lake sediments and to monitor 14CH4 formation from H2 and 14CO2 by Methanosarcina barkeri. PMID:4854029
Search for the lepton-family-number nonconserving decay μ+-->e+γ
NASA Astrophysics Data System (ADS)
Ahmed, M.; Amann, J. F.; Barlow, D.; Black, K.; Bolton, R. D.; Brooks, M. L.; Carius, S.; Chen, Y. K.; Chernyshev, A.; Concannon, H. M.; Cooper, M. D.; Cooper, P. S.; Crocker, J.; Dittmann, J. R.; Dzemidzic, M.; Empl, A.; Fisk, R. J.; Fleet, E.; Foreman, W.; Gagliardi, C. A.; Haim, D.; Hallin, A.; Hoffman, C. M.; Hogan, G. E.; Hughes, E. B.; Hungerford, E. V.; Jui, C. C.; Kim, G. J.; Knott, J. E.; Koetke, D. D.; Kozlowski, T.; Kroupa, M. A.; Kunselman, A. R.; Lan, K. A.; Laptev, V.; Lee, D.; Liu, F.; Manweiler, R. W.; Marshall, R.; Mayes, B. W.; Mischke, R. E.; Nefkens, B. M.; Nickerson, L. M.; Nord, P. M.; Oothoudt, M. A.; Otis, J. N.; Phelps, R.; Piilonen, L. E.; Pillai, C.; Pinsky, L.; Ritter, M. W.; Smith, C.; Stanislaus, T. D.; Stantz, K. M.; Szymanski, J. J.; Tang, L.; Tippens, W. B.; Tribble, R. E.; Tu, X. L.; van Ausdeln, L. A.; von Witch, W. H.; Whitehouse, D.; Wilkinson, C.; Wright, B.; Wright, S. C.; Zhang, Y.; Ziock, K. O.
2002-06-01
The MEGA experiment, which searched for the muon- and electron-number violating decay μ+→e+γ, is described. The spectrometer system, the calibrations, the data taking procedures, the data analysis, and the sensitivity of the experiment are discussed. The most stringent upper limit on the branching ratio, B(μ+→e+γ)<1.2×10-11 with 90% confidence, is derived from a likelihood analysis.
Cost-effectiveness of unicondylar versus total knee arthroplasty: a Markov model analysis.
Peersman, Geert; Jak, Wouter; Vandenlangenbergh, Tom; Jans, Christophe; Cartier, Philippe; Fennema, Peter
2014-01-01
Unicondylar knee arthroplasty (UKA) is believed to lead to less morbidity and enhanced functional outcomes when compared with total knee arthroplasty (TKA). Conversely, UKA is also associated with a higher revision risk than TKA. In order to further clarify the key differences between these separate procedures, the current study assessing the cost-effectiveness of UKA versus TKA was undertaken. A state-transition Markov model was developed to compare the cost-effectiveness of UKA versus TKA for unicondylar osteoarthritis using a Belgian payer's perspective. The model was designed to include the possibility of two revision procedures. Model estimates were obtained through literature review and revision rates were based on registry data. Threshold analysis and probabilistic sensitivity analysis were performed to assess the model's robustness. UKA was associated with a cost reduction of €2,807 and a utility gain of 0.04 quality-adjusted life years in comparison with TKA. Analysis determined that the model is sensitive to clinical effectiveness, and that a marginal reduction in the clinical performance of UKA would lead to TKA being the more cost-effective solution. UKA yields clear advantages in terms of costs and marginal advantages in terms of health effects, in comparison with TKA. © 2014 Elsevier B.V. All rights reserved.
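The structure of such a state-transition model can be sketched in a few lines (this is a highly simplified stand-in, not the published model, and every number below is an invented placeholder rather than a Belgian input): an annual-cycle cohort moves among well, revised, and dead states, accumulating discounted costs and QALYs for each strategy.

```python
# Every number is an invented placeholder, not an input from the published model.
import numpy as np

def cohort_model(p_revision, p_death, cost_index, cost_revision, utility,
                 years=20, disc=0.03):
    state = np.array([1.0, 0.0, 0.0])                  # well, revised, dead
    T = np.array([[1 - p_revision - p_death, p_revision, p_death],
                  [0.0, 1 - p_death, p_death],
                  [0.0, 0.0, 1.0]])
    cost, qaly = cost_index, 0.0
    for t in range(years):
        cost += state[0] * p_revision * cost_revision / (1 + disc) ** t
        state = state @ T
        qaly += (state[0] * utility + state[1] * (utility - 0.1)) / (1 + disc) ** t
    return cost, qaly

uka = cohort_model(0.012, 0.02, cost_index=7000, cost_revision=12000, utility=0.85)
tka = cohort_model(0.007, 0.02, cost_index=9500, cost_revision=12000, utility=0.82)
print("UKA  cost, QALYs:", uka)
print("TKA  cost, QALYs:", tka)
print("incremental (UKA - TKA):", uka[0] - tka[0], uka[1] - tka[1])
```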
Macyszyn, Luke; Attiah, Mark; Ma, Tracy S; Ali, Zarina; Faught, Ryan; Hossain, Alisha; Man, Karen; Patel, Hiren; Sobota, Rosanna; Zager, Eric L; Stein, Sherman C
2017-05-01
OBJECTIVE Moyamoya disease (MMD) is a chronic cerebrovascular disease that can lead to devastating neurological outcomes. Surgical intervention is the definitive treatment, with direct, indirect, and combined revascularization procedures currently employed by surgeons. The optimal surgical approach, however, remains unclear. In this decision analysis, the authors compared the effectiveness of revascularization procedures in both adult and pediatric patients with MMD. METHODS A comprehensive literature search was performed for studies of MMD. Using complication and success rates from the literature, the authors constructed a decision analysis model for treatment using a direct and indirect revascularization technique. Utility values for the various outcomes and complications were extracted from the literature examining preferences in similar clinical conditions. Sensitivity analysis was performed. RESULTS A structured literature search yielded 33 studies involving 4197 cases. Cases were divided into adult and pediatric populations. These were further subdivided into 3 different treatment groups: indirect, direct, and combined revascularization procedures. In the pediatric population at 5- and 10-year follow-up, there was no significant difference between indirect and combination procedures, but both were superior to direct revascularization. In adults at 4-year follow-up, indirect was superior to direct revascularization. CONCLUSIONS In the absence of factors that dictate a specific approach, the present decision analysis suggests that direct revascularization procedures are inferior in terms of quality-adjusted life years both in adults at 4 years and in children at 5 and 10 years postoperatively. These findings were statistically significant (p < 0.001 in all cases), suggesting that indirect and combination procedures may offer optimal results at long-term follow-up.
Analysis of D-penicillamine by gas chromatography utilizing nitrogen-phosphorus detection.
Rushing, L G; Hansen, E B; Thompson, H C
1985-01-11
A method is presented for the analysis of the "orphan" drug D-penicillamine (D-Pa), which is used for the treatment of the inherited rare copper metabolism dysfunction known as Wilson's disease, by assaying a derivative of the compound by gas chromatography employing a rubidium-sensitized nitrogen-phosphorus detector. Analytical procedures are described for the analyses of residues of the D-Pa·HCl salt in animal feed and for the analyses of the salt or free base from aqueous solutions by utilizing a single-step double derivatization with diazomethane-acetone. Stability data for D-Pa·HCl in animal feed and for the free base in water are presented. An ancillary fluorescence derivatization procedure for the analysis of D-Pa in water is also reported.
Cognition and procedure representational requirements for predictive human performance models
NASA Technical Reports Server (NTRS)
Corker, K.
1992-01-01
Models and modeling environments for human performance are becoming significant contributors to early system design and analysis procedures. Issues of levels of automation, physical environment, informational environment, and manning requirements are being addressed by such man/machine analysis systems. The research reported here investigates the close interaction between models of human cognition and models that describe procedural performance. We describe a methodology for the decomposition of aircrew procedures that supports interaction with models of cognition on the basis of procedures observed; that serves to identify cockpit/avionics information sources and crew information requirements; and that provides the structure to support methods for function allocation among crew and aiding systems. Our approach is to develop an object-oriented, modular, executable software representation of the aircrew, the aircraft, and the procedures necessary to satisfy flight-phase goals. We then encode, in a time-based language, taxonomies of the conceptual, relational, and procedural constraints among the cockpit avionics and control system and the aircrew. We have designed and implemented a goals/procedures hierarchic representation sufficient to describe procedural flow in the cockpit. We then execute the procedural representation in simulation software and calculate the values of the flight instruments, aircraft state variables and crew resources using the constraints available from the relationship taxonomies. The system provides a flexible, extensible, manipulable and executable representation of aircrew and procedures that is generally applicable to crew/procedure task-analysis. The representation supports developed methods of intent inference, and is extensible to include issues of information requirements and functional allocation. We are attempting to link the procedural representation to models of cognitive functions to establish several intent inference methods including procedural backtracking with concurrent search, temporal reasoning, and constraint checking for partial ordering of procedures. Finally, the representation is being linked to models of human decision making processes that include heuristic, propositional and prescriptive judgement models that are sensitive to the procedural context in which the evaluative functions are being performed.
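A rough flavor of a goals/procedures hierarchic representation can be given in a few lines of code (purely illustrative; the class and field names are invented and are not the representation described above): a goal decomposes into ordered procedures, each naming the information sources it reads and the crew resources it consumes, and "executing" the goal walks that hierarchy on a simple clock.

```python
# Invented class and field names, purely to illustrate the idea.
from dataclasses import dataclass, field

@dataclass
class Procedure:
    name: str
    duration_s: float
    info_sources: list = field(default_factory=list)    # cockpit/avionics sources read
    crew_resources: list = field(default_factory=list)  # crew resources consumed

@dataclass
class Goal:
    name: str
    procedures: list = field(default_factory=list)

    def execute(self, t0=0.0):
        t = t0
        for p in self.procedures:
            print(f"t={t:5.1f}s  {self.name} -> {p.name} "
                  f"(reads {p.info_sources}, uses {p.crew_resources})")
            t += p.duration_s
        return t

approach = Goal("configure for approach", [
    Procedure("set flaps 20", 4.0, ["airspeed indicator"], ["pilot flying: hands"]),
    Procedure("arm approach mode", 3.0, ["mode control panel"], ["pilot monitoring: hands"]),
])
approach.execute()
```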
DOE Office of Scientific and Technical Information (OSTI.GOV)
Groen, E.A., E-mail: Evelyne.Groen@gmail.com; Heijungs, R.; Leiden University, Einsteinweg 2, Leiden 2333 CC
Life cycle assessment (LCA) is an established tool to quantify the environmental impact of a product. A good assessment of uncertainty is important for making well-informed decisions in comparative LCA, as well as for correctly prioritising data collection efforts. Under- or overestimation of output uncertainty (e.g. output variance) will lead to incorrect decisions in such matters. The presence of correlations between input parameters during uncertainty propagation can increase or decrease the output variance. However, most LCA studies that include uncertainty analysis ignore correlations between input parameters during uncertainty propagation, which may lead to incorrect conclusions. Two approaches to include correlations between input parameters during uncertainty propagation and global sensitivity analysis were studied: an analytical approach and a sampling approach. The use of both approaches is illustrated for an artificial case study of electricity production. Results demonstrate that both approaches yield approximately the same output variance and sensitivity indices for this specific case study. Furthermore, we demonstrate that the analytical approach can be used to quantify the risk of ignoring correlations between input parameters during uncertainty propagation in LCA. We demonstrate that: (1) we can predict if including correlations among input parameters in uncertainty propagation will increase or decrease output variance; (2) we can quantify the risk of ignoring correlations on the output variance and the global sensitivity indices. Moreover, this procedure requires only little data. - Highlights: • Ignoring correlation leads to under- or overestimation of the output variance. • We demonstrated that the risk of ignoring correlation can be quantified. • The procedure proposed is generally applicable in life cycle assessment. • In some cases, ignoring correlation has a minimal effect on decision-making tools.
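The two propagation routes can be contrasted on a toy model (not the LCA case study): a first-order analytical variance using the full covariance matrix, the same expression with the off-diagonal terms dropped, and a Monte Carlo estimate from correlated samples. All numbers are invented.

```python
# Toy model y = a * b with correlated inputs; all numbers are invented.
import numpy as np

mu = np.array([2.0, 5.0])
cov = np.array([[0.04, 0.03],
                [0.03, 0.09]])                     # correlation of 0.5 between inputs

def g(x):
    return x[..., 0] * x[..., 1]

grad = np.array([mu[1], mu[0]])                    # dg/dx evaluated at the mean
var_with_corr = grad @ cov @ grad                  # first-order Taylor, full covariance
var_no_corr = grad @ np.diag(np.diag(cov)) @ grad  # same, off-diagonal terms ignored

rng = np.random.default_rng(0)
samples = rng.multivariate_normal(mu, cov, size=200_000)
var_sampling = g(samples).var()

print("analytical, with correlation:   ", round(float(var_with_corr), 3))
print("analytical, correlation ignored:", round(float(var_no_corr), 3))
print("Monte Carlo:                    ", round(float(var_sampling), 3))
```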
Design enhancement tools in MSC/NASTRAN
NASA Technical Reports Server (NTRS)
Wallerstein, D. V.
1984-01-01
Design sensitivity is the calculation of derivatives of constraint functions with respect to design variables. While a knowledge of these derivatives is useful in its own right, the derivatives are required in many efficient optimization methods. Constraint derivatives are also required in some reanalysis methods. It is shown where the sensitivity coefficients fit into the scheme of a basic organization of an optimization procedure. The analyzer is taken to be MSC/NASTRAN. The terminator program monitors the termination criteria and ends the optimization procedure when the criteria are satisfied. This program can reside in several places: in the optimizer itself, in a user written code, or as part of the MSC/EOS (Engineering Operating System) currently under development. Since several excellent optimization codes exist and since they require very specialized technical knowledge, the optimizer under the new MSC/EOS is considered to be selected and supplied by the user to meet his specific needs and preferences. The one exception to this is a fully stressed design (FSD) based on simple scaling. The gradients are currently supplied by various design sensitivity options now existing in MSC/NASTRAN's design sensitivity analysis (DSA).
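To make the notion of a constraint derivative concrete, here is a hedged sketch for a single stress constraint on a bar with one design variable, comparing the analytical derivative with a forward finite difference (the kind of gradient an optimizer would otherwise have to compute by repeated analyses). The constraint, load, and allowable are invented for illustration and have nothing to do with MSC/NASTRAN's DSA output format.

```python
import numpy as np

# Illustrative constraint: axial stress in a bar must stay below an allowable,
# g(A) = P/A - sigma_allow <= 0, with cross-sectional area A as design variable.
P, sigma_allow = 1.0e4, 250.0           # assumed load and allowable stress

def g(A):
    return P / A - sigma_allow

A0 = 50.0                               # assumed current design variable value

# Analytical sensitivity dg/dA = -P/A^2
dg_analytical = -P / A0**2

# Forward finite-difference sensitivity, as used without an analytical option
h = 1e-4 * A0
dg_fd = (g(A0 + h) - g(A0)) / h

print(dg_analytical, dg_fd)
```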
Wang, Mei; Kimbrell, Hillary Z.; Sholl, Andrew B.; Tulman, David B.; Elfer, Katherine N.; Schlichenmeyer, Tyler C.; Lee, Benjamin R.; Lacey, Michelle; Brown, J. Quincy
2015-01-01
Rapid assessment of prostate core biopsy pathology at the point-of-procedure could provide benefit in a variety of clinical situations. Even with advanced trans-rectal ultrasound guidance and saturation biopsy protocols, prostate cancer can be missed in up to half of all initial biopsy procedures. In addition, collection of tumor specimens for downstream histological, molecular, and genetic analysis is hindered by low tumor yield due to inability to identify prostate cancer grossly. However, current point-of-procedure pathology protocols such as frozen section analysis (FSA) are destructive, and too time- and labor-intensive to be practical or economical. Ex vivo microscopy of the excised specimens, stained with fast-acting fluorescent histology dyes, could be an attractive non-destructive alternative to FSA. In this work, we report the first demonstration of video-rate structured illumination microscopy (VR-SIM) for rapid high-resolution diagnostic imaging of prostate biopsies in realistic point-of-procedure timeframes. Large mosaic images of prostate biopsies stained with acridine orange are rendered in seconds, and contain excellent contrast and detail, exhibiting close correlation with corresponding H&E histology. A clinically-relevant review of VR-SIM images of 34 unfixed and uncut prostate core biopsies by two independent pathologists resulted in an area under the ROC curve (AUC) of 0.82–0.88, with a sensitivity ranging from 63–88% and a specificity ranging from 78–89%. When biopsies contained more than 5% tumor content, the sensitivity improved to 75–92%. The image quality, speed, minimal complexity, and ease of use of VR-SIM could prove to be features in favor of adoption as an alternative to destructive pathology at the point-of-procedure. PMID:26282168
Moore, E M; Forrest, R D; Boehm, S L
2013-02-01
Adolescent individuals display altered behavioral sensitivity to ethanol, which may contribute to the increased ethanol consumption seen in this age-group. However, genetics also exert considerable influence on both ethanol intake and sensitivity. Currently there is little research assessing the combined influence of developmental and genetic alcohol sensitivities. Sensitivity to the aversive effects of ethanol using a conditioned taste aversion (CTA) procedure was measured during both adolescence (P30) and adulthood (P75) in eight inbred mouse strains (C57BL/6J, DBA/2J, 129S1/SvImJ, A/J, BALB/cByJ, BTBR T(+) tf/J, C3H/HeJ and FVB/NJ). Adolescent and adult mice were water deprived, and subsequently provided with access to 0.9% (v/v) NaCl solution for 1 h. Immediately following access mice were administered ethanol (0, 1.5, 2.25 and 3 g/kg, ip). This procedure was repeated in 72 h intervals for a total of five CTA trials. Sensitivity to the aversive effects of ethanol was highly dependent upon both strain and age. Within an inbred strain, adolescent animals were consistently less sensitive to the aversive effects of ethanol than their adult counterparts. However, the dose of ethanol required to produce an aversion response differed as a function of both age and strain. © 2012 Blackwell Publishing Ltd and International Behavioural and Neural Genetics Society.
Diagnostic staging laparoscopy in gastric cancer treatment: A cost-effectiveness analysis.
Li, Kevin; Cannon, John G D; Jiang, Sam Y; Sambare, Tanmaya D; Owens, Douglas K; Bendavid, Eran; Poultsides, George A
2018-05-01
Accurate preoperative staging helps avert morbidity, mortality, and cost associated with non-therapeutic laparotomy in gastric cancer (GC) patients. Diagnostic staging laparoscopy (DSL) can detect metastases with high sensitivity, but its cost-effectiveness has not been previously studied. We developed a decision analysis model to assess the cost-effectiveness of preoperative DSL in GC workup. Analysis was based on a hypothetical cohort of GC patients in the U.S. for whom initial imaging shows no metastases. The cost-effectiveness of DSL was measured as cost per quality-adjusted life-year (QALY) gained. Drivers of cost-effectiveness were assessed in sensitivity analysis. Preoperative DSL required an investment of $107 012 per QALY. In sensitivity analysis, DSL became cost-effective at a threshold of $100 000/QALY when the probability of occult metastases exceeded 31.5% or when test sensitivity for metastases exceeded 86.3%. The likelihood of cost-effectiveness increased from 46% to 93% when both parameters were set at maximum reported values. The cost-effectiveness of DSL for GC patients is highly dependent on patient and test characteristics, and is more likely when DSL is used selectively where procedure yield is high, such as for locally advanced disease or in detecting peritoneal and superficial versus deep liver lesions. © 2017 Wiley Periodicals, Inc.
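The cost-effectiveness arithmetic behind statements like "an investment of $107 012 per QALY" is an incremental cost-effectiveness ratio compared against a willingness-to-pay threshold. The sketch below shows that calculation with invented costs and QALYs; only the $100 000/QALY threshold is taken from the abstract, and nothing here reproduces the study's decision model.

```python
# Hedged sketch of incremental cost-effectiveness arithmetic; all cost and QALY
# inputs below are illustrative placeholders, not the study's model parameters.
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

wtp_threshold = 100_000.0                      # willingness-to-pay, $/QALY

ratio = icer(cost_new=23_200.0, cost_old=20_000.0,   # assumed strategy costs
             qaly_new=10.03, qaly_old=10.00)         # assumed QALYs
print(ratio, "cost-effective" if ratio <= wtp_threshold else "not cost-effective")
```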
Methods for analysis of selected metals in water by atomic absorption
Fishman, Marvin J.; Downs, Sanford C.
1966-01-01
This manual describes atomic-absorption-spectroscopy methods for determining calcium, copper, lithium, magnesium, manganese, potassium, sodium, strontium and zinc in atmospheric precipitation, fresh waters, and brines. The procedures are intended to be used by water quality laboratories of the Water Resources Division of the U.S. Geological Survey. Detailed procedures, calculations, and methods for the preparation of reagents are given for each element along with data on accuracy, precision, and sensitivity. Other topics discussed briefly are the principle of atomic absorption, instrumentation used, and special analytical techniques.
Modjadidi, Karima; Kovera, Margaret Bull
2018-06-01
We investigated whether watching a videotaped photo array administration or expert testimony could sensitize jurors to the suggestiveness of single-blind eyewitness identification procedures. Mock jurors recruited from the community (N = 231) watched a videotaped simulation of a robbery trial in which the primary evidence against the defendant was an eyewitness identification. We varied whether the witness made an identification from a single- or double-blind photo array, the evidence included a videotape of the photo array procedure, and an expert testified about the effects of single-blind identification procedures on administrators' behaviors and witness accuracy. Watching the videotaped photo array administration sensitized mock jurors to the suggestiveness of the single-blind procedure, causing them to be less likely to convict a defendant identified through single-rather than double-blind procedures. Exposure to the videotaped procedure also decreased the favorability of mock jurors' ratings of the eyewitness, irrespective of whether the lineup was conducted by a single-blind administrator. Expert testimony did not sensitize jurors to administrator bias. Thus, videotaping identification procedures could serve as an important procedural reform that both preserves a record of whether the lineup administration was suggestive and might improve jurors' evaluations of eyewitness evidence. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Wu, Qi; Yuan, Huiming; Zhang, Lihua; Zhang, Yukui
2012-06-20
With the acceleration of proteome research, increasing attention has been paid to multidimensional liquid chromatography-mass spectrometry (MDLC-MS) due to its high peak capacity and separation efficiency. Recently, many efforts have been put to improve MDLC-based strategies including "top-down" and "bottom-up" to enable highly sensitive qualitative and quantitative analysis of proteins, as well as accelerate the whole analytical procedure. Integrated platforms with combination of sample pretreatment, multidimensional separations and identification were also developed to achieve high throughput and sensitive detection of proteomes, facilitating highly accurate and reproducible quantification. This review summarized the recent advances of such techniques and their applications in qualitative and quantitative analysis of proteomes. Copyright © 2012 Elsevier B.V. All rights reserved.
Modal Test/Analysis Correlation of Space Station Structures Using Nonlinear Sensitivity
NASA Technical Reports Server (NTRS)
Gupta, Viney K.; Newell, James F.; Berke, Laszlo; Armand, Sasan
1992-01-01
The modal correlation problem is formulated as a constrained optimization problem for validation of finite element models (FEM's). For large-scale structural applications, a pragmatic procedure for substructuring, model verification, and system integration is described to achieve effective modal correlation. The space station substructure FEM's are reduced using Lanczos vectors and integrated into a system FEM using Craig-Bampton component modal synthesis. The optimization code is interfaced with MSC/NASTRAN to solve the problem of modal test/analysis correlation; that is, the problem of validating FEM's for launch and on-orbit coupled loads analysis against experimentally observed frequencies and mode shapes. An iterative perturbation algorithm is derived and implemented to update nonlinear sensitivity (derivatives of eigenvalues and eigenvectors) during optimizer iterations, which reduced the number of finite element analyses.
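The eigenvalue derivatives mentioned above can be illustrated with the standard first-order result for a symmetric structural model, dλ/dp = φᵀ(∂K/∂p − λ ∂M/∂p)φ with mass-normalized modes. The sketch below applies it to a toy two-degree-of-freedom spring-mass system; it is not the Space Station FEM and not the paper's iterative perturbation algorithm.

```python
import numpy as np
from scipy.linalg import eigh

# Toy 2-DOF spring-mass model (illustrative values only)
k1, k2, m = 100.0, 150.0, 1.0
K = np.array([[k1 + k2, -k2], [-k2, k2]])
M = np.diag([m, m])

# Generalized eigenproblem K phi = lambda M phi; eigh returns M-orthonormal modes
lam, phi = eigh(K, M)

# First-order sensitivity of the first eigenvalue to the stiffness k2
dK_dk2 = np.array([[1.0, -1.0], [-1.0, 1.0]])
dM_dk2 = np.zeros((2, 2))
p = phi[:, 0]
dlam_dk2 = p @ (dK_dk2 - lam[0] * dM_dk2) @ p
print(lam[0], dlam_dk2)
```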
Takeyoshi, Masahiro; Sawaki, Masakuni; Yamasaki, Kanji; Kimber, Ian
2003-09-30
The murine local lymph node assay (LLNA) is used for the identification of chemicals that have the potential to cause skin sensitization. However, it requires specific facilities and handling procedures to accommodate a radioisotopic (RI) endpoint. We have developed a non-radioisotopic (non-RI) endpoint for the LLNA based on BrdU incorporation to avoid the use of RI. Although this alternative method appears viable in principle, it is somewhat less sensitive than the standard assay. In this study, we report investigations into the use of statistical analysis to improve the sensitivity of a non-RI LLNA procedure with alpha-hexylcinnamic aldehyde (HCA) in two separate experiments. The alternative non-RI method required HCA concentrations of greater than 25% to elicit a positive response based on the criterion for classification as a skin sensitizer in the standard LLNA. Nevertheless, dose responses to HCA in the alternative method were consistent in both experiments, and we examined whether an endpoint based upon the statistical significance of induced changes in LNC turnover, rather than an SI of 3 or greater, might provide additional sensitivity. The results reported here demonstrate that, with HCA at least, significant responses were recorded in each of two experiments following exposure of mice to 25% HCA. These data suggest that this approach may be more satisfactory, at least when BrdU incorporation is measured; even with the statistical endpoint, however, this modification of the LLNA remains rather less sensitive than the standard method. Taken together, the data reported here suggest that a modified LLNA in which BrdU is used in place of radioisotope incorporation shows some promise, but that in its present form, even with the use of a statistical endpoint, it lacks some of the sensitivity of the standard method. The challenge is to develop strategies for further refinement of this approach.
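The two positivity criteria discussed above (a stimulation index of 3 or greater versus a statistical test on treated and control responses) can be contrasted with a few lines of Python. The BrdU-type measurements below are invented, not the HCA data, and the t-test is only one of several tests that could serve as the statistical endpoint.

```python
import numpy as np
from scipy import stats

# Invented lymph node proliferation signals (arbitrary units)
control = np.array([1.0, 1.2, 0.9, 1.1, 1.0])
treated = np.array([2.4, 2.9, 2.2, 2.7, 2.5])      # e.g. a 25% test-article group

si = treated.mean() / control.mean()               # stimulation index
t_stat, p_value = stats.ttest_ind(treated, control)

positive_by_si = si >= 3.0
positive_by_stats = (p_value < 0.05) and (treated.mean() > control.mean())
print(si, p_value, positive_by_si, positive_by_stats)
```

With these invented numbers the SI criterion misses the response (SI below 3) while the statistical endpoint flags it, mirroring the motivation described in the abstract.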
A sensitive continuum analysis method for gamma ray spectra
NASA Technical Reports Server (NTRS)
Thakur, Alakh N.; Arnold, James R.
1993-01-01
In this work we examine ways to improve the sensitivity of the analysis procedure for gamma ray spectra with respect to small differences in the continuum (Compton) spectra. The method developed is applied to analyze gamma ray spectra obtained from planetary mapping by the Mars Observer spacecraft launched in September 1992. Calculated Mars simulation spectra and actual thick target bombardment spectra have been taken as test cases. The principle of the method rests on the extraction of continuum information from Fourier transforms of the spectra. We study how a better estimate of the spectrum from larger regions of the Mars surface will improve the analysis for smaller regions with poorer statistics. Estimation of signal within the continuum is done in the frequency domain which enables efficient and sensitive discrimination of subtle differences between two spectra. The process is compared to other methods for the extraction of information from the continuum. Finally we explore briefly the possible uses of this technique in other applications of continuum spectra.
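A minimal sketch of the underlying idea, retaining only low-frequency Fourier components of a spectrum to estimate the slowly varying continuum while sharp lines remain in the residual, is shown below. The synthetic spectrum and the cutoff frequency are assumptions for illustration; this is not the Mars Observer analysis pipeline.

```python
import numpy as np

# Synthetic spectrum: a smooth continuum plus one sharp line, with Poisson noise
n = 1024
channels = np.arange(n)
continuum = 200.0 * np.exp(-channels / 400.0)
peak = 50.0 * np.exp(-0.5 * ((channels - 300) / 3.0) ** 2)
rng = np.random.default_rng(1)
spectrum = rng.poisson(continuum + peak).astype(float)

# Keep only the lowest Fourier frequencies to estimate the continuum
F = np.fft.rfft(spectrum)
cutoff = 20                              # assumed number of low frequencies kept
F_low = np.zeros_like(F)
F_low[:cutoff] = F[:cutoff]
continuum_estimate = np.fft.irfft(F_low, n)

line_component = spectrum - continuum_estimate
print(line_component[290:311].sum())     # rough net counts in the sharp line
```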
Sensitivity to friction for primary explosives.
Matyáš, Robert; Šelešovský, Jakub; Musil, Tomáš
2012-04-30
The sensitivity to friction for a selection of primary explosives has been studied using a small BAM friction apparatus. Probit analysis was used for the construction of a sensitivity curve for each primary explosive tested. Two groups of primary explosives were chosen for measurement: (a) the most commonly used industrially produced primary explosives (e.g. lead azide, tetrazene, dinol, lead styphnate) and (b) the most commonly produced improvised primary explosives (e.g. triacetone triperoxide, hexamethylenetriperoxide diamine, mercury fulminate, acetylides of heavy metals). A knowledge of friction sensitivity is very important for determining manipulation safety for primary explosives. All the primary explosives tested were carefully characterised (synthesis procedure, shape and size of crystals). The sensitivity curves obtained represent a unique set of data, which cannot be found anywhere else in the available literature. Copyright © 2012 Elsevier B.V. All rights reserved.
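Probit construction of a sensitivity curve amounts to fitting a cumulative normal dose-response to go/no-go test data by maximum likelihood. The sketch below does this for invented friction-test counts (not the measured data from the paper) and reports the force at 50% response probability.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Invented go/no-go friction results: trials and initiations at each force level
force = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])   # friction force levels [N]
n_trials = np.array([10, 10, 10, 10, 10, 10])
n_fired = np.array([0, 1, 3, 6, 9, 10])

def neg_log_lik(params):
    a, b = params
    p = norm.cdf(a + b * np.log(force))               # probit model on log(force)
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(n_fired * np.log(p) + (n_trials - n_fired) * np.log(1 - p))

res = minimize(neg_log_lik, x0=[0.0, 1.0], method="Nelder-Mead")
a, b = res.x
f50 = np.exp(-a / b)                                  # force giving 50% response
print(f50)
```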
Hernández-Borges, Javier; Rodriguez-Delgado, Miguel Angel; García-Montelongo, Francisco J; Cifuentes, Alejandro
2005-06-01
In this work, the determination of a group of triazolopyrimidine sulfonanilide herbicides (cloransulam-methyl, metosulam, flumetsulam, florasulam, and diclosulam) in soy milk by capillary electrophoresis-mass spectrometry (CE-MS) is presented. The main electrospray interface (ESI) parameters (nebulizer pressure, dry gas flow rate, dry gas temperature, and composition of the sheath liquid) are optimized using a central composite design. To increase the sensitivity of the CE-MS method, an off-line sample preconcentration procedure based on solid-phase extraction (SPE) is combined with an on-line stacking procedure (i.e. normal stacking mode, NSM). Samples could be injected for up to 100 s, providing limits of detection (LODs) down to 74 microg/L, i.e., at the low ppb level, with relative standard deviation values (RSD, %) between 3.8% and 6.4% for peak areas on the same day, and between 6.5% and 8.1% on three different days. The usefulness of the optimized SPE-NSM-CE-MS procedure is demonstrated through the sensitive quantification of the selected pesticides in soy milk samples.
NASA Astrophysics Data System (ADS)
Guo, Xiaowei; Chen, Mingyong; Zhu, Jianhua; Ma, Yanqin; Du, Jinglei; Guo, Yongkang; Du, Chunlei
2006-01-01
A novel method for the fabrication of continuous micro-optical components is presented in this paper. It employs a computer-controlled digital micromirror device (DMD™) as a switchable projection mask and silver-halide sensitized gelatin (SHSG) as the recording material. By etching SHSG with an enzyme solution, micro-optical components with relief modulation can be generated through special processing procedures. The principles of etching SHSG with enzyme and the theoretical analysis for deep etching are discussed in detail, and detailed quantitative experiments on the processing procedures are conducted to determine optimum technique parameters. A good linear relationship within a depth range of 4 μm was experimentally obtained between exposure dose and relief depth. Finally, a microlens array with 256.8 μm radius and 2.572 μm depth was achieved. This method is simple and cheap, and the aberration introduced in the processing procedures can be corrected in the step of designing the mask, so it is a practical method to fabricate good continuous profiles for low-volume production.
Enhanced Multiobjective Optimization Technique for Comprehensive Aerospace Design. Part A
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Rajadas, John N.
1997-01-01
A multidisciplinary design optimization procedure which couples formal multiobjective formulation techniques and complex analysis procedures (such as computational fluid dynamics (CFD) codes) has been developed. The procedure has been demonstrated on a specific high speed flow application involving aerodynamics and acoustics (sonic boom minimization). In order to account for multiple design objectives arising from complex performance requirements, multiobjective formulation techniques are used to formulate the optimization problem. Techniques to enhance the existing Kreisselmeier-Steinhauser (K-S) function multiobjective formulation approach have been developed. The K-S function procedure used in the proposed work transforms a constrained problem with multiple objective functions into an unconstrained problem which is then solved using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. Weight factors are assigned to each objective function during the transformation process. This enhanced procedure provides the designer with the capability to emphasize specific design objectives during the optimization process. The demonstration of the procedure utilizes a computational fluid dynamics (CFD) code which solves the three-dimensional parabolized Navier-Stokes (PNS) equations for the flow field along with an appropriate sonic boom evaluation procedure, thus introducing both aerodynamic performance and sonic boom as the design objectives to be optimized simultaneously. Sensitivity analysis is performed using a discrete differentiation approach. An approximation technique has been used within the optimizer to improve the overall computational efficiency of the procedure in order to make it suitable for design applications in an industrial setting.
Davis, S C; Makarov, A A; Hughes, J D
1999-01-01
Analysis of sub-ppb levels of polynuclear aromatic hydrocarbons (PAHs) in drinking water by high performance liquid chromatography (HPLC) fluorescence detection typically requires large water samples and lengthy extraction procedures. The detection itself, although selective, does not give compound identity confirmation. Benchtop gas chromatography/mass spectrometry (GC/MS) systems operating in the more sensitive selected ion monitoring (SIM) acquisition mode discard spectral information and, when operating in scanning mode, are less sensitive and scan too slowly. The selectivity of hyperthermal surface ionisation (HSI), the high column flow rate capacity of the supersonic molecular beam (SMB) GC/MS interface, and the high acquisition rate of time-of-flight (TOF) mass analysis, are combined here to facilitate a rapid, specific and sensitive technique for the analysis of trace levels of PAHs in water. This work reports the advantages gained by using the GC/HSI-TOF system over the HPLC fluorescence method, and discusses in some detail the nature of the instrumentation used.
Sensitivity of surface meteorological analyses to observation networks
NASA Astrophysics Data System (ADS)
Tyndall, Daniel Paul
A computationally efficient variational analysis system for two-dimensional meteorological fields is developed and described. This analysis approach is most efficient when the number of analysis grid points is much larger than the number of available observations, such as for large domain mesoscale analyses. The analysis system is developed using MATLAB software and can take advantage of multiple processors or processor cores. A version of the analysis system has been exported as a platform independent application (i.e., can be run on Windows, Linux, or Macintosh OS X desktop computers without a MATLAB license) with input/output operations handled by commonly available internet software combined with data archives at the University of Utah. The impact of observation networks on the meteorological analyses is assessed by utilizing a percentile ranking of individual observation sensitivity and impact, which is computed by using the adjoint of the variational surface assimilation system. This methodology is demonstrated using a case study of the analysis from 1400 UTC 27 October 2010 over the entire contiguous United States domain. The sensitivity of this approach to the dependence of the background error covariance on observation density is examined. Observation sensitivity and impact provide insight on the influence of observations from heterogeneous observing networks as well as serve as objective metrics for quality control procedures that may help to identify stations with significant siting, reporting, or representativeness issues.
Smith, Toby O; Drew, Benjamin T; Toms, Andoni P
2012-07-01
Magnetic resonance imaging (MRI) and magnetic resonance arthrography (MRA) have gained increasing favour in the assessment of patients with suspected glenoid labral injuries. The purpose of this study was to determine the diagnostic accuracy of MRI and MRA in the detection of glenoid labral lesions. A systematic review was undertaken of the electronic databases Cochrane Central Register of Controlled Trials, MEDLINE, EMBASE, AMED and CINAHL, in addition to a search of unpublished literature databases. All studies which compared the ability of MRI or MRA (index test) to assess glenoid labral tears or lesions, when verified with a surgical procedure (arthroscopy or open surgery - reference test), were included. Data extraction and methodological appraisal using the QUADAS tool were both conducted by two reviewers independently. Data were analysed through a summary receiver operator characteristic curve, and pooled sensitivity and specificity were calculated with 95% confidence intervals. Sixty studies including 4,667 shoulders from 4,574 patients were reviewed. There appeared to be slightly greater diagnostic test accuracy for MRA over MRI for the detection of overall glenoid labral lesions (MRA: sensitivity 88%, specificity 93% vs. MRI: sensitivity 76%, specificity 87%). Methodologically, studies recruited and identified their samples appropriately and clearly defined the radiological procedures. In general, it was not clearly defined why patients were lost during the study, and studies were poor at recording whether the same clinical data were available to the radiologist interpreting the MRI or MRA as would be available in clinical practice. Most studies did not state whether the surgeon interpreting the arthroscopic procedure was blinded to the results of the MR or MRA imaging. Based on the available literature, overall MRA appeared marginally superior to MRI for the detection of glenohumeral labral lesions. Level 2a.
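The pooled sensitivity and specificity figures quoted above come from combining 2x2 diagnostic tables across studies. The sketch below shows a simple fixed-effect pooling with a normal approximation on the logit scale for the confidence intervals; the counts are invented, and the review itself used summary ROC methodology rather than this simplified pooling.

```python
import numpy as np

# Invented 2x2 counts for three hypothetical studies
tp = np.array([40, 55, 30])        # true positives
fn = np.array([6, 8, 7])           # false negatives
tn = np.array([70, 90, 50])        # true negatives
fp = np.array([9, 7, 6])           # false positives

pooled_sens = tp.sum() / (tp.sum() + fn.sum())
pooled_spec = tn.sum() / (tn.sum() + fp.sum())

def logit_ci(p, n, z=1.96):
    """95% CI via a normal approximation on the logit scale."""
    logit = np.log(p / (1 - p))
    se = np.sqrt(1.0 / (n * p * (1 - p)))
    lo, hi = logit - z * se, logit + z * se
    return 1 / (1 + np.exp(-lo)), 1 / (1 + np.exp(-hi))

print(pooled_sens, logit_ci(pooled_sens, (tp + fn).sum()))
print(pooled_spec, logit_ci(pooled_spec, (tn + fp).sum()))
```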
2015-01-01
The rapidly expanding availability of high-resolution mass spectrometry has substantially enhanced the ion-current-based relative quantification techniques. Despite the increasing interest in ion-current-based methods, quantitative sensitivity, accuracy, and false discovery rate remain the major concerns; consequently, comprehensive evaluation and development in these regards are urgently needed. Here we describe an integrated, new procedure for data normalization and protein ratio estimation, termed ICan, for improved ion-current-based analysis of data generated by high-resolution mass spectrometry (MS). ICan achieved significantly better accuracy and precision, and a lower false-positive rate for discovering altered proteins, than current popular pipelines. A spiked-in experiment was used to evaluate the performance of ICan to detect small changes. In this study, E. coli extracts were spiked with moderate-abundance proteins from human plasma (MAP, enriched by IgY14-SuperMix procedure) at two different levels to set a small change of 1.5-fold. Forty-five (92%, with an average ratio of 1.71 ± 0.13) of 49 identified MAP proteins (i.e., the true positives) and none of the reference proteins (1.0-fold) were determined to be significantly altered proteins, with cutoff thresholds of ≥1.3-fold change and p ≤ 0.05. This is the first study to evaluate and prove competitive performance of the ion-current-based approach for assigning significance to proteins with small changes. By comparison, other methods showed remarkably inferior performance. ICan can be broadly applied to reliable and sensitive proteomic surveys of multiple biological samples with the use of high-resolution MS. Moreover, many key features evaluated and optimized here, such as normalization, protein ratio determination, and statistical analyses, are also valuable for data analysis by isotope-labeling methods. PMID:25285707
Medialization thyroplasty versus injection laryngoplasty: a cost minimization analysis.
Tam, Samantha; Sun, Hongmei; Sarma, Sisira; Siu, Jennifer; Fung, Kevin; Sowerby, Leigh
2017-02-20
Medialization thyroplasty and injection laryngoplasty are widely accepted treatment options for unilateral vocal fold paralysis. Although both procedures result in similar clinical outcomes, little is known about the corresponding medical care costs. Medialization thyroplasty requires expensive operating room resources while injection laryngoplasty utilizes outpatient resources but may require repeated procedures. The purpose of this study, therefore, is to quantify the cost differences in adult patients with unilateral vocal fold paralysis undergoing medialization thyroplasty versus injection laryngoplasty. Cost minimization analysis was conducted using a decision tree model. A decision tree model was constructed to capture clinical scenarios for medialization thyroplasty and injection laryngoplasty. Probabilities for various events were obtained from a retrospective cohort from the London Health Sciences Centre, Canada. Costs were derived from the published literature and the London Health Sciences Centre. All costs were reported in 2014 Canadian dollars. The time horizon was 5 years. The study was conducted from an academic hospital perspective in Canada. Various sensitivity analyses were conducted to assess differences in procedure-specific costs and probabilities of key events. Sixty-three patients underwent medialization thyroplasty and 41 underwent injection laryngoplasty. The cost of medialization thyroplasty was C$2499.10 per patient whereas treatment with injection laryngoplasty cost C$943.19. Results showed that the cost savings with injection laryngoplasty were C$1555.91. Deterministic and probabilistic sensitivity analyses suggested cost savings ranged from C$596 to C$3626. Treatment with injection laryngoplasty results in cost savings of C$1555.91 per patient. Our extensive sensitivity analyses suggest that switching from medialization thyroplasty to injection laryngoplasty will lead to a minimum cost savings of C$596 per patient. Considering the significant cost savings and similar effectiveness, injection laryngoplasty should be strongly considered as a preferred treatment option for patients diagnosed with unilateral vocal fold paralysis.
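A cost-minimization decision tree of this kind reduces to an expected-cost comparison in which one branch may incur repeated procedures. The sketch below shows the arithmetic; apart from the per-patient thyroplasty cost quoted in the abstract, every probability and unit cost is an invented placeholder, not a value from the study cohort.

```python
# Hedged sketch of a cost-minimization comparison with possible repeat procedures.
cost_thyroplasty = 2499.10        # per-patient cost reported in the abstract above
cost_injection = 500.0            # assumed cost of a single injection
p_repeat = 0.35                   # assumed probability each injection needs a repeat
max_repeats = 3                   # assumed maximum repeats within the 5-year horizon

# Expected number of injections = 1 + p + p^2 + ... up to the assumed cap
expected_cost_injection = cost_injection * sum(p_repeat**k for k in range(max_repeats + 1))
savings = cost_thyroplasty - expected_cost_injection
print(expected_cost_injection, savings)
```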
NASA Astrophysics Data System (ADS)
Osten, W.; Pedrini, G.; Weidmann, P.; Gadow, R.
2015-08-01
A minimally invasive but high resolution method for residual stress analysis of ceramic coatings made by thermal spray coating, using a pulsed laser for flexible hole drilling, is described. The residual stresses are retrieved by applying the measured surface data to a model-based reconstruction procedure. While the 3D deformations and the profile of the machined area are measured with digital holography, the residual stresses are calculated by FE analysis. To improve the sensitivity of the method, an SLM is applied to control the distribution and the shape of the holes. The paper presents the complete measurement and reconstruction procedure and discusses the advantages and challenges of the new technology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chitsazzadeh, S; Wells, D; Mestrovic, A
2016-06-15
Purpose: To develop a QA procedure for gated VMAT stereotactic ablative radiotherapy (SABR) treatments. Methods: An interface was constructed to attach the translational stage of a Quasar respiratory motion phantom to a pinpoint ion chamber insert and move the ion chamber inside an ArcCheck diode array. The Quasar phantom controller used a patient-specific breathing pattern to translate the ion chamber in a superior-inferior direction inside the ArcCheck. An amplitude-based RPM tracking system was specified to turn the beam on during the exhale phase of the breathing pattern. SABR plans were developed using Eclipse for liver PTVs ranging in size from 3-12 cm in diameter using a 2-arc VMAT technique. Dose was measured in the middle of the penumbra region, where the high dose gradient allowed for sensitive detection of any inaccuracies in gated dose delivery. The overall fidelity of the dose distribution was confirmed using ArcCheck. The sensitivity of the gating QA procedure was investigated with respect to the following four parameters: PTV size, duration of exhale, baseline drift, and gating window size. Results: The difference between the measured dose to a point in the penumbra and the Eclipse calculated dose was under 2% for small residual motions. The QA procedure was independent of PTV size and duration of exhale. Baseline drift and gating window size, however, significantly affected the penumbral dose measurement, with differences of up to 30% compared to Eclipse. Conclusion: This study described a highly sensitive QA procedure for gated VMAT SABR treatments. The QA outcome was dependent on the gating window size and baseline drift. Analysis of additional patient breathing patterns will be required to determine a clinically relevant gating window size and an appropriate tolerance level for this procedure.
Liu, Richard T; Burke, Taylor A; Abramson, Lyn Y; Alloy, Lauren B
2017-11-04
Behavioral Approach System (BAS) sensitivity has been implicated in the development of a variety of different psychiatric disorders. Prominent among these in the empirical literature are bipolar spectrum disorders (BSDs). Given that adolescence represents a critical developmental stage of risk for the onset of BSDs, it is important to clarify the latent structure of BAS sensitivity in this period of development. A statistical approach especially well-suited for delineating the latent structure of BAS sensitivity is taxometric analysis, which is designed to evaluate whether the latent structure of a construct is taxonic (i.e., categorical) or dimensional (i.e., continuous) in nature. The current study applied three mathematically non-redundant taxometric procedures (i.e., MAMBAC, MAXEIG, and L-Mode) to a large community sample of adolescents (n = 12,494) who completed two separate measures of BAS sensitivity: the BIS/BAS Scales (Carver and White, Journal of Personality and Social Psychology, 67, 319-333, 1994) and the Sensitivity to Reward and Sensitivity to Punishment Questionnaire (Torrubia et al., Personality and Individual Differences, 31, 837-862, 2001). Given the significant developmental changes in reward sensitivity that occur across adolescence, the current investigation aimed to provide a fine-grained evaluation of the data by performing taxometric analyses at an age-by-age level (14-19 years; n for each age ≥ 883). Results derived from taxometric procedures, across all ages tested, were highly consistent, providing strong evidence that BAS sensitivity is best conceptualized as dimensional in nature. Thus, the findings suggest that BAS-related vulnerability to BSDs exists along a continuum of severity, with no natural cut-point qualitatively differentiating high- and low-risk adolescents. Clinical and research implications for the assessment of BSD-related vulnerability are discussed.
Code of Federal Regulations, 2010 CFR
2010-01-01
... information and proprietary information in the Federal Docket Management System (FDMS)? 11.35 Section 11.35... RULEMAKING PROCEDURES Rulemaking Procedures General § 11.35 Does FAA include sensitive security information and proprietary information in the Federal Docket Management System (FDMS)? (a) Sensitive security...
Code of Federal Regulations, 2014 CFR
2014-01-01
... information and proprietary information in the Federal Docket Management System (FDMS)? 11.35 Section 11.35... RULEMAKING PROCEDURES Rulemaking Procedures General § 11.35 Does FAA include sensitive security information and proprietary information in the Federal Docket Management System (FDMS)? (a) Sensitive security...
Code of Federal Regulations, 2012 CFR
2012-01-01
... information and proprietary information in the Federal Docket Management System (FDMS)? 11.35 Section 11.35... RULEMAKING PROCEDURES Rulemaking Procedures General § 11.35 Does FAA include sensitive security information and proprietary information in the Federal Docket Management System (FDMS)? (a) Sensitive security...
Code of Federal Regulations, 2013 CFR
2013-01-01
... information and proprietary information in the Federal Docket Management System (FDMS)? 11.35 Section 11.35... RULEMAKING PROCEDURES Rulemaking Procedures General § 11.35 Does FAA include sensitive security information and proprietary information in the Federal Docket Management System (FDMS)? (a) Sensitive security...
Code of Federal Regulations, 2011 CFR
2011-01-01
... information and proprietary information in the Federal Docket Management System (FDMS)? 11.35 Section 11.35... RULEMAKING PROCEDURES Rulemaking Procedures General § 11.35 Does FAA include sensitive security information and proprietary information in the Federal Docket Management System (FDMS)? (a) Sensitive security...
Statistical Analyses of Scatterplots to Identify Important Factors in Large-Scale Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kleijnen, J.P.C.; Helton, J.C.
1999-04-01
The robustness of procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses is investigated. These procedures are based on attempts to detect increasingly complex patterns in the scatterplots under consideration and involve the identification of (1) linear relationships with correlation coefficients, (2) monotonic relationships with rank correlation coefficients, (3) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (4) trends in variability as defined by variances and interquartile ranges, and (5) deviations from randomness as defined by the chi-square statistic. The following two topics related to the robustness of these procedures are considered for a sequence of example analyses with a large model for two-phase fluid flow: the presence of Type I and Type II errors, and the stability of results obtained with independent Latin hypercube samples. Observations from the analysis include: (1) Type I errors are unavoidable, (2) Type II errors can occur when inappropriate analysis procedures are used, (3) physical explanations should always be sought for why statistical procedures identify variables as being important, and (4) the identification of important variables tends to be stable for independent Latin hypercube samples.
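Several of the scatterplot tests listed above are available directly in scipy.stats. The sketch below applies a few of them to a synthetic input/output sample: a Pearson correlation for linear relationships, a Spearman rank correlation for monotonic relationships, and a Kruskal-Wallis test across bins of the input for trends in central tendency. The data are simulated for illustration, not drawn from the two-phase flow model in the report.

```python
import numpy as np
from scipy import stats

# Synthetic Monte Carlo sample: one sampled input, one nonlinear monotonic output
rng = np.random.default_rng(2)
x = rng.uniform(0.0, 1.0, 500)
y = np.sqrt(x) + 0.1 * rng.normal(size=500)

pearson_r, _ = stats.pearsonr(x, y)            # (1) linear relationship
spearman_r, _ = stats.spearmanr(x, y)          # (2) monotonic relationship

# (3) trend in central tendency: Kruskal-Wallis across quintile bins of x
bins = np.digitize(x, np.quantile(x, [0.2, 0.4, 0.6, 0.8]))
groups = [y[bins == b] for b in range(5)]
kw_stat, kw_p = stats.kruskal(*groups)

print(pearson_r, spearman_r, kw_stat, kw_p)
```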
Makhni, Eric C; Lamba, Nayan; Swart, Eric; Steinhaus, Michael E; Ahmad, Christopher S; Romeo, Anthony A; Verma, Nikhil N
2016-09-01
To compare the cost-effectiveness of arthroscopic revision instability repair and the Latarjet procedure in treating patients with recurrent instability after initial arthroscopic instability repair. An expected-value decision analysis of revision arthroscopic instability repair compared with the Latarjet procedure for recurrent instability following a failed initial repair attempt was modeled. Inputs regarding procedure cost, clinical outcomes, and health utilities were derived from the literature. Compared with revision arthroscopic repair, Latarjet was less expensive ($13,672 v $15,287) with improved clinical outcomes (43.78 v 36.76 quality-adjusted life-years). Both arthroscopic repair and Latarjet were cost-effective compared with nonoperative treatment (incremental cost-effectiveness ratios of 3,082 and 1,141, respectively). Results from sensitivity analyses indicate that under scenarios of high rates of stability postoperatively, along with improved clinical outcome scores, revision arthroscopic repair becomes increasingly cost-effective. The Latarjet procedure for failed instability repair is a cost-effective treatment option, with lower costs and improved clinical outcomes compared with revision arthroscopic instability repair. However, surgeons must still incorporate clinical judgment into treatment algorithm formation. Level IV, expected value decision analysis. Copyright © 2016. Published by Elsevier Inc.
Emotional Reactivity and Parenting Sensitivity Interact to Predict Cortisol Output in Toddlers
ERIC Educational Resources Information Center
Blair, Clancy; Ursache, Alexandra; Mills-Koonce, Roger; Stifter, Cynthia; Voegtline, Kristin; Granger, Douglas A.
2015-01-01
Cortisol output in response to emotion induction procedures was examined at child age 24 months in a prospective longitudinal sample of 1,292 children and families in predominantly low-income and nonurban communities in two regions of high poverty in the United States. Multilevel analysis indicated that observed emotional reactivity to a mask…
Toward Best Practices in Analyzing Datasets with Missing Data: Comparisons and Recommendations
ERIC Educational Resources Information Center
Johnson, David R.; Young, Rebekah
2011-01-01
Although several methods have been developed to allow for the analysis of data in the presence of missing values, no clear guide exists to help family researchers in choosing among the many options and procedures available. We delineate these options and examine the sensitivity of the findings in a regression model estimated in three random…
Predictor sort sampling, tight t's, and the analysis of covariance: theory, tables, and examples
S. P. Verrill; D. W. Green
In recent years wood strength researchers have begun to replace experimental unit allocation via random sampling with allocation via sorts based on nondestructive measurements of strength predictors such as modulus of elasticity and specific gravity. Although this procedure has the potential of greatly increasing experimental sensitivity, as currently implemented it...
Factor analysis of social skills inventory responses of Italians and Americans.
Galeazzi, Aldo; Franceschina, Emilio; Holmes, George R
2002-06-01
The Social Skills Inventory is a 90-item self-report procedure designed to measure social and communication skills. The inventory measures six dimensions, namely, Emotional Expressivity, Emotional Sensitivity, Emotional Control, Social Expressivity, Social Sensitivity, and Social Control. The Italian version was administered in several cities in Northern Italy to 500 Italian participants ranging in age from 15 to 59 years. Factor analysis appears to confirm the adequacy of the inventory for the Italian adult population. Results indicate strong similarities between the Italian and American populations with respect to the measure of social skills. Indexes of internal reliability and test-retest reliability are good for almost all subscales of the inventory, which should encourage the use of this inventory with Italian samples.
Solar energy system economic evaluation for IBM System 3, Glendo, Wyoming
NASA Technical Reports Server (NTRS)
1980-01-01
This analysis was based on the technical and economic models in the f-chart design procedure, with inputs based on the characteristics of the installed system. The economic parameters evaluated were the present worth of system cost over a projected twenty-year life, life cycle savings, year of positive savings, and year of payback for the optimized solar energy system at each of the analysis sites. The sensitivity of the economic evaluation to uncertainties in constituent system and economic variables was also investigated.
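The life-cycle economic quantities listed above reduce to a discounted cash-flow calculation. The sketch below computes present worth of savings, life cycle savings, and payback year for an escalating fuel-savings stream; every number is an illustrative assumption and none comes from the Glendo system analysis.

```python
import numpy as np

years = 20
discount = 0.08                       # assumed discount rate
escalation = 0.06                     # assumed fuel price escalation rate
annual_fuel_savings0 = 400.0          # assumed first-year fuel savings [$]
system_cost = 6000.0                  # assumed installed solar system cost [$]

t = np.arange(1, years + 1)
savings = annual_fuel_savings0 * (1 + escalation) ** (t - 1)
discounted = savings / (1 + discount) ** t

present_worth_savings = discounted.sum()
life_cycle_savings = present_worth_savings - system_cost

cumulative = np.cumsum(discounted) - system_cost
payback_year = int(t[np.argmax(cumulative > 0)]) if (cumulative > 0).any() else None
print(life_cycle_savings, payback_year)
```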
Bernal-Martinez, L.; Castelli, M. V.; Rodriguez-Tudela, J. L.; Cuenca-Estrella, M.
2014-01-01
A retrospective analysis of real-time PCR (RT-PCR) results for 151 biopsy samples obtained from 132 patients with proven invasive fungal diseases was performed. PCR-based techniques proved to be fast and sensitive and enabled definitive diagnosis in all cases studied, with detection of a total of 28 fungal species. PMID:24574295
Insulation Cork Boards-Environmental Life Cycle Assessment of an Organic Construction Material.
Silvestre, José D; Pargana, Nuno; de Brito, Jorge; Pinheiro, Manuel D; Durão, Vera
2016-05-20
Envelope insulation is a relevant technical solution to cut energy consumption and reduce environmental impacts in buildings. Insulation Cork Boards (ICB) are a natural thermal insulation material whose production promotes the recycling of agricultural waste. The aim of this paper is to determine and evaluate the environmental impacts of the production, use, and end-of-life processing of ICB. A "cradle-to-cradle" environmental Life Cycle Assessment (LCA) was performed according to International LCA standards and the European standards on the environmental evaluation of buildings. These results were based on site-specific data and resulted from a consistent methodology, fully described in the paper for each life cycle stage: cork oak tree growth, ICB production, and end-of-life processing (modeling of the carbon flows, i.e. uptakes and emissions, including a sensitivity analysis of this procedure); the production stage (modeling of energy processes and a sensitivity analysis of the allocation procedures); and building operation (the expected service life of ICB), together with an analysis concerning the need to consider the thermal diffusivity of ICB in the comparison of the performance of insulation materials. This paper presents the up-to-date "cradle-to-cradle" environmental performance of ICB for the environmental categories and life-cycle stages defined in European standards.
2D-electrophoresis and the urine proteome map: where do we stand?
Candiano, Giovanni; Santucci, Laura; Petretto, Andrea; Bruschi, Maurizio; Dimuccio, Veronica; Urbani, Andrea; Bagnasco, Serena; Ghiggeri, Gian Marco
2010-03-10
The discovery of urinary biomarkers is a main topic in clinical medicine. The development of proteomics has rapidly changed the knowledge on urine protein composition and probably will modify it again. Two-dimensional electrophoresis (2D-PAGE) coupled with mass spectrometry has represented for years the technique of choice for the analysis of urine proteins, and it is time to draw some conclusions. This review will focus on major methodological aspects related to urine sample collection, storage and analysis by 2D-PAGE and attempt to define an advanced normal urine protein map. Overall, 1118 spots were reproducibly found in normal urine samples but only 275 were characterized as isoforms of 82 proteins. One hundred eight spots belonging to 30 proteins were also detected in plasma and corresponded to typical plasma components. The identity of most of the proteins found in normal urine by 2D-PAGE remains to be determined, the majority being low-molecular weight proteins (<30 kDa). Equalization procedures would also enhance the sensitivity of the analysis and allow low abundance proteins to be characterized. Therefore, we are still on the way to defining the normal urine composition. Technological advancements in concentrating procedures will improve sensitivity and give the possibility to purify proteins for mass spectrometry. Copyright (c) 2009 Elsevier B.V. All rights reserved.
Karge, Lukas; Gilles, Ralph
2017-01-01
An improved data-reduction procedure is proposed and demonstrated for small-angle neutron scattering (SANS) measurements. Its main feature is the correction of geometry- and wavelength-dependent intensity variations on the detector in a separate step from the different pixel sensitivities: the geometric and wavelength effects can be corrected analytically, while pixel sensitivities have to be calibrated to a reference measurement. The geometric effects are treated for position-sensitive 3He proportional counter tubes, where they are anisotropic owing to the cylindrical geometry of the gas tubes. For the calibration of pixel sensitivities, a procedure is developed that is valid for isotropic and anisotropic signals. The proposed procedure can save a significant amount of beamtime which has hitherto been used for calibration measurements. PMID:29021734
Pickl, Karin E; Adamek, Viktor; Gorges, Roland; Sinner, Frank M
2011-07-15
Due to increased regulatory requirements, the interaction of active pharmaceutical ingredients with various surfaces and solutions during production and storage is gaining interest in the pharmaceutical research field, in particular with respect to the development of new formulations, new packaging materials and the evaluation of cleaning processes. Experimental adsorption/absorption studies as well as the study of cleaning processes require sophisticated analytical methods with high sensitivity for the drug of interest. In the case of 2,6-diisopropylphenol - a small lipophilic drug which is typically formulated as a lipid emulsion for intravenous injection - a highly sensitive method in the concentration range of μg/l that can be applied to a variety of different sample matrices, including lipid emulsions, is needed. We hereby present a headspace-solid phase microextraction (HS-SPME) approach as a simple cleanup procedure for sensitive 2,6-diisopropylphenol quantification from diverse matrices, choosing a lipid emulsion as the most challenging matrix with regard to complexity. By combining the simple and straightforward HS-SPME sample pretreatment with an optimized GC-MS quantification method, a robust and sensitive method for 2,6-diisopropylphenol was developed. This method shows excellent sensitivity in the low μg/l concentration range (5-200 μg/l), good accuracy (94.8-98.8%) and precision (intra-day precision 0.1-9.2%, inter-day precision 2.0-7.7%). The method can be easily adapted to other, less complex, matrices such as water or swab extracts. Hence, the presented method holds the potential to serve as a single and simple analytical procedure for 2,6-diisopropylphenol analysis in various types of samples, such as those required in, e.g., adsorption/absorption studies, which typically deal with a variety of different surfaces (steel, plastic, glass, etc.) and solutions/matrices including lipid emulsions. Copyright © 2011 Elsevier B.V. All rights reserved.
Poch, G K; Klette, K L; Anderson, C
2000-04-01
This paper compares the potential forensic application of two sensitive and rapid procedures (liquid chromatography-mass spectrometry and liquid chromatography-ion trap mass spectrometry) for the detection and quantitation of 2-oxo-3-hydroxy lysergic acid diethylamide (O-H-LSD) a major LSD metabolite. O-H-LSD calibration curves for both procedures were linear over the concentration range 0-8,000 pg/mL with correlation coefficients (r2) greater than 0.99. The observed limit of detection (LOD) and limit of quantitation (LOQ) for O-H-LSD in both procedures was 400 pg/mL. Sixty-eight human urine specimens that had previously been found to contain LSD by gas chromatography-mass spectrometry were reanalyzed by both procedures for LSD and O-H-LSD. These specimens contained a mean concentration of O-H-LSD approximately 16 times higher than the LSD concentration. Because both LC methods produce similar results, either procedure can be readily adapted to O-H-LSD analysis for use in high-volume drug-testing laboratories. In addition, the possibility of significantly increasing the LSD detection time window by targeting this major LSD metabolite for analysis may influence other drug-free workplace programs to test for LSD.
NASA Technical Reports Server (NTRS)
Foss, W. E., Jr.
1979-01-01
The takeoff and approach performance of an aircraft is calculated in accordance with the airworthiness standards of the Federal Aviation Regulations. The aircraft and flight constraints are represented in sufficient detail to permit realistic sensitivity studies in terms of either configuration modifications or changes in operational procedures. The program may be used to investigate advanced operational procedures for noise alleviation such as programmed throttle and flap controls. Extensive profile time history data are generated and are placed on an interface file which can be input directly to the NASA aircraft noise prediction program (ANOPP).
Markov models of genome segmentation
NASA Astrophysics Data System (ADS)
Thakur, Vivek; Azad, Rajeev K.; Ramaswamy, Ram
2007-01-01
We introduce Markov models for segmentation of symbolic sequences, extending a segmentation procedure based on the Jensen-Shannon divergence that has been introduced earlier. Higher-order Markov models are more sensitive to the details of local patterns and in application to genome analysis, this makes it possible to segment a sequence at positions that are biologically meaningful. We show the advantage of higher-order Markov-model-based segmentation procedures in detecting compositional inhomogeneity in chimeric DNA sequences constructed from genomes of diverse species, and in application to the E. coli K12 genome, boundaries of genomic islands, cryptic prophages, and horizontally acquired regions are accurately identified.
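The core segmentation step referred to above, in its simplest zeroth-order form, scans every split point of a symbolic sequence, computes the Jensen-Shannon divergence between the symbol frequencies of the two halves, and cuts at the maximum. The sketch below applies this to a toy DNA string with an obvious compositional boundary; it does not implement the higher-order Markov extension developed in the paper.

```python
import numpy as np

# Toy sequence: AT-rich segment followed by a GC-rich segment
seq = "ATATATATATAT" * 5 + "GCGCGGCGCCGC" * 5
alphabet = "ACGT"

def freqs(s):
    counts = np.array([s.count(a) for a in alphabet], dtype=float)
    return counts / counts.sum()

def shannon(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def js_divergence(left, right):
    pl, pr = freqs(left), freqs(right)
    wl = len(left) / (len(left) + len(right))
    wr = 1.0 - wl
    return shannon(wl * pl + wr * pr) - wl * shannon(pl) - wr * shannon(pr)

scores = [js_divergence(seq[:i], seq[i:]) for i in range(2, len(seq) - 2)]
best_cut = int(np.argmax(scores)) + 2
print(best_cut, max(scores))        # the cut lands at the compositional boundary
```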
Observations Regarding Use of Advanced CFD Analysis, Sensitivity Analysis, and Design Codes in MDO
NASA Technical Reports Server (NTRS)
Newman, Perry A.; Hou, Gene J. W.; Taylor, Arthur C., III
1996-01-01
Observations regarding the use of advanced computational fluid dynamics (CFD) analysis, sensitivity analysis (SA), and design codes in gradient-based multidisciplinary design optimization (MDO) reflect our perception of the interactions required of CFD and our experience in recent aerodynamic design optimization studies using CFD. Sample results from these latter studies are summarized for conventional optimization (analysis - SA codes) and simultaneous analysis and design optimization (design code) using both Euler and Navier-Stokes flow approximations. The amount of computational resources required for aerodynamic design using CFD via analysis - SA codes is greater than that required for design codes. Thus, an MDO formulation that utilizes the more efficient design codes where possible is desired. However, in the aerovehicle MDO problem, the various disciplines that are involved have different design points in the flight envelope; therefore, CFD analysis - SA codes are required at the aerodynamic 'off design' points. The suggested MDO formulation is a hybrid multilevel optimization procedure that consists of both multipoint CFD analysis - SA codes and multipoint CFD design codes that perform suboptimizations.
Code of Federal Regulations, 2010 CFR
2010-01-01
... authority to order use of procedures for access by potential parties to certain sensitive unclassified... authority to order use of procedures for access by potential parties to certain sensitive unclassified... Commission or the presiding officer. (b) If this part does not prescribe a time limit for an action to be...
[Sensitization to chymopapain in patients treated with chemonucleolysis].
García-Ortega, P; Ramírez Ferreiras, W; Sancho, A; Urías, S; Cisteró, A
1991-03-23
Chemonucleolysis (intradisk administration of chymopapain) is a procedure to treat intervertebral disk hernia. Recently, its use has been questioned due to the development of anaphylactic reactions in patients sensitized to chymopapain. The prevalence of sensitization to chymopapain has been evaluated before and after chemonucleolysis, and the possibility of establishing risk groups through the allergy history has been assessed. 104 consecutive patients who were candidates for chemonucleolysis were evaluated with an allergy questionnaire, cutaneous tests to aeroallergens and to chymopapain, and chymopapain-specific IgE. The two latter tests were repeated one month after chemonucleolysis. Only 2 patients (1.9%) showed evidence of chymopapain sensitization before the procedure. Sixteen patients (16%) were sensitized after chemonucleolysis. None of the possible risk factors evaluated in the allergy questionnaire (atopy, drug allergy, papaya occupational exposure, or use of additives, cosmetics or drugs containing papain) were significantly related to the risk of sensitization to chymopapain. The prevalence of chymopapain sensitization in the study group was low. The allergy questionnaire (atopy, drug allergy, use of papaya, occupational history) did not identify sensitized patients. Cutaneous tests and specific IgE are the best methods to detect chymopapain sensitization. The remarkable rate of sensitization after chemonucleolysis may partially limit the usefulness of the procedure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
Higashi, Tatsuya; Ogawa, Shoujiro
2016-09-01
Sensitive and specific methods for the detection, characterization and quantification of endogenous steroids in body fluids or tissues are necessary for the diagnosis, pathological analysis and treatment of many diseases. Recently, liquid chromatography/electrospray ionization-tandem mass spectrometry (LC/ESI-MS/MS) has been widely used for these purposes due to its specificity and versatility. However, the ESI efficiency and fragmentation behavior of some steroids are poor, which leads to low sensitivity. Chemical derivatization is one of the most effective methods to improve the detection characteristics of steroids in ESI-MS/MS. Against this background, this article reviews the recent advances in chemical derivatization for the trace quantification of steroids in biological samples by LC/ESI-MS/MS. Derivatization in ESI-MS/MS is based on tagging a proton-affinitive or permanently charged moiety onto the target steroid. Introduction or formation of a fragmentable moiety suitable for selected reaction monitoring by the derivatization also enhances the sensitivity. Stable isotope-coded derivatization procedures for steroid analysis are also described. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Petroselli, A.; Grimaldi, S.; Romano, N.
2012-12-01
The Soil Conservation Service Curve Number (SCS-CN) method is a popular rainfall-runoff model widely used to estimate losses and direct runoff from a given rainfall event, but it is not appropriate at sub-daily time resolution. To overcome this drawback, a mixed procedure referred to as CN4GA (Curve Number for Green-Ampt) was recently developed; it includes the Green-Ampt (GA) infiltration model and aims to distribute in time the information provided by the SCS-CN method. The main concept of the proposed mixed procedure is to use the initial abstraction and the total runoff volume given by the SCS-CN method to calibrate the Green-Ampt soil hydraulic conductivity parameter. The procedure is applied here to a real case study, and a sensitivity analysis of the remaining parameters is presented; the results show that the CN4GA approach is an ideal candidate for rainfall excess analysis at sub-daily time resolution, in particular for ungauged basins lacking discharge observations.
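For reference, a minimal Python sketch of the textbook SCS-CN relation that CN4GA builds on (this is the standard formula, not the CN4GA code): the potential maximum retention S is derived from the curve number CN, the initial abstraction is taken as Ia = 0.2 S, and the direct runoff Q follows for a storm depth P, all in millimetres.

def scs_cn_runoff(P, CN, ia_ratio=0.2):
    S = 25400.0 / CN - 254.0          # potential maximum retention (mm)
    Ia = ia_ratio * S                 # initial abstraction (mm)
    if P <= Ia:
        return 0.0                    # no direct runoff before Ia is satisfied
    return (P - Ia) ** 2 / (P - Ia + S)

print(scs_cn_runoff(P=60.0, CN=80))   # about 20 mm of runoff for a 60 mm storm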
Numerical simulation of supersonic inlets using a three-dimensional viscous flow analysis
NASA Technical Reports Server (NTRS)
Anderson, B. H.; Towne, C. E.
1980-01-01
A three dimensional fully viscous computer analysis was evaluated to determine its usefulness in the design of supersonic inlets. This procedure takes advantage of physical approximations to limit the high computer time and storage associated with complete Navier-Stokes solutions. Computed results are presented for a Mach 3.0 supersonic inlet with bleed and a Mach 7.4 hypersonic inlet. Good agreement was obtained between theory and data for both inlets. Results of a mesh sensitivity study are also shown.
NASA Astrophysics Data System (ADS)
Gass, S. I.
1982-05-01
The theoretical and applied state of the art of oil and gas supply models was discussed. The following areas were addressed: the realities of oil and gas supply, prediction of oil and gas production, problems in oil and gas modeling, resource appraisal procedures, forecasting field size and production, investment and production strategies, estimating cost and production schedules for undiscovered fields, production regulations, resource data, sensitivity analysis of forecasts, econometric analysis of resource depletion, oil and gas finding rates, and various models of oil and gas supply.
Howard Evan Canfield; Vicente L. Lopes
2000-01-01
A process-based, simulation model for evaporation, soil water and streamflow (BROOK903) was used to estimate soil moisture change on a semiarid rangeland watershed in southeastern Arizona. A sensitivity analysis was performed to select parameters affecting ET and soil moisture for calibration. Automatic parameter calibration was performed using a procedure based on a...
A practical and sensitive method to assess volatile organic compounds (VOCs) from JP-8 jet fuel in human whole blood was developed by modifying previously established liquid-liquid extraction procedures, optimizing extraction times, solvent volume, specific sample processing te...
2016-11-09
the model does not become a full probabilistic attack graph analysis of the network, whose data requirements are currently unrealistic. The second...flow. – Untrustworthy persons may intentionally try to exfiltrate known sensitive data to external networks. People may also unintentionally leak...section will provide details on the components, procedures, data requirements, and parameters required to instantiate the network porosity model. These
NASA Astrophysics Data System (ADS)
Kim, G.; Jeon, S.
2016-12-01
Fatty acids are one of the important compound classes in the polar organic fraction of ambient aerosols. Among them, short-chain fatty acids play a significant role in atmospheric transformation processes. For short-chain acids, the bottleneck of analysis has been the difficulty of sample preparation due to their high solubility and volatility. To overcome this problem, derivatization of the polar organic fraction with silylation reagents is widely used to increase resolution and sensitivity. Two different derivatization procedures, (1) tert-butyldimethylsilyl (TBDMS) derivatization and (2) headspace solid-phase microextraction (HS-SPME) with in-fiber derivatization, are compared using gas chromatography-mass spectrometry (GC-MS). In the second method, simultaneous derivatization and extraction were performed with a polyacrylate (PA) coated fiber doped with pyrenyldiazomethane (PDAM). We investigated the chromatographic properties and relative sensitivities of the individual short-chain acids under the two derivatization procedures. For method validation, the linearity, recovery and method detection limit (MDL) were compared. The two derivatization methods were also applied to ambient aerosol samples and evaluated with respect to their effectiveness.
da Silva Magalhães, Ticiane; Reis, Boaventura F
2017-09-01
In this work, a multicommuted flow analysis procedure is proposed for the spectrophotometric determination of cobalt in fresh water, employing a downsized, cost-effective instrument setup. The method is based on the catalytic effect of Co(II) on the oxidation of Tiron by hydrogen peroxide in alkaline medium, forming a complex that absorbs radiation at 425 nm. Photometric detection was accomplished using a homemade light-emitting-diode (LED)-based photometer designed to use a flow cell with an optical path length of 100 mm to improve sensitivity. After selecting adequate values for the flow system variables, adherence to the Beer-Lambert-Bouguer law was observed for standard solution concentrations in the range of 0.13-1.5 µg L-1 Co(II). Other useful features were achieved, including a relative standard deviation of 2.0% (n = 11) for a sample with 0.49 µg L-1 Co(II), a detection limit of 0.06 µg L-1 Co(II) (n = 20), an analytical frequency of 42 sample determinations per hour, and waste generation of 1.5 mL per determination.
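As an illustration of the calibration arithmetic behind such a Beer-Lambert-Bouguer linear range, the Python sketch below fits absorbance against Co(II) concentration and derives a detection limit as 3 s/slope (one common convention); the concentrations and absorbances are invented placeholders, not data from the study.

import numpy as np

conc = np.array([0.13, 0.4, 0.8, 1.2, 1.5])               # ug/L Co(II), hypothetical standards
absorb = np.array([0.010, 0.031, 0.062, 0.093, 0.117])     # hypothetical absorbances

slope, intercept = np.polyfit(conc, absorb, 1)             # linear calibration A = slope*c + intercept
residual_sd = np.std(absorb - (slope * conc + intercept), ddof=2)
lod = 3 * residual_sd / slope                               # detection limit estimate

print(f"slope = {slope:.4f} AU per ug/L, LOD = {lod:.3f} ug/L")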
13C labeling analysis of sugars by high resolution-mass spectrometry for metabolic flux analysis.
Acket, Sébastien; Degournay, Anthony; Merlier, Franck; Thomasset, Brigitte
2017-06-15
Metabolic flux analysis is particularly complex in plant cells because of their highly compartmented metabolism. Analysis of free sugars is interesting because it provides data to define fluxes around the hexose, pentose, and triose phosphate pools in different compartments. In this work, we present a method to analyze the isotopomer distribution of free sugars labeled with carbon-13 by liquid chromatography-high resolution mass spectrometry, without a derivatization procedure, adapted for metabolic flux analysis. Our results showed good sensitivity and reproducibility and better accuracy in determining isotopic enrichments of free sugars compared with our previous methods [5, 6]. Copyright © 2017 Elsevier Inc. All rights reserved.
Biosensing Technologies for Mycobacterium tuberculosis Detection: Status and New Developments
Zhou, Lixia; He, Xiaoxiao; He, Dinggeng; Wang, Kemin; Qin, Dilan
2011-01-01
Biosensing technologies promise to improve Mycobacterium tuberculosis (M. tuberculosis) detection and management in clinical diagnosis, food analysis, bioprocess, and environmental monitoring. A variety of portable, rapid, and sensitive biosensors with immediate “on-the-spot” interpretation have been developed for M. tuberculosis detection based on different biological elements recognition systems and basic signal transducer principles. Here, we present a synopsis of current developments of biosensing technologies for M. tuberculosis detection, which are classified on the basis of basic signal transducer principles, including piezoelectric quartz crystal biosensors, electrochemical biosensors, and magnetoelastic biosensors. Special attention is paid to the methods for improving the framework and analytical parameters of the biosensors, including sensitivity and analysis time as well as automation of analysis procedures. Challenges and perspectives of biosensing technologies development for M. tuberculosis detection are also discussed in the final part of this paper. PMID:21437177
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin
2015-04-01
Earth and Environmental Systems (EES) models are essential components of research, development, and decision-making in science and engineering disciplines. With continuous advances in understanding and computing power, such models are becoming more complex with increasingly more factors to be specified (model parameters, forcings, boundary conditions, etc.). To facilitate better understanding of the role and importance of different factors in producing the model responses, the procedure known as 'Sensitivity Analysis' (SA) can be very helpful. Despite the availability of a large body of literature on the development and application of various SA approaches, two issues continue to pose major challenges: (1) Ambiguous Definition of Sensitivity - Different SA methods are based in different philosophies and theoretical definitions of sensitivity, and can result in different, even conflicting, assessments of the underlying sensitivities for a given problem, (2) Computational Cost - The cost of carrying out SA can be large, even excessive, for high-dimensional problems and/or computationally intensive models. In this presentation, we propose a new approach to sensitivity analysis that addresses the dual aspects of 'effectiveness' and 'efficiency'. By effective, we mean achieving an assessment that is both meaningful and clearly reflective of the objective of the analysis (the first challenge above), while by efficiency we mean achieving statistically robust results with minimal computational cost (the second challenge above). Based on this approach, we develop a 'global' sensitivity analysis framework that efficiently generates a newly-defined set of sensitivity indices that characterize a range of important properties of metric 'response surfaces' encountered when performing SA on EES models. Further, we show how this framework embraces, and is consistent with, a spectrum of different concepts regarding 'sensitivity', and that commonly-used SA approaches (e.g., Sobol, Morris, etc.) are actually limiting cases of our approach under specific conditions. Multiple case studies are used to demonstrate the value of the new framework. The results show that the new framework provides a fundamental understanding of the underlying sensitivities for any given problem, while requiring orders of magnitude fewer model runs.
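As a simple point of reference for the variance-based end of this spectrum, the Python sketch below estimates first-order (Sobol-type) indices S_i = Var(E[Y|X_i]) / Var(Y) by binning Monte Carlo samples; it is only an illustrative estimator applied to a toy model, not the framework proposed by the authors.

import numpy as np

def first_order_indices(model, n_factors, n_samples=20000, n_bins=40, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(0, 1, size=(n_samples, n_factors))   # factors assumed uniform on [0, 1]
    Y = model(X)
    var_y = Y.var()
    indices = []
    for i in range(n_factors):
        bins = np.floor(X[:, i] * n_bins).astype(int)
        cond_means = np.array([Y[bins == b].mean() for b in range(n_bins)])
        indices.append(cond_means.var() / var_y)          # Var of conditional means / total Var
    return indices

# Toy model: the response is dominated by the first factor.
toy = lambda X: 4.0 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * X[:, 2]
print([round(s, 2) for s in first_order_indices(toy, 3)])  # roughly [0.98, 0.02, 0.00]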
THE CHANDRA SURVEY OF THE COSMOS FIELD. II. SOURCE DETECTION AND PHOTOMETRY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Puccetti, S.; Vignali, C.; Cappelluti, N.
2009-12-01
The Chandra COSMOS Survey (C-COSMOS) is a large, 1.8 Ms, Chandra program that covers the central contiguous ~0.92 deg^2 of the COSMOS field. C-COSMOS is the result of a complex tiling, with every position being observed in up to six overlapping pointings (four overlapping pointings in most of the central ~0.45 deg^2 area with the best exposure, and two overlapping pointings in most of the surrounding area, covering an additional ~0.47 deg^2). Therefore, the full exploitation of the C-COSMOS data requires a dedicated and accurate analysis focused on three main issues: (1) maximizing the sensitivity when the point-spread function (PSF) changes strongly among different observations of the same source (from ~1 arcsec up to ~10 arcsec half-power radius); (2) resolving close pairs; and (3) obtaining the best source localization and count rate. We present here our treatment of four key analysis items: source detection, localization, photometry, and survey sensitivity. Our final procedure consists of two steps: (1) a wavelet detection algorithm to find source candidates and (2) a maximum likelihood PSF fitting algorithm to evaluate the source count rates and the probability that each source candidate is a fluctuation of the background. We discuss the main characteristics of this procedure, which was the result of detailed comparisons between different detection algorithms and photometry tools, calibrated with extensive and dedicated simulations.
Quantitative image analysis of immunohistochemical stains using a CMYK color model
Pham, Nhu-An; Morrison, Andrew; Schwock, Joerg; Aviel-Ronen, Sarit; Iakovlev, Vladimir; Tsao, Ming-Sound; Ho, James; Hedley, David W
2007-01-01
Background Computer image analysis techniques have decreased effects of observer biases, and increased the sensitivity and the throughput of immunohistochemistry (IHC) as a tissue-based procedure for the evaluation of diseases. Methods We adapted a Cyan/Magenta/Yellow/Key (CMYK) model for automated computer image analysis to quantify IHC stains in hematoxylin counterstained histological sections. Results The spectral characteristics of the chromogens AEC, DAB and NovaRed as well as the counterstain hematoxylin were first determined using CMYK, Red/Green/Blue (RGB), normalized RGB and Hue/Saturation/Lightness (HSL) color models. The contrast of chromogen intensities on a 0–255 scale (24-bit image file) as well as compared to the hematoxylin counterstain was greatest using the Yellow channel of a CMYK color model, suggesting an improved sensitivity for IHC evaluation compared to other color models. An increase in activated STAT3 levels due to growth factor stimulation, quantified using the Yellow channel image analysis was associated with an increase detected by Western blotting. Two clinical image data sets were used to compare the Yellow channel automated method with observer-dependent methods. First, a quantification of DAB-labeled carbonic anhydrase IX hypoxia marker in 414 sections obtained from 138 biopsies of cervical carcinoma showed strong association between Yellow channel and positive color selection results. Second, a linear relationship was also demonstrated between Yellow intensity and visual scoring for NovaRed-labeled epidermal growth factor receptor in 256 non-small cell lung cancer biopsies. Conclusion The Yellow channel image analysis method based on a CMYK color model is independent of observer biases for threshold and positive color selection, applicable to different chromogens, tolerant of hematoxylin, sensitive to small changes in IHC intensity and is applicable to simple automation procedures. These characteristics are advantageous for both basic as well as clinical research in an unbiased, reproducible and high throughput evaluation of IHC intensity. PMID:17326824
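A small Python sketch of the generic RGB-to-CMYK conversion that underlies such a Yellow-channel measurement (a textbook conversion, not the authors' exact pipeline); the pixel values are illustrative only.

import numpy as np

def yellow_channel(rgb):
    """rgb: float array in [0, 1], shape (H, W, 3); returns the CMYK Yellow channel in [0, 1]."""
    k = 1.0 - rgb.max(axis=2)                      # Key (black) component
    denom = np.where(k < 1.0, 1.0 - k, 1.0)        # avoid division by zero for pure black pixels
    y = (1.0 - rgb[..., 2] - k) / denom            # Yellow = (1 - B - K) / (1 - K)
    return np.where(k < 1.0, y, 0.0)

# Brownish DAB-like pixel vs. bluish hematoxylin-like pixel (illustrative values only):
px = np.array([[[0.55, 0.35, 0.20], [0.35, 0.40, 0.65]]])
print(yellow_channel(px))   # high Yellow for the brown pixel, ~0 for the blue pixel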
da Costa, Márcia Gisele Santos; Santos, Marisa da Silva; Sarti, Flávia Mori; Senna, Kátia Marie Simões e.; Tura, Bernardo Rangel; Goulart, Marcelo Correia
2014-01-01
Objectives The study performs a cost-effectiveness analysis of procedures for atrial septal defect occlusion, comparing conventional surgery to a percutaneous septal implant. Methods An analytical decision model was structured with symmetric branches to estimate the cost-effectiveness ratio between the procedures. The decision tree model was based on evidence gathered through a meta-analysis of the literature and validated by a panel of specialists. The lower number of surgical procedures performed for atrial septal defect occlusion at each branch was considered the effectiveness outcome. Direct medical costs and probabilities for each event were inserted in the model using data available from the Brazilian public sector database system and information extracted from the literature review, using a micro-costing technique. Sensitivity analysis included price variations of the percutaneous implant. Results The results obtained from the decision model demonstrated that the percutaneous implant was the more cost-effective option, at a cost of US$8,936.34, with a reduction in the probability of surgery occurrence in 93% of the cases. The probability of atrial septal communication occlusion and the cost of the implant are the determinant factors of the cost-effectiveness ratio. Conclusions The proposed decision model seeks to fill a void in the academic literature and includes the outcomes with the greatest impact on the overall costs of the procedure. Atrial septal defect occlusion using a percutaneous implant reduces the physical and psychological distress to patients relative to conventional surgery, which represents intangible costs in the context of economic evaluation. PMID:25302806
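The expected-cost logic of such a symmetric decision tree can be sketched in a few lines of Python; the probabilities and costs below are invented placeholders rather than the study's values, and the final loop mirrors the kind of one-way price sensitivity analysis described.

def expected_cost(p_occlusion, cost_procedure, cost_rescue_surgery):
    # If the index procedure fails to occlude the defect, a rescue surgery cost is added.
    return cost_procedure + (1.0 - p_occlusion) * cost_rescue_surgery

implant = expected_cost(p_occlusion=0.93, cost_procedure=8000.0, cost_rescue_surgery=10000.0)
surgery = expected_cost(p_occlusion=0.99, cost_procedure=9500.0, cost_rescue_surgery=10000.0)
print(f"implant branch: {implant:.0f}  surgery branch: {surgery:.0f}")

# One-way sensitivity analysis on the implant price (hypothetical range):
for price in (6000.0, 8000.0, 10000.0):
    print(price, expected_cost(0.93, price, 10000.0))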
Validity of diagnoses, procedures, and laboratory data in Japanese administrative data.
Yamana, Hayato; Moriwaki, Mutsuko; Horiguchi, Hiromasa; Kodan, Mariko; Fushimi, Kiyohide; Yasunaga, Hideo
2017-10-01
Validation of recorded data is a prerequisite for studies that utilize administrative databases. The present study evaluated the validity of diagnoses and procedure records in the Japanese Diagnosis Procedure Combination (DPC) data, along with laboratory test results in the newly-introduced Standardized Structured Medical Record Information Exchange (SS-MIX) data. Between November 2015 and February 2016, we conducted chart reviews of 315 patients hospitalized between April 2014 and March 2015 in four middle-sized acute-care hospitals in Shizuoka, Kochi, Fukuoka, and Saga Prefectures and used them as reference standards. The sensitivity and specificity of DPC data in identifying 16 diseases and 10 common procedures were identified. The accuracy of SS-MIX data for 13 laboratory test results was also examined. The specificity of diagnoses in the DPC data exceeded 96%, while the sensitivity was below 50% for seven diseases and variable across diseases. When limited to primary diagnoses, the sensitivity and specificity were 78.9% and 93.2%, respectively. The sensitivity of procedure records exceeded 90% for six procedures, and the specificity exceeded 90% for nine procedures. Agreement between the SS-MIX data and the chart reviews was above 95% for all 13 items. The validity of diagnoses and procedure records in the DPC data and laboratory results in the SS-MIX data was high in general, supporting their use in future studies. Copyright © 2017 The Authors. Production and hosting by Elsevier B.V. All rights reserved.
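The validation arithmetic reduces to a 2x2 table against the chart-review reference standard; the Python sketch below shows the computation with illustrative counts, not the study's data.

def sens_spec(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)   # true positives / all reference-positive cases
    specificity = tn / (tn + fp)   # true negatives / all reference-negative cases
    return sensitivity, specificity

# e.g. a diagnosis coded in 40 of 50 true cases and absent in 260 of 265 non-cases
print(sens_spec(tp=40, fn=10, tn=260, fp=5))   # -> (0.80, ~0.98)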
Validity and consistency assessment of accident analysis methods in the petroleum industry.
Ahmadi, Omran; Mortazavi, Seyed Bagher; Khavanin, Ali; Mokarami, Hamidreza
2017-11-17
Accident analysis is the main aspect of accident investigation; it connects different causes in a procedural way. It is therefore important to use valid and reliable methods for investigating the different causal factors of accidents, especially the noteworthy ones. This study aimed to assess the accuracy (sensitivity index [SI]) and consistency of the six most commonly used accident analysis methods in the petroleum industry. To evaluate the methods, two real case studies (a process safety accident and a personal accident) from the petroleum industry were analyzed by 10 assessors, and the accuracy and consistency of the methods were then evaluated. The assessors were trained in a workshop on accident analysis methods. The systematic cause analysis technique and the bowtie method gained the greatest SI scores for the personal and process safety accidents, respectively. The best average consistency results for a single method (based on 10 independent assessors) were in the region of 70%. This study confirmed that the application of methods with pre-defined causes and a logic tree could enhance the sensitivity and consistency of accident analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harry, T; Yaddanapudi, S; Mutic, S
Purpose: New techniques and materials have recently been developed to expedite the conventional linac acceptance testing procedure (ATP). The new ATP method uses the electronic portal imaging device (EPID) for data collection and is presented separately. This new procedure is meant to be more efficient than conventional methods. While not yet clinically implemented, a prospective risk assessment is warranted for any new technique. The purpose of this work is to investigate the risks and establish the pros and cons of the conventional approach and the new ATP method. Methods: ATP tests that were modified and performed with the EPID were analyzed. Five domain experts (medical physicists) comprised the core analysis team. Ranking scales were adopted from previous publications related to TG 100. The number of failure pathways for each ATP test procedure was compared, as was the number of risk priority numbers (RPNs) greater than 100. Results: There were fewer failure pathways with the new ATP than with the conventional one, 262 and 556, respectively. There were fewer RPNs > 100 in the new ATP than in the conventional one, 41 and 115. Failure pathways and RPNs > 100 for individual ATP tests were on average 2 and 3.5 times higher in the conventional ATP than in the new one, respectively. The pixel sensitivity map of the EPID was identified as a key hazard to the new ATP procedure, with an RPN of 288 for verifying beam parameters. Conclusion: The significant decrease in failure pathways and RPNs > 100 for the new ATP mitigates the possibility of a catastrophic error occurring. The pixel sensitivity map determining the response and inherent characteristics of the EPID is crucial, as all data and hence results depend on that process. Grant from Varian Medical Systems Inc.
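The risk priority numbers referred to above follow the usual FMEA-style product of rankings; the Python sketch below shows the arithmetic with a hypothetical 1-10 scoring whose product happens to equal the reported RPN of 288 (the individual scores are invented, not taken from the analysis).

def rpn(occurrence, severity, detectability):
    # Risk priority number on the adopted ranking scales: O x S x D.
    return occurrence * severity * detectability

# Hypothetical scoring of the EPID pixel-sensitivity-map hazard:
print(rpn(occurrence=6, severity=8, detectability=6))   # 288, flagged because it exceeds 100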
Dhooria, Sahajal; Aggarwal, Ashutosh N; Gupta, Dheeraj; Behera, Digambar; Agarwal, Ritesh
2015-07-01
The use of endoscopic ultrasound with bronchoscope-guided fine-needle aspiration (EUS-B-FNA) has been described in the evaluation of mediastinal lymphadenopathy. Herein, we conduct a meta-analysis to estimate the overall diagnostic yield and safety of EUS-B-FNA combined with endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA), in the diagnosis of mediastinal lymphadenopathy. The PubMed and EmBase databases were searched for studies reporting the outcomes of EUS-B-FNA in diagnosis of mediastinal lymphadenopathy. The study quality was assessed using the QualSyst tool. The yield of EBUS-TBNA alone and the combined procedure (EBUS-TBNA and EUS-B-FNA) were analyzed by calculating the sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio for each study, and pooling the study results using a random effects model. Heterogeneity and publication bias were assessed for individual outcomes. The additional diagnostic gain of EUS-B-FNA over EBUS-TBNA was calculated using proportion meta-analysis. Our search yielded 10 studies (1,080 subjects with mediastinal lymphadenopathy). The sensitivity of the combined procedure was significantly higher than EBUS-TBNA alone (91% vs 80%, P = .004), in staging of lung cancer (4 studies, 465 subjects). The additional diagnostic gain of EUS-B-FNA over EBUS-TBNA was 7.6% in the diagnosis of mediastinal adenopathy. No serious complication of EUS-B-FNA procedure was reported. Clinical and statistical heterogeneity was present without any evidence of publication bias. Combining EBUS-TBNA and EUS-B-FNA is an effective and safe method, superior to EBUS-TBNA alone, in the diagnosis of mediastinal lymphadenopathy. Good quality randomized controlled trials are required to confirm the results of this systematic review. Copyright © 2015 by Daedalus Enterprises.
Krolewiecki, Alejandro J; Koukounari, Artemis; Romano, Miryam; Caro, Reynaldo N; Scott, Alan L; Fleitas, Pedro; Cimino, Ruben; Shiff, Clive J
2018-06-01
For epidemiological work with soil-transmitted helminths, the recommended diagnostic approaches are to examine fecal samples for microscopic evidence of the parasite. In addition to several logistical and processing issues, traditional diagnostic approaches have been shown to lack the sensitivity required to reliably identify patients harboring low-level infections, such as those associated with effective mass drug intervention programs. In this context, there is a need to rethink the approaches used for helminth diagnostics. Serological methods are now in use; however, these tests are indirect and depend on individual immune responses, exposure patterns and the nature of the antigen. However, it has been demonstrated that cell-free DNA from pathogens and cancers can be readily detected in patients' urine, which can be collected in the field, filtered in situ and processed later for analysis. In the work presented here, we employ three diagnostic procedures, stool examination, serology (NIE-ELISA) and PCR-based amplification of parasite transrenal DNA from urine, to determine their relative utility in the diagnosis of S. stercoralis infections from 359 field samples from an endemic area of Argentina. Bayesian latent class analysis was used to assess the relative performance of the three diagnostic procedures. The results underscore the low sensitivity of stool examination and support the idea that the use of serology combined with parasite transrenal DNA detection may be a useful strategy for sensitive and specific detection of low-level strongyloidiasis.
Qi, Ping; Lin, Zhihao; Li, Jiaxu; Wang, ChengLong; Meng, WeiWei; Hong, Hong; Zhang, Xuewu
2014-12-01
In this work, a simple, rapid and sensitive analytical method for the determination of rhodamine B in chili-containing foodstuffs is described. The dye is extracted from samples with methanol and analysed without any further cleanup procedure by high-performance liquid chromatography (HPLC) coupled to fluorescence detection (FLD). The influence of matrix fluorescent compounds (capsaicin and dihydrocapsaicin) on the analysis was overcome by optimisation of the mobile-phase composition. The limit of detection (LOD) and limit of quantification (LOQ) were 3.7 and 10 μg/kg, respectively. Validation data show good repeatability and within-laboratory reproducibility, with relative standard deviations <10%. The overall recoveries are in the range of 98-103% in chili powder and 87-100% in chili oil, depending on the concentration of rhodamine B in the foodstuffs. This method is suitable for the routine analysis of rhodamine B due to its sensitivity, simplicity, and reasonable time and cost. Copyright © 2014 Elsevier Ltd. All rights reserved.
Determination of 232Th in urine by ICP-MS for individual monitoring purposes.
Baglan, N; Cossonnet, C; Ritt, J
2001-07-01
Thorium occurs naturally in various ores used for industrial purposes and has numerous applications. This paper sets out to investigate urine analysis as a suitable monitoring approach for workers potentially exposed to thorium. Due to its biokinetic behavior and its low solubility, urinary concentrations are generally very low, therefore requiring highly sensitive analytical methods. An analytical procedure has been developed for detecting 232Th concentrations below 1 mBq L-1 quickly and easily. Given the long half-life (1.41 x 10^10 y) of 232Th, the potential of a procedure based on urine sample dilution and ICP-MS (inductively coupled plasma-mass spectrometry) measurement was investigated first. Two dilution factors were chosen: 100, which is more suitable for long-term measurement trials, and 20, which increases sensitivity. It has been shown that a 100-fold dilution can be used to measure concentrations below 1 mBq L-1, whereas a 20-fold dilution can reach concentrations below 0.06 mBq L-1. Then, given the limitations of the procedure based on urine dilution, the suitable field of application of the different procedures (100-fold and 20-fold dilution, and also a chemical purification followed by an ICP-MS measurement) was determined in relation to the monitoring objectives.
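A short worked example (Python) of why mass spectrometry suits such a long-lived nuclide: from the quoted half-life, the specific activity of 232Th is only a few kBq per gram, so 1 mBq corresponds to roughly a quarter of a microgram of thorium, a mass concentration well within ICP-MS reach. The constants used are standard physical values, not taken from the paper.

import math

HALF_LIFE_S = 1.41e10 * 3.156e7          # half-life of 232Th in seconds
LAMBDA = math.log(2) / HALF_LIFE_S       # decay constant (1/s)
ATOMS_PER_G = 6.022e23 / 232.0           # atoms of 232Th per gram

specific_activity = LAMBDA * ATOMS_PER_G            # Bq per gram
mass_per_mBq_ug = 1e-3 / specific_activity * 1e6    # micrograms corresponding to 1 mBq

print(f"specific activity ≈ {specific_activity:.0f} Bq/g")
print(f"1 mBq of 232Th ≈ {mass_per_mBq_ug:.2f} µg")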
Yu, Hye-Weon; Jang, Am; Kim, Lan Hee; Kim, Sung-Jo; Kim, In S
2011-09-15
Due to the increased occurrence of cyanobacterial blooms and their toxins in drinking water sources, effective management based on a sensitive and rapid analytical method is in high demand for security of safe water sources and environmental human health. Here, a competitive fluorescence immunoassay of microcystin-LR (MCYST-LR) is developed in an attempt to improve the sensitivity, analysis time, and ease-of-manipulation of analysis. To serve this aim, a bead-based suspension assay was introduced based on two major sensing elements: an antibody-conjugated quantum dot (QD) detection probe and an antigen-immobilized magnetic bead (MB) competitor. The assay was composed of three steps: the competitive immunological reaction of QD detection probes against analytes and MB competitors, magnetic separation and washing, and the optical signal generation of QDs. The fluorescence intensity was found to be inversely proportional to the MCYST-LR concentration. Under optimized conditions, the proposed assay performed well for the identification and quantitative analysis of MCYST-LR (within 30 min in the range of 0.42-25 μg/L, with a limit of detection of 0.03 μg/L). It is thus expected that this enhanced assay can contribute both to the sensitive and rapid diagnosis of cyanotoxin risk in drinking water and effective management procedures.
Liwarska-Bizukojc, Ewa; Biernacki, Rafal
2010-10-01
In order to simulate biological wastewater treatment processes, data concerning wastewater and sludge composition, process kinetics and stoichiometry are required. Selection of the most sensitive parameters is an important step of model calibration. The aim of this work is to verify the predictability of the activated sludge model implemented in the BioWin software and to select its most influential kinetic and stoichiometric parameters with the help of a sensitivity analysis approach. Two different measures of sensitivity are applied: the normalised sensitivity coefficient (S_ij) and the mean square sensitivity measure (delta_j^msqr). It turns out that 17 kinetic and stoichiometric parameters of the BioWin activated sludge (AS) model can be regarded as influential on the basis of the S_ij calculations. Half of the influential parameters are associated with the growth and decay of phosphorus accumulating organisms (PAOs). The identification of the set of the most sensitive parameters should support the users of this model and initiate the elaboration of determination procedures for those parameters for which this has not yet been done. Copyright 2010 Elsevier Ltd. All rights reserved.
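A generic finite-difference sketch (Python) of a normalised sensitivity coefficient of the form S_ij = (dy_i/dx_j)(x_j/y_i), the kind of measure used to rank parameters here; the toy model, parameter values and step size are illustrative assumptions, not the BioWin implementation.

def normalised_sensitivity(model, params, j, output_index=0, rel_step=0.01):
    base = model(params)[output_index]
    perturbed = list(params)
    perturbed[j] = params[j] * (1.0 + rel_step)          # perturb parameter j by 1%
    changed = model(perturbed)[output_index]
    dydx = (changed - base) / (params[j] * rel_step)     # finite-difference derivative
    return dydx * params[j] / base                       # scale to a dimensionless coefficient

# Toy model: one output driven by a "growth rate" and a "half-saturation" parameter (illustrative only).
toy_model = lambda p: [p[1] / (p[0] * 10.0)]
print(normalised_sensitivity(toy_model, [0.5, 2.0], j=0))   # about -1: output inversely proportional to p[0]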
Barboni, Mirella Telles Salgueiro; Feitosa-Santana, Claudia; Barreto Junior, Jackson; Lago, Marcos; Bechara, Samir Jacob; Alves, Milton Ruiz; Ventura, Dora Fix
2013-10-01
The present study aimed to compare postoperative contrast sensitivity functions between wavefront-guided LASIK eyes and their contralateral wavefront-guided PRK eyes. The participants were 11 healthy subjects (mean age = 32.4 ± 6.2 years) who had myopic astigmatism. The spatial contrast sensitivity functions were measured before and three times after the surgery. Psycho and a Cambridge graphics board (VSG 2/4) were used to measure luminance, red-green, and blue-yellow spatial contrast sensitivity functions (from 0.85 to 13.1 cycles/degree). Longitudinal analysis and comparison between surgeries were performed. There was no significant contrast sensitivity change during the one-year follow-up measurements for either LASIK or PRK eyes. The comparison between procedures showed no differences at 12 months postoperatively. The present data showed similar contrast sensitivities during one-year follow-up of wavefront-guided refractive surgeries. Moreover, one-year postoperative data showed no differences in the effects of either wavefront-guided LASIK or wavefront-guided PRK on the luminance and chromatic spatial contrast sensitivity functions.
Recent development of electrochemiluminescence sensors for food analysis.
Hao, Nan; Wang, Kun
2016-10-01
Food quality and safety are closely related to human health. In the face of unceasing food safety incidents, various analytical techniques, such as mass spectrometry, chromatography, spectroscopy, and electrochemistry, have been applied in food analysis. High sensitivity usually requires expensive instruments and complicated procedures. Although these modern analytical techniques are sensitive enough to ensure food safety, sometimes their applications are limited because of the cost, usability, and speed of analysis. Electrochemiluminescence (ECL) is a powerful analytical technique that is attracting more and more attention because of its outstanding performance. In this review, the mechanisms of ECL and common ECL luminophores are briefly introduced. Then an overall review of the principles and applications of ECL sensors for food analysis is provided. ECL can be flexibly combined with various separation techniques. Novel materials (e.g., various nanomaterials) and strategies (e.g., immunoassay, aptasensors, and microfluidics) have been progressively introduced into the design of ECL sensors. By illustrating some selected representative works, we summarize the state of the art in the development of ECL sensors for toxins, heavy metals, pesticides, residual drugs, illegal additives, viruses, and bacteria. Compared with other methods, ECL can provide rapid, low-cost, and sensitive detection for various food contaminants in complex matrices. However, there are also some limitations and challenges. Improvements suited to the characteristics of food analysis are still necessary.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brickstad, B.; Bergman, M.
A computerized procedure has been developed that predicts the growth of an initial circumferential surface crack through a pipe and further on to failure. The crack growth mechanism can be either fatigue or stress corrosion. Consideration is taken of complex crack shapes and, for the through-wall cracks, crack opening areas and leak rates are also calculated. The procedure is based on a large number of three-dimensional finite element calculations of cracked pipes. The results from these calculations are stored in a database from which the PC program, denoted LBBPIPE, reads all necessary information. In this paper, a sensitivity analysis is presented for cracked pipes subjected to both stress corrosion and vibration fatigue.
Differentiation of Leishmania species by FT-IR spectroscopy
NASA Astrophysics Data System (ADS)
Aguiar, Josafá C.; Mittmann, Josane; Ferreira, Isabelle; Ferreira-Strixino, Juliana; Raniero, Leandro
2015-05-01
Leishmaniasis is a parasitic infectious disease caused by protozoa that belong to the genus Leishmania. It is transmitted by the bite of an infected female sand fly. The disease is endemic in 88 countries (Desjeux, 2001) [1] (16 developed countries and 72 developing countries) on four continents. In Brazil, epidemiological data show the disease is present in all Brazilian regions, with the highest incidences in the North and Northeast. There are several methods used to diagnose leishmaniasis, but these procedures have many limitations: they are time consuming, have low sensitivity, and are expensive. In this context, Fourier transform infrared spectroscopy (FT-IR) analysis has the potential to provide rapid results and may be adapted to a clinical test with high sensitivity and specificity. In this work, FT-IR was used as a tool to investigate promastigotes of the Leishmania amazonensis, Leishmania chagasi, and Leishmania major species. The spectra were analyzed by cluster analysis and a deconvolution procedure based on the spectra's second derivatives. Results: cluster analysis found four specific regions that are able to identify the Leishmania species. The dendrogram representation clearly indicates the heterogeneity among Leishmania species. The band deconvolution, done by curve fitting in these regions, quantitatively differentiated the polysaccharides, amide III, phospholipids, proteins, and nucleic acids. L. chagasi and L. major showed a greater biochemical similarity and have three bands that were not registered in L. amazonensis. L. amazonensis presented three specific bands that were not recorded in the other two species. It is evident that the FT-IR method is an indispensable tool to discriminate these parasites. The high sensitivity and specificity of this technique opens up possibilities for further studies on the characterization of other microorganisms.
Delaloge, Suzette; Bonastre, Julia; Borget, Isabelle; Garbay, Jean-Rémi; Fontenay, Rachel; Boinon, Diane; Saghatchian, Mahasti; Mathieu, Marie-Christine; Mazouni, Chafika; Rivera, Sofia; Uzan, Catherine; André, Fabrice; Dromain, Clarisse; Boyer, Bruno; Pistilli, Barbara; Azoulay, Sandy; Rimareix, Françoise; Bayou, El-Hadi; Sarfati, Benjamin; Caron, Hélène; Ghouadni, Amal; Leymarie, Nicolas; Canale, Sandra; Mons, Muriel; Arfi-Rouche, Julia; Arnedos, Monica; Suciu, Voichita; Vielh, Philippe; Balleyguier, Corinne
2016-10-01
Rapid diagnosis is a key issue in modern oncology, for which one-stop breast clinics are a model. We aimed to assess the diagnosis accuracy and procedure costs of a large-scale one-stop breast clinic. A total of 10,602 individuals with suspect breast lesions attended the Gustave Roussy's regional one-stop breast clinic between 2004 and 2012. The multidisciplinary clinic uses multimodal imaging together with ultrasonography-guided fine needle aspiration for masses and ultrasonography-guided and stereotactic biopsies as needed. Diagnostic accuracy was assessed by comparing one-stop diagnosis to the consolidated diagnosis obtained after surgery or biopsy or long-term monitoring. The medical cost per patient of the care pathway was assessed from patient-level data collected prospectively. Sixty-nine percent of the patients had masses, while 31% had micro-calcifications or other non-mass lesions. In 75% of the cases (87% of masses), an exact diagnosis could be given on the same day. In the base-case analysis (i.e. considering only benign and malignant lesions at one-stop and at consolidated diagnoses), the sensitivity of the one-stop clinic was 98.4%, specificity 99.8%, positive and negative predictive values 99.7% and 99.0%. In the sensitivity analysis (reclassification of suspect, atypical and undetermined lesions), diagnostic sensitivity varied from 90.3% to 98.5% and specificity varied from 94.3% to 99.8%. The mean medical cost per patient of one-stop diagnostic procedure was €420. One-stop breast clinic can provide timely and cost-efficient delivery of highly accurate diagnoses and serve as models of care for multiple settings, including rapid screening-linked diagnosis. Copyright © 2016 Elsevier Ltd. All rights reserved.
Suh, Hae Sun; Song, Hyun Jin; Jang, Eun Jin; Kim, Jung-Sun; Choi, Donghoon; Lee, Sang Moo
2013-07-01
The goal of this study was to perform an economic analysis of a primary stenting with drug-eluting stents (DES) compared with bare-metal stents (BMS) in patients with acute myocardial infarction (AMI) admitted through an emergency room (ER) visit in Korea using population-based data. We employed a cost-minimization method using a decision analytic model with a two-year time period. Model probabilities and costs were obtained from a published systematic review and population-based data from which a retrospective database analysis of the national reimbursement database of Health Insurance Review and Assessment covering 2006 through 2010 was performed. Uncertainty was evaluated using one-way sensitivity analyses and probabilistic sensitivity analyses. Among 513 979 cases with AMI during 2007 and 2008, 24 742 cases underwent stenting procedures and 20 320 patients admitted through an ER visit with primary stenting were identified in the base model. The transition probabilities of DES-to-DES, DES-to-BMS, DES-to-coronary artery bypass graft, and DES-to-balloon were 59.7%, 0.6%, 4.3%, and 35.3%, respectively, among these patients. The average two-year costs of DES and BMS in 2011 Korean won were 11 065 528 won/person and 9 647 647 won/person, respectively. DES resulted in higher costs than BMS by 1 417 882 won/person. The model was highly sensitive to the probability and costs of having no revascularization. Primary stenting with BMS for AMI with an ER visit was shown to be a cost-saving procedure compared with DES in Korea. Caution is needed when applying this finding to patients with a higher level of severity in health status.
Whitty, Jennifer A; Crosland, Paul; Hewson, Kaye; Narula, Rajan; Nathan, Timothy R; Campbell, Peter A; Keller, Andrew; Scuffham, Paul A
2014-03-01
To compare the costs of photoselective vaporisation (PVP) and transurethral resection of the prostate (TURP) for management of symptomatic benign prostatic hyperplasia (BPH) from the perspective of a Queensland public hospital provider. A decision-analytic model was used to compare the costs of PVP and TURP. Cost inputs were sourced from an audit of patients undergoing PVP or TURP across three hospitals. The probability of re-intervention was obtained from secondary literature sources. Probabilistic and multi-way sensitivity analyses were used to account for uncertainty and test the impact of varying key assumptions. In the base case analysis, which included equipment, training and re-intervention costs, PVP was AU$ 739 (95% credible interval [CrI] -12 187 to 14 516) more costly per patient than TURP. The estimate was most sensitive to changes in procedural costs, fibre costs and the probability of re-intervention. Sensitivity analyses based on data from the most favourable site or excluding equipment and training costs reduced the point estimate to favour PVP (incremental cost AU$ -684, 95% CrI -8319 to 5796 and AU$ -100, 95% CrI -13 026 to 13 678, respectively). However, CrIs were wide for all analyses. In this cost minimisation analysis, there was no significant cost difference between PVP and TURP, after accounting for equipment, training and re-intervention costs. However, PVP was associated with a shorter length of stay and lower procedural costs during audit, indicating PVP potentially provides comparatively good value for money once the technology is established. © 2013 The Authors. BJU International © 2013 BJU International.
Grasso, Marina; Boon, Elles M.J.; Filipovic-Sadic, Stela; van Bunderen, Patrick A.; Gennaro, Elena; Cao, Ru; Latham, Gary J.; Hadd, Andrew G.; Coviello, Domenico A.
2015-01-01
Fragile X syndrome and associated disorders are characterized by the number of CGG repeats and methylation status of the FMR1 gene, for which Southern blot (SB) analysis has historically been required. This study describes a simple PCR-only workflow (mPCR) to replace SB analysis that incorporates novel procedural controls, treatment of the DNA in separate control and methylation-sensitive restriction endonuclease reactions, amplification with labeled primers, and two-color amplicon sizing by capillary electrophoresis. mPCR was evaluated in two independent laboratories with 76 residual clinical samples that represented typical and challenging fragile X alleles in both males and females. mPCR enabled superior size resolution and analytical sensitivity for size and methylation mosaicism compared to SB. Full mutation mosaicism was detected down to 1% in a background of 99% normal allele with 50- to 100-fold less DNA than required for SB. A low level of full mutation mosaicism in one sample was detected using mPCR but not observed using SB. Overall, the sensitivity for detection of full mutation alleles was 100% (95% CI: 89%–100%) with an accuracy of 99% (95% CI: 93%–100%). mPCR analysis of DNA from individuals with Klinefelter and Turner syndromes, and DNA from sperm and blood, was consistent with SB. As such, mPCR enables accurate, sensitive, and standardized methods of FMR1 analysis that can harmonize results across different laboratories. PMID:24177047
Salmonella testing of pooled pre-enrichment broth cultures for screening multiple food samples.
Price, W R; Olsen, R A; Hunter, J E
1972-04-01
A method has been described for testing multiple food samples for Salmonella without loss in sensitivity. The method pools multiple pre-enrichment broth cultures into single enrichment broths. The subsequent stages of the Salmonella analysis are not altered. The method was found applicable to several dry food materials including nonfat dry milk, dried egg albumin, cocoa, cottonseed flour, wheat flour, and shredded coconut. As many as 25 pre-enrichment broth cultures were pooled without apparent loss in the sensitivity of Salmonella detection as compared to individual sample analysis. The procedure offers a simple, yet effective, way to increase sample capacity in the Salmonella testing of foods, particularly where a large proportion of samples ordinarily is negative. It also permits small portions of pre-enrichment broth cultures to be retained for subsequent individual analysis if positive tests are found. Salmonella testing of pooled pre-enrichment broths provides increased consumer protection for a given amount of analytical effort as compared to individual sample analysis.
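The economics of pooling can be sketched with the classic two-stage (Dorfman) pooling formula: one pooled analysis per pool plus individual retests whenever a pool is positive. The Python lines below are a back-of-the-envelope illustration under that assumption, not a calculation from the paper.

def expected_tests_per_sample(pool_size, prevalence):
    p_pool_positive = 1.0 - (1.0 - prevalence) ** pool_size   # chance at least one member is positive
    return 1.0 / pool_size + p_pool_positive                  # pooled test share + expected retests

for k in (5, 10, 25):
    print(k, round(expected_tests_per_sample(k, prevalence=0.01), 3))
# With 1% prevalence, pools of 25 need only about 0.26 analyses per sample instead of 1.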
Rapid microfluidic analysis of a Y-STR multiplex for screening of forensic samples.
Gibson-Daw, Georgiana; Albani, Patricia; Gassmann, Marcus; McCord, Bruce
2017-02-01
In this paper, we demonstrate a rapid analysis procedure for use with a small set of rapidly mutating Y-chromosomal short tandem repeat (Y-STR) loci that combines both rapid polymerase chain reaction (PCR) and microfluidic separation elements. The procedure involves a high-speed polymerase and a rapid cycling protocol to permit PCR amplification in 16 min. The resultant amplified sample is next analysed using a short 1.8-cm microfluidic electrophoresis system that permits a four-locus Y-STR genotype to be produced in 80 s. The entire procedure takes less than 25 min from sample collection to result. This paper describes the rapid amplification protocol as well as studies of the reproducibility and sensitivity of the procedure and its optimisation. The amplification process utilises a small high-speed thermocycler, microfluidic device and compact laptop, making it portable and potentially useful for rapid, inexpensive on-site genotyping. The four loci used for the multiplex were selected due to their rapid mutation rates and should prove useful in preliminary screening of samples and suspects. Overall, this technique provides a method for rapid sample screening of suspect and crime scene samples in forensic casework.
NASA Technical Reports Server (NTRS)
1992-01-01
The papers presented at the symposium cover aerodynamics, design applications, propulsion systems, high-speed flight, structures, controls, sensitivity analysis, optimization algorithms, and space structures applications. Other topics include helicopter rotor design, artificial intelligence/neural nets, and computational aspects of optimization. Papers are included on flutter calculations for a system with interacting nonlinearities, optimization in solid rocket booster application, improving the efficiency of aerodynamic shape optimization procedures, nonlinear control theory, and probabilistic structural analysis of space truss structures for nonuniform thermal environmental effects.
NASA Astrophysics Data System (ADS)
Tsuchiya, Yuichiro; Kodera, Yoshie
2006-03-01
In the picture archiving and communication system (PACS) environment, it is important that all images be stored in the correct location. However, if information such as the patient's name or identification number has been entered incorrectly, it is difficult to notice the error. The present study was performed to develop a system that automatically collates patients for dynamic radiographic examinations by means of a kinetic analysis, and to evaluate the performance of the system. Dynamic chest radiographs during respiration were obtained using a modified flat-panel detector system. The computer algorithm developed in this study consisted of two main procedures: kinetic map image processing and collation processing. Kinetic map processing is a new algorithm for visualizing movement in dynamic radiography; it performs direction classification of optical flows followed by an intensity-density transformation. Collation processing consisted of analysis with an artificial neural network (ANN) and discrimination based on Mahalanobis' generalized distance; these procedures were performed to evaluate the similarity of examination pairs from the same person. Finally, we investigated the performance of our system using radiographs of eight healthy volunteers. The performance was expressed as sensitivity and specificity, which were 100% and 100%, respectively. This result indicates that our system has excellent performance for recognition of a patient. Our system will be useful in PACS management for dynamic chest radiography.
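For reference, Mahalanobis' generalized distance used in such a collation step has a simple closed form; the Python sketch below applies the textbook formula to an invented two-dimensional feature vector and covariance matrix, and is not the authors' code.

import numpy as np

def mahalanobis(x, mu, cov):
    # Distance of feature vector x from a class with mean mu and covariance cov.
    diff = np.asarray(x) - np.asarray(mu)
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

mu = np.array([0.0, 0.0])
cov = np.array([[1.0, 0.3], [0.3, 2.0]])
print(mahalanobis([1.0, 1.5], mu, cov))   # a small distance suggests the same patient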
NASA Astrophysics Data System (ADS)
Hassan, Wafaa S.; El-Henawee, Magda M.; Gouda, Ayman A.
2008-01-01
Two rapid, simple and sensitive extractive spectrophotometric methods have been developed for the determination of three histamine H1-antagonist drugs, i.e., chlorphenoxamine hydrochloride (CPX), diphenhydramine hydrochloride (DPH) and clemastine (CMT), in bulk and in their pharmaceutical formulations. The first method depends upon the reaction of molybdenum(V) thiocyanate ions (Method A) with the cited drugs to form stable ion-pair complexes which are extractable with methylene chloride; the orange-red complex was determined colorimetrically at λmax 470 nm. The second method is based on the formation of an ion-association complex with alizarin red S as the chromogenic reagent in acidic medium (Method B), which is extracted into chloroform. The complexes have maximum absorbances at 425 and 426 nm for (DPH or CMT) and CPX, respectively. Regression analysis of Beer-Lambert plots showed a good correlation in the concentration ranges of 5.0-40 and 5-70 μg mL-1 for molybdenum(V) thiocyanate (Method A) and alizarin red S (Method B), respectively. For more accurate analysis, Ringbom optimum concentration ranges were calculated. The molar absorptivity, Sandell sensitivity, detection and quantification limits were calculated. Application of the procedures to the analysis of various pharmaceutical preparations gave reproducible and accurate results. Further, the validity of the procedures was confirmed by applying the standard addition technique; the results obtained were in good agreement with those obtained by the official method.
Wright, Jonathan L; Wessells, Hunter; Nathens, Avery B; Hollingworth, Will
2006-05-01
Direct vision internal urethrotomy (DVIU) and urethroplasty are the primary methods of managing urethral stricture disease. Using decision analysis, we determine the cost-effectiveness of different management strategies for short, bulbar urethral strictures 1 to 2 cm in length. A decision tree was constructed, with the number of planned possible DVIUs before attempting urethroplasty defined for each primary branch point. Success rates were obtained from published reports. Costs were estimated from a societal perspective and included the costs of the procedures and office visits and lost wages from convalescence. Sensitivity analyses were conducted, varying the success rates of the procedures and cost estimates. The most cost-effective approach was one DVIU before urethroplasty. The incremental cost of performing a second DVIU before attempting urethroplasty was $141,962 for each additional successfully voiding patient. In the sensitivity analysis, urethroplasty as the primary therapy was cost-effective only when the expected success rate of the first DVIU was less than 35%. The most cost-effective strategy for the management of short, bulbar urethral strictures is to reserve urethroplasty for patients in whom a single endoscopic attempt fails. For longer strictures for which the success rate of DVIU is expected to be less than 35%, urethroplasty as primary therapy is cost-effective. Future prospective, multicenter studies of DVIU and urethroplasty outcomes would help enhance the accuracy of our model.
Systems Architectures for a Tactical Naval Command and Control System
2009-03-01
Glossary excerpt: TST, Time-sensitive Targeting; TTP, Tactics, Techniques, and Procedures; WTP, Weapons-target pairing. Procedure excerpt: engagement options are generated through weapon-target pairings (WTPs) and presented to the OTC, who conducts a risk assessment of the engagement options.
NASA Astrophysics Data System (ADS)
Gan, Y.; Liang, X. Z.; Duan, Q.; Xu, J.; Zhao, P.; Hong, Y.
2017-12-01
The uncertainties associated with the parameters of a hydrological model need to be quantified and reduced for it to be useful for operational hydrological forecasting and decision support. An uncertainty quantification framework is presented to facilitate practical assessment and reduction of model parametric uncertainties. A case study, using the distributed hydrological model CREST for daily streamflow simulation over ten watersheds during the period 2008-2010, was used to demonstrate the performance of this new framework. Model behaviors across watersheds were analyzed by a two-stage stepwise sensitivity analysis procedure, using the LH-OAT method to screen out insensitive parameters, followed by MARS-based Sobol' sensitivity indices to quantify each parameter's contribution to the response variance through its first-order and higher-order effects. Pareto optimal sets of the influential parameters were then found by an adaptive surrogate-based multi-objective optimization procedure, using a MARS model to approximate the parameter-response relationship and the SCE-UA algorithm to search the optimal parameter sets of the adaptively updated surrogate model. The final optimal parameter sets were validated against the daily streamflow simulation of the same watersheds during the period 2011-2012. The stepwise sensitivity analysis procedure efficiently reduced the number of parameters that need to be calibrated from twelve to seven, which limits the dimensionality of the calibration problem and enhances the efficiency of parameter calibration. The adaptive MARS-based multi-objective calibration provided satisfactory solutions for reproducing the observed streamflow in all watersheds. The final optimal solutions showed significant improvement over the default solutions, with about 65-90% reduction in 1-NSE and 60-95% reduction in |RB|. The validation exercise indicated a large improvement in model performance, with about 40-85% reduction in 1-NSE and 35-90% reduction in |RB|. Overall, this uncertainty quantification framework is robust, effective and efficient for parametric uncertainty analysis, and its results provide useful information for understanding model behaviors and improving model simulations.
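As a minimal sketch of the variance-based stage of such an analysis (not the CREST model, the LH-OAT screening, or the MARS surrogate themselves), first-order Sobol' indices can be estimated with a generic sampling estimator; the toy response function, the three parameter names, and their bounds below are assumptions for illustration.

import numpy as np

def first_order_sobol(model, bounds, n=4096, seed=0):
    # Estimate first-order Sobol' indices for model(X) with inputs uniform in bounds.
    # model: callable taking an (n, d) array and returning n responses
    # bounds: list of (low, high) pairs, one per parameter
    rng = np.random.default_rng(seed)
    d = len(bounds)
    lo = np.array([b[0] for b in bounds]); hi = np.array([b[1] for b in bounds])
    A = lo + (hi - lo) * rng.random((n, d))
    B = lo + (hi - lo) * rng.random((n, d))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy(); ABi[:, i] = B[:, i]        # A with column i taken from B
        S[i] = np.mean(fB * (model(ABi) - fA)) / var
    return S

# Toy stand-in for a hydrological response metric (e.g. 1-NSE of a streamflow run);
# the three "parameters" and their bounds are purely illustrative.
toy = lambda X: X[:, 0] ** 2 + 0.5 * X[:, 1] + 0.1 * X[:, 1] * X[:, 2]
print(first_order_sobol(toy, [(0, 1), (0, 1), (0, 1)]))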
Evolution of Geometric Sensitivity Derivatives from Computer Aided Design Models
NASA Technical Reports Server (NTRS)
Jones, William T.; Lazzara, David; Haimes, Robert
2010-01-01
The generation of design parameter sensitivity derivatives is required for gradient-based optimization. Such sensitivity derivatives are elusive at best when working with geometry defined within the solid modeling context of Computer-Aided Design (CAD) systems. Solid modeling CAD systems are often proprietary and always complex, thereby necessitating ad hoc procedures to infer parameter sensitivity. A new perspective is presented that makes direct use of the hierarchical associativity of CAD features to trace their evolution and thereby track design parameter sensitivity. In contrast to ad hoc methods, this method provides a more concise procedure following the model design intent and determining the sensitivity of CAD geometry directly to its respective defining parameters.
Liu, Hsu-Chuan; Den, Walter; Chan, Shu-Fei; Kin, Kuan Tzu
2008-04-25
The present study aimed to develop a procedure, modified from the conventional solid-phase extraction (SPE) method, for the analysis of trace concentrations of phthalate esters in industrial ultrapure water (UPW). The proposed procedure allows a UPW sample to be drawn through a sampling tube containing a hydrophobic sorbent (Tenax TA) to concentrate the aqueous phthalate esters. The solid trap was then demoisturized by two-stage gas drying before being subjected to thermal desorption and analysis by gas chromatography-mass spectrometry. This process eliminates the solvent extraction step required by the conventional SPE method and permits automation of the analytical procedure for high-volume analyses. Several important parameters, including desorption temperature and duration, packing quantity and demoisturizing procedure, were optimized in this study based on the analytical sensitivity for a standard mixture containing five different phthalate esters. The method detection limits for the five phthalate esters were between 36 ng l(-1) and 95 ng l(-1), with recovery rates between 15% and 101%. Dioctyl phthalate (DOP) was not recovered adequately because the compound was both poorly adsorbed onto and poorly desorbed from the Tenax TA sorbent. Furthermore, analyses of material leaching from poly(vinyl chloride) (PVC) tubes as well as of actual water samples showed that di-n-butyl phthalate (DBP) and di(2-ethylhexyl) phthalate (DEHP) were the common contaminants detected in PVC-contaminated UPW, in the actual UPW, and in tap water. The reduction of DEHP in the production processes of actual UPW was clearly observed; however, a DEHP concentration of 0.20 microg l(-1) was still quantified at the point of use, suggesting that contamination by phthalate esters could present a barrier to future cleanliness requirements for UPW. The work demonstrated that the proposed modified SPE procedure provides an effective method for rapid analysis and contamination identification in UPW production lines.
NASA Astrophysics Data System (ADS)
Giardina, G.; Mandaglio, G.; Nasirov, A. K.; Anastasi, A.; Curciarello, F.; Fazio, G.
2018-02-01
Experimental and theoretical results for the PCN fusion probability of reactants in the entrance channel and the Wsur survival probability against fission at deexcitation of the compound nucleus formed in heavy-ion collisions are discussed. The theoretical results for a set of nuclear reactions leading to the formation of compound nuclei (CNs) with charge number Z = 102-122 reveal a strong sensitivity of PCN to the characteristics of the colliding nuclei in the entrance channel, the dynamics of the reaction mechanism, and the excitation energy of the system. We discuss the validity of assumptions and procedures for the analysis of experimental data, as well as the limits of validity of theoretical results obtained with phenomenological models. The comparison of results obtained in many investigated reactions reveals serious limits of validity of the data analysis and calculation procedures.
Real, Ruben G. L.; Kotchoubey, Boris; Kübler, Andrea
2014-01-01
This study aimed at evaluating the performance of the Studentized Continuous Wavelet Transform (t-CWT) as a method for the extraction and assessment of event-related brain potentials (ERP) in data from a single subject. Sensitivity, specificity, positive (PPV) and negative predictive values (NPV) of the t-CWT were assessed and compared to a variety of competing procedures using simulated EEG data at six low signal-to-noise ratios. Results show that the t-CWT combines high sensitivity and specificity with favorable PPV and NPV. Applying the t-CWT to authentic EEG data obtained from 14 healthy participants confirmed its high sensitivity. The t-CWT may thus be well suited for the assessment of weak ERPs in single-subject settings. PMID:25309308
Korenblit, Jason; Tholey, Danielle M.; Tolin, Joanna; Loren, David; Kowalski, Thomas; Adler, Douglas G.; Davolos, Julie; Siddiqui, Ali A.
2016-01-01
Background and Objectives: Recent reports have indicated that the time of day may impact the detection rate of abnormal cytology on gynecologic cytology samples. The aim of this study was to determine if procedure time or queue position affected the performance characteristics of endoscopic ultrasound-guided fine-needle aspiration (EUS-FNA) for diagnosing solid pancreatic malignancies. Patients and Methods: We conducted a retrospective study evaluating patients with solid pancreatic lesions in whom EUS-FNA was performed. Three timing variables were evaluated as surrogate markers for endoscopist fatigue: Procedure start times, morning versus afternoon procedures, and endoscopy queue position. Statistical analyses were performed to determine whether the timing variables predicted performance characteristics of EUS-FNA. Results: We identified 609 patients (mean age: 65.8 years, 52.1% males) with solid pancreatic lesions who underwent EUS-FNA. The sensitivity of EUS-FNA was 100% for procedures that started at 7 AM while cases that started at 4 PM had a sensitivity of 81%. Using start time on a continuous scale, each elapsed hour was associated with a 1.9% decrease in EUS-FNA sensitivity (P = 0.003). Similarly, a 10% reduction in EUS-FNA sensitivity was detected between morning and afternoon procedures (92% vs. 82% respectively, P = 0.0006). A linear regression comparing the procedure start time and diagnostic accuracy revealed a decrease of approximately 1.7% in procedure accuracy for every hour later a procedure was started. A 16% reduction in EUS-FNA accuracy was detected between morning and afternoon procedures (100% vs. 84% respectively, P = 0.0009). When the queue position was assessed, a 2.4% reduction in accuracy was noted for each increase in the queue position (P = 0.013). Conclusion: Sensitivity and diagnostic accuracy of EUS-FNA for solid pancreatic lesions decline with progressively later EUS starting times and increasing numbers of procedures before a given EUS, potentially from endoscopist fatigue and cytotechnologist fatigue. PMID:27080605
Comparison of normalization methods for differential gene expression analysis in RNA-Seq experiments
Maza, Elie; Frasse, Pierre; Senin, Pavel; Bouzayen, Mondher; Zouine, Mohamed
2013-01-01
In recent years, RNA-Seq technologies became a powerful tool for transcriptome studies. However, computational methods dedicated to the analysis of high-throughput sequencing data have yet to be standardized. In particular, it is known that the choice of a normalization procedure leads to great variability in the results of differential gene expression analysis. The present study compares the most widespread normalization procedures and proposes a novel one aimed at removing an inherent bias of the studied transcriptomes related to their relative size. Comparisons of the normalization procedures are performed on real and simulated data sets. Analyses of real RNA-Seq data sets, performed with all the different normalization methods, show that only 50% of the significantly differentially expressed genes are common across methods. This result highlights the influence of the normalization step on the differential expression analysis. Analyses of real and simulated data sets give similar results, showing three groups of procedures with the same behavior. The group including the novel method, named "Median Ratio Normalization" (MRN), gives the lowest number of false discoveries. Within this group, the MRN method is less sensitive to modification of parameters related to the relative size of the transcriptomes, such as the number of down- and upregulated genes and the gene expression levels. The newly proposed MRN method efficiently deals with the intrinsic bias resulting from the relative size of the studied transcriptomes. Validation with real and simulated data sets confirmed that MRN is more consistent and robust than existing methods. PMID:26442135
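The "median ratio" idea is closely related to the widely used median-of-ratios scaling for count data, in which each library's size factor is the median of its gene-wise ratios to a reference formed from the per-gene geometric mean across samples. The sketch below shows that generic computation on an invented count matrix; it is a plausible reading of the approach rather than the authors' exact MRN definition.

import numpy as np

def median_of_ratios_size_factors(counts):
    # counts: (genes, samples) array of raw read counts.
    # Returns one scaling factor per sample, the median of that sample's gene-wise
    # ratios to the per-gene geometric-mean reference; genes with a zero count in
    # any sample are excluded from the reference.
    counts = np.asarray(counts, dtype=float)
    keep = np.all(counts > 0, axis=1)
    logs = np.log(counts[keep])
    log_ref = logs.mean(axis=1)                       # per-gene geometric mean (log scale)
    return np.exp(np.median(logs - log_ref[:, None], axis=0))

raw = np.array([[100, 200, 400],                      # toy counts: sample 2 is 2x, sample 3 is 4x sample 1
                [ 50, 100, 200],
                [ 10,  20,  40],
                [  5,  10,  20]])
factors = median_of_ratios_size_factors(raw)
normalized = raw / factors                            # counts rescaled to a common library size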
Patrick, Hannah; Sims, Andrew; Burn, Julie; Bousfield, Derek; Colechin, Elaine; Reay, Christopher; Alderson, Neil; Goode, Stephen; Cunningham, David; Campbell, Bruce
2013-03-01
New devices and procedures are often introduced into health services when the evidence base for their efficacy and safety is limited. The authors sought to assess the availability and accuracy of routinely collected Hospital Episodes Statistics (HES) data in the UK and their potential contribution to the monitoring of new procedures. Four years of HES data (April 2006-March 2010) were analysed to identify episodes of hospital care involving a sample of 12 new interventional procedures. HES data were cross checked against other relevant sources including national or local registers and manufacturers' information. HES records were available for all 12 procedures during the entire study period. Comparative data sources were available from national (5), local (2) and manufacturer (2) registers. Factors found to affect comparisons were miscoding, alternative coding and inconsistent use of subsidiary codes. The analysis of provider coverage showed that HES is sensitive at detecting centres which carry out procedures, but specificity is poor in some cases. Routinely collected HES data have the potential to support quality improvements and evidence-based commissioning of devices and procedures in health services but achievement of this potential depends upon the accurate coding of procedures.
A Primer of Covert Sensitization
ERIC Educational Resources Information Center
Kearney, Albert J.
2006-01-01
Covert sensitization is the first of a family of behavior therapy procedures called covert conditioning initially developed by Joseph Cautela in the 1960s and 1970s. The covert conditioning procedures involve the use of visualized imagery and are designed to work according to operant conditioning principles. When working with cooperative clients…
Forreryd, Andy; Johansson, Henrik; Albrekt, Ann-Sofie; Lindstedt, Malin
2014-05-16
Allergic contact dermatitis (ACD) develops upon exposure to certain chemical compounds termed skin sensitizers. To reduce the occurrence of skin sensitizers, chemicals are regularly screened for their capacity to induce sensitization. The recently developed Genomic Allergen Rapid Detection (GARD) assay is an in vitro alternative to animal testing for identification of skin sensitizers, classifying chemicals by evaluating transcriptional levels of a genomic biomarker signature. During assay development and biomarker identification, genome-wide expression analysis was applied using microarrays covering approximately 30,000 transcripts. However, the microarray platform suffers from drawbacks in terms of low sample throughput, high cost per sample and time consuming protocols and is a limiting factor for adaption of GARD into a routine assay for screening of potential sensitizers. With the purpose to simplify assay procedures, improve technical parameters and increase sample throughput, we assessed the performance of three high throughput gene expression platforms--nCounter®, BioMark HD™ and OpenArray®--and correlated their performance metrics against our previously generated microarray data. We measured the levels of 30 transcripts from the GARD biomarker signature across 48 samples. Detection sensitivity, reproducibility, correlations and overall structure of gene expression measurements were compared across platforms. Gene expression data from all of the evaluated platforms could be used to classify most of the sensitizers from non-sensitizers in the GARD assay. Results also showed high data quality and acceptable reproducibility for all platforms but only medium to poor correlations of expression measurements across platforms. In addition, evaluated platforms were superior to the microarray platform in terms of cost efficiency, simplicity of protocols and sample throughput. We evaluated the performance of three non-array based platforms using a limited set of transcripts from the GARD biomarker signature. We demonstrated that it was possible to achieve acceptable discriminatory power in terms of separation between sensitizers and non-sensitizers in the GARD assay while reducing assay costs, simplify assay procedures and increase sample throughput by using an alternative platform, providing a first step towards the goal to prepare GARD for formal validation and adaption of the assay for industrial screening of potential sensitizers.
2014-01-01
Background: The purpose of this analysis was to determine whether in-office diagnostic needle arthroscopy (Visionscope Imaging System [VSI]) can provide improved diagnostic assessment and more cost-effective care. Methods: Data on arthroscopy procedures in the US for deep-seated pathology in the knee and shoulder were used (calendar year 2012). These procedures represent approximately 25-30% of all arthroscopic procedures performed annually. Sensitivities, specificities, positive predictive values, and negative predictive values for MRI analysis of this deep-seated pathology, taken from systematic reviews and meta-analyses, were used in assessing false positive and false negative MRI findings. The costs of performing diagnostic and surgical arthroscopy procedures (using 2013 Medicare reimbursement amounts), the costs associated with false negative findings, and the costs of treating complications arising from diagnostic and therapeutic arthroscopy procedures were then assessed. Results: In patients presenting with medial meniscal pathology (ICD9CM diagnosis 836.0; over 540,000 procedures in CY 2012), use of the VSI system in place of MRI assessment (standard of care) resulted in a net cost savings to the system of $151 million. In patients presenting with rotator cuff pathology (ICD9CM 840.4; over 165,000 procedures in CY 2012), use of VSI in place of MRI similarly saved $59 million. These savings were realized along with more appropriate care, as fewer patients were exposed to higher-risk surgical arthroscopic procedures. Conclusions: The use of an in-office arthroscopy system can possibly save the US healthcare system money, shorten the diagnostic odyssey for patients, potentially better prepare clinicians for arthroscopic surgery (when needed), and eliminate unnecessary outpatient arthroscopy procedures, which commonly result in surgical intervention. PMID:24885678
Jin, Yulong; Huang, Yanyan; Liu, Guoquan; Zhao, Rui
2013-09-21
A novel quartz crystal microbalance (QCM) sensor for rapid, highly selective and sensitive detection of copper ions was developed. As a signal amplifier, gold nanoparticles (Au NPs) were self-assembled onto the surface of the sensor. A simple dip-and-dry method enabled the whole detection procedure to be accomplished within 20 min. High selectivity of the sensor towards copper ions is demonstrated by both individual and coexisting assays with interference ions. This gold nanoparticle mediated amplification allowed a detection limit down to 3.1 μM. Together with good repeatability and regeneration, the QCM sensor was also applied to the analysis of copper contamination in drinking water. This work provides a flexible method for fabricating QCM sensors for the analysis of important small molecules in environmental and biological samples.
Tinnitus sensitization: a neurophysiological pathway of chronic complex tinnitus.
Zenner, Hans P
2006-01-01
A novel neuro- and psychophysiological pathway for central cognition of tinnitus, i.e. tinnitus sensitization, is presented here. As a complement to the neurophysiological pathway for the conditioned reflex according to Jastreboff, which permits therapeutic procedures to bring about an extinction of the tinnitus (e.g. by the acoustic tinnitus retraining therapy), sensitization can be treated with procedures that act at the cognitive level. Since on the one hand therapeutic extinction procedures (e.g. the therapeutic application of sound) are still to be proven effective in controlled studies, while on the other cognitive interventions such as cognitive behavioral therapies have in fact acquired evidence level IIa in prospective studies, it is indeed appropriate to discuss whether the earlier neurophysiological model of a conditioned reflex is sufficient on its own, and whether in fact it needs to be complemented with the sensitization model.
DSOD Procedures for Seismic Hazard Analysis
NASA Astrophysics Data System (ADS)
Howard, J. K.; Fraser, W. A.
2005-12-01
DSOD, which has jurisdiction over more than 1200 dams in California, routinely evaluates their dynamic stability using seismic shaking input ranging from simple pseudostatic coefficients to spectrally matched earthquake time histories. Our seismic hazard assessments assume maximum earthquake scenarios of nearest active and conditionally active seismic sources. Multiple earthquake scenarios may be evaluated depending on sensitivity of the design analysis (e.g., to certain spectral amplitudes, duration of shaking). Active sources are defined as those with evidence of movement within the last 35,000 years. Conditionally active sources are those with reasonable expectation of activity, which are treated as active until demonstrated otherwise. The Division's Geology Branch develops seismic hazard estimates using spectral attenuation formulas applicable to California. The formulas were selected, in part, to achieve a site response model similar to the 2000 IBC's for rock, soft rock, and stiff soil sites. The level of dynamic loading used in the stability analysis (50th, 67th, or 84th percentile ground shaking estimates) is determined using a matrix that considers consequence of dam failure and fault slip rate. We account for near-source directivity amplification along such faults by adjusting target response spectra and developing appropriate design earthquakes for analysis of structures sensitive to long-period motion. Based on in-house studies, the orientation of the dam analysis section relative to the fault-normal direction is considered for strike-slip earthquakes, but directivity amplification is assumed in any orientation for dip-slip earthquakes. We do not have probabilistic standards, but we evaluate the probability of our ground shaking estimates using hazard curves constructed from the USGS Interactive De-Aggregation website. Typically, return periods for our design loads exceed 1000 years. Excessive return periods may warrant a lower design load. Minimum shaking levels are provided for sites far from active faulting. Our procedures and standards are presented at the DSOD website http://damsafety.water.ca.gov/. We review our methods and tools periodically under the guidance of our Consulting Board for Earthquake Analysis (and expect to make changes pending NGA completion), mindful that frequent procedural changes can interrupt design evaluations.
Interpreting the results of chemical stone analysis in the era of modern stone analysis techniques
Gilad, Ron; Williams, James C.; Usman, Kalba D.; Holland, Ronen; Golan, Shay; Ruth, Tor; Lifshitz, David
2017-01-01
Introduction and Objective: Stone analysis should be performed in all first-time stone formers. The preferred analytical procedures are Fourier-transform infrared spectroscopy (FT-IR) or X-ray diffraction (XRD). However, due to limited resources, chemical analysis (CA) is still in use throughout the world. The aim of the study was to compare FT-IR and CA on well-matched stone specimens and characterize the pros and cons of CA. Methods: In a prospective bi-center study, urinary stones were retrieved from 60 consecutive endoscopic procedures. To ensure that identical stone samples were sent for analysis, the samples were initially examined by micro-computed tomography to assess the uniformity of each specimen before being submitted for FT-IR and CA. Results: Overall, the results of CA did not match the FT-IR results in 56% of cases. In 16% of cases CA missed the major stone component and in 40% the minor stone component. Of the 60 specimens, 37 contained CaOx as the major component by FT-IR, while CA reported major CaOx in 47/60, resulting in high sensitivity but very poor specificity. CA was relatively accurate for UA and cystine. CA missed struvite and calcium phosphate as a major component in all cases. In mixed stones the sensitivity of CA for the minor component was poor, generally less than 50%. Conclusions: Urinary stone analysis using CA provides only limited data that should be interpreted carefully, and it is likely to result in clinically significant errors in the assessment of stone composition. Although the monetary costs of CA are relatively modest, this method does not provide the level of analytical specificity required for proper management of patients with metabolic stones. PMID:26956131
Jastrzębska, Aneta; Piasta, Anna; Szłyk, Edward
2014-01-01
A simple and useful method for the determination of biogenic amines in beverage samples based on isotachophoretic separation is described. The proposed procedure permitted simultaneous analysis of histamine, tyramine, cadaverine, putrescine, tryptamine, 2-phenylethylamine, spermine and spermidine. The data presented demonstrate the utility, simplicity, flexibility, sensitivity and environmentally friendly character of the proposed method. The precision of the method expressed as coefficient of variations varied from 0.1% to 5.9% for beverage samples, whereas recoveries varied from 91% to 101%. The results for the determination of biogenic amines were compared with an HPLC procedure based on a pre-column derivatisation reaction of biogenic amines with dansyl chloride. Furthermore, the derivatisation procedure was optimised by verification of concentration and pH of the buffer, the addition of organic solvents, reaction time and temperature.
Sensitivity analysis of machine-learning models of hydrologic time series
NASA Astrophysics Data System (ADS)
O'Reilly, A. M.
2017-12-01
Sensitivity analysis traditionally has been applied to assessing model response to perturbations in model parameters, where the parameters are those model input variables adjusted during calibration. Unlike physics-based models, where parameters represent real phenomena, the equivalent of parameters for machine-learning models are simply mathematical "knobs" that are automatically adjusted during training/testing/verification procedures. Thus the challenge of extracting knowledge of hydrologic system functionality from machine-learning models lies in their very nature, leading to the label "black box." Sensitivity analysis of the forcing-response behavior of machine-learning models, however, can provide understanding of how the physical phenomena represented by model inputs affect the physical phenomena represented by model outputs. As part of a previous study, hybrid spectral-decomposition artificial neural network (ANN) models were developed to simulate the observed behavior of hydrologic response contained in multidecadal datasets of lake water level, groundwater level, and spring flow. Model inputs used moving window averages (MWA) to represent various frequencies and frequency-band components of time series of rainfall and groundwater use. Using these forcing time series, the MWA-ANN models were trained to predict time series of lake water level, groundwater level, and spring flow at 51 sites in central Florida, USA. A time series of sensitivities for each MWA-ANN model was produced by perturbing forcing time series and computing the change in response time series per unit change in perturbation. Variations in forcing-response sensitivities are evident between types (lake, groundwater level, or spring), spatially (among sites of the same type), and temporally. Two generally common characteristics among sites are more uniform sensitivities to rainfall over time and notable increases in sensitivities to groundwater usage during significant drought periods.
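A stripped-down version of this forcing-perturbation idea, independent of the study's MWA-ANN models, is to perturb one input series of a trained model and record the change in predicted response per unit change in the forcing. The sketch below assumes a generic fitted regressor with a predict(X) method (a scikit-learn MLPRegressor is used as a stand-in); the feature layout, toy data, and perturbation size are illustrative assumptions.

import numpy as np
from sklearn.neural_network import MLPRegressor

def forcing_sensitivity(model, X, column, delta=1.0):
    # Change in predicted response per unit perturbation of one forcing, per time step.
    # model: any fitted regressor exposing predict(X)
    # X:     (n_times, n_features) matrix of forcing inputs (e.g. moving-window
    #        averages of rainfall and groundwater use)
    # column: index of the forcing to perturb
    X_pert = X.copy()
    X_pert[:, column] += delta
    return (model.predict(X_pert) - model.predict(X)) / delta

# Illustrative use with a generic regressor standing in for the trained ANN.
rng = np.random.default_rng(1)
X = rng.random((500, 4))
y = 2.0 * X[:, 0] - 0.5 * X[:, 2] + 0.1 * rng.standard_normal(500)
ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
rain_sensitivity = forcing_sensitivity(ann, X, column=0, delta=0.01)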
Integrated Data Collection Analysis (IDCA) program--KClO4/Dodecane Mixture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sandstrom, Mary M.; Brown, Geoffrey W.; Preston, Daniel N.
The Integrated Data Collection Analysis (IDCA) program is conducting a proficiency study for Small-Scale Safety and Thermal (SSST) testing of homemade explosives (HMEs). Described here are the results for impact, friction, electrostatic discharge, and differential scanning calorimetry analysis of a mixture of KClO4 and dodecane. This material was selected because of the challenge of performing SSST testing on a mixture of solid and liquid materials. The mixture was found to be: 1) less sensitive to impact than RDX and PETN, 2) less sensitive to friction than RDX and PETN, and 3) less sensitive to spark than RDX and PETN. The thermal analysis showed little or no exothermic features, suggesting that the dodecane volatilized at low temperatures. A prominent endothermic feature was observed and assigned to a phase transition of KClO4. This effort, funded by the Department of Homeland Security (DHS), ultimately will put the issues of safe handling of these materials in perspective with standard military explosives. The study is adding SSST testing results for a broad suite of different HMEs to the literature. Ultimately the study has the potential to suggest new guidelines and methods and possibly establish the SSST testing accuracies needed to develop safe handling practices for HMEs. Each participating testing laboratory uses identical test materials and preparation methods wherever possible. Note, however, that the test procedures differ among the laboratories. The results are compared among the laboratories and then compared to historical data from various sources. The testing performers involved for the KClO4/dodecane mixture are Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory (LANL), Indian Head Division, Naval Surface Warfare Center (NSWC IHD), and Air Force Research Laboratory (AFRL/RXQL). These tests are conducted as a proficiency study in order to establish some consistency in test protocols, procedures, and experiments and to understand how to compare results when these testing variables cannot be made consistent.
Pumping tests in non-uniform aquifers - the linear strip case
Butler, J.J.; Liu, W.Z.
1991-01-01
Many pumping tests are performed in geologic settings that can be conceptualized as a linear infinite strip of one material embedded in a matrix of differing flow properties. A semi-analytical solution is presented to aid the analysis of drawdown data obtained from pumping tests performed in settings that can be represented by such a conceptual model. Integral transform techniques are employed to obtain a solution in transform space that can be numerically inverted to real space. Examination of the numerically transformed solution reveals several interesting features of flow in this configuration. If the transmissivity of the strip is much higher than that of the matrix, linear and bilinear flow are the primary flow regimes during a pumping test. If the contrast between matrix and strip properties is not as extreme, then radial flow should be the primary flow mechanism. Sensitivity analysis is employed to develop insight into the controls on drawdown in this conceptual model and to demonstrate the importance of temporal and spatial placement of observations. Changes in drawdown are sensitive to the transmissivity of the strip for a limited time duration. After that time, only the total drawdown remains a function of strip transmissivity. In the case of storativity, both the total drawdown and changes in drawdown are sensitive to the storativity of the strip for a time of quite limited duration. After that time, essentially no information can be gained about the storage properties of the strip from drawdown data. An example analysis is performed using data previously presented in the literature to demonstrate the viability of the semi-analytical solution and to illustrate a general procedure for analysis of drawdown data in complex geologic settings. This example reinforces the importance of observation well placement and the time of data collection in constraining parameter correlation, a major source of the uncertainty that arises in the parameter estimation procedure. © 1991.
Hutsell, Blake A; Negus, S Stevens; Banks, Matthew L
2015-01-01
We have previously demonstrated reductions in cocaine choice produced by either continuous 14-day phendimetrazine and d-amphetamine treatment or removing cocaine availability under a cocaine vs. food choice procedure in rhesus monkeys. The aim of the present investigation was to apply the concatenated generalized matching law (GML) to cocaine vs. food choice dose-effect functions incorporating sensitivity to both the relative magnitude and price of each reinforcer. Our goal was to determine potential behavioral mechanisms underlying pharmacological treatment efficacy to decrease cocaine choice. A multi-model comparison approach was used to characterize dose- and time-course effects of both pharmacological and environmental manipulations on sensitivity to reinforcement. GML models provided an excellent fit of the cocaine choice dose-effect functions in individual monkeys. Reductions in cocaine choice by both pharmacological and environmental manipulations were principally produced by systematic decreases in sensitivity to reinforcer price and non-systematic changes in sensitivity to reinforcer magnitude. The modeling approach used provides a theoretical link between the experimental analysis of choice and pharmacological treatments being evaluated as candidate 'agonist-based' medications for cocaine addiction. The analysis suggests that monoamine releaser treatment efficacy to decrease cocaine choice was mediated by selectively increasing the relative price of cocaine. Overall, the net behavioral effect of these pharmacological treatments was to increase substitutability of food pellets, a nondrug reinforcer, for cocaine. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
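For reference, a commonly cited two-term form of the concatenated generalized matching law is (the notation here is generic and may differ in detail from the parameterization fitted in this study):

\log\left(\frac{B_1}{B_2}\right) = a_M \log\left(\frac{M_1}{M_2}\right) - a_P \log\left(\frac{P_1}{P_2}\right) + \log b

where B_i is the responding allocated to option i, M_i and P_i are its reinforcer magnitude and price, a_M and a_P are the sensitivity parameters for relative magnitude and relative price, and b is a bias term.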
NASA Technical Reports Server (NTRS)
Lewis, Robert Michael; Patera, Anthony T.; Peraire, Jaume
1998-01-01
We present a Neumann-subproblem a posteriori finite element procedure for the efficient and accurate calculation of rigorous, 'constant-free' upper and lower bounds for sensitivity derivatives of functionals of the solutions of partial differential equations. The design motivation for sensitivity derivative error control is discussed; the a posteriori finite element procedure is described; the asymptotic bounding properties and computational complexity of the method are summarized; and illustrative numerical results are presented.
Grant, Kerry-Ann; McMahon, Catherine; Reilly, Nicole; Austin, Marie-Paule
2010-12-01
Animal studies have demonstrated the interactive effects of prenatal stress exposure and postnatal rearing style on offspring capacity to manage stress. However, little is known about how parenting quality impacts the association between maternal prenatal anxiety and stress reactivity in human infants. This prospective study examined the impact of prenatal anxiety disorder and maternal caregiving sensitivity on infants' responses to a standardised interactive stressor (still-face procedure). Eighty-four women completed a clinical interview during pregnancy to assess anxiety symptoms meeting DSM-IV diagnostic criteria. At infant age 7 months, maternal sensitivity to infant distress and infant negative affect were observed and coded during the still-face procedure. Maternal postnatal (concurrent) anxiety and depression were also assessed at this time. Results indicated a negative association between maternal sensitivity to infant distress and infant negative affect responses to the still-face procedure. An unexpected finding was a positive association between parity and infant reactivity. The main effect for sensitivity was qualified by a significant interaction, p<.05, suggesting that the impact of sensitivity was particularly marked among infants of women who experienced an anxiety disorder during pregnancy. This finding is consistent with a cumulative risk model suggesting that maternal prenatal anxiety and quality of maternal care act in concert to shape infant outcomes. Copyright © 2010 Elsevier Inc. All rights reserved.
Gradient-Based Aerodynamic Shape Optimization Using ADI Method for Large-Scale Problems
NASA Technical Reports Server (NTRS)
Pandya, Mohagna J.; Baysal, Oktay
1997-01-01
A gradient-based shape optimization methodology, intended for practical three-dimensional aerodynamic applications, has been developed. It is based on quasi-analytical sensitivities. The flow analysis is rendered by a fully implicit, finite volume formulation of the Euler equations. The aerodynamic sensitivity equation is solved using the alternating-direction-implicit (ADI) algorithm for memory efficiency. A flexible wing geometry model, based on surface parameterization and planform schedules, is utilized. The present methodology and its components have been tested via several comparisons. Initially, the flow analysis for a wing is compared with those obtained using an unfactored, preconditioned conjugate gradient approach (PCG) and an extensively validated CFD code. Then, the sensitivities computed with the present method are compared with those obtained using the finite-difference and the PCG approaches. Effects of grid refinement and convergence tolerance on the analysis and shape optimization have been explored. Finally, the new procedure has been demonstrated in the design of a cranked-arrow wing at Mach 2.4. Despite the expected increase in computational time, the results indicate that shape optimization problems that require large numbers of grid points can be resolved with a gradient-based approach.
Regression-based adaptive sparse polynomial dimensional decomposition for sensitivity analysis
NASA Astrophysics Data System (ADS)
Tang, Kunkun; Congedo, Pietro; Abgrall, Remi
2014-11-01
Polynomial dimensional decomposition (PDD) is employed in this work for global sensitivity analysis and uncertainty quantification of stochastic systems subject to a large number of random input variables. Due to the intimate connection between PDD and analysis of variance (ANOVA), PDD provides a simpler and more direct evaluation of the Sobol' sensitivity indices than polynomial chaos (PC). Unfortunately, the number of PDD terms grows exponentially with the size of the input random vector, which makes the computational cost of the standard method unaffordable for real engineering applications. To address this curse of dimensionality, this work proposes a variance-based adaptive strategy aiming to build a cheap meta-model by sparse PDD with the PDD coefficients computed by regression. During this adaptive procedure, the model representation by PDD contains only a few terms, so that the cost of repeatedly solving the linear system of the least-squares regression problem is negligible. The size of the final sparse-PDD representation is much smaller than the full PDD, since only significant terms are eventually retained. Consequently, far fewer calls to the deterministic model are required to compute the final PDD coefficients.
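Under the ANOVA-type orthogonality referred to above, the Sobol' indices follow directly from the retained expansion coefficients. In a generic orthonormal-basis notation (which may differ in detail from the authors' PDD convention), with coefficients c_{u,k} attached to basis functions involving the variable subset u,

D = \sum_{u \neq \emptyset} \sum_{k} c_{u,k}^{2}, \qquad S_i = \frac{1}{D} \sum_{k} c_{\{i\},k}^{2},

so the first-order index S_i is simply the share of the total variance D carried by the univariate terms in variable i; the coefficients themselves are obtained from the least-squares regression fit mentioned above.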
Sensitivity Analysis for Probabilistic Neural Network Structure Reduction.
Kowalski, Piotr A; Kusy, Maciej
2018-05-01
In this paper, we propose the use of local sensitivity analysis (LSA) for the structure simplification of the probabilistic neural network (PNN). Three algorithms are introduced. The first algorithm applies LSA to the PNN input layer reduction by selecting significant features of input patterns. The second algorithm utilizes LSA to remove redundant pattern neurons of the network. The third algorithm combines the proposed two and constitutes the solution of how they can work together. PNN with a product kernel estimator is used, where each multiplicand computes a one-dimensional Cauchy function. Therefore, the smoothing parameter is separately calculated for each dimension by means of the plug-in method. The classification qualities of the reduced and full structure PNN are compared. Furthermore, we evaluate the performance of PNN, for which global sensitivity analysis (GSA) and the common reduction methods are applied, both in the input layer and the pattern layer. The models are tested on the classification problems of eight repository data sets. A 10-fold cross validation procedure is used to determine the prediction ability of the networks. Based on the obtained results, it is shown that the LSA can be used as an alternative PNN reduction approach.
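A bare-bones version of the pattern and summation layers described above, with a product of one-dimensional Cauchy kernels, is sketched below; the toy data and the fixed smoothing values are illustrative assumptions, and the plug-in bandwidth selection is not reproduced.

import numpy as np

def pnn_predict(x, train_X, train_y, h):
    # Classify x with a probabilistic neural network using a product Cauchy kernel.
    # train_X: (n_patterns, d) stored pattern neurons; train_y: their class labels
    # h:       (d,) per-dimension smoothing parameters (assumed fixed here, not plug-in)
    u = (x - train_X) / h                                         # pattern layer inputs
    kernel = np.prod(1.0 / (np.pi * h * (1.0 + u ** 2)), axis=1)  # product of 1-D Cauchy pdfs
    classes = np.unique(train_y)
    scores = [kernel[train_y == c].mean() for c in classes]       # summation layer
    return classes[int(np.argmax(scores))]                        # decision layer

# Toy two-class example with three input features.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (20, 3)), rng.normal(3, 1, (20, 3))])
y = np.array([0] * 20 + [1] * 20)
print(pnn_predict(np.array([2.5, 2.5, 2.5]), X, y, h=np.full(3, 0.8)))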
Yost, Fred; Hosking, Floyd M.; Jellison, James L.; Short, Bruce; Giversen, Terri; Reed, Jimmy R.
1998-01-01
A new test method was developed to quantify capillary flow solderability on a printed wiring board surface finish. The test is based on solder flow from a pad onto narrow strips or lines. A test procedure and a video image analysis technique were developed for conducting the test and evaluating the data. Feasibility tests revealed that the wetted distance was sensitive to the ratio of pad radius to line width (l/r), solder volume, and flux predry time.
Schmitt, Michael; Heib, Florian
2013-10-07
Drop shape analysis is one of the most important and frequently used methods to characterise surfaces in the scientific and industrial communities. An especially large number of studies, which use contact angle measurements to analyse surfaces, are characterised by incorrect or misdirected conclusions, such as the determination of surface energies from poorly performed contact angle determinations. In particular, the characterisation of surfaces, which leads to correlations between the contact angle and other effects, must be critically validated for some publications. A large number of works exist concerning the theoretical and thermodynamic aspects of two- and tri-phase boundaries. The linkage between theory and experiment is generally performed by an axisymmetric drop shape analysis, that is, simulation of the theoretical drop profiles by numerical integration onto a number of points of the drop meniscus (approximately 20). These methods work very well for axisymmetric profiles such as those obtained by pendant drop measurements, but in the case of a sessile drop on a real surface, additional unknown and misunderstood effects of the surface must be considered. We present a special experimental and practical investigation as another way to transition from experiment to theory. This procedure was developed to be especially sensitive to small variations in the dependence of the dynamic contact angle on the surface; as a result, it allows the properties of the surface to be monitored with higher precision and sensitivity. In this context, water drops on a (111) silicon wafer are dynamically measured by video recording while inclining the surface, which results in a sequence of non-axisymmetric drops. The drop profiles are analysed by commercial software and by the developed and presented high-precision drop shape analysis. In addition to the enhanced sensitivity for contact angle determination, this analysis technique, in combination with innovative fit algorithms and data presentations, can result in enhanced reproducibility and comparability of contact angle measurements for material characterisation in a comprehensible way.
Vellmer, Sebastian; Tonoyan, Aram S; Suter, Dieter; Pronin, Igor N; Maximov, Ivan I
2018-02-01
Diffusion magnetic resonance imaging (dMRI) is a powerful tool in clinical applications, in particular, in oncology screening. dMRI demonstrated its benefit and efficiency in the localisation and detection of different types of human brain tumours. Clinical dMRI data suffer from multiple artefacts such as motion and eddy-current distortions, contamination by noise, outliers etc. In order to increase the image quality of the derived diffusion scalar metrics and the accuracy of the subsequent data analysis, various pre-processing approaches are actively developed and used. In the present work we assess the effect of different pre-processing procedures such as a noise correction, different smoothing algorithms and spatial interpolation of raw diffusion data, with respect to the accuracy of brain glioma differentiation. As a set of sensitive biomarkers of the glioma malignancy grades we chose the derived scalar metrics from diffusion and kurtosis tensor imaging as well as the neurite orientation dispersion and density imaging (NODDI) biophysical model. Our results show that the application of noise correction, anisotropic diffusion filtering, and cubic-order spline interpolation resulted in the highest sensitivity and specificity for glioma malignancy grading. Thus, these pre-processing steps are recommended for the statistical analysis in brain tumour studies. Copyright © 2017. Published by Elsevier GmbH.
NASA Astrophysics Data System (ADS)
Samadi-Maybodi, Abdolraouf; Darzi, S. K. Hassani Nejad
2008-10-01
Resolution of binary mixtures of vitamin B12, methylcobalamin and B12 coenzyme, with minimum sample pre-treatment and without analyte separation, has been successfully achieved by partial least squares with one dependent variable (PLS1), orthogonal signal correction/partial least squares (OSC/PLS), principal component regression (PCR) and hybrid linear analysis (HLA). Data for the analysis were obtained from UV-vis spectra. The UV-vis spectra of vitamin B12, methylcobalamin and B12 coenzyme were recorded under the same spectral conditions. The method of central composite design was used in the ranges of 10-80 mg L(-1) for vitamin B12 and methylcobalamin and 20-130 mg L(-1) for B12 coenzyme. Model refinement and validation were performed by cross-validation. The minimum root mean square error of prediction (RMSEP) was 2.26 mg L(-1) for vitamin B12 with PLS1, 1.33 mg L(-1) for methylcobalamin with OSC/PLS and 3.24 mg L(-1) for B12 coenzyme with HLA. Figures of merit such as selectivity, sensitivity, analytical sensitivity and LOD were determined for the three compounds. The procedure was successfully applied to the simultaneous determination of the three compounds in synthetic mixtures and in a pharmaceutical formulation.
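For reference, the RMSEP values quoted above are conventionally computed over the prediction set as

\mathrm{RMSEP} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\hat{c}_{i} - c_{i}\right)^{2}},

where \hat{c}_i and c_i are the predicted and reference concentrations of the i-th prediction sample; this standard definition is assumed here rather than stated explicitly in the abstract.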
NASA Technical Reports Server (NTRS)
Rais-Rohani, Masoud
2001-01-01
This report describes the preliminary results of an investigation on component reliability analysis and reliability-based design optimization of thin-walled circular composite cylinders with average diameter and average length of 15 inches. Structural reliability is based on axial buckling strength of the cylinder. Both Monte Carlo simulation and First Order Reliability Method are considered for reliability analysis with the latter incorporated into the reliability-based structural optimization problem. To improve the efficiency of reliability sensitivity analysis and design optimization solution, the buckling strength of the cylinder is estimated using a second-order response surface model. The sensitivity of the reliability index with respect to the mean and standard deviation of each random variable is calculated and compared. The reliability index is found to be extremely sensitive to the applied load and elastic modulus of the material in the fiber direction. The cylinder diameter was found to have the third highest impact on the reliability index. Also the uncertainty in the applied load, captured by examining different values for its coefficient of variation, is found to have a large influence on cylinder reliability. The optimization problem for minimum weight is solved subject to a design constraint on element reliability index. The methodology, solution procedure and optimization results are included in this report.
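As a minimal sketch of the Monte Carlo side of such a reliability analysis (the response-surface strength model, the random variables, and their distributions below are placeholders, not the study's values), the failure probability and a corresponding generalized reliability index can be estimated by sampling:

import numpy as np
from scipy.stats import norm

def mc_reliability(limit_state, sample_inputs, n=200_000, seed=0):
    # Monte Carlo estimate of failure probability and generalized reliability index.
    # limit_state:   g(x) <= 0 defines failure (e.g. buckling strength minus applied load)
    # sample_inputs: callable(rng, n) returning an (n, d) sample of the random variables
    rng = np.random.default_rng(seed)
    g = limit_state(sample_inputs(rng, n))
    pf = np.mean(g <= 0.0)
    beta = -norm.ppf(pf) if 0.0 < pf < 1.0 else np.inf
    return pf, beta

def sample_inputs(rng, n):
    E1 = rng.normal(20.0e6, 1.0e6, n)     # modulus in fiber direction (psi), assumed values
    t = rng.normal(0.04, 0.002, n)        # wall thickness (in), assumed values
    P = rng.normal(5.0e4, 7.5e3, n)       # applied axial load (lb), assumed values
    return np.column_stack([E1, t, P])

def limit_state(x):
    E1, t, P = x[:, 0], x[:, 1], x[:, 2]
    strength = 0.5 * E1 * t ** 1.5        # stand-in for the fitted response-surface strength
    return strength - P

print(mc_reliability(limit_state, sample_inputs))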
Rodrigues, C; Portugal, F C M; Nogueira, J M F
2012-01-30
Static headspace sorptive extraction using polyurethane foams (HSSE(PU)) followed by gas chromatography coupled to mass spectrometry is proposed for volatile analysis. The application of this novel analytical approach to characterize the volatiles profile from roasted coffee samples, selected as model system, revealed remarkable advantages under convenient experimental conditions. The comparison of HSSE(PU) with other well-established procedures, such as headspace sorptive extraction using polydimethylsiloxane (HSSE(PDMS)) and headspace solid phase microextraction using carboxen/polydimethylsiloxane fibers (HS-SPME(CAR/PDMS)), showed that the former presented much higher capacity, sensitivity and even selectivity, where larger abundance and number of roasted coffee volatile compounds (e.g. furans, pyrazines, ketones, acids and pyrroles) could be achieved, under similar experimental conditions. The data presented herein proved, for the first time, that PU foams present great performance for static headspace sorption-based procedures, showing to be an alternative polymeric phase for volatile analysis. Copyright © 2011 Elsevier B.V. All rights reserved.
High-Pressure Oxygen Test Evaluations
NASA Technical Reports Server (NTRS)
Schwinghamer, R. J.; Key, C. F.
1974-01-01
The relevance of impact sensitivity testing to the development of the space shuttle main engine is discussed in the light of the special requirements for the engine. The background and history of the evolution of liquid and gaseous oxygen testing techniques and philosophy is discussed also. The parameters critical to reliable testing are treated in considerable detail, and test apparatus and procedures are described and discussed. Materials threshold sensitivity determination procedures are considered and a decision logic diagram for sensitivity threshold determination was plotted. Finally, high-pressure materials sensitivity test data are given for selected metallic and nonmetallic materials.
ERIC Educational Resources Information Center
Totura, Christine M. Wienke; Kutash, Krista; Labouliere, Christa D.; Karver, Marc S.
2017-01-01
Background: Suicide is the second leading cause of death for adolescents. Whereas school-based prevention programs are effective, obtaining active consent for youth participation in public health programming concerning sensitive topics is challenging. We explored several active consent procedures for improving participation rates. Methods: Five…
A Comparison of Procedures for Content-Sensitive Item Selection in Computerized Adaptive Tests.
ERIC Educational Resources Information Center
Kingsbury, G. Gage; Zara, Anthony R.
1991-01-01
This simulation investigated two procedures that reduce differences between paper-and-pencil testing and computerized adaptive testing (CAT) by making CAT content sensitive. Results indicate that the price in terms of additional test items of using constrained CAT for content balancing is much smaller than that of using testlets. (SLD)
Sensitivity calculations for iteratively solved problems
NASA Technical Reports Server (NTRS)
Haftka, R. T.
1985-01-01
The calculation of sensitivity derivatives of solutions of iteratively solved systems of algebraic equations is investigated. A modified finite difference procedure is presented which improves the accuracy of the calculated derivatives. The procedure is demonstrated for a simple algebraic example as well as an element-by-element preconditioned conjugate gradient iterative solution technique applied to truss examples.
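The core difficulty is that a finite-difference derivative of an iteratively obtained solution inherits the iteration's convergence error; warm-starting the perturbed solve from the converged unperturbed solution is one simple way to keep that error from dominating the difference. The sketch below illustrates the idea on a small linear system solved by a plain conjugate-gradient loop; it illustrates the general issue rather than reproducing the report's specific procedure, and the toy parameterized system is invented.

import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=500):
    # Plain conjugate-gradient iteration for A x = b, started from x0.
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def system(p, n=50):
    # Toy parameterized system A(p) x = b(p) standing in for a discretized analysis.
    A = (2.0 + p) * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = np.full(n, 1.0 + p)
    return A, b

p, dp = 1.0, 1e-4
A0, b0 = system(p)
x_base = conjugate_gradient(A0, b0, x0=np.zeros(50))
A1, b1 = system(p + dp)
# Warm-start the perturbed solve at the converged baseline so both solutions share
# (approximately) the same iteration error before the finite difference is taken.
x_pert = conjugate_gradient(A1, b1, x0=x_base)
dx_dp = (x_pert - x_base) / dp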
Nonindependence and sensitivity analyses in ecological and evolutionary meta-analyses.
Noble, Daniel W A; Lagisz, Malgorzata; O'dea, Rose E; Nakagawa, Shinichi
2017-05-01
Meta-analysis is an important tool for synthesizing research on a variety of topics in ecology and evolution, including molecular ecology, but can be susceptible to nonindependence. Nonindependence can affect two major interrelated components of a meta-analysis: (i) the calculation of effect size statistics and (ii) the estimation of overall meta-analytic estimates and their uncertainty. While some solutions to nonindependence exist at the statistical analysis stages, there is little advice on what to do when complex analyses are not possible, or when studies with nonindependent experimental designs exist in the data. Here we argue that exploring the effects of procedural decisions in a meta-analysis (e.g. inclusion of different quality data, choice of effect size) and statistical assumptions (e.g. assuming no phylogenetic covariance) using sensitivity analyses are extremely important in assessing the impact of nonindependence. Sensitivity analyses can provide greater confidence in results and highlight important limitations of empirical work (e.g. impact of study design on overall effects). Despite their importance, sensitivity analyses are seldom applied to problems of nonindependence. To encourage better practice for dealing with nonindependence in meta-analytic studies, we present accessible examples demonstrating the impact that ignoring nonindependence can have on meta-analytic estimates. We also provide pragmatic solutions for dealing with nonindependent study designs, and for analysing dependent effect sizes. Additionally, we offer reporting guidelines that will facilitate disclosure of the sources of nonindependence in meta-analyses, leading to greater transparency and more robust conclusions. © 2017 John Wiley & Sons Ltd.
Retinoid quantification by HPLC/MS(n)
NASA Technical Reports Server (NTRS)
McCaffery, Peter; Evans, James; Koul, Omanand; Volpert, Amy; Reid, Kevin; Ullman, M. David
2002-01-01
Retinoic acid (RA) mediates most of the biological effects of vitamin A that are essential for vertebrate survival. It acts through binding to receptors that belong to the nuclear receptor transcription factor superfamily (Mangelsdorf et al. 1994). It is also a highly potent vertebrate teratogen. To determine the function and effects of endogenous and exogenous RA, it is important to have a highly specific, sensitive, accurate, and precise analytical procedure. Current analyses of RA and other retinoids are labor intensive, lack sensitivity, have limited specificity, or require compatibility with RA reporter cell lines (Chen et al. 1995. Biochem. Pharmacol. 50: 1257-1264; Creech Kraft et al. 1994. Biochem. J. 301: 111-119; Lanvers et al. 1996. J. Chromatogr. B Biomed. Appl. 685: 233-240; Maden et al. 1998. Development 125: 4133-4144; Wagner et al. 1992. Development 116: 55-66). This paper describes an HPLC/mass spectrometry/mass spectrometry product ion scan (HPLC/MS(n)) procedure for the analysis of retinoids that employs atmospheric pressure chemical ionization MS. The retinoids are separated by normal-phase column chromatography with a linear hexane-isopropanol-dioxane gradient. Each retinoid is detected by a unique series of MS(n) functions set at optimal collision-induced dissociation energy (30% to 32%) for all MS(n) steps. The scan events are divided into three segments, based on HPLC elution order, to maximize the mass spectrometer duty cycle. The all-trans, 9-cis, and 13-cis RA isomers are separated, if desired, by an isocratic hexane-dioxane-isopropanol mobile phase. The resulting HPLC/MS(n) procedure possesses high sensitivity and specificity for retinoids.
Sensitive, Rapid Detection of Bacterial Spores
NASA Technical Reports Server (NTRS)
Kern, Roger G.; Venkateswaran, Kasthuri; Chen, Fei; Pickett, Molly; Matsuyama, Asahi
2009-01-01
A method of sensitive detection of bacterial spores within delays of no more than a few hours has been developed to provide an alternative to a prior three-day NASA standard culture-based assay. A capability for relatively rapid detection of bacterial spores would be beneficial for many endeavors, a few examples being agriculture, medicine, public health, defense against biowarfare, water supply, sanitation, hygiene, and the food-packaging and medical-equipment industries. The method involves the use of a commercial rapid microbial detection system (RMDS) that utilizes a combination of membrane filtration, adenosine triphosphate (ATP) bioluminescence chemistry, and analysis of luminescence images detected by a charge-coupled-device camera. This RMDS has been demonstrated to be highly sensitive in enumerating microbes (it can detect as few as one colony-forming unit per sample) and has been found to yield data in excellent correlation with those of culture-based methods. What makes the present method necessary is that the specific RMDS and the original protocols for its use are not designed for discriminating between bacterial spores and other microbes. In this method, a heat-shock procedure is added prior to an incubation procedure that is specified in the original RMDS protocols. In this heat-shock procedure (which was also described in a prior NASA Tech Briefs article on enumerating spore-forming bacteria), a sample is exposed to a temperature of 80 C for 15 minutes. Spores can survive the heat shock, but non-spore-forming bacteria and spore-forming bacteria that are not in spore form cannot survive. Therefore, any colonies that grow during incubation after the heat shock are deemed to have originated as spores.
Ruiz-Tovar, Jaime; Muñoz, Jose Luis; Gonzalez, Juan; Garcia, Alejandro; Ferrigni, Carlos; Jimenez, Montiel; Duran, Manuel
2017-12-01
The performance of most bariatric procedures within an Enhanced Recovery After Surgery program has resulted in significant advantages, including a reduction in the length of hospital stay to 2-3 days. However, some postoperative complications may appear after the patient has been discharged. The aim of this study was to investigate the efficacy of various acute-phase parameters determined 24 h after a laparoscopic sleeve gastrectomy for predicting staple line leak in the postoperative course. A prospective study of 208 morbidly obese patients undergoing laparoscopic sleeve gastrectomy as a bariatric procedure between 2012 and 2015 was performed. Blood analysis was performed 24 h after surgery. Acute-phase parameters (C-reactive protein, procalcitonin, fibrinogen, and white blood cell count) were investigated. Staple line leak appeared in eight patients (3.8%). Using receiver operating characteristic analysis at 24 h postoperatively, a cutoff level of CRP at 9 mg/dL achieved 85% sensitivity and 90% specificity for predicting staple line leak, a cutoff level of procalcitonin at 0.85 ng/mL achieved 70% sensitivity and 90% specificity, and a cutoff level of fibrinogen at 600 mg/dL achieved 80% sensitivity and 87.5% specificity. An elevation of CRP > 9 mg/dL, procalcitonin > 0.85 ng/mL, or fibrinogen > 600 mg/dL should alert the surgeon to the possibility of a postoperative staple line leak.
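The sketch below illustrates, with simulated patient data only, how a biomarker cutoff such as CRP > 9 mg/dL is scored for sensitivity and specificity against an observed outcome; it is not the study's analysis.

```python
# A minimal sketch of evaluating a biomarker cutoff (here CRP > 9 mg/dL, as in the
# abstract) against an outcome. The patient data are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(0)
leak = np.zeros(208, dtype=bool)
leak[:8] = True                                  # 8 of 208 patients with a leak
crp = np.where(leak, rng.normal(12, 3, 208), rng.normal(5, 2, 208))  # mg/dL, assumed

cutoff = 9.0
predicted = crp > cutoff
sensitivity = np.mean(predicted[leak])           # true-positive rate among leaks
specificity = np.mean(~predicted[~leak])         # true-negative rate among non-leaks
print(f"sensitivity {sensitivity:.2f}, specificity {specificity:.2f}")
```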
Ducrot, Virginie; Teixeira-Alves, Mickaël; Lopes, Christelle; Delignette-Muller, Marie-Laure; Charles, Sandrine; Lagadic, Laurent
2010-10-01
Long-term effects of endocrine disruptors (EDs) on aquatic invertebrates remain difficult to assess, mainly due to the lack of appropriate sensitive toxicity test methods and relevant data analysis procedures. This study aimed at identifying windows of sensitivity to EDs along the life-cycle of the freshwater snail Lymnaea stagnalis, a candidate species for the development of forthcoming test guidelines. Juveniles, sub-adults, young adults and adults were exposed for 21 days to the fungicide vinclozolin (VZ). Survival, growth, onset of reproduction, fertility and fecundity were monitored weekly. Data were analyzed using standard statistical analysis procedures and mixed-effect models. No deleterious effect on survival and growth occurred in snails exposed to VZ at environmentally relevant concentrations. A significant impairment of the male function occurred in young adults, leading to infertility at concentrations exceeding 0.025 μg/L. Furthermore, fecundity was impaired in adults exposed to concentrations exceeding 25 μg/L. Biological responses depended on VZ concentration, exposure duration and on their interaction, leading to complex response patterns. The use of a standard statistical approach to analyze those data led to underestimation of VZ effects on reproduction, whereas effects could reliably be analyzed by mixed-effect models. L. stagnalis may be among the most sensitive invertebrate species to VZ, a 21-day reproduction test allowing the detection of deleterious effects at environmentally relevant concentrations of the fungicide. These results thus reinforce the relevance of L. stagnalis as a good candidate species for the development of guidelines devoted to the risk assessment of EDs.
Deem, J F; Manning, W H; Knack, J V; Matesich, J S
1989-09-01
A program for the automatic extraction of jitter (PAEJ) was developed for the clinical measurement of pitch perturbations using a microcomputer. The program currently includes 12 implementations of an algorithm for marking the boundary criteria for a fundamental period of vocal fold vibration. The relative sensitivity of these extraction procedures for identifying the pitch period was compared using sine waves. Data obtained to date provide information for each procedure concerning the effects of waveform peakedness and slope, sample duration in cycles, noise level of the analysis system with both direct and tape recorded input, and the influence of interpolation. Zero crossing extraction procedures provided lower jitter values regardless of sine wave frequency or sample duration. The procedures making use of positive- or negative-going zero crossings with interpolation provided the lowest measures of jitter with the sine wave stimuli. Pilot data obtained with normal-speaking adults indicated that jitter measures varied as a function of the speaker, vowel, and sample duration.
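A minimal sketch of one extraction strategy mentioned above, positive-going zero crossings with linear interpolation followed by a simple mean absolute period-perturbation measure, is shown below; it is not the PAEJ program, and the test signal is a synthetic sine.

```python
# A minimal sketch (not the PAEJ program): positive-going zero crossings with linear
# interpolation, followed by a simple mean absolute period-perturbation (jitter) measure.
import numpy as np

fs = 50_000.0                                   # sampling rate, Hz (assumed)
t = np.arange(int(0.1 * fs)) / fs
signal = np.sin(2 * np.pi * 120.0 * t)          # 120 Hz test tone

# Indices where the signal crosses zero going upward.
idx = np.where((signal[:-1] < 0) & (signal[1:] >= 0))[0]

# Linear interpolation refines each crossing time between samples.
frac = -signal[idx] / (signal[idx + 1] - signal[idx])
crossings = (idx + frac) / fs

periods = np.diff(crossings)                    # successive fundamental periods
jitter_abs = np.mean(np.abs(np.diff(periods)))  # mean absolute perturbation, seconds
jitter_pct = 100.0 * jitter_abs / np.mean(periods)
print(f"mean period {np.mean(periods)*1e3:.3f} ms, jitter {jitter_pct:.4f}%")
```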
Improving Small Signal Stability through Operating Point Adjustment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Zhenyu; Zhou, Ning; Tuffner, Francis K.
2010-09-30
ModeMeter techniques for real-time small signal stability monitoring continue to mature, and more and more phasor measurements are available in power systems. The time has come to bring modal information into real-time power system operation. This paper proposes to establish a procedure for Modal Analysis for Grid Operations (MANGO). Complementary to PSSs and other traditional modulation-based controls, MANGO aims to provide suggestions such as increasing generation or decreasing load for operators to mitigate low-frequency oscillations. Different from modulation-based control, the MANGO procedure proactively maintains adequate damping at all times, instead of reacting to disturbances when they occur. The effect of operating points on small signal stability is presented in this paper. Implementation with existing operating procedures is discussed. Several approaches for modal sensitivity estimation are investigated to associate modal damping and operating parameters. The effectiveness of the MANGO procedure is confirmed through simulation studies of several test systems.
Facts and Fallacies of Kidd Antibodies: Experience in a Tertiary Care Hospital in North India.
Makroo, R N; Nayak, Sweta; Chowdhry, Mohit; Karna, Prashant
2017-06-01
We analyzed the methods used in our laboratory to detect one of the most elusive yet clinically significant alloantibodies, the Kidd alloantibodies, and to identify the most convenient procedure. A retrospective analysis of the methods used in our laboratory for determining Kidd alloantibodies from January 2013 to May 2015 was conducted. The details of the event that sensitized the patient for red cell antibody formation and the procedure used to detect the alloantibody were retrieved from the departmental records. Of 405 red cell antibody identification cases, 24 (5.9%) had Kidd antibody (anti-Jka in 12: 50% of cases; anti-Jkb in 4: 16.7% of cases; multiple antibodies in 8: 32% of cases). Thirteen of 24 patients (54.2%) had a positive autocontrol, of which 6 cases needed adsorption procedures, whereas the antibody or antibodies could be identified without an adsorption procedure in the remaining 7 cases. All 7 of these cases had an autocontrol of 1+ strength. Of the 11 patients (45.8%) with a negative autocontrol, the antibody was identified using solid phase in 7 cases, whereas tube panels were also used in the remaining 4 cases. Kidd alloantibodies, though deceptive, can be identified by sensitive techniques such as the solid phase and by simple but laborious techniques using tube cell panels. Depending upon the reaction strength of the autocontrol, the routine autoadsorption process may be skipped and tube cell enzyme-treated cells or solid phase techniques used to obtain the results.
Self-consistent adjoint analysis for topology optimization of electromagnetic waves
NASA Astrophysics Data System (ADS)
Deng, Yongbo; Korvink, Jan G.
2018-05-01
In topology optimization of electromagnetic waves, the Gâteaux differentiability of the conjugate operator applied to the complex field variable complicates the adjoint sensitivity and turns the originally real-valued design variable complex during the iterative solution procedure. The adjoint sensitivity is therefore self-inconsistent. To enforce self-consistency, the real part operator has been used to extract the real part of the sensitivity and preserve the real-valued design variable. However, this enforced self-consistency can make the derived structural topology depend unreasonably on the phase of the incident wave. To solve this problem, this article focuses on a self-consistent adjoint analysis of topology optimization problems for electromagnetic waves. The analysis is implemented by splitting the complex variables of the wave equations into their real and imaginary parts and substituting the split variables into the wave equations to derive coupled equations equivalent to the original wave equations, where the infinite free space is truncated by perfectly matched layers. The topology optimization problems for electromagnetic waves are then posed on real functional spaces instead of complex functional spaces; the adjoint analysis is carried out on real functional spaces, removing the variation of the conjugate operator; the self-consistent adjoint sensitivity is derived; and the phase-dependence problem of the derived structural topology is avoided. Several numerical examples demonstrate the robustness of the derived self-consistent adjoint analysis.
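The basic manipulation the article builds on, rewriting a complex linear system as an equivalent real-valued block system so that all subsequent adjoint algebra stays on real spaces, can be sketched as follows; the small random system is purely illustrative and is not the authors' finite element formulation.

```python
# A minimal sketch of splitting a complex linear system A z = b into an equivalent
# real-valued block system for (Re z, Im z). The random system is illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Split A z = b into coupled real equations:
# [ Re A  -Im A ] [Re z]   [Re b]
# [ Im A   Re A ] [Im z] = [Im b]
A_real = np.block([[A.real, -A.imag],
                   [A.imag,  A.real]])
b_real = np.concatenate([b.real, b.imag])

x = np.linalg.solve(A_real, b_real)
z_split = x[:n] + 1j * x[n:]

z_direct = np.linalg.solve(A, b)                 # reference complex solve
print("max difference:", np.max(np.abs(z_split - z_direct)))
```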
Determining the best treatment for simple bone cyst: a decision analysis.
Lee, Seung Yeol; Chung, Chin Youb; Lee, Kyoung Min; Sung, Ki Hyuk; Won, Sung Hun; Choi, In Ho; Cho, Tae-Joon; Yoo, Won Joon; Yeo, Ji Hyun; Park, Moon Seok
2014-03-01
The treatment of simple bone cysts (SBC) in children varies significantly among physicians. This study examined which procedure is better for the treatment of SBC, using a decision analysis based on current published evidence. A decision tree focused on five treatment modalities of SBC (observation, steroid injection, autologous bone marrow injection, decompression, and curettage with bone graft) was created. Each treatment modality was further branched according to the presence and severity of complications. The probabilities of all cases were obtained by literature review. A rollback tool was utilized to determine the most preferred treatment modality. One-way sensitivity analysis was performed to determine the threshold values of the treatment modalities. Two-way sensitivity analysis was utilized to examine the joint impact of changes in the probabilities of two parameters. The decision model favored autologous bone marrow injection. The expected value of autologous bone marrow injection was 0.9445, while those of observation, steroid injection, decompression, and curettage with bone graft were 0.9318, 0.9400, 0.9395, and 0.9342, respectively. One-way sensitivity analysis showed that autologous bone marrow injection had a higher expected value than decompression when the rate of pathologic fracture, or of positive symptoms of SBC after autologous bone marrow injection, was lower than 20.4%. In our study, autologous bone marrow injection was found to be the best treatment choice for SBC. However, the results were sensitive to the rate of pathologic fracture after treatment of SBC. Physicians should consider the possibility of pathologic fracture when they determine a treatment method for SBC.
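A hedged sketch of the rollback and one-way sensitivity logic is given below; the utilities and complication probabilities are placeholders, not the study's published inputs, and only the structure of the calculation is illustrated.

```python
# A minimal decision-tree rollback sketch. Utilities and complication probabilities are
# assumed placeholders; only the expected-value rollback and one-way scan are illustrated.
import numpy as np

def expected_value(p_complication, u_no_complication, u_complication):
    """Roll back a two-branch chance node into a single expected utility."""
    return (1 - p_complication) * u_no_complication + p_complication * u_complication

options = {
    "observation":            expected_value(0.30, 1.00, 0.80),
    "steroid injection":      expected_value(0.25, 1.00, 0.80),
    "bone marrow injection":  expected_value(0.20, 1.00, 0.80),
    "decompression":          expected_value(0.22, 1.00, 0.80),
    "curettage + bone graft": expected_value(0.28, 1.00, 0.80),
}
best = max(options, key=options.get)
print("preferred option:", best, options)

# One-way sensitivity analysis: vary one probability and find where the preference flips.
for p in np.linspace(0.0, 0.5, 51):
    ev_marrow = expected_value(p, 1.00, 0.80)
    if ev_marrow < options["decompression"]:
        print(f"preference switches to decompression near p = {p:.2f}")
        break
```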
Marzulli, F; Maguire, H C
1982-02-01
Several guinea-pig predictive test methods were evaluated by comparison of results with those obtained with human predictive tests, using ten compounds that have been used in cosmetics. The method involves the statistical analysis of the frequency with which guinea-pig tests agree with the findings of tests in humans. In addition, the frequencies of false positive and false negative predictive findings are considered and statistically analysed. The results clearly demonstrate the superiority of adjuvant tests (complete Freund's adjuvant) in determining skin sensitizers and the overall superiority of the guinea-pig maximization test in providing results similar to those obtained by human testing. A procedure is suggested for utilizing adjuvant and non-adjuvant test methods for characterizing compounds as of weak, moderate or strong sensitizing potential.
Di Girolamo, Francesco; Masotti, Andrea; Salvatori, Guglielmo; Scapaticci, Margherita; Muraca, Maurizio; Putignani, Lorenza
2014-01-01
She-donkey's milk (DM) and goat's milk (GM) are commonly used in newborn and infant feeding because they are less allergenic than other milk types. It is, therefore, mandatory to avoid adulteration and contamination by other milk allergens by developing fast and efficient analytical methods to assess the authenticity of these precious nutrients. In this experimental work, a sensitive and robust matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) profiling method was designed to assess the genuineness of DM and GM. This workflow allows the identification of DM and GM adulteration at levels of 0.5%, thus representing a sensitive tool for milk adulteration analysis compared with other laborious and time-consuming analytical procedures. PMID:25110863
Guide to the economic analysis of community energy systems
NASA Astrophysics Data System (ADS)
Pferdehirt, W. P.; Croke, K. G.; Hurter, A. P.; Kennedy, A. S.; Lee, C.
1981-08-01
This guidebook provides a framework for the economic analysis of community energy systems. The analysis facilitates a comparison of competing configurations in community energy systems, as well as a comparison with conventional energy systems. Various components of costs and revenues to be considered are discussed in detail. Computational procedures and accompanying worksheets are provided for calculating the net present value, straight and discounted payback periods, the rate of return, and the savings to investment ratio for the proposed energy system alternatives. These computations are based on a projection of the system's costs and revenues over its economic lifetimes. The guidebook also discusses the sensitivity of the results of this economic analysis to changes in various parameters and assumptions.
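The indicators described in the guidebook can be sketched as follows; the cash flows, discount rate, and lifetime are illustrative assumptions only.

```python
# A minimal sketch of the economic indicators described (net present value, discounted
# payback period, savings-to-investment ratio). All inputs below are assumed.
import numpy as np

discount_rate = 0.07
investment = 1_000_000.0                         # initial capital cost, $
annual_net_savings = 140_000.0                   # yearly revenues minus O&M costs, $
lifetime = 20                                    # economic lifetime, years

years = np.arange(1, lifetime + 1)
discounted = annual_net_savings / (1 + discount_rate) ** years

npv = discounted.sum() - investment
sir = discounted.sum() / investment              # savings-to-investment ratio

cumulative = np.cumsum(discounted)
payback_years = years[cumulative >= investment]
payback = payback_years[0] if payback_years.size else None

print(f"NPV = ${npv:,.0f}, SIR = {sir:.2f}, discounted payback = {payback} years")
```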
Novel risk score of contrast-induced nephropathy after percutaneous coronary intervention.
Ji, Ling; Su, XiaoFeng; Qin, Wei; Mi, XuHua; Liu, Fei; Tang, XiaoHong; Li, Zi; Yang, LiChuan
2015-08-01
Contrast-induced nephropathy (CIN) post-percutaneous coronary intervention (PCI) is a major cause of acute kidney injury. In this study, we established a comprehensive risk score model to assess the risk of CIN after the PCI procedure that can easily be used in a clinical environment. A total of 805 PCI patients, divided into an analysis cohort (70%) and a validation cohort (30%), were enrolled retrospectively in this study. Risk factors for CIN were identified using univariate analysis and multivariate logistic regression in the analysis cohort. A risk score model was developed based on the multiple regression coefficients. The sensitivity and specificity of the new risk score system were validated in the validation cohort. Comparisons between the new risk score model and previously reported models were applied. The incidence of post-PCI CIN in the analysis cohort (n = 565) was 12%. A considerably higher CIN incidence (50%) was observed in patients with chronic kidney disease (CKD). Age >75, body mass index (BMI) >25, myoglobin level, cardiac function level, hypoalbuminaemia, history of CKD, intra-aortic balloon pump (IABP) use, and peripheral vascular disease (PVD) were identified as independent risk factors of post-PCI CIN. A novel risk score model was established using the multivariate regression coefficients, which showed the highest sensitivity and specificity (0.917, 95%CI 0.877-0.957) compared with previous models. A new post-PCI CIN risk score model was developed based on a retrospective study of 805 patients. Application of this model might be helpful to predict CIN in patients undergoing the PCI procedure. © 2015 Asian Pacific Society of Nephrology.
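A minimal sketch of turning multivariate logistic regression coefficients into an additive risk score is given below; the coefficient values, point scaling, and intercept are assumptions for illustration and are not the published model.

```python
# A minimal sketch of building an additive risk score from logistic regression
# coefficients. The coefficients, intercept, and point scaling are assumed.
import numpy as np

# (risk factor, example regression coefficient beta) -- illustrative values only
coefficients = {
    "age > 75":            0.9,
    "BMI > 25":            0.5,
    "elevated myoglobin":  0.7,
    "reduced cardiac function": 0.8,
    "hypoalbuminaemia":    0.6,
    "history of CKD":      1.4,
    "IABP use":            1.1,
    "peripheral vascular disease": 0.6,
}
intercept = -3.5

# Common scoring trick: points proportional to beta, rounded to integers.
points = {k: int(round(v / 0.5)) for k, v in coefficients.items()}
print("score points:", points)

def predicted_risk(present):
    """Probability of CIN for a patient with the given set of risk factors."""
    logit = intercept + sum(coefficients[f] for f in present)
    return 1.0 / (1.0 + np.exp(-logit))

patient = ["age > 75", "history of CKD", "IABP use"]
print(f"total points: {sum(points[f] for f in patient)}, "
      f"predicted risk: {predicted_risk(patient):.2f}")
```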
Murphy, J R; Wasserman, S S; Baqar, S; Schlesinger, L; Ferreccio, C; Lindberg, A A; Levine, M M
1989-01-01
Experiments were performed in Baltimore, Maryland and in Santiago, Chile, to determine the level of Salmonella typhi antigen-driven in vitro lymphocyte replication response which signifies specific acquired immunity to this bacterium and to determine the best method of data analysis and form of data presentation. Lymphocyte replication was measured as incorporation of 3H-thymidine into desoxyribonucleic acid. Data (ct/min/culture) were analyzed in raw form and following log transformation, by non-parametric and parametric statistical procedures. A preference was developed for log-transformed data and discriminant analysis. Discriminant analysis of log-transformed data revealed that 3H-thymidine incorporation rates greater than 3,433 for particulate S. typhi Ty2 antigen-stimulated cultures signified acquired immunity with a sensitivity and specificity of 82.7%; for soluble S. typhi O polysaccharide antigen-stimulated cultures, ct/min/culture values of greater than 1,237 signified immunity (sensitivity and specificity 70.5%). PMID:2702777
Guo, C; Hu, J-Y; Chen, X-Y; Li, J-Z
2008-02-01
An analytical method for the determination of imazaquin residues in soybeans was developed. The liquid/liquid partition and strong anion exchange solid-phase extraction procedures provide effective cleanup, removing most sample matrix interferences. By optimizing the pH of the water/acetonitrile mobile phase with phosphoric acid, using a C-18 reverse-phase chromatographic column, and employing ultraviolet detection, excellent peak resolution was achieved. The combined cleanup and chromatographic steps reported herein were sensitive and reliable for determining imazaquin residues in soybean samples. The method is characterized by recovery >88.4%, precision <6.7% CV, and a sensitivity of 0.005 ppm, in agreement with directives for method validation in residue analysis. Imazaquin residues in soybeans were further confirmed by high performance liquid chromatography-mass spectrometry (LC-MS). The proposed method was successfully applied to the analysis of imazaquin residues in soybean samples grown in an experimental field after treatments with an imazaquin formulation.
Adherence to infection control guidelines in surgery on MRSA positive patients : A cost analysis.
Saegeman, V; Schuermans, A
2016-09-01
In surgical units, as in other healthcare departments, guidelines are used to curb transmission of methicillin resistant Staphylococcus aureus (MRSA). The aim of this study was to calculate the extra costs in materials and extra working hours required for compliance with MRSA infection control guidelines in the operating rooms of a University Hospital. The study was based on observations of surgeries on MRSA positive patients. The average cost per surgery was calculated using local information on unit costs. Robustness of the calculations was evaluated with a sensitivity analysis. The total extra costs of adherence to MRSA infection control guidelines averaged €340.46 per surgical procedure (range €207.76-€473.15). A sensitivity analysis based on a standardized operating room hourly rate reached a cost of €366.22. The extra costs of adherence to infection control guidelines are considerable. To reduce costs, the logistical planning of surgeries could be improved, for instance by using a dedicated operating room.
Metal speciation of environmental samples using SPE and SFC-AED analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, S.C.; Burford, M.D.; Robson, M.
1995-12-31
Due to growing public concern over heavy metals in the environment, soil, water and air particulate samples are now routinely screened for their metal content. Conventional metal analysis typically involves acid digestion extraction and results in the generation of large aqueous and organic solvent waste streams. This harsh extraction process is usually used to obtain the total metal content of the sample, the extract being analysed by atomic emission or absorption spectroscopy techniques. A more selective method of metal extraction has been investigated which uses a supercritical fluid modified with a complexing agent. The relatively mild extraction method enables both organometallic and inorganic metal species to be recovered intact. The various components from the supercritical fluid extract can be chromatographically separated using supercritical fluid chromatography (SFC) and positive identification of the metals achieved using atomic emission detection (AED). The aim of the study is to develop an analytical extraction procedure which enables a rapid, sensitive and quantitative analysis of metals in environmental samples, using just one extraction (e.g., SFE) and one analysis (e.g., SFC-AED) procedure.
NASA Technical Reports Server (NTRS)
Woeller, F. H.; Kojiro, D. R.; Carle, G. C.
1984-01-01
The present investigation is concerned with a miniature metastable ionization detector featuring an unconventional electrode configuration, whose performance characteristics parallel those of traditional design. The ionization detector is to be incorporated in a flight gas chromatograph (GC) for use in the Space Shuttle. The design of the detector is discussed, taking into account studies which verified the sensitivity of the detector. The triaxial design of the detector is compared with a flat-plate style. The obtained results show that the principal goal of developing a miniature, highly sensitive ionization detector for flight applications was achieved. Improved fabrication techniques will utilize glass-to-metal seals and brazing procedures.
NASA Astrophysics Data System (ADS)
Yahya, W. N. W.; Zaini, S. S.; Ismail, M. A.; Majid, T. A.; Deraman, S. N. C.; Abdullah, J.
2018-04-01
Damage due to wind-related disasters is increasing with global climate change. Many studies have examined the wind effects surrounding low-rise buildings using wind tunnel tests or numerical simulations. Numerical simulation is relatively cheap but requires very good command in handling the software, acquiring the correct input parameters, and obtaining the optimum grid or mesh. Before a study can be conducted, a grid sensitivity test must be performed to determine a suitable cell count for the final mesh, ensuring accurate results with less computing time. This study demonstrates the numerical procedures for conducting a grid sensitivity analysis using five models with different grid schemes. The pressure coefficients (CP) were observed along the wall and roof profile and compared between the models. The results showed that the medium grid scheme can be used and produces results of accuracy comparable to the finer grid schemes, as the difference in the CP values was found to be insignificant.
Tafiadis, Dionysios; Chronopoulos, Spyridon K; Kosma, Evangelia I; Voniati, Louiza; Raptis, Vasilis; Siafaka, Vasiliki; Ziavra, Nausica
2017-07-11
Voice performance is an inextricable factor in everyday life. Deterioration of voice quality can cause various problems in human communication and can therefore reduce the performance of voice-related social skills. The deterioration can originate from changes within the vocal tract and larynx. Various prognostic methods exist, among them the Voice Handicap Index (VHI). This tool consists of self-reported questionnaires, used here for determining the cutoff points of the total score and of its three domains in young male Greek smokers. The interpretation of the calculated cutoff points can serve as a strong indicator for imminent or future evaluation by a clinician. The VHI can also act as feedback on smokers' voice condition and as a monitoring procedure toward smoking cessation. The sample consisted of 130 male nondysphonic smokers (aged 18-33 years) who all participated in the VHI test procedure. The test results (through receiver operating characteristic analysis) yielded a total-score cutoff point of 19.50 (sensitivity: 0.838, 1-specificity: 0). In terms of constructs, the cutoff for the Functional domain was 7.50 (sensitivity: 0.676, 1-specificity: 0.032), for the Physical domain 7.50 (sensitivity: 0.706, 1-specificity: 0.032), and for the Emotional domain 6.50 (sensitivity: 0.809, 1-specificity: 0.048). Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan; Bittker, David A.
1994-01-01
LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part II of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part II describes the code, how to modify it, and its usage, including preparation of the problem data file required to execute LSENS. Code usage is illustrated by several example problems, which further explain preparation of the problem data file and show how to obtain desired accuracy in the computed results. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions. Part I (NASA RP-1328) derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved by LSENS. Part III (NASA RP-1330) explains the kinetics and kinetics-plus-sensitivity-analysis problems supplied with LSENS and presents sample results.
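For illustration only, the following sketch approximates the kind of quantity LSENS reports for static problems, the sensitivity of a species concentration with respect to a rate coefficient, using finite differences around a toy first-order reaction; LSENS itself solves rigorous sensitivity equations, and the mechanism and rate constant here are assumptions.

```python
# A minimal sketch, not LSENS: finite-difference sensitivity of the concentration of A
# with respect to the rate coefficient k for the toy first-order reaction A -> B.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, k):
    a, b = y
    return [-k * a, k * a]                       # dA/dt, dB/dt for A -> B

def a_at_time(k, t_end=2.0, y0=(1.0, 0.0)):
    sol = solve_ivp(rhs, (0.0, t_end), y0, args=(k,), rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]                          # concentration of A at t_end

k0, dk = 0.8, 1e-6
sens_fd = (a_at_time(k0 + dk) - a_at_time(k0 - dk)) / (2 * dk)
sens_exact = -2.0 * np.exp(-k0 * 2.0)            # d/dk of exp(-k t) at t = 2
print(f"dA/dk finite difference: {sens_fd:.6f}, exact: {sens_exact:.6f}")
```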
NASA Astrophysics Data System (ADS)
Weng, Hanli; Li, Youping
2017-04-01
The working principle, process device and test procedure of runner static balancing test method by weighting with three-pivot pressure transducers are introduced in this paper. Based on an actual instance of a V hydraulic turbine runner, the error and sensitivity of the three-pivot pressure transducer static balancing method are analysed. Suggestions about improving the accuracy and the application of the method are also proposed.
Yost, F.; Hosking, F.M.; Jellison, J.L.; Short, B.; Giversen, T.; Reed, J.R.
1998-10-27
A new test method was developed to quantify capillary flow solderability on a printed wiring board surface finish. The test is based on solder flow from a pad onto narrow strips or lines. A test procedure and video image analysis technique were developed for conducting the test and evaluating the data. Feasibility tests revealed that the wetted distance was sensitive to the ratio of pad radius to line width (l/r), solder volume, and flux predry time.
Houdi, A A; Crooks, P A; Van Loon, G R; Schubert, C A
1987-05-01
The determination of picomolar levels of histamine and its major metabolite, N tau-methylhistamine, in biological fluids was achieved using reversed-phase liquid chromatography coupled with electrochemical detection. A simple sample purification procedure for blood and urine samples was carried out prior to analysis using an Amberlite CG-50 cation-exchange resin, which afforded an excellent recovery of both compounds.
Elly E. Holcombe; Duane G. Moore; Richard L. Fredriksen
1986-01-01
A modification of the macro-Kjeldahl method that provides increased sensitivity was developed for determining very low levels of nitrogen in forest streams and in rainwater. The method is suitable as a routine laboratory procedure. The analytical range of the method is 0.02 to 1.5 mg/L with high recovery and excellent precision and accuracy. The range can be increased to...
Portella, Claudio Elidio; Silva, Julio Guilherme; Bastos, Victor Hugo; Machado, Dionis; Cunha, Marlo; Cagy, Maurício; Basile, Luis; Piedade, Roberto; Ribeiro, Pedro
2006-06-01
The objective of the present study was to evaluate attentional, motor, and electroencephalographic (EEG) parameters during a procedural task after subjects ingested 6 mg of bromazepam. The sample consisted of 26 healthy subjects, male or female, between 19 and 36 years of age. The control (placebo) and experimental (bromazepam 6 mg) groups were submitted to a typewriting task in a randomized, double-blind design. The findings did not show significant differences in attentional and motor measures between groups. Coherence measures (qEEG) were evaluated between scalp regions in the theta, alpha, and beta bands. A first analysis revealed a main effect for condition (two-way ANOVA: condition versus blocks). A second two-way ANOVA (condition versus scalp regions) showed a main effect for both factors. The coherence measure was not a sensitive tool for demonstrating differences between cortical areas as a function of procedural learning.
A rapid liquid chromatography determination of free formaldehyde in cod.
Storey, Joseph M; Andersen, Wendy C; Heise, Andrea; Turnipseed, Sherri B; Lohne, Jack; Thomas, Terri; Madson, Mark
2015-01-01
A rapid method for the determination of free formaldehyde in cod is described. It uses a simple water extraction of formaldehyde which is then derivatised with 2,4-dinitrophenylhydrazine (DNPH) to form a sensitive and specific chromophore for high-performance liquid chromatography (HPLC) detection. Although this formaldehyde derivative has been widely used in past tissue analysis, this paper describes an improved derivatisation procedure. The formation of the DNPH formaldehyde derivative has been shortened to 2 min and a stabilising buffer has been added to the derivative to increase its stability. The average recovery of free formaldehyde in spiked cod was 63% with an RSD of 15% over the range of 25-200 mg kg(-1) (n = 48). The HPLC procedure described here was also compared to a commercial qualitative procedure - a swab test for the determination of free formaldehyde in fish. Several positive samples were compared by both methods.
Cusumano, Davide; Fumagalli, Maria L; Marchetti, Marcello; Fariselli, Laura; De Martin, Elena
2015-01-01
Aim of this study is to examine the feasibility of using the new Gafchromic EBT3 film in a high-dose stereotactic radiosurgery and radiotherapy quality assurance procedure. Owing to the reduced dimensions of the involved lesions, the feasibility of scanning plan verification films on the scanner plate area with the best uniformity rather than using a correction mask was evaluated. For this purpose, signal values dispersion and reproducibility of film scans were investigated. Uniformity was then quantified in the selected area and was found to be within 1.5% for doses up to 8 Gy. A high-dose threshold level for analyses using this procedure was established evaluating the sensitivity of the irradiated films. Sensitivity was found to be of the order of centiGray for doses up to 6.2 Gy and decreasing for higher doses. The obtained results were used to implement a procedure comparing dose distributions delivered with a CyberKnife system to planned ones. The procedure was validated through single beam irradiation on a Gafchromic film. The agreement between dose distributions was then evaluated for 13 patients (brain lesions, 5 Gy/die prescription isodose ~80%) using gamma analysis. Results obtained using Gamma test criteria of 5%/1 mm show a pass rate of 94.3%. Gamma frequency parameters calculation for EBT3 films showed to strongly depend on subtraction of unexposed film pixel values from irradiated ones. In the framework of the described dosimetric procedure, EBT3 films proved to be effective in the verification of high doses delivered to lesions with complex shapes and adjacent to organs at risk. Copyright © 2015 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.
Schmid, Karen Barros; Scherer, Luciene; Barcellos, Regina Bones; Kuhleis, Daniele; Prestes, Isaías Valente; Steffen, Ricardo Ewbank; Dalla Costa, Elis Regina; Rossetti, Maria Lucia Rosa
2014-12-16
Prison conditions can favor the spread of tuberculosis (TB). This study aimed to evaluate, in a Brazilian prison: the performance and accuracy of smear, culture and Detect-TB; the performance of smear plus culture and smear plus Detect-TB according to different TB prevalence rates; and the cost-effectiveness of these procedures for pulmonary tuberculosis (PTB) diagnosis. This paper describes a cost-effectiveness study. A decision analytic model was developed to estimate the costs and cost-effectiveness of five routine diagnostic procedures for PTB using sputum specimens: a) smear alone, b) culture alone, c) Detect-TB alone, d) smear plus culture, and e) smear plus Detect-TB. Cost-effectiveness ratios were evaluated per correctly diagnosed TB case, and all procedure costs were based on those adopted by the Brazilian Public Health System. A total of 294 spontaneous sputum specimens from patients suspected of having TB were analyzed. The sensitivity and specificity were calculated to be 47% and 100% for smear; 93% and 100% for culture; 74% and 95% for Detect-TB; 96% and 100% for smear plus culture; and 86% and 95% for smear plus Detect-TB. The negative and positive predictive values for smear plus Detect-TB, according to different TB prevalence rates, ranged from 83 to 99% and 48 to 96%, respectively. In the cost-effectiveness analysis, smear was both less costly and less effective than the other strategies. Culture and smear plus culture were more effective but more costly than the other strategies. Smear plus Detect-TB was the most cost-effective method. Detect-TB proved to be sensitive and effective for PTB diagnosis when applied with smear microscopy. Diagnostic methods should be improved to increase TB case detection. To support rational decisions about the implementation of such techniques, cost-effectiveness studies are essential, including in prisons, which are known for problems in health care assessment.
Wei, Binnian; McGuffey, James E; Blount, Benjamin C; Wang, Lanqing
2016-01-01
Maternal exposure to marijuana during the lactation period, whether active or passive, has prompted concerns about transmission of cannabinoids to breastfed infants and possible subsequent adverse health consequences. Assessing these health risks requires a sensitive analytical approach that can quantitatively measure trace-level cannabinoids in breast milk. Here, we describe a saponification-solid phase extraction approach combined with ultra-high-pressure liquid chromatography-tandem mass spectrometry for simultaneously quantifying Δ9-tetrahydrocannabinol (THC), cannabidiol (CBD), and cannabinol (CBN) in breast milk. We demonstrate for the first time that constraints on sensitivity can be overcome by utilizing alkaline saponification of the milk samples. After extensively optimizing the saponification procedure, the validated method exhibited limits of detection of 13, 4, and 66 pg/mL for THC, CBN, and CBD, respectively. Notably, the sensitivity achieved was significantly improved; for instance, the limit of detection for THC is at least 100-fold lower than that previously reported in the literature. This is essential for monitoring cannabinoids in breast milk resulting from passive or nonrecent active maternal exposure. Furthermore, we simultaneously acquired multiple reaction monitoring transitions for 12C- and 13C-analyte isotopes. This combined analysis largely facilitated data acquisition by reducing the repeat-analysis rate for samples exceeding the linear limits of the 12C-analytes. In addition to high sensitivity and a broad quantitation range, this method delivers excellent accuracy (relative error within ±10%), precision (relative standard deviation <10%), and efficient analysis. In future studies, we expect this method to play a critical role in assessing infant exposure to cannabinoids through breastfeeding.
2011-01-01
Background: Dementia and cognitive impairment associated with aging are a major medical and social concern. Neuropsychological testing is a key element in the diagnostic procedures for Mild Cognitive Impairment (MCI), but at present has limited value in the prediction of progression to dementia. We advance the hypothesis that newer statistical classification methods derived from data mining and machine learning, such as Neural Networks, Support Vector Machines and Random Forests, can improve the accuracy, sensitivity and specificity of predictions obtained from neuropsychological testing. Seven nonparametric classifiers derived from data mining methods (Multilayer Perceptron Neural Networks, Radial Basis Function Neural Networks, Support Vector Machines, CART, CHAID and QUEST Classification Trees, and Random Forests) were compared to three traditional classifiers (Linear Discriminant Analysis, Quadratic Discriminant Analysis and Logistic Regression) in terms of overall classification accuracy, specificity, sensitivity, area under the ROC curve, and Press' Q. Model predictors were 10 neuropsychological tests currently used in the diagnosis of dementia. Statistical distributions of classification parameters obtained from a 5-fold cross-validation were compared using Friedman's nonparametric test. Results: Press' Q test showed that all classifiers performed better than chance alone (p < 0.05). Support Vector Machines showed the largest overall classification accuracy (Median (Me) = 0.76) and an area under the ROC curve of Me = 0.90. However, this method showed high specificity (Me = 1.0) but low sensitivity (Me = 0.3). Random Forests ranked second in overall accuracy (Me = 0.73), with a high area under the ROC curve (Me = 0.73), specificity (Me = 0.73) and sensitivity (Me = 0.64). Linear Discriminant Analysis also showed acceptable overall accuracy (Me = 0.66), with an acceptable area under the ROC curve (Me = 0.72), specificity (Me = 0.66) and sensitivity (Me = 0.64). The remaining classifiers showed overall classification accuracy above a median value of 0.63, but for most, sensitivity was around or even lower than a median value of 0.5. Conclusions: When taking into account sensitivity, specificity and overall classification accuracy, Random Forests and Linear Discriminant Analysis rank first among all the classifiers tested in the prediction of dementia using several neuropsychological tests. These methods may be used to improve the accuracy, sensitivity and specificity of dementia predictions from neuropsychological testing. PMID:21849043
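The comparison design described above (5-fold cross-validation of several classifiers on 10 predictors) can be sketched as follows; synthetic data stand in for the neuropsychological test scores, so the results will not reproduce the reported medians.

```python
# A minimal sketch of 5-fold cross-validation of a few of the named classifiers.
# Synthetic data replace the neuropsychological test scores used in the study.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=10, n_informative=6,
                           random_state=0)      # 10 "test scores", binary outcome

classifiers = {
    "LDA": LinearDiscriminantAnalysis(),
    "SVM": SVC(kernel="rbf"),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, clf in classifiers.items():
    acc = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
    sens = cross_val_score(clf, X, y, cv=cv, scoring="recall")      # sensitivity
    print(f"{name}: accuracy {acc.mean():.2f}, sensitivity {sens.mean():.2f}")
```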
Malaei, Reyhane; Ramezani, Amir M; Absalan, Ghodratollah
2018-05-04
A sensitive and reliable ultrasound-assisted dispersive liquid-liquid microextraction (UA-DLLME) procedure was developed and validated for the extraction and analysis of malondialdehyde (MDA), an important lipid-peroxidation biomarker, in human plasma. To achieve an applicable extraction procedure, the whole optimization process was performed in human plasma. To convert MDA into a readily extractable species, it was derivatized to a hydrazone-based structure with 2,4-dinitrophenylhydrazine (DNPH) at 40 °C within 60 min. The influences of experimental variables on the extraction process, including the type and volume of extraction and disperser solvents, the amount of derivatization agent, temperature, pH, ionic strength, and sonication and centrifugation times, were evaluated. Under the optimal experimental conditions, the enhancement factor and extraction recovery were 79.8 and 95.8%, respectively. The analytical signal responded linearly (R² = 0.9988) over a concentration range of 5.00-4000 ng mL⁻¹ with a limit of detection of 0.75 ng mL⁻¹ (S/N = 3) in the plasma sample. To validate the developed procedure, the recommended Food and Drug Administration guidelines for bioanalytical analysis were employed. Copyright © 2018. Published by Elsevier B.V.
Traeger, Adrian C; Skinner, Ian W; Hübscher, Markus; Lee, Hopin; Moseley, G Lorimer; Nicholas, Michael K; Henschke, Nicholas; Refshauge, Kathryn M; Blyth, Fiona M; Main, Chris J; Hush, Julia M; Pearce, Garry; Lo, Serigne; McAuley, James H
Statistical analysis plans increase the transparency of decisions made in the analysis of clinical trial results. The purpose of this paper is to detail the planned analyses for the PREVENT trial, a randomized, placebo-controlled trial of patient education for acute low back pain. We report the pre-specified principles, methods, and procedures to be adhered to in the main analysis of the PREVENT trial data. The primary outcome analysis will be based on Mixed Models for Repeated Measures (MMRM), which can test treatment effects at specific time points, and the assumptions of this analysis are outlined. We also outline the treatment of secondary outcomes and planned sensitivity analyses. We provide decisions regarding the treatment of missing data, handling of descriptive and process measure data, and blinded review procedures. Making public the pre-specified statistical analysis plan for the PREVENT trial minimizes the potential for bias in the analysis of trial data, and in the interpretation and reporting of trial results. ACTRN12612001180808 (https://www.anzctr.org.au/Trial/Registration/TrialReview.aspx?ACTRN=12612001180808). Copyright © 2017 Associação Brasileira de Pesquisa e Pós-Graduação em Fisioterapia. Publicado por Elsevier Editora Ltda. All rights reserved.
Samad, Noor Asma Fazli Abdul; Sin, Gürkan; Gernaey, Krist V; Gani, Rafiqul
2013-11-01
This paper presents the application of uncertainty and sensitivity analysis as part of a systematic model-based process monitoring and control (PAT) system design framework for crystallization processes. For the uncertainty analysis, the Monte Carlo procedure is used to propagate input uncertainty, while for sensitivity analysis, global methods including the standardized regression coefficients (SRC) and Morris screening are used to identify the most significant parameters. The potassium dihydrogen phosphate (KDP) crystallization process is used as a case study, both in open-loop and closed-loop operation. In the uncertainty analysis, the impact on the predicted output of uncertain parameters related to the nucleation and the crystal growth model has been investigated for both a one- and two-dimensional crystal size distribution (CSD). The open-loop results show that the input uncertainties lead to significant uncertainties on the CSD, with appearance of a secondary peak due to secondary nucleation for both cases. The sensitivity analysis indicated that the most important parameters affecting the CSDs are nucleation order and growth order constants. In the proposed PAT system design (closed-loop), the target CSD variability was successfully reduced compared to the open-loop case, also when considering uncertainty in nucleation and crystal growth model parameters. The latter forms a strong indication of the robustness of the proposed PAT system design in achieving the target CSD and encourages its transfer to full-scale implementation. Copyright © 2013 Elsevier B.V. All rights reserved.
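A minimal sketch of the two steps described, Monte Carlo propagation of parameter uncertainty and standardized regression coefficients (SRC) as a global sensitivity measure, is given below; the toy crystal-size surrogate and parameter distributions are assumptions, not the population-balance model used in the study.

```python
# A minimal sketch: Monte Carlo propagation of parameter uncertainty through a toy
# surrogate model, then SRC sensitivity via regression on standardized variables.
import numpy as np

rng = np.random.default_rng(42)
n = 2000

# Uncertain kinetic parameters (nominal values and spreads are assumed).
growth_order = rng.normal(1.5, 0.10, n)
nucleation_order = rng.normal(2.0, 0.15, n)
growth_rate_const = rng.normal(1e-7, 1e-8, n)

def mean_crystal_size(g, b, kg):
    """Toy surrogate for the predicted mean crystal size (illustrative only)."""
    return 1e3 * kg ** 0.5 * g / b

y = mean_crystal_size(growth_order, nucleation_order, growth_rate_const)
print(f"output mean {y.mean():.3f}, 95% interval "
      f"[{np.percentile(y, 2.5):.3f}, {np.percentile(y, 97.5):.3f}]")

# SRC: regress the standardized output on the standardized inputs.
X = np.column_stack([growth_order, nucleation_order, growth_rate_const])
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
for name, beta in zip(["growth order", "nucleation order", "growth rate const"], src):
    print(f"SRC {name}: {beta:+.2f}")
```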
Zenner, Hans P; Pfister, Markus; Birbaumer, Niels
2006-12-01
Acquired centralized tinnitus (ACT) is the most frequent form of chronic tinnitus. The proposed ACT sensitization (ACTS) assumes a peripheral initiation of tinnitus whereby sensitizing signals from the auditory system establish new neuronal connections in the brain. Consequently, permanent neurophysiological malfunction results within the information-processing modules. Successful treatment has to target this malfunctioning information processing. We present in this study the neurophysiological and psychophysiological aspects of a recently suggested neurophysiological model, which may explain the symptoms caused by central cognitive tinnitus sensitization. Although conditioned reflexes, as a causal agent of chronic tinnitus, respond to extinction procedures, sensitization may initiate a vicious circle of overexcitation of the auditory system, resisting extinction and habituation. We used the literature database as indicated under "References", covering English and German works. For the ACTS model we extracted neurophysiological hypotheses of auditory stimulus processing and of the neuronal connections of the central auditory system with other brain regions to explain the malfunctions of auditory information processing. The model does not assume information-processing changes specific to tinnitus but treats the processing of tinnitus signals comparably with the processing of other external stimuli. The model uses the extensive knowledge available on sensitization of perception and memory processes and highlights the similarities of tinnitus with central neuropathic pain. Quality, validity, and comparability of the extracted data were evaluated by peer review. Statistical techniques were not used. According to the tinnitus sensitization model, a tinnitus signal originates (as a type I-IV tinnitus) in the cochlea. In the brain regions concerned with perception and cognition, the 1) conditioned associations, as postulated by the tinnitus model of Jastreboff, and the 2) unconditioned sensitized stimulus responses, as postulated in the present ACTS model, are actively connected with and attributed to the tinnitus signal. Attention to the tinnitus constitutes a typical undesired sensitized response. Some of the tinnitus-associated attributes may be called essential, unconditioned sensitization attributes. By a process called facilitation, the tinnitus' essential attributes are suggested to activate the tinnitus response. The result is an undesired increase in responsivity, such as an increase in attentional focus on the eliciting tinnitus stimulus. The mechanisms underlying sensitization are known to be a specific nonassociative learning process producing a structural fixation of long-term facilitation at the synaptic level. This sensitization model may be important for the development of a sensitization-specific treatment if extinction procedures alone do not lead to a satisfactory outcome. Inasmuch as this model considers sensitization a nonassociative learning process based on cortical plasticity, it is reasonable to assume that this learning process can be altered by counteracting learning procedures. These counteracting learning procedures may consist of tinnitus-specific cognitive and behavioral procedures.
Brennan, Kristen M; Graugnard, Daniel E; Spry, Malinda L; Brewster-Barnes, Tammy; Smith, Allison C; Schaeffer, Rachel E; Urschel, Kristine L
2015-10-01
To determine effects of a microalgae nutritional product on insulin sensitivity in horses. 8 healthy mature horses. PROCEDURES: Horses (n = 4/group) received a basal diet without (control diet) or with docosahexaenoic acid-rich microalgae meal (150 g/d) for 49 days (day 0 = first day of diet). On day 28, an isoglycemic hyperinsulinemic clamp procedure was performed. Horses then received dexamethasone (0.04 mg/kg/d) for 21 days. On day 49, the clamp procedure was repeated. After a 60-day washout, horses received the alternate diet, and procedures were repeated. Plasma fatty acid, glucose, and insulin concentrations and glucose and insulin dynamics during the clamp procedure were measured on days 28 and 49. Two estimates of insulin sensitivity (reciprocal of the square root of the insulin concentration and the modified insulin-to-glucose ratio for ponies) were calculated. Baseline glucose and insulin concentrations or measures of insulin sensitivity on day 28 did not differ between horses when fed the control diet or the basal diet plus microalgae meal. On day 49 (ie, after dexamethasone administration), the microalgae meal was associated with lower baseline insulin and glucose concentrations and an improved modified insulin-to-glucose ratio for ponies, compared with results for the control diet. Although the microalgae meal had no effect on clamp variables following dexamethasone treatment, it was associated with improved plasma glucose and insulin concentrations and insulin sensitivity estimates. A role for microalgae in the nutritional management of insulin-resistant horses warrants investigation.
On Learning Cluster Coefficient of Private Networks
Wang, Yue; Wu, Xintao; Zhu, Jun; Xiang, Yang
2013-01-01
Enabling accurate analysis of social network data while preserving differential privacy has been challenging since graph features such as clustering coefficient or modularity often have high sensitivity, which is different from traditional aggregate functions (e.g., count and sum) on tabular data. In this paper, we treat a graph statistic as a function f and develop a divide and conquer approach to enforce differential privacy. The basic procedure of this approach is to first decompose the target computation f into several less complex unit computations f1, …, fm connected by basic mathematical operations (e.g., addition, subtraction, multiplication, division), then perturb the output of each fi with Laplace noise derived from its own sensitivity value and the distributed privacy threshold εi, and finally combine those perturbed fi as the perturbed output of computation f. We examine how various operations affect the accuracy of complex computations. When unit computations have large global sensitivity values, we enforce the differential privacy by calibrating noise based on the smooth sensitivity, rather than the global sensitivity. By doing this, we achieve the strict differential privacy guarantee with smaller magnitude noise. We illustrate our approach by using the clustering coefficient, which is a popular statistic used in social network analysis. Empirical evaluations on five real social networks and various synthetic graphs generated from three random graph models show the developed divide and conquer approach outperforms the direct approach. PMID:24429843
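The divide-and-conquer idea can be sketched as follows for a simpler statistic (average degree under edge-level privacy); the unit sensitivities and budget split are assumptions for illustration, and this is not the paper's clustering-coefficient mechanism.

```python
# A minimal sketch of the divide-and-conquer idea: split a target statistic (average
# degree) into unit computations (edge count, node count), perturb each with Laplace
# noise calibrated to its own sensitivity and privacy share, then recombine.
# Sensitivities assume edge-level privacy on a simple graph with a fixed node set.
import numpy as np

rng = np.random.default_rng(7)

def laplace_perturb(value, sensitivity, epsilon):
    """Add Laplace noise with scale sensitivity / epsilon."""
    return value + rng.laplace(scale=sensitivity / epsilon)

num_nodes = 500
num_edges = 1800                                 # assumed graph summary statistics
epsilon_total = 1.0
eps_edges, eps_nodes = 0.5 * epsilon_total, 0.5 * epsilon_total  # split the budget

# Unit computations: adding/removing one edge changes the edge count by 1 and the
# node count by 0 (node set fixed), so the sensitivities are 1 and 0 respectively.
noisy_edges = laplace_perturb(num_edges, sensitivity=1.0, epsilon=eps_edges)
noisy_nodes = num_nodes                          # zero sensitivity: no noise needed

# Combine the perturbed units into the target statistic.
noisy_avg_degree = 2.0 * noisy_edges / noisy_nodes
print(f"true average degree {2*num_edges/num_nodes:.3f}, "
      f"private estimate {noisy_avg_degree:.3f}")
```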
Hybrid Large-Eddy/Reynolds-Averaged Simulation of a Supersonic Cavity Using VULCAN
NASA Technical Reports Server (NTRS)
Quinlan, Jesse; McDaniel, James; Baurle, Robert A.
2013-01-01
Simulations of a supersonic recessed-cavity flow are performed using a hybrid large-eddy/Reynolds-averaged simulation approach utilizing an inflow turbulence recycling procedure and hybridized inviscid flux scheme. Calorically perfect air enters a three-dimensional domain at a free stream Mach number of 2.92. Simulations are performed to assess grid sensitivity of the solution, efficacy of the turbulence recycling, and the effect of the shock sensor used with the hybridized inviscid flux scheme. Analysis of the turbulent boundary layer upstream of the rearward-facing step for each case indicates excellent agreement with theoretical predictions. Mean velocity and pressure results are compared to Reynolds-averaged simulations and experimental data for each case and indicate good agreement on the finest grid. Simulations are repeated on a coarsened grid, and results indicate strong grid density sensitivity. Simulations are performed with and without inflow turbulence recycling on the coarse grid to isolate the effect of the recycling procedure, which is demonstrably critical to capturing the relevant shear layer dynamics. Shock sensor formulations of Ducros and Larsson are found to predict mean flow statistics equally well.
NASA Technical Reports Server (NTRS)
Louis, Pascal; Gokhale, Arun M.
1995-01-01
A number of microstructural processes are sensitive to the spatial arrangements of features in the microstructure. However, very little attention has been given in the past to experimental measurements of the descriptors of microstructural distance distributions, due to the lack of practically feasible methods. We present a digital image analysis procedure to estimate microstructural distance distributions. The application of the technique is demonstrated via estimation of the K function, the radial distribution function, and the nearest-neighbor distribution function of hollow spherical carbon particulates in a polymer matrix composite, observed in a metallographic section.
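As a rough illustration of one of the descriptors mentioned above, the following Python sketch computes an empirical nearest-neighbor distribution function from particle centroids; the synthetic coordinates merely stand in for the output of the digital image analysis step:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)

# Synthetic particle centroids on a metallographic section (micrometres);
# in practice these would come from the digital image analysis procedure.
centroids = rng.uniform(0.0, 500.0, size=(200, 2))

tree = cKDTree(centroids)
# k=2 because the closest point to each particle is itself (distance 0).
distances, _ = tree.query(centroids, k=2)
nearest_neighbour = distances[:, 1]

# Empirical nearest-neighbour distribution function G(r): fraction of
# particles whose nearest neighbour lies within distance r.
r = np.linspace(0.0, nearest_neighbour.max(), 50)
G = np.array([(nearest_neighbour <= ri).mean() for ri in r])
print(G[:5])
```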
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wall, Andy; Jain, Jinesh; Stewart, Brian
2012-01-01
Recent innovations in multi-collector ICP-mass spectrometry (MC-ICP-MS) have allowed for rapid and precise measurements of isotope ratios in geological samples. Naturally occurring Sr isotopes has the potential for use in Monitoring, Verification, and Accounting (MVA) associated with geologic CO2 storage. Sr isotopes can be useful for: Sensitive tracking of brine migration; Determining seal rock leakage; Studying fluid/rock reactions. We have optimized separation chemistry procedures that will allow operators to prepare samples for Sr isotope analysis off site using rapid, low cost methods.
Ramírez, Manuel; Peréz, Francisco; Regodón, José A.
1998-01-01
A procedure was developed for the hybridization and improvement of homothallic industrial wine yeasts. Killer cycloheximide-sensitive strains were crossed with killer-sensitive cycloheximide-resistant strains to get killer cycloheximide-resistant hybrids, thereby enabling hybrid selection and identification. This procedure also allows backcrossing of spore colonies from the hybrids with parental strains. PMID:9835605
Towards simplification of hydrologic modeling: Identification of dominant processes
Markstrom, Steven; Hay, Lauren E.; Clark, Martyn P.
2016-01-01
The Precipitation–Runoff Modeling System (PRMS), a distributed-parameter hydrologic model, has been applied to the conterminous US (CONUS). Parameter sensitivity analysis was used to identify: (1) the sensitive input parameters and (2) particular model output variables that could be associated with the dominant hydrologic process(es). Sensitivity values of 35 PRMS calibration parameters were computed using the Fourier amplitude sensitivity test procedure on 110 000 independent hydrologically based spatial modeling units covering the CONUS and then summarized by process (snowmelt, surface runoff, infiltration, soil moisture, evapotranspiration, interflow, baseflow, and runoff) and model performance statistic (mean, coefficient of variation, and autoregressive lag 1). Identified parameters and processes provide insight into model performance at the location of each unit and allow the modeler to identify the most dominant process on the basis of which processes are associated with the most sensitive parameters. The results of this study indicate that: (1) the choice of performance statistic and output variables has a strong influence on parameter sensitivity, (2) the apparent model complexity to the modeler can be reduced by focusing on those processes that are associated with sensitive parameters and disregarding those that are not, (3) different processes require different numbers of parameters for simulation, and (4) some sensitive parameters influence only one hydrologic process, while others may influence many.
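The study uses the Fourier amplitude sensitivity test; the sketch below illustrates the same idea of variance-based first-order sensitivity with a simpler Sobol'-type estimator and a toy model standing in for a PRMS output statistic. All names and values are illustrative assumptions, not the FAST procedure itself:

```python
import numpy as np

rng = np.random.default_rng(2)

def model(x):
    """Toy stand-in for one model output statistic; columns are parameters."""
    return 4.0 * x[:, 0] + 2.0 * x[:, 1] + 0.5 * x[:, 2]

n, k = 100_000, 3
A = rng.uniform(size=(n, k))
B = rng.uniform(size=(n, k))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

# First-order Sobol' indices via the Saltelli/Jansen estimator:
# S_i = E[f(B) * (f(A with column i from B) - f(A))] / Var(f)
for i in range(k):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    S_i = np.mean(fB * (model(ABi) - fA)) / var
    print(f"parameter {i}: first-order sensitivity ~ {S_i:.2f}")
```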
Assessment of simple colorimetric procedures to determine smoking status of diabetic subjects.
Smith, R F; Mather, H M; Ellard, G A
1998-02-01
The performance of a simple colorimetric assay for urinary nicotine metabolites to assess smoking status in diabetic subjects (n = 251) was investigated. Several variations of the colorimetric assay and a qualitative extraction procedure were evaluated in comparison with a cotinine immunoassay as the "gold standard." Among these, the best overall performance was achieved with the qualitative test (sensitivity 95%; specificity 100%). The quantitative measurement of total nicotine metabolites performed less well (sensitivity 92%; specificity 97%) but could be improved by incorporating a blank extraction (sensitivity 98%; specificity 98%). Allowance for diuresis appeared to offer no advantage over the other methods. These results support previous findings regarding the use of these colorimetric procedures in nondiabetic subjects and, contrary to other recent observations, their performance was not impaired in diabetic patients.
The Global Modeling and Assimilation Office (GMAO) 4d-Var and its Adjoint-based Tools
NASA Technical Reports Server (NTRS)
Todling, Ricardo; Tremolet, Yannick
2008-01-01
The fifth generation of the Goddard Earth Observing System (GEOS-5) Data Assimilation System (DAS) is a 3d-var system that uses the Grid-point Statistical Interpolation (GSI) system developed in collaboration with NCEP, and a general circulation model developed at Goddard that includes the finite-volume hydrodynamics of GEOS-4 wrapped in the Earth System Modeling Framework and physical packages tuned to provide a reliable hydrological cycle for the integration of the Modern Era Retrospective-analysis for Research and Applications (MERRA). This MERRA system is essentially complete and the next generation GEOS is under intense development. A prototype next generation system is now complete and has been producing preliminary results. This prototype system replaces the GSI-based Incremental Analysis Update procedure with a GSI-based 4d-var which uses the adjoint of the finite-volume hydrodynamics of GEOS-4 together with a vertical diffusion scheme for simplified physics. As part of this development we have kept the GEOS-5 IAU procedure as an option and have added the capability to experiment with a First Guess at the Appropriate Time (FGAT) procedure, thus allowing for at least three modes of running the data assimilation experiments. The prototype system is a large extension of GEOS-5 as it also includes various adjoint-based tools, namely, a forecast sensitivity tool, a singular vector tool, and an observation impact tool that combines the model sensitivity tool with a GSI-based adjoint tool. These features bring the global data assimilation effort at Goddard up to date with technologies used in data assimilation systems at major meteorological centers elsewhere. Various aspects of the next generation GEOS will be discussed during the presentation at the Workshop, and preliminary results will illustrate the discussion.
Calibration of groundwater vulnerability mapping using the generalized reduced gradient method.
Elçi, Alper
2017-12-01
Groundwater vulnerability assessment studies are essential in water resources management. Overlay-and-index methods such as DRASTIC are widely used for mapping of groundwater vulnerability; however, these methods mainly suffer from a subjective selection of model parameters. The objective of this study is to introduce a calibration procedure that results in a more accurate assessment of groundwater vulnerability. The improvement of the assessment is formulated as a parameter optimization problem using an objective function that is based on the correlation between actual groundwater contamination and vulnerability index values. The non-linear optimization problem is solved with the generalized-reduced-gradient (GRG) method, which is a gradient-based numerical optimization algorithm. To demonstrate the applicability of the procedure, a vulnerability map for the Tahtali stream basin is calibrated using nitrate concentration data. The calibration procedure is easy to implement and aims to maximize the correlation between observed pollutant concentrations and groundwater vulnerability index values. The influence of each vulnerability parameter in the calculation of the vulnerability index is assessed by performing a single-parameter sensitivity analysis. Results of the sensitivity analysis show that all factors influence the final vulnerability index. Calibration of the vulnerability map improves the correlation between index values and measured nitrate concentrations by 19%; the regression coefficient increases from 0.280 to 0.485. It is evident that the spatial distribution and the proportions of vulnerability class areas are significantly altered by the calibration process. Although the applicability of the calibration method is demonstrated on the DRASTIC model, the approach is not specific to a certain model and can also be easily applied to other overlay-and-index methods. Copyright © 2017 Elsevier B.V. All rights reserved.
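A minimal sketch of the calibration idea follows, assuming hypothetical factor ratings and nitrate observations; scipy's SLSQP solver is used here only as a stand-in for the generalized reduced gradient method:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import pearsonr

rng = np.random.default_rng(3)

# Hypothetical data: rated DRASTIC-style factors (rows = wells, cols = factors)
# and observed nitrate concentrations at the same wells.
ratings = rng.uniform(1, 10, size=(60, 7))
nitrate = ratings @ np.array([5, 4, 3, 2, 1, 5, 3]) + rng.normal(0, 5, 60)

def neg_correlation(weights):
    index = ratings @ weights
    return -pearsonr(index, nitrate)[0]   # minimise the negative correlation

w0 = np.full(7, 3.0)                       # initial weights
bounds = [(1.0, 5.0)] * 7                  # keep weights in the usual range
result = minimize(neg_correlation, w0, bounds=bounds, method="SLSQP")
print("calibrated weights:", np.round(result.x, 2))
print("correlation:", -result.fun)
```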
Detection of SEA-type α-thalassemia in embryo biopsies by digital PCR.
Lee, Ta-Hsien; Hsu, Ya-Chiung; Chang, Chia Lin
2017-08-01
Accurate and efficient pre-implantation genetic diagnosis (PGD) based on the analysis of single or oligo-cells is needed for timely identification of embryos that are affected by deleterious genetic traits in in vitro fertilization (IVF) clinics. Polymerase chain reaction (PCR) is the backbone of modern genetic diagnoses, and a spectrum of PCR-based techniques have been used to detect various thalassemia mutations in prenatal diagnosis (PND) and PGD. Among thalassemias, SEA-type α-thalassemia is the most common variety found in Asia, and can lead to Bart's hydrops fetalis and serious maternal complications. To formulate an efficient digital PCR for clinical diagnosis of SEA-type α-thalassemia in cultured embryos, we conducted a pilot study to detect the α-globin and SEA-type deletion alleles in blastomere biopsies with a highly sensitive microfluidics-based digital PCR method. Genomic DNA from embryo biopsy samples was extracted, and crude DNA extracts were first amplified by a conventional PCR procedure followed by a nested PCR reaction with primers and probes that are designed for digital PCR amplification. Analysis of microfluidics-based PCR reactions showed that robust signals for normal α-globin and SEA-type deletion alleles, together with an internal control gene, can be routinely generated using crude embryo biopsies after a 10(6)-fold dilution of primary PCR products. The SEA-type deletion in cultured embryos can be sensitively diagnosed with the digital PCR procedure in clinics. The adoption of this robust PGD method could prevent, in a timely manner, the implantation of IVF embryos that are destined to develop Bart's hydrops fetalis. The results also help inform future development of a standard digital PCR procedure for cost-effective PGD of α-thalassemia in a standard IVF clinic. Copyright © 2017. Published by Elsevier B.V.
Jebaseelan, D Davidson; Jebaraj, C; Yoganandan, Narayan; Rajasekaran, S; Kanna, Rishi M
2012-05-01
The objective of the study was to determine the sensitivity of the external and internal responses of the juvenile spine to its material properties, using a finite element model under compression and flexion-extension bending moments. The methodology included exercising the 8-year-old juvenile lumbar spine model using parametric procedures. The model included the vertebral centrum, growth plates, laminae, pedicles, transverse processes and spinous processes; disc annulus and nucleus; and various ligaments. The sensitivity analysis was conducted by varying the modulus of elasticity for various components. The first simulation was done using mean material properties. Additional simulations were done for each component corresponding to low and high material property variations. External displacement/rotation and internal stress-strain responses were determined under compression and flexion-extension bending. Results indicated that, under compression, the responses were more sensitive to disc properties than to bone properties, implying an elevated role of the disc under this mode. Under flexion-extension moments, ligament properties were more dominant than those of the other components, suggesting that the various ligaments of the juvenile spine play a key role in modulating bending behaviors. Changes in the growth plate stress associated with ligament properties explained the importance of the growth plate in the pediatric spine, with potential implications in progressive deformities.
Tallarico, Lenita de Freitas; Borrely, Sueli Ivone; Hamada, Natália; Grazeffe, Vanessa Siqueira; Ohlweiler, Fernanda Pires; Okazaki, Kayo; Granatelli, Amanda Tosatte; Pereira, Ivana Wuo; Pereira, Carlos Alberto de Bragança; Nakano, Eliana
2014-12-01
A protocol combining acute toxicity, developmental toxicity, and mutagenicity analysis in the freshwater snail Biomphalaria glabrata for application in ecotoxicological studies is described. For acute toxicity testing, LC50 and EC50 values were determined; induction of dominant lethal mutations was the endpoint for mutagenicity analysis. The reference toxicant potassium dichromate (K2Cr2O7) was used to characterize B. glabrata sensitivity for toxicity testing, and cyclophosphamide for mutagenicity testing. Compared to other relevant freshwater species, B. glabrata showed high sensitivity: the lowest EC50 value was obtained with embryos at the veliger stage (5.76 mg/L). To assess the model's applicability for environmental studies, influent and effluent water samples from a wastewater treatment plant were evaluated. Gastropod sensitivity was assessed in comparison to the standardized bioassay with Daphnia similis exposed to the same water samples. Sampling sites identified as toxic to daphnids were also detected by snails, showing a qualitatively similar sensitivity and suggesting that B. glabrata is a suitable test species for freshwater monitoring. Holding procedures and protocols implemented for the toxicity and developmental bioassays were shown to be in compliance with international standards for intra-laboratory precision. We therefore propose this system for application in ecotoxicological studies. Copyright © 2014 Elsevier Inc. All rights reserved.
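As an illustration of how LC50/EC50 values such as those reported above are typically derived, the following sketch fits a two-parameter log-logistic dose-response curve to hypothetical data; the concentrations and responses are invented for the example:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical dose-response data for a reference toxicant:
# concentrations (mg/L) and observed fraction of affected organisms.
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
affected = np.array([0.05, 0.10, 0.30, 0.55, 0.85, 0.95])

def log_logistic(c, lc50, slope):
    """Two-parameter log-logistic dose-response curve."""
    return 1.0 / (1.0 + (lc50 / c) ** slope)

(lc50, slope), _ = curve_fit(log_logistic, conc, affected, p0=[3.0, 2.0])
print(f"estimated LC50 ~ {lc50:.2f} mg/L (slope {slope:.2f})")
```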
Sensitive PCR Detection of Meloidogyne arenaria, M. incognita, and M. javanica Extracted from Soil
Qiu, Jinya Jack; Westerdahl, Becky B.; Anderson, Cindy; Williamson, Valerie M.
2006-01-01
We have developed a simple PCR assay protocol for detection of the root-knot nematode (RKN) species Meloidogyne arenaria, M. incognita, and M. javanica extracted from soil. Nematodes are extracted from soil using Baermann funnels and centrifugal flotation. The nematode-containing fraction is then digested with proteinase K, and a PCR assay is carried out with primers specific for this group of RKN and with universal primers spanning the ITS of rRNA genes. The presence of RKN J2 can be detected among large numbers of other plant-parasitic and free-living nematodes. The procedure was tested with several soil types and crops from different locations and was found to be sensitive and accurate. Analysis of unknowns and spiked soil samples indicated that detection sensitivity was the same as or higher than by microscopic examination. PMID:19259460
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan; Bittker, David A.
1994-01-01
LSENS, the Lewis General Chemical Kinetics Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 2 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 2 describes the code, how to modify it, and its usage, including preparation of the problem data file required to execute LSENS. Code usage is illustrated by several example problems, which further explain preparation of the problem data file and show how to obtain desired accuracy in the computed results. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions. Part 1 (NASA RP-1328) derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved by LSENS. Part 3 (NASA RP-1330) explains the kinetics and kinetics-plus-sensitivity-analysis problems supplied with LSENS and presents sample results.
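The sensitivity coefficients described above follow from direct differentiation of the governing equations; a minimal sketch for a single first-order reaction, using an augmented ODE system rather than LSENS itself, is shown below (the rate coefficient and initial value are arbitrary):

```python
import numpy as np
from scipy.integrate import solve_ivp

k, y0 = 2.0, 1.0   # rate coefficient and initial concentration for A -> B

def augmented_rhs(t, z):
    """State y (concentration of A) augmented with s = dy/dk.
    The sensitivity equation ds/dt = d(rhs)/dy * s + d(rhs)/dk is obtained
    by direct differentiation of dy/dt = -k*y."""
    y, s = z
    return [-k * y, -k * s - y]

sol = solve_ivp(augmented_rhs, (0.0, 2.0), [y0, 0.0], t_eval=[2.0], rtol=1e-8)
y_end, s_end = sol.y[:, -1]
print(f"numerical dy/dk = {s_end:.5f}")
print(f"analytic  dy/dk = {-2.0 * y0 * np.exp(-k * 2.0):.5f}")
```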
NASA Astrophysics Data System (ADS)
Prestifilippo, Michele; Scollo, Simona; Tarantola, Stefano
2015-04-01
The uncertainty in volcanic ash forecasts may depend on our knowledge of the model input parameters and our capability to represent the dynamics of an incoming eruption. Forecasts help governments to reduce risks associated with volcanic eruptions, and for this reason different kinds of analysis are necessary to understand the effect that each input parameter has on model outputs. We present an iterative approach based on the sequential combination of sensitivity analysis, a parameter estimation procedure, and Monte Carlo-based uncertainty analysis, applied to the Lagrangian volcanic ash dispersal model PUFF. We modify the main input parameters, such as the total mass, the total grain-size distribution, the plume thickness, the shape of the eruption column, the sedimentation models, and the diffusion coefficient; perform thousands of simulations; and analyze the results. The study is carried out for two different Etna scenarios: the sub-plinian eruption of 22 July 1998, which formed an eruption column rising 12 km above sea level and lasted some minutes, and a lava fountain eruption with features similar to the 2011-2013 events, which produced eruption columns rising up to several kilometers above sea level and lasted some hours. Sensitivity analysis and uncertainty estimation results help us to identify the measurements that volcanologists should perform during a volcanic crisis to reduce the model uncertainty.
Microfluidic-Based Enrichment and Retrieval of Circulating Tumor Cells for RT-PCR Analysis.
Gogoi, Priya; Sepehri, Saedeh; Chow, Will; Handique, Kalyan; Wang, Yixin
2017-01-01
Molecular analysis of circulating tumor cells (CTCs) is hindered by the low sensitivity and high level of background leukocytes of currently available CTC enrichment technologies. We have developed a novel device to enrich and retrieve CTCs from blood samples by using a microfluidic chip. The Celsee PREP100 device captures CTCs with high sensitivity and allows the captured CTCs to be retrieved for molecular analysis. It uses a microfluidic chip with approximately 56,320 capture chambers. Based on differences in cell size and deformability, each chamber ensures that smaller blood cells escape while larger CTCs of varying sizes are trapped and isolated in the chambers. In this report, we used the Celsee PREP100 to capture cancer cells spiked into normal donor blood samples. We were able to show that the device can capture as few as 10 cells with high reproducibility. The captured CTCs were retrieved from the microfluidic chip. The cell recovery rate of this back-flow procedure is 100%, and the level of remaining background leukocytes is very low (about 300-400 cells). RNA from the retrieved cells is extracted and converted to cDNA, and gene expression analysis of selected cancer markers can be carried out by using RT-PCR assays. The sensitive and easy-to-use Celsee PREP100 system represents a promising technology for capturing and molecular characterization of CTCs.
Cost analysis of single-use (Ambu® aScope™) and reusable bronchoscopes in the ICU.
Perbet, S; Blanquet, M; Mourgues, C; Delmas, J; Bertran, S; Longère, B; Boïko-Alaux, V; Chennell, P; Bazin, J-E; Constantin, J-M
2017-12-01
Flexible optical bronchoscopes are essential for management of airways in the ICU, but conventional reusable flexible scopes have three major drawbacks: high cost of repairs, need for decontamination, and possible transmission of infectious agents. The main objective of this study was to measure the cost of bronchoalveolar lavage (BAL) and percutaneous tracheostomy (PT) using reusable bronchoscopes and single-use bronchoscopes in the ICU of a university hospital. The secondary objective was to compare the satisfaction of healthcare professionals with reusable and single-use bronchoscopes. The study was performed between August 2009 and July 2014 in a 16-bed ICU. All BAL and PT procedures were performed by experienced healthcare professionals. Cost analysis was performed considering ICU and hospital organization. Healthcare professional satisfaction with single-use and reusable scopes was determined based on eight factors. Sensitivity analysis was performed by applying discount rates (0, 3, and 5%) and by simulation of six situations based on different assumptions. At a discount rate of 3%, the costs per BAL for the two reusable scopes were 188.86€ (scope 1) and 185.94€ (scope 2), and the costs per PT for reusable scope 1, reusable scope 2, and single-use scopes were 1613.84€, 410.24€, and 204.49€, respectively. The cost per procedure for the reusable scopes depended on the number of procedures performed, maintenance costs, and decontamination costs. Healthcare professionals were more satisfied with the third-generation single-use Ambu® aScope™. The cost per procedure for the single-use scope was not superior to that for reusable scopes. The choice of single-use or reusable bronchoscopes in an ICU should consider the frequency of procedures and the number of bronchoscopes needed.
NASA Astrophysics Data System (ADS)
Vidic, Nataša. J.; TenPas, Jeff D.; Verosub, Kenneth L.; Singer, Michael J.
2000-08-01
Magnetic susceptibility variations in the Chinese loess/palaeosol sequences have been used extensively for palaeoclimatic interpretations. The magnetic signal of these sequences must be divided into lithogenic and pedogenic components because the palaeoclimatic record is primarily reflected in the pedogenic component. In this paper we compare two methods for separating the pedogenic and lithogenic components of the magnetic susceptibility signal: the citrate-bicarbonate-dithionite (CBD) extraction procedure, and a mixing analysis. Both methods yield good estimates of the pedogenic component, especially for the palaeosols. The CBD procedure underestimates the lithogenic component and overestimates the pedogenic component. The magnitude of this effect is moderately high in loess layers but almost negligible in palaeosols. The mixing model overestimates the lithogenic component and underestimates the pedogenic component. Both methods can be adjusted to yield better estimates of both components. The lithogenic susceptibility, as determined by either method, suggests that palaeoclimatic interpretations based only on total susceptibility will be in error and that a single estimate of the average lithogenic susceptibility is not an accurate basis for adjusting the total susceptibility. A long-term decline in lithogenic susceptibility with depth in the section suggests more intense or prolonged periods of weathering associated with the formation of the older palaeosols. The CBD procedure provides the most comprehensive information on the magnitude of the components and magnetic mineralogy of loess and palaeosols. However, the mixing analysis provides a sensitive, rapid, and easily applied alternative to the CBD procedure. A combination of the two approaches provides the most powerful and perhaps the most accurate way of separating the magnetic susceptibility components.
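A heavily simplified sketch of the mixing idea follows, assuming hypothetical end-member signatures and one sample measurement; the actual study's end-members and measured quantities differ, so this only illustrates the linear-unmixing arithmetic:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical end-member signatures: rows are two measured quantities
# (e.g., low-frequency and frequency-dependent susceptibility), columns are
# the lithogenic and pedogenic end-members.  Values are illustrative only.
endmembers = np.array([[20.0, 150.0],
                       [ 0.5,  15.0]])

# Measured values for one loess or palaeosol sample.
sample = np.array([95.0, 8.0])

# Non-negative least squares gives the contribution of each component.
contrib, residual = nnls(endmembers, sample)
lithogenic, pedogenic = contrib * endmembers[0]
print(f"lithogenic ~ {lithogenic:.1f}, pedogenic ~ {pedogenic:.1f} "
      f"(susceptibility units), residual {residual:.2f}")
```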
A behavioral audiogram of the red fox (Vulpes vulpes).
Malkemper, E Pascal; Topinka, Václav; Burda, Hynek
2015-02-01
We determined the absolute hearing sensitivity of the red fox (Vulpes vulpes) using an adapted standard psychoacoustic procedure. The animals were tested in a reward-based go/no-go procedure in a semi-anechoic chamber. At 60 dB sound pressure level (SPL) (re 20 μPa) red foxes perceive pure tones between 51 Hz and 48 kHz, spanning 9.84 octaves with a single peak sensitivity of -15 dB at 4 kHz. The red foxes' high-frequency cutoff is comparable to that of the domestic dog while the low-frequency cutoff is comparable to that of the domestic cat and the absolute sensitivity is between both species. The maximal absolute sensitivity of the red fox is among the best found to date in any mammal. The procedure used here allows for assessment of animal auditory thresholds using positive reinforcement outside the laboratory. Copyright © 2014 Elsevier B.V. All rights reserved.
Integrated Data Collection Analysis (IDCA) Program - AN and Bullseye Smokeless Powder
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sandstrom, Mary M.; Brown, Geoffrey W.; Preston, Daniel N.
The Integrated Data Collection Analysis (IDCA) program is conducting a proficiency study for Small-Scale Safety and Thermal (SSST) testing of homemade explosives (HMEs). Described here are the results for impact, friction, electrostatic discharge, and differential scanning calorimetry analysis of ammonium nitrate (AN) mixed with Bullseye® smokeless powder (Gunpowder). The participants found the AN/Gunpowder to: 1) have a range of sensitivity to impact, comparable to or less than RDX, 2) be fairly insensitive to friction as measured by BAM and ABL, 3) have a range for ESD, from insensitive to more sensitive than PETN, and 4) have thermal sensitivity about the same as PETN and Gunpowder. This effort, funded by the Department of Homeland Security (DHS), is putting the issues of safe handling of these materials in perspective with standard military explosives. The study is adding SSST testing results for a broad suite of different HMEs to the literature. Ultimately the study has the potential to suggest new guidelines and methods and possibly establish the SSST testing accuracies needed when developing safe handling practices for HMEs. Each participating testing laboratory uses identical test materials and preparation methods. Note, however, the test procedures differ among the laboratories. The testing performers involved are Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory (LANL), Indian Head Division, Naval Surface Warfare Center (NSWC IHD), Sandia National Laboratories (SNL), and Air Force Research Laboratory (AFRL/RXQL). These tests are conducted as a proficiency study in order to establish some consistency in test protocols, procedures, and experiments and to compare results when these testing variables cannot be made consistent. Keywords: Small-scale safety testing, proficiency test, impact-, friction-, spark discharge-, thermal testing, round-robin test, safety testing protocols, HME, RDX, potassium perchlorate, potassium chlorate, sodium chlorate, sugar, dodecane, PETN, carbon, ammonium nitrate, Gunpowder, Bullseye® smokeless powder.
Colon Capsule Endoscopy for the Detection of Colorectal Polyps: An Economic Analysis
Palimaka, Stefan; Blackhouse, Gord; Goeree, Ron
2015-01-01
Background: Colorectal cancer is a leading cause of mortality and morbidity in Ontario. Most cases of colorectal cancer are preventable through early diagnosis and the removal of precancerous polyps. Colon capsule endoscopy is a non-invasive test for detecting colorectal polyps. Objectives: The objectives of this analysis were to evaluate the cost-effectiveness and the impact on the Ontario health budget of implementing colon capsule endoscopy for detecting advanced colorectal polyps among adult patients who have been referred for computed tomographic (CT) colonography. Methods: We performed an original cost-effectiveness analysis to assess the additional cost of CT colonography and colon capsule endoscopy resulting from misdiagnoses. We generated diagnostic accuracy data from a clinical evidence-based analysis (reported separately), and we developed a deterministic Markov model to estimate the additional long-term costs and life-years lost due to false-negative results. We then also performed a budget impact analysis using data from Ontario administrative sources. One-year costs were estimated for CT colonography and colon capsule endoscopy (replacing all CT colonography procedures, and replacing only those CT colonography procedures in patients with an incomplete colonoscopy within the previous year). We conducted this analysis from the payer perspective. Results: Using the point estimates of diagnostic accuracy from the head-to-head study between colon capsule endoscopy and CT colonography, we found the additional cost of false-positive results for colon capsule endoscopy to be $0.41 per patient, while additional false-negatives for the CT colonography arm generated an added cost of $116 per patient, with 0.0096 life-years lost per patient due to cancer. This results in an additional cost of $26,750 per life-year gained for colon capsule endoscopy compared with CT colonography. The total 1-year cost to replace all CT colonography procedures with colon capsule endoscopy in Ontario is about $2.72 million; replacing only those CT colonography procedures in patients with an incomplete colonoscopy in the previous year would cost about $740,600 in the first year. Limitations: The difference in accuracy between colon capsule endoscopy and CT colonography was not statistically significant for the detection of advanced adenomas (≥ 10 mm in diameter), according to the head-to-head clinical study from which the diagnostic accuracy was taken. This leads to uncertainty in the economic analysis, with results highly sensitive to changes in diagnostic accuracy. Conclusions: The cost-effectiveness of colon capsule endoscopy for use in patients referred for CT colonography is $26,750 per life-year, assuming an increased sensitivity of colon capsule endoscopy. Replacement of CT colonography with colon capsule endoscopy is associated with moderate costs to the health care system. PMID:26366240
Qualitative Determination of Nitrate with Triphenylbenzylphosphonium Chloride.
ERIC Educational Resources Information Center
Berry, Donna A.; Cole, Jerry J.
1984-01-01
Discusses two procedures for the identification of nitrate, the standard test ("Brown Ring" test) and a new procedure using triphenylbenzylphosphonium chloride (TPBPC). Effectiveness of both procedures is compared, with the TPBPC test proving to be more sensitive and accurate. (JM)
Irei, Satoshi
2016-01-01
Molecular marker analysis of environmental samples often requires time-consuming preseparation steps. Here, analysis of low-volatility nonpolar molecular markers (5-6 ring polycyclic aromatic hydrocarbons or PAHs, hopanoids, and n-alkanes) without the preseparation procedure is presented. Analysis of artificial sample extracts was conducted directly by gas chromatography-mass spectrometry (GC-MS). After every sample injection, a standard mixture was also analyzed to correct for the variation in instrumental sensitivity caused by the unfavorable matrix contained in the extract. The method was further validated for the PAHs using the NIST standard reference materials (SRMs) and then applied to airborne particulate matter samples. Tests with the SRMs showed that, overall, our methodology was validated with an uncertainty of ~30%. The measurement results for airborne particulate matter (PM) filter samples showed a strong correlation between the PAHs, implying contributions from the same emission source. Analysis of size-segregated PM filter samples showed that the markers were concentrated in PM smaller than 0.4 μm aerodynamic diameter. The observations were consistent with our expectation of their possible sources. Thus, the method was found to be useful for molecular marker studies. PMID:27127511
Hofmann, Matthias J.; Koelsch, Patrick
2015-01-01
Vibrational sum-frequency generation (SFG) spectroscopy has become an established technique for in situ surface analysis. While spectral recording procedures and hardware have been optimized, unique data analysis routines have yet to be established. The SFG intensity is related to probing geometries and properties of the system under investigation such as the absolute square of the second-order susceptibility, |χ(2)|². A conventional SFG intensity measurement does not grant access to the complex parts of χ(2) unless further assumptions have been made. It is therefore difficult, sometimes impossible, to establish a unique fitting solution for SFG intensity spectra. Recently, interferometric phase-sensitive SFG or heterodyne detection methods have been introduced to measure real and imaginary parts of χ(2) experimentally. Here, we demonstrate that iterative phase-matching between complex spectra retrieved from maximum entropy method analysis and fitting of intensity SFG spectra (iMEMfit) leads to a unique solution for the complex parts of χ(2) and enables quantitative analysis of SFG intensity spectra. A comparison between complex parts retrieved by iMEMfit applied to intensity spectra and phase sensitive experimental data shows excellent agreement between the two methods. PMID:26450297
Extensions and applications of a second-order landsurface parameterization
NASA Technical Reports Server (NTRS)
Andreou, S. A.; Eagleson, P. S.
1983-01-01
Extensions and applications of a second-order land surface parameterization, proposed by Andreou and Eagleson, are developed. Procedures for evaluating the near-surface storage depth used in one-cell land surface parameterizations are suggested and tested by using the model. Sensitivity analysis with respect to the key soil parameters is performed. A case study involving comparison with an "exact" numerical model and another simplified parameterization, under very dry climatic conditions and for two different soil types, is also incorporated.
Karageorgou, Eftychia; Christoforidou, Sofia; Ioannidou, Maria; Psomas, Evdoxios; Samouris, Georgios
2018-06-01
The present study was carried out to assess the detection sensitivity of four microbial inhibition assays (MIAs) in comparison with the results obtained by the High Performance Liquid Chromatography with Diode-Array Detection (HPLC-DAD) method for antibiotics of the β-lactam group and chloramphenicol in fortified raw milk samples. The MIAs presented fairly good results when detecting β-lactams, whereas none were able to detect chloramphenicol at or above the permissible limits. HPLC analysis revealed high recoveries of the examined compounds, and all detection limits observed were lower than the respective maximum residue limit (MRL) values. The extraction and clean-up procedure for the antibiotics was performed by a modified matrix solid phase dispersion procedure using a mixture of Plexa by Agilent and QuEChERS as a sorbent. The HPLC method developed was validated by determining the accuracy, precision, linearity, decision limit, and detection capability. Both methods were used to monitor raw milk samples from several cows and sheep, obtained from producers in different regions of Greece, for the presence of the examined antibiotic residues. The results showed that MIAs could be used effectively and routinely to detect antibiotic residues in several milk types. However, in some cases, spoilage of milk samples strongly affected the kits' sensitivity, whereas the effectiveness of the HPLC-DAD analysis was not affected.
Muñoz, José Luis; Ruiz-Tovar, Jaime; Miranda, Elena; Berrio, Diana Lorena; Moya, Pedro; Gutiérrez, Manuel; Flores, Raquel; Picó, Carlos; Pérez, Ana
2016-05-01
The performance of most bariatric procedures within an Enhanced Recovery After Surgery (ERAS) program has resulted in considerable advantages, including a reduction in the length of hospital stay to 2 to 3 days. However, some postoperative complications can appear after the patient has been discharged. The aim of this study was to investigate the efficacy of various acute-phase parameters determined 24 and 48 hours after laparoscopic sleeve gastrectomy (LSG) as a bariatric procedure, for predicting septic complications, such as surgical site infection (SSI), in the postoperative course. A prospective study of 115 morbidly obese patients who underwent LSG within an ERAS program between 2012 and 2015 was conducted. Blood analysis was performed 24 and 48 hours after surgery. Acute-phase parameters (C-reactive protein [CRP], procalcitonin, and fibrinogen) and WBC count were investigated. Septic complications were observed in 13 patients (11.3%). Using receiver operating characteristic analysis at 24 hours postoperatively, a cutoff level of CRP at 70 mg/L achieved 85% sensitivity and 90% specificity for predicting SSI, and a cutoff level of procalcitonin at 0.2 ng/mL achieved 70% sensitivity and 90% specificity. At 48 hours postoperatively, a cutoff level of CRP at 150 mg/L and procalcitonin at 0.95 ng/mL achieved 100% sensitivity and 100% specificity for predicting SSI. The use of CRP and procalcitonin on the first and especially the second postoperative day can predict septic complications after LSG. This is most useful for patients within an ERAS program who will be discharged early. Copyright © 2016 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
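Cutoff levels like those reported above are usually read off a receiver operating characteristic curve; a small sketch with simulated CRP values (all numbers invented) is:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(4)

# Hypothetical day-2 CRP values (mg/L): 0 = uncomplicated course, 1 = SSI.
ssi = np.concatenate([np.zeros(100, dtype=int), np.ones(15, dtype=int)])
crp = np.concatenate([rng.normal(90, 30, 100), rng.normal(190, 30, 15)])

fpr, tpr, thresholds = roc_curve(ssi, crp)
youden = tpr - fpr                      # Youden index picks the best trade-off
best = np.argmax(youden)
print(f"AUC = {roc_auc_score(ssi, crp):.2f}")
print(f"optimal cutoff ~ {thresholds[best]:.0f} mg/L "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```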
Kokina, Aija; Pugajeva, Iveta; Bartkevics, Vadims
2016-01-01
A novel and sensitive method utilising high-performance liquid chromatography coupled to triple quadrupole-linear ion trap mass spectrometry (LC-QqQLIT-MS/MS) was developed in order to analyse the content of ochratoxin A (OTA) in coffee samples. The introduction of the triple-stage MS scanning mode (MS(3)) has been shown to greatly increase sensitivity and selectivity by eliminating the high chromatographic baseline caused by interference from complex coffee matrices. The analysis included a sample preparation procedure involving extraction of OTA using a methanol-water mixture and clean-up by immunoaffinity columns, and detection using the MS(3) scanning mode of LC-QqQLIT-MS/MS. The proposed method offered a good linear correlation (r(2) > 0.998), excellent precision (RSD < 2.9%) and recovery (94%). The limit of quantification (LOQ) for coffee beans and espresso beverages was 0.010 and 0.003 µg kg(-1), respectively. The developed procedure was compared with traditional methods employing liquid chromatography coupled to fluorescent and tandem quadrupole detectors in conjunction with QuEChERS and solid-phase extraction. The proposed method was successfully applied to the determination of OTA in 15 samples of coffee beans and in 15 samples of espresso coffee beverages obtained from the Latvian market. OTA was found in 10 samples of coffee beans and in two samples of espresso, in the ranges of 0.018-1.80 µg kg(-1) and 0.020-0.440 µg l(-1), respectively. No samples exceeded the maximum permitted level of OTA in the European Union (5.0 µg kg(-1)).
Solar energy system economic evaluation: IBM System 4, Clinton, Mississippi
NASA Technical Reports Server (NTRS)
1980-01-01
An economic analysis of the solar energy system was developed for five sites, typical of a wide range of environmental and economic conditions in the continental United States. The analysis was based on the technical and economic models in the F-chart design procedure, with inputs based on the characteristics of the installed system and local conditions. The results give the economic parameters of present worth of system cost over a 20-year time span: life-cycle savings, year of positive savings, and year of payback for the optimized solar energy system at each of the analysis sites. The sensitivity of the economic evaluation to uncertainties in constituent system and economic variables is also investigated.
Cost-Effectiveness Analysis of Diagnostic Options for Pneumocystis Pneumonia (PCP)
Harris, Julie R.; Marston, Barbara J.; Sangrujee, Nalinee; DuPlessis, Desiree; Park, Benjamin
2011-01-01
Background: Diagnosis of Pneumocystis jirovecii pneumonia (PCP) is challenging, particularly in developing countries. Highly sensitive diagnostic methods are costly, while less expensive methods often lack sensitivity or specificity. Cost-effectiveness comparisons of the various diagnostic options have not been presented. Methods and Findings: We compared cost-effectiveness, as measured by cost per life-years gained and proportion of patients successfully diagnosed and treated, of 33 PCP diagnostic options, involving combinations of specimen collection methods [oral washes, induced and expectorated sputum, and bronchoalveolar lavage (BAL)] and laboratory diagnostic procedures [various staining procedures or polymerase chain reactions (PCR)], or clinical diagnosis with chest x-ray alone. Our analyses were conducted from the perspective of the government payer among ambulatory, HIV-infected patients with symptoms of pneumonia presenting to HIV clinics and hospitals in South Africa. Costing data were obtained from the National Institutes of Communicable Diseases in South Africa. At 50% disease prevalence, diagnostic procedures involving expectorated sputum with any PCR method, or induced sputum with nested or real-time PCR, were all highly cost-effective, successfully treating 77–90% of patients at $26–51 per life-year gained. Procedures using BAL specimens were significantly more expensive without added benefit, successfully treating 68–90% of patients at costs of $189–232 per life-year gained. A relatively cost-effective diagnostic procedure that did not require PCR was Toluidine Blue O staining of induced sputum ($25 per life-year gained, successfully treating 68% of patients). Diagnosis using chest x-rays alone resulted in successful treatment of 77% of patients, though cost-effectiveness was reduced ($109 per life-year gained) compared with several molecular diagnostic options. Conclusions: For diagnosis of PCP, use of PCR technologies, when combined with less-invasive patient specimens such as expectorated or induced sputum, represents a more cost-effective option than any diagnostic procedure using BAL, or chest x-ray alone. PMID:21858013
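The cost per life-year gained figures above are incremental cost-effectiveness ratios; a minimal sketch of the calculation, with invented numbers rather than the study's estimates, is:

```python
def cost_per_life_year(cost_a, ly_a, cost_b, ly_b):
    """Incremental cost-effectiveness ratio of strategy A versus strategy B:
    extra cost divided by extra life-years gained per patient."""
    return (cost_a - cost_b) / (ly_a - ly_b)

# Illustrative numbers only (not taken from the study): a PCR-based strategy
# versus chest x-ray alone.
icer = cost_per_life_year(cost_a=40.0, ly_a=10.20, cost_b=25.0, ly_b=10.05)
print(f"incremental cost-effectiveness: ${icer:.0f} per life-year gained")
```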
Neville, David C A; Coquard, Virginie; Priestman, David A; te Vruchte, Danielle J M; Sillence, Daniel J; Dwek, Raymond A; Platt, Frances M; Butters, Terry D
2004-08-15
Interest in cellular glycosphingolipid (GSL) function has necessitated the development of a rapid and sensitive method to both analyze and characterize the full complement of structures present in various cells and tissues. An optimized method to characterize oligosaccharides released from glycosphingolipids following ceramide glycanase digestion has been developed. The procedure uses the fluorescent compound anthranilic acid (2-aminobenzoic acid; 2-AA) to label oligosaccharides prior to analysis using normal-phase high-performance liquid chromatography. The labeling procedure is rapid, selective, and easy to perform and is based on the published method of Anumula and Dhume [Glycobiology 8 (1998) 685], originally used to analyze N-linked oligosaccharides. It is less time consuming than a previously published 2-aminobenzamide labeling method [Anal. Biochem. 298 (2001) 207] for analyzing GSL-derived oligosaccharides, as the fluorescent labeling is performed on the enzyme reaction mixture. The purification of 2-AA-labeled products has been improved to ensure recovery of oligosaccharides containing one to four monosaccharide units, which was not previously possible using the Anumula and Dhume post-derivatization purification procedure. This new approach may also be used to analyze both N- and O-linked oligosaccharides.
Pretty, Iain A; Maupomé, Gerardo
2004-04-01
Dentists are involved in diagnosing disease in every aspect of their clinical practice. A range of tests, systems, guides and equipment--which can be generally referred to as diagnostic procedures--are available to aid in diagnostic decision making. In this era of evidence-based dentistry, and given the increasing demand for diagnostic accuracy and properly targeted health care, it is important to assess the value of such diagnostic procedures. Doing so allows dentists to weight appropriately the information these procedures supply, to purchase new equipment if it proves more reliable than existing equipment or even to discard a commonly used procedure if it is shown to be unreliable. This article, the first in a 6-part series, defines several concepts used to express the usefulness of diagnostic procedures, including reliability and validity, and describes some of their operating characteristics (statistical measures of performance), in particular, specificity and sensitivity. Subsequent articles in the series will discuss the value of diagnostic procedures used in daily dental practice and will compare today's most innovative procedures with established methods.
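The operating characteristics discussed in the article can be computed directly from a 2x2 table comparing test results against a reference standard; a small sketch with illustrative counts:

```python
def diagnostic_performance(tp, fp, fn, tn):
    """Operating characteristics of a diagnostic procedure from a 2x2 table
    comparing the test result against a reference (gold) standard."""
    return {
        "sensitivity": tp / (tp + fn),          # diseased correctly detected
        "specificity": tn / (tn + fp),          # healthy correctly cleared
        "ppv": tp / (tp + fp),                  # positive predictive value
        "npv": tn / (tn + fn),                  # negative predictive value
    }

# Illustrative counts for a caries-detection aid evaluated against histology.
print(diagnostic_performance(tp=45, fp=5, fn=10, tn=90))
```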
Kepler AutoRegressive Planet Search
NASA Astrophysics Data System (ADS)
Feigelson, Eric
NASA's Kepler mission is the source of more exoplanets than any other instrument, but the discovery depends on complex statistical analysis procedures embedded in the Kepler pipeline. A particular challenge is mitigating irregular stellar variability without loss of sensitivity to faint periodic planetary transits. This proposal presents a two-stage alternative analysis procedure. First, parametric autoregressive ARFIMA models, commonly used in econometrics, remove most of the stellar variations. Second, a novel matched filter is used to create a periodogram from which transit-like periodicities are identified. This analysis procedure, the Kepler AutoRegressive Planet Search (KARPS), is confirming most of the Kepler Objects of Interest and is expected to identify additional planetary candidates. The proposed research will complete application of the KARPS methodology to the prime Kepler mission light curves of 200,000 stars, and compare the results with Kepler Objects of Interest obtained with the Kepler pipeline. We will then conduct a variety of astronomical studies based on the KARPS results. Important subsamples will be extracted, including Habitable Zone planets, hot super-Earths, grazing-transit hot Jupiters, and multi-planet systems. Ground-based spectroscopy of poorly studied candidates will be performed to better characterize the host stars. Studies of stellar variability will then be pursued based on KARPS analysis. The autocorrelation function and nonstationarity measures will be used to identify spotted stars at different stages of autoregressive modeling. Periodic variables with folded light curves inconsistent with planetary transits will be identified; they may be eclipsing or mutually illuminating binary star systems. Classification of stellar variables with KARPS-derived statistical properties will be attempted. KARPS procedures will then be applied to archived K2 data to identify planetary transits and characterize stellar variability.
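A toy sketch of the two-stage idea (autoregressive removal of stellar variability followed by a box matched filter) is given below; it uses a plain AR(1) fit and a coarse period grid rather than the ARFIMA models and periodogram of the actual KARPS pipeline, and all simulated values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(5)

# --- Simulate a light curve: AR(1) stellar variability plus periodic box transits
n, period, duration, depth = 4000, 300, 10, 0.01
flux = np.zeros(n)
for t in range(1, n):
    flux[t] = 0.9 * flux[t - 1] + rng.normal(0, 0.001)
in_transit = (np.arange(n) % period) < duration
flux[in_transit] -= depth

# --- Stage 1: remove autoregressive variability (AR(1) fitted by least squares).
phi = np.linalg.lstsq(flux[:-1, None], flux[1:], rcond=None)[0][0]
resid = flux[1:] - phi * flux[:-1]

# --- Stage 2: box matched filter over trial periods on the whitened residuals.
def box_statistic(x, trial_period, trial_duration):
    phase = np.arange(x.size) % trial_period
    in_box = phase < trial_duration
    return x[~in_box].mean() - x[in_box].mean()   # mean depth of the box

trial_periods = np.arange(50, 600, 5)
stats = [box_statistic(resid, p, duration) for p in trial_periods]
print("best trial period:", trial_periods[int(np.argmax(stats))])
```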
Boriani, Filippo; Villani, Riccardo; Morselli, Paolo Giovanni
2014-10-01
Obesity is increasingly frequent in our society and is associated closely with metabolic disorders. As some studies have suggested, removal of fat tissue through liposuction and dermolipectomies may be of some benefit in the improvement of metabolic indices. This article aimed to review the published literature on this topic and to evaluate metabolic variations meta-analytically after liposuction, dermolipectomy, or both. Through a literature search with the PubMed/Medline database, 14 studies were identified. All articles were analyzed, and several metabolic variables were chosen in the attempt to meta-analyze the effect of adipose tissue removal through the various studies. All statistical calculations were performed with Review Manager (RevMan), version 5.0. Several cardiovascular and metabolic variables are described as prone to variations after body-contouring procedures when a significant amount of adipose tissue has been excised. Four of the studies included in the analysis reported improvements in all the parameters examined. Seven articles showed improvement in some variables and no improvement in others, whereas three studies showed no beneficial variation in any of the considered indicators after body-contouring procedures. Fasting plasma insulin was identified as the only variable for which a meta-analysis of five included studies was possible. The meta-analysis showed a statistically significant reduction in fasting plasma insulin resulting from large-volume liposuction in obese healthy women. Many beneficial metabolic effects resulting from dermolipectomy and liposuction procedures are described in the literature. In particular, fasting plasma insulin and thus insulin sensitivity seem to be positively influenced. Further research, including prospective clinical studies, is necessary for better exploration of the effects that body-contouring plastic surgery procedures have on metabolic parameters.
Effectiveness of adverse effects search filters: drugs versus medical devices.
Farrah, Kelly; Mierzwinski-Urban, Monika; Cimon, Karen
2016-07-01
The study tested the performance of adverse effects search filters when searching for safety information on medical devices, procedures, and diagnostic tests in MEDLINE and Embase. The sensitivity of 3 filters was determined using a sample of 631 references from 131 rapid reviews related to the safety of health technologies. The references were divided into 2 sets by type of intervention: drugs and nondrug health technologies. Keyword and indexing analysis were performed on references from the nondrug testing set that 1 or more of the filters did not retrieve. For all 3 filters, sensitivity was lower for nondrug health technologies (ranging from 53%-87%) than for drugs (88%-93%) in both databases. When tested on the nondrug health technologies set, sensitivity was lower in Embase (ranging from 53%-81%) than in MEDLINE (67%-87%) for all filters. Of the nondrug records that 1 or more of the filters missed, 39% of the missed MEDLINE records and 18% of the missed Embase records did not contain any indexing terms related to adverse events. Analyzing the titles and abstracts of nondrug records that were missed by any 1 filter, the most commonly used keywords related to adverse effects were: risk, complications, mortality, contamination, hemorrhage, and failure. In this study, adverse effects filters were less effective at finding information about the safety of medical devices, procedures, and tests compared to information about the safety of drugs.
Henninger, B.; Putzer, D.; Kendler, D.; Uprimny, C.; Virgolini, I.; Gunsilius, E.; Bale, R.
2012-01-01
Aim. The purpose of this study was to evaluate the accuracy of 2-deoxy-2-[fluorine-18]fluoro-D-glucose (FDG) positron emission tomography (PET), computed tomography (CT), and software-based image fusion of both modalities in the imaging of non-Hodgkin's lymphoma (NHL) and Hodgkin's disease (HD). Methods. 77 patients with NHL (n = 58) or HD (n = 19) underwent a FDG PET scan, a contrast-enhanced CT, and a subsequent digital image fusion during initial staging or followup. 109 examinations of each modality were evaluated and compared to each other. Conventional staging procedures, other imaging techniques, laboratory screening, and follow-up data constituted the reference standard for comparison with image fusion. Sensitivity and specificity were calculated for CT and PET separately. Results. Sensitivity and specificity for detecting malignant lymphoma were 90% and 76% for CT and 94% and 91% for PET, respectively. A lymph node region-based analysis (comprising 14 defined anatomical regions) revealed a sensitivity of 81% and a specificity of 97% for CT and 96% and 99% for FDG PET, respectively. Only three of 109 image fusion findings needed further evaluation (false positive). Conclusion. Digital fusion of PET and CT improves the accuracy of staging, restaging, and therapy monitoring in patients with malignant lymphoma and may reduce the need for invasive diagnostic procedures. PMID:22654631
Price, Charlotte; Stallard, Nigel; Creton, Stuart; Indans, Ian; Guest, Robert; Griffiths, David; Edwards, Philippa
2010-01-01
Acute inhalation toxicity of chemicals has conventionally been assessed by the median lethal concentration (LC50) test (Organisation for Economic Co-operation and Development (OECD) TG 403). Two new methods, the recently adopted acute toxic class method (ATC; OECD TG 436) and a proposed fixed concentration procedure (FCP), have been considered, but statistical evaluations of these methods did not investigate the influence of differential sensitivity between male and female rats on the outcomes. This paper presents an analysis of data from the assessment of acute inhalation toxicity for 56 substances. Statistically significant differences between the LC50 for males and females were found for 16 substances, with greater than 10-fold differences in the LC50 for two substances. The paper also reports a statistical evaluation of the three test methods in the presence of unanticipated gender differences. With TG 403, a gender difference leads to a slightly greater chance of under-classification. This is also the case for the ATC method, but more pronounced than for TG 403, with misclassification of nearly all substances from Globally Harmonised System (GHS) class 3 into class 4. As the FCP uses females only, if females are more sensitive, the classification is unchanged. If males are more sensitive, the procedure may lead to under-classification. Additional research on modification of the FCP is thus proposed. PMID:20488841
NASA Technical Reports Server (NTRS)
Decker, Arthur J.; Weiland, Kenneth E.
2003-01-01
This paper answers some performance and calibration questions about a non-destructive-evaluation (NDE) procedure that uses artificial neural networks to detect structural damage or other changes from sub-sampled characteristic patterns. The method shows increasing sensitivity as the number of sub-samples increases from 108 to 6912. The sensitivity of this robust NDE method is not affected by noisy excitations of the first vibration mode. A calibration procedure is proposed and demonstrated where the output of a trained net can be correlated with the outputs of the point sensors used for vibration testing. The calibration procedure is based on controlled changes of fastener torques. A heterodyne interferometer is used as a displacement sensor for a demonstration of the challenges to be handled in using standard point sensors for calibration.
Development of a drift-correction procedure for a direct-reading spectrometer
NASA Technical Reports Server (NTRS)
Chapman, G. B., II; Gordon, W. A.
1977-01-01
A procedure which provides automatic correction for drifts in the radiometric sensitivity of each detector channel in a direct-reading emission spectrometer is described. Such drifts are customarily controlled by the regular analysis of standards, which provides corrections for changes in the excitational, optical, and electronic components of the instrument. The automatic procedure described here corrects for the optical and electronic drifts without recourse to standards, which is necessary if the time, effort, and cost of processing standards are to be minimized. This method of radiometric drift correction uses a 1,000-W tungsten-halogen reference lamp to illuminate each detector through the same optical path as that traversed during sample analysis. The responses of the detector channels to this reference light are regularly compared with the channel responses to the same light intensity at the time of analytical calibration in order to determine and correct for drift. Except for placing the lamp in position, the procedure is fully automated and compensates for changes in spectral intensity due to variations in lamp current. A discussion of the implementation of this drift-correction system is included.
Folded concave penalized learning in identifying multimodal MRI marker for Parkinson’s disease
Liu, Hongcheng; Du, Guangwei; Zhang, Lijun; Lewis, Mechelle M.; Wang, Xue; Yao, Tao; Li, Runze; Huang, Xuemei
2016-01-01
Background: Brain MRI holds promise to gauge different aspects of Parkinson's disease (PD)-related pathological changes. Its analysis, however, is hindered by the high-dimensional nature of the data. New method: This study introduces folded concave penalized (FCP) sparse logistic regression to identify biomarkers for PD from a large number of potential factors. The proposed statistical procedures target the challenges of high dimensionality with a limited number of acquired data samples. The maximization problem associated with the sparse logistic regression model is solved by local linear approximation. The proposed procedures are then applied to the empirical analysis of multimodal MRI data. Results: From 45 features, the proposed approach identified 15 MRI markers and the UPSIT, which are known to be clinically relevant to PD. By combining the MRI and clinical markers, we can substantially enhance the specificity and sensitivity of the model, as indicated by the ROC curves. Comparison to existing methods: We compare the folded concave penalized learning scheme with both the Lasso penalized scheme and principal component analysis-based feature selection (PCA) in the Parkinson's biomarker identification problem that takes into account both the clinical features and MRI markers. The folded concave penalty method demonstrates a substantially better clinical potential than both the Lasso and PCA in terms of specificity and sensitivity. Conclusions: For the first time, we applied the FCP learning method to MRI biomarker discovery in PD. The proposed approach successfully identified MRI markers that are clinically relevant. Combining these biomarkers with clinical features can substantially enhance performance. PMID:27102045
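The local linear approximation (LLA) idea mentioned above can be sketched in a few lines: a folded concave penalty such as SCAD is replaced, at each iteration, by its tangent, so the problem reduces to a sequence of weighted-L1 (adaptive Lasso) logistic regressions. The snippet below is my own illustration, not the authors' code; the regularisation strength `lam`, the SCAD constant `a = 3.7`, and the iteration count are placeholder choices.

```python
# Hedged sketch of folded concave (SCAD) penalized logistic regression via LLA:
# each iteration solves a weighted-L1 logistic regression, with weights given by
# the SCAD penalty derivative at the current coefficient estimates. The weighted
# L1 problem is solved by rescaling columns and using a standard L1 solver.
import numpy as np
from sklearn.linear_model import LogisticRegression

def scad_derivative(beta, lam, a=3.7):
    b = np.abs(beta)
    return np.where(b <= lam, lam, np.maximum(a * lam - b, 0.0) / (a - 1.0))

def fcp_logistic(X, y, lam=0.1, n_iter=5):
    p = X.shape[1]
    beta = np.zeros(p)
    for _ in range(n_iter):
        w = scad_derivative(beta, lam)       # per-feature penalty weights
        scale = 1.0 / np.maximum(w, 1e-8)    # weighted L1 via column rescaling
        clf = LogisticRegression(penalty="l1", C=1.0, solver="liblinear")
        clf.fit(X * scale, y)
        beta = clf.coef_.ravel() * scale     # map coefficients back to original scale
    return beta                              # sparse vector: nonzeros are selected markers
```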
Sensitivity of tire response to variations in material and geometric parameters
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Tanner, John A.; Peters, Jeanne M.
1992-01-01
A computational procedure is presented for evaluating the analytic sensitivity derivatives of the tire response with respect to material and geometric parameters of the tire. The tire is modeled by using a two-dimensional laminated anisotropic shell theory with the effects of variation in material and geometric parameters included. The computational procedure is applied to the Space Shuttle nose-gear tire subjected to uniform inflation pressure. Numerical results are presented showing the sensitivity of the different response quantities to variations in the material characteristics of both the cord and the rubber.
Vaez, Savil Costa; Faria-e-Silva, André Luís; Loguércio, Alessandro Dourado; Fernandes, Micaelle Tenório Guedes; Nahsan, Flávia Pardo Salata
2018-01-01
Purpose: This study determined the effectiveness of the preemptive administration of etodolac on the risk and intensity of tooth sensitivity and the bleaching effect caused by in-office bleaching using 35% hydrogen peroxide. Material and methods: Fifty patients were selected for this triple-blind, randomized, crossover, and placebo-controlled clinical trial. Etodolac (400 mg) or placebo was administered in a single dose 1 hour prior to the bleaching procedure. The whitening treatment with 35% hydrogen peroxide was carried out in two sessions with a 7-day interval. Tooth sensitivity was assessed before, during, and 24 hours after the procedure using the visual analog scale and the verbal rating scale. Color alteration was assessed by a bleach guide scale, 7 days after each session. Relative risk of sensitivity was calculated and adjusted by session, while overall risk was compared by McNemar's test. Data on the sensitivity level of both scales and color shade were subjected to Friedman, Wilcoxon, and Mann-Whitney tests, respectively (α=0.05). Results: The preemptive administration of etodolac did not affect the risk of tooth sensitivity or the level of sensitivity reported, regardless of the time of evaluation and scale used. The sequence of treatment allocation did not affect bleaching effectiveness, while the second session resulted in additional color modification. The preemptive administration of etodolac in a single dose 1 hour prior to in-office tooth bleaching did not alter tooth color, or the risk and intensity of tooth sensitivity reported by patients. Conclusion: A single-dose preemptive administration of 400 mg of etodolac did not affect either the risk of tooth sensitivity or the level of sensitivity reported by patients, during or after the in-office tooth bleaching procedure. PMID:29412363
Cost analysis of open radical cystectomy versus robot-assisted radical cystectomy.
Bansal, Sukhchain S; Dogra, Tara; Smith, Peter W; Amran, Maisarah; Auluck, Ishna; Bhambra, Maninder; Sura, Manraj S; Rowe, Edward; Koupparis, Anthony
2018-03-01
To perform a cost analysis comparing the cost of robot-assisted radical cystectomy (RARC) with open RC (ORC) in a UK tertiary referral centre and to identify the key cost drivers. Data on hospital length of stay (LOS), operative time (OT), transfusion rate and volume, and complication rate were obtained from a prospectively updated institutional database for patients undergoing RARC or ORC. A cost decision tree model was created. Sensitivity analysis was performed to find key drivers of overall cost and to find breakeven points with ORC. Monte Carlo analysis was performed to quantify the variability in the dataset. One RARC procedure costs £12 449.87, or £12 106.12 if the robot was donated via charitable funds. In comparison, one ORC procedure costs £10 474.54. RARC is 18.9% more expensive than ORC. The key cost drivers were OT, LOS, and the number of cases performed per annum. High ongoing equipment costs remain a large barrier to the cost of RARC falling. However, minimal improvements in patient quality of life would be required to offset this difference. © 2017 The Authors BJU International © 2017 BJU International Published by John Wiley & Sons Ltd.
Riding and handling qualities of light aircraft: A review and analysis
NASA Technical Reports Server (NTRS)
Smetana, F. O.; Summery, D. C.; Johnson, W. D.
1972-01-01
Design procedures and supporting data necessary for configuring light aircraft to obtain desired responses to pilot commands and gusts are presented. The procedures employ specializations of modern military and jet transport practice where these provide an improvement over earlier practice. General criteria for riding and handling qualities are discussed in terms of the airframe dynamics. Methods available in the literature for calculating the coefficients required for a linearized analysis of the airframe dynamics are reviewed in detail. The review also treats the relation of spin and stall to airframe geometry. Root locus analysis is used to indicate the sensitivity of airframe dynamics to variations in individual stability derivatives and to variations in geometric parameters. Computer programs are given for finding the frequencies, damping ratios, and time constants of all rigid body modes and for generating time histories of aircraft motions in response to control inputs. Appendices are included presenting the derivation of the linearized equations of motion; the stability derivatives; the transfer functions; approximate solutions for the frequency, damping ratio, and time constants; an indication of methods to be used when linear analysis is inadequate; sample calculations; and an explanation of the use of root locus diagrams and Bode plots.
Tedesco, Giorgia; Faggiano, Francesco C; Leo, Erica; Derrico, Pietro; Ritrovato, Matteo
2016-11-01
Robotic surgery has been proposed as a minimally invasive surgical technique with advantages for both surgeons and patients, but it is associated with high costs (installation, use, and maintenance). The Health Technology Assessment Unit of the Bambino Gesù Children's Hospital sought to investigate the economic sustainability of robotic surgery and to assess its impact on the hospital budget. Break-even and cost-minimization analyses were performed. A deterministic approach for sensitivity analysis was applied by varying the values of parameters between pre-defined ranges in different scenarios to see how the outcomes might differ. The break-even analysis indicated that at least 349 annual interventions would need to be carried out to reach the break-even point. The cost-minimization analysis showed that robotic surgery was the most expensive procedure among the considered alternatives (in terms of the contribution margin). Robotic surgery is a good clinical alternative to laparoscopic and open surgery (for many pediatric operations). However, the costs of robotic procedures are higher than the equivalent laparoscopic and open surgical interventions. Therefore, in the short run, these findings do not seem to support the decision to introduce a robotic system in our hospital.
Preston, Tom
2014-01-01
This paper discusses some of the recent improvements in instrumentation used for stable isotope tracer measurements in the context of measuring retinol stores, in vivo. Tracer costs, together with concerns that larger tracer doses may perturb the parameter under study, demand that ever more sensitive mass spectrometric techniques are developed. GCMS is the most widely used technique. It has high sensitivity in terms of sample amount and uses high resolution GC, yet its ability to detect low isotope ratios is limited by background noise. LCMSMS may become more accessible for tracer studies. Its ability to measure low level stable isotope tracers may prove superior to GCMS, but it is isotope ratio MS (IRMS) that has been designed specifically for low level stable isotope analysis through accurate analysis of tracer:tracee ratios (the tracee being the unlabelled species). Compound-specific isotope analysis, where GC is interfaced to IRMS, is gaining popularity. Here, individual 13C-labelled compounds are separated by GC, combusted to CO2 and transferred on-line for ratiometric analysis by IRMS at the ppm level. However, commercially-available 13C-labelled retinol tracers are 2 - 4 times more expensive than deuterated tracers. For 2H-labelled compounds, GC-pyrolysis-IRMS has now become more generally available as an operating mode on the same IRMS instrument. Here, individual compounds are separated by GC and pyrolysed to H2 at high temperature for analysis by IRMS. It is predicted that GC-pyrolysis-IRMS will facilitate low level tracer procedures to measure body retinol stores, as has been accomplished in the case of fatty acids and amino acids. Sample size requirements for GC-P-IRMS may exceed those of GCMS, but this paper discusses sample preparation procedures and predicts improvements, particularly in the efficiency of sample introduction.
Osterhoff, Georg; O'Hara, Nathan N; D'Cruz, Jennifer; Sprague, Sheila A; Bansback, Nick; Evaniew, Nathan; Slobogean, Gerard P
2017-03-01
There is ongoing debate regarding the optimal surgical treatment of complex proximal humeral fractures in elderly patients. To evaluate the cost-effectiveness of reverse total shoulder arthroplasty (RTSA) compared with hemiarthroplasty (HA) in the management of complex proximal humeral fractures, using a cost-utility analysis. On the basis of data from published literature, a cost-utility analysis was conducted using decision tree and Markov modeling. A single-payer perspective, with a willingness-to-pay (WTP) threshold of Can$50,000 (Canadian dollars), and a lifetime time horizon were used. The incremental cost-effectiveness ratio (ICER) was used as the study's primary outcome measure. In comparison with HA, the incremental cost per quality-adjusted life-year gained for RTSA was Can$13,679. One-way sensitivity analysis revealed the model to be sensitive to the RTSA implant cost and the RTSA procedural cost. The ICER of Can$13,679 is well below the WTP threshold of Can$50,000, and probabilistic sensitivity analysis demonstrated that 92.6% of model simulations favored RTSA. Our economic analysis found that RTSA for the treatment of complex proximal humeral fractures in the elderly is the preferred economic strategy when compared with HA. The ICER of RTSA is well below standard WTP thresholds, and its estimate of cost-effectiveness is similar to other highly successful orthopedic strategies such as total hip arthroplasty for the treatment of hip arthritis. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
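As a minimal worked illustration of the headline figure above, the sketch below computes an incremental cost-effectiveness ratio and compares it with a willingness-to-pay threshold; the lifetime costs and QALYs are hypothetical placeholders chosen only so the ratio lands near the reported Can$13,679, not outputs of the study's Markov model.

```python
# Hedged sketch of the ICER calculation used in cost-utility analyses.
def icer(cost_new, cost_old, qaly_new, qaly_old):
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical lifetime costs and QALYs (illustrative only).
value = icer(cost_new=25_000, cost_old=21_000, qaly_new=8.5, qaly_old=8.2076)
print(f"ICER = Can${value:,.0f} per QALY gained; adopt RTSA if below the Can$50,000 WTP threshold")
```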
Evaluation of a radiation protection cabin for invasive electrophysiological procedures.
Dragusin, Octavian; Weerasooriya, Rukshen; Jaïs, Pierre; Hocini, Mélèze; Ector, Joris; Takahashi, Yoshihide; Haïssaguerre, Michel; Bosmans, Hilde; Heidbüchel, Hein
2007-01-01
Complex invasive electrophysiological procedures may result in high cumulative operator radiation exposure. Classical protection with lead aprons results in discomfort while radioprotection remains incomplete. This study evaluated the usefulness of a radiation protection cabin (RPC) that completely surrounds the operator. The evaluation was performed independently in two electrophysiology laboratories (E1-Leuven, Belgium; E2-Bordeaux, France), comparing operator radiation exposure using the RPC vs. a 0.5 mm lead-equivalent apron (total of 135 procedures). E1 used thermoluminescent dosimeters (TLDs) placed at 16 positions in and out of the RPC and nine positions in and out of the apron. E2 used more sensitive electronic personal dosimeters (EPD), placed at waist and neck. The sensitivity thresholds of the TLDs and EPDs were 10-20 microSv and 1-1.5 microSv, respectively. All procedures could be performed unimpeded with the RPC. Median TLD dose values outside protected areas were in the range of 57-452 microSv, whereas doses under the apron or inside the RPC were all at the background radiation level, irrespective of procedure and fluoroscopy duration and of radiation energy delivered. In addition, the RPC protected the entire body (except the hands), whereas lead apron protection is incomplete. Also with the more sensitive EPDs, the radiation dose within the RPC was at the sensitivity threshold/background level (1.3+/-0.6 microSv). Again, radiation to the head was significantly lower within the RPC (1.9+/-1.2 microSv) than with the apron (102+/-23 microSv, P<0.001). The use of the RPC allows catheter ablation procedures to be performed without compromising catheter manipulation, and with negligible radiation exposure for the operator.
Peptide biomarkers as a way to determine meat authenticity.
Sentandreu, Miguel Angel; Sentandreu, Enrique
2011-11-01
Meat fraud encompasses many illegal procedures affecting the composition of meat and meat products, commonly carried out with the aim of increasing profit. These practices need to be controlled by legal authorities by means of robust, accurate, and sensitive methodologies capable of ensuring that fraudulent or accidental mislabelling does not arise. Common strategies traditionally used to assess meat authenticity have been based on methods such as chemometric analysis of large datasets, immunoassays, or DNA analysis. The identification of peptide biomarkers specific to a particular meat species, tissue, or ingredient by proteomic technologies constitutes an interesting and promising alternative to existing methodologies due to its high discriminating power, robustness, and sensitivity. The possibility of developing standardized protein extraction protocols, together with the considerably higher resistance of peptide sequences to food processing as compared to DNA sequences, would overcome some of the limitations currently existing for quantitative determinations of highly processed food samples. The use of routine mass spectrometry equipment would make the technology suitable for control laboratories. Copyright © 2011 Elsevier Ltd. All rights reserved.
Ferracci, Valerio; Brown, Andrew S; Harris, Peter M; Brown, Richard J C
2015-02-27
The response of a flame ionisation detector (FID) on a gas chromatograph to methane, ethane, propane, i-butane and n-butane in a series of multi-component refinery gas standards was investigated to assess the matrix sensitivity of the instrument. High-accuracy synthetic gas standards, traceable to the International System of Units, were used to minimise uncertainties. The instrument response exhibited a small dependence on the component amount fraction: this behaviour, consistent with that of another FID, was thoroughly characterised over a wide range of component amount fractions and was shown to introduce a negligible bias in the analysis of refinery gas samples, provided a suitable reference standard is employed. No significant effects of the molar volume, density and viscosity of the gas mixtures on the instrument response were observed, indicating that the FID is suitable for the analysis of refinery gas mixtures over a wide range of component amount fractions provided that appropriate drift-correction procedures are employed. Copyright © 2015 Elsevier B.V. All rights reserved.
Grandi, Vieri; Sessa, Maurizio; Pisano, Luigi; Rossi, Riccardo; Galvan, Arturo; Gattai, Riccardo; Mori, Moira; Tiradritti, Luana; Bacci, Stefano; Zuccati, Giuliano; Cappugi, Pietro; Pimpinelli, Nicola
2018-04-15
Photodynamic therapy (PDT) is a procedure based on the interaction between a photosensitizer (PS), a light source with a specific wavelength, and oxygen. The aim of this review is to provide a brief and updated analysis of scientific reports on the use of PDT with topical PSs in the management of oncological, infectious, and inflammatory disorders involving mucosal and semimucosal areas, with a specific focus on diseases of dermatologic interest. Copyright © 2018. Published by Elsevier B.V.
NASA Technical Reports Server (NTRS)
Giles, G. L.; Rogers, J. L., Jr.
1982-01-01
The implementation includes a generalized method for specifying element cross-sectional dimensions as design variables that can be used in analytically calculating derivatives of output quantities from static stress, vibration, and buckling analyses for both membrane and bending elements. Limited sample results for static displacements and stresses are presented to indicate the advantages of analytically calculating response derivatives compared to finite difference methods. Continuing developments to implement these procedures into an enhanced version of the system are also discussed.
A method to estimate weight and dimensions of large and small gas turbine engines
NASA Technical Reports Server (NTRS)
Onat, E.; Klees, G. W.
1979-01-01
A computerized method was developed to estimate the weight and envelope dimensions of large and small gas turbine engines to within ±5% to 10%. The method is based on correlations of component weight and design features of 29 data base engines. Rotating components were estimated by a preliminary design procedure which is sensitive to blade geometry, operating conditions, material properties, shaft speed, hub-to-tip ratio, etc. The development and justification of the method selected, and the various methods of analysis, are discussed.
Dugué, Audrey Emmanuelle; Pulido, Marina; Chabaud, Sylvie; Belin, Lisa; Gal, Jocelyn
2016-12-01
We describe how to estimate progression-free survival while dealing with interval-censored data in the setting of clinical trials in oncology. Three procedures with SAS and R statistical software are described: one allowing for a nonparametric maximum likelihood estimation of the survival curve using the EM-ICM (Expectation and Maximization-Iterative Convex Minorant) algorithm as described by Wellner and Zhan in 1997; a sensitivity analysis procedure in which the progression time is assigned (i) at the midpoint, (ii) at the upper limit (reflecting the standard analysis, in which the progression time is assigned at the first radiologic exam showing progressive disease), or (iii) at the lower limit of the censoring interval; and finally, two multiple-imputation procedures based on either a uniform or the nonparametric maximum likelihood estimation (NPMLE) distribution. Clin Cancer Res; 22(23); 5629-35. ©2016 AACR. ©2016 American Association for Cancer Research.
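The single-assignment sensitivity analysis described above is simple to reproduce; the sketch below is my own Python illustration using the lifelines package, rather than the SAS/R procedures the paper describes. It assigns each interval-censored progression time to the lower limit, midpoint, or upper limit of its censoring interval and refits a standard Kaplan-Meier estimator; the example intervals are invented.

```python
# Hedged sketch: single-imputation sensitivity analysis for interval-censored progression.
import numpy as np
from lifelines import KaplanMeierFitter

def km_with_assignment(lower, upper, observed, rule="midpoint"):
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    observed = np.asarray(observed, bool)
    if rule == "midpoint":
        t = np.where(observed, (lower + upper) / 2.0, upper)
    elif rule == "upper":          # standard analysis: first scan showing progression
        t = upper
    else:                          # "lower" limit of the censoring interval
        t = np.where(observed, lower, upper)
    kmf = KaplanMeierFitter()
    kmf.fit(durations=t, event_observed=observed)
    return kmf.median_survival_time_

# Invented intervals (months between consecutive scans); observed=False means censored.
L = [0, 3, 6, 6, 9]; R = [3, 6, 9, 12, 12]; obs = [True, True, False, True, False]
for rule in ("lower", "midpoint", "upper"):
    print(rule, km_with_assignment(L, R, obs, rule))
```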
Development of a screening method for the determination of forty-nine priority pollutants in soil
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kiang, P.H.T.
1985-01-01
An extraction procedure followed by capillary GC-MS analysis was used to determine soil pollutants. Dual pH solutions with methylene chloride were used as extraction solvent system. Both base/neutral and acidic fractions were analyzed on the same fused silica 30 meter SPB-1 (SE-30) column. A GC-FID with a 60 meter wide-bore SPB-1 glass capillary column was used for quantitative analysis due to its larger sample capacity and higher sensitivity. The precision and accuracy for 5.1 ppm (51 μg/10 g) concentration in zero soil was less than 25% RSD. A headspace technique was also developed for the determination of volatile compounds. The same instrumental conditions and columns were used as in the extraction procedure. The precision and accuracy for 3 grams soil sample spiked with 5.1 ppm (52 μg/10 mL) pollutant mixture in a 20 mL vial was less than 3% RSD.
NASA Astrophysics Data System (ADS)
Tiwari, Vaibhav
2018-07-01
The population analysis and estimation of merger rates of compact binaries is one of the important topics in gravitational wave astronomy. The primary ingredient in these analyses is the population-averaged sensitive volume. Typically, the sensitive volume of a given search to a given simulated source population is estimated by drawing signals from the population model and adding them to the detector data as injections. Subsequently, the injections, which are simulated gravitational waveforms, are searched for by the search pipelines and their signal-to-noise ratio (SNR) is determined. The sensitive volume is then estimated, using Monte Carlo (MC) integration, from the total number of injections added to the data, the number of injections that cross a chosen threshold on SNR, and the astrophysical volume in which the injections are placed. So far, only fixed population models have been used in the estimation of binary black hole (BBH) merger rates. However, as the scope of population analyses broadens in terms of the methodologies and source properties considered, owing to the increasing number of observed gravitational wave (GW) signals, the procedure will need to be repeated multiple times at a large computational cost. In this letter we address the problem by performing a weighted MC integration. We show how a single set of generic injections can be weighted to estimate the sensitive volume for multiple population models, thereby greatly reducing the computational cost. The weights in this MC integral are the ratios of the output probabilities, determined by the population model and standard cosmology, to the injection probability, determined by the distribution function of the generic injections. Unlike analytical/semi-analytical methods, which usually estimate the sensitive volume using single-detector sensitivity, the method is accurate within statistical errors, comes at no added cost, and requires minimal computational resources.
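A minimal sketch of the reweighting described above (my own notation, not the paper's code): each found injection contributes a weight equal to the ratio of the population density to the injection density at its parameters, and the weighted Monte Carlo sum gives the population-averaged sensitive volume together with its statistical error.

```python
# Hedged sketch: weighted Monte Carlo estimate of the sensitive volume.
import numpy as np

def sensitive_volume(v_total, found, p_pop, p_inj):
    """v_total: astrophysical volume in which the injections were placed.
    found: boolean array, True where the injection crossed the SNR threshold.
    p_pop, p_inj: population and injection densities evaluated at each injection."""
    w = p_pop / p_inj                              # reweighting to the target population
    n = len(w)
    vt = v_total * np.sum(w * found) / n           # weighted MC estimate of <VT>
    err = v_total * np.sqrt(np.sum((w * found - vt / v_total) ** 2)) / n
    return vt, err                                 # estimate and its statistical error
```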
Kleinhans, Sonja; Herrmann, Eva; Kohnen, Thomas; Bühren, Jens
2017-08-15
Background: Iatrogenic keratectasia is one of the most dreaded complications of refractive surgery. In most cases, keratectasia develops after refractive surgery of eyes suffering from subclinical stages of keratoconus with few or no signs. Unfortunately, there has been no reliable procedure for the early detection of keratoconus. In this study, we used binary decision trees (recursive partitioning) to assess their suitability for discrimination between normal eyes and eyes with subclinical keratoconus. Patients and Methods: The method of decision tree analysis was compared with discriminant analysis, which has shown good results in previous studies. Input data were 32 eyes of 32 patients with newly diagnosed keratoconus in the contralateral eye and preoperative data of 10 eyes of 5 patients with keratectasia after laser in-situ keratomileusis (LASIK). The control group was made up of 245 normal eyes after LASIK and 12-month follow-up without any signs of iatrogenic keratectasia. Results: Decision trees gave better accuracy and specificity than did discriminant analysis. The sensitivity of decision trees was lower than the sensitivity of discriminant analysis. Conclusion: On the basis of the patient population of this study, decision trees did not prove to be superior to linear discriminant analysis for the detection of subclinical keratoconus. Georg Thieme Verlag KG Stuttgart · New York.
Thermochromatography and activation analysis
NASA Astrophysics Data System (ADS)
Stattarov, G. S.; Kist, A. A.
1999-01-01
Gas thermochromatography is a promising method in combination with neutron activation analysis. The procedure includes heating of irradiated samples in a stream of reacting carrier gas (air, chlorine, etc.) or heating in the presence of compounds that evolve gas at high temperatures. The gaseous products are passed through a tube filled with various sorbents and held under a defined temperature gradient, so that the gases condense in different parts of the column. Studies of the processes of producing and trapping volatile compounds made it possible to develop various apparatus set-ups with sorption tubes of various lengths, various temperature gradients, various filters, sorbents, etc. The sensitivity of these methods is considerably better than that of INAA.
Myeloperoxidase mRNA detection for lineage determination of leukemic blasts: retrospective analysis.
Crisan, D; Anstett, M J
1995-07-01
Myeloperoxidase (MPO) mRNA is an early myeloid marker; its detection in the morphologically and immunophenotypically primitive blasts of acute undifferentiated leukemia (AUL) establishes myeloid lineage and allows reclassification as acute myelogenous leukemia with minimal differentiation (AML-M0). We have previously reported a procedure for MPO mRNA detection by RT-PCR (reverse transcription-polymerase chain reaction) and an adaptation for use of routine hematology smears. This variant procedure allows retrospective analysis of mRNA and is used in the present study to evaluate the lineage of leukemic blasts in seven cases with morphology and cytochemistry consistent with AUL. All hematology smears used in this study were air-dried, unstained or Wright-stained, and stored at room temperature for periods varying between 3 days and 2 years. MPO mRNA was detected in six cases, establishing the myeloid lineage of the blasts and the diagnosis of AML-M0. In the remaining case, the blasts were MPO mRNA negative, confirming the diagnosis of AUL. The RT-PCR procedure for retrospective mRNA analysis is useful in the clinical setting, due to its high specificity and sensitivity, speed (less than 24 h), safety (no radioactivity), and convenient use of routine hematology smears; it is particularly attractive in clinical situations when fresh or frozen specimens are no longer available at the time when the need for molecular diagnostics becomes apparent.
Petrova, Olga E.; Garcia-Alcalde, Fernando; Zampaloni, Claudia; Sauer, Karin
2017-01-01
Global transcriptomic analysis via RNA-seq is often hampered by the high abundance of ribosomal (r)RNA in bacterial cells. To remove rRNA and enrich coding sequences, subtractive hybridization procedures have become the approach of choice prior to RNA-seq, with their efficiency varying in a manner dependent on sample type and composition. Yet, despite an increasing number of RNA-seq studies, comparative evaluation of bacterial rRNA depletion methods has remained limited. Moreover, no such study has utilized RNA derived from bacterial biofilms, which have potentially higher rRNA:mRNA ratios and higher rRNA carryover during RNA-seq analysis. Presently, we evaluated the efficiency of three subtractive hybridization-based kits in depleting rRNA from samples derived from biofilm, as well as planktonic cells of the opportunistic human pathogen Pseudomonas aeruginosa. Our results indicated different rRNA removal efficiency for the three procedures, with the Ribo-Zero kit yielding the highest degree of rRNA depletion, which translated into enhanced enrichment of non-rRNA transcripts and increased depth of RNA-seq coverage. The results indicated that, in addition to improving RNA-seq sensitivity, efficient rRNA removal enhanced detection of low abundance transcripts via qPCR. Finally, we demonstrate that the Ribo-Zero kit also exhibited the highest efficiency when P. aeruginosa/Staphylococcus aureus co-culture RNA samples were tested. PMID:28117413
Yong, Kai-Ling; Nguyen, Hai V.; Cajucom-Uy, Howard Y.; Foo, Valencia; Tan, Donald; Finkelstein, Eric A.; Mehta, Jodhbir S.
2016-01-01
Descemet stripping automated endothelial keratoplasty (DSAEK) is the most common corneal transplant procedure. A key step in the procedure is preparing the donor cornea for transplantation. This can be accomplished via 1 of 3 alternatives: the surgeon cuts the cornea on the day of surgery, the cornea is precut ahead of time in an offsite facility by a trained technician, or a precut cornea is purchased from an eye bank. Currently, there is little evidence on the costs and effectiveness of these 3 strategies to allow healthcare providers to decide on the preferred method to prepare grafts. The aim of this study was to compare the costs and relative effectiveness of each strategy. The Singapore National Eye Centre and Singapore Eye Bank performed both precut cornea and surgeon-cut cornea transplant services between 2009 and 2013. This study included 110 subjects who received precut cornea and 140 who received surgeon-cut cornea. Clinical outcomes and surgical duration were compared across the strategies using propensity score matching. The cost of each strategy was estimated using microcosting and consisted of facility costs and procedural costs including surgical duration. One-way sensitivity analysis and threshold analysis were performed. The cost for DSAEK was highest for the surgeon-cut approach ($13,965 per procedure), followed by purchasing precut corneas ($12,659) and then setting up precutting ($12,421). The higher procedural cost of the surgeon-cut approach was largely due to the longer duration of the procedure (surgeon-cut = 72.54 minutes, precut = 59.45 minutes, P < 0.001) and the higher surgeon fees. There was no evidence of differences in clinical outcomes between grafts that were precut or surgeon-cut. Threshold analysis demonstrated that if the number of cases was below 31 a year, the strategy that yielded the lowest cost was purchasing precut cornea from an eye bank. If there were more than 290 cases annually, the cheapest option would be to set up a precutting facility. Our findings suggest that it is more efficient for centers that are performing a large number of cornea transplants (more than 290 cases) to set up their own facility to conduct precutting. PMID:26937927
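The threshold analysis above boils down to comparing a strategy with a fixed annual facility cost against one with a purely per-case cost; the sketch below shows that structure with placeholder numbers (the fixed and variable costs are assumptions for illustration, not the study's figures, so the breakeven volume it prints is not the study's 290-case threshold).

```python
# Hedged sketch of a breakeven (threshold) analysis on annual case volume.
def breakeven_cases(fixed_facility_cost, cost_per_case_inhouse, cost_per_case_purchased):
    """Annual volume above which an in-house precutting facility becomes cheaper
    than purchasing precut corneas from an eye bank (placeholder cost structure)."""
    saving_per_case = cost_per_case_purchased - cost_per_case_inhouse
    return fixed_facility_cost / saving_per_case

# Placeholder costs (illustrative only, not from the study).
n_star = breakeven_cases(fixed_facility_cost=150_000.0,
                         cost_per_case_inhouse=12_100.0,
                         cost_per_case_purchased=12_659.0)
print(f"in-house precutting pays off above ~{n_star:.0f} cases per year")
```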
Sediment fingerprinting experiments to test the sensitivity of multivariate mixing models
NASA Astrophysics Data System (ADS)
Gaspar, Leticia; Blake, Will; Smith, Hugh; Navas, Ana
2014-05-01
Sediment fingerprinting techniques provide insight into the dynamics of sediment transfer processes and support for catchment management decisions. As questions being asked of fingerprinting datasets become increasingly complex, validation of model output and sensitivity tests are increasingly important. This study adopts an experimental approach to explore the validity and sensitivity of mixing model outputs for materials with contrasting geochemical and particle size composition. The experiments reported here focused on (i) the sensitivity of model output to different fingerprint selection procedures and (ii) the influence of source material particle size distributions on model output. Five soils with significantly different geochemistry, soil organic matter and particle size distributions were selected as experimental source materials. A total of twelve sediment mixtures were prepared in the laboratory by combining different quantified proportions of the < 63 µm fraction of the five source soils, i.e. assuming no fluvial sorting of the mixture. The geochemistry of all source and mixture samples (5 source soils and 12 mixed soils) was analysed using X-ray fluorescence (XRF). Tracer properties were selected from 18 elements for which mass concentrations were found to be significantly different between sources. Sets of fingerprint properties that discriminate target sources were selected using a range of different independent statistical approaches (e.g. Kruskal-Wallis test, Discriminant Function Analysis (DFA), Principal Component Analysis (PCA), or correlation matrix). Summary results for the use of the mixing model with the different sets of fingerprint properties for the twelve mixed soils were reasonably consistent with the known initial mixing percentages. Given the experimental nature of the work and dry mixing of materials, geochemically conservative behavior was assumed for all elements, even for those that might be disregarded in aquatic systems (e.g. P). In general, the best fits between actual and modeled proportions were found using a set of nine tracer properties (Sr, Rb, Fe, Ti, Ca, Al, P, Si, K) that were derived using DFA coupled with a multivariate stepwise algorithm, with errors between real and estimated values that did not exceed 6.7% and values of GOF above 94.5%. The second set of experiments aimed to explore the sensitivity of model output to variability in the particle size of source materials, assuming that a degree of fluvial sorting of the resulting mixture took place. Most particle size correction procedures assume grain size effects are consistent across sources and tracer properties, which is not always the case. Consequently, the < 40 µm fraction of selected soil mixtures was analysed to simulate the effect of selective fluvial transport of finer particles and the results were compared to those for source materials. Preliminary findings from this experiment demonstrate the sensitivity of the numerical mixing model outputs to different particle size distributions of source material and the variable impact of fluvial sorting on end member signatures used in mixing models. The results suggest that particle size correction procedures require careful scrutiny in the context of variable source characteristics.
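The core un-mixing step referred to above can be sketched as a small constrained least-squares problem: find non-negative source proportions, summing to one, that best reproduce the tracer concentrations measured in the mixture. The snippet below is my own illustration of that generic step (the objective weighting and solver settings are assumptions), not the authors' model.

```python
# Hedged sketch: constrained least-squares un-mixing of source proportions.
import numpy as np
from scipy.optimize import minimize

def unmix(source_means, mixture):
    """source_means: (n_sources, n_tracers) mean tracer concentrations per source.
    mixture: (n_tracers,) tracer concentrations measured in the sediment mixture."""
    S = np.asarray(source_means, float)
    m = np.asarray(mixture, float)
    k = S.shape[0]
    scale = np.where(m != 0, m, 1.0)       # relative-error weighting (assumed choice)

    def objective(p):
        return np.sum(((m - p @ S) / scale) ** 2)

    cons = ({"type": "eq", "fun": lambda p: np.sum(p) - 1.0},)   # proportions sum to 1
    bounds = [(0.0, 1.0)] * k                                    # non-negative proportions
    res = minimize(objective, x0=np.full(k, 1.0 / k), bounds=bounds, constraints=cons)
    return res.x                            # estimated source proportions
```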
Ribarić, Goran; Kofler, Justus; Jayne, David G
2011-08-15
To undertake a full economic evaluation of stapled hemorrhoidopexy (PPH) to establish its cost-effectiveness and investigate whether PPH can become cost-saving compared to conventional excisional hemorrhoidectomy (CH). A cost-utility analysis in the hospital and health care system (UK) was undertaken using a probabilistic, cohort-based decision tree to compare the use of PPH with CH. Sensitivity analyses showed how the outcomes varied with the clinical practice of the PPH procedure. The participants were patients undergoing initial surgical treatment of third and fourth degree hemorrhoids within a 1-year time horizon. Data on clinical effectiveness were obtained from a systematic review of the literature. Main outcome measures were the cost per procedure at the hospital level, total direct costs from the health care system perspective, quality-adjusted life years (QALY) gained, and incremental cost per QALY gained. A decrease in operating theater time and hospital stay associated with PPH led to a cost saving compared to CH of GBP 27 (US $43.11, €30.50) per procedure at the hospital level and to an incremental cost of GBP 33 (US $52.68, €37.29) after one year from the societal perspective. Calculation of QALYs indicated an incremental QALY of 0.0076 and an incremental cost-effectiveness ratio (ICER) of GBP 4316 (US $6890.47, €4878.37). Taking into consideration recent literature on clinical outcomes, PPH becomes cost saving compared to CH for the health care system. PPH is a cost-effective procedure with an ICER of GBP 4316, and it seems that an innovative surgical procedure could be cost saving in routine clinical practice.
WIPP waste characterization program sampling and analysis guidance manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1991-01-01
The Waste Isolation Pilot Plant (WIPP) Waste Characterization Program Sampling and Analysis Guidance Manual (Guidance Manual) provides a unified source of information on the sampling and analytical techniques that enable Department of Energy (DOE) facilities to comply with the requirements established in the current revision of the Quality Assurance Program Plan (QAPP) for the WIPP Experimental-Waste Characterization Program (the Program). This Guidance Manual includes all of the sampling and testing methodologies accepted by the WIPP Project Office (DOE/WPO) for use in implementing the Program requirements specified in the QAPP. This includes methods for characterizing representative samples of transuranic (TRU) wastes at DOE generator sites with respect to the gas generation controlling variables defined in the WIPP bin-scale and alcove test plans, as well as waste container headspace gas sampling and analytical procedures to support waste characterization requirements under the WIPP test program and the Resource Conservation and Recovery Act (RCRA). The procedures in this Guidance Manual are comprehensive and detailed and are designed to provide the necessary guidance for the preparation of site-specific procedures. The use of these procedures is intended to provide the necessary sensitivity, specificity, precision, and comparability of analyses and test results. The solutions to achieving specific program objectives will depend upon facility constraints, compliance with DOE Orders and DOE facilities' operating contractor requirements, and the knowledge and experience of the TRU waste handlers and analysts. With some analytical methods, such as gas chromatography/mass spectrometry, the Guidance Manual procedures may be used directly. With other methods, such as nondestructive/destructive characterization, the Guidance Manual provides guidance rather than a step-by-step procedure.
Cost-Effective, Ultra-Sensitive Groundwater Monitoring for Site Remediation and Management: Standard Operating Procedures
Halden, R. U.; Roll, I. B.
2015-05-01
This guidance document sets out standard operating procedures for cost-effective, ultra-sensitive groundwater monitoring in support of site remediation and management; among other requirements, the sampling plan must account for the collection, handling, and quality control of the data types involved, developed in consultation with site management.
Yousefifard, Mahmoud; Baikpour, Masoud; Ghelichkhani, Parisa; Asady, Hadi; Shahsavari Nia, Kavous; Moghadas Jafari, Ali; Hosseini, Mostafa; Safari, Saeed
2016-01-01
The role of ultrasonography in detection of pleural effusion has long been a subject of interest, but controversial results have been reported. Accordingly, this study aims to conduct a systematic review of the available literature on the diagnostic value of ultrasonography and radiography in detection of pleural effusion through a meta-analytic approach. An extended search was done in the databases of Medline, EMBASE, ISI Web of Knowledge, Scopus, Cochrane Library, and ProQuest. Two reviewers independently extracted the data and assessed the quality of the articles. Meta-analysis was performed using a mixed-effects binary regression model. Finally, subgroup analysis was carried out in order to find the sources of heterogeneity between the included studies. 12 studies were included in this meta-analysis (1554 subjects, 58.6% male). Pooled sensitivity of ultrasonography in detection of pleural effusion was 0.94 (95% CI: 0.88-0.97; I2= 84.23, p<0.001) and its pooled specificity was calculated to be 0.98 (95% CI: 0.92-1.0; I2= 88.65, p<0.001), while sensitivity and specificity of chest radiography were 0.51 (95% CI: 0.33-0.68; I2= 91.76, p<0.001) and 0.91 (95% CI: 0.68-0.98; I2= 92.86, p<0.001), respectively. Sensitivity of ultrasonography was found to be higher when the procedure was carried out by an intensivist or a radiologist using 5-10 MHz transducers. Chest ultrasonography, as a screening tool, has a higher diagnostic accuracy in identification of pleural effusion compared to radiography. The sensitivity of this imaging modality was found to be higher when performed by a radiologist or an intensivist and using 5-10 MHz probes.
Takeyoshi, Masahiro; Iida, Kenji; Shiraishi, Keiji; Hoshuyama, Satsuki
2005-01-01
The murine local lymph node assay (LLNA) is currently recognized as a stand-alone sensitization test for determining the sensitizing potential of chemicals, and it has the advantage of yielding a quantitative endpoint that can be used to predict the sensitization potency of chemicals. The EC3 has been proposed as a parameter for classifying chemicals according to sensitization potency. We previously developed a non-radioisotopic endpoint for the LLNA based on 5-bromo-2'-deoxyuridine (BrdU) incorporation (non-RI LLNA), and we are proposing a new procedure to predict the sensitization potency of chemicals based on comparisons with known human contact allergens. Nine chemicals (i.e. diphencyclopropenone, p-phenylenediamine, glutaraldehyde, cinnamic aldehyde, citral, eugenol, isopropyl myristate, propyleneglycol and hexane) categorized as human contact allergen classes 1-5 were tested by the non-RI LLNA with the following reference allergens: 2,4-dinitrochlorobenzene (DNCB) as a class 1 human contact allergen, isoeugenol as a class 2 human contact allergen and alpha-hexylcinnamic aldehyde (HCA) as a class 3 human contact allergen. Consequently, the nine test chemicals were almost all assigned to their correct allergen classes. The results suggested that the new procedure for the non-RI LLNA can provide correct sensitization potency data. Sensitization potency data are useful for evaluating the sensitization risk to humans of exposure to new chemical products. Accordingly, this approach would be an effective modification of the LLNA with regard to its experimental design. Moreover, this procedure can also be applied to the standard LLNA with radioisotopes and to other modifications of the LLNA. Copyright 2005 John Wiley & Sons, Ltd.
Søndergaard, Rikke V; Henriksen, Jonas R; Andresen, Thomas L
2014-12-01
Particle-based nanosensors offer a tool for determining the pH in the endosomal-lysosomal system of living cells. Measurements providing absolute values of pH have so far been restricted by the limited sensitivity range of nanosensors, calibration challenges and the complexity of image analysis. This protocol describes the design and application of a polyacrylamide-based nanosensor (∼60 nm) that covalently incorporates two pH-sensitive fluorophores, fluorescein (FS) and Oregon Green (OG), to broaden the sensitivity range of the sensor (pH 3.1-7.0), and uses the pH-insensitive fluorophore rhodamine as a reference fluorophore. The nanosensors are spontaneously taken up via endocytosis and directed to the lysosomes where dynamic changes in pH can be measured with live-cell confocal microscopy. The most important focus areas of the protocol are the choice of pH-sensitive fluorophores, the design of calibration buffers, the determination of the effective range and especially the description of how to critically evaluate results. The entire procedure typically takes 2-3 weeks.
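A common way to use a ratiometric sensor of this kind is to fit the rhodamine-referenced intensity ratio measured in calibration buffers with a sigmoidal calibration curve and then invert the fit for imaged pixels. The sketch below is a simplified, single-transition illustration of that workflow (the real sensor combines two fluorophores to broaden the range); the functional form and parameters are assumptions, not the protocol's own fitting procedure.

```python
# Hedged sketch: ratiometric pH calibration and inversion (assumed sigmoid model).
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(pH, r_min, r_max, pKa, slope):
    # Ratio of pH-sensitive to reference fluorescence as a function of pH.
    return r_min + (r_max - r_min) / (1.0 + 10 ** ((pKa - pH) / slope))

def calibrate(pH_buffers, ratios):
    p0 = (min(ratios), max(ratios), float(np.mean(pH_buffers)), 1.0)
    params, _ = curve_fit(sigmoid, pH_buffers, ratios, p0=p0)
    return params

def ratio_to_pH(ratio, params):
    # Invert the fitted calibration curve for a measured pixel ratio.
    r_min, r_max, pKa, slope = params
    return pKa - slope * np.log10((r_max - r_min) / (ratio - r_min) - 1.0)
```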
Sensitivity of Rayleigh wave ellipticity and implications for surface wave inversion
NASA Astrophysics Data System (ADS)
Cercato, Michele
2018-04-01
The use of Rayleigh wave ellipticity has gained increasing popularity in recent years for investigating earth structures, especially for near-surface soil characterization. In spite of its widespread application, the sensitivity of the ellipticity function to the soil structure has rarely been explored in a comprehensive and systematic manner. To this end, a new analytical method is presented for computing the sensitivity of Rayleigh wave ellipticity with respect to the structural parameters of a layered elastic half-space. This method takes advantage of the minor decomposition of the surface wave eigenproblem and is numerically stable at high frequency. This numerical procedure makes it possible to retrieve the sensitivity for typical near-surface and crustal geological scenarios, pointing out the key parameters for ellipticity interpretation under different circumstances. On this basis, a thorough analysis is performed to assess how ellipticity data can efficiently complement surface wave dispersion information in a joint inversion algorithm. The results of synthetic and real-world examples are illustrated to analyse quantitatively the diagnostic potential of the ellipticity data with respect to the soil structure, focusing on the possible sources of misinterpretation in data inversion.
Occult glove perforation during ophthalmic surgery.
Apt, L; Miller, K M
1992-01-01
We examined the latex surgical gloves used by 56 primary surgeons in 454 ophthalmic surgical procedures performed over a 7-month period. Of five techniques used to detect pinholes, air inflation with water submersion and compression was found to be the most sensitive, yielding a 6.80% prevalence in control glove pairs and a 21.8% prevalence in postoperative study glove pairs, for a 15.0% incidence of surgically induced perforations (P = 0.000459). The lowest postoperative perforation rate was 11.4% for cataract and intraocular lens surgery, and the highest was 41.7% for oculoplastic procedures. Factors that correlated significantly with the presence of glove perforations as determined by multiple logistic regression analysis were oculoplastic and pediatric ophthalmology and strabismus surgical procedures, surgeon's status as a fellow in training, operating time, and glove size. The thumb and index finger of the nondominant hand contained the largest numbers of pinholes. These data suggest strategies for reducing the risk of cross-infection during ophthalmic surgery. PMID:1494836
Accelerator mass spectrometry of small biological samples.
Salehpour, Mehran; Forsgard, Niklas; Possnert, Göran
2008-12-01
Accelerator mass spectrometry (AMS) is an ultra-sensitive technique for isotopic ratio measurements. In the biomedical field, AMS can be used to measure femtomolar concentrations of labeled drugs in body fluids, with direct applications in early drug development such as microdosing. Likewise, the regenerative properties of cells, which are of fundamental significance in stem-cell research, can be determined with an accuracy of a few years by AMS analysis of human DNA. However, AMS nominally requires about 1 mg of carbon per sample, which is not always available when dealing with specific body substances such as localized, organ-specific DNA samples. Consequently, it is of analytical interest to develop methods for the routine analysis of small samples in the range of a few tens of micrograms. We have used a 5 MV Pelletron tandem accelerator to study small biological samples using AMS. Different methods are presented and compared. A (12)C-carrier sample preparation method is described which is potentially more sensitive and less susceptible to contamination than the standard procedures.
Rezaei, Behzad; Damiri, Sajjad
2010-11-15
A study of the electrochemical behavior and determination of RDX, a high explosive, is described on a multi-walled carbon nanotubes (MWCNTs) modified glassy carbon electrode (GCE) using adsorptive stripping voltammetry and electrochemical impedance spectroscopy (EIS) techniques. The results indicated that the MWCNTs electrode remarkably enhances the sensitivity of the voltammetric method and provides measurements of this explosive down to the sub-mg/l level over a wide pH range. The operational parameters were optimized and a sensitive, simple and time-saving cyclic voltammetric procedure was developed for the analysis of RDX in ground and tap water samples. Under optimized conditions, the reduction peak has two linear dynamic ranges of 0.6-20.0 and 8.0-200.0 μM, with a detection limit of 25.0 nM and a precision of <4% (RSD for 8 analyses). Copyright © 2010 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Watanabe, Kenichi; Minniti, Triestino; Kockelmann, Winfried; Dalgliesh, Robert; Burca, Genoveva; Tremsin, Anton S.
2017-07-01
The uncertainties and the stability of a neutron-sensitive MCP/Timepix detector operating in the event timing mode for quantitative image analysis at a pulsed neutron source were investigated. The dominant contribution to the uncertainty arises from the counting statistics. The contribution of the overlap correction to the uncertainty was concluded to be negligible, based on error propagation, even when the pixel occupation probability exceeds 50%. The multiple-counting effect was additionally taken into account in the treatment of the counting statistics. Furthermore, the detection efficiency of this detector system changes under relatively high neutron fluxes due to the ageing effects of current microchannel plates. Since this efficiency change is position-dependent, it induces a memory image. The memory effect can be significantly reduced with correction procedures using the rate equations describing the permanent gain degradation and the scrubbing effect on the inner surfaces of the MCP pores.
Sironi, Emanuele; Taroni, Franco; Baldinotti, Claudio; Nardi, Cosimo; Norelli, Gian-Aristide; Gallidabino, Matteo; Pinchi, Vilma
2017-11-14
The present study aimed to investigate the performance of a Bayesian method in the evaluation of dental age-related evidence collected by means of a geometrical approximation procedure of the pulp chamber volume. Measurement of this volume was based on three-dimensional cone beam computed tomography images. The Bayesian method was applied by means of a probabilistic graphical model, namely a Bayesian network. Performance of that method was investigated in terms of accuracy and bias of the decisional outcomes. The influence of an informed elicitation of the prior belief of chronological age was also studied by means of a sensitivity analysis. Outcomes in terms of accuracy were consistent with standard requirements for forensic adult age estimation. Findings also indicated that the Bayesian method does not show a particular tendency towards under- or overestimation of the age variable. Outcomes of the sensitivity analysis showed that estimation results are improved with a rational elicitation of the prior probabilities of age.
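To make the Bayesian reasoning above concrete, the toy sketch below updates a prior over chronological age with a likelihood linking pulp-chamber volume to age; every distribution and parameter in it (the uniform prior, the linear volume decline, the noise scale) is an assumption for illustration, not part of the study's Bayesian network.

```python
# Hedged toy example: Bayesian age estimation from a pulp-chamber volume measurement.
import numpy as np
from scipy import stats

ages = np.arange(15, 91)                        # candidate ages (years)
prior = np.ones_like(ages, dtype=float)         # uniform prior; an informed prior
prior /= prior.sum()                            # could be substituted here

def likelihood(volume_mm3, age):
    mean = 80.0 - 0.6 * age                     # assumed linear decline of pulp volume with age
    return stats.norm.pdf(volume_mm3, loc=mean, scale=8.0)

def posterior(volume_mm3):
    post = prior * likelihood(volume_mm3, ages)
    return post / post.sum()

p = posterior(volume_mm3=45.0)
print("posterior mean age:", np.round(np.sum(ages * p), 1))
```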
Bioanalytical methods for food contaminant analysis.
Van Emon, Jeanette M
2010-01-01
Foods are complex mixtures of lipids, carbohydrates, proteins, vitamins, organic compounds, and other naturally occurring substances. Sometimes added to this mixture are residues of pesticides, veterinary and human drugs, microbial toxins, preservatives, contaminants from food processing and packaging, and other residues. This milieu of compounds can pose difficulties in the analysis of food contaminants. There is an expanding need for rapid and cost-effective residue methods for difficult food matrixes to safeguard our food supply. Bioanalytical methods are established for many food contaminants such as mycotoxins and are the method of choice for many food allergens. Bioanalytical methods are often more cost-effective and sensitive than instrumental procedures. Recent developments in bioanalytical methods may provide more applications for their use in food analysis.
Gene expression analysis upon lncRNA DDSR1 knockdown in human fibroblasts
Jia, Li; Sun, Zhonghe; Wu, Xiaolin; Misteli, Tom; Sharma, Vivek
2015-01-01
Long non-coding RNAs (lncRNAs) play important roles in regulating diverse biological processes including DNA damage and repair. We have recently reported that the DNA damage inducible lncRNA DNA damage-sensitive RNA1 (DDSR1) regulates DNA repair by homologous recombination (HR). Since lncRNAs also modulate gene expression, we identified gene expression changes upon DDSR1 knockdown in human fibroblast cells. Gene expression analysis after RNAi treatment targeted against DDSR1 revealed 119 genes that show differential expression. Here we provide a detailed description of the microarray data (NCBI GEO accession number GSE67048) and the data analysis procedure associated with the publication by Sharma et al., 2015 in EMBO Reports [1]. PMID:26697398
Space station data system analysis/architecture study. Task 3: Trade studies, DR-5, volume 1
NASA Technical Reports Server (NTRS)
1985-01-01
The primary objective of Task 3 is to provide the additional analysis and insight necessary to support key design/programmatic decisions for options quantification and selection for system definition. This includes: (1) the identification of key trade study topics; (2) the definition of a trade study procedure for each topic (issues to be resolved, key inputs, criteria/weighting, methodology); (3) the conduct of tradeoff and sensitivity analyses; and (4) the review/verification of results within the context of evolving system design and definition. The trade study topics addressed in this volume include space autonomy and function automation, software transportability, system network topology, communications standardization, onboard local area networking, distributed operating systems, software configuration management, and the software development environment facility.
NASA Astrophysics Data System (ADS)
Campbell, J.; Dean, J.; Clyne, T. W.
2017-02-01
This study concerns a commonly-used procedure for evaluating the steady state creep stress exponent, n, from indentation data. The procedure involves monitoring the indenter displacement history under constant load and making the assumption that, once its velocity has stabilised, the system is in a quasi-steady state, with stage II creep dominating the behaviour. The stress and strain fields under the indenter are represented by "equivalent stress" and "equivalent strain rate" values. The estimate of n is then obtained as the gradient of a plot of the logarithm of the equivalent strain rate against the logarithm of the equivalent stress. Concerns have, however, been expressed about the reliability of this procedure, and indeed it has already been shown to be fundamentally flawed. In the present paper, it is demonstrated, using a very simple analysis, that, for a genuinely stable velocity, the procedure always leads to the same, constant value for n (either 1.0 or 0.5, depending on whether the tip shape is spherical or self-similar). This occurs irrespective of the value of the measured velocity, or indeed of any creep characteristic of the material. It is now clear that previously-measured values of n, obtained using this procedure, have varied in a more or less random fashion, depending on the functional form chosen to represent the displacement-time history and the experimental variables (tip shape and size, penetration depth, etc.), with little or no sensitivity to the true value of n.
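For reference, the slope evaluation the abstract criticizes reduces to a log-log linear fit. The sketch below shows that step with made-up equivalent-stress and equivalent-strain-rate values; the data are illustrative only.

```python
import numpy as np

# Illustrative "equivalent" quantities extracted from an indentation displacement
# history under constant load (values are made up for demonstration).
eq_stress = np.array([120.0, 110.0, 102.0, 96.0, 91.0])              # MPa
eq_strain_rate = np.array([3.2e-4, 2.1e-4, 1.5e-4, 1.1e-4, 8.5e-5])  # 1/s

# The procedure takes n as the gradient of log(strain rate) against log(stress);
# the paper's point is that this gradient is fixed by the analysis itself.
n_apparent, _ = np.polyfit(np.log(eq_stress), np.log(eq_strain_rate), 1)
print(f"apparent creep stress exponent n = {n_apparent:.2f}")
```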
Zeng, Mingfei; Cao, Huachuan
2018-04-15
Short chain fatty acids (SCFA) and ketone bodies recently emerged as important physiologically relevant metabolites because of their association with microbiota, immunology, obesity and other metabolic states. They were commonly analyzed by GC-MS with long run times and laborious sample preparation. In this study we developed a novel LC-MS/MS method using fast derivatization coupled with liquid-liquid extraction to detect SCFA and ketone bodies in plasma and feces. Several different derivatization reagents were evaluated to compare the efficiency, the sensitivity and the chromatographic separation of structural isomers. O‑benzylhydroxylamine was selected for its superior overall performance in reaction time and isomeric separation, which allowed the measurement of each SCFA and ketone body free from interferences. The derivatization procedure is facile and reproducible in aqueous-organic medium, which eliminates the evaporation step that hampers the analysis of volatile short chain acids. The enhancement in sensitivity remarkably improved the detection limit of SCFA and ketone bodies to the sub-fmol level. This novel method was applied to quantify these metabolites in fecal and plasma samples from lean and diet-induced obese (DIO) mice. Copyright © 2018 Elsevier B.V. All rights reserved.
Fernández de la Ossa, Ma Ángeles; Ortega-Ojeda, Fernando; García-Ruiz, Carmen
2014-11-01
This work reports an investigation of the analysis of different paper samples using CE with laser-induced fluorescence (LIF) detection. Papers from four different manufacturers (white-copy paper) and four different paper sources (white and recycled-copy papers, adhesive yellow paper notes and restaurant serviettes) were pulverized by scratching with a surgical scalpel prior to their derivatization with a fluorescent labeling agent, 8-aminopyrene-1,3,6-trisulfonic acid. Methodological conditions were evaluated, specifically the derivatization conditions, with the aim of achieving the best S/N signals, and the separation conditions, in order to obtain optimum values of sensitivity and reproducibility. The conditions offering the fastest and easiest sample preparation, minimal sample consumption, and the simplest and fastest CE procedure with the best analytical parameters were applied to the analysis of the different paper samples. The registered electropherograms were pretreated (normalized and aligned) and subjected to multivariate analysis (principal component analysis). A clear discrimination among paper samples was achieved. To the best of our knowledge, this work presents the first approach to achieve a successful differentiation among visually similar white-copy paper samples produced by different manufacturers and paper from different paper sources through their direct analysis by CE-LIF and subsequent comparative study of the complete cellulose electropherogram by chemometric tools. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Highly sensitive catalytic spectrophotometric determination of ruthenium
NASA Astrophysics Data System (ADS)
Naik, Radhey M.; Srivastava, Abhishek; Prasad, Surendra
2008-01-01
A new and highly sensitive catalytic kinetic method (CKM) for the determination of ruthenium(III) has been established based on its catalytic effect on the oxidation of L-phenylalanine (L-Pheala) by KMnO4 in highly alkaline medium. The reaction has been followed spectrophotometrically by measuring the decrease in the absorbance at 526 nm. The proposed CKM is based on the fixed-time procedure under optimum reaction conditions. It relies on the linear relationship between the change in absorbance (ΔAt) and the added Ru(III) amount in the range of 0.101-2.526 ng ml-1. Under the optimum conditions, the sensitivity of the proposed method, i.e. the limit of detection corresponding to 5 min, is 0.08 ng ml-1, and it decreases with increased time of analysis. The method features good accuracy and reproducibility for ruthenium(III) determination. Ruthenium(III) has also been determined in the presence of several interfering and non-interfering cations, anions and polyaminocarboxylates. No foreign ions interfered in the determination of ruthenium(III) at up to 20-fold excess. In addition to the analysis of standard solutions, this method was successfully applied to the quantitative determination of ruthenium(III) in drinking water samples. The method is highly sensitive, selective and very stable. A review of recently published catalytic spectrophotometric methods for the determination of ruthenium(III) has also been presented for comparison.
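A minimal sketch of the fixed-time calibration step implied above: a linear fit of ΔAt against added Ru(III) concentration plus a 3-sigma style detection limit. The calibration values and the LOD convention are invented assumptions, not the paper's data.

```python
import numpy as np

# Hypothetical fixed-time calibration: added Ru(III) (ng/mL) vs ΔA measured at 5 min.
conc = np.array([0.101, 0.505, 1.010, 1.515, 2.020, 2.526])
delta_A = np.array([0.012, 0.055, 0.109, 0.162, 0.218, 0.271])

slope, intercept = np.polyfit(conc, delta_A, 1)
residual_sd = np.std(delta_A - (slope * conc + intercept), ddof=2)

# A common 3-sigma style detection limit from the calibration residuals (illustrative).
lod = 3.0 * residual_sd / slope
print(f"slope = {slope:.4f} absorbance units per ng/mL, LOD ≈ {lod:.3f} ng/mL")
```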
Totura, Christine M Wienke; Kutash, Krista; Labouliere, Christa D; Karver, Marc S
2017-02-01
Suicide is the second leading cause of death for adolescents. Whereas school-based prevention programs are effective, obtaining active consent for youth participation in public health programming concerning sensitive topics is challenging. We explored several active consent procedures for improving participation rates. Five active consent methods (in-person, students taking forms home, mailing, mailing preceded by primers, mailing followed by reminder calls) were compared against passive consent procedures to evaluate recruitment success, as determined by participation (proportion who responded yes) and response (proportion who returned any response) rates. Participation acceptance rates ranged from 38 to 100% depending on consent method implemented. Compared with passive consent, active consent procedures were more variable in response and participation rates. In-person methods provided higher rates than less interpersonal methods, such as mailing or students taking consents home. Mailed primers before or reminder calls after consent forms were mailed increased response but not participation rates. Students taking consents home resulted in the lowest rates. Although passive consent produces the highest student participation, these methods are not always appropriate for programs addressing sensitive topics in schools. In-person active consent procedures may be the best option when prioritizing balance between parental awareness and successful student recruitment. © 2017, American School Health Association.
Isselmann DiSantis, Katherine; Kumanyika, Shiriki; Carter-Edwards, Lori; Rohm Young, Deborah; Grier, Sonya A; Lassiter, Vikki
2017-10-29
Food marketing environments of Black American consumers are heavily affected by ethnically-targeted marketing of sugar sweetened beverages, fast foods, and other products that may contribute to caloric overconsumption. This qualitative study assessed Black consumers' responses to targeted marketing. Black adults (2 mixed gender groups; total n = 30) and youth (2 gender specific groups; total n = 35) from two U.S. communities participated before and after a sensitization procedure, a critical practice used to understand social justice concerns. Pre-sensitization focus groups elicited responses to scenarios about various targeted marketing tactics. Participants were then given an informational booklet about targeted marketing to Black Americans, and all returned for the second (post-sensitization) focus group one week later. Conventional qualitative content analysis of transcripts identified several salient themes: seeing the marketer's perspective ("it's about demand"; "consumers choose"), respect for community ("marketers are setting us up for failure"; "making wrong assumptions"), and food environments as a social justice issue ("no one is watching the door"; "I didn't realize"). Effects of sensitization were reflected in participants' stated reactions to the information in the booklet, and also in the relative occurrence of the themes, with marketer-oriented themes occurring less, and social justice-oriented themes more, after sensitization.
NASA Technical Reports Server (NTRS)
Sankararaman, Shankar
2016-01-01
This paper presents a computational framework for uncertainty characterization and propagation, and sensitivity analysis under the presence of aleatory and epistemic uncertainty, and develops a rigorous methodology for efficient refinement of epistemic uncertainty by identifying important epistemic variables that significantly affect the overall performance of an engineering system. The proposed methodology is illustrated using the NASA Langley Uncertainty Quantification Challenge (NASA-LUQC) problem that deals with uncertainty analysis of a generic transport model (GTM). First, Bayesian inference is used to infer subsystem-level epistemic quantities using the subsystem-level model and corresponding data. Second, tools of variance-based global sensitivity analysis are used to identify four important epistemic variables (this limitation specified in the NASA-LUQC is reflective of practical engineering situations where not all epistemic variables can be refined due to time/budget constraints) that significantly affect system-level performance. The most significant contribution of this paper is the development of the sequential refinement methodology, where epistemic variables for refinement are not identified all-at-once. Instead, only one variable is first identified, and then, Bayesian inference and global sensitivity calculations are repeated to identify the next important variable. This procedure is continued until all 4 variables are identified and the refinement in the system-level performance is computed. The advantages of the proposed sequential refinement methodology over the all-at-once uncertainty refinement approach are explained, and then applied to the NASA Langley Uncertainty Quantification Challenge problem.
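For intuition only, the sketch below computes crude first-order variance-based sensitivity indices for a toy four-input model and picks the variable to refine first; the toy model, sample size, and binning estimator are assumptions, not the GTM model or the formal Sobol estimators used in the paper. In the sequential scheme described above, the top-ranked variable would be refined and the indices recomputed before choosing the next one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the system-level model with four epistemic inputs on [0, 1].
def model(x):
    return 4.0 * x[:, 0] ** 2 + 2.0 * x[:, 1] + 0.5 * x[:, 2] + 0.1 * x[:, 3]

def first_order_indices(f, n=20000, d=4, bins=40):
    """Crude first-order indices Var(E[Y|X_i]) / Var(Y) via quantile binning."""
    x = rng.uniform(size=(n, d))
    y = f(x)
    total_var = y.var()
    indices = []
    for i in range(d):
        edges = np.quantile(x[:, i], np.linspace(0.0, 1.0, bins + 1))
        which = np.clip(np.digitize(x[:, i], edges[1:-1]), 0, bins - 1)
        cond_means = np.array([y[which == b].mean() for b in range(bins)])
        indices.append(cond_means.var() / total_var)
    return np.array(indices)

S = first_order_indices(model)
print("first-order sensitivity indices:", np.round(S, 3))
print("refine this variable first, then recompute:", int(np.argmax(S)))
```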
Rugged fiber optic probe for raman measurement
O'Rourke, Patrick E.; Toole, Jr., William R.; Nave, Stanley E.
1998-01-01
An optical probe for conducting light scattering analysis is disclosed. The probe comprises a hollow housing and a probe tip. A fiber assembly made up of a transmitting fiber and a receiving bundle is inserted in the tip. A filter assembly is inserted in the housing and connected to the fiber assembly. A signal line from the light source and to the spectrometer also is connected to the filter assembly and communicates with the fiber assembly. By using a spring-loaded assembly to hold the fiber connectors together with the in-line filters, complex and sensitive alignment procedures are avoided. The close proximity of the filter assembly to the probe tip eliminates or minimizes self-scattering generated by the optical fiber. Also, because the probe can contact the sample directly, sensitive optics can be eliminated.
Genetics-based methods for detection of Salmonella spp. in foods.
Mozola, Mark A
2006-01-01
Genetic methods are now at the forefront of foodborne pathogen testing. The sensitivity, specificity, and inclusivity advantages offered by deoxyribonucleic acid (DNA) probe technology have driven an intense effort in methods development over the past 20 years. DNA probe-based methods for Salmonella spp. and other pathogens have progressed from time-consuming procedures involving the use of radioisotopes to simple, high throughput, automated assays. The analytical sensitivity of nucleic acid amplification technology has facilitated a reduction in analysis time by allowing enriched samples to be tested for previously undetectable quantities of analyte. This article will trace the evolution of the development of genetic methods for detection of Salmonella in foods, review the basic assay formats and their advantages and limitations, and discuss method performance characteristics and considerations for selection of methods.
Yu, Chang Ho; Patel, Bhupendra; Palencia, Marilou; Fan, Zhihua Tina
2017-01-13
A selective, sensitive, and accurate analytical method for the measurement of perfluoroalkyl and polyfluoroalkyl substances (PFASs) in human serum, utilizing LC-MS/MS (liquid chromatography-tandem mass spectrometry), was developed and validated according to the Centers for Disease Control and Prevention (CDC) guidelines for biological sample analysis. Tests were conducted to determine the optimal analytical column, mobile phase composition and pH, gradient program, and cleaning procedure. The final analytical column selected for analysis was an extra densely bonded silica-packed reverse-phase column (Agilent XDB-C8, 3.0 × 100 mm, 3.5 μm). Mobile phase A was an aqueous buffer solution containing 10 mM ammonium acetate (pH = 4.3). Mobile phase B was a mixture of methanol and acetonitrile (1:1, v/v). The gradient was programmed with a fast elution (%B from 40 to 65%) between 1.0 and 1.5 min, followed by a slow elution (%B: 65-80%) over the period of 1.5-7.5 min. The cleanup procedures were augmented by cleaning with (1) various solvents (isopropyl alcohol, methanol, acetonitrile, and reverse osmosis-purified water); (2) extensive washing steps for the autosampler and solid phase extraction (SPE) cartridge; and (3) a post-analysis cleaning step for the whole system. Under the above conditions, the resolution and sensitivity were significantly improved. Twelve target PFASs were baseline-separated (2.5-7.0 min) within a 10-min acquisition time. The limits of detection (LODs) were 0.01 ng/mL or lower for all of the target compounds, making this method 5 times more sensitive than previously published methods. The newly developed method was validated in the linear range of 0.01-50 ng/mL, and the accuracy (recovery between 80 and 120%) and precision (RSD < 20%) were acceptable at three spiked levels (0.25, 2.5, and 25 ng/mL). The method development and validation results demonstrated that this method was precise, accurate, and robust, with high throughput (~10 min per sample), and thus suitable for large-scale epidemiological studies. Published by Elsevier B.V.
Bloomfield, M S
2002-12-06
4-Aminophenol (4AP) is the primary degradation product of paracetamol and is limited to a low level (50 ppm or 0.005% w/w) in the drug substance by the European, United States, British and German Pharmacopoeias, employing a manual colourimetric limit test. The 4AP limit is widened to 1000 ppm or 0.1% w/w for the tablet product monographs, which quote the use of a less sensitive automated HPLC method. The lower drug substance specification limit is applied to our products (50 ppm, equivalent to 25 μg of 4AP in a tablet containing 500 mg of paracetamol), and the pharmacopoeial HPLC assay was not suitable at this low level due to matrix interference. For routine analysis a rapid, automated assay was required. This paper presents a highly sensitive, precise and automated method employing the technique of Flow Injection (FI) analysis to quantitatively assay low levels of this degradant. A solution of the drug substance, or an extract of the tablets, containing 4AP and paracetamol is injected into a solvent carrier stream and merged on-line with alkaline sodium nitroprusside reagent to form a specific blue derivative which is detected spectrophotometrically at 710 nm. Standard HPLC equipment is used throughout. The procedure is fully quantitative and has been optimised for sensitivity and robustness using a multivariate experimental design (multi-level 'Central Composite' response surface) model. The method has been fully validated and is linear down to 0.01 μg ml(-1). The approach should be applicable to a range of paracetamol products.
Expertise sensitive item selection.
Chow, P; Russell, H; Traub, R E
2000-12-01
In this paper we describe and illustrate a procedure for selecting items from a large pool for a certification test. The proposed procedure, which is intended to improve the alignment of the certification test with on-the-job performance, is based on an expertise sensitive index. This index for an item is the difference between the item's p values for experts and novices. An example is provided of the application of the index for selecting items to be used in certifying bakers.
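The index defined above is simple enough to express directly. The sketch below computes it for a handful of invented item p values and selects the top-ranked items; the numbers and the number of items to select are assumptions for illustration.

```python
import numpy as np

# Hypothetical item p values (proportion answering correctly) for experts and novices.
p_expert = np.array([0.95, 0.88, 0.70, 0.92, 0.60, 0.85])
p_novice = np.array([0.90, 0.55, 0.65, 0.40, 0.58, 0.50])

# Expertise-sensitive index: the difference between the two p values for each item.
index = p_expert - p_novice

# Select the items with the largest index values for the certification test.
n_select = 3
selected_items = np.argsort(index)[::-1][:n_select]
print("index per item:", np.round(index, 2))
print("items selected:", selected_items)
```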
NASA Astrophysics Data System (ADS)
Mondelain, D.; Vasilchenko, S.; Čermák, P.; Kassi, S.; Campargue, A.
2017-01-01
The CO2 absorption continuum near 2.3 μm is determined for a series of sub-atmospheric pressures (250-750 Torr) by high sensitivity Cavity Ring Down Spectroscopy. An experimental procedure consisting of successively injecting a gas flow of CO2 and synthetic air, keeping the gas pressure in the CRDS cell constant, has been developed. This procedure ensures a high stability of the spectra baseline by avoiding changes of the optical alignment due to pressure changes. The CO2 continuum was obtained as the difference between the CO2 absorption coefficient and a local lines simulation using a Voigt profile truncated at ±25 cm-1. Following the results of the preceding analysis of the CO2 rovibrational lines (Vasilchenko S et al., J Quant Spectrosc Radiat Transfer, 10.1016/j.jqsrt.2016.07.002), a CO2 line list with intensities obtained by variational calculations and empirical line positions was preferred to the HITRAN line list. A quadratic pressure dependence of the absorption continuum is observed, with an average binary absorption coefficient increasing from 2 to 4×10-8 cm-1 amagat-2 between 4320 and 4380 cm-1. The obtained continuum is found in good agreement with a previous measurement using much higher densities (20 amagat) and a low resolution grating spectrograph, and is consistent with values currently used in the analysis of Venus spectra.
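A minimal sketch of the quadratic density fit implied by the binary absorption coefficient: the residual absorption after line subtraction is regressed on density squared. The density and continuum values below are invented, not the paper's data.

```python
import numpy as np

# Illustrative residual absorption at one wavenumber after subtracting the local-line
# (Voigt, ±25 cm-1) simulation; densities roughly span 250-750 Torr at room temperature.
density = np.array([0.33, 0.49, 0.66, 0.82, 0.99])                 # amagat
continuum = np.array([3.3e-9, 7.1e-9, 1.3e-8, 2.0e-8, 2.9e-8])     # cm^-1 (made up)

# Quadratic (binary) dependence alpha = B * rho^2; least-squares through the origin.
B = np.sum(continuum * density**2) / np.sum(density**4)
print(f"binary absorption coefficient B ≈ {B:.2e} cm^-1 amagat^-2")
```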
Automatic Nuclei Segmentation in H&E Stained Breast Cancer Histopathology Images
Veta, Mitko; van Diest, Paul J.; Kornegoor, Robert; Huisman, André; Viergever, Max A.; Pluim, Josien P. W.
2013-01-01
The introduction of fast digital slide scanners that provide whole slide images has led to a revival of interest in image analysis applications in pathology. Segmentation of cells and nuclei is an important first step towards automatic analysis of digitized microscopy images. We therefore developed an automated nuclei segmentation method that works with hematoxylin and eosin (H&E) stained breast cancer histopathology images, which represent regions of whole digital slides. The procedure can be divided into four main steps: 1) pre-processing with color unmixing and morphological operators, 2) marker-controlled watershed segmentation at multiple scales and with different markers, 3) post-processing for rejection of false regions and 4) merging of the results from multiple scales. The procedure was developed on a set of 21 breast cancer cases (subset A) and tested on a separate validation set of 18 cases (subset B). The evaluation was done in terms of both detection accuracy (sensitivity and positive predictive value) and segmentation accuracy (Dice coefficient). The mean estimated sensitivity for subset A was 0.875 (±0.092) and for subset B 0.853 (±0.077). The mean estimated positive predictive value was 0.904 (±0.075) and 0.886 (±0.069) for subsets A and B, respectively. For both subsets, the distribution of the Dice coefficients had a high peak around 0.9, with the vast majority of segmentations having values larger than 0.8. PMID:23922958
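For reference, the evaluation metrics named above reduce to a few lines of code; the toy masks and detection counts in this sketch are invented, not the study's data.

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice coefficient between two binary segmentation masks."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * intersection / (mask_a.sum() + mask_b.sum())

def detection_scores(true_pos, false_pos, false_neg):
    """Object-level detection accuracy: sensitivity and positive predictive value."""
    sensitivity = true_pos / (true_pos + false_neg)
    ppv = true_pos / (true_pos + false_pos)
    return sensitivity, ppv

# Toy ground-truth and automatic masks plus invented nucleus-detection counts.
gt = np.zeros((10, 10), dtype=bool); gt[2:7, 2:7] = True
seg = np.zeros((10, 10), dtype=bool); seg[3:8, 2:7] = True
print("Dice:", round(dice(gt, seg), 3))
print("sensitivity, PPV:", detection_scores(true_pos=42, false_pos=5, false_neg=6))
```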
NASA Technical Reports Server (NTRS)
Otterson, D. A.; Seng, G. T.
1984-01-01
A new high-performance liquid chromatographic (HPLC) method for group-type analysis of middistillate fuels is described. It uses a refractive index detector and standards that are prepared by reacting a portion of the fuel sample with sulfuric acid. A complete analysis of a middistillate fuel for saturates and aromatics (including the preparation of the standard) requires about 15 min if standards for several fuels are prepared simultaneously. From model fuel studies, the method was found to be accurate to within 0.4 vol% saturates or aromatics, and provides a precision of ±0.4 vol%. Olefin determinations require an additional 15 min of analysis time. However, this determination is needed only for those fuels displaying a significant olefin response at 200 nm (obtained routinely during the saturates/aromatics analysis procedure). The olefin determination uses the responses of the olefins and the corresponding saturates, as well as the average value of their refractive index sensitivity ratios (1.1). Studies indicated that, although the relative error in the olefin result could reach 10 percent by using this average sensitivity ratio, it was 5 percent for the fuels used in this study. Olefin concentrations as low as 0.1 vol% have been determined using this method.
A quantitative PCR assay for the detection and quantification of Babesia bovis and B. bigemina.
Buling, A; Criado-Fornelio, A; Asenzo, G; Benitez, D; Barba-Carretero, J C; Florin-Christensen, M
2007-06-20
The haemoparasites Babesia bovis and Babesia bigemina affect cattle over vast areas of the tropics and temperate parts of the world. Microscopic examination of blood smears allows the detection of clinical cases of babesiosis, but this procedure lacks sensitivity when parasitaemia levels are low. In addition, differentiating between similar haemoparasites can be very difficult. Molecular diagnostic procedures can, however, overcome these problems. This paper reports a quantitative PCR (qPCR) assay involving the use of SYBR Green. Based on the amplification of a small fragment of the cytochrome b gene, this method shows both high sensitivity and specificity, and allows quantification of parasite DNA. In tests, reproducible quantitative results were obtained over the range of 0.1 ng to 0.1 fg of parasite DNA. Melting curve analysis differentiated between B. bovis and B. bigemina. To assess the performance of the new qPCR procedure it was used to screen for babesiosis in 40 cows and 80 horses. B. bigemina was detected in five cows (three of these were also found to be positive by standard PCR techniques targeting the 18S rRNA gene). In addition, B. bovis was detected in one horse and B. bigemina in two horses using the proposed method, while none was found positive by ribosomal standard PCR. The sequences of the B. bigemina cytochrome b and 18S rRNA genes were completely conserved in isolates from Spain and Argentina, while those of B. bovis showed moderate polymorphism.
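As background on how such qPCR data are typically quantified, the sketch below builds a standard curve from serial dilutions and back-calculates an unknown sample; the Cq values, the dilution series, and the efficiency formula shown are generic illustrative assumptions, not the assay's actual calibration.

```python
import numpy as np

# Hypothetical SYBR Green standard curve built from serial dilutions of parasite DNA.
log10_dna_ng = np.array([-1, -2, -3, -4, -5, -6, -7], dtype=float)   # 0.1 ng down to 0.1 fg
cq = np.array([18.1, 21.5, 24.9, 28.2, 31.6, 35.0, 38.4])            # quantification cycles

slope, intercept = np.polyfit(log10_dna_ng, cq, 1)
efficiency = 10.0 ** (-1.0 / slope) - 1.0   # standard qPCR efficiency estimate

def quantify(cq_sample):
    """Back-calculate the DNA amount (ng) of an unknown sample from its Cq."""
    return 10.0 ** ((cq_sample - intercept) / slope)

print(f"amplification efficiency ≈ {efficiency:.1%}")
print(f"sample with Cq = 26.0 -> {quantify(26.0):.2e} ng parasite DNA")
```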
46 CFR 153.1002 - Special operating requirements for heat sensitive cargoes.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 5 2011-10-01 2011-10-01 false Special operating requirements for heat sensitive cargoes. 153.1002 Section 153.1002 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED... MATERIALS Operations Special Cargo Procedures § 153.1002 Special operating requirements for heat sensitive...
46 CFR 153.1002 - Special operating requirements for heat sensitive cargoes.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 5 2010-10-01 2010-10-01 false Special operating requirements for heat sensitive cargoes. 153.1002 Section 153.1002 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED... MATERIALS Operations Special Cargo Procedures § 153.1002 Special operating requirements for heat sensitive...
46 CFR 153.1002 - Special operating requirements for heat sensitive cargoes.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 5 2014-10-01 2014-10-01 false Special operating requirements for heat sensitive cargoes. 153.1002 Section 153.1002 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED... MATERIALS Operations Special Cargo Procedures § 153.1002 Special operating requirements for heat sensitive...
46 CFR 153.1002 - Special operating requirements for heat sensitive cargoes.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 5 2013-10-01 2013-10-01 false Special operating requirements for heat sensitive cargoes. 153.1002 Section 153.1002 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED... MATERIALS Operations Special Cargo Procedures § 153.1002 Special operating requirements for heat sensitive...
46 CFR 153.1002 - Special operating requirements for heat sensitive cargoes.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 46 Shipping 5 2012-10-01 2012-10-01 false Special operating requirements for heat sensitive cargoes. 153.1002 Section 153.1002 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED... MATERIALS Operations Special Cargo Procedures § 153.1002 Special operating requirements for heat sensitive...
System analysis in rotorcraft design: The past decade
NASA Technical Reports Server (NTRS)
Galloway, Thomas L.
1988-01-01
Rapid advances in the technology of electronic digital computers and the need for an integrated synthesis approach in developing future rotorcraft programs have led to increased emphasis on system analysis techniques in rotorcraft design. The task in systems analysis is to deal with complex, interdependent, and conflicting requirements in a structured manner so that rational and objective decisions can be made. Whether the results are wisdom or rubbish depends upon the validity and, sometimes more importantly, the consistency of the inputs, the correctness of the analysis, and a sensible choice of measures of effectiveness to draw conclusions. In rotorcraft design this means combining design requirements, technology assessment, sensitivity analysis and review techniques currently in use by NASA and Army organizations in developing research programs and vehicle specifications for rotorcraft. These procedures span simple graphical approaches to comprehensive analysis on large mainframe computers. Examples of recent applications to military and civil missions are highlighted.
NASA Astrophysics Data System (ADS)
Nanus, L.; Williams, M. W.; Campbell, D. H.
2005-12-01
Atmospheric deposition of pollutants threatens pristine environments around the world. However, scientifically based decisions regarding management of these environments have been confounded by spatial variability of atmospheric deposition, particularly across regional scales at which resource management is typically considered. A statistically based methodology coupled within GIS is presented that builds from the scale of small alpine lake and sub-alpine catchments to identify deposition-sensitive lakes across larger watershed and regional scales. The sensitivity of 874 alpine and subalpine lakes to acidification from atmospheric deposition of nitrogen and sulfur was estimated using statistical models relating water quality and landscape attributes in Glacier National Park, Yellowstone National Park, Grand Teton National Park, Rocky Mountain National Park and Great Sand Dunes National Park and Preserve. Water-quality data measured during synoptic lake surveys were used to calibrate statistical models of lake sensitivity. In the case of nitrogen deposition, water quality data were supplemented with dual isotopic measurements of δ15N and δ18O of nitrate. Landscape attributes for the lake basins were derived from GIS, including the following explanatory variables: topography (basin slope, basin aspect, basin elevation), bedrock type, vegetation type, and soil type. Using multivariate logistic regression analysis, probability estimates were developed for acid-neutralizing capacity, nitrate, sulfate and DOC concentrations, and lakes with a high probability of being sensitive to atmospheric deposition were identified. Water-quality data collected at 60 lakes during fall 2004 were used to validate the statistical models. Relationships between landscape attributes and water quality vary by constituent, due to spatial variability in landscape attributes and spatial variation in the atmospheric deposition of pollutants within and among the five National Parks. Predictive ability, model fit and sensitivity were first assessed for each of the five National Parks individually, to evaluate the utility of this methodology for prediction of alpine and sub-alpine lake sensitivity at the catchment scale. A similar assessment was then performed, treating the five parks as a group. Validation results showed that 85 percent of lakes sampled were accurately identified by the model as having a greater than 60 percent probability of acid-neutralizing capacity concentrations less than 200 microequivalents per liter. Preliminary findings indicate good predictive ability and reasonable model fit and sensitivity, suggesting that logistic regression modeling coupled within a GIS framework is an appropriate approach for remote identification of deposition-sensitive lakes across the Rocky Mountain region. To assist resource management decisions regarding alpine and sub-alpine lakes across this region, screening procedures were developed based on terrain and landscape attribute information available to all participating parks. Since the screening procedure is based on publicly available data, our methodology and similar screening procedures may be applicable to other National Parks with deposition-sensitive surface waters.
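A minimal sketch of the logistic-regression screening step described above, using synthetic landscape attributes and a 60 percent probability threshold; the features, coefficients, and data are invented assumptions, not the parks' datasets or the calibrated models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for landscape attributes of 200 lake basins (e.g. basin slope,
# elevation, reactive-bedrock fraction, soil index), standardized for simplicity.
X = rng.normal(size=(200, 4))

# Synthetic outcome: 1 if the lake has low acid-neutralizing capacity (sensitive).
y = (X @ np.array([0.8, 1.2, -0.9, -0.5]) + rng.normal(scale=0.7, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)
p_sensitive = model.predict_proba(X)[:, 1]

# Screening rule analogous to the one in the abstract: flag lakes with a greater
# than 60 percent predicted probability of being deposition-sensitive.
flagged = np.flatnonzero(p_sensitive > 0.60)
print(f"{flagged.size} of {len(y)} lakes flagged for follow-up sampling")
```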
Thieme, Detlef; Sachs, Ulf; Sachs, Hans; Moore, Christine
2015-07-01
Formation of picolinic acid esters of hydroxylated drugs or their biotransformation products is a promising tool to improve their mass spectrometric ionization efficiency, alter their fragmentation behaviour and enhance the sensitivity and specificity of their detection. The procedure was optimized and tested for the detection of cannabinoids, which proved to be most challenging when dealing with alternative specimens, for example hair and oral fluid. In particular, the detection of the THC metabolites hydroxy-THC and carboxy-THC requires ultimate sensitivity because of their poor incorporation into hair or saliva. Both biotransformation products are widely accepted as incorporation markers to distinguish drug consumption from passive contamination. The derivatization procedure was carried out by adding a mixture of picolinic acid, 4-(dimethylamino)pyridine and 2-methyl-6-nitrobenzoic anhydride in tetrahydrofuran/triethylamine to the dry extraction residues. The resulting derivatives were found to be very stable and could be reconstituted in aqueous or organic buffers and subsequently analyzed by liquid chromatography-mass spectrometry (LC-MS). Owing to the complex consecutive fragmentation patterns, the application of multistage MS3 proved to be extremely useful for a sensitive identification of doubly picolinated hydroxy-THC in complex matrices. The detection limits, estimated by comparison of the corresponding signal-to-noise ratios, improved by a factor of 100 following picolination. All other species examined, like cannabinol, THC, cannabidiol, and carboxy-THC, could also be derivatized, exhibiting only moderate sensitivity improvements. The assay was systematically tested using hair samples and applied, as an example, to oral fluid. Concentrations of OH-THC identified in THC-positive hair samples ranged from 0.02 to 0.29 pg/mg. Copyright © 2014 John Wiley & Sons, Ltd.
Family presence during cardiopulmonary resuscitation and invasive procedures in children
Ferreira, Cristiana Araújo G.; Balbino, Flávia Simphronio; Balieiro, Maria Magda F. G.; Mandetta, Myriam Aparecida
2014-01-01
Objective: To identify literature evidence related to actions to promote the family's presence during cardiopulmonary resuscitation and invasive procedures in children hospitalized in pediatric and neonatal critical care units. Data sources: Integrative literature review in PubMed, SciELO and Lilacs databases, from 2002 to 2012, with the following inclusion criteria: research articles in Medicine or Nursing, published in Portuguese, English or Spanish, using the keywords "family", "invasive procedures", "cardiopulmonary resuscitation", "health staff", and "Pediatrics". Articles that did not refer to the presence of the family in cardiopulmonary resuscitation and invasive procedures were excluded. Therefore, 15 articles were analyzed. Data synthesis: Most articles were published in the United States (80%), in Medicine and Nursing (46%), and were surveys (72%) with healthcare team members (67%) as participants. From the critical analysis, four themes related to actions to promote the family's presence in invasive procedures and cardiopulmonary resuscitation were obtained: a) to develop a sensitizing program for the healthcare team; b) to educate the healthcare team to include the family in these circumstances; c) to develop a written institutional policy; d) to ensure that the family's needs are attended to. Conclusions: Research on these issues must be encouraged in order to help healthcare teams to modify their practice, implementing the principles of the Patient and Family Centered Care model, especially during critical episodes. PMID:24676198
A Comparative Analysis of Coprologic Diagnostic Methods for Detection of Toxoplama gondii in Cats
Salant, Harold; Spira, Dan T.; Hamburger, Joseph
2010-01-01
The relative role of transmission of Toxoplasma gondii infection from cats to humans appears to have recently increased in certain areas. Large-scale screening of oocyst shedding in cats cannot rely on microscopy because oocyst identification lacks sensitivity and specificity, or on bioassays, which require test animals and weeks before examination. We compared a sensitive and species-specific coprologic–polymerase chain reaction (copro-PCR) for detection of T. gondii infected cats with microscopy and a bioassay. In experimentally infected cats followed over time, microscopy was positive occasionally, and positive copro-PCR and bioassay results were obtained continuously from days 2 to 24 post-infection. The copro-PCR is at least as sensitive and specific as the bioassay and is capable of detecting infective oocysts during cat infection. Therefore, this procedure can be used as the new gold standard for determining potential cat infectivity. Its technologic advantages over the bioassay make it superior for large-scale screening of cats. PMID:20439968
High-Performance Piezoresistive MEMS Strain Sensor with Low Thermal Sensitivity
Mohammed, Ahmed A. S.; Moussa, Walied A.; Lou, Edmond
2011-01-01
This paper presents the experimental evaluation of a new piezoresistive MEMS strain sensor. Geometric characteristics of the sensor silicon carrier have been employed to improve the sensor sensitivity. Surface features or trenches have been introduced in the vicinity of the sensing elements. These features create stress concentration regions (SCRs) and as a result, the strain/stress field was altered. The improved sensing sensitivity compensated for the signal loss. The feasibility of this methodology was proved in a previous work using Finite Element Analysis (FEA). This paper provides the experimental part of the previous study. The experiments covered a temperature range from −50 °C to +50 °C. The MEMS sensors are fabricated using five different doping concentrations. FEA is also utilized to investigate the effect of material properties and layer thickness of the bonding adhesive on the sensor response. The experimental findings are compared to the simulation results to guide selection of bonding adhesive and installation procedure. Finally, FEA was used to analyze the effect of rotational/alignment errors. PMID:22319384
Liu, Dan; Li, Xingrui; Zhou, Junkai; Liu, Shibo; Tian, Tian; Song, Yanling; Zhu, Zhi; Zhou, Leiji; Ji, Tianhai; Yang, Chaoyong
2017-10-15
Enzyme-linked immunosorbent assay (ELISA) is a popular laboratory technique for detection of disease-specific protein biomarkers with high specificity and sensitivity. However, ELISA requires labor-intensive and time-consuming procedures with skilled operators and spectroscopic instrumentation. Simplification of the procedures and miniaturization of the devices are crucial for ELISA-based point-of-care (POC) testing in resource-limited settings. Here, we present a fully integrated, instrument-free, low-cost and portable POC platform which integrates the process of ELISA and the distance readout into a single microfluidic chip. Based on manipulation using a permanent magnet, the process is initiated by moving magnetic beads with capture antibody through different aqueous phases containing ELISA reagents to form bead/antibody/antigen/antibody sandwich structure, and finally converts the molecular recognition signal into a highly sensitive distance readout for visual quantitative bioanalysis. Without additional equipment and complicated operations, our integrated ELISA-Chip with distance readout allows ultrasensitive quantitation of disease biomarkers within 2h. The ELISA-Chip method also showed high specificity, good precision and great accuracy. Furthermore, the ELISA-Chip system is highly applicable as a sandwich-based platform for the detection of a variety of protein biomarkers. With the advantages of visual analysis, easy operation, high sensitivity, and low cost, the integrated sample-in-answer-out ELISA-Chip with distance readout shows great potential for quantitative POCT in resource-limited settings. Copyright © 2017. Published by Elsevier B.V.
Glauser, Gaétan; Grund, Baptiste; Gassner, Anne-Laure; Menin, Laure; Henry, Hugues; Bromirski, Maciej; Schütz, Frédéric; McMullen, Justin; Rochat, Bertrand
2016-03-15
A paradigm shift is underway in the field of quantitative liquid chromatography-mass spectrometry (LC-MS) analysis thanks to the arrival of recent high-resolution mass spectrometers (HRMS). The capability of HRMS to perform sensitive and reliable quantifications of a large variety of analytes in HR-full scan mode is showing that it is now realistic to perform quantitative and qualitative analysis with the same instrument. Moreover, HR-full scan acquisition offers a global view of sample extracts and allows retrospective investigations as virtually all ionized compounds are detected with a high sensitivity. In time, the versatility of HRMS together with the increasing need for relative quantification of hundreds of endogenous metabolites should promote a shift from triple-quadrupole MS to HRMS. However, a current "pitfall" in quantitative LC-HRMS analysis is the lack of HRMS-specific guidance for validated quantitative analyses. Indeed, false positive and false negative HRMS detections are rare, albeit possible, if inadequate parameters are used. Here, we investigated two key parameters for the validation of LC-HRMS quantitative analyses: the mass accuracy (MA) and the mass-extraction-window (MEW) that is used to construct the extracted-ion-chromatograms. We propose MA-parameters, graphs, and equations to calculate rational MEW width for the validation of quantitative LC-HRMS methods. MA measurements were performed on four different LC-HRMS platforms. Experimentally determined MEW values ranged between 5.6 and 16.5 ppm and depended on the HRMS platform, its working environment, the calibration procedure, and the analyte considered. The proposed procedure provides a fit-for-purpose MEW determination and prevents false detections.
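As a simple illustration of what a mass-extraction-window means in practice (not the authors' equations), the sketch below converts a ppm tolerance into m/z bounds for building an extracted-ion chromatogram; treating the MEW value as a symmetric ± tolerance, and the example m/z value, are assumptions made here.

```python
def mew_bounds(target_mz: float, mew_ppm: float) -> tuple[float, float]:
    """m/z bounds for an extracted-ion chromatogram, treating the MEW as a ± ppm tolerance."""
    delta = target_mz * mew_ppm * 1e-6
    return target_mz - delta, target_mz + delta

# Example: a 10 ppm window around a hypothetical analyte ion at m/z 445.1200
low, high = mew_bounds(445.1200, 10.0)
print(f"extract ions between m/z {low:.4f} and m/z {high:.4f}")
```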
Drama-induced affect and pain sensitivity.
Zillmann, D; de Wied, M; King-Jablonski, C; Jenzowsky, S
1996-01-01
This study was conducted to examine the pain-ameliorating and pain-sensitizing effects of exposure to emotionally engaging drama. Specifically, the consequences for pain sensitivity of exposure to dramatic expositions differing in both excitatory and hedonic qualities were determined. Hedonically negative, neutral, and positive affective states were induced in male respondents by exposure to excerpts from cinematic drama. Pain sensitivity was assessed by the cuff-pressure procedure before and after exposure and by the cold pressor test after exposure only. When compared against the control condition, pain sensitivity diminished under conditions of hedonically positive affect. An inverse effect was suggested for hedonically negative conditions, but proved tentative and statistically unreliable. The findings are consistent with earlier demonstrations of mood effects on pain sensitivity. Unlike inconclusive earlier findings concerning the magnitude of directional effects, however, they suggest an asymmetry that emphasizes the pain-ameliorating effect of positive affects while lending little, if any, support to the proposal of a pain-sensitizing effect of negative affects. The investigation did not accomplish the intended creation of conditions necessary to test the proposal that heightened sympathetic activity diminishes pain sensitivity. The utility of a rigorous determination of this hypothesized relationship is emphasized, and procedures for a viable test of the proposal are suggested.
NASA Astrophysics Data System (ADS)
Zong, Shenfei; Wang, Zhuyuan; Chen, Hui; Hu, Guohua; Liu, Min; Chen, Peng; Cui, Yiping
2014-01-01
As an important biomarker and therapeutic target, telomerase has attracted considerable attention concerning its detection and monitoring. Here, we present a colorimetry and surface enhanced Raman scattering (SERS) dual-mode telomerase activity detection method, which has several distinctive advantages. First, colorimetric functionality allows rapid preliminary discrimination of telomerase activity by the naked eye. Second, the employment of SERS technique results in greatly improved detection sensitivity. Third, the combination of colorimetry and SERS into one detection system can ensure highly efficacious and sensitive screening of numerous samples. Besides, the avoidance of polymerase chain reaction (PCR) procedures further guarantees fine reliability and simplicity. Generally, the presented method is realized by an ``elongate and capture'' procedure. To be specific, gold nanoparticles modified with Raman molecules and telomeric repeat complementary oligonucleotide are employed as the colorimetric-SERS bifunctional reporting nanotag, while magnetic nanoparticles functionalized with telomerase substrate oligonucleotide are used as the capturing substrate. Telomerase can synthesize and elongate telomeric repeats onto the capturing substrate. The elongated telomeric repeats subsequently facilitate capturing of the reporting nanotag via hybridization between telomeric repeat and its complementary strand. The captured nanotags can cause a significant difference in the color and SERS intensity of the magnetically separated sediments. Thus both the color and SERS can be used as indicators of the telomerase activity. With fast screening ability and outstanding sensitivity, we anticipate that this method would greatly promote practical application of telomerase-based early-stage cancer diagnosis.
Limits to Sensitivity in Laser Enhanced Ionization.
ERIC Educational Resources Information Center
Travis, J. C.
1982-01-01
Laser enhanced ionization (LEI) occurs when a tunable dye laser is used to excite a specific atomic population in a flame. Explores the origin of LEI's high sensitivity and identifies possible avenues to higher sensitivity by describing the instrumentation and experimental procedures used and discussing ion formation/detection. (Author/JN)
Predicting the difficulty of a lead extraction procedure: the LED index.
Bontempi, Luca; Vassanelli, Francesca; Cerini, Manuel; D'Aloia, Antonio; Vizzardi, Enrico; Gargaro, Alessio; Chiusso, Francesco; Mamedouv, Rashad; Lipari, Alessandro; Curnis, Antonio
2014-08-01
According to recent surveys, many sites performing permanent lead extractions do not meet the minimum prerequisites concerning personnel training, procedure volume, or facility requirements. The current Heart Rhythm Society consensus on lead extractions suggests that patients should be referred to more experienced sites when a better outcome could be achieved. The purpose of this study was to develop a score aimed at predicting the difficulty of a lead extraction procedure through the analysis of a high-volume center database. This score could help to discriminate patients who should be sent to a referral site. A total of 889 permanent leads were extracted from 469 patients. All procedures were performed from January 2009 to May 2012 by two expert electrophysiologists, at the University Hospital of Brescia. Factors influencing the difficulty of a procedure were assessed using a univariate and a multivariate logistic regression model. The fluoroscopy time of the procedure was taken as an index of difficulty. A Lead Extraction Difficulty (LED) score was defined, considering the strongest predictors. Overall, 873 of 889 (98.2%) leads were completely removed. Major complications were reported in one patient (0.2%) who manifested cardiac tamponade. Minor complications occurred in six (1.3%) patients. No deaths occurred. Median fluoroscopic time was 8.7 min (3.3-17.3). A procedure was classified as difficult when fluoroscopy time was more than 31.2 min [90th percentile (PCTL)]. On univariate analysis, the number of extracted leads and years from implant were significantly associated with an increased risk of fluoroscopy time above the 90th PCTL [odds ratio (OR) 1.51, 95% confidence interval (CI) 1.08-2.11, P = 0.01; and OR 1.19, 95% CI 1.12-1.25, P < 0.001, respectively]. After adjusting for patient age and sex, and combining with other covariates potentially influencing the extraction procedure, a multivariate analysis confirmed a 71% increased risk of fluoroscopy time above the 90th PCTL for each additional lead extracted (OR 1.71, 95% CI 1.06-2.77, P = 0.028) and a 23% increased risk for each year of lead age (OR 1.23, 95% CI 1.15-1.31, P < 0.001). Further nonindependent factors increasing the risk were the presence of active fixation leads and dual-coil implantable cardiac defibrillator leads. Conversely, vegetations significantly favored lead extraction. The LED score was defined as: number of extracted leads within a procedure + lead age (years from implant) + 1 if dual-coil - 1 if vegetation. The LED score independently predicted a complex procedure (with fluoroscopic time >90th PCTL) on both univariate and multivariate analysis. A receiver-operating characteristic analysis showed an area under the curve of 0.81. A LED score greater than 10 could predict fluoroscopy time above the 90th PCTL with a sensitivity of 78.3% and a specificity of 76.7%. The LED score is easy to compute and potentially predicts fluoroscopy time above the 90th PCTL with a relatively high accuracy.
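The score definition in the abstract is simple enough to express directly. The sketch below computes it for a hypothetical case and applies the reported >10 cut-off; only the example inputs are invented.

```python
def led_score(n_extracted_leads: int, lead_age_years: float,
              dual_coil: bool, vegetation: bool) -> float:
    """LED index as defined in the abstract: number of extracted leads + lead age
    (years from implant) + 1 if a dual-coil ICD lead is present, - 1 if vegetations."""
    return (n_extracted_leads + lead_age_years
            + (1 if dual_coil else 0) - (1 if vegetation else 0))

# Hypothetical case: two leads, 9.5 years old, one of them a dual-coil ICD lead.
score = led_score(n_extracted_leads=2, lead_age_years=9.5, dual_coil=True, vegetation=False)
difficult = score > 10   # cut-off reported with sensitivity 78.3% and specificity 76.7%
print(f"LED score = {score:.1f}, predicted difficult: {difficult}")
```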
Pauk, Volodymyr; Pluháček, Tomáš; Havlíček, Vladimír; Lemr, Karel
2017-10-09
An ultra-high performance supercritical fluid chromatography-mass spectrometry (UHPSFC/MS) procedure for the analysis of native monosaccharides was developed. Chromatographic conditions were investigated to separate a mixture of four hexoses, three pentoses, two deoxyhexoses and two uronic acids. Increasing the water content in the methanol modifier to 5% and the formic acid to 4% improved the peak shapes of neutral monosaccharides and allowed complete elution of highly polar uronic acids in a single run. An Acquity HSS C18SB column outperformed the other three tested stationary phases (BEH (silica), BEH 2-ethylpyridine, CSH Fluoro-Phenyl) in terms of separation of isomers and analysis time (4.5 min). Limits of detection were in the range 0.01-0.12 ng μL-1. Owing to the separation of anomers, identification of critical pairs (arabinose-xylose and glucose-galactose) was possible. The feasibility of the new method was demonstrated on plant-derived polysaccharide binders. Samples of watercolor paints, painted paper and three plant gums widely encountered in painting media (Arabic, cherry and tragacanth) were decomposed prior to analysis by microwave-assisted hydrolysis at 40 bar initial pressure using 2 mol L-1 trifluoroacetic acid. Among the tested temperatures, 120 °C ensured appropriate hydrolysis efficiency for the different types of gum and avoided excessive degradation of labile monosaccharides. The procedure recovery tested on gum Arabic was 101% with an RSD below 8%. Aqueous hydrolysates containing monosaccharides in different ratios specific to each type of plant gum were diluted or analyzed directly. Filtration of samples before hydrolysis reduced interferences from a paper support, and identification of gum Arabic in watercolor-painted paper samples was demonstrated. Successful identification of pure gum Arabic was confirmed for sample quantities as small as 1 μg. Two classification approaches were compared, and principal component analysis was superior to analysis based on peak area ratios of monosaccharides. The proposed procedure using UHPSFC/MS represents an interesting alternative which can compete with other chromatographic methods in the field of saccharide analysis in terms of speed, sensitivity and simplicity of workflow. Copyright © 2017 Elsevier B.V. All rights reserved.
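A minimal sketch of the PCA-based classification step mentioned above, using invented monosaccharide peak-area profiles for the three reference gums; the feature values, group sizes, and autoscaling preprocessing are assumptions, not the paper's data or workflow.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Invented peak-area profiles (one row per hydrolysate, one column per monosaccharide).
arabic     = rng.normal([5, 2, 3, 0.5, 6, 1, 0.5, 1, 0.3], 0.3, size=(6, 9))
cherry     = rng.normal([3, 4, 1, 0.2, 4, 2, 2.0, 2, 0.5], 0.3, size=(6, 9))
tragacanth = rng.normal([2, 5, 0.5, 2.0, 3, 1, 0.5, 4, 0.2], 0.3, size=(6, 9))

X = np.vstack([arabic, cherry, tragacanth])
labels = ["arabic"] * 6 + ["cherry"] * 6 + ["tragacanth"] * 6

# Autoscale, then project onto the first two principal components; reference gums
# should form separate clusters, and unknown hydrolysates can be placed among them.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
for name, (pc1, pc2) in zip(labels[::6], scores[::6]):
    print(f"{name:>10s}: PC1 = {pc1:6.2f}, PC2 = {pc2:6.2f}")
```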
Hayashi, Masamichi; Guerrero-Preston, Rafael; Sidransky, David; Koch, Wayne M.
2015-01-01
Molecular deep surgical margin analysis has been shown to predict locoregional recurrences of head and neck squamous cell carcinoma (HNSCC). In order to improve the accuracy and versatility of the analysis, we used a highly tumor-specific methylation marker and highly sensitive detection technology to test DNA from surgical margins. Histologically cancer-negative deep surgical margin samples were prospectively collected from 82 eligible HNSCC surgeries by an imprinting procedure (n=75) and primary tissue collection (n=70). Bisulfite treated DNA from each sample was analyzed by both conventional quantitative methylation-specific polymerase chain reaction (QMSP) and QMSP by droplet digital PCR (ddQMSP) targeting PAX5 gene promoter methylation. The association between the presence of PAX5 methylation and locoregional recurrence free survival (LRFS) was evaluated. PAX5 methylation was found in 68.0% (51/75) of tumors in the imprint samples and 71.4% (50/70) in the primary tissue samples. Among cases which did not have postoperative radiation, (n=31 in imprint samples, n=29 in tissue samples), both conventional QMSP and ddQMSP revealed that PAX5 methylation positive margins was significantly associated with poor LRFS by univariate analysis. In particular, ddQMSP increased detection of the PAX5 marker from 29% to 71% in the non-radiated imprint cases. Also, PAX5 methylated imprint margins were an excellent predictor of poor LRFS (HR=3.89, 95%CI:1.19-17.52, P=0.023) by multivariate analysis. PAX5 methylation appears to be an excellent tumor-specific marker for molecular deep surgical margin analysis of HNSCC. Moreover, the ddQMSP assay displays increased sensitivity for methylation marker detection. PMID:26304463
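For context on the droplet-digital readout, the sketch below applies the standard Poisson correction to droplet counts to estimate a target concentration and a methylation level; the droplet counts, the 0.85 nL droplet volume, and the methylation-level ratio shown are assumptions for illustration, not values from the study.

```python
import numpy as np

def ddpcr_concentration(n_positive: int, n_total: int, droplet_volume_nl: float = 0.85):
    """Poisson-corrected target concentration (copies/uL) from droplet counts.
    The 0.85 nL droplet volume is a typical value, used here as an assumption."""
    p = n_positive / n_total
    copies_per_droplet = -np.log(1.0 - p)
    return copies_per_droplet / (droplet_volume_nl * 1e-3)   # copies per microliter

# Hypothetical droplet counts for a methylated target and for a reference assay.
meth = ddpcr_concentration(140, 15000)
ref = ddpcr_concentration(9000, 15000)
print(f"estimated methylation level ≈ {100 * meth / ref:.2f} %")
```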
Dexter, Franklin; Ledolter, Johannes; Hindman, Bradley J
2016-01-01
In this Statistical Grand Rounds, we review methods for the analysis of the diversity of procedures among hospitals, the activities among anesthesia providers, etc. We apply multiple methods and consider their relative reliability and usefulness for perioperative applications, including calculations of SEs. We also review methods for comparing the similarity of procedures among hospitals, activities among anesthesia providers, etc. We again apply multiple methods and consider their relative reliability and usefulness for perioperative applications. The applications include strategic analyses (e.g., hospital marketing) and human resource analytics (e.g., comparisons among providers). Measures of diversity of procedures and activities (e.g., Herfindahl and Gini-Simpson index) are used for quantification of each facility (hospital) or anesthesia provider, one at a time. Diversity can be thought of as a summary measure. Thus, if the diversity of procedures for 48 hospitals is studied, the diversity (and its SE) is being calculated for each hospital. Likewise, the effective numbers of common procedures at each hospital can be calculated (e.g., by using the exponential of the Shannon index). Measures of similarity are pairwise assessments. Thus, if quantifying the similarity of procedures among cases with a break or handoff versus cases without a break or handoff, a similarity index represents a correlation coefficient. There are several different measures of similarity, and we compare their features and applicability for perioperative data. We rely extensively on sensitivity analyses to interpret observed values of the similarity index.
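A minimal sketch of the diversity summaries named above (Herfindahl, Gini-Simpson, and the exponential of the Shannon index as an effective number of common procedures) computed for a hypothetical case mix, plus one simple pairwise similarity expressed as a correlation coefficient; the counts are invented and the SE calculations discussed in the article are not reproduced here.

```python
import numpy as np

def diversity_measures(case_counts):
    """Herfindahl, Gini-Simpson, and effective number of common procedures
    (exponential of the Shannon index) for one facility or provider."""
    p = np.asarray(case_counts, dtype=float)
    p = p / p.sum()
    herfindahl = float(np.sum(p ** 2))
    gini_simpson = 1.0 - herfindahl
    shannon = float(-np.sum(p[p > 0] * np.log(p[p > 0])))
    return herfindahl, gini_simpson, float(np.exp(shannon))

# Invented counts of cases by procedure category at two hospitals.
hospital_a = np.array([120, 80, 40, 20, 10, 5], dtype=float)
hospital_b = np.array([90, 95, 30, 35, 12, 8], dtype=float)

print("hospital A:", diversity_measures(hospital_a))

# A pairwise similarity between the two case-mix profiles, expressed as a
# correlation coefficient between their procedure proportions (one simple choice).
similarity = np.corrcoef(hospital_a / hospital_a.sum(), hospital_b / hospital_b.sum())[0, 1]
print("similarity (correlation of proportions):", round(float(similarity), 3))
```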
Fast and sensitive determination of per- and polyfluoroalkyl substances in seawater.
Concha-Graña, Estefanía; Fernández-Martínez, Gerardo; López-Mahía, Purificación; Prada-Rodríguez, Darío; Muniategui-Lorenzo, Soledad
2018-06-22
In this work, a novel, fast, and sensitive method was developed for the determination of perfluorooctanoic acid (PFOA), perfluorooctane sulfonic acid (PFOS) and PFOS precursors in seawater. The proposed method consists of a vortex-assisted liquid-liquid microextraction (VALLME) combined with liquid chromatography (LC) and LTQ-Orbitrap high resolution mass spectrometry (LTQ-Orbitrap HRMS) determination. Several parameters affecting both the HPLC-LTQ Orbitrap HRMS determination and the VALLME were studied, with special attention to the blank contamination problem. The use of LTQ-Orbitrap-HRMS in full mode, quantifying the target analytes using the exact mass, provides very powerful detection in terms of sensitivity and specificity while maintaining all the information provided by the full mass spectra, also allowing the identification of non-target substances. The use of matrix-matched calibration, together with labelled surrogate standards, minimizes matrix effects and compensates for potential recovery losses, resulting in recoveries between 95 and 105%, with excellent sensitivity (quantitation limit between 0.7 and 6 ng L⁻¹) and precision (4-10%). The proposed method requires only 35 mL of sample and 100 μL of extracting solvent, is fast and avoids the use of other solvents to obtain the dispersive cloudy solution, simplifying the procedure and improving the existing procedures for the determination of per- and polyfluoroalkyl substances (PFASs) in seawater in terms of green analytical chemistry. The method was successfully validated by participating in a proficiency test assay provided by the National Measurement Institute of the Australian Government for the determination of PFOA, total PFOS and linear PFOS in waters. A revision of the state of the art in the last twelve years of methods for the analysis of PFASs in seawater and other types of water was performed, and a critical comparison between the developed method and previously published methods was included. Finally, the method was applied to the analysis of samples from Ría de Vigo, a sensitive and semiconfined coastal area located in the northwest of Spain. PFOS, N-methyl perfluorooctanesulfonamide (n-MeFOSA) and N-ethyl perfluorooctanesulfonamide (n-EtFOSA) were detected in samples at levels lower than the maximum allowable concentration (MAC) established by Directive 2013/39/EU, but above the annual average (AA) levels. Copyright © 2018 Elsevier B.V. All rights reserved.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-05
... Unclassified Non-Safeguards Information and Order Imposing Procedures for Access to Sensitive Unclassified Non... procedures described below. To comply with the procedural requirements of E-Filing, at least ten (10) days... equipment lineup, accident scenario, or interaction mode not reviewed and approved as part of the design and...
Turner, Clare E; Russell, Bruce R; Gant, Nicholas
2015-11-01
Magnetic resonance spectroscopy (MRS) is an analytical procedure that can be used to non-invasively measure the concentration of a range of neural metabolites. Creatine is an important neurometabolite, with dietary supplementation offering therapeutic potential for neurological disorders with dysfunctional energetic processes. Neural creatine concentrations can be probed using proton MRS and quantified using a range of software packages based on different analytical methods. This experiment examines the differences in quantification performance of two commonly used analysis packages following a creatine supplementation strategy with potential therapeutic application. Human participants followed a seven-day dietary supplementation regime in a placebo-controlled, cross-over design interspersed with a five-week wash-out period. Spectroscopy data were acquired the day immediately following supplementation and analyzed with two commonly used software packages which employ vastly different quantification methods. Results demonstrate that neural creatine concentration was augmented following creatine supplementation when analyzed using the peak fitting method of quantification (105.9%±10.1). In contrast, no change in neural creatine levels was detected with supplementation when analysis was conducted using the basis spectrum method of quantification (102.6%±8.6). Results suggest that software packages that employ the peak fitting procedure for spectral quantification are possibly more sensitive to subtle changes in neural creatine concentrations. The relative simplicity of the spectroscopy sequence and the data analysis procedure suggests that peak fitting procedures may be the most effective means of metabolite quantification when detection of subtle alterations in neural metabolites is necessary. The straightforward technique can be used on a clinical magnetic resonance imaging system. Copyright © 2015 Elsevier Inc. All rights reserved.
Schmitz-Dräger, Claudia; Bonberg, Nadine; Pesch, Beate; Todenhöfer, Tilman; Sahin, Sevim; Behrens, Thomas; Brüning, Thomas; Schmitz-Dräger, Bernd J
2016-10-01
Numerous molecular urine markers for the diagnosis of bladder cancer have been developed and evaluated, mostly in case-control settings, through the past decades. However, despite all efforts, none of them has been included in clinical decision-making and guideline recommendations to date. The aim of this retrospective longitudinal analysis was to investigate whether a molecular marker might be able to replace cystoscopy as a primary examination in the diagnosis and follow-up of patients with pTa grade 1-2 bladder cancer. In total, 36 patients (32 men) with pTa grade 1-2 bladder cancer underwent 232 follow-up examinations including urine analysis, cytology, immunocytology (uCyt+), and urethrocystoscopy (UC). Mean age at study entry was 63 years. Patients were observed through a median follow-up interval of 3.8 years. In summary, 47 transurethral resection of bladder tumor (TURB) procedures were indicated based upon a positive UC (44) or as re-TURB (3), and 33 tumors (plus 1 case of pTa G0) were histopathologically confirmed. Although uCyt+ was positive in 12/13 primary tumors (92.3%), sensitivity dropped to 13/20 (65%) in tumor recurrences, presumably because of their smaller size. Urine cytology had a sensitivity and a specificity of 30.3% and 94.9%, respectively, but did not improve the sensitivity of uCyt+ alone. If UC had been performed only based upon a positive uCyt+ test, 8/33 tumors (24.2%) would have been overlooked or diagnosed late. In contrast, 173 UCs (74%) would have been saved and 5 presumably unnecessary TURB procedures would not have been indicated. This longitudinal study suggests a potential of molecular urine tests for replacing cystoscopy in the follow-up of patients with pTa G1-2 bladder cancer. The use of additional markers might further improve the sensitivity of urine testing. A prospective randomized study has been initiated to investigate the performance of a marker panel against UC. Copyright © 2016 Elsevier Inc. All rights reserved.
Occupancy estimation and the closure assumption
Rota, Christopher T.; Fletcher, Robert J.; Dorazio, Robert M.; Betts, Matthew G.
2009-01-01
1. Recent advances in occupancy estimation that adjust for imperfect detection have provided substantial improvements over traditional approaches and are receiving considerable use in applied ecology. To estimate and adjust for detectability, occupancy modelling requires multiple surveys at a site and requires the assumption of 'closure' between surveys, i.e. no changes in occupancy between surveys. Violations of this assumption could bias parameter estimates; however, little work has assessed model sensitivity to violations of this assumption or how commonly such violations occur in nature. 2. We apply a modelling procedure that can test for closure to two avian point-count data sets in Montana and New Hampshire, USA, that exemplify time-scales at which closure is often assumed. These data sets illustrate different sampling designs that allow testing for closure but are currently rarely employed in field investigations. Using a simulation study, we then evaluate the sensitivity of parameter estimates to changes in site occupancy and evaluate a power analysis developed for sampling designs that is aimed at limiting the likelihood of closure. 3. Application of our approach to point-count data indicates that habitats may frequently be open to changes in site occupancy at time-scales typical of many occupancy investigations, with 71% and 100% of species investigated in Montana and New Hampshire respectively, showing violation of closure across time periods of 3 weeks and 8 days respectively. 4. Simulations suggest that models assuming closure are sensitive to changes in occupancy. Power analyses further suggest that the modelling procedure we apply can effectively test for closure. 5. Synthesis and applications. Our demonstration that sites may be open to changes in site occupancy over time-scales typical of many occupancy investigations, combined with the sensitivity of models to violations of the closure assumption, highlights the importance of properly addressing the closure assumption in both sampling designs and analysis. Furthermore, inappropriately applying closed models could have negative consequences when monitoring rare or declining species for conservation and management decisions, because violations of closure typically lead to overestimates of the probability of occurrence.
Post-SM4 Sensitivity Calibration of the STIS Echelle Modes
NASA Astrophysics Data System (ADS)
Bostroem, K. Azalee; Aloisi, A.; Bohlin, R.; Hodge, P.; Proffitt, C.
2012-01-01
On-orbit sensitivity curves for all echelle modes were derived for post-Servicing Mission 4 data using observations of the DA white dwarf G191-B2B. Additionally, new echelle ripple tables and grating-dependent bad pixel tables were created for the FUV and NUV MAMA. We review the procedures used to derive the adopted throughputs and to implement them in the pipeline, as well as the motivation for the modification of the additional reference files and pipeline procedures.
NASA Astrophysics Data System (ADS)
Conte, Eric D.; Barry, Eugene F.; Rubinstein, Harry
1996-12-01
Certain individuals may be sensitive to specific compounds in consumer products. It is important to quantify these analytes in food products in order to monitor their intake. Caffeine is one such compound. Determination of caffeine in beverages by spectrophotometric procedures requires an extraction procedure, which can prove time-consuming. Although the corresponding determination by HPLC allows for direct injection, capillary zone electrophoresis provides several advantages such as extremely low solvent consumption, smaller sample volume requirements, and improved sensitivity.
Jones, Edmund; Epstein, David; García-Mochón, Leticia
2017-10-01
For health-economic analyses that use multistate Markov models, it is often necessary to convert from transition rates to transition probabilities, and for probabilistic sensitivity analysis and other purposes it is useful to have explicit algebraic formulas for these conversions, to avoid having to resort to numerical methods. However, if there are four or more states then the formulas can be extremely complicated. These calculations can be made using packages such as R, but many analysts and other stakeholders still prefer to use spreadsheets for these decision models. We describe a procedure for deriving formulas that use intermediate variables so that each individual formula is reasonably simple. Once the formulas have been derived, the calculations can be performed in Excel or similar software. The procedure is illustrated by several examples and we discuss how to use a computer algebra system to assist with it. The procedure works in a wide variety of scenarios but cannot be employed when there are several backward transitions and the characteristic equation has no algebraic solution, or when the eigenvalues of the transition rate matrix are very close to each other.
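As a sketch of the rate-to-probability conversion described above (not the authors' published worksheet formulas), the transition probability matrix over a cycle of length t is the matrix exponential of the transition rate matrix; when the eigenvalues are distinct it can be expanded through an eigendecomposition, which is the form that explicit per-cell algebraic formulas are built from:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 3-state transition rate matrix (rows sum to zero), rates per year
Q = np.array([[-0.20,  0.15, 0.05],
              [ 0.00, -0.10, 0.10],
              [ 0.00,  0.00, 0.00]])   # third state is absorbing (e.g., death)

t = 1.0  # cycle length in years

# Transition probabilities over one cycle: P = expm(Q * t)
P_direct = expm(Q * t)

# Equivalent eigendecomposition form: P = V diag(exp(lambda_i * t)) V^-1
eigvals, V = np.linalg.eig(Q)
P_eig = V @ np.diag(np.exp(eigvals * t)) @ np.linalg.inv(V)

print(np.allclose(P_direct, P_eig.real))  # True
print(P_direct.round(4))
```

In a spreadsheet implementation, each cell of P would carry the corresponding closed-form expression written in terms of intermediate variables such as the eigenvalues, in the spirit of the procedure the article describes.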
Circulating tumor DNA: a promising biomarker in the liquid biopsy of cancer.
Cheng, Feifei; Su, Li; Qian, Cheng
2016-07-26
Tissue biopsy is the standard diagnostic procedure for cancers and also provides a material for genotyping, which can assist in the targeted therapies of cancers. However, tissue biopsy-based cancer diagnostic procedures have limitations in their assessment of cancer development, prognosis and genotyping, due to tumor heterogeneity and evolution. Circulating tumor DNA (ctDNA) is single- or double-stranded DNA released by the tumor cells into the blood and it thus harbors the mutations of the original tumor. In recent years, liquid biopsy based on ctDNA analysis has shed new light on the molecular diagnosis and monitoring of cancer. Studies found that the screening of genetic mutations using ctDNA is highly sensitive and specific, suggesting that ctDNA analysis may significantly improve current systems of tumor diagnosis, even facilitating early-stage detection. Moreover, ctDNA analysis is capable of accurately determining the tumor progression, prognosis and assisting in targeted therapy. Therefore, using ctDNA as a liquid biopsy may herald a revolution for tumor management. Herein, we review the biology of ctDNA, its detection methods and potential applications in tumor diagnosis, treatment and prognosis.
Quantitative ion beam analysis of M-C-O systems: application to an oxidized uranium carbide sample
NASA Astrophysics Data System (ADS)
Martin, G.; Raveu, G.; Garcia, P.; Carlot, G.; Khodja, H.; Vickridge, I.; Barthe, M. F.; Sauvage, T.
2014-04-01
A large variety of materials contain both carbon and oxygen atoms, in particular oxidized carbides, carbon alloys (such as ZrC, UC, steels, etc.), and oxycarbide compounds (SiCO glasses, TiCO, etc.). Here a new ion beam analysis methodology is described which enables quantification of the elemental composition and the oxygen concentration profile over a few microns. It is based on two procedures. The first, relating to the experimental configuration, relies on a specific detection setup which is original in that it enables the separation of the carbon and oxygen NRA signals. The second concerns the data analysis procedure, i.e. the method for deriving the elemental composition from the particle energy spectrum. It is a generic algorithm and is here successfully applied to characterize an oxidized uranium carbide sample, developed as a potential fuel for generation IV nuclear reactors. Furthermore, a micro-beam was used to simultaneously determine the local elemental composition and oxygen concentration profiles over the first microns below the sample surface. This method is suited to the determination of the composition of MxCyOz compounds with a sensitivity for elemental atomic concentrations of around 1000 ppm.
Comparing light sensitivity, linearity and step response of electronic cameras for ophthalmology.
Kopp, O; Markert, S; Tornow, R P
2002-01-01
To develop and test a procedure to measure and compare light sensitivity, linearity and step response of electronic cameras. The pixel value (PV) of digitized images as a function of light intensity (I) was measured. The sensitivity was calculated from the slope of the PV(I) function; the linearity was estimated from the correlation coefficient of this function. To measure the step response, a short sequence of images was acquired. During acquisition, a light source was switched on and off using a fast shutter. The resulting PV was calculated for each video field of the sequence. A CCD camera optimized for the near-infrared (IR) spectrum showed the highest sensitivity for both visible and IR light. There were only small differences in linearity. The step response depends on the integration and readout procedure.
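A minimal sketch of the described procedure, estimating sensitivity as the slope of a pixel-value versus intensity fit and linearity from its correlation coefficient (hypothetical readings, not the authors' data):

```python
import numpy as np

# Hypothetical calibration: light intensity (arbitrary units) and mean pixel value
intensity = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])
pixel_value = np.array([2.1, 51.8, 102.3, 150.9, 201.5, 249.8])

slope, intercept = np.polyfit(intensity, pixel_value, 1)   # sensitivity = slope of PV(I)
r = np.corrcoef(intensity, pixel_value)[0, 1]              # linearity estimate

print(f"sensitivity ~ {slope:.1f} PV per unit intensity, r = {r:.4f}")
```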
Petit, F; Craquelin, S; Guespin-Michel, J; Buffet-Janvresse, C
1999-03-01
We describe an extraction protocol for genomic DNA and RNA of both viruses and bacteria from polluted estuary water. This procedure was adapted to the molecular study of the microflora of estuarine water, where bacteria and viruses are found free, forming low-density biofilms, or intimately associated with organo-mineral particles. The sensitivity of the method was determined with seeded samples for RT-PCR and PCR analysis of viruses (10 virions/mL) and bacteria (1 colony-forming unit/mL). We report an example of molecular detection of both poliovirus and Salmonella in the Seine estuary (France) and an approach to studying their association with organo-mineral particles.
CUORE-0 results and prospects for the CUORE experiment
NASA Astrophysics Data System (ADS)
Cremonesi, O.; Artusa, D. R.; Avignone, F. T.; Azzolini, O.; Balata, M.; Banks, T. I.; Bari, G.; Beeman, J.; Bellini, F.; Bersani, A.; Biassoni, M.; Brofferio, C.; Bucci, C.; Camacho, A.; Caminata, A.; Canonica, L.; Cao, X.; Capelli, S.; Cappelli, L.; Carbone, L.; Cardani, L.; Casali, N.; Cassina, L.; Chiesa, D.; Chott, N.; Clemenza, M.; Copello, S.; Cosmelli, C.; Creswick, R. J.; Cushman, J. S.; Dafinei, I.; Dally, A.; Datskov, V.; Dell'Oro, S.; Deninno, M. M.; Di Domizio, S.; di Vacri, M. L.; Drobizhev, A.; Ejzak, L.; Fang, D. Q.; Farach, H. A.; Faverzani, M.; Fernandes, G.; Ferri, E.; Ferroni, F.; Fiorini, E.; Franceschi, M. A.; Freedman, S. J.; Fujikawa, B. K.; Giachero, A.; Gironi, L.; Giuliani, A.; Gorla, P.; Gotti, C.; Gutierrez, T. D.; Haller, E. E.; Han, K.; Heeger, K. M.; Hennings-Yeomans, R.; Hickerson, K. P.; Huang, H. Z.; Kadel, R.; Keppel, G.; Kolomensky, Yu. G.; Li, Y. L.; Ligi, C.; Lim, K. E.; Liu, X.; Ma, Y. G.; Maiano, C.; Maino, M.; Martinez, M.; Maruyama, R. H.; Mei, Y.; Moggi, N.; Morganti, S.; Napolitano, T.; Nastasi, M.; Nisi, S.; Nones, C.; Norman, E. B.; Nucciotti, A.; O'Donnell, T.; Orio, F.; Orlandi, D.; Ouellet, J. L.; Pagliarone, C. E.; Pallavicini, M.; Palmieri, V.; Pattavina, L.; Pavan, M.; Pedretti, M.; Pessina, G.; Pettinacci, V.; Piperno, G.; Pira, C.; Pirro, S.; Pozzi, S.; Previtali, E.; Rosenfeld, C.; Rusconi, C.; Sala, E.; Sangiorgio, S.; Scielzo, N. D.; Sisti, M.; Smith, A. R.; Taffarello, L.; Tenconi, M.; Terranova, F.; Tomei, C.; Trentalange, S.; Ventura, G.; Vignati, M.; Wang, B. S.; Wang, H. W.; Wielgus, L.; Wilson, J.; Winslow, L. A.; Wise, T.; Woodcraft, A.; Zanotti, L.; Zarra, C.; Zhang, G. Q.; Zhu, B. X.; Zucchelli, S.
2015-07-01
With 741 kg of TeO2 crystals and an excellent energy resolution of 5 keV (0.2%) at the region of interest, the CUORE (Cryogenic Underground Observatory for Rare Events) experiment aims at searching for neutrinoless double beta decay of 130Te with unprecedented sensitivity. Expected to start data taking in 2015, CUORE is currently in an advanced construction phase at LNGS. The CUORE projected neutrinoless double beta decay half-life sensitivity is 1.6 × 10²⁶ y at 1σ (9.5 × 10²⁵ y at the 90% confidence level), in five years of live time, corresponding to an upper limit on the effective Majorana mass in the range 40-100 meV (50-130 meV). Further background rejection with auxiliary bolometric detectors could improve the CUORE sensitivity and the competitiveness of bolometric detectors towards a full analysis of the inverted neutrino mass hierarchy. CUORE-0 was built to test and demonstrate the performance of the upcoming CUORE experiment. It consists of a single CUORE tower (52 TeO2 bolometers of 750 g each, arranged in a 13 floor structure) constructed strictly following CUORE recipes both for materials and assembly procedures. An experiment in its own right, CUORE-0 is expected to reach a sensitivity to the ββ(0ν) half-life of 130Te around 3 × 10²⁴ y in one year of live time. We present an update of the data, corresponding to an exposure of 18.1 kg y. An analysis of the background indicates that the CUORE performance goal is satisfied while the sensitivity goal is within reach.
Eggers, Kai M; Lindahl, Bertil; Melki, Dina; Jernberg, Tomas
2016-08-07
Cardiac troponin (cTn) assays with improved sensitivity are increasingly utilized for the assessment of patients admitted because of suspected acute coronary syndrome (ACS). However, data on the clinical consequences of the implementation of such assays are limited. In a retrospective register-based study (37 710 coronary care unit admissions; SWEDEHEART registry), we compared the case mix, the use of diagnostic procedures, treatments, and 1-year all-cause mortality 1 year before the implementation of a cTn assay with improved sensitivity (study period 1) and 1 year thereafter (study period 2). During study period 2, more at-risk patients were admitted and more patients had cTn levels above the myocardial infarction cut-off (ACS patients +13.1%; non-ACS patients +160.1%). cTn levels above this cut-off exhibited stronger associations with mortality risk in study period 2 (adjusted HR 4.45 [95% confidence interval, CI, 3.36-5.89]) compared with period 1 (adjusted HR 2.43 [95% CI 2.11-2.80]), similar as for the cTn ratio relative to the respective 99th percentile. While there was no multivariable-adjusted increase in the use of diagnostic procedures, significant trends towards more differentiated treatment depending on the cause of cTn elevation, i.e. ACS or non-ACS, were noted. The implementation of a cTn assay with improved sensitivity was associated with an increase in the number of patients who due to their cTn-status were identified as suitable for beneficial therapies. There was no inappropriate increase in hospital resource utilization. As such, cTn assays with improved sensitivity provide an opportunity to improve the clinical management of patients with suspected ACS. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2016. For permissions please email: journals.permissions@oup.com.
NASA Technical Reports Server (NTRS)
1980-01-01
A simple procedure to evaluate actual evaporation was derived by linearizing the surface energy balance equation, using Taylor's expansion. The original multidimensional hypersurface could be reduced to a linear relationship between evaporation and surface temperature or to a surface relationship involving evaporation, surface temperature and albedo. This procedure permits a rapid sensitivity analysis of the surface energy balance equation as well as a speedy mapping of evaporation from remotely sensed surface temperatures and albedo. Comparison with experimental data yielded promising results. The validity of evapotranspiration and soil moisture models in semiarid conditions was tested. Wheat was the crop chosen for a continuous measurement campaign made in the south of Italy. Radiometric, micrometeorologic, agronomic and soil data were collected for processing and interpretation.
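As an illustration of the linearization idea (a schematic toy energy balance, not the report's actual formulation), a first-order Taylor expansion of evaporation about a reference surface temperature gives a linear relation between evaporation and remotely sensed surface temperature:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def latent_heat_flux(Ts, albedo, Rs=800.0, Lin=350.0, Ta=295.0,
                     rho_cp_over_ra=25.0, emissivity=0.98):
    """Toy surface energy balance solved for the latent heat (evaporation) flux:
    LE = (1 - albedo)*Rs + Lin - emissivity*sigma*Ts**4 - (rho*cp/ra)*(Ts - Ta).
    All parameter values are illustrative; ground heat flux is neglected."""
    net_radiation = (1.0 - albedo) * Rs + Lin - emissivity * SIGMA * Ts**4
    sensible_heat = rho_cp_over_ra * (Ts - Ta)
    return net_radiation - sensible_heat

Ts0, alb0 = 300.0, 0.20          # reference surface temperature (K) and albedo
dT = 0.01
dLE_dTs = (latent_heat_flux(Ts0 + dT, alb0) -
           latent_heat_flux(Ts0 - dT, alb0)) / (2 * dT)   # numerical partial derivative

Ts = 303.0                       # remotely sensed surface temperature
LE_linear = latent_heat_flux(Ts0, alb0) + dLE_dTs * (Ts - Ts0)  # first-order Taylor estimate
print(round(LE_linear, 1), round(latent_heat_flux(Ts, alb0), 1))  # linearized vs. full evaluation
```

The same expansion extended to a second variable (albedo) yields the surface relationship between evaporation, surface temperature and albedo mentioned above.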
Technical Note: Procedure for the calibration and validation of kilo-voltage cone-beam CT models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vilches-Freixas, Gloria; Létang, Jean Michel; Rit,
2016-09-15
Purpose: The aim of this work is to propose a general and simple procedure for the calibration and validation of kilo-voltage cone-beam CT (kV CBCT) models against experimental data. Methods: The calibration and validation of the CT model is a two-step procedure: the source model then the detector model. The source is described by the direction dependent photon energy spectrum at each voltage while the detector is described by the pixel intensity value as a function of the direction and the energy of incident photons. The measurements for the source consist of a series of dose measurements in air performed at each voltage with varying filter thicknesses and materials in front of the x-ray tube. The measurements for the detector are acquisitions of projection images using the same filters and several tube voltages. The proposed procedure has been applied to calibrate and assess the accuracy of simple models of the source and the detector of three commercial kV CBCT units. If the CBCT system models had been calibrated differently, the current procedure would have been exclusively used to validate the models. Several high-purity attenuation filters of aluminum, copper, and silver combined with a dosimeter which is sensitive to the range of voltages of interest were used. A sensitivity analysis of the model has also been conducted for each parameter of the source and the detector models. Results: Average deviations between experimental and theoretical dose values are below 1.5% after calibration for the three x-ray sources. The predicted energy deposited in the detector agrees with experimental data within 4% for all imaging systems. Conclusions: The authors developed and applied an experimental procedure to calibrate and validate any model of the source and the detector of a CBCT unit. The present protocol has been successfully applied to three x-ray imaging systems. The minimum requirements in terms of material and equipment would make its implementation suitable in most clinical environments.
French, Michael T; Salomé, Helena J; Sindelar, Jody L; McLellan, A Thomas
2002-04-01
To provide detailed methodological guidelines for using the Drug Abuse Treatment Cost Analysis Program (DATCAP) and Addiction Severity Index (ASI) in a benefit-cost analysis of addiction treatment. A representative benefit-cost analysis of three outpatient programs was conducted to demonstrate the feasibility and value of the methodological guidelines. Procedures are outlined for using resource use and cost data collected with the DATCAP. Techniques are described for converting outcome measures from the ASI to economic (dollar) benefits of treatment. Finally, principles are advanced for conducting a benefit-cost analysis and a sensitivity analysis of the estimates. The DATCAP was administered at three outpatient drug-free programs in Philadelphia, PA, for 2 consecutive fiscal years (1996 and 1997). The ASI was administered to a sample of 178 treatment clients at treatment entry and at 7-months postadmission. The DATCAP and ASI appear to have significant potential for contributing to an economic evaluation of addiction treatment. The benefit-cost analysis and subsequent sensitivity analysis all showed that total economic benefit was greater than total economic cost at the three outpatient programs, but this representative application is meant to stimulate future economic research rather than justifying treatment per se. This study used previously validated, research-proven instruments and methods to perform a practical benefit-cost analysis of real-world treatment programs. The study demonstrates one way to combine economic and clinical data and offers a methodological foundation for future economic evaluations of addiction treatment.
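A schematic of the benefit-cost and sensitivity calculations described above, with entirely hypothetical per-client dollar figures (not the DATCAP/ASI estimates):

```python
# Hypothetical per-client annual figures (USD)
treatment_cost = 2_500.0
benefits = {"avoided_crime": 1_800.0, "earnings_gain": 1_200.0, "avoided_health_care": 900.0}

total_benefit = sum(benefits.values())
net_benefit = total_benefit - treatment_cost
bc_ratio = total_benefit / treatment_cost
print(f"net benefit = {net_benefit:.0f}, benefit-cost ratio = {bc_ratio:.2f}")

# One-way sensitivity analysis: scale each benefit component down by 50% in turn
for name in benefits:
    adjusted = {**benefits, name: benefits[name] * 0.5}
    print(name, round(sum(adjusted.values()) / treatment_cost, 2))
```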
Validation of Living Donor Nephrectomy Codes
Lam, Ngan N.; Lentine, Krista L.; Klarenbach, Scott; Sood, Manish M.; Kuwornu, Paul J.; Naylor, Kyla L.; Knoll, Gregory A.; Kim, S. Joseph; Young, Ann; Garg, Amit X.
2018-01-01
Background: Use of administrative data for outcomes assessment in living kidney donors is increasing given the rarity of complications and challenges with loss to follow-up. Objective: To assess the validity of living donor nephrectomy in health care administrative databases compared with the reference standard of manual chart review. Design: Retrospective cohort study. Setting: 5 major transplant centers in Ontario, Canada. Patients: Living kidney donors between 2003 and 2010. Measurements: Sensitivity and positive predictive value (PPV). Methods: Using administrative databases, we conducted a retrospective study to determine the validity of diagnostic and procedural codes for living donor nephrectomies. The reference standard was living donor nephrectomies identified through the province’s tissue and organ procurement agency, with verification by manual chart review. Operating characteristics (sensitivity and PPV) of various algorithms using diagnostic, procedural, and physician billing codes were calculated. Results: During the study period, there were a total of 1199 living donor nephrectomies. Overall, the best algorithm for identifying living kidney donors was the presence of 1 diagnostic code for kidney donor (ICD-10 Z52.4) and 1 procedural code for kidney procurement/excision (1PC58, 1PC89, 1PC91). Compared with the reference standard, this algorithm had a sensitivity of 97% and a PPV of 90%. The diagnostic and procedural codes performed better than the physician billing codes (sensitivity 60%, PPV 78%). Limitations: The donor chart review and validation study was performed in Ontario and may not be generalizable to other regions. Conclusions: An algorithm consisting of 1 diagnostic and 1 procedural code can be reliably used to conduct health services research that requires the accurate determination of living kidney donors at the population level. PMID:29662679
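The reported operating characteristics reduce to simple counts against the chart-review reference standard; the sketch below uses illustrative cell counts chosen to be consistent with the reported 97% sensitivity, 90% PPV, and 1199 verified donors (they are not the study's actual 2 × 2 table):

```python
def operating_characteristics(true_pos, false_neg, false_pos):
    """Sensitivity = TP / (TP + FN); positive predictive value = TP / (TP + FP)."""
    sensitivity = true_pos / (true_pos + false_neg)
    ppv = true_pos / (true_pos + false_pos)
    return sensitivity, ppv

# Illustrative counts: algorithm flags 1290 admissions, 1163 of which are verified donor
# nephrectomies, and it misses 36 of the 1199 chart-verified cases
sens, ppv = operating_characteristics(true_pos=1163, false_neg=36, false_pos=127)
print(f"sensitivity = {sens:.0%}, PPV = {ppv:.0%}")
```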
Sensitivity of the diagnostic radiological index of protection to procedural factors in fluoroscopy.
Jones, A Kyle; Pasciak, Alexander S; Wagner, Louis K
2016-07-01
To evaluate the sensitivity of the diagnostic radiological index of protection (DRIP), used to quantify the protective value of radioprotective garments, to procedural factors in fluoroscopy in an effort to determine an appropriate set of scatter-mimicking primary beams to be used in measuring the DRIP. Monte Carlo simulations were performed to determine the shape of the scattered x-ray spectra incident on the operator in different clinical fluoroscopy scenarios, including interventional radiology and interventional cardiology (IC). Two clinical simulations studied the sensitivity of the scattered spectrum to gantry angle and patient size, while technical factors were varied according to measured automatic dose rate control (ADRC) data. Factorial simulations studied the sensitivity of the scattered spectrum to gantry angle, field of view, patient size, and beam quality for constant technical factors. Average energy (Eavg) was the figure of merit used to condense fluence in each energy bin to a single numerical index. Beam quality had the strongest influence on the scattered spectrum in fluoroscopy. Many procedural factors affect the scattered spectrum indirectly through their effect on primary beam quality through ADRC, e.g., gantry angle and patient size. Lateral C-arm rotation, common in IC, increased the energy of the scattered spectrum, regardless of the direction of rotation. The effect of patient size on scattered radiation depended on ADRC characteristics, patient size, and procedure type. The scattered spectrum striking the operator in fluoroscopy is most strongly influenced by primary beam quality, particularly kV. Use cases for protective garments should be classified by typical procedural primary beam qualities, which are governed by the ADRC according to the impacts of patient size, anatomical location, and gantry angle.
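The figure of merit described, the average energy of the scattered spectrum, is a fluence-weighted mean over energy bins; a minimal sketch with a made-up binned spectrum:

```python
import numpy as np

# Hypothetical scattered-photon spectrum: bin centers (keV) and relative fluence per bin
energy_kev = np.array([20, 30, 40, 50, 60, 70, 80, 90])
fluence = np.array([0.02, 0.10, 0.20, 0.25, 0.20, 0.13, 0.07, 0.03])

E_avg = np.sum(energy_kev * fluence) / np.sum(fluence)   # fluence-weighted average energy
print(f"E_avg = {E_avg:.1f} keV")
```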
1952-11-01
COPPER. Although copper can be determined by measurement of the blue cupric ammonia complex, the reaction is not very sensitive and is subject to...alkaline solution of the sample containing tartrate, provided a means of separation of copper by extraction of the copper benzoinoximate...potassium tartrate), and sodium hydroxide solution added to adjust the pH within the range 11.3 to 12.3. After adding alpha-benzoinoxime the mixture was
Time domain modal identification/estimation of the mini-mast testbed
NASA Technical Reports Server (NTRS)
Roemer, Michael J.; Mook, D. Joseph
1991-01-01
The Mini-Mast is a 20 meter long 3-dimensional, deployable/retractable truss structure designed to imitate future trusses in space. Presented here are results from a robust (with respect to measurement noise sensitivity), time domain, modal identification technique for identifying the modal properties of the Mini-Mast structure even in the face of noisy environments. Three testing/analysis procedures are considered: sinusoidal excitation near resonant frequencies of the Mini-Mast, frequency response function averaging of several modal tests, and random input excitation with a free response period.
Potentialities of mass spectrometry (ICP-MS) for actinides determination in urine.
Bouvier-Capely, C; Ritt, J; Baglan, N; Cossonnet, C
2004-05-01
The applicability of inductively coupled plasma-mass spectrometry (ICP-MS) for determining actinides in urine was investigated. The performance of ICP-MS, including detection limits and analysis time, was studied and compared with that of alpha spectrometry. In the field of individual monitoring of workers, the comparison chart obtained in this study can be used as a guide for medical laboratories to select the most adequate procedure to be carried out depending on the case in question (the radioisotope to be measured, the required sensitivity, and the desired response time).
Model-Based Verification and Validation of Spacecraft Avionics
NASA Technical Reports Server (NTRS)
Khan, Mohammed Omair
2012-01-01
Our simulation was able to mimic the results of 30 tests on the actual hardware. This shows that simulations have the potential to enable early design validation - well before actual hardware exists. Although simulations focused around data processing procedures at subsystem and device level, they can also be applied to system level analysis to simulate mission scenarios and consumable tracking (e.g. power, propellant, etc.). Simulation engine plug-in developments are continually improving the product, but handling time for time-sensitive operations (like those of the remote engineering unit and bus controller) can be cumbersome.
GC-MS quantitation of fragrance compounds suspected to cause skin reactions. 1.
Chaintreau, Alain; Joulain, Daniel; Marin, Christophe; Schmidt, Claus-Oliver; Vey, Matthias
2003-10-22
Recent changes in European legislation require monitoring of 24 volatile compounds in perfumes as they might elicit skin sensitization. This paper reports a GC-MS quantitation procedure for their determination in fragrance concentrates. GC and MS conditions were optimized for a routine use: analysis within 30 min, solvent and internal standard selection, and stock solution stability. Calibration curves were linear in the range of 2-100 mg/L with coefficients of determination in excess of 0.99. The method was tested using real perfumes spiked with known amounts of reference compounds.
NASA Technical Reports Server (NTRS)
Smetana, F. O.; Summery, D. C.; Johnson, W. D.
1972-01-01
Techniques quoted in the literature for the extraction of stability derivative information from flight test records are reviewed. A recent technique developed at NASA's Langley Research Center was regarded as the most productive yet developed. Results of tests of the sensitivity of this procedure to various types of data noise and to the accuracy of the estimated values of the derivatives are reported. Computer programs for providing these initial estimates are given. The literature review also includes a discussion of flight test measuring techniques, instrumentation, and piloting techniques.
Solar energy system economic evaluation for Seeco Lincoln, Lincoln, Nebraska
NASA Technical Reports Server (NTRS)
1980-01-01
The economic analysis of the solar energy system that was installed at Lincoln, Nebraska is developed for this and four other sites typical of a wide range of environmental and economic conditions in the continental United States. This analysis is accomplished based on the technical and economic models in the f chart design procedure with inputs based on the characteristics of the installed system and local conditions. The results are expressed in terms of the economic parameters of present worth of system cost over the projected twenty-year life, life-cycle savings, year of positive savings, and year of payback for the optimized solar energy system at each of the analysis sites. The sensitivity of the economic evaluation to uncertainties in constituent system and economic variables is also investigated.
Vibroacoustic optimization using a statistical energy analysis model
NASA Astrophysics Data System (ADS)
Culla, Antonio; D`Ambrogio, Walter; Fregolent, Annalisa; Milana, Silvia
2016-08-01
In this paper, an optimization technique for medium-high frequency dynamic problems based on the Statistical Energy Analysis (SEA) method is presented. Using a SEA model, the subsystem energies are controlled by internal loss factors (ILFs) and coupling loss factors (CLFs), which in turn depend on the physical parameters of the subsystems. A preliminary sensitivity analysis of subsystem energy to the CLFs is performed to select the CLFs that most strongly affect the subsystem energies. Since the injected power depends not only on the external loads but on the physical parameters of the subsystems as well, it must be taken into account under certain conditions. This is accomplished in the optimization procedure, where approximate relationships between CLFs, injected power and physical parameters are derived. The approach is applied to a typical aeronautical structure: the cabin of a helicopter.
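In SEA, the steady-state subsystem energies follow from a linear power balance in the internal and coupling loss factors; a two-subsystem sketch with illustrative values (not the helicopter-cabin model):

```python
import numpy as np

omega = 2 * np.pi * 1000.0        # analysis band centre frequency, rad/s (illustrative)
eta1, eta2 = 0.01, 0.02           # internal loss factors (ILFs)
eta12, eta21 = 0.003, 0.001       # coupling loss factors (CLFs)
P = np.array([1.0, 0.0])          # injected power into each subsystem, W

# Power balance P = omega * A @ E, where E holds the subsystem energies:
# P1 = omega * [(eta1 + eta12) E1 - eta21 E2], P2 = omega * [(eta2 + eta21) E2 - eta12 E1]
A = np.array([[eta1 + eta12, -eta21],
              [-eta12,        eta2 + eta21]])
E = np.linalg.solve(omega * A, P)
print("subsystem energies (J):", E)
```

Perturbing a single CLF and re-solving this system is the kind of sensitivity check used to decide which CLFs to retain in the optimization.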
NASA Astrophysics Data System (ADS)
Gao, Yi; Zhu, Liangjia; Norton, Isaiah; Agar, Nathalie Y. R.; Tannenbaum, Allen
2014-03-01
Desorption electrospray ionization mass spectrometry (DESI-MS) provides a highly sensitive imaging technique for differentiating normal and cancerous tissue at the molecular level. This can be very useful, especially under intra-operative conditions where the surgeon has to make crucial decisions about the tumor boundary. In such situations, the time it takes for imaging and data analysis becomes a critical factor. Therefore, in this work we utilize compressive sensing to perform sparse sampling of the tissue, which halves the scanning time. Furthermore, sparse feature selection is performed, which not only reduces the dimension of the data from about 10⁴ to less than 50, but also significantly shortens the analysis time. This procedure also identifies biochemically important molecules for further pathological analysis. The methods are validated on brain and breast tumor data sets.
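A sketch of the sparse feature-selection step using an L1-penalized classifier on synthetic spectra (a generic stand-in under assumed data shapes, not the authors' pipeline):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_samples, n_features = 200, 2000            # spectra x m/z bins (synthetic)
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 2, size=n_samples)       # 0 = normal, 1 = tumor (synthetic labels)
X[y == 1, :30] += 1.0                        # make the first 30 bins informative

# An L1 penalty drives most coefficients to zero, leaving a small set of selected peaks
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.05).fit(X, y)
selected = np.flatnonzero(clf.coef_[0])
print(f"{selected.size} m/z bins selected out of {n_features}")
```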
NASA Astrophysics Data System (ADS)
Toledo, D.; Arruego, I.; Apéstigue, V.; Jiménez, J. J.; Gómez, L.; Yela, M.; Rannou, P.; Pommereau, J.-P.
2017-04-01
The solar irradiance sensor (SIS) was included in the DREAMS package onboard the ExoMars 2016 Entry Descent and Landing Demonstrator Module, and has been selected for the METEO meteorological station onboard the ExoMars 2020 Lander. This instrument is designed to measure at different time intervals the scattered flux or the sum of direct and scattered flux in UVA (315-400 nm) and NIR (700-1100 nm) bands. For SIS'16, these measurements are performed by a total of 3 sensors per band placed at the faces of a truncated tetrahedron with face inclination angles of 60°. The principal goal of the SIS'16 design is to perform measurements of the dust opacity at UVA and NIR wavelengths, crucial parameters in the understanding of the Martian dust cycle. The retrieval procedure is based on the use of radiative transfer simulations to reproduce SIS observations acquired during daytime as a function of dust opacity. Based on different sensitivity analyses, the retrieval procedure also requires including as free parameters (1) the dust effective radius; (2) the dust effective variance; and (3) the imaginary part of the refractive index of dust particles in the UVA band. We found that the imaginary part of the refractive index of dust particles does not have a big impact on the NIR signal, and hence this parameter can be kept constant in the retrieval of dust opacity at this channel. In addition to dust opacity measurements, this instrument is also capable of detecting and characterizing clouds by looking at the time variation of the color index (CI), defined as the ratio between the observations in the NIR and UVA channels, during daytime or twilight. By simulating CI signals with a radiative transfer model, the cloud opacity and cloud altitude (the latter only during twilight) can be retrieved. Here the different retrieval procedures that are used to analyze SIS measurements, as well as the results obtained in different sensitivity analyses, are presented and discussed.
Ultra-low background mass spectrometry for rare-event searches
NASA Astrophysics Data System (ADS)
Dobson, J.; Ghag, C.; Manenti, L.
2018-01-01
Inductively Coupled Plasma Mass Spectrometry (ICP-MS) allows for rapid, high-sensitivity determination of trace impurities, notably the primordial radioisotopes 238U and 232Th, in candidate materials for low-background rare-event search experiments. We describe the setup and characterisation of a dedicated low-background screening facility at University College London, where we operate an Agilent 7900 ICP-MS. The impact of reagent and carrier gas purity is evaluated and we show that twice-distilled ROMIL-SpA™-grade nitric acid and zero-grade Ar gas delivers similar sensitivity to ROMIL-UpA™-grade acid and research-grade gas. A straightforward procedure for sample digestion and analysis of materials with U/Th concentrations down to 10 ppt g/g is presented. This includes the use of 233U and 230Th spikes to correct for signal loss from a range of sources and verification of 238U and 232Th recovery through digestion and analysis of a certified reference material with a complex sample matrix. Finally, we demonstrate assays and present results from two sample preparation and assay methods: a high-sensitivity measurement of ultra-pure Ti using open digestion techniques, and a closed vessel microwave digestion of a nickel-chromium alloy using a multi-acid mixture.
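The spike correction mentioned above amounts to scaling the measured analyte signal by the recovered fraction of a non-natural isotope spike; a simplified sketch with hypothetical values:

```python
def spike_corrected_concentration(measured_analyte, spike_added, spike_recovered):
    """Correct an analyte concentration for losses using a non-natural isotope spike.
    Assumes the analyte and the spike behave identically through digestion and analysis."""
    recovery = spike_recovered / spike_added
    return measured_analyte / recovery

# Hypothetical: 238U measured at 8.2 ppt g/g while 75% of the 233U spike was recovered
print(round(spike_corrected_concentration(8.2, spike_added=10.0, spike_recovered=7.5), 2))
```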
Bangle, Rachel; Sampaio, Renato N; Troian-Gautier, Ludovic; Meyer, Gerald J
2018-01-24
The electrografting of [Ru(ttt)(tpy-C6H4-N2+)]3+, where "ttt" is 4,4',4″-tri-tert-butyl-2,2':6',2″-terpyridine, was investigated on several wide band gap metal oxide surfaces (TiO2, SnO2, ZrO2, ZnO, In2O3:Sn) and compared to structurally analogous sensitizers that differed only by the anchoring group, i.e., -PO3H2 and -COOH. An optimized procedure for diazonium electrografting to semiconductor metal oxides is presented that allowed surface coverages that ranged between 4.7 × 10⁻⁸ and 10.6 × 10⁻⁸ mol cm⁻² depending on the nature of the metal oxide. FTIR analysis showed the disappearance of the diazonium stretch at 2266 cm⁻¹ after electrografting. XPS analysis revealed a characteristic peak of Ru 3d at 285 eV as well as a peak at 531.6 eV that was attributed to O 1s in Ti-O-C bonds. Photocurrents were measured to assess the electron injection efficiency of these modified surfaces. The electrografted sensitizers exhibited excellent stability across a range of pH values spanning from 1 to 14, where classical binding groups such as carboxylic and phosphonic derivatives were hydrolyzed.
San Juan, R; Aguado, J M; López, M J; Lumbreras, C; Enriquez, F; Sanz, F; Chaves, F; López-Medrano, F; Lizasoain, M; Rufilanchas, J J
2005-03-01
Postsurgical mediastinitis (PSM) remains a major cause of morbidity and mortality in patients undergoing cardiac surgery procedures. Although prompt diagnosis is crucial in these patients, neither clinical data nor imaging techniques have shown enough sensitivity or specificity for early diagnosis of PSM. The aim of the present study was to assess the validity of blood cultures as a diagnostic test for the early detection of PSM in patients who become febrile after cardiac surgery procedures. During a 4-year period (1999-2002), patients who developed fever (>37.8 degrees C) in the first 60 days after a cardiac surgery procedure were evaluated. Blood cultures were drawn from these patients. PSM was defined as deep infection involving retrosternal tissue and/or the sternal bone directly observed by the surgeon and confirmed microbiologically. Three criteria for positivity of blood cultures were applied: bacteremia, staphylococcal bacteremia, or Staphylococcus aureus bacteremia. For purposes of the analysis, a positive blood culture in patients with PSM was considered a true-positive test and a negative blood culture a false-negative test. Otherwise, in febrile patients without PSM in the postsurgery period, a positive blood culture was considered a false-positive test and a negative blood culture a true-negative test. Blood cultures were drawn from 266 febrile patients in the postsurgery period. PSM occurred in 38 patients (26 cases due to S. aureus, 8 to Staphylococcus epidermidis, 3 to gram-negative enteric bacteria, and one to Pseudomonas aeruginosa). Within the 60-day postsurgical period, blood culture as a diagnostic test was most accurate in patients with S. aureus bacteremia, providing 68% sensitivity, 98% specificity, a positive predictive value of 87%, and a negative predictive value of 95%. If the analysis was limited to the period during which patients are at maximum risk for PSM (day 7-20), the values in patients with S. aureus bacteremia were as follows: 73% sensitivity, 98% specificity, 90% positive predictive value, and 93% negative predictive value. Blood culture is an accurate test for the early diagnosis of PSM in febrile patients after cardiac surgery, particularly in institutions where S. aureus is prevalent in this context. A negative blood culture practically excludes PSM and, during the period of maximum risk for PSM, the presence of S. aureus bacteremia should compel early surgical management.
Influence of mixing procedure on robustness of self-consolidating concrete.
DOT National Transportation Integrated Search
2014-08-01
Self-Consolidating Concrete is, in the fresh state, more sensitive to small variations in the constituent elements and the mixing procedure compared to Conventional Vibrated Concrete. Several studies have been performed recently to identify robustn...
Extraction Methods in Soil Phosphorus Characterisation
NASA Astrophysics Data System (ADS)
Soinne, Helena
2010-05-01
Extraction methods are widely used to assess the bioavailability of P and to characterise soil P reserves. Even though new and more sophisticated methods to characterise soil P are constantly developed, the use of extraction methods is not likely to be replaced because of the relatively simple analytical equipment needed for the analysis. However, the large variety of extractants, pre-treatments and sample preparation procedures complicates the comparison of published results. In order to improve our understanding of the behaviour and cycling of P in soil, it is important to know the role of extracted P in the soil P cycle. The knowledge of the factors affecting the analytical outcome is a prerequisite for justified interpretation of the results. In this study, the effect of sample pre-treatment and of the properties of the extractant used on extractable molybdate-reactive phosphorus (MRP) and molybdate-unreactive phosphorus (MUP) was studied. Furthermore, the effect of sample preparation procedures prior to analysis on measured MRP and MUP was studied. Two widely used sequential extraction procedures were compared on their ability to show management-induced differences in soil P. These results revealed that pre-treatments changed soil properties and air-drying was found to affect soil P, particularly extractable MUP, thought to represent organic P, by disrupting organic matter. This was evidenced by an increase in the water-extractable small-sized (<0.2 µm) P that, at least partly, took place at the expense of the large-sized (>0.2 µm) P. In addition to the effects of sample pre-treatment, the results showed that extractable organic P was sensitive to the chemical nature of the used extractant and to the sample preparation procedures employed prior to P analysis, including centrifugation and filtering of soil suspensions. Filtering may remove a major proportion of extractable MUP; therefore filtering cannot be recommended in the characterisation of solubilised MUP. However, extractants having high ionic strength may cause the organic molecules to collapse during centrifugation and thus affect the recovered concentration of MUP. These findings highlight the importance of characterising the nature of the MUP extracted with different extractants and acknowledging the sensitivity of MUP to analytical procedures when comparing published results. Widely used sequential fractionation procedures proved to be able to detect land-use-derived differences in the distribution of P among fractions of different solubilities. The results of this study demonstrate that, although the extraction methods do not reveal the biogeochemical function of a given P pool in soil, the extraction methods can be used to detect changes in soil P pools with different solubilities. To obtain the most benefit from extraction methods, we need a better understanding of the biological availability of P and the role of the extracted P fractions in the P cycle in soils from different environments (climate and weather) and land uses.
ERIC Educational Resources Information Center
Olatunji, Bunmi O.; Broman-Fulks, Joshua J.
2007-01-01
Disgust sensitivity has recently been implicated as a specific vulnerability factor for several anxiety-related disorders. However, it is not clear whether disgust sensitivity is a dimensional or categorical phenomenon. The present study examined the latent structure of disgust by applying three taxometric procedures (maximum eigenvalue, mean…
ERIC Educational Resources Information Center
Bakermans-Kranenburg, Marian J.; Alink, Lenneke R. A.; Biro, Szilvia; Voorthuis, Alexandra; van IJzendoorn, Marinus H.
2015-01-01
Observation of parental sensitivity in a standard procedure, in which caregivers are faced with the same level of infant demand, enables the comparison of sensitivity "between" caregivers. We developed an ecologically valid standardized setting using an infant simulator with interactive features, the Leiden Infant Simulator Sensitivity…
Nanomaterials for Electrochemical Immunosensing
Pan, Mingfei; Gu, Ying; Yun, Yaguang; Li, Min; Jin, Xincui; Wang, Shuo
2017-01-01
Electrochemical immunosensors resulting from a combination of the traditional immunoassay approach with modern biosensors and electrochemical analysis constitute a current research hotspot. They exhibit both the high selectivity characteristics of immunoassays and the high sensitivity of electrochemical analysis, along with other merits such as small volume, convenience, low cost, simple preparation, and real-time on-line detection, and have been widely used in the fields of environmental monitoring, medical clinical trials and food analysis. Notably, the rapid development of nanotechnology and the wide application of nanomaterials have provided new opportunities for the development of high-performance electrochemical immunosensors. Various nanomaterials with different properties can effectively solve issues such as the immobilization of biological recognition molecules, enrichment and concentration of trace analytes, and signal detection and amplification to further enhance the stability and sensitivity of the electrochemical immunoassay procedure. This review introduces the working principles and development of electrochemical immunosensors based on different signals, along with new achievements and progress related to electrochemical immunosensors in various fields. The importance of various types of nanomaterials for improving the performance of electrochemical immunosensor is also reviewed to provide a theoretical basis and guidance for the further development and application of nanomaterials in electrochemical immunosensors. PMID:28475158
Sample preparation: a critical step in the analysis of cholesterol oxidation products.
Georgiou, Christiana A; Constantinou, Michalis S; Kapnissi-Christodoulou, Constantina P
2014-02-15
In recent years, cholesterol oxidation products (COPs) have drawn scientific interest, particularly due to their implications for human health. A large number of these compounds have been demonstrated to be cytotoxic, mutagenic, and carcinogenic. The main source of COPs is through diet, and particularly from the consumption of cholesterol-rich foods. This raises questions about the safety of consumers, and it suggests the necessity for the development of a sensitive and reliable analytical method in order to identify and quantify these components in food samples. Sample preparation is a necessary step in the analysis of COPs in order to eliminate interferences and increase sensitivity. Numerous publications have, over the years, reported the use of different methods for the extraction and purification of COPs. However, no method has, so far, been established as a routine method for the analysis of COPs in foods. Therefore, it was considered important to review different sample preparation procedures and evaluate the different preparative parameters, such as time of saponification, the type of organic solvents for fat extraction, the stationary phase in solid phase extraction, etc., according to recovery, precision and simplicity. Copyright © 2013 Elsevier Ltd. All rights reserved.
DNA melting analysis: application of the "open tube" format for detection of mutant KRAS.
Botezatu, Irina V; Kondratova, Valentina N; Shelepov, Valery P; Lichtenstein, Anatoly V
2011-12-15
High-resolution melting (HRM) analysis is a very effective method for genotyping and mutation scanning that is usually performed just after PCR amplification (the "closed tube" format). Though simple and convenient, the closed tube format makes the HRM dependent on the PCR mix, which is not generally optimal for DNA melting analysis. Here, the "open tube" format, namely the post-PCR optimization procedure (amplicon shortening and solution chemistry modification), is proposed. As a result, mutation scanning of short amplicons becomes feasible on a standard real-time PCR instrument (not primarily designed for HRM) using SYBR Green I. This approach has allowed us to considerably enhance the sensitivity of detecting mutant KRAS using both low- and high-resolution systems (the Bio-Rad iQ5-SYBR Green I and Bio-Rad CFX96-EvaGreen, respectively). The open tube format, though more laborious than the closed tube one, can be used in situations when maximal sensitivity of the method is needed. It also permits standardization of DNA melting experiments and the introduction of instruments of a "lower level" into the range of those suitable for mutation scanning. Copyright © 2011 Elsevier Inc. All rights reserved.
Low energy analysis techniques for CUORE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alduino, C.; Alfonso, K.; Artusa, D. R.
CUORE is a tonne-scale cryogenic detector operating at the Laboratori Nazionali del Gran Sasso (LNGS) that uses tellurium dioxide bolometers to search for neutrinoless double-beta decay of 130Te. CUORE is also suitable to search for low energy rare events such as solar axions or WIMP scattering, thanks to its ultra-low background and large target mass. However, to conduct such sensitive searches requires improving the energy threshold to 10 keV. Here in this article, we describe the analysis techniques developed for the low energy analysis of CUORE-like detectors, using the data acquired from November 2013 to March 2015 by CUORE-0, a single-tower prototype designed to validate the assembly procedure and new cleaning techniques of CUORE. We explain the energy threshold optimization, continuous monitoring of the trigger efficiency, data and event selection, and energy calibration at low energies in detail. We also present the low energy background spectrum of CUORE-0 below 60 keV. Finally, we report the sensitivity of CUORE to WIMP annual modulation using the CUORE-0 energy threshold and background, as well as an estimate of the uncertainty on the nuclear quenching factor from nuclear recoils in CUORE-0.
Measurement Consistency from Magnetic Resonance Images
Chung, Dongjun; Chung, Moo K.; Durtschi, Reid B.; Lindell, R. Gentry; Vorperian, Houri K.
2010-01-01
Rationale and Objectives: In quantifying medical images, length-based measurements are still obtained manually. Due to possible human error, a measurement protocol is required to guarantee the consistency of measurements. In this paper, we review various statistical techniques that can be used in determining measurement consistency. The focus is on detecting a possible measurement bias and determining the robustness of the procedures to outliers. Materials and Methods: We review correlation analysis, linear regression, the Bland-Altman method, the paired t-test, and analysis of variance (ANOVA). These techniques were applied to measurements, obtained by two raters, of head and neck structures from magnetic resonance images (MRI). Results: Correlation analysis and linear regression were shown to be insufficient for detecting measurement inconsistency, and both are very sensitive to outliers. The widely used Bland-Altman method is a visualization technique, so it lacks numerical quantification. The paired t-test tends to be sensitive to small measurement bias, whereas ANOVA performs well even under small measurement bias. Conclusion: In almost all cases, using only one method is insufficient, and it is recommended to use several methods simultaneously. In general, ANOVA performs the best. PMID:18790405
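As a concrete illustration of the comparison discussed above, the following sketch (Python with NumPy/SciPy; the rater measurements are hypothetical values, not data from the study) computes the correlation, the paired t-test, and the Bland-Altman bias with 95% limits of agreement for two raters.

```python
# Minimal sketch: consistency checks for two raters measuring the same structures.
# Hypothetical data; numpy and scipy assumed available.
import numpy as np
from scipy import stats

rater_a = np.array([42.1, 38.7, 45.0, 40.3, 43.8, 39.9, 41.5, 44.2])  # mm
rater_b = np.array([42.8, 38.2, 45.6, 41.0, 43.1, 40.5, 42.0, 44.9])  # mm

# Correlation (a high correlation alone does NOT imply agreement).
r, _ = stats.pearsonr(rater_a, rater_b)

# Paired t-test: sensitive to a systematic bias between raters.
t_stat, p_value = stats.ttest_rel(rater_a, rater_b)

# Bland-Altman: mean difference (bias) and 95% limits of agreement.
diff = rater_a - rater_b
bias = diff.mean()
loa_low = bias - 1.96 * diff.std(ddof=1)
loa_high = bias + 1.96 * diff.std(ddof=1)

print(f"Pearson r = {r:.3f}")
print(f"paired t-test: t = {t_stat:.3f}, p = {p_value:.3f}")
print(f"Bland-Altman bias = {bias:.3f} mm, limits of agreement = ({loa_low:.3f}, {loa_high:.3f}) mm")
```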
Gariepy, Aileen M; Creinin, Mitchell D; Smith, Kenneth J; Xu, Xiao
2014-08-01
To compare the expected probability of pregnancy after hysteroscopic versus laparoscopic sterilization based on available data using decision analysis. We developed an evidence-based Markov model to estimate the probability of pregnancy over 10 years after three different female sterilization procedures: hysteroscopic, laparoscopic silicone rubber band application and laparoscopic bipolar coagulation. Parameter estimates for procedure success, probability of completing follow-up testing and risk of pregnancy after different sterilization procedures were obtained from published sources. In the base case analysis at all points in time after the sterilization procedure, the initial and cumulative risk of pregnancy after sterilization is higher in women opting for hysteroscopic than either laparoscopic band or bipolar sterilization. The expected pregnancy rates per 1000 women at 1 year are 57, 7 and 3 for hysteroscopic sterilization, laparoscopic silicone rubber band application and laparoscopic bipolar coagulation, respectively. At 10 years, the cumulative pregnancy rates per 1000 women are 96, 24 and 30, respectively. Sensitivity analyses suggest that the three procedures would have an equivalent pregnancy risk of approximately 80 per 1000 women at 10 years if the probability of successful laparoscopic (band or bipolar) sterilization drops below 90% and successful coil placement on first hysteroscopic attempt increases to 98% or if the probability of undergoing a hysterosalpingogram increases to 100%. Based on available data, the expected population risk of pregnancy is higher after hysteroscopic than laparoscopic sterilization. Consistent with existing contraceptive classification, future characterization of hysteroscopic sterilization should distinguish "perfect" and "typical" use failure rates. Pregnancy probability at 1 year and over 10 years is expected to be higher in women having hysteroscopic as compared to laparoscopic sterilization. Copyright © 2014 Elsevier Inc. All rights reserved.
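A minimal sketch of the kind of Markov cohort calculation described above is given below; the annual failure probabilities are illustrative placeholders, not the published model parameters.

```python
# Minimal Markov cohort sketch: cumulative probability of pregnancy after sterilization.
# The annual failure probabilities are illustrative placeholders only.
def cumulative_pregnancy(annual_prob_by_year):
    """annual_prob_by_year: probability of pregnancy in each year, given none so far."""
    not_pregnant = 1.0
    cumulative = []
    for p in annual_prob_by_year:
        not_pregnant *= (1.0 - p)
        cumulative.append(1.0 - not_pregnant)
    return cumulative

# Hypothetical: higher first-year risk (e.g. failed coil placement), lower thereafter.
hysteroscopic = [0.050] + [0.005] * 9
laparoscopic_band = [0.007] + [0.002] * 9

for name, probs in [("hysteroscopic", hysteroscopic), ("laparoscopic band", laparoscopic_band)]:
    cum = cumulative_pregnancy(probs)
    print(f"{name}: 1-year {1000 * cum[0]:.0f}, 10-year {1000 * cum[-1]:.0f} per 1000 women")
```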
NASA Astrophysics Data System (ADS)
Sargent, Dusty; Chen, Chao-I.; Wang, Yuan-Fang
2010-02-01
The paper reports a fully-automated, cross-modality sensor data registration scheme between video and magnetic tracker data. This registration scheme is intended for use in computerized imaging systems to model the appearance, structure, and dimension of human anatomy in three dimensions (3D) from endoscopic videos, particularly colonoscopic videos, for cancer research and clinical practice. The proposed cross-modality calibration procedure operates as follows: before a colonoscopic procedure, the surgeon inserts a magnetic tracker into the working channel of the endoscope or otherwise fixes the tracker's position on the scope. The surgeon then maneuvers the scope-tracker assembly to view a checkerboard calibration pattern from a few different viewpoints for a few seconds. The calibration procedure is then completed, and the relative pose (translation and rotation) between the reference frames of the magnetic tracker and the scope is determined. During the colonoscopic procedure, the readings from the magnetic tracker are used to automatically deduce the pose (both position and orientation) of the scope's reference frame over time, without complicated image analysis. Knowing the scope movement over time then allows us to infer the 3D appearance and structure of the organs and tissues in the scene. While there are other well-established mechanisms for inferring the movement of the camera (scope) from images, they are often sensitive to mistakes in image analysis, error accumulation, and structure deformation. The proposed method, using a magnetic tracker to establish the camera motion parameters, thus provides a robust and efficient alternative for 3D model construction. Furthermore, the calibration procedure requires neither special training nor expensive calibration equipment (other than a camera calibration pattern, a checkerboard, which can be printed on any laser or inkjet printer).
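As a rough illustration of the vision side of such a calibration, the sketch below (Python with OpenCV) detects the checkerboard in a single frame and recovers the board-to-camera pose; the intrinsics, pattern size, and file name are placeholder assumptions rather than values from the paper. Pairing many such poses with simultaneous tracker readings is what constrains the fixed tracker-to-scope offset.

```python
# Minimal sketch: estimate the camera (scope) pose relative to a checkerboard with OpenCV.
# Assumes the camera intrinsics K and distortion coefficients are already known and that
# "frame.png" is a video frame showing the calibration pattern; all names are illustrative.
import cv2
import numpy as np

pattern_size = (9, 6)          # inner corners per row/column
square_size = 10.0             # mm, physical size of one checkerboard square

# 3D coordinates of the corners in the checkerboard's own reference frame.
obj_points = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
obj_points[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
obj_points *= square_size

K = np.array([[800.0, 0, 320.0], [0, 800.0, 240.0], [0, 0, 1.0]])  # assumed intrinsics
dist = np.zeros(5)

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
if img is None:
    raise SystemExit("frame.png not found")

found, corners = cv2.findChessboardCorners(img, pattern_size)
if found:
    corners = cv2.cornerSubPix(
        img, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01))
    ok, rvec, tvec = cv2.solvePnP(obj_points, corners, K, dist)
    # rvec/tvec give the board-to-camera transform; together with the concurrent magnetic
    # tracker reading, a set of such poses constrains the fixed tracker-to-camera offset.
    print("rotation (Rodrigues):", rvec.ravel(), "translation:", tvec.ravel())
```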
NASA Astrophysics Data System (ADS)
Plokhikh, A.; Vazhenin, N.; Soganova, G.
The wide application of electric propulsion (EP) as attitude control and orbit correction thrusters for a large class of satellites (including remote sensing and communications satellites) poses new problems for developers in meeting electromagnetic compatibility requirements on board these satellites. This is connected with the fact that any EP device is a source of broad-band interference emission that, as a rule, reaches the frequency ranges used by the on-board radio systems designed for remote sensing and communications. In this case, reliable joint operation must be ensured between the highly sensitive on-board radio receiving systems and remote sensing sensors on one hand and the EP on the other. In view of this, analysis of the influence of EP interference emission on the parameters and characteristics of modern remote sensing and communications systems is highly relevant. Procedures and results for calculating typical operating characteristics of radio systems in the presence of operating EP on board are discussed in the paper on the basis of a systematic approach; these characteristics include signal-to-noise ratio, range, data transmission rate, error probability, etc. The EP effect is taken into account through statistical analysis of the joint influence of the valid signal and the EP-produced interference on the quality indices of communication systems and of the sensor paths of remote sensing systems. Test data for the measured EP interference characteristics were used for the assessments. All necessary measurements were made by the authors on the basis of a test procedure they developed for assessing EP self-emission under ground conditions, which may serve as a basis for the certification of such measurements. Analysis based on the test data obtained and the calculation procedures developed by the authors for the EP influence on the characteristics of remote sensing and communications radio systems revealed a destructive effect resulting in a substantial decrease in maximum range and data transmission rate, as well as reduced sensitivity of the sensors of remote sensing systems. On the basis of this analysis, recommendations are given for the optimization of radio systems and the calibration of their sensors in the presence of electric propulsion on board the satellites.
Comparison of methods for determination of volatile organic compounds in drinking water.
Golfinopoulos, S K; Lekkas, T D; Nikolaou, A D
2001-10-01
A comparison of four methods, liquid-liquid extraction (LLE), direct aqueous injection (DAI), purge and trap (PAT) and headspace (HS), was carried out in this work for the determination of volatile organic compounds (VOCs), including trihalomethanes (THMs), in drinking water. The comparison was made primarily to show the advantages and disadvantages of each method, and specifically the different detection limits (DL) that can be obtained for a given type of analysis. LLE is applicable only for the determination of THM concentrations, whereas DAI, PAT and HS, each with different detection limits, are applicable to all VOCs, with PAT being the most sensitive. The sampling apparatus and procedure for all of these methods except PAT are very simple and easy, but possible disadvantages of LLE and DAI are their low sensitivity and, for LLE, the detection of THMs only.
Optical demodulation system for digitally encoded suspension array in fluoroimmunoassay
NASA Astrophysics Data System (ADS)
He, Qinghua; Li, Dongmei; He, Yonghong; Guan, Tian; Zhang, Yilong; Shen, Zhiyuan; Chen, Xuejing; Liu, Siyu; Lu, Bangrong; Ji, Yanhong
2017-09-01
A laser-induced breakdown spectroscopy and fluorescence spectroscopy-coupled optical system is reported to demodulate a digitally encoded suspension array in fluoroimmunoassay. It takes advantage of the plasma emissions of the assembled elemental materials to digitally decode the suspension array, providing more stable and accurate recognition of target biomolecules. By separating the decoding of the suspension array and the calculation of the adsorbed quantity of biomolecules into two independent channels, the cross talk between decoding and label signals found in traditional methods is avoided, which improves the accuracy of both processes and enables more sensitive quantitative detection of target biomolecules. We carried out multiplexed detection of several types of anti-IgG to verify the quantitative analysis performance of the system. A limit of detection of 1.48 × 10^-10 M was achieved, demonstrating the detection sensitivity of the optical demodulation system.
Affinity Proteomics for Fast, Sensitive, Quantitative Analysis of Proteins in Plasma.
O'Grady, John P; Meyer, Kevin W; Poe, Derrick N
2017-01-01
The improving efficacy of many biological therapeutics and identification of low-level biomarkers are driving the analytical proteomics community to deal with extremely high levels of sample complexity relative to their analytes. Many protein quantitation and biomarker validation procedures utilize an immunoaffinity enrichment step to purify the sample and maximize the sensitivity of the corresponding liquid chromatography tandem mass spectrometry measurements. In order to generate surrogate peptides with better mass spectrometric properties, protein enrichment is followed by a proteolytic cleavage step. This is often a time-consuming multistep process. Presented here is a workflow which enables rapid protein enrichment and proteolytic cleavage to be performed in a single, easy-to-use reactor. Using this strategy Klotho, a low-abundance biomarker found in plasma, can be accurately quantitated using a protocol that takes under 5 h from start to finish.
Reliability and validity of procedure-based assessments in otolaryngology training.
Awad, Zaid; Hayden, Lindsay; Robson, Andrew K; Muthuswamy, Keerthini; Tolley, Neil S
2015-06-01
To investigate the reliability and construct validity of procedure-based assessment (PBA) in assessing performance and progress in otolaryngology training. Retrospective database analysis using a national electronic database. We analyzed PBAs of otolaryngology trainees in North London from core trainees (CTs) to specialty trainees (STs). The tool contains six multi-item domains: consent, planning, preparation, exposure/closure, technique, and postoperative care, rated as "satisfactory" or "development required," in addition to an overall performance rating (pS) of 1 to 4. The individual domain score, overall calculated score (cS), and number of "development-required" items were calculated for each PBA. Receiver operating characteristic analysis helped determine sensitivity and specificity. A total of 3,152 PBAs from 46 otolaryngology trainees were analyzed. PBA reliability was high (Cronbach's α 0.899), and sensitivity approached 99%. cS correlated positively with pS and level in training (rs: +0.681 and +0.324, respectively). STs had higher cS and pS than CTs (93% ± 0.6 and 3.2 ± 0.03 vs. 71% ± 3.1 and 2.3 ± 0.08, respectively; P < .001). cS and pS increased from CT1 to ST8, showing construct validity (rs: +0.348 and +0.354, respectively; P < .001). The technical skill domain had the highest utilization (98% of PBAs) and was the best predictor of cS and pS (rs: +0.96 and +0.66, respectively). PBA is reliable and valid for assessing otolaryngology trainees' performance and progress at all levels. It is highly sensitive in identifying competent trainees. The tool is used in a formative and feedback capacity. The technical domain is the best predictor and should be given close attention. © 2014 The American Laryngological, Rhinological and Otological Society, Inc.
Ballarini, E; Bauer, S; Eberhardt, C; Beyer, C
2012-06-01
Transverse dispersion represents an important mixing process for the transport of contaminants in groundwater and constitutes an essential prerequisite for geochemical and biodegradation reactions. Within this context, this work describes the detailed numerical simulation of highly controlled laboratory experiments using uranine, bromide and oxygen-depleted water as conservative tracers for the quantification of transverse mixing in porous media. Synthetic numerical experiments reproducing an existing laboratory set-up of a quasi two-dimensional flow-through tank were performed to assess the applicability of an analytical solution of the 2D advection-dispersion equation for the estimation of transverse dispersivity as a fitting parameter. The fitted dispersivities were compared to the "true" values introduced in the numerical simulations, and the associated error could be precisely estimated. A sensitivity analysis was performed on the experimental set-up in order to evaluate the sensitivities of the measurements taken at the tank experiment to the individual hydraulic and transport parameters. From the results, an improved experimental set-up as well as a numerical evaluation procedure could be developed, which allow for a precise and reliable determination of dispersivities. The improved tank set-up was used for new laboratory experiments, performed at advective velocities of 4.9 m d^-1 and 10.5 m d^-1. Numerical evaluation of these experiments yielded a unique and reliable parameter set, which closely fits the measured tracer concentration data. For the porous medium with a grain size of 0.25-0.30 mm, the fitted longitudinal and transverse dispersivities were 3.49 × 10^-4 m and 1.48 × 10^-5 m, respectively. The procedures developed in this paper for the synthetic and rigorous design and evaluation of the experiments can be generalized and transferred to comparable applications. Copyright © 2012 Elsevier B.V. All rights reserved.
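The sketch below illustrates the fitting step: a transverse dispersivity is estimated from a transverse concentration profile using a generic steady-state erf-type solution for a strip source in uniform flow (spreading variance 2·alpha_T·x). The model, source width and synthetic data are illustrative assumptions and not necessarily the exact analytical solution or geometry used in the study.

```python
# Minimal sketch: fit a transverse dispersivity to a transverse concentration profile
# using a generic steady-state erf solution for a strip source of width w in uniform
# flow, where sigma^2 = 2 * alpha_T * x.
import numpy as np
from scipy.special import erf
from scipy.optimize import curve_fit

x = 0.5          # m, distance from the inlet to the measurement cross-section
w = 0.02         # m, width of the tracer source at the inlet
c0 = 1.0         # normalized source concentration

def profile(y, alpha_t):
    s = np.sqrt(alpha_t * x)
    return 0.5 * c0 * (erf((y + w / 2) / (2 * s)) - erf((y - w / 2) / (2 * s)))

# Synthetic "measured" data generated with alpha_T = 1.5e-5 m plus noise.
y_obs = np.linspace(-0.05, 0.05, 41)
rng = np.random.default_rng(0)
c_obs = profile(y_obs, 1.5e-5) + rng.normal(0, 0.01, y_obs.size)

popt, pcov = curve_fit(profile, y_obs, c_obs, p0=[1e-5], bounds=(1e-7, 1e-3))
print(f"fitted alpha_T = {popt[0]:.2e} m (true value used here: 1.5e-05 m)")
```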
NASA Astrophysics Data System (ADS)
Mehrpooya, Mehdi; Ansarinasab, Hojat; Moftakhari Sharifzadeh, Mohammad Mehdi; Rosen, Marc A.
2017-10-01
An integrated power plant with a net electrical power output of 3.71 × 10^5 kW is developed and investigated. The electrical efficiency of the process is found to be 60.1%. The process includes three main sub-systems: a molten carbonate fuel cell system, a heat recovery section and a cryogenic carbon dioxide capture process. Conventional and advanced exergoeconomic methods are used for analyzing the process. Advanced exergoeconomic analysis is a comprehensive evaluation tool which combines an exergetic approach with economic analysis procedures. With this method, investment and exergy destruction costs of the process components are divided into endogenous/exogenous and avoidable/unavoidable parts. Results of the conventional exergoeconomic analyses demonstrate that the combustion chamber has the largest exergy destruction rate (182 MW) and cost rate (13,100 $/h). Also, the total process cost rate can be decreased by reducing the cost rate of the fuel cell and improving the efficiency of the combustion chamber and heat recovery steam generator. Based on the total avoidable endogenous cost rate, the priority for modification is the heat recovery steam generator, a compressor and a turbine of the power plant, in rank order. A sensitivity analysis is done to investigate the exergoeconomic factor parameters through variation of the effective parameters.
Lubowitz, James H; Appleby, David
2011-10-01
The purpose of this study was to determine the cost-effectiveness of knee arthroscopy and anterior cruciate ligament (ACL) reconstruction. Retrospective analysis of prospectively collected data from a single-surgeon, institutional review board-approved outcomes registry included 2 cohorts: surgically treated knee arthroscopy and ACL reconstruction patients. Our outcome measure is cost-effectiveness (cost of a quality-adjusted life-year [QALY]). The QALY is calculated by multiplying the difference in health-related quality of life, before and after treatment, by life expectancy. Health-related quality of life is measured by use of the Quality of Well-Being scale, which has been validated for cost-effectiveness analysis. Costs are facility charges adjusted by the facility cost-to-charge ratio plus the surgeon fee. Sensitivity analyses are performed to determine the effect of variations in costs or outcomes. There were 93 knee arthroscopy and 35 ACL reconstruction patients included at a mean follow-up of 2.1 years. Cost per QALY was $5,783 for arthroscopy and $10,326 for ACL reconstruction (2009 US dollars). Sensitivity analysis shows that our results are robust (relatively insensitive) to variations in costs or outcomes. Knee arthroscopy and knee ACL reconstruction are very cost-effective. Copyright © 2011 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
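A minimal sketch of the cost-per-QALY arithmetic described above follows; the costs, quality-of-life scores and life expectancies are illustrative placeholders, not the registry values.

```python
# Minimal sketch of a cost-per-QALY calculation: QALYs gained are the change in a 0-1
# quality-of-life score multiplied by remaining life expectancy, and cost-effectiveness
# is cost divided by QALYs gained. All numbers are illustrative placeholders.
def cost_per_qaly(cost, qol_before, qol_after, life_expectancy_years):
    qalys_gained = (qol_after - qol_before) * life_expectancy_years
    return cost / qalys_gained

print(f"arthroscopy: ${cost_per_qaly(3500.0, 0.70, 0.72, 35):,.0f} per QALY")
print(f"ACL reconstruction: ${cost_per_qaly(9000.0, 0.68, 0.71, 40):,.0f} per QALY")
```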
A preliminary study applying decision analysis to the treatment of caries in primary teeth.
Tamošiūnas, Vytautas; Kay, Elizabeth; Craven, Rebecca
2013-01-01
To determine an optimal treatment strategy for carious deciduous teeth. Manchester Dental Hospital. Decision analysis. The likelihoods of each of the sequelae of caries in deciduous teeth were determined from the literature. The utility of the outcomes from non-treatment and treatment was then measured in 100 parents of children with caries, using a visual analogue scale. Decision analysis was performed which weighted the value of each potential outcome by the probability of its occurrence. A decision tree "fold-back" and sensitivity analysis then determined which treatment strategies, under which circumstances, offered the maximum expected utilities. The decision to leave a carious deciduous tooth unrestored attracted a maximum utility of 76.65, and the overall expected utility for the decision "restore" was 73.27. The decisions to restore or not to restore carious deciduous teeth are therefore of almost equal value. The decision is, however, highly sensitive to the utility value assigned by the patient to the advent of pain. There is no clear advantage to be gained by restoring deciduous teeth if patients' evaluations of outcomes are taken into account. Avoidance of pain and avoidance of procedures which are viewed as unpleasant by parents should be key determinants of clinical decision making about carious deciduous teeth.
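The fold-back step can be illustrated with a small sketch: each branch's utility is weighted by its probability and summed, and the strategy with the higher expected utility is preferred. The probabilities and utilities below are illustrative placeholders, not the study's elicited values.

```python
# Minimal sketch of a decision-tree "fold-back": expected utility of each strategy is the
# probability-weighted sum of its outcome utilities. All values are illustrative only.
def expected_utility(branches):
    """branches: list of (probability, utility) pairs whose probabilities sum to 1."""
    return sum(p * u for p, u in branches)

leave_unrestored = [
    (0.60, 90.0),   # tooth exfoliates without symptoms
    (0.25, 70.0),   # reversible pain, managed without treatment
    (0.15, 40.0),   # pain/abscess requiring extraction
]
restore = [
    (0.80, 85.0),   # restoration succeeds, no further problems
    (0.20, 50.0),   # restoration fails, further treatment needed
]

for name, branches in [("leave unrestored", leave_unrestored), ("restore", restore)]:
    print(f"{name}: expected utility = {expected_utility(branches):.2f}")
```

A one-way sensitivity analysis in this framing simply re-runs the fold-back while sweeping a single utility (for example, the value assigned to pain) over a plausible range.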
Gu, Binghe; Meldrum, Brian; McCabe, Terry; Phillips, Scott
2012-01-01
A theoretical treatment was developed and validated that relates analyte concentration and mass sensitivities to injection volume, retention factor, particle diameter, column length, column inner diameter and detection wavelength in liquid chromatography, and sample volume and extracted volume in solid-phase extraction (SPE). The principles were applied to improve sensitivity for trace analysis of clopyralid in drinking water. It was demonstrated that a concentration limit of detection of 0.02 ppb (μg/L) for clopyralid could be achieved with the use of simple UV detection and 100 mL of a spiked drinking water sample. This enabled reliable quantitation of clopyralid at the targeted 0.1 ppb level. Using a buffered solution as the elution solvent (potassium acetate buffer, pH 4.5, containing 10% of methanol) in the SPE procedures was found superior to using 100% methanol, as it provided better extraction recovery (70-90%) and precision (5% for a concentration at 0.1 ppb level). In addition, the eluted sample was in a weaker solvent than the mobile phase, permitting the direct injection of the extracted sample, which enabled a faster cycle time of the overall analysis. Excluding the preparation of calibration standards, the analysis of a single sample, including acidification, extraction, elution and LC run, could be completed in 1 h. The method was used successfully for the determination of clopyralid in over 200 clopyralid monoethanolamine-fortified drinking water samples, which were treated with various water treatment resins. Copyright © 2012 WILEY‐VCH Verlag GmbH & Co. KGaA, Weinheim.
Tracking Matrix Effects in the Analysis of DNA Adducts of Polycyclic Aromatic Hydrocarbons
Klaene, Joshua J.; Flarakos, Caroline; Glick, James; Barret, Jennifer T.; Zarbl, Helmut; Vouros, Paul
2015-01-01
LC-MS using electrospray ionization is currently the method of choice in bio-organic analysis, covering a wide range of applications in a broad spectrum of biological media. The technique is noted for its high sensitivity, but one major limitation that hinders achievement of its optimal sensitivity is signal suppression due to matrix interferences introduced by the presence of co-extracted compounds during the sample preparation procedure. The analysis of DNA adducts of common environmental carcinogens is particularly sensitive to such matrix effects because sample preparation is a multistep process which involves "contamination" of the sample through the addition of enzymes and other reagents for digestion of the DNA in order to isolate the analyte(s). This problem is further exacerbated by the need to reach low levels of quantitation (LOQ at the ppb level) while also working with limited (2-5 μg) quantities of sample. We report here on the systematic investigation of the ion signal suppression contributed by each individual step involved in the sample preparation associated with the analysis of DNA adducts of polycyclic aromatic hydrocarbons (PAH), using as a model analyte dG-BaP, the deoxyguanosine adduct of benzo[a]pyrene (BaP). The individual matrix contribution of each one of these sources to analyte signal was systematically addressed, as were any interactive effects. The information was used to develop a validated analytical protocol for the target biomarker at levels typically encountered in vivo using as little as 2 μg of DNA, and applied to a dose response study using a metabolically competent cell line. PMID:26607319
Rapid and sensitive PCR-dipstick DNA chromatography for multiplex analysis of the oral microbiota.
Tian, Lingyang; Sato, Takuichi; Niwa, Kousuke; Kawase, Mitsuo; Tanner, Anne C R; Takahashi, Nobuhiro
2014-01-01
A complex of species has been associated with dental caries under the ecological hypothesis. This study aimed to develop a rapid, sensitive PCR-dipstick DNA chromatography assay that could be read by eye for multiplex and semiquantitative analysis of plaque bacteria. Parallel oligonucleotides were immobilized on a dipstick strip for multiplex analysis of target DNA sequences of the caries-associated bacteria Streptococcus mutans, Streptococcus sobrinus, Scardovia wiggsiae, Actinomyces species, and Veillonella parvula. Streptavidin-coated blue-colored latex microspheres were used to generate the signal. Target DNA amplicons with an oligonucleotide-tagged terminus and a biotinylated terminus were coupled with latex beads through a streptavidin-biotin interaction and then hybridized with complementary oligonucleotides on the strip. The accumulation of captured latex beads on the test and control lines produced blue bands, enabling visual detection with the naked eye. The PCR-dipstick DNA chromatography detected quantities as low as 100 pg of DNA amplicons and demonstrated 10- to 1000-fold higher sensitivity than PCR-agarose gel electrophoresis, depending on the target bacterial species. Semiquantification of bacteria was performed by obtaining a series of chromatograms using serial 10-fold dilutions of PCR-amplified DNA extracted from dental plaque samples. The assay time was less than 3 h. The semiquantification procedure revealed the relative amounts of each test species in dental plaque samples, indicating that this disposable device has great potential in analysis of microbial composition in the oral cavity and intestinal tract, as well as in point-of-care diagnosis of microbiota-associated diseases.
Pricing strategy for aesthetic surgery: economic analysis of a resident clinic's change in fees.
Krieger, L M; Shaw, W W
1999-02-01
The laws of microeconomics explain how prices affect consumer purchasing decisions and thus overall revenues and profits. These principles can easily be applied to the behavior of aesthetic plastic surgery patients. The UCLA Division of Plastic Surgery resident aesthetics clinic recently offered a radical price change for its services. The effects of this change on demand for services and on revenue were tracked. Economic analysis was applied to see if this price change resulted in the maximization of total revenues, or if additional price changes could further optimize them. Economic analysis of pricing involves several steps. The first step is to assess demand. The number of procedures performed by a given practice at different price levels can be plotted to create a demand curve. From this curve, the price sensitivity of consumers can be calculated (price elasticity of demand). This information can then be used to determine the pricing level that creates demand for the exact number of procedures that yield optimal revenues. In economic parlance, revenues are maximized by pricing services such that elasticity is equal to 1 (the point of unit elasticity). At the UCLA resident clinic, average total fees per procedure were reduced by 40 percent. This resulted in a 250-percent increase in procedures performed for representative 4-month periods before and after the price change. Net revenues increased by 52 percent. Economic analysis showed that the price elasticity of demand before the price change was 6.2; after the price change it was 1. We conclude that the magnitude of the price change resulted in a fee schedule that yielded the highest possible revenues from the resident clinic. These results show that changes in price do affect total revenue and that the nature of these effects can be understood, predicted, and maximized using the tools of microeconomics.
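A minimal sketch of the elasticity calculation follows, using the arc (midpoint) formula between two observed price/volume points; the fee and caseload numbers are illustrative, not the clinic's actual figures.

```python
# Minimal sketch: arc (midpoint) price elasticity of demand from two observed
# price/volume points, and the resulting revenue change. Numbers are illustrative.
def arc_elasticity(p1, q1, p2, q2):
    dq = (q2 - q1) / ((q1 + q2) / 2)
    dp = (p2 - p1) / ((p1 + p2) / 2)
    return dq / dp

p1, q1 = 1000.0, 20     # average fee ($) and procedures per period before the change
p2, q2 = 600.0, 50      # after a 40% fee reduction

e = arc_elasticity(p1, q1, p2, q2)
print(f"elasticity = {e:.2f}")                               # |e| > 1: demand is elastic
print(f"revenue before = {p1 * q1:,.0f}, after = {p2 * q2:,.0f}")
# Revenue is maximized where |elasticity| = 1; while |e| > 1, cutting price raises revenue.
```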
NASA Technical Reports Server (NTRS)
Fetterman, Timothy L.; Noor, Ahmed K.
1987-01-01
Computational procedures are presented for evaluating the sensitivity derivatives of the vibration frequencies and eigenmodes of framed structures. Both a displacement and a mixed formulation are used. The two key elements of the computational procedure are: (a) Use of dynamic reduction techniques to substantially reduce the number of degrees of freedom; and (b) Application of iterative techniques to improve the accuracy of the derivatives of the eigenmodes. The two reduction techniques considered are the static condensation and a generalized dynamic reduction technique. Error norms are introduced to assess the accuracy of the eigenvalue and eigenvector derivatives obtained by the reduction techniques. The effectiveness of the methods presented is demonstrated by three numerical examples.
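One standard building block behind such procedures is the eigenvalue sensitivity for the generalized problem K·phi = lambda·M·phi with mass-normalized modes, d(lambda)/dp = phi^T (dK/dp - lambda·dM/dp) phi. The sketch below applies it to a small spring-mass system (an illustrative stand-in for a framed structure, not an example from the paper) and checks it against a finite difference.

```python
# Minimal sketch: analytical sensitivity of a vibration eigenvalue for a 2-DOF
# spring-mass system, checked against a finite difference. For K(p) phi = lambda M phi
# with M-normalized modes, d(lambda)/dp = phi^T (dK/dp - lambda dM/dp) phi.
import numpy as np
from scipy.linalg import eigh

m1, m2 = 1.0, 2.0
def K(k):   # stiffness matrix as a function of the design variable k (spring 1 stiffness)
    return np.array([[k + 5.0, -5.0], [-5.0, 5.0]])
M = np.diag([m1, m2])
dK_dk = np.array([[1.0, 0.0], [0.0, 0.0]])
dM_dk = np.zeros((2, 2))

k0 = 10.0
lam, phi = eigh(K(k0), M)             # eigh returns M-orthonormal eigenvectors
i = 0                                  # first mode
dlam_analytic = phi[:, i] @ (dK_dk - lam[i] * dM_dk) @ phi[:, i]

h = 1e-6                               # finite-difference check
lam_p, _ = eigh(K(k0 + h), M)
dlam_fd = (lam_p[i] - lam[i]) / h
print(f"analytical d(lambda)/dk = {dlam_analytic:.6f}, finite difference = {dlam_fd:.6f}")
```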
A concept analysis of professional commitment in nursing.
García-Moyano, Loreto; Altisent, Rogelio; Pellicer-García, Begoña; Guerrero-Portillo, Sandra; Arrazola-Alberdi, Oihana; Delgado-Marroquín, María Teresa
2017-01-01
The concept of professional commitment is being widely studied at present. However, although it is considered an indicator for the most human part of nursing care, there is no clear definition of it, and different descriptors are used indiscriminately to reference it. The aim of this study is to clarify the concept of professional commitment in nursing through Rodgers' evolutionary concept analysis process. Systematic search using English and Spanish descriptors and concept analysis. Studies published between 2009 and June 2015, a front-to-back analysis of the Nursing Ethics journal, and a manual check of articles cited in studies related to the Nijmegen Professionalism Scale. The procedure of concept analysis developed by Rodgers was used. Ethical considerations: Although the topic was not labeled as sensitive and subject to ethical approval, the Ethical Committee of Clinical Research of Aragon (CEICA) approved the study on 18 March 2015, and careful procedures were followed according to the ethics expressed in the Declaration of Helsinki. A total of 17 published studies were included. A clear definition of the concept was made, and surrogate terms, concept dimensions, differential factors related to the concept, sociocultural variations and consequences for nursing practice were identified. There is a need for continuous advancement in the development of the concept, specific actions to encourage this and the improvement of evaluation methods for its study.
An efficient early phase 2 procedure to screen medications for efficacy in smoking cessation.
Perkins, Kenneth A; Lerman, Caryn
2014-01-01
Initial screening of new medications for potential efficacy (i.e., Food and Drug Administration (FDA) early phase 2), such as in aiding smoking cessation, should be efficient in identifying which drugs do, or do not, warrant more extensive (and expensive) clinical testing. This focused review outlines our research on development, evaluation, and validation of an efficient crossover procedure for sensitivity in detecting medication efficacy for smoking cessation. First-line FDA-approved medications of nicotine patch, varenicline, and bupropion were tested as model drugs, in three separate placebo-controlled studies. We also tested specificity of our procedure in identifying a drug that lacks efficacy, using modafinil. This crossover procedure showed sensitivity (increased days of abstinence) during week-long "practice" quit attempts with each of the active cessation medications (positive controls) versus placebo, but not with modafinil (negative control) versus placebo, as hypothesized. Sensitivity to medication efficacy signal was observed only in smokers high in intrinsic quit motivation (i.e., already preparing to quit soon) and not smokers low in intrinsic quit motivation, even if monetarily reinforced for abstinence (i.e., given extrinsic motivation). A crossover procedure requiring less time and fewer subjects than formal trials may provide an efficient strategy for a go/no-go decision whether to advance to subsequent phase 2 randomized clinical trials with a novel drug. Future research is needed to replicate our results and evaluate this procedure with novel compounds, identify factors that may limit its utility, and evaluate its applicability to testing efficacy of compounds for treating other forms of addiction.
A Review of Current Methods for Analysis of Mycotoxins in Herbal Medicines
Zhang, Lei; Dou, Xiao-Wen; Zhang, Cheng; Logrieco, Antonio F.; Yang, Mei-Hua
2018-01-01
The presence of mycotoxins in herbal medicines is an established problem throughout the entire world. The sensitive and accurate analysis of mycotoxins in complicated matrices (e.g., herbs) typically involves challenging sample pretreatment procedures and an efficient detection instrument. However, although numerous reviews have been published regarding the occurrence of mycotoxins in herbal medicines, few of them provide a detailed summary of the related analytical methods for mycotoxin determination. This review focuses on analytical techniques, including sampling, extraction, cleanup, and detection, for mycotoxin determination in herbal medicines established within the past ten years. Dedicated sections of this article address the significant developments in sample preparation and highlight the importance of this procedure in the analytical technology. This review also summarizes conventional chromatographic techniques for mycotoxin qualification or quantitation, as well as recent studies regarding the development and application of screening assays such as enzyme-linked immunosorbent assays, lateral flow immunoassays, aptamer-based lateral flow assays, and cytometric bead arrays. The present work provides good insight into the advanced research that has been done and closes with an indication of future demand for the emerging technologies. PMID:29393905
A procedure for Alcian blue staining of mucins on polyvinylidene difluoride membranes.
Dong, Weijie; Matsuno, Yu-ki; Kameyama, Akihiko
2012-10-16
The isolation and characterization of mucins are critically important for obtaining insight into the molecular pathology of various diseases, including cancers and cystic fibrosis. Recently, we developed a novel membrane electrophoretic method, supported molecular matrix electrophoresis (SMME), which separates mucins on a polyvinylidene difluoride (PVDF) membrane impregnated with a hydrophilic polymer. Alcian blue staining is widely used to visualize mucopolysaccharides and acidic mucins on both blotted membranes and SMME membranes; however, this method cannot be used to stain mucins with a low acidic glycan content. Meanwhile, periodic acid-Schiff staining can selectively visualize glycoproteins, including mucins, but is incompatible with glycan analysis, which is indispensable for mucin characterizations. Here we describe a novel staining method, designated succinylation-Alcian blue staining, for visualizing mucins on a PVDF membrane. This method can visualize mucins regardless of the acidic residue content and shows a sensitivity 2-fold higher than that of Pro-Q Emerald 488, a fluorescent periodate Schiff-base stain. Furthermore, we demonstrate the compatibility of this novel staining procedure with glycan analysis using porcine gastric mucin as a model mucin.
Alshreef, Abualbishr; Wailoo, Allan J; Brown, Steven R; Tiernan, James P; Watson, Angus J M; Biggs, Katie; Bradburn, Mike; Hind, Daniel
2017-09-01
Haemorrhoids are a common condition, with nearly 30,000 procedures carried out in England in 2014/15, and result in a significant quality-of-life burden to patients and a financial burden to the healthcare system. This study examined the cost effectiveness of haemorrhoidal artery ligation (HAL) compared with rubber band ligation (RBL) in the treatment of grade II-III haemorrhoids. This analysis used data from the HubBLe study, a multicentre, open-label, parallel group, randomised controlled trial conducted in 17 acute UK hospitals between September 2012 and August 2015. A full economic evaluation, including long-term cost effectiveness, was conducted from the UK National Health Service (NHS) perspective. Main outcomes included healthcare costs, quality-adjusted life-years (QALYs) and recurrence. Cost-effectiveness results were presented in terms of incremental cost per QALY gained and cost per recurrence avoided. Extrapolation analysis for 3 years beyond the trial follow-up, two subgroup analyses (by grade of haemorrhoids and recurrence following RBL at baseline), and various sensitivity analyses were undertaken. In the primary base-case within-trial analysis, the incremental total mean cost per patient for HAL compared with RBL was £1027 (95% confidence interval [CI] £782-£1272, p < 0.001). The incremental QALYs were 0.01 QALYs (95% CI -0.02 to 0.04, p = 0.49). This generated an incremental cost-effectiveness ratio (ICER) of £104,427 per QALY. In the extrapolation analysis, the estimated probabilistic ICER was £21,798 per QALY. Results from all subgroup and sensitivity analyses did not materially change the base-case result. Under all assessed scenarios, the HAL procedure was not cost effective compared with RBL for the treatment of grade II-III haemorrhoids at a cost-effectiveness threshold of £20,000 per QALY; therefore, economically, its use in the NHS should be questioned.
Code of Federal Regulations, 2010 CFR
2010-10-01
... unclassified information. MD 4300.1, entitled Information Technology Systems Security, and the DHS Sensitive Systems Handbook, prescribe the policies and procedures on security for Information Technology resources... ACQUISITION REGULATION (HSAR) GENERAL ADMINISTRATIVE MATTERS Safeguarding Classified and Sensitive Information...
Approach for Input Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives
NASA Technical Reports Server (NTRS)
Putko, Michele M.; Taylor, Arthur C., III; Newman, Perry A.; Green, Lawrence L.
2002-01-01
An implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for quasi 3-D Euler CFD code is presented. Given uncertainties in statistically independent, random, normally distributed input variables, first- and second-order statistical moment procedures are performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, these moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
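A minimal sketch of first-order statistical moment propagation follows, with a simple analytic function standing in for the CFD output and a Monte Carlo comparison; in the paper the required derivatives come from the flow solver's sensitivity analysis, and second-order terms are also used.

```python
# Minimal sketch of first-order statistical moment propagation compared with Monte Carlo.
# A simple analytic function stands in for the CFD output; the inputs are independent,
# normally distributed random variables with the given means and standard deviations.
import numpy as np

def f(x0, x1):                   # surrogate "CFD output" of two uncertain inputs
    return x0**2 * np.sin(x1) + 3.0 * x1

mu = np.array([1.5, 0.8])        # input means
sigma = np.array([0.05, 0.02])   # input standard deviations

# First-order (linear) propagation: sigma_f^2 ~= sum_i (df/dx_i)^2 * sigma_i^2
grad = np.array([2 * mu[0] * np.sin(mu[1]), mu[0]**2 * np.cos(mu[1]) + 3.0])
mean_lin = f(mu[0], mu[1])
std_lin = np.sqrt(np.sum((grad * sigma)**2))

# Monte Carlo reference
rng = np.random.default_rng(1)
samples = rng.normal(mu, sigma, size=(200000, 2))
vals = f(samples[:, 0], samples[:, 1])

print(f"linear:      mean = {mean_lin:.4f}, std = {std_lin:.4f}")
print(f"Monte Carlo: mean = {vals.mean():.4f}, std = {vals.std():.4f}")
```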
Pimentel, Mark; Purdy, Chris; Magar, Raf; Rezaie, Ali
2016-07-01
A high incidence of irritable bowel syndrome (IBS) is associated with significant medical costs. Diarrhea-predominant IBS (IBS-D) is diagnosed on the basis of clinical presentation and diagnostic test results and procedures that exclude other conditions. This study was conducted to estimate the potential cost savings of a novel IBS diagnostic blood panel that tests for the presence of antibodies to cytolethal distending toxin B and anti-vinculin associated with IBS-D. A cost-minimization (CM) decision tree model was used to compare the costs of a novel IBS diagnostic blood panel pathway versus an exclusionary diagnostic pathway (ie, standard of care). The probability that patients proceed to treatment was modeled as a function of sensitivity, specificity, and likelihood ratios of the individual biomarker tests. One-way sensitivity analyses were performed for key variables, and a break-even analysis was performed for the pretest probability of IBS-D. Budget impact analysis of the CM model was extrapolated to a health plan with 1 million covered lives. The CM model (base-case) predicted $509 cost savings for the novel IBS diagnostic blood panel versus the exclusionary diagnostic pathway because of the avoidance of downstream testing (eg, colonoscopy, computed tomography scans). Sensitivity analysis indicated that an increase in both positive likelihood ratios modestly increased cost savings. Break-even analysis estimated that the pretest probability of disease would be 0.451 to attain cost neutrality. The budget impact analysis predicted a cost savings of $3,634,006 ($0.30 per member per month). The novel IBS diagnostic blood panel may yield significant cost savings by allowing patients to proceed to treatment earlier, thereby avoiding unnecessary testing. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
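The role of the likelihood ratios can be illustrated with a short sketch that converts a test's sensitivity and specificity into LR+ and LR- and then into a post-test probability for a given pretest probability; the performance figures below are illustrative placeholders, not those of the biomarker panel.

```python
# Minimal sketch: positive/negative likelihood ratios from sensitivity and specificity,
# and the post-test probability for a given pretest probability. Values are illustrative.
def likelihood_ratios(sensitivity, specificity):
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg

def post_test_probability(pretest, lr):
    odds = pretest / (1.0 - pretest)
    post_odds = odds * lr
    return post_odds / (1.0 + post_odds)

sens, spec = 0.44, 0.92          # hypothetical biomarker performance
pretest = 0.45                   # hypothetical pretest probability of IBS-D
lr_pos, lr_neg = likelihood_ratios(sens, spec)
print(f"LR+ = {lr_pos:.2f}, LR- = {lr_neg:.2f}")
print(f"post-test probability after a positive result: {post_test_probability(pretest, lr_pos):.2f}")
```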
MRI-Guided Focused Ultrasound Surgery for Uterine Fibroid Treatment: A Cost-Effectiveness Analysis
Kong, Chung Y.; Omer, Zehra B.; Pandharipande, Pari V.; Swan, J. Shannon; Srouji, Serene; Gazelle, G. Scott; Fennessy, Fiona M.
2015-01-01
Objective To evaluate the cost-effectiveness of a treatment strategy for symptomatic uterine fibroids that employs Magnetic Resonance guided Focused Ultrasound (MRgFUS) as a first-line therapy relative to uterine artery embolization (UAE) or abdominal hysterectomy (HYST). Materials and Methods We developed a decision-analytic model to compare the cost-effectiveness of three treatment strategies: MRgFUS, UAE and HYST. Short and long-term utilities specific to each treatment were incorporated, allowing us to account for differences in quality of life across the strategies considered. Lifetime costs and quality-adjusted life-years (QALYs) were calculated for each strategy. An incremental cost-effectiveness analysis was performed, using a societal willingness-to-pay (WTP) threshold of $50,000 per QALY to designate a strategy as cost-effective. Sensitivity analysis was performed on all key model parameters. Results In the base-case analysis, in which treatment for symptomatic fibroids started at age 40, UAE was the most effective and expensive strategy (22.81 QALYs, $22,164), followed by MRgFUS (22.80 QALYs, $19,796) and HYST (22.60 QALYs, $13,291). MRgFUS was cost-effective relative to HYST, with an associated incremental cost-effectiveness ratio (ICER) of $33,110/QALY. MRgFUS was also cost-effective relative to UAE – the ICER of UAE relative to MRgFUS ($270,057) far exceeded the WTP threshold of $50,000/QALY. In sensitivity analysis, results were robust to changes in most parameters, but were sensitive to changes in probabilities of recurrence and symptom relief following certain procedures, and quality of life associated with symptomatic fibroids. Conclusions MRgFUS is cost-effective relative to both UAE and hysterectomy for the treatment of women with symptomatic fibroids. PMID:25055272
Aydin, Ümit; Vorwerk, Johannes; Küpper, Philipp; Heers, Marcel; Kugel, Harald; Galka, Andreas; Hamid, Laith; Wellmer, Jörg; Kellinghaus, Christoph; Rampp, Stefan; Wolters, Carsten Hermann
2014-01-01
To increase the reliability for the non-invasive determination of the irritative zone in presurgical epilepsy diagnosis, we introduce here a new experimental and methodological source analysis pipeline that combines the complementary information in EEG and MEG, and apply it to data from a patient, suffering from refractory focal epilepsy. Skull conductivity parameters in a six compartment finite element head model with brain anisotropy, constructed from individual MRI data, are estimated in a calibration procedure using somatosensory evoked potential (SEP) and field (SEF) data. These data are measured in a single run before acquisition of further runs of spontaneous epileptic activity. Our results show that even for single interictal spikes, volume conduction effects dominate over noise and need to be taken into account for accurate source analysis. While cerebrospinal fluid and brain anisotropy influence both modalities, only EEG is sensitive to skull conductivity and conductivity calibration significantly reduces the difference in especially depth localization of both modalities, emphasizing its importance for combining EEG and MEG source analysis. On the other hand, localization differences which are due to the distinct sensitivity profiles of EEG and MEG persist. In case of a moderate error in skull conductivity, combined source analysis results can still profit from the different sensitivity profiles of EEG and MEG to accurately determine location, orientation and strength of the underlying sources. On the other side, significant errors in skull modeling are reflected in EEG reconstruction errors and could reduce the goodness of fit to combined datasets. For combined EEG and MEG source analysis, we therefore recommend calibrating skull conductivity using additionally acquired SEP/SEF data. PMID:24671208
Sensitivity Analysis and Optimization of Aerodynamic Configurations with Blend Surfaces
NASA Technical Reports Server (NTRS)
Thomas, A. M.; Tiwari, S. N.
1997-01-01
A novel (geometrical) parametrization procedure using solutions to a suitably chosen fourth order partial differential equation is used to define a class of airplane configurations. Inclusive in this definition are surface grids, volume grids, and grid sensitivity. The general airplane configuration has wing, fuselage, vertical tail and horizontal tail. The design variables are incorporated into the boundary conditions, and the solution is expressed as a Fourier series. The fuselage has circular cross section, and the radius is an algebraic function of four design parameters and an independent computational variable. Volume grids are obtained through an application of the Control Point Form method. A graphic interface software is developed which dynamically changes the surface of the airplane configuration with the change in input design variable. The software is made user friendly and is targeted towards the initial conceptual development of any aerodynamic configurations. Grid sensitivity with respect to surface design parameters and aerodynamic sensitivity coefficients based on potential flow is obtained using an Automatic Differentiation precompiler software tool ADIFOR. Aerodynamic shape optimization of the complete aircraft with twenty four design variables is performed. Unstructured and structured volume grids and Euler solutions are obtained with standard software to demonstrate the feasibility of the new surface definition.
Sampson, Maureen L; Gounden, Verena; van Deventer, Hendrik E; Remaley, Alan T
2016-02-01
The main drawback of the periodic analysis of quality control (QC) material is that test performance is not monitored in time periods between QC analyses, potentially leading to the reporting of faulty test results. The objective of this study was to develop a patient based QC procedure for the more timely detection of test errors. Results from a Chem-14 panel measured on the Beckman LX20 analyzer were used to develop the model. Each test result was predicted from the other 13 members of the panel by multiple regression, which resulted in correlation coefficients between the predicted and measured result of >0.7 for 8 of the 14 tests. A logistic regression model, which utilized the measured test result, the predicted test result, the day of the week and time of day, was then developed for predicting test errors. The output of the logistic regression was tallied by a daily CUSUM approach and used to predict test errors, with a fixed specificity of 90%. The mean average run length (ARL) before error detection by CUSUM-Logistic Regression (CSLR) was 20 with a mean sensitivity of 97%, which was considerably shorter than the mean ARL of 53 (sensitivity 87.5%) for a simple prediction model that only used the measured result for error detection. A CUSUM-Logistic Regression analysis of patient laboratory data can be an effective approach for the rapid and sensitive detection of clinical laboratory errors. Published by Elsevier Inc.
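A simplified stand-in for the approach is sketched below: a one-sided tabular CUSUM is run over daily error scores (for example, logistic-regression error probabilities) and flags when the cumulative excess over a reference value crosses a decision limit. The scores and thresholds are illustrative; the published model additionally uses the predicted result, day of week, and time of day as inputs.

```python
# Minimal sketch: one-sided CUSUM over daily error scores, flagging when the cumulative
# excess over a reference value k crosses a decision limit h. Values are illustrative.
def cusum(scores, k=0.1, h=1.0):
    s = 0.0
    alarms = []
    for day, score in enumerate(scores):
        s = max(0.0, s + (score - k))
        if s > h:
            alarms.append(day)
            s = 0.0          # reset after an alarm
    return alarms

daily_error_prob = [0.05, 0.08, 0.06, 0.07, 0.30, 0.45, 0.50, 0.40, 0.06, 0.05]
print("alarm on days:", cusum(daily_error_prob))
```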
Identification of differentially expressed genes and false discovery rate in microarray studies.
Gusnanto, Arief; Calza, Stefano; Pawitan, Yudi
2007-04-01
To highlight the development in microarray data analysis for the identification of differentially expressed genes, particularly via control of false discovery rate. The emergence of high-throughput technology such as microarrays raises two fundamental statistical issues: multiplicity and sensitivity. We focus on the biological problem of identifying differentially expressed genes. First, multiplicity arises due to testing tens of thousands of hypotheses, rendering the standard P value meaningless. Second, known optimal single-test procedures such as the t-test perform poorly in the context of highly multiple tests. The standard approach of dealing with multiplicity is too conservative in the microarray context. The false discovery rate concept is fast becoming the key statistical assessment tool replacing the P value. We review the false discovery rate approach and argue that it is more sensible for microarray data. We also discuss some methods to take into account additional information from the microarrays to improve the false discovery rate. There is growing consensus on how to analyse microarray data using the false discovery rate framework in place of the classical P value. Further research is needed on the preprocessing of the raw data, such as the normalization step and filtering, and on finding the most sensitive test procedure.
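One widely used procedure in this framework is the Benjamini-Hochberg step-up rule; the sketch below applies it to a handful of illustrative p-values.

```python
# Minimal sketch: the Benjamini-Hochberg step-up procedure, one standard way to control
# the false discovery rate across many hypothesis tests. P-values are illustrative.
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]
    below = ranked <= (np.arange(1, m + 1) / m) * q
    if not below.any():
        return np.zeros(m, dtype=bool)
    cutoff = np.max(np.where(below)[0])          # largest k with p_(k) <= (k/m) * q
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:cutoff + 1]] = True
    return rejected

pvals = [0.0002, 0.009, 0.013, 0.040, 0.051, 0.22, 0.48, 0.73]
print("declared differentially expressed:", benjamini_hochberg(pvals, q=0.05))
```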
Hu, Xiaofeng; Hu, Rui; Zhang, Zhaowei; Li, Peiwu; Zhang, Qi; Wang, Min
2016-09-01
A sensitive and specific immunoaffinity column to clean up and isolate multiple mycotoxins was developed along with a rapid one-step sample preparation procedure for ultra-performance liquid chromatography-tandem mass spectrometry analysis. Monoclonal antibodies against aflatoxin B1, aflatoxin B2, aflatoxin G1, aflatoxin G2, zearalenone, ochratoxin A, sterigmatocystin, and T-2 toxin were coupled to microbeads for mycotoxin purification. We optimized a homogenization and extraction procedure as well as column loading and elution conditions to maximize recoveries from complex feed matrices. This method allowed rapid, simple, and simultaneous determination of mycotoxins in feeds with a single chromatographic run. Detection limits for these toxins ranged from 0.006 to 0.12 ng mL^-1, and quantitation limits ranged from 0.06 to 0.75 ng mL^-1. Concentration curves were linear from 0.12 to 40 μg kg^-1 with correlation coefficients of R^2 > 0.99. Intra-assay and inter-assay comparisons indicated excellent repeatability and reproducibility of the multiple immunoaffinity columns. As a proof of principle, 80 feed samples were tested and several contained multiple mycotoxins. This method is sensitive, rapid, and durable enough for multiple mycotoxin determinations that fulfill European Union and Chinese testing criteria.
Choleau, C; Klein, J C; Reach, G; Aussedat, B; Demaria-Pesce, V; Wilson, G S; Gifford, R; Ward, W K
2002-08-01
The calibration of a continuous glucose monitoring system, i.e. the transformation of the signal I(t) generated by the glucose sensor at time t into an estimation of glucose concentration G(t), represents a key issue. The two-point calibration procedure consists of the determination of a sensor sensitivity S and of a background current I(o) by plotting two values of the sensor signal versus the concomitant blood glucose concentrations. The estimation of G(t) is subsequently given by G(t) = (I(t) - I(o))/S. A glucose sensor was implanted in the subcutaneous tissue of nine type 1 diabetic patients for 3 (n = 2) and 7 days (n = 7). For each individual trial, S and I(o) were determined by taking into account the values of two sets of sensor output and blood glucose concentration separated by at least 1 h, the procedure being repeated for each consecutive set of values. S and I(o) were found to be negatively correlated, the value of I(o) sometimes being negative. Theoretical analysis demonstrates that this phenomenon can be explained by the effect of measurement uncertainties on the determination of capillary glucose concentration and of sensor output.
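The two-point calibration described above can be written out directly: the sketch below solves for S and I(o) from two paired (sensor current, blood glucose) readings and then applies G(t) = (I(t) - I(o))/S; the numerical values are illustrative.

```python
# Minimal sketch of the two-point calibration: solve for sensitivity S and background
# current I_o from two (sensor current, blood glucose) pairs, then estimate glucose
# from a later current reading. All numerical values are illustrative.
def two_point_calibration(i1, g1, i2, g2):
    S = (i2 - i1) / (g2 - g1)          # sensor sensitivity (current per unit glucose)
    I_o = i1 - S * g1                  # background current
    return S, I_o

def estimate_glucose(i_t, S, I_o):
    return (i_t - I_o) / S

# Two calibration points at least 1 h apart: (current in nA, capillary glucose in mmol/L)
S, I_o = two_point_calibration(12.0, 5.0, 24.0, 11.0)
print(f"S = {S:.2f} nA per mmol/L, I_o = {I_o:.2f} nA")
print(f"G at I = 18 nA: {estimate_glucose(18.0, S, I_o):.2f} mmol/L")
# A small error in either calibration glucose value shifts S and I_o in opposite
# directions, consistent with the negative correlation reported above.
```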
NASA Astrophysics Data System (ADS)
de Souza, Roseli M.; Mathias, Bárbara M.; da Silveira, Carmem Lúcia P.; Aucélio, Ricardo Q.
2005-06-01
The quantitative evaluation of trace elements in foodstuffs is of considerable interest due to the potential toxicity of many elements, and because the presence of some metallic species might affect the overall quality (flavor and stability) of these products. In the present work, an inductively coupled plasma optical emission spectrometric method has been developed for the determination of six elements (Cd, Co, Cr, Cu, Ni and Mn) in olive oil, soy oil, margarine and butter. Organic samples (oils and fats) were stabilized using propan-1-ol and water, which enabled long-term sample dispersion in the solution. This simple sample preparation procedure, together with an efficient sample introduction strategy (using a Meinhard K3 nebulizer and a twister cyclonic spray chamber), facilitated the overall analytical procedure, allowing quantification using calibration curves prepared with inorganic standards. Internal standardization (Sc) was used for correction of matrix effects and signal fluctuations. Good sensitivities, with limits of detection in the ng g^-1 range, were achieved for all six elements. These sensitivities were appropriate for the intended application. The method was tested through the analysis of laboratory-fortified samples with good recoveries (between 91.3% and 105.5%).
A nuclear mutation defective in mitochondrial recombination in yeast.
Ling, F; Makishima, F; Morishima, N; Shibata, T
1995-08-15
Homologous recombination (crossing over and gene conversion) is generally essential for heredity and DNA repair, and occasionally causes DNA aberrations, in the nuclei of eukaryotes. However, little is known about the roles of homologous recombination in the inheritance and stability of mitochondrial DNA, which is continuously damaged by reactive oxygen species, by-products of respiration. Here, we report the first example of a nuclear recessive mutation that suggests an essential role for homologous recombination in the stable inheritance of mitochondrial DNA. For the detection of this class of mutants, we devised a novel procedure, 'mitochondrial crossing in haploid', which has enabled us to examine many mutant clones. Using this procedure, we examined mutants of Saccharomyces cerevisiae that showed an elevated UV induction of respiration-deficient mutations. We obtained a mutant that was defective in both omega-intron homing and Endo.SceI-induced homologous gene conversion. We found that the mutant cells are temperature-sensitive in the maintenance of mitochondrial DNA. A tetrad analysis indicated that the elevated UV induction of respiration-deficient mutations, the recombination deficiency and the temperature sensitivity are all caused by a single nuclear mutation (mhr1) on chromosome XII. The pleiotropic characteristics of the mutant suggest an essential role for the MHR1 gene in DNA repair, recombination and the maintenance of DNA in mitochondria.
Disposable electrochemiluminescent biosensor for lactate determination in saliva.
Ballesta Claver, J; Valencia Mirón, M C; Capitán-Vallvey, L F
2009-07-01
An electrochemiluminescence-based disposable biosensor for lactate is characterized. The lactate recognition system is based on lactate oxidase (LOx) and the transduction system consists of luminol. All the needed reagents (luminol, LOx, BSA, electrolyte and buffer) have been immobilized in a Methocel membrane placed on the working electrode of the screen-printed electrochemical cell. The electrochemiluminescence (ECL) is measured with a photon-counting head when 50 μl of sample is placed into the circular container of the screen-printed cell holding the disposable sensing membrane. The composition of the membrane and the reaction conditions have been optimized to obtain adequate sensitivity. The disposable biosensor responds to lactate after 20 s when two 1 s pulses at 0.5 V are applied to obtain the analytical parameter, the ECL initial rate. The linearized double-logarithmic dependence for lactate shows a dynamic range from 10⁻⁵ to 5 × 10⁻⁴ M, with a detection limit of 5 × 10⁻⁶ M and a sensor-to-sensor repeatability (relative standard deviation, RSD) of 3.30% at the middle of the range. The ECL disposable biosensor was applied to the analysis of lactate in human saliva as an alternative procedure for obtaining the lactate level in a non-invasive way. Interferences from saliva components were studied and eliminated in a simple, easy-to-handle way. The procedure was validated for use in human saliva by comparing the results against an enzymatic reference procedure. The proposed method is quick, inexpensive, selective and sensitive and uses conventional ECL instrumentation.
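As a small illustration of the linearized double-logarithmic calibration mentioned above, the following sketch (with invented rate values, not the authors' data) fits log(ECL initial rate) against log(lactate concentration) and inverts the fit for an unknown:

```python
# Log-log calibration of ECL initial rate versus lactate concentration (illustrative data).
import numpy as np

conc = np.array([1e-5, 5e-5, 1e-4, 5e-4])   # lactate, M (spanning the stated dynamic range)
rate = np.array([0.8, 3.5, 6.9, 31.0])      # ECL initial rate, arbitrary units/s (invented)

slope, intercept = np.polyfit(np.log10(conc), np.log10(rate), 1)

def lactate_from_rate(r):
    """Invert the log-log calibration to estimate lactate concentration from a measured rate."""
    return 10 ** ((np.log10(r) - intercept) / slope)

print(lactate_from_rate(10.0))   # estimated lactate (M) for a measured initial rate of 10
```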
NASA Technical Reports Server (NTRS)
Knapp, Charles F.; Evans, J. M.; Patwardhan, A.; Levenhagen, D.; Wang, M.; Charles, John B.
1991-01-01
A major focus of our research program is to develop noninvasive procedures for determining changes in cardiovascular function associated with the null gravity environment. We define changes in cardiovascular function to be (1) the result of the regulatory system operating at values different from 'normal' but with an overall control system basically unchanged by the null gravity exposure, or (2) the result of operating with a control system that has significantly different regulatory characteristics after an exposure. To this end, we have used a model of weightlessness that consisted of exposing humans to 2 hrs. in the launch position, followed by 20 hrs. of 6 deg head down bedrest. Our principal objective was to use this model to measure cardiovascular responses to the 6 deg head down bedrest protocol and to develop the most sensitive 'systems identification' procedure for indicating change. A second objective, related to future experiments, is to use the procedure in combination with experiments designed to determine the degree to which a regulatory pathway has been altered and to determine the mechanisms responsible for the changes.
Spectrophotometric determination of fenoterol hydrobromide in pure form and dosage forms.
El-Shabrawy, Y; Belal, F; Sharaf El-Din, M; Shalan, Sh
2003-10-01
A sensitive and rapid spectrophotometric procedure has been investigated for the determination of fenoterol either per se or in pharmaceutical preparations. The proposed procedure is based on the reaction between the drug and 4-chloro-7-nitrobenzo-2-oxa-1,3-diazole (NBD-Cl) at pH 7.2, using borate buffer, to produce a yellow adduct. The latter has maximum absorbance at 400 nm and obeys Beer's law within the concentration range 5-30 μg/ml. Regression analysis of the calibration data showed a good correlation coefficient (r = 0.9996) with a minimum detection limit of 0.24 μg/ml (6.2 × 10⁻⁸ M). The proposed procedure has been successfully applied to the determination of this drug in its tablets and in syrup; the mean percent recoveries were 97.45 ± 0.59% and 98.7 ± 0.64%, respectively. The results obtained are in good agreement with those given by a reference method. Pharmaceutical additives other than the active ingredient did not interfere. A reaction pathway has been proposed.
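The Beer's-law calibration and recovery calculation behind such a procedure can be sketched as follows (all absorbance and concentration values are illustrative, not the published data):

```python
# Linear Beer's-law calibration at 400 nm and percent recovery for a dosage-form sample.
import numpy as np

std_conc = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])          # standards, ug/mL (assumed)
absorbance = np.array([0.102, 0.205, 0.309, 0.410, 0.515, 0.618])  # invented readings

slope, intercept = np.polyfit(std_conc, absorbance, 1)

nominal = 20.0                               # claimed content of a diluted tablet sample, ug/mL
measured = (0.401 - intercept) / slope       # concentration read back from the sample absorbance
print(f"recovery = {100 * measured / nominal:.1f}%")
```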
Boenzi, Sara; Deodato, Federica; Taurisano, Roberta; Martinelli, Diego; Verrigni, Daniela; Carrozzo, Rosalba; Bertini, Enrico; Pastore, Anna; Dionisi-Vici, Carlo; Johnson, David W
2014-11-01
Two oxysterols, cholestan-3β,5α,6β-triol (C-triol) and 7-ketocholesterol (7-KC), have recently been proposed as diagnostic markers of Niemann-Pick type C (NP-C) disease, representing a potential alternative diagnostic tool to the more invasive and time-consuming filipin test in cultured fibroblasts. Usually, the oxysterols are detected and quantified by a liquid chromatography-tandem mass spectrometry (LC-MS/MS) method using atmospheric pressure chemical ionization (APCI) or electrospray ionization (ESI) sources, after a variety of derivatization procedures to enhance sensitivity. We developed a sensitive LC-MS/MS method to quantify the oxysterols in plasma as dimethylaminobutyrate esters, suitable for ESI analysis. This method, with an easy liquid-phase extraction and a short derivatization procedure, has been validated to demonstrate specificity, linearity, recovery, lowest limit of quantification, accuracy and precision. The assay was linear over a concentration range of 0.5-200 ng/mL for C-triol and 1.0-200 ng/mL for 7-KC. Intra-day and inter-day coefficients of variation (CV%) were <15% for both metabolites. Receiver operating characteristic analysis estimated an area under the curve of 0.998 for C-triol and 0.972 for 7-KC, indicating significant discriminatory power of both oxysterols in this patient population. In summary, our method provides a simple, rapid and non-invasive diagnostic tool for the biochemical diagnosis of NP-C disease. Copyright © 2014 Elsevier B.V. All rights reserved.
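A minimal, hypothetical example of the ROC analysis summarized above, using scikit-learn's roc_auc_score on invented oxysterol concentrations for patients and controls:

```python
# ROC area under the curve for a biomarker separating patients from controls (invented data).
import numpy as np
from sklearn.metrics import roc_auc_score

# 1 = NP-C patient, 0 = control; C-triol concentrations in ng/mL (all values are illustrative)
labels = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
c_triol = np.array([60.0, 45.0, 80.0, 55.0, 38.0, 8.0, 12.0, 6.0, 15.0, 10.0])

print("C-triol AUC:", roc_auc_score(labels, c_triol))   # 1.0 here, since the groups do not overlap
```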
Valeriani, Federica; Protano, Carmela; Gianfranceschi, Gianluca; Cozza, Paola; Campanella, Vincenzo; Liguori, Giorgio; Vitali, Matteo; Divizia, Maurizio; Romano Spica, Vincenzo
2016-08-09
Appropriate sanitation procedures and monitoring of their actual efficacy represent critical points for improving hygiene and reducing the risk of healthcare-associated infections. Presently, surveillance is based on traditional protocols and classical microbiology. Innovation in monitoring is required not only to enhance safety or speed up controls but also to prevent cross-infections due to novel or uncultivable pathogens. In order to improve surveillance monitoring, we propose that biological fluid microflora (mf) on reprocessed devices is a potential indicator of sanitation failure when tested by an mfDNA-based approach. The survey focused on oral microflora traces in dental care settings. Experimental tests (n = 48) and an "in field" trial (n = 83) were performed on dental instruments. Conventional microbiology and amplification of bacterial genes by multiple real-time PCR were applied to detect traces of salivary microflora. Six different sanitation protocols were considered. A monitoring protocol was developed and the performance of the mfDNA assay was evaluated in terms of sensitivity and specificity. Contaminated samples tested positive for saliva traces with the proposed approach (CT < 35). In accordance with guidelines, only fully sanitized samples were considered negative (100%). Culture-based tests confirmed disinfectant efficacy but failed to detect incomplete sanitation. The method provided sensitivity and specificity over 95%. The principle of detecting biological fluids by mfDNA analysis seems promising for monitoring the effectiveness of instrument reprocessing. The molecular approach is simple, fast and can provide valid support for surveillance in dental care and other hospital settings.
NASA Astrophysics Data System (ADS)
Schumacher, F.; Friederich, W.
2015-12-01
We present the modularized software package ASKI, a flexible and extendable toolbox for seismic full waveform inversion (FWI) as well as sensitivity or resolution analysis operating on the sensitivity matrix. It utilizes established wave propagation codes for solving the forward problem and offers an alternative to the monolithic, inflexible and hard-to-modify codes that have typically been written for solving inverse problems. It is available under the GPL at www.rub.de/aski. The Gauss-Newton FWI method for 3D-heterogeneous elastic earth models is based on waveform sensitivity kernels and can be applied to inverse problems at various spatial scales in both Cartesian and spherical geometries. The kernels are derived in the frequency domain from Born scattering theory as the Fréchet derivatives of linearized full waveform data functionals, quantifying the influence of elastic earth model parameters on particular waveform data values. As an important innovation, we keep two independent spatial descriptions of the earth model - one for solving the forward problem and one representing the inverted model updates. This accounts for the different spatial-resolution requirements of the forward and inverse problems. Due to pre-integration of the kernels over the (in general much coarser) inversion grid, storage requirements for the sensitivity kernels are dramatically reduced. ASKI can be flexibly extended to other forward codes by providing it with specific interface routines that contain knowledge about forward-code-specific file formats and auxiliary information provided by the new forward code. In order to sustain flexibility, the ASKI tools communicate via file output/input, so large storage capacities need to be conveniently accessible. Storing the complete sensitivity matrix to file, however, gives the scientist full manual control over each step in a customized procedure of sensitivity/resolution analysis and full waveform inversion.
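A conceptual sketch (not ASKI code, and not its file formats) of how a sensitivity matrix of pre-integrated kernels feeds one Gauss-Newton model update, with a simple Tikhonov damping term added as an assumption:

```python
# One Gauss-Newton model update from a sensitivity (Frechet-derivative) matrix K:
# the data residual d_obs - d_syn is mapped back onto the inversion grid by solving
# regularized normal equations. Sizes and values are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_data, n_model = 200, 50                 # waveform data values, inversion-grid cells
K = rng.normal(size=(n_data, n_model))    # stand-in for pre-integrated sensitivity kernels
residual = rng.normal(size=n_data)        # observed minus synthetic waveform data values

damping = 1.0                             # simple Tikhonov regularization (an assumption here)
lhs = K.T @ K + damping * np.eye(n_model)
rhs = K.T @ residual
model_update = np.linalg.solve(lhs, rhs)  # additive update to the elastic model parameters
print(model_update.shape)
```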
Connolly, Keith P; Schwartzberg, Randy S; Reuss, Bryan; Crumbie, David; Homan, Brad M
2013-02-20
Magnetic resonance imaging (MRI) has been suggested to be of high accuracy at academic institutions in the identification of superior labral tears; however, many Type-II superior labral anterior-posterior (SLAP) lesions encountered during arthroscopy have not been previously diagnosed with noncontrast images. This study evaluated the accuracy of diagnosing Type-II SLAP lesions in a community setting with use of noncontrast MRI and analyzed the effect that radiologist training and the scanner type or magnet strength had on sensitivity and specificity. One hundred and forty-four patients requiring repair of an arthroscopically confirmed Type-II SLAP lesion who had a noncontrast MRI examination performed within twelve months before the procedure were included in the sensitivity analysis. An additional 100 patients with arthroscopically confirmed, normal superior labral anatomy were identified for specificity analysis. The transcribed interpretations of the images by the radiologists were used to document the diagnosis of a SLAP lesion and were compared with the operative report. The magnet strength, type of MRI system (open or closed), and whether the radiologist had completed a musculoskeletal fellowship were also recorded. Noncontrast MRI identified SLAP lesions in fifty-four of 144 shoulders, yielding an overall sensitivity of 38% (95% confidence interval [CI] = 30%, 46%). Specificity was 94% (95% CI = 87%, 98%), with six SLAP lesions diagnosed in 100 shoulders that did not contain the lesion. Musculoskeletal fellowship-trained radiologists performed with higher sensitivity than those who had not completed the fellowship (46% versus 19%; p = 0.009). Our results demonstrate a low sensitivity and high specificity in the diagnosis of Type-II SLAP lesions with noncontrast MRI in this community setting. Musculoskeletal fellowship-trained radiologists had significantly higher sensitivities in accurately diagnosing the lesion than did radiologists without such training. Noncontrast MRI is not a reliable diagnostic tool for Type-II SLAP lesions in a community setting.
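The reported sensitivity, specificity and confidence intervals can be checked directly from the counts; the sketch below uses the Wilson score interval, which is an assumption since the paper does not state its interval method (it nonetheless reproduces the quoted 30%-46% interval for sensitivity):

```python
# Sensitivity and specificity with 95% Wilson score intervals, from the reported counts:
# 54 of 144 confirmed SLAP lesions detected; 94 of 100 normal shoulders correctly read.
import math

def wilson_ci(successes, n, z=1.96):
    p = successes / n
    centre = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = (z / (1 + z * z / n)) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return p, centre - half, centre + half

sens, lo, hi = wilson_ci(54, 144)
spec, slo, shi = wilson_ci(94, 100)
print(f"sensitivity {sens:.0%} (95% CI {lo:.0%}-{hi:.0%})")
print(f"specificity {spec:.0%} (95% CI {slo:.0%}-{shi:.0%})")
```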
ERIC Educational Resources Information Center
Harrison, Justin; McKay, Ryan
2012-01-01
Temporal discounting rates have become a popular dependent variable in social science research. While choice procedures are commonly employed to measure discounting rates, equivalent present value (EPV) procedures may be more sensitive to experimental manipulation. However, their use has been impeded by the absence of test-retest reliability data.…
Using Data Mining for Wine Quality Assessment
NASA Astrophysics Data System (ADS)
Cortez, Paulo; Teixeira, Juliana; Cerdeira, António; Almeida, Fernando; Matos, Telmo; Reis, José
Certification and quality assessment are crucial issues within the wine industry. Currently, wine quality is mostly assessed by physicochemical (e.g. alcohol levels) and sensory (e.g. human expert evaluation) tests. In this paper, we propose a data mining approach to predict wine preferences that is based on easily available analytical tests at the certification step. A large dataset is considered, with white vinho verde samples from the Minho region of Portugal. Wine quality is modeled under a regression approach, which preserves the order of the grades. Explanatory knowledge is given in terms of a sensitivity analysis, which measures the response changes when a given input variable is varied through its domain. Three regression techniques were applied, under a computationally efficient procedure that performs simultaneous variable and model selection and that is guided by the sensitivity analysis. The support vector machine achieved promising results, outperforming the multiple regression and neural network methods. Such a model is useful for understanding how physicochemical tests affect sensory preferences. Moreover, it can support wine expert evaluations and ultimately improve production.
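A small, hedged illustration of the modeling idea (synthetic data and scikit-learn, not the authors' pipeline): quality is regressed on physicochemical inputs with a support vector machine, and the sensitivity of the prediction to one input is probed by sweeping that input through its range while holding the others at their means.

```python
# SVM regression of a wine-quality-like target with a one-dimensional sensitivity sweep.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))                                              # stand-ins for physicochemical tests
y = 5 + 0.8 * X[:, 0] - 0.3 * X[:, 2] + rng.normal(scale=0.3, size=300)   # synthetic "quality" grade

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X, y)

# sensitivity sweep of the first variable (e.g. alcohol) over its observed range
grid = np.linspace(X[:, 0].min(), X[:, 0].max(), 25)
probe = np.tile(X.mean(axis=0), (25, 1))
probe[:, 0] = grid
print(model.predict(probe))   # response curve; its spread indicates how influential the input is
```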
Nakano, Yosuke; Konya, Yutaka; Taniguchi, Moyu; Fukusaki, Eiichiro
2017-01-01
d-Amino acids have recently attracted much attention in various research fields, including medicine, clinical analysis and the food industry, due to their important biological functions, which differ from those of l-amino acids. Most chiral amino acid separation techniques require complicated derivatization procedures in order to achieve the desired chromatographic behavior and detectability. Thus, the aim of this research was to develop a highly sensitive analytical method for the enantioseparation of chiral amino acids without any derivatization process, using liquid chromatography-tandem mass spectrometry (LC-MS/MS). By optimizing MS/MS parameters, we established a quantification method that allowed the simultaneous analysis of 18 d-amino acids with high sensitivity and reproducibility. Additionally, we applied the method to a food sample (vinegar) for validation, and successfully quantified trace levels of d-amino acids in the samples. These results demonstrate the applicability and feasibility of the LC-MS/MS method as a novel, effective tool for d-amino acid measurement in various biological samples. Copyright © 2016 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
Identifying hearing loss by means of iridology.
Stearn, Natalie; Swanepoel, De Wet
2006-11-13
Isolated reports of hearing loss presenting as markings on the iris exist, but to date the effectiveness of iridology in identifying hearing loss has not been investigated. This study therefore aimed to determine the efficacy of iridological analysis in the identification of moderate to profound sensorineural hearing loss in adolescents. A controlled trial was conducted with an iridologist, blind to the actual hearing status of participants, analyzing the irises of participants with and without hearing loss. Fifty hearing-impaired and fifty normal-hearing subjects, between the ages of 15 and 19 years, controlled for gender, participated in the study. An experienced iridologist analyzed the randomised set of participants' irises. Iridological analysis correctly identified hearing status in 70% of cases, with a false negative rate of 41% and a false positive rate of 19%. The respective sensitivity and specificity rates therefore came to 59% and 81%. Iridological analysis of hearing status indicated a statistically significant relationship to actual hearing status (P < 0.05). Although statistically significant, the sensitivity and specificity rates for identifying hearing loss by iridology were not comparable to those of traditional audiological screening procedures.
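These figures follow directly from the reported error rates; a quick arithmetic check (50 hearing-impaired and 50 normal-hearing participants):

```python
# Sensitivity, specificity and overall correct identification from the reported error rates.
false_negative_rate = 0.41   # hearing-impaired participants missed by iridology
false_positive_rate = 0.19   # normal-hearing participants flagged as hearing-impaired

sensitivity = 1 - false_negative_rate                     # 0.59
specificity = 1 - false_positive_rate                     # 0.81
accuracy = (50 * sensitivity + 50 * specificity) / 100    # 0.70 overall correct identification
print(sensitivity, specificity, accuracy)
```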
COMPARATIVE SENSITIVITY OF THE SHEEPSHEAD MINNOW AND ENDANGERED PUPFISHES
Standard environmental assessment procedures are assumed to protect aquatic species, including endangered ones. However, it is not known if endangered species are adequately protected by these procedures. To test the validity of this assumption, static acute toxicity tests were c...
Wu, Jintao; Zhu, Dexiao; Zhang, Jing; Li, Guibao; Liu, Zengxun; Sun, Jinhao
2016-02-04
Behavioral sensitization is a long-lasting enhancement of locomotor activity after exposure to psychostimulants. Incubation of sensitization is a phenomenon of remarkable augmentation of the locomotor response after withdrawal and reflects certain aspects of compulsive drug craving. However, the mechanisms underlying these phenomena remain elusive. Here we focus on the incubation of sensitization and hypothesize that intervening in this process will ultimately decrease the expression of sensitization. Melatonin is an endogenous hormone secreted mainly by the pineal gland. It is effective in treating sleep disorders, which are among the major withdrawal symptoms of methamphetamine (MA) addiction. Furthermore, melatonin can also protect neuronal cells against MA-induced neurotoxicity. In the present experiment, we treated mice with a low dose (10 mg/kg) of melatonin for 14 consecutive days during the incubation of sensitization. We found that melatonin significantly attenuated the expression of sensitization. In contrast, the vehicle-treated mice showed prominent enhancement of locomotor activity after incubation. MeCP2 expression was also elevated in the vehicle-treated mice, and melatonin attenuated its expression. Surprisingly, correlation analysis suggested a significant correlation between MeCP2 expression in the nucleus accumbens (NAc) and locomotion in both saline control and vehicle-treated mice, but not in melatonin-treated ones. MA also induced MeCP2 over-expression in PC12 cells. However, melatonin failed to reduce MeCP2 expression in vitro. Our results suggest that melatonin treatment during the incubation of sensitization attenuates MA-induced expression of sensitization and decreases MeCP2 expression in vivo. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Vicente-Serrano, Sergio M.; Van der Schrier, Gerard; Beguería, Santiago; Azorin-Molina, Cesar; Lopez-Moreno, Juan-I.
2015-07-01
In this study we analyzed the sensitivity of four drought indices to precipitation (P) and reference evapotranspiration (ETo) inputs. The four drought indices are the Palmer Drought Severity Index (PDSI), the Reconnaissance Drought Index (RDI), the Standardized Precipitation Evapotranspiration Index (SPEI) and the Standardized Palmer Drought Index (SPDI). The analysis uses long-term simulated series with varying averages and variances, as well as global observational data to assess the sensitivity to real climatic conditions in different regions of the world. The results show differences in the sensitivity to ETo and P among the four drought indices. The PDSI shows the lowest sensitivity to variation in its climate inputs, probably as a consequence of the procedure used to standardize soil water budget anomalies. The RDI is sensitive only to the variance, not to the average, of P and ETo. The SPEI shows the largest sensitivity to ETo variation, with clear geographic patterns mainly controlled by aridity. The low sensitivity of the PDSI to ETo makes it perhaps less suitable as a drought index in applications in which changes in ETo are most relevant. In contrast, the SPEI shows equal sensitivity to P and ETo: it works as a supply-and-demand system, modulated by the average and standard deviation of each series, and combines sensitivity to changes in both magnitude and variance. Our results are a robust assessment of the sensitivity of drought indices to P and ETo variation, and provide guidance on the use of drought indices to detect climate change impacts on drought severity under a wide variety of climatic conditions.
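A simplified sketch of why a water-balance-based index responds equally to P and ETo: the toy index below standardizes D = P - ETo directly (the actual SPEI fits a probability distribution to D, so this is only an illustration with synthetic series), and an identical shift appears whether P falls or ETo rises by the same amount.

```python
# Toy standardized water-balance index to illustrate equal sensitivity to P and ETo.
import numpy as np

rng = np.random.default_rng(2)
P = rng.gamma(shape=4.0, scale=20.0, size=600)     # monthly precipitation, mm (synthetic)
ETo = rng.normal(loc=60.0, scale=10.0, size=600)   # reference evapotranspiration, mm (synthetic)

D = P - ETo
mu, sd = D.mean(), D.std()

def index(p, eto):
    """Simplified standardized climatic water balance (stand-in for the SPEI)."""
    return ((p - eto) - mu) / sd

base = index(70.0, 60.0)
print(index(60.0, 60.0) - base)   # effect of -10 mm precipitation
print(index(70.0, 70.0) - base)   # effect of +10 mm reference evapotranspiration: same change
```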
Time-saving impact of an algorithm to identify potential surgical site infections.
Knepper, B C; Young, H; Jenkins, T C; Price, C S
2013-10-01
To develop and validate a partially automated algorithm to identify surgical site infections (SSIs) using commonly available electronic data, thereby reducing manual chart review. Retrospective cohort study of patients undergoing specific surgical procedures over a 4-year period from 2007 through 2010 (algorithm development cohort) or over a 3-month period from January 2011 through March 2011 (algorithm validation cohort). A single academic safety-net hospital in a major metropolitan area. Patients undergoing at least 1 included surgical procedure during the study period. Procedures were identified in the National Healthcare Safety Network; SSIs were identified by manual chart review. Commonly available electronic data, including microbiologic, laboratory, and administrative data, were identified via a clinical data warehouse. Algorithms using combinations of these electronic variables were constructed and assessed for their ability to identify SSIs and reduce chart review. The most efficient algorithm identified in the development cohort combined microbiologic data with postoperative procedure and diagnosis codes. This algorithm resulted in 100% sensitivity and 85% specificity. The time saved by the algorithm amounted to almost 600 person-hours of chart review. The algorithm demonstrated similar sensitivity on application to the validation cohort. A partially automated algorithm to identify potential SSIs was highly sensitive and dramatically reduced the amount of manual chart review required of infection control personnel during SSI surveillance.
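A hypothetical sketch of such a flagging rule (column names are invented, not from the study): a chart goes to manual review only if the encounter has a positive microbiologic result or an SSI-related postoperative procedure/diagnosis code.

```python
# Flag surgical encounters for manual SSI chart review from electronic data (illustrative schema).
import pandas as pd

surgeries = pd.DataFrame({
    "encounter_id": [1, 2, 3, 4],
    "positive_wound_culture": [True, False, False, True],   # microbiologic data
    "ssi_related_code": [False, True, False, False],        # postoperative procedure/diagnosis code
})

flagged = surgeries[surgeries["positive_wound_culture"] | surgeries["ssi_related_code"]]
print(flagged["encounter_id"].tolist())   # only these charts go to manual review
```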