Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria
NASA Astrophysics Data System (ADS)
Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong
2017-08-01
In this paper, a Reliability-Sensitivity Based Design Optimization (RSBDO) methodology for the design of ceramic matrix composite (CMC) components is proposed. A practical and efficient method for reliability analysis and sensitivity analysis of complex components with arbitrary distribution parameters is investigated by using the perturbation method, the response surface method, the Edgeworth series and the sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into the RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of CMC components. By comparison with Monte Carlo simulation, the numerical results demonstrate that the proposed methodology provides an accurate, convergent and computationally efficient method for reliability-analysis-based finite element modeling in engineering practice.
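A minimal numerical sketch of the moment-based idea behind such a reliability-sensitivity analysis is given below, assuming a simple strength-minus-load limit state with hypothetical means and standard deviations in place of the paper's CMC finite element model; a first-order second-moment reliability index and its perturbation-based derivatives stand in for the Edgeworth-series treatment.

```python
import numpy as np

# Illustrative limit-state function g(X) = R - S for a generic component:
# failure when strength R falls below load effect S. This is NOT the CMC
# finite-element model of the paper; it only illustrates the moment-based
# reliability and sensitivity idea (perturbation / first-order second moment).
def g(x):
    R, S = x
    return R - S

mu = np.array([500.0, 350.0])      # assumed means of R and S
sigma = np.array([40.0, 30.0])     # assumed standard deviations

def reliability_index(mu, sigma, h=1e-4):
    # First-order second-moment (FOSM) approximation at the mean point.
    grad = np.array([(g(mu + h * np.eye(2)[i]) - g(mu - h * np.eye(2)[i])) / (2 * h)
                     for i in range(2)])
    mean_g = g(mu)
    std_g = np.sqrt(np.sum((grad * sigma) ** 2))
    return mean_g / std_g

beta = reliability_index(mu, sigma)

# Reliability sensitivities d(beta)/d(mu_i) and d(beta)/d(sigma_i) by central
# differences, i.e. the "perturbation" step of a reliability-sensitivity analysis.
eps = 1e-3
dbeta_dmu = np.array([(reliability_index(mu + eps * np.eye(2)[i], sigma)
                       - reliability_index(mu - eps * np.eye(2)[i], sigma)) / (2 * eps)
                      for i in range(2)])
dbeta_dsigma = np.array([(reliability_index(mu, sigma + eps * np.eye(2)[i])
                          - reliability_index(mu, sigma - eps * np.eye(2)[i])) / (2 * eps)
                         for i in range(2)])

print(f"beta = {beta:.3f}")
print("d(beta)/d(mu)    =", dbeta_dmu)
print("d(beta)/d(sigma) =", dbeta_dsigma)
```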
Eigenvalue Contributon Estimator for Sensitivity Calculations with TSUNAMI-3D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rearden, Bradley T; Williams, Mark L
2007-01-01
Since the release of the Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) codes in SCALE [1], the use of sensitivity and uncertainty analysis techniques for criticality safety applications has greatly increased within the user community. In general, sensitivity and uncertainty analysis is transitioning from a technique used only by specialists to a practical tool in routine use. With the desire to use the tool more routinely comes the need to improve the solution methodology to reduce the input and computational burden on the user. This paper reviews the current solution methodology of the Monte Carlo eigenvalue sensitivity analysis sequence TSUNAMI-3D, describes an alternative approach, and presents results from both methodologies.
Netlist Oriented Sensitivity Evaluation (NOSE)
2017-03-01
developing methodologies to assess sensitivities of alternative chip design netlist implementations. The research is somewhat foundational in that such...Netlist-Oriented Sensitivity Evaluation (NOSE) project was to develop methodologies to assess sensitivities of alternative chip design netlist...analysis to devise a methodology for scoring the sensitivity of circuit nodes in a netlist and thus providing the raw data for any meaningful
Development of Flight Safety Prediction Methodology for U. S. Naval Safety Center. Revision 1
1970-02-01
Safety Center. The methodology developed encompassed functional analysis of the F-4J aircraft, assessment of the importance of safety-sensitive ... (Table-of-contents fragments from the report: Sensitivity; 4.5 Model Implementation; 4.5.1 Functional Analysis; 4.5.2 Major Function Sensitivity Assignment; 4.5.3 Link Dependency Assignment; 4.5.4 Computer Program for Sensitivity.)
Comparison between two methodologies for urban drainage decision aid.
Moura, P M; Baptista, M B; Barraud, S
2006-01-01
The objective of the present work is to compare two methodologies based on multicriteria analysis for the evaluation of stormwater systems. The first methodology was developed in Brazil and is based on performance-cost analysis, the second one is ELECTRE III. Both methodologies were applied to a case study. Sensitivity and robustness analyses were then carried out. These analyses demonstrate that both methodologies have equivalent results, and present low sensitivity and high robustness. These results prove that the Brazilian methodology is consistent and can be used safely in order to select a good solution or a small set of good solutions that could be compared with more detailed methods afterwards.
Aircraft optimization by a system approach: Achievements and trends
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1992-01-01
Recently emerging methodology for optimal design of aircraft treated as a system of interacting physical phenomena and parts is examined. The methodology is found to coalesce into methods for hierarchic, non-hierarchic, and hybrid systems all dependent on sensitivity analysis. A separate category of methods has also evolved independent of sensitivity analysis, hence suitable for discrete problems. References and numerical applications are cited. Massively parallel computer processing is seen as enabling technology for practical implementation of the methodology.
Benefit-Cost Analysis of Integrated Paratransit Systems : Volume 6. Technical Appendices.
DOT National Transportation Integrated Search
1979-09-01
This last volume, includes five technical appendices which document the methodologies used in the benefit-cost analysis. They are the following: Scenario analysis methodology; Impact estimation; Example of impact estimation; Sensitivity analysis; Agg...
Aeroelastic optimization methodology for viscous and turbulent flows
NASA Astrophysics Data System (ADS)
Barcelos Junior, Manuel Nascimento Dias
2007-12-01
In recent years, the development of faster computers and parallel processing allowed the application of high-fidelity analysis methods to the aeroelastic design of aircraft. However, these methods are restricted to the final design verification, mainly due to the computational cost involved in iterative design processes. Therefore, this work is concerned with the creation of a robust and efficient aeroelastic optimization methodology for inviscid, viscous and turbulent flows by using high-fidelity analysis and sensitivity analysis techniques. Most of the research in aeroelastic optimization, for practical reasons, treat the aeroelastic system as a quasi-static inviscid problem. In this work, as a first step toward the creation of a more complete aeroelastic optimization methodology for realistic problems, an analytical sensitivity computation technique was developed and tested for quasi-static aeroelastic viscous and turbulent flow configurations. Viscous and turbulent effects are included by using an averaged discretization of the Navier-Stokes equations, coupled with an eddy viscosity turbulence model. For quasi-static aeroelastic problems, the traditional staggered solution strategy has unsatisfactory performance when applied to cases where there is a strong fluid-structure coupling. Consequently, this work also proposes a solution methodology for aeroelastic and sensitivity analyses of quasi-static problems, which is based on the fixed point of an iterative nonlinear block Gauss-Seidel scheme. The methodology can also be interpreted as the solution of the Schur complement of the aeroelastic and sensitivity analyses linearized systems of equations. The methodologies developed in this work are tested and verified by using realistic aeroelastic systems.
Source apportionment and sensitivity analysis: two methodologies with two different purposes
NASA Astrophysics Data System (ADS)
Clappier, Alain; Belis, Claudio A.; Pernigotti, Denise; Thunis, Philippe
2017-11-01
This work reviews the existing methodologies for source apportionment and sensitivity analysis to identify key differences and stress their implicit limitations. The emphasis is laid on the differences between source impacts (sensitivity analysis) and contributions (source apportionment) obtained by using four different methodologies: brute-force top-down, brute-force bottom-up, tagged species and decoupled direct method (DDM). A simple theoretical example is used to compare these approaches, highlighting differences and potential implications for policy. When the relationships between concentration and emissions are linear, impacts and contributions are equivalent concepts. In this case, source apportionment and sensitivity analysis may be used indifferently for both air quality planning purposes and quantifying source contributions. However, this study demonstrates that when the relationship between emissions and concentrations is nonlinear, sensitivity approaches are not suitable to retrieve source contributions and source apportionment methods are not appropriate to evaluate the impact of abatement strategies. A quantification of the potential nonlinearities should therefore be the first step prior to source apportionment or planning applications, to prevent any limitations in their use. When nonlinearity is mild, these limitations may, however, be acceptable in the context of the other uncertainties inherent to complex models. Moreover, when using sensitivity analysis for planning, it is important to note that, under nonlinear circumstances, the calculated impacts will only provide information for the exact conditions (e.g. emission reduction share) that are simulated.
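The impact-versus-contribution distinction can be illustrated with a toy calculation; the power-law concentration-emission relation, the two sources, and the allocation rule below are assumptions made only for illustration, not any chemistry-transport model from the study.

```python
import numpy as np

# Toy concentration-emission relationship for two sources, chosen only to
# illustrate the impact-vs-contribution distinction; the exponent p is an
# assumption, not taken from any chemistry-transport model.
def concentration(e1, e2, p=1.0):
    return (e1 + e2) ** p

e1, e2 = 60.0, 40.0

for p in (1.0, 0.7):   # p = 1: linear; p = 0.7: mildly nonlinear
    c_total = concentration(e1, e2, p)
    # "Impact" of source 1 (sensitivity-analysis view): concentration change
    # when source-1 emissions are removed (brute-force, 100 % reduction).
    impact_1 = c_total - concentration(0.0, e2, p)
    # "Contribution" of source 1 (source-apportionment view): share of the
    # total concentration attributed to source 1, here via its emission share
    # (mimicking a tagged-species allocation).
    contribution_1 = c_total * e1 / (e1 + e2)
    print(f"p = {p}: impact of source 1 = {impact_1:.2f}, "
          f"contribution of source 1 = {contribution_1:.2f}")
# For p = 1 the two numbers coincide; for p != 1 they differ, which is the
# nonlinearity caveat stressed in the abstract.
```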
A methodology to estimate uncertainty for emission projections through sensitivity analysis.
Lumbreras, Julio; de Andrés, Juan Manuel; Pérez, Javier; Borge, Rafael; de la Paz, David; Rodríguez, María Encarnación
2015-04-01
Air pollution abatement policies must be based on quantitative information on current and future emissions of pollutants. As emission projection uncertainties are inevitable and traditional statistical treatments of uncertainty are highly time- and resource-consuming, a simplified methodology for nonstatistical uncertainty estimation based on sensitivity analysis is presented in this work. The methodology was applied to the "with measures" scenario for Spain, specifically to the 12 highest-emitting sectors for greenhouse gas and air pollutant emissions. Examples of methodology application for two important sectors (power plants, and agriculture and livestock) are shown and explained in depth. Uncertainty bands were obtained up to 2020 by modifying the driving factors of the 12 selected sectors, and the methodology was tested against a recomputed emission trend under a low economic-growth perspective and official figures for 2010, showing a very good performance. A solid understanding and quantification of uncertainties related to atmospheric emission inventories and projections provide useful information for policy negotiations. However, as many of those uncertainties are irreducible, there is an interest in how they could be managed in order to derive robust policy conclusions. Taking this into account, a method developed to use sensitivity analysis as a source of information to derive nonstatistical uncertainty bands for emission projections is presented and applied to Spain. This method simplifies uncertainty assessment and allows other countries to take advantage of their sensitivity analyses.
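A minimal sketch of the general idea follows, assuming a hypothetical activity-times-emission-factor projection and arbitrary perturbation ranges for the driving factors rather than the Spanish inventory model of the paper.

```python
import numpy as np

# Minimal sketch of sensitivity-based uncertainty bands for an emission
# projection. The projection model (emissions = activity * emission factor,
# with exponential activity growth) and the perturbation ranges are
# illustrative assumptions, not the Spanish inventory model of the paper.
years = np.arange(2010, 2021)
base_activity_2010 = 100.0
base_ef = 2.0                      # emission factor, arbitrary units
growth_central = 0.02              # central annual activity growth

def projection(growth, ef_scale):
    activity = base_activity_2010 * (1.0 + growth) ** (years - years[0])
    return activity * base_ef * ef_scale

central = projection(growth_central, 1.0)

# Perturb the driving factors over assumed ranges and keep the envelope of
# the resulting trajectories as the nonstatistical uncertainty band.
growth_range = (0.00, 0.04)        # low-growth vs high-growth assumption
ef_range = (0.9, 1.1)              # +/-10 % on the emission factor
scenarios = [projection(g, s) for g in growth_range for s in ef_range]
lower = np.min(scenarios, axis=0)
upper = np.max(scenarios, axis=0)

for y, c, lo, hi in zip(years, central, lower, upper):
    print(f"{y}: central {c:7.1f}  band [{lo:7.1f}, {hi:7.1f}]")
```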
2010-08-01
This study presents a methodology for computing stochastic sensitivities with respect to the design variables, which are the random variables.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hughes, Justin Matthew
These are the slides for a graduate presentation at Mississippi State University. It covers the following: the BRL Shaped-Charge Geometry in PAGOSA, mesh refinement study, surrogate modeling using a radial basis function network (RBFN), ruling out parameters using sensitivity analysis (equation of state study), uncertainty quantification (UQ) methodology, and sensitivity analysis (SA) methodology. In summary, a mesh convergence study was used to ensure that solutions were numerically stable by comparing PDV data between simulations. A Design of Experiments (DOE) method was used to reduce the simulation space to study the effects of the Jones-Wilkins-Lee (JWL) parameters for the Composition B main charge. Uncertainty was quantified by computing the 95% data range about the median of simulation output using a brute force Monte Carlo (MC) random sampling method. Parameter sensitivities were quantified using the Fourier Amplitude Sensitivity Test (FAST) spectral analysis method where it was determined that detonation velocity, initial density, C1, and B1 controlled jet tip velocity.
NASA Astrophysics Data System (ADS)
Wang, Qiqi; Rigas, Georgios; Esclapez, Lucas; Magri, Luca; Blonigan, Patrick
2016-11-01
Bluff body flows are of fundamental importance to many engineering applications involving massive flow separation and in particular the transport industry. Coherent flow structures emanating in the wake of three-dimensional bluff bodies, such as cars, trucks and lorries, are directly linked to increased aerodynamic drag, noise and structural fatigue. For low-Reynolds-number laminar and transitional regimes, hydrodynamic stability theory has aided the understanding and prediction of the unstable dynamics. In the same framework, sensitivity analysis provides the means for efficient and optimal control, provided the unstable modes can be accurately predicted. However, these methodologies are limited to laminar regimes where only a few unstable modes manifest. Here we extend the stability analysis to low-dimensional chaotic regimes by computing the Lyapunov covariant vectors and their associated Lyapunov exponents. We compare them to eigenvectors and eigenvalues computed in traditional hydrodynamic stability analysis. Computing Lyapunov covariant vectors and Lyapunov exponents also enables the extension of sensitivity analysis to chaotic flows via the shadowing method. We compare the computed shadowing sensitivities to traditional sensitivity analysis. These Lyapunov-based methodologies do not rely on mean flow assumptions, and are mathematically rigorous for calculating sensitivities of fully unsteady flow simulations.
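The quantities named in the abstract can be illustrated on a small chaotic system; the sketch below estimates the Lyapunov exponent spectrum of the Lorenz equations with the standard QR (Benettin-type) procedure, as a stand-in for the bluff-body wake dynamics, which obviously cannot be reproduced here.

```python
import numpy as np

# Lyapunov exponents of the Lorenz system via the standard QR (Benettin)
# procedure. The Lorenz system is a stand-in for the low-dimensional chaotic
# wake dynamics discussed in the abstract; parameters are the classical ones.
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0

def f(u):
    x, y, z = u
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def jac(u):
    x, y, z = u
    return np.array([[-sigma, sigma, 0.0],
                     [rho - z, -1.0, -x],
                     [y, x, -beta]])

def rk4(u, dt, rhs):
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, n_steps = 0.01, 50000
u = np.array([1.0, 1.0, 1.0])
Q = np.eye(3)                      # orthonormal set of tangent vectors
lyap_sum = np.zeros(3)

for step in range(n_steps):
    u = rk4(u, dt, f)
    # Propagate the tangent vectors with the linearized dynamics (an Euler
    # step is enough for a sketch) and re-orthonormalize with QR.
    Q = Q + dt * jac(u) @ Q
    Q, R = np.linalg.qr(Q)
    lyap_sum += np.log(np.abs(np.diag(R)))

exponents = lyap_sum / (n_steps * dt)
print("Estimated Lyapunov exponents:", exponents)
# Roughly (0.9, 0, -14.6) for the classical Lorenz parameters.
```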
Leavesley, Silas J; Sweat, Brenner; Abbott, Caitlyn; Favreau, Peter; Rich, Thomas C
2018-01-01
Spectral imaging technologies have been used for many years by the remote sensing community. More recently, these approaches have been applied to biomedical problems, where they have shown great promise. However, biomedical spectral imaging has been complicated by the high variance of biological data and the reduced ability to construct test scenarios with fixed ground truths. Hence, it has been difficult to objectively assess and compare biomedical spectral imaging assays and technologies. Here, we present a standardized methodology that allows assessment of the performance of biomedical spectral imaging equipment, assays, and analysis algorithms. This methodology incorporates real experimental data and a theoretical sensitivity analysis, preserving the variability present in biomedical image data. We demonstrate that this approach can be applied in several ways: to compare the effectiveness of spectral analysis algorithms, to compare the response of different imaging platforms, and to assess the level of target signature required to achieve a desired performance. Results indicate that it is possible to compare even very different hardware platforms using this methodology. Future applications could include a range of optimization tasks, such as maximizing detection sensitivity or acquisition speed, providing high utility for investigators ranging from design engineers to biomedical scientists. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.
1992-01-01
Fundamental equations of aerodynamic sensitivity analysis and approximate analysis for the two dimensional thin layer Navier-Stokes equations are reviewed, and special boundary condition considerations necessary to apply these equations to isolated lifting airfoils on 'C' and 'O' meshes are discussed in detail. An efficient strategy which is based on the finite element method and an elastic membrane representation of the computational domain is successfully tested, which circumvents the costly 'brute force' method of obtaining grid sensitivity derivatives, and is also useful in mesh regeneration. The issue of turbulence modeling is addressed in a preliminary study. Aerodynamic shape sensitivity derivatives are efficiently calculated, and their accuracy is validated on two viscous test problems, including: (1) internal flow through a double throat nozzle, and (2) external flow over a NACA 4-digit airfoil. An automated aerodynamic design optimization strategy is outlined which includes the use of a design optimization program, an aerodynamic flow analysis code, an aerodynamic sensitivity and approximate analysis code, and a mesh regeneration and grid sensitivity analysis code. Application of the optimization methodology to the two test problems in each case resulted in a new design having a significantly improved performance in the aerodynamic response of interest.
NASA Astrophysics Data System (ADS)
Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.
2016-12-01
Sensitivity analysis has been an important tool in groundwater modeling to identify the influential parameters. Among various sensitivity analysis methods, the variance-based global sensitivity analysis has gained popularity for its model independence characteristic and capability of providing accurate sensitivity measurements. However, the conventional variance-based method only considers uncertainty contribution of single model parameters. In this research, we extended the variance-based method to consider more uncertainty sources and developed a new framework to allow flexible combinations of different uncertainty components. We decompose the uncertainty sources into a hierarchical three-layer structure: scenario, model and parametric. Furthermore, each layer of uncertainty source is capable of containing multiple components. An uncertainty and sensitivity analysis framework was then constructed following this three-layer structure using Bayesian network. Different uncertainty components are represented as uncertain nodes in this network. Through the framework, variance-based sensitivity analysis can be implemented with great flexibility of using different grouping strategies for uncertainty components. The variance-based sensitivity analysis thus is improved to be able to investigate the importance of an extended range of uncertainty sources: scenario, model, and other different combinations of uncertainty components which can represent certain key model system processes (e.g., groundwater recharge process, flow reactive transport process). For test and demonstration purposes, the developed methodology was implemented into a test case of real-world groundwater reactive transport modeling with various uncertainty sources. The results demonstrate that the new sensitivity analysis method is able to estimate accurate importance measurements for any uncertainty sources which were formed by different combinations of uncertainty components. The new methodology can provide useful information for environmental management and decision-makers to formulate policies and strategies.
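A minimal sketch of grouped variance-based sensitivity indices is shown below; the three-input test function and the grouping into "scenario", "model", and "parametric" factors are assumptions for illustration, and the Bayesian-network machinery of the framework is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model with three uncertain inputs standing in for different layers of
# uncertainty (a scenario factor s, a model factor m, a parameter p). The
# function and the grouping below are illustrative assumptions only.
def model(s, m, p):
    return s + 2.0 * m + s * p + 0.5 * p ** 2

N = 200_000
def sample(n):
    return rng.uniform(-1, 1, size=(3, n))

def grouped_first_order_index(group):
    # Pick-freeze estimator: keep the inputs in `group` fixed between the two
    # samples, resample the rest, and use Cov(Y, Y_group) / Var(Y).
    A, B = sample(N), sample(N)
    mixed = B.copy()
    mixed[group, :] = A[group, :]
    yA = model(*A)
    yG = model(*mixed)
    return np.cov(yA, yG)[0, 1] / np.var(yA)

print("S_{scenario}            =", round(grouped_first_order_index([0]), 3))
print("S_{parametric}          =", round(grouped_first_order_index([2]), 3))
print("S_{scenario+parametric} =", round(grouped_first_order_index([0, 2]), 3))
```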
Analysis of pressure distortion testing
NASA Technical Reports Server (NTRS)
Koch, K. E.; Rees, R. L.
1976-01-01
The development of a distortion methodology, method D, was documented, and its application to steady state and unsteady data was demonstrated. Three methodologies based upon DIDENT, a NASA-LeRC distortion methodology based upon the parallel compressor model, were investigated by applying them to a set of steady state data. The best formulation was then applied to an independent data set. The good correlation achieved with this data set showed that method E, one of the above methodologies, is a viable concept. Unsteady data were analyzed by using the method E methodology. This analysis pointed out that the method E sensitivities are functions of pressure defect level as well as corrected speed and pattern.
Infiltration modeling guidelines for commercial building energy analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gowri, Krishnan; Winiarski, David W.; Jarnagin, Ronald E.
This report presents a methodology for modeling air infiltration in EnergyPlus to account for envelope air barrier characteristics. Based on a review of various infiltration modeling options available in EnergyPlus and a sensitivity analysis, the linear wind velocity coefficient based on the DOE-2 infiltration model is recommended. The methodology described in this report can be used to calculate the EnergyPlus infiltration input for any given building-level infiltration rate specified at a known pressure difference. The sensitivity analysis shows that EnergyPlus calculates the wind speed based on zone altitude, and the linear wind velocity coefficient represents the variation in infiltration heat loss consistent with building location and weather data.
NASA Technical Reports Server (NTRS)
Sankararaman, Shankar
2016-01-01
This paper presents a computational framework for uncertainty characterization and propagation, and sensitivity analysis under the presence of aleatory and epistemic uncertainty, and develops a rigorous methodology for efficient refinement of epistemic uncertainty by identifying important epistemic variables that significantly affect the overall performance of an engineering system. The proposed methodology is illustrated using the NASA Langley Uncertainty Quantification Challenge (NASA-LUQC) problem that deals with uncertainty analysis of a generic transport model (GTM). First, Bayesian inference is used to infer subsystem-level epistemic quantities using the subsystem-level model and corresponding data. Second, tools of variance-based global sensitivity analysis are used to identify four important epistemic variables (this limitation specified in the NASA-LUQC is reflective of practical engineering situations where not all epistemic variables can be refined due to time/budget constraints) that significantly affect system-level performance. The most significant contribution of this paper is the development of the sequential refinement methodology, where epistemic variables for refinement are not identified all-at-once. Instead, only one variable is first identified, and then, Bayesian inference and global sensitivity calculations are repeated to identify the next important variable. This procedure is continued until all 4 variables are identified and the refinement in the system-level performance is computed. The advantages of the proposed sequential refinement methodology over the all-at-once uncertainty refinement approach are explained, and then applied to the NASA Langley Uncertainty Quantification Challenge problem.
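The sequential refinement loop can be sketched as follows, assuming a toy response of four epistemic variables, interval shrinking as a stand-in for Bayesian updating, and a crude one-at-a-time variance measure in place of the full variance-based indices of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy system response driven by four epistemic variables, each represented by
# an interval that "refinement" shrinks toward its midpoint. Both the response
# and the refinement model are illustrative assumptions, not the GTM problem.
def response(e):
    return 3.0 * e[0] + e[1] ** 2 + 0.5 * e[2] + 0.1 * e[3]

intervals = [[-1.0, 1.0] for _ in range(4)]

def variance_share(intervals, n=100_000):
    # Crude variance-based importance: variance of the response when only one
    # variable varies over its interval, others held at their midpoints.
    mids = np.array([(lo + hi) / 2 for lo, hi in intervals])
    shares = []
    for i, (lo, hi) in enumerate(intervals):
        e = np.tile(mids, (n, 1))
        e[:, i] = rng.uniform(lo, hi, n)
        shares.append(np.var(response(e.T)))
    return np.array(shares)

refined = set()
for step in range(4):
    shares = variance_share(intervals)
    candidates = [i for i in range(4) if i not in refined]
    target = max(candidates, key=lambda i: shares[i])
    print(f"step {step}: variance shares = {np.round(shares, 3)}, refine variable {target}")
    # "Refine" the selected epistemic variable: shrink its interval by 80 %
    # around the midpoint (standing in for Bayesian updating with new data).
    lo, hi = intervals[target]
    mid, half = (lo + hi) / 2, (hi - lo) / 2
    intervals[target] = [mid - 0.2 * half, mid + 0.2 * half]
    refined.add(target)
```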
Health economic assessment: a methodological primer.
Simoens, Steven
2009-12-01
This review article aims to provide an introduction to the methodology of health economic assessment of a health technology. Attention is paid to defining the fundamental concepts and terms that are relevant to health economic assessments. The article describes the methodology underlying a cost study (identification, measurement and valuation of resource use, calculation of costs), an economic evaluation (type of economic evaluation, the cost-effectiveness plane, trial- and model-based economic evaluation, discounting, sensitivity analysis, incremental analysis), and a budget impact analysis. Key references are provided for those readers who wish a more advanced understanding of health economic assessments.
Resilience through adaptation.
Ten Broeke, Guus A; van Voorn, George A K; Ligtenberg, Arend; Molenaar, Jaap
2017-01-01
Adaptation of agents through learning or evolution is an important component of the resilience of Complex Adaptive Systems (CAS). Without adaptation, the flexibility of such systems to cope with outside pressures would be much lower. To study the capabilities of CAS to adapt, social simulations with agent-based models (ABMs) provide a helpful tool. However, the value of ABMs for studying adaptation depends on the availability of methodologies for sensitivity analysis that can quantify resilience and adaptation in ABMs. In this paper we propose a sensitivity analysis methodology that is based on comparing time-dependent probability density functions of output of ABMs with and without agent adaptation. The differences between the probability density functions are quantified by the so-called earth-mover's distance. We use this sensitivity analysis methodology to quantify the probability of occurrence of critical transitions and other long-term effects of agent adaptation. To test the potential of this new approach, it is used to analyse the resilience of an ABM of adaptive agents competing for a common-pool resource. Adaptation is shown to contribute positively to the resilience of this ABM. If adaptation proceeds sufficiently fast, it may delay or avert the collapse of this system.
SCALE Continuous-Energy Eigenvalue Sensitivity Coefficient Calculations
Perfetti, Christopher M.; Rearden, Bradley T.; Martin, William R.
2016-02-25
Sensitivity coefficients describe the fractional change in a system response that is induced by changes to system parameters and nuclear data. The Tools for Sensitivity and UNcertainty Analysis Methodology Implementation (TSUNAMI) code within the SCALE code system makes use of eigenvalue sensitivity coefficients for an extensive number of criticality safety applications, including quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different critical systems, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved fidelity and the desire to extend TSUNAMI analysis to advanced applications has motivated the development of a methodology for calculating sensitivity coefficients in continuous-energy (CE) Monte Carlo applications. The Contributon-Linked eigenvalue sensitivity/Uncertainty estimation via Tracklength importance CHaracterization (CLUTCH) and Iterated Fission Probability (IFP) eigenvalue sensitivity methods were recently implemented in the CE-KENO framework of the SCALE code system to enable TSUNAMI-3D to perform eigenvalue sensitivity calculations using continuous-energy Monte Carlo methods. This work provides a detailed description of the theory behind the CLUTCH method and describes in detail its implementation. This work explores the improvements in eigenvalue sensitivity coefficient accuracy that can be gained through the use of continuous-energy sensitivity methods and also compares several sensitivity methods in terms of computational efficiency and memory requirements.
NASA Technical Reports Server (NTRS)
Francois, J.
1981-01-01
The focus of the investigation is centered around two main themes: an analysis of the effects of aircraft noise on the psychological and physiological equilibrium of airport residents; and an analysis of the sources of variability of sensitivity to noise. The methodology used is presented. Nine statistical tables are included, along with a set of conclusions.
French, Michael T; Salomé, Helena J; Sindelar, Jody L; McLellan, A Thomas
2002-04-01
To provide detailed methodological guidelines for using the Drug Abuse Treatment Cost Analysis Program (DATCAP) and Addiction Severity Index (ASI) in a benefit-cost analysis of addiction treatment. A representative benefit-cost analysis of three outpatient programs was conducted to demonstrate the feasibility and value of the methodological guidelines. Procedures are outlined for using resource use and cost data collected with the DATCAP. Techniques are described for converting outcome measures from the ASI to economic (dollar) benefits of treatment. Finally, principles are advanced for conducting a benefit-cost analysis and a sensitivity analysis of the estimates. The DATCAP was administered at three outpatient drug-free programs in Philadelphia, PA, for 2 consecutive fiscal years (1996 and 1997). The ASI was administered to a sample of 178 treatment clients at treatment entry and at 7-months postadmission. The DATCAP and ASI appear to have significant potential for contributing to an economic evaluation of addiction treatment. The benefit-cost analysis and subsequent sensitivity analysis all showed that total economic benefit was greater than total economic cost at the three outpatient programs, but this representative application is meant to stimulate future economic research rather than justifying treatment per se. This study used previously validated, research-proven instruments and methods to perform a practical benefit-cost analysis of real-world treatment programs. The study demonstrates one way to combine economic and clinical data and offers a methodological foundation for future economic evaluations of addiction treatment.
Sumner, T; Shephard, E; Bogle, I D L
2012-09-07
One of the main challenges in the development of mathematical and computational models of biological systems is the precise estimation of parameter values. Understanding the effects of uncertainties in parameter values on model behaviour is crucial to the successful use of these models. Global sensitivity analysis (SA) can be used to quantify the variability in model predictions resulting from the uncertainty in multiple parameters and to shed light on the biological mechanisms driving system behaviour. We present a new methodology for global SA in systems biology which is computationally efficient and can be used to identify the key parameters and their interactions which drive the dynamic behaviour of a complex biological model. The approach combines functional principal component analysis with established global SA techniques. The methodology is applied to a model of the insulin signalling pathway, defects of which are a major cause of type 2 diabetes and a number of key features of the system are identified.
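A hedged sketch of the combination described here (dimension reduction of the functional output followed by variance-based indices on the leading score) follows, using a two-parameter saturating-response model as a placeholder for the insulin-signalling pathway model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sketch of global SA on a functional (time-course) output: run a dynamic
# model over a Monte Carlo parameter sample, reduce the output curves with
# PCA, then estimate first-order sensitivity indices of the leading principal
# component score. The two-parameter saturating-response model is a
# placeholder for the insulin-signalling model of the paper.
t = np.linspace(0.0, 10.0, 50)
n = 20_000
k = rng.uniform(0.2, 2.0, n)       # assumed rate-constant range
c = rng.uniform(0.5, 1.5, n)       # assumed amplitude range

curves = c[:, None] * (1.0 - np.exp(-np.outer(k, t)))   # shape (n, n_time)

# Functional PCA step: centre the curves and project on leading components.
centred = curves - curves.mean(axis=0)
_, s, vt = np.linalg.svd(centred, full_matrices=False)
score1 = centred @ vt[0]           # first principal-component score per run
explained = s[0] ** 2 / np.sum(s ** 2)

def first_order_index(x, y, bins=25):
    # Crude first-order index: Var(E[Y | X]) / Var(Y) from binned conditional means.
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return np.var(cond_means) / np.var(y)

print(f"PC1 explains {explained:.1%} of output variance")
print("S_k ~", round(first_order_index(k, score1), 2))
print("S_c ~", round(first_order_index(c, score1), 2))
```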
Space system operations and support cost analysis using Markov chains
NASA Technical Reports Server (NTRS)
Unal, Resit; Dean, Edwin B.; Moore, Arlene A.; Fairbairn, Robert E.
1990-01-01
This paper evaluates the use of Markov chain processes in probabilistic life cycle cost analysis and suggests further uses of the process as a design aid tool. A methodology is developed for estimating operations and support cost and expected life for reusable space transportation systems. Application of the methodology is demonstrated for the case of a hypothetical space transportation vehicle. A sensitivity analysis is carried out to explore the effects of uncertainty in key model inputs.
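The kind of calculation described can be sketched with an absorbing Markov chain whose fundamental matrix gives expected state visits, and hence expected life and operations-and-support cost; the states, transition probabilities, and per-visit costs below are hypothetical.

```python
import numpy as np

# Absorbing Markov chain sketch for operations-and-support cost estimation.
# States: 0 = flight-ready, 1 = scheduled maintenance, 2 = unscheduled repair,
# 3 = retired (absorbing). The transition probabilities per mission cycle and
# the per-visit costs are hypothetical, not data for any real vehicle.
P = np.array([
    [0.80, 0.12, 0.06, 0.02],
    [0.90, 0.05, 0.04, 0.01],
    [0.70, 0.10, 0.15, 0.05],
    [0.00, 0.00, 0.00, 1.00],
])
cost_per_visit = np.array([1.0, 3.0, 8.0])   # $M per cycle spent in each transient state

Q = P[:3, :3]                      # transient-to-transient block
N = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix: expected visits

start = np.array([1.0, 0.0, 0.0])  # vehicle starts flight-ready
expected_visits = start @ N
expected_cycles = expected_visits.sum()          # expected life in cycles
expected_cost = expected_visits @ cost_per_visit # expected O&S cost

print("Expected visits per state:", np.round(expected_visits, 2))
print(f"Expected life: {expected_cycles:.1f} cycles")
print(f"Expected O&S cost: ${expected_cost:.1f} M")

# Simple sensitivity check in the spirit of the abstract: perturb the
# retirement probability from the repair state and recompute.
P2 = P.copy(); P2[2, 3] = 0.10; P2[2, 0] = 0.65
N2 = np.linalg.inv(np.eye(3) - P2[:3, :3])
print("Expected life with higher retirement rate:",
      round((start @ N2).sum(), 1), "cycles")
```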
Coertjens, Liesje; Donche, Vincent; De Maeyer, Sven; Vanthournout, Gert; Van Petegem, Peter
2017-01-01
Longitudinal data is almost always burdened with missing data. However, in educational and psychological research, there is a large discrepancy between methodological suggestions and research practice. The former suggests applying sensitivity analysis in order to assess the robustness of the results in terms of varying assumptions regarding the mechanism generating the missing data. However, in research practice, participants with missing data are usually discarded by relying on listwise deletion. To help bridge the gap between methodological recommendations and applied research in the educational and psychological domain, this study provides a tutorial example of sensitivity analysis for latent growth analysis. The example data concern students' changes in learning strategies during higher education. One cohort of students in a Belgian university college was asked to complete the Inventory of Learning Styles-Short Version, in three measurement waves. A substantial number of students did not participate on each occasion. Change over time in student learning strategies was assessed using eight missing data techniques, which assume different mechanisms for missingness. The results indicated that, for some learning strategy subscales, growth estimates differed between the models. Guidelines in terms of reporting the results from sensitivity analysis are synthesised and applied to the results from the tutorial example.
Global sensitivity analysis of groundwater transport
NASA Astrophysics Data System (ADS)
Cvetkovic, V.; Soltani, S.; Vigouroux, G.
2015-12-01
In this work we address the model and parametric sensitivity of groundwater transport using the Lagrangian-Stochastic Advection-Reaction (LaSAR) methodology. The 'attenuation index' is used as a relevant and convenient measure of the coupled transport mechanisms. The coefficients of variation (CV) for seven uncertain parameters are assumed to be between 0.25 and 3.5, the highest value being for the lower bound of the mass transfer coefficient k0. In almost all cases, the uncertainties in the macro-dispersion (CV = 0.35) and in the mass transfer rate k0 (CV = 3.5) are most significant. The global sensitivity analysis using Sobol and derivative-based indices yields consistent rankings on the significance of different models and/or parameter ranges. The results presented here are generic; however, the proposed methodology can be easily adapted to specific conditions where uncertainty ranges in models and/or parameters can be estimated from field and/or laboratory measurements.
Analysis and design of optical systems by use of sensitivity analysis of skew ray tracing
NASA Astrophysics Data System (ADS)
Lin, Psang Dain; Lu, Chia-Hung
2004-02-01
Optical systems are conventionally evaluated by ray-tracing techniques that extract performance quantities such as aberration and spot size. Current optical analysis software does not provide satisfactory analytical evaluation functions for the sensitivity of an optical system. Furthermore, when functions oscillate strongly, the results are of low accuracy. Thus this work extends our earlier research on an advanced treatment of reflected or refracted rays, referred to as sensitivity analysis, in which differential changes of reflected or refracted rays are expressed in terms of differential changes of incident rays. The proposed sensitivity analysis methodology for skew ray tracing of reflected or refracted rays that cross spherical or flat boundaries is demonstrated and validated by the application of a cat's eye retroreflector to the design and by the image orientation of a system with noncoplanar optical axes. The proposed sensitivity analysis is projected as the nucleus of other geometrical optical computations.
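The underlying idea (differential changes of refracted rays expressed in terms of differential changes of incident rays) can be illustrated numerically; the sketch below uses a finite-difference Jacobian of vector Snell refraction at a flat boundary rather than the closed-form skew-ray sensitivity derived in the paper.

```python
import numpy as np

# Finite-difference illustration of ray-refraction sensitivity: how small
# changes in the incident ray direction change the refracted ray at a flat
# boundary (vector form of Snell's law). The closed-form skew-ray sensitivity
# of the paper is replaced here by a numerical Jacobian for illustration.
def refract(d, normal, n1=1.0, n2=1.5):
    d = d / np.linalg.norm(d)
    eta = n1 / n2
    cos_i = -normal @ d
    sin_t2 = eta ** 2 * (1.0 - cos_i ** 2)
    return eta * d + (eta * cos_i - np.sqrt(1.0 - sin_t2)) * normal

normal = np.array([0.0, 0.0, 1.0])           # flat boundary z = 0
d0 = np.array([0.3, 0.1, -1.0])
d0 /= np.linalg.norm(d0)

# Jacobian d(t)/d(d): differential change of the refracted ray expressed in
# terms of differential changes of the incident ray direction.
h = 1e-6
J = np.zeros((3, 3))
t0 = refract(d0, normal)
for j in range(3):
    dp = d0.copy(); dp[j] += h
    J[:, j] = (refract(dp, normal) - t0) / h

print("refracted ray:", np.round(t0, 4))
print("sensitivity d(t)/d(d):\n", np.round(J, 3))
```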
ERIC Educational Resources Information Center
Metzger, Isha; Cooper, Shauna M.; Zarrett, Nicole; Flory, Kate
2013-01-01
The current review conducted a systematic assessment of culturally sensitive risk prevention programs for African American adolescents. Prevention programs meeting the inclusion and exclusion criteria were evaluated across several domains: (1) theoretical orientation and foundation; (2) methodological rigor; (3) level of cultural integration; (4)…
Managing Awkward, Sensitive, or Delicate Topics in (Chinese) Radio Medical Consultations
ERIC Educational Resources Information Center
Yu, Guodong; Wu, Yaxin
2015-01-01
This study, using conversation analysis as the research methodology, probes into the use of "nage" (literally "that") as a practice of managing awkward, sensitive, or delicate issues in radio phone-in medical consultations about sex-related problems. Through sequential manipulation and turn manipulation, the caller uses…
NASA Technical Reports Server (NTRS)
Aires, Filipe; Rossow, William B.; Hansen, James E. (Technical Monitor)
2001-01-01
A new approach is presented for the analysis of feedback processes in a nonlinear dynamical system by observing its variations. The new methodology consists of statistical estimates of the sensitivities between all pairs of variables in the system based on a neural network modeling of the dynamical system. The model can then be used to estimate the instantaneous, multivariate and nonlinear sensitivities, which are shown to be essential for the analysis of the feedback processes involved in the dynamical system. The method is described and tested on synthetic data from the low-order Lorenz circulation model where the correct sensitivities can be evaluated analytically.
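A minimal sketch of the approach follows: an MLP is fitted to the one-step map of the Lorenz system (the paper's test case) and differentiated numerically to estimate pairwise instantaneous sensitivities; the network size, time step, and finite-difference step are arbitrary choices.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Fit a network to the one-step map of a dynamical system (Lorenz here, as in
# the original test), then differentiate the fitted network to estimate
# instantaneous multivariate sensitivities between all pairs of variables.
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
def lorenz(u):
    x, y, z = u
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

dt, n = 0.01, 20000
traj = np.empty((n, 3)); traj[0] = [1.0, 1.0, 1.0]
for i in range(n - 1):                      # simple Euler integration
    traj[i + 1] = traj[i] + dt * lorenz(traj[i])

X, Y = traj[:-1], traj[1:]
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
net.fit(X, Y)

def jacobian(net, x0, h=1e-3):
    # Finite-difference Jacobian of the fitted one-step map at state x0:
    # entry (i, j) ~ sensitivity of variable i to variable j over one step.
    J = np.zeros((3, 3))
    base = net.predict(x0[None, :])[0]
    for j in range(3):
        xp = x0.copy(); xp[j] += h
        J[:, j] = (net.predict(xp[None, :])[0] - base) / h
    return J

x0 = traj[n // 2]
print("NN-estimated one-step sensitivities at a sample state:\n",
      np.round(jacobian(net, x0), 3))
print("Linearized true map (I + dt*J_f) for comparison:\n",
      np.round(np.eye(3) + dt * np.array([[-sigma, sigma, 0],
                                          [rho - x0[2], -1, -x0[0]],
                                          [x0[1], x0[0], -beta]]), 3))
```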
Efficient Analysis of Complex Structures
NASA Technical Reports Server (NTRS)
Kapania, Rakesh K.
2000-01-01
The various accomplishments achieved during this project are: (1) A survey of Neural Network (NN) applications using the MATLAB NN Toolbox in structural engineering, especially on equivalent continuum models (Appendix A). (2) Application of NN and GAs to simulate and synthesize substructures: 1-D and 2-D beam problems (Appendix B). (3) Development of an equivalent plate-model analysis method (EPA) for static and vibration analysis of general trapezoidal built-up wing structures composed of skins, spars and ribs; calculation of numerous test cases and comparison with measurements or FEA results (Appendix C). (4) Basic work on using second-order sensitivities in simulating wing modal response, discussion of sensitivity evaluation approaches, and some results (Appendix D). (5) Establishing a general methodology of simulating the modal responses by direct application of NN and by sensitivity techniques, in a design space composed of a number of design points; comparison is made through examples using these two methods (Appendix E). (6) Establishing a general methodology of efficient analysis of complex wing structures by indirect application of NN: the NN-aided Equivalent Plate Analysis; training of the Neural Networks for this purpose in several cases of design spaces, which can be applicable to the actual design of complex wings (Appendix F).
Least Squares Shadowing Sensitivity Analysis of Chaotic Flow Around a Two-Dimensional Airfoil
NASA Technical Reports Server (NTRS)
Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris
2016-01-01
Gradient-based sensitivity analysis has proven to be an enabling technology for many applications, including design of aerospace vehicles. However, conventional sensitivity analysis methods break down when applied to long-time averages of chaotic systems. This breakdown is a serious limitation because many aerospace applications involve physical phenomena that exhibit chaotic dynamics, most notably high-resolution large-eddy and direct numerical simulations of turbulent aerodynamic flows. A recently proposed methodology, Least Squares Shadowing (LSS), avoids this breakdown and advances the state of the art in sensitivity analysis for chaotic flows. The first application of LSS to a chaotic flow simulated with a large-scale computational fluid dynamics solver is presented. The LSS sensitivity computed for this chaotic flow is verified and shown to be accurate, but the computational cost of the current LSS implementation is high.
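The breakdown that motivates LSS can be reproduced on a minimal chaotic system; the sketch below shows finite-difference sensitivities of a long-time average of the logistic map failing to converge as the perturbation shrinks. The logistic map is only a stand-in for the chaotic airfoil flow, and LSS itself is not implemented here.

```python
import numpy as np

# Illustration of why conventional sensitivity analysis breaks down for
# long-time averages of chaotic systems (the problem LSS is designed to fix).
# Finite-difference estimates of d<x>/dr fail to converge as the perturbation
# shrinks, because the long-time average is not smooth in r.
def long_time_average(r, n_spinup=1000, n_avg=100000, x0=0.31):
    x = x0
    for _ in range(n_spinup):
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n_avg):
        x = r * x * (1.0 - x)
        total += x
    return total / n_avg

r0 = 3.8
for eps in (1e-1, 1e-2, 1e-3, 1e-4, 1e-5):
    grad = (long_time_average(r0 + eps) - long_time_average(r0 - eps)) / (2 * eps)
    print(f"eps = {eps:.0e}: finite-difference d<x>/dr = {grad:8.3f}")
# The estimates oscillate and grow instead of converging; shadowing-based
# methods such as LSS recover a meaningful sensitivity of the average.
```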
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.
1996-01-01
An incremental iterative formulation together with the well-known spatially split approximate-factorization algorithm, is presented for solving the large, sparse systems of linear equations that are associated with aerodynamic sensitivity analysis. This formulation is also known as the 'delta' or 'correction' form. For the smaller two dimensional problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. However, iterative methods are needed for larger two-dimensional and three dimensional applications because direct methods require more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioned coefficient matrix; this problem is overcome when these equations are cast in the incremental form. The methodology is successfully implemented and tested using an upwind cell-centered finite-volume formulation applied in two dimensions to the thin-layer Navier-Stokes equations for external flow over an airfoil. In three dimensions this methodology is demonstrated with a marching-solution algorithm for the Euler equations to calculate supersonic flow over the High-Speed Civil Transport configuration (HSCT 24E). The sensitivity derivatives obtained with the incremental iterative method from a marching Euler code are used in a design-improvement study of the HSCT configuration that involves thickness, camber, and planform design variables.
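A toy linear-algebra sketch of the contrast between the standard and the incremental ("delta" or "correction") forms follows, with an arbitrary matrix and its tridiagonal part standing in for the exact sensitivity operator and its approximate factorization.

```python
import numpy as np

rng = np.random.default_rng(4)

# A is the exact (sensitivity) operator, M a cheaper approximation of A
# (here: its tridiagonal part, standing in for an approximate factorization).
# The matrices are arbitrary; this is not a Navier-Stokes Jacobian.
n = 50
A = np.diag(4.0 + rng.random(n)) + 0.5 * np.diag(rng.random(n - 1), 1) \
    + 0.5 * np.diag(rng.random(n - 1), -1) + 0.05 * rng.random((n, n))
b = rng.random(n)
x_exact = np.linalg.solve(A, b)

M = np.triu(np.tril(A, 1), -1)     # tridiagonal approximation of A

# Standard form with the approximate operator: x = M^{-1} b, which inherits
# whatever error M carries relative to A.
x_standard = np.linalg.solve(M, b)

# Incremental form: M dx = b - A x, x <- x + dx. Only M is inverted, but the
# residual is evaluated with the exact A, so the iteration converges to A^{-1} b.
x = np.zeros(n)
for it in range(100):
    dx = np.linalg.solve(M, b - A @ x)
    x += dx
    if np.linalg.norm(dx) < 1e-12:
        break

print("error, standard form with approximate operator:",
      np.linalg.norm(x_standard - x_exact))
print(f"error, incremental form after {it + 1} iterations:",
      np.linalg.norm(x - x_exact))
```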
’Coxiella Burnetii’ Vaccine Development: Lipopolysaccharide Structural Analysis
1991-02-20
Analytical instrumentation and methodology are presented for the determination of endotoxin-related structures at much improved sensitivity and specificity. Reports, and their applications, are listed in ... (Table-of-contents fragments from the report: Endotoxin Characterization by SFC; III. Coxiella Burnetii LPS Characterization, A. Experimental.)
Sensitivity analysis of Repast computational ecology models with R/Repast.
Prestes García, Antonio; Rodríguez-Patón, Alfonso
2016-12-01
Computational ecology is an emerging interdisciplinary discipline founded mainly on modeling and simulation methods for studying ecological systems. Among the existing modeling formalisms, individual-based modeling is particularly well suited for capturing the complex temporal and spatial dynamics as well as the nonlinearities arising in ecosystems, communities, or populations due to individual variability. In addition, being a bottom-up approach, it is useful for providing new insights on the local mechanisms which generate some observed global dynamics. Of course, no conclusions about model results can be taken seriously if they are based on a single model execution and are not analyzed carefully. Therefore, a sound methodology should always be used to underpin the interpretation of model results. Sensitivity analysis is a methodology for quantitatively assessing the effect of input uncertainty on the simulation output, and it should be incorporated into every work based on an in-silico experimental setup. In this article, we present R/Repast, a GNU R package for running and analyzing Repast Simphony models, accompanied by two worked examples on how to perform global sensitivity analysis and how to interpret the results.
A design methodology for nonlinear systems containing parameter uncertainty
NASA Technical Reports Server (NTRS)
Young, G. E.; Auslander, D. M.
1983-01-01
In the present design methodology for nonlinear systems containing parameter uncertainty, a generalized sensitivity analysis is incorporated which employs parameter space sampling and statistical inference. For the case of a system with j adjustable and k nonadjustable parameters, this methodology (which includes an adaptive random search strategy) is used to determine the combination of j adjustable parameter values which maximize the probability of those performance indices which simultaneously satisfy design criteria in spite of the uncertainty due to k nonadjustable parameters.
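A minimal sketch of that loop follows, assuming a hypothetical two-criterion performance model, assumed distributions for the nonadjustable parameters, and a simple shrinking-radius random search.

```python
import numpy as np

rng = np.random.default_rng(5)

# Sketch of the design methodology: j adjustable parameters are tuned to
# maximize the probability that performance criteria are met despite
# uncertainty in k nonadjustable parameters. The performance model, criteria
# and distributions below are illustrative assumptions only.
def performance(adj, nonadj):
    a1, a2 = adj
    k1, k2 = nonadj
    overshoot = np.abs(a1 * k1 - 1.0) + 0.2 * a2 * k2
    settling = 1.0 / (a1 * k1 + 0.1) + 0.05 * a2
    return overshoot, settling

def success_probability(adj, n=2000):
    k1 = rng.normal(1.0, 0.15, n)        # nonadjustable parameter 1
    k2 = rng.uniform(0.5, 1.5, n)        # nonadjustable parameter 2
    ov, st = performance(adj, (k1, k2))
    return np.mean((ov < 0.5) & (st < 1.2))  # both design criteria satisfied

# Adaptive random search: shrink the search radius whenever a candidate
# improves the estimated success probability.
best = np.array([1.0, 1.0])
best_p = success_probability(best)
radius = 0.5
for it in range(200):
    cand = np.clip(best + radius * rng.normal(size=2), 0.1, 3.0)
    p = success_probability(cand)
    if p > best_p:
        best, best_p = cand, p
        radius = max(0.05, 0.9 * radius)

print("best adjustable parameters:", np.round(best, 3))
print("estimated probability of meeting all criteria:", round(best_p, 3))
```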
Schueler, Sabine; Walther, Stefan; Schuetz, Georg M; Schlattmann, Peter; Dewey, Marc
2013-06-01
To evaluate the methodological quality of diagnostic accuracy studies on coronary computed tomography (CT) angiography using the QUADAS (Quality Assessment of Diagnostic Accuracy Studies included in systematic reviews) tool. Each QUADAS item was individually defined to adapt it to the special requirements of studies on coronary CT angiography. Two independent investigators analysed 118 studies using 12 QUADAS items. Meta-regression and pooled analyses were performed to identify possible effects of methodological quality items on estimates of diagnostic accuracy. The overall methodological quality of coronary CT studies was merely moderate. They fulfilled a median of 7.5 out of 12 items. Only 9 of the 118 studies fulfilled more than 75 % of possible QUADAS items. One QUADAS item ("Uninterpretable Results") showed a significant influence (P = 0.02) on estimates of diagnostic accuracy with "no fulfilment" increasing specificity from 86 to 90 %. Furthermore, pooled analysis revealed that each QUADAS item that is not fulfilled has the potential to change estimates of diagnostic accuracy. The methodological quality of studies investigating the diagnostic accuracy of non-invasive coronary CT is only moderate and was found to affect the sensitivity and specificity. An improvement is highly desirable because good methodology is crucial for adequately assessing imaging technologies. • Good methodological quality is a basic requirement in diagnostic accuracy studies. • Most coronary CT angiography studies have only been of moderate design quality. • Weak methodological quality will affect the sensitivity and specificity. • No improvement in methodological quality was observed over time. • Authors should consider the QUADAS checklist when undertaking accuracy studies.
Verma, Nishant; Beretvas, S Natasha; Pascual, Belen; Masdeu, Joseph C; Markey, Mia K
2015-11-12
As currently used, the Alzheimer's Disease Assessment Scale-Cognitive subscale (ADAS-Cog) has low sensitivity for measuring Alzheimer's disease progression in clinical trials. A major reason behind the low sensitivity is its sub-optimal scoring methodology, which can be improved to obtain better sensitivity. Using item response theory, we developed a new scoring methodology (ADAS-CogIRT) for the ADAS-Cog, which addresses several major limitations of the current scoring methodology. The sensitivity of the ADAS-CogIRT methodology was evaluated using clinical trial simulations as well as a negative clinical trial, which had shown an evidence of a treatment effect. The ADAS-Cog was found to measure impairment in three cognitive domains of memory, language, and praxis. The ADAS-CogIRT methodology required significantly fewer patients and shorter trial durations as compared to the current scoring methodology when both were evaluated in simulated clinical trials. When validated on data from a real clinical trial, the ADAS-CogIRT methodology had higher sensitivity than the current scoring methodology in detecting the treatment effect. The proposed scoring methodology significantly improves the sensitivity of the ADAS-Cog in measuring progression of cognitive impairment in clinical trials focused in the mild-to-moderate Alzheimer's disease stage. This provides a boost to the efficiency of clinical trials requiring fewer patients and shorter durations for investigating disease-modifying treatments.
NASA Technical Reports Server (NTRS)
Dec, John A.; Braun, Robert D.
2011-01-01
A finite element ablation and thermal response program is presented for simulation of three-dimensional transient thermostructural analysis. The three-dimensional governing differential equations and finite element formulation are summarized. A novel probabilistic design methodology for thermal protection systems is presented. The design methodology is an eight-step process beginning with a parameter sensitivity study and is followed by a deterministic analysis whereby an optimum design can be determined. The design process concludes with a Monte Carlo simulation where the probabilities of exceeding design specifications are estimated. The design methodology is demonstrated by applying the methodology to the carbon phenolic compression pads of the Crew Exploration Vehicle. The maximum allowed values of bondline temperature and tensile stress are used as the design specifications in this study.
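The concluding Monte Carlo step can be sketched as below, with hypothetical surrogate response functions and input distributions standing in for the finite element ablation and thermal response model, and assumed limits on bondline temperature and tensile stress.

```python
import numpy as np

rng = np.random.default_rng(6)

# Monte Carlo step of the probabilistic design methodology: estimate the
# probability of exceeding the design specifications. The simple surrogate
# response functions and the input distributions are hypothetical stand-ins
# for the finite-element ablation and thermal-response model.
n = 200_000
thickness = rng.normal(0.030, 0.001, n)            # m, compression-pad thickness
conductivity = rng.normal(0.9, 0.05, n)            # W/m-K
heat_load = rng.lognormal(np.log(250.0), 0.1, n)   # MJ/m^2

bondline_temp = 200.0 + 2.0 * heat_load * conductivity / (thickness / 0.03)
tensile_stress = 40.0 + 0.15 * heat_load / (thickness / 0.03)

spec_temp, spec_stress = 750.0, 85.0               # assumed design specifications
p_temp = np.mean(bondline_temp > spec_temp)
p_stress = np.mean(tensile_stress > spec_stress)
p_any = np.mean((bondline_temp > spec_temp) | (tensile_stress > spec_stress))

print(f"P(bondline temperature > {spec_temp} K) = {p_temp:.4f}")
print(f"P(tensile stress > {spec_stress} MPa)   = {p_stress:.4f}")
print(f"P(exceeding any design specification)   = {p_any:.4f}")
```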
Development and application of optimum sensitivity analysis of structures
NASA Technical Reports Server (NTRS)
Barthelemy, J. F. M.; Hallauer, W. L., Jr.
1984-01-01
The research focused on developing an algorithm applying optimum sensitivity analysis for multilevel optimization. The research efforts have been devoted to assisting NASA Langley's Interdisciplinary Research Office (IRO) in the development of a mature methodology for a multilevel approach to the design of complex (large and multidisciplinary) engineering systems. An effort was undertaken to identify promising multilevel optimization algorithms. In the current reporting period, the computer program generating baseline single level solutions was completed and tested out.
NASA Astrophysics Data System (ADS)
Núñez, M.; Robie, T.; Vlachos, D. G.
2017-10-01
Kinetic Monte Carlo (KMC) simulation provides insights into catalytic reactions unobtainable with either experiments or mean-field microkinetic models. Sensitivity analysis of KMC models assesses the robustness of the predictions to parametric perturbations and identifies rate determining steps in a chemical reaction network. Stiffness in the chemical reaction network, a ubiquitous feature, demands lengthy run times for KMC models and renders efficient sensitivity analysis based on the likelihood ratio method unusable. We address the challenge of efficiently conducting KMC simulations and performing accurate sensitivity analysis in systems with unknown time scales by employing two acceleration techniques: rate constant rescaling and parallel processing. We develop statistical criteria that ensure sufficient sampling of non-equilibrium steady state conditions. Our approach provides the twofold benefit of accelerating the simulation itself and enabling likelihood ratio sensitivity analysis, which provides further speedup relative to finite difference sensitivity analysis. As a result, the likelihood ratio method can be applied to real chemistry. We apply our methodology to the water-gas shift reaction on Pt(111).
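A hedged sketch of a likelihood-ratio sensitivity estimate for a kinetic Monte Carlo model follows, using a reversible isomerization A <-> B instead of the water-gas shift mechanism; the score accumulated along each Gillespie trajectory is the derivative of the trajectory log-likelihood with respect to the rate constant of interest.

```python
import numpy as np

rng = np.random.default_rng(7)

# Likelihood-ratio (LR) sensitivity sketch for a kinetic Monte Carlo model.
# System: reversible isomerization A <-> B with mass-action rate constants
# k1 (A -> B) and k2 (B -> A); observable is the number of B molecules at
# time T. The network and numbers are illustrative stand-ins, not the
# water-gas shift mechanism studied in the paper.
k1, k2 = 1.0, 0.5
n_A0, T = 50, 2.0

def ssa_with_score(k1, k2):
    # Gillespie SSA; accumulate the LR score d log p / d k1 along the way:
    # score = (# of k1 events)/k1 - integral of (propensity_1 / k1) dt.
    nA, nB, t, score = n_A0, 0, 0.0, 0.0
    while True:
        a1, a2 = k1 * nA, k2 * nB
        a0 = a1 + a2
        tau = rng.exponential(1.0 / a0)
        if t + tau > T:
            score -= (a1 / k1) * (T - t)
            return nB, score
        score -= (a1 / k1) * tau
        t += tau
        if rng.random() < a1 / a0:
            nA, nB, score = nA - 1, nB + 1, score + 1.0 / k1
        else:
            nA, nB = nA + 1, nB - 1

samples = np.array([ssa_with_score(k1, k2) for _ in range(10000)])
nB, W = samples[:, 0], samples[:, 1]
sens_lr = np.mean(nB * W)              # LR estimate of d E[n_B(T)] / d k1

# Deterministic check for this linear network:
# E[n_B(T)] = n_A0 * k1/(k1+k2) * (1 - exp(-(k1+k2)*T)).
s = k1 + k2
analytic = n_A0 * (k2 / s**2 * (1 - np.exp(-s * T)) + k1 * T / s * np.exp(-s * T))

print(f"LR sensitivity estimate: {sens_lr:.2f}  (analytic: {analytic:.2f})")
```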
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1988-12-01
This document contains twelve papers on various aspects of low-level radioactive waste management. Topics of this volume include: performance assessment methodology; remedial action alternatives; site selection and site characterization procedures; intruder scenarios; sensitivity analysis procedures; mathematical models for mixed waste environmental transport; and risk assessment methodology. Individual papers were processed separately for the database. (TEM)
Banks, Caitlin L.; Pai, Mihir M.; McGuirk, Theresa E.; Fregly, Benjamin J.; Patten, Carolynn
2017-01-01
Muscle synergy analysis (MSA) is a mathematical technique that reduces the dimensionality of electromyographic (EMG) data. Used increasingly in biomechanics research, MSA requires methodological choices at each stage of the analysis. Differences in methodological steps affect the overall outcome, making it difficult to compare results across studies. We applied MSA to EMG data collected from individuals post-stroke identified as either responders (RES) or non-responders (nRES) on the basis of a critical post-treatment increase in walking speed. Importantly, no clinical or functional indicators identified differences between the cohort of RES and nRES at baseline. For this exploratory study, we selected the five highest RES and five lowest nRES available from a larger sample. Our goal was to assess how the methodological choices made before, during, and after MSA affect the ability to differentiate two groups with intrinsic physiologic differences based on MSA results. We investigated 30 variations in MSA methodology to determine which choices allowed differentiation of RES from nRES at baseline. Trial-to-trial variability in time-independent synergy vectors (SVs) and time-varying neural commands (NCs) were measured as a function of: (1) number of synergies computed; (2) EMG normalization method before MSA; (3) whether SVs were held constant across trials or allowed to vary during MSA; and (4) synergy analysis output normalization method after MSA. MSA methodology had a strong effect on our ability to differentiate RES from nRES at baseline. Across all 10 individuals and MSA variations, two synergies were needed to reach an average of 90% variance accounted for (VAF). Based on effect sizes, differences in SV and NC variability between groups were greatest using two synergies with SVs that varied from trial-to-trial. Differences in SV variability were clearest using unit magnitude per trial EMG normalization, while NC variability was less sensitive to EMG normalization method. No outcomes were greatly impacted by output normalization method. MSA variability for some, but not all, methods successfully differentiated intrinsic physiological differences inaccessible to traditional clinical or biomechanical assessments. Our results were sensitive to methodological choices, highlighting the need for disclosure of all aspects of MSA methodology in future studies. PMID:28912707
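The core MSA computation can be sketched with non-negative matrix factorization and a VAF criterion; the synthetic EMG envelopes, the normalization choice, and the synergy counts below are illustrative assumptions, not the study's post-stroke data.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(8)

# Factor an EMG envelope matrix (muscles x time) into synergy vectors (SVs)
# and neural commands (NCs) with non-negative matrix factorization, and pick
# the number of synergies from the variance accounted for (VAF).
n_muscles, n_time = 8, 500
t = np.linspace(0, 4 * np.pi, n_time)
true_svs = np.abs(rng.normal(size=(n_muscles, 2)))
true_ncs = np.vstack([np.abs(np.sin(t)), np.abs(np.cos(0.5 * t))]) + 0.1
emg = true_svs @ true_ncs + 0.05 * np.abs(rng.normal(size=(n_muscles, n_time)))

# Unit-magnitude-per-trial normalization (one of the pre-MSA choices the
# study varies): scale each muscle's envelope to unit maximum.
emg_norm = emg / emg.max(axis=1, keepdims=True)

for n_syn in (1, 2, 3):
    model = NMF(n_components=n_syn, init="nndsvd", max_iter=1000)
    svs = model.fit_transform(emg_norm)      # muscles x synergies
    ncs = model.components_                  # synergies x time
    recon = svs @ ncs
    vaf = 1.0 - np.sum((emg_norm - recon) ** 2) / np.sum(emg_norm ** 2)
    print(f"{n_syn} synergies: VAF = {vaf:.3f}")
# As in the study, the smallest number of synergies reaching ~90 % VAF
# (here typically 2) would be retained for further analysis.
```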
Sensitivity of VIIRS Polarization Measurements
NASA Technical Reports Server (NTRS)
Waluschka, Eugene
2010-01-01
The design of an optical system typically involves a sensitivity analysis where the various lens parameters, such as lens spacing and curvatures, to name two, are (slightly) varied to see what, if any, effect this has on the performance and to establish manufacturing tolerances. A similar analysis was performed for the VIIRS instrument's polarization measurements to see how real-world departures from perfectly linearly polarized light entering VIIRS affect the polarization measurement. The methodology and a few of the results of this polarization sensitivity analysis are presented and applied to the construction of a single polarizer which will cover the VIIRS VIS/NIR spectral range. Keywords: VIIRS, polarization, ray trace, polarizers, Bolder Vision, MOXTEK
Mathieu, Amélie; Vidal, Tiphaine; Jullien, Alexandra; Wu, QiongLi; Chambon, Camille; Bayol, Benoit; Cournède, Paul-Henry
2018-06-19
Functional-structural plant models (FSPMs) describe explicitly the interactions between plants and their environment at organ to plant scale. However, the high level of description of the structure or model mechanisms makes this type of model very complex and hard to calibrate. A two-step methodology to facilitate the calibration process is proposed here. First, a global sensitivity analysis method was applied to the calibration loss function. It provided first-order and total-order sensitivity indexes that allow parameters to be ranked by importance in order to select the most influential ones. Second, the Akaike information criterion (AIC) was used to quantify the model's quality of fit after calibration with different combinations of selected parameters. The model with the lowest AIC gives the best combination of parameters to select. This methodology was validated by calibrating the model on an independent data set (same cultivar, another year) with the parameters selected in the second step. All the parameters were set to their nominal value; only the most influential ones were re-estimated. Sensitivity analysis applied to the calibration loss function is a relevant method to underline the most significant parameters in the estimation process. For the studied winter oilseed rape model, 11 out of 26 estimated parameters were selected. Then, the model could be recalibrated for a different data set by re-estimating only three parameters selected with the model selection method. Fitting only a small number of parameters dramatically increases the efficiency of recalibration, increases the robustness of the model and helps identify the principal sources of variation in varying environmental conditions. This innovative method still needs to be more widely validated but already gives interesting avenues to improve the calibration of FSPMs.
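To make the two-step idea concrete, here is a minimal Python sketch, assuming a generic least-squares calibration loss: parameters are first ranked by total-order Sobol indices (computed with the SALib library), and candidate subsets of the top-ranked parameters are then refitted and compared by AIC. The toy growth model, parameter names, bounds, and nominal values are hypothetical placeholders, not the winter oilseed rape FSPM described above.

```python
# Step 1: rank parameters of a calibration loss by total-order Sobol index.
# Step 2: refit selected parameter subsets and compare them with AIC.
import numpy as np
from scipy.optimize import minimize
from SALib.sample import saltelli
from SALib.analyze import sobol

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
true = dict(a=2.0, b=1.2, c=0.3)
obs = true["a"] * t ** true["b"] + true["c"] + rng.normal(0, 0.05, t.size)

def model(a, b, c):
    return a * t ** b + c

def loss(theta):                       # calibration loss (sum of squares)
    return np.sum((model(*theta) - obs) ** 2)

# -- Step 1: global sensitivity of the loss with respect to the parameters --
problem = {"num_vars": 3, "names": ["a", "b", "c"],
           "bounds": [[0.5, 4.0], [0.5, 2.0], [0.0, 1.0]]}
X = saltelli.sample(problem, 512)
Y = np.array([loss(x) for x in X])
Si = sobol.analyze(problem, Y)
ranking = sorted(zip(problem["names"], Si["ST"]), key=lambda p: -p[1])
print("total-order ranking:", ranking)

# -- Step 2: AIC comparison of candidate parameter subsets ------------------
nominal = np.array([1.5, 1.0, 0.5])    # values kept fixed for unselected parameters
def aic_for(subset):
    idx = [problem["names"].index(p) for p in subset]
    def sub_loss(free):
        theta = nominal.copy()
        theta[idx] = free
        return loss(theta)
    res = minimize(sub_loss, nominal[idx], method="Nelder-Mead")
    n, k = obs.size, len(subset)
    return n * np.log(res.fun / n) + 2 * k

for subset in (["a"], ["a", "b"], ["a", "b", "c"]):
    print(subset, "AIC =", round(aic_for(subset), 1))
```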
Gradient-Based Aerodynamic Shape Optimization Using ADI Method for Large-Scale Problems
NASA Technical Reports Server (NTRS)
Pandya, Mohagna J.; Baysal, Oktay
1997-01-01
A gradient-based shape optimization methodology, intended for practical three-dimensional aerodynamic applications, has been developed. It is based on quasi-analytical sensitivities. The flow analysis is rendered by a fully implicit, finite volume formulation of the Euler equations. The aerodynamic sensitivity equation is solved using the alternating-direction-implicit (ADI) algorithm for memory efficiency. A flexible wing geometry model, based on surface parameterization and planform schedules, is utilized. The present methodology and its components have been tested via several comparisons. Initially, the flow analysis for a wing is compared with those obtained using an unfactored, preconditioned conjugate gradient approach (PCG) and an extensively validated CFD code. Then, the sensitivities computed with the present method have been compared with those obtained using the finite-difference and the PCG approaches. Effects of grid refinement and convergence tolerance on the analysis and shape optimization have been explored. Finally, the new procedure has been demonstrated in the design of a cranked arrow wing at Mach 2.4. Despite the expected increase in computational time, the results indicate that shape optimization problems, which require large numbers of grid points, can be resolved with a gradient-based approach.
Commercial test kits for detection of Lyme borreliosis: a meta-analysis of test accuracy
Cook, Michael J; Puri, Basant K
2016-01-01
The clinical diagnosis of Lyme borreliosis can be supported by various test methodologies; test kits are available from many manufacturers. Literature searches were carried out to identify studies that reported characteristics of the test kits. Of 50 searched studies, 18 were included where the tests were commercially available and samples were proven to be positive using serology testing, evidence of an erythema migrans rash, and/or culture. Additional requirements were a test specificity of ≥85% and publication in the last 20 years. The weighted mean sensitivity for all tests and for all samples was 59.5%. Individual study means varied from 30.6% to 86.2%. Sensitivity for each test technology varied from 62.4% for Western blot kits, and 62.3% for enzyme-linked immunosorbent assay tests, to 53.9% for synthetic C6 peptide ELISA tests and 53.7% when the two-tier methodology was used. Test sensitivity increased as dissemination of the pathogen affected different organs; however, the absence of data on the time from infection to serological testing and the lack of standard definitions for “early” and “late” disease prevented analysis of test sensitivity versus time of infection. The lack of standardization of the definitions of disease stage and the possibility of retrospective selection bias prevented clear evaluation of test sensitivity by “stage”. The sensitivity for samples classified as acute disease was 35.4%, with a corresponding sensitivity of 64.5% for samples from patients defined as convalescent. Regression analysis demonstrated an improvement of 4% in test sensitivity over the 20-year study period. The studies did not provide data to indicate the sensitivity of tests used in a clinical setting since the effect of recent use of antibiotics or steroids or other factors affecting antibody response was not factored in. The tests were developed for only specific Borrelia species; sensitivities for other species could not be calculated. PMID:27920571
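As a small illustration of the pooling behind figures such as the 59.5% weighted mean, the sketch below computes a sample-size-weighted mean sensitivity from per-study true-positive counts. The study names and counts are invented, not taken from the meta-analysis.

```python
# Sample-size-weighted mean sensitivity across hypothetical studies.
import numpy as np

# (true positives detected, positive samples tested) per study
studies = {"study A": (53, 80), "study B": (31, 62), "study C": (118, 170)}

tp = np.array([v[0] for v in studies.values()], dtype=float)
n = np.array([v[1] for v in studies.values()], dtype=float)

per_study = tp / n                      # individual study sensitivities
weighted = tp.sum() / n.sum()           # pooled, sample-size-weighted mean

print({k: round(s, 3) for k, s in zip(studies, per_study)})
print("weighted mean sensitivity:", round(weighted, 3))
```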
Computational methods for efficient structural reliability and reliability sensitivity analysis
NASA Technical Reports Server (NTRS)
Wu, Y.-T.
1993-01-01
This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
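The sketch below illustrates the underlying mechanism on a one-dimensional toy limit state: plain, non-adaptive importance sampling estimates a small failure probability together with a score-function sensitivity with respect to the input mean. It is not the paper's AIS algorithm, only the basic idea it refines.

```python
# Importance-sampling estimate of P[g(X) < 0] and of dP/dmu for X ~ N(mu, sigma).
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 0.0, 1.0

def g(x):                        # limit state: failure when g < 0
    return 3.0 - x               # exact P_f = P[X > 3] ≈ 1.35e-3 for N(0, 1)

mu_is = 3.0                      # importance density shifted toward the failure region
x = rng.normal(mu_is, sigma, 200_000)
w = np.exp(-0.5 * ((x - mu) / sigma) ** 2 + 0.5 * ((x - mu_is) / sigma) ** 2)
fail = (g(x) < 0).astype(float)

p_f = np.mean(fail * w)
score = (x - mu) / sigma ** 2    # d log f(x; mu) / d mu for the original density
dpf_dmu = np.mean(fail * w * score)

print(f"P_f      ≈ {p_f:.3e} (exact 1.350e-3)")
print(f"dP_f/dmu ≈ {dpf_dmu:.3e}")
```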
Moss, Robert; Grosse, Thibault; Marchant, Ivanny; Lassau, Nathalie; Gueyffier, François; Thomas, S. Randall
2012-01-01
Mathematical models that integrate multi-scale physiological data can offer insight into physiological and pathophysiological function, and may eventually assist in individualized predictive medicine. We present a methodology for performing systematic analyses of multi-parameter interactions in such complex, multi-scale models. Human physiology models are often based on or inspired by Arthur Guyton's whole-body circulatory regulation model. Despite the significance of this model, it has not been the subject of a systematic and comprehensive sensitivity study. Therefore, we use this model as a case study for our methodology. Our analysis of the Guyton model reveals how the multitude of model parameters combine to affect the model dynamics, and how interesting combinations of parameters may be identified. It also includes a “virtual population” from which “virtual individuals” can be chosen, on the basis of exhibiting conditions similar to those of a real-world patient. This lays the groundwork for using the Guyton model for in silico exploration of pathophysiological states and treatment strategies. The results presented here illustrate several potential uses for the entire dataset of sensitivity results and the “virtual individuals” that we have generated, which are included in the supplementary material. More generally, the presented methodology is applicable to modern, more complex multi-scale physiological models. PMID:22761561
Automatic Target Recognition Classification System Evaluation Methodology
2002-09-01
Testing Set of Two-Class XOR Data (250 Samples); Decision Analysis Process Flow Chart... ROC curve meta-analysis, which is the estimation of the true ROC curve of a given diagnostic system through ROC analysis across many studies or... technique can be very effective in sensitivity analysis; trying to determine which data points have the most effect on the solution, and in
Public Involvement Processes and Methodologies: An Analysis
Ernst Valfer; Stephen Laner; Daina Dravnieks
1977-01-01
This report explores some sensitive or critical areas in public involvement. A 1972 RF&D workshop on public involvement identified a series of issues requiring research and analysis. A subsequent PNW study, "Public Involvement and the Forest Service" (Hendee 1973), addressed many of these issues. This study assignment by the Chief's Office was...
Lunar Exploration Architecture Level Key Drivers and Sensitivities
NASA Technical Reports Server (NTRS)
Goodliff, Kandyce; Cirillo, William; Earle, Kevin; Reeves, J. D.; Shyface, Hilary; Andraschko, Mark; Merrill, R. Gabe; Stromgren, Chel; Cirillo, Christopher
2009-01-01
Strategic level analysis of the integrated behavior of lunar transportation and lunar surface systems architecture options is performed to assess the benefit, viability, affordability, and robustness of system design choices. This analysis employs both deterministic and probabilistic modeling techniques so that the extent of potential future uncertainties associated with each option is properly characterized. The results of these analyses are summarized in a predefined set of high-level Figures of Merit (FOMs) so as to provide senior NASA Constellation Program (CxP) and Exploration Systems Mission Directorate (ESMD) management with pertinent information to better inform strategic level decision making. The strategic level exploration architecture model is designed to perform analysis at as high a level as possible but still capture those details that have major impacts on system performance. The strategic analysis methodology focuses on integrated performance, affordability, and risk analysis, and captures the linkages and feedbacks between these three areas. Each of these results leads into the determination of the high-level FOMs. This strategic level analysis methodology has been previously applied to Space Shuttle and International Space Station assessments and is now being applied to the development of the Constellation Program point-of-departure lunar architecture. This paper provides an overview of the strategic analysis methodology and the lunar exploration architecture analyses to date. In studying these analysis results, the strategic analysis team has identified and characterized key drivers affecting the integrated architecture behavior. These key drivers include the inclusion of a cargo lander, mission rate, mission location, fixed-versus-variable costs/return on investment, and the requirement for probabilistic analysis. Results of sensitivity analysis performed on lunar exploration architecture scenarios are also presented.
Application of the HARDMAN methodology to the single channel ground-airborne radio system (SINCGARS)
NASA Astrophysics Data System (ADS)
Balcom, J.; Park, J.; Toomer, L.; Feng, T.
1984-12-01
The HARDMAN methodology is designed to assess the human resource requirements early in the weapon system acquisition process. In this case, the methodology was applied to the family of radios known as SINCGARS (Single Channel Ground-Airborne Radio System). At the time of the study, SINCGARS was approaching the Full-Scale Development phase, with 2 contractors in competition. Their proposed systems were compared with a composite baseline comparison (reference) system. The systems' manpower, personnel and training requirements were compared. Based on RAM data, the contractors' MPT figures showed a significant reduction from the figures derived for the baseline comparison system. Differences between the two contractors were relatively small. Impact and some tradeoff analyses were hindered by data access problems. Tactical radios, manpower and personnel requirements analysis, impact and tradeoff analysis, human resource sensitivity, training requirements analysis, human resources in LCSMM, and logistics analyses are discussed.
Vergucht, Eva; Brans, Toon; Beunis, Filip; Garrevoet, Jan; Bauters, Stephen; De Rijcke, Maarten; Deruytter, David; Janssen, Colin; Riekel, Christian; Burghammer, Manfred; Vincze, Laszlo
2015-07-01
Recently, a radically new synchrotron radiation-based elemental imaging approach for the analysis of biological model organisms and single cells in their natural in vivo state was introduced. The methodology combines optical tweezers (OT) technology for non-contact laser-based sample manipulation with synchrotron radiation confocal X-ray fluorescence (XRF) microimaging for the first time at ESRF-ID13. The optical manipulation possibilities and limitations of biological model organisms, the OT setup developments for XRF imaging and the confocal XRF-related challenges are reported. In general, the applicability of the OT-based setup is extended with the aim of introducing the OT XRF methodology in all research fields where highly sensitive in vivo multi-elemental analysis is of relevance at the (sub)micrometre spatial resolution level.
Mutel, Christopher L; de Baan, Laura; Hellweg, Stefanie
2013-06-04
Comprehensive sensitivity analysis is a significant tool to interpret and improve life cycle assessment (LCA) models, but is rarely performed. Sensitivity analysis will increase in importance as inventory databases become regionalized, increasing the number of system parameters, and parametrized, adding complexity through variables and nonlinear formulas. We propose and implement a new two-step approach to sensitivity analysis. First, we identify parameters with high global sensitivities for further examination and analysis with a screening step, the method of elementary effects. Second, the more computationally intensive contribution to variance test is used to quantify the relative importance of these parameters. The two-step sensitivity test is illustrated on a regionalized, nonlinear case study of the biodiversity impacts from land use of cocoa production, including a worldwide cocoa products trade model. Our simplified trade model can be used for transformable commodities where one is assessing market shares that vary over time. In the case study, the highly uncertain characterization factors for the Ivory Coast and Ghana contributed more than 50% of variance for almost all countries and years examined. The two-step sensitivity test allows for the interpretation, understanding, and improvement of large, complex, and nonlinear LCA systems.
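A simplified one-at-a-time variant of the screening step might look like the sketch below; the three-input model is a placeholder rather than the cocoa land-use LCA system, and the design is deliberately cruder than a full Morris trajectory scheme.

```python
# Elementary-effects-style screening: mean absolute effect mu* ranks influence,
# the spread sigma flags nonlinearity or interactions.
import numpy as np

rng = np.random.default_rng(2)

def model(x):
    return x[0] ** 2 + 5 * x[1] + 0.1 * x[2] + 4 * x[0] * x[1]

k, r, delta = 3, 50, 0.25              # inputs, repetitions, step size on [0, 1]
effects = np.zeros((r, k))
for t in range(r):
    x = rng.uniform(0, 1 - delta, k)
    for i in range(k):                 # perturb one input at a time
        x_step = x.copy()
        x_step[i] += delta
        effects[t, i] = (model(x_step) - model(x)) / delta

mu_star = np.abs(effects).mean(axis=0)
sigma = effects.std(axis=0)
for name, m, s in zip(["x1", "x2", "x3"], mu_star, sigma):
    print(f"{name}: mu* = {m:.2f}, sigma = {s:.2f}")
```

Inputs with small mu* (here x3) would be screened out before the more expensive contribution-to-variance step.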
Deng, Bo; Shi, Yaoyao; Yu, Tao; Kang, Chao; Zhao, Pan
2018-01-31
The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performances of the winding products. In this article, two different object values of winding products, including mechanical performance (tensile strength) and a physical property (void content), were respectively calculated. Thereafter, the paper presents an integrated methodology by combining multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding processing. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished. Meanwhile, the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. The verification test validated that the optimized intervals of the process parameters were reliable and stable for winding products manufacturing.
NASA Astrophysics Data System (ADS)
Halbe, Johannes; Pahl-Wostl, Claudia; Adamowski, Jan
2018-01-01
Multiple barriers constrain the widespread application of participatory methods in water management, including the more technical focus of most water agencies, additional cost and time requirements for stakeholder involvement, as well as institutional structures that impede collaborative management. This paper presents a stepwise methodological framework that addresses the challenges of context-sensitive initiation, design and institutionalization of participatory modeling processes. The methodological framework consists of five successive stages: (1) problem framing and stakeholder analysis, (2) process design, (3) individual modeling, (4) group model building, and (5) institutionalized participatory modeling. The Management and Transition Framework is used for problem diagnosis (Stage One), context-sensitive process design (Stage Two) and analysis of requirements for the institutionalization of participatory water management (Stage Five). Conceptual modeling is used to initiate participatory modeling processes (Stage Three) and ensure a high compatibility with quantitative modeling approaches (Stage Four). This paper describes the proposed participatory model building (PMB) framework and provides a case study of its application in Québec, Canada. The results of the Québec study demonstrate the applicability of the PMB framework for initiating and designing participatory model building processes and analyzing barriers towards institutionalization.
Sensitivity assessment of sea lice to chemotherapeutants: Current bioassays and best practices.
Marín, S L; Mancilla, J; Hausdorf, M A; Bouchard, D; Tudor, M S; Kane, F
2017-12-18
Traditional bioassays are still necessary to test sensitivity of sea lice species to chemotherapeutants, but the methodology applied by the different scientists has varied over time in respect to that proposed in "Sea lice resistance to chemotherapeutants: A handbook in resistance management" (2006). These divergences motivated the organization of a workshop during the Sea Lice 2016 conference "Standardization of traditional bioassay process by sharing best practices." There was an agreement by the attendants to update the handbook. The objective of this article is to provide a baseline analysis of the methodology for traditional bioassays and to identify procedures that need to be addressed to standardize the protocol. The methodology was divided into the following steps: bioassay design; material and equipment; sea lice collection, transportation and laboratory reception; preparation of dilution; parasite exposure; response evaluation; data analysis; and reporting. Information from the presentations of the workshop, and also from other studies, allowed for the identification of procedures inside a given step that need to be standardized as they were reported to be performed differently by the different working groups. Bioassay design and response evaluation were the targeted steps where more procedures need to be analysed and agreed upon. © 2017 John Wiley & Sons Ltd.
Event-scale power law recession analysis: quantifying methodological uncertainty
NASA Astrophysics Data System (ADS)
Dralle, David N.; Karst, Nathaniel J.; Charalampous, Kyriakos; Veenstra, Andrew; Thompson, Sally E.
2017-01-01
The study of single streamflow recession events is receiving increasing attention following the presentation of novel theoretical explanations for the emergence of power law forms of the recession relationship, and drivers of its variability. Individually characterizing streamflow recessions often involves describing the similarities and differences between model parameters fitted to each recession time series. Significant methodological sensitivity has been identified in the fitting and parameterization of models that describe populations of many recessions, but the dependence of estimated model parameters on methodological choices has not been evaluated for event-by-event forms of analysis. Here, we use daily streamflow data from 16 catchments in northern California and southern Oregon to investigate how combinations of commonly used streamflow recession definitions and fitting techniques impact parameter estimates of a widely used power law recession model. Results are relevant to watersheds that are relatively steep, forested, and rain-dominated. The highly seasonal mediterranean climate of northern California and southern Oregon ensures study catchments explore a wide range of recession behaviors and wetness states, ideal for a sensitivity analysis. In such catchments, we show the following: (i) methodological decisions, including ones that have received little attention in the literature, can impact parameter value estimates and model goodness of fit; (ii) the central tendencies of event-scale recession parameter probability distributions are largely robust to methodological choices, in the sense that differing methods rank catchments similarly according to the medians of these distributions; (iii) recession parameter distributions are method-dependent, but roughly catchment-independent, such that changing the choices made about a particular method affects a given parameter in similar ways across most catchments; and (iv) the observed correlative relationship between the power-law recession scale parameter and catchment antecedent wetness varies depending on recession definition and fitting choices. Considering study results, we recommend a combination of four key methodological decisions to maximize the quality of fitted recession curves, and to minimize bias in the related populations of fitted recession parameters.
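For orientation, the sketch below fits the power-law recession relation -dQ/dt = aQ^b to a single synthetic event by log-log least squares. How dQ/dt is estimated and how events are defined are exactly the methodological choices the study interrogates; this shows only one common variant.

```python
# Fit -dQ/dt = a * Q^b to one synthetic recession event.
import numpy as np

a_true, b_true, q0 = 0.05, 1.5, 20.0
t = np.arange(0.0, 30.0, 1.0)                 # daily time steps
# Exact solution of dQ/dt = -a*Q^b for b != 1
q = (q0 ** (1 - b_true) + (b_true - 1) * a_true * t) ** (1 / (1 - b_true))

dqdt = -np.diff(q)                            # finite-difference estimate of -dQ/dt
q_mid = 0.5 * (q[1:] + q[:-1])                # flow at interval midpoints

slope, intercept = np.polyfit(np.log(q_mid), np.log(dqdt), 1)
print(f"fitted b = {slope:.2f} (true {b_true}), a = {np.exp(intercept):.3f} (true {a_true})")
```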
Radiation Assurance for the Space Environment
NASA Technical Reports Server (NTRS)
Barth, Janet L.; LaBel, Kenneth A.; Poivey, Christian
2004-01-01
The space radiation environment can lead to extremely harsh operating conditions for spacecraft electronic systems. A hardness assurance methodology must be followed to assure that the space radiation environment does not compromise the functionality and performance of space-based systems during the mission lifetime. The methodology includes a definition of the radiation environment, assessment of the radiation sensitivity of parts, worst-case analysis of the impact of radiation effects, and part acceptance decisions which are likely to include mitigation measures.
NASA Astrophysics Data System (ADS)
Feng, Jinchao; Lansford, Joshua; Mironenko, Alexander; Pourkargar, Davood Babaei; Vlachos, Dionisios G.; Katsoulakis, Markos A.
2018-03-01
We propose non-parametric methods for both local and global sensitivity analysis of chemical reaction models with correlated parameter dependencies. The developed mathematical and statistical tools are applied to a benchmark Langmuir competitive adsorption model on a close packed platinum surface, whose parameters, estimated from quantum-scale computations, are correlated and are limited in size (small data). The proposed mathematical methodology employs gradient-based methods to compute sensitivity indices. We observe that ranking influential parameters depends critically on whether or not correlations between parameters are taken into account. The impact of uncertainty in the correlation and the necessity of the proposed non-parametric perspective are demonstrated.
Satellite services system analysis study. Volume 2: Satellite and services user model
NASA Technical Reports Server (NTRS)
1981-01-01
Satellite services needs are analyzed. Topics include methodology: a satellite user model; representative servicing scenarios; potential service needs; manned, remote, and automated involvement; and inactive satellites/debris. Satellite and services user model development is considered. Groundrules and assumptions, servicing, events, and sensitivity analysis are included. Selection of references satellites is also discussed.
2015-03-12
Table 3: Optometry Clinic Frequency Count; Table 22: Probability Distribution Summary Table... Clinic, the Audiology Clinic, and the Optometry Clinic. Methodology Overview: The overarching research goal is to identify feasible solutions to
Statistical sensitivity analysis of a simple nuclear waste repository model
NASA Astrophysics Data System (ADS)
Ronen, Y.; Lucius, J. L.; Blow, E. M.
1980-06-01
A preliminary step in a comprehensive sensitivity analysis of the modeling of a nuclear waste repository is presented. The purpose of the complete analysis is to determine which modeling parameters and physical data are most important in determining key design performance criteria and then to obtain the uncertainty in the design for safety considerations. The theory for a statistical screening design methodology is developed for later use in the overall program. The theory was applied to a test case: determining the sensitivity of the near-field temperature distribution in a single-level salt repository to the modeling parameters. The exact values of the sensitivities to these physical and modeling parameters were then obtained using direct methods of recalculation. The parameters found to be important for the sample problem were the thermal loading, the distance between the spent fuel canisters, and the canister radius. Other important parameters were those related to salt properties at a point of interest in the repository.
2014-01-01
Background Digital image analysis has the potential to address issues surrounding traditional histological techniques including a lack of objectivity and high variability, through the application of quantitative analysis. A key initial step in image analysis is the identification of regions of interest. A widely applied methodology is that of segmentation. This paper proposes the application of image analysis techniques to segment skin tissue with varying degrees of histopathological damage. The segmentation of human tissue is challenging as a consequence of the complexity of the tissue structures and inconsistencies in tissue preparation, hence there is a need for a new robust method with the capability to handle the additional challenges materialising from histopathological damage. Methods A new algorithm has been developed which combines enhanced colour information, created following a transformation to the L*a*b* colourspace, with general image intensity information. A colour normalisation step is included to enhance the algorithm’s robustness to variations in the lighting and staining of the input images. The resulting optimised image is subjected to thresholding and the segmentation is fine-tuned using a combination of morphological processing and object classification rules. The segmentation algorithm was tested on 40 digital images of haematoxylin & eosin (H&E) stained skin biopsies. Accuracy, sensitivity and specificity of the algorithmic procedure were assessed through the comparison of the proposed methodology against manual methods. Results Experimental results show the proposed fully automated methodology segments the epidermis with a mean specificity of 97.7%, a mean sensitivity of 89.4% and a mean accuracy of 96.5%. When a simple user interaction step is included, the specificity increases to 98.0%, the sensitivity to 91.0% and the accuracy to 96.8%. The algorithm segments effectively for different severities of tissue damage. Conclusions Epidermal segmentation is a crucial first step in a range of applications including melanoma detection and the assessment of histopathological damage in skin. The proposed methodology is able to segment the epidermis with different levels of histological damage. The basic method framework could be applied to segmentation of other epithelial tissues. PMID:24521154
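A rough sketch of this style of pipeline (colour enhancement in L*a*b*, combination with intensity, Otsu thresholding, morphological clean-up) is given below using scikit-image. The weights, threshold choice, and structure sizes are illustrative placeholders rather than the tuned values of the published algorithm, and the colour-normalisation step is omitted.

```python
# Simplified epidermis-style segmentation on a synthetic stand-in image.
import numpy as np
from skimage.color import rgb2lab, rgb2gray
from skimage.filters import threshold_otsu
from skimage.morphology import binary_opening, disk, remove_small_objects

def segment_stained_tissue(rgb):
    """Return a boolean mask for strongly stained (eosin-rich) regions."""
    lab = rgb2lab(rgb)
    a_channel = lab[..., 1]                    # red-green axis picks up eosin
    intensity = 1.0 - rgb2gray(rgb)            # darker tissue -> larger value
    a_norm = (a_channel - a_channel.min()) / (np.ptp(a_channel) + 1e-9)
    combined = 0.6 * a_norm + 0.4 * intensity  # colour + intensity information
    mask = combined > threshold_otsu(combined)
    mask = binary_opening(mask, disk(3))       # smooth ragged boundaries
    return remove_small_objects(mask, min_size=500)

# Demo on a synthetic pink-on-white image standing in for an H&E section
img = np.ones((256, 256, 3))
img[60:200, 40:220] = [0.85, 0.45, 0.60]       # "stained" block
print("segmented pixels:", segment_stained_tissue(img).sum())
```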
NASA Astrophysics Data System (ADS)
Dai, Heng; Chen, Xingyuan; Ye, Ming; Song, Xuehang; Zachara, John M.
2017-05-01
Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study, we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multilayer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially distributed input variables.
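The dimensionality saving from grouping inputs can be illustrated with a pick-freeze estimator of a variance-based index for an entire group of parameters at once. The toy model below, with a "boundary" group and a "permeability" group, is a stand-in for, not a reproduction of, the Hanford flow-and-transport model.

```python
# Pick-freeze estimate of first-order, group-level variance-based indices.
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

def model(boundary, perm):
    # toy response: 2 "boundary" inputs and 3 "permeability" inputs
    return 2.0 * boundary[:, 0] + boundary[:, 1] ** 2 + 0.5 * perm.sum(axis=1)

def group_index(keep):
    b1, p1 = rng.normal(size=(n, 2)), rng.lognormal(size=(n, 3))
    b2, p2 = rng.normal(size=(n, 2)), rng.lognormal(size=(n, 3))
    y = model(b1, p1)
    # keep the group of interest fixed, resample everything else
    y_frozen = model(b1, p2) if keep == "boundary" else model(b2, p1)
    cov = np.mean(y * y_frozen) - np.mean(y) * np.mean(y_frozen)
    return cov / y.var()

print("S_boundary     ≈", round(group_index("boundary"), 2))
print("S_permeability ≈", round(group_index("permeability"), 2))
```

One index per group, rather than one per spatially distributed parameter, is what keeps the cost manageable when the parameter field is high-dimensional.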
NASA Astrophysics Data System (ADS)
Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.
2017-12-01
Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multi-layer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially-distributed input variables.
Peptide biomarkers as a way to determine meat authenticity.
Sentandreu, Miguel Angel; Sentandreu, Enrique
2011-11-01
Meat fraud involves many illegal procedures affecting the composition of meat and meat products, commonly carried out with the aim of increasing profit. These practices need to be controlled by legal authorities by means of robust, accurate and sensitive methodologies capable of assuring that fraudulent or accidental mislabelling does not arise. Strategies traditionally used to assess meat authenticity have been based on methods such as chemometric analysis of large analytical data sets, immunoassays or DNA analysis. The identification of peptide biomarkers specific to a particular meat species, tissue or ingredient by proteomic technologies constitutes an interesting and promising alternative to existing methodologies due to its high discriminating power, robustness and sensitivity. The possibility of developing standardized protein extraction protocols, together with the considerably higher resistance of peptide sequences to food processing as compared to DNA sequences, would overcome some of the limitations currently existing for quantitative determinations of highly processed food samples. The use of routine mass spectrometry equipment would make the technology suitable for control laboratories. Copyright © 2011 Elsevier Ltd. All rights reserved.
Optical diagnosis of cervical cancer by higher order spectra and boosting
NASA Astrophysics Data System (ADS)
Pratiher, Sawon; Mukhopadhyay, Sabyasachi; Barman, Ritwik; Pratiher, Souvik; Pradhan, Asima; Ghosh, Nirmalya; Panigrahi, Prasanta K.
2017-03-01
In this contribution, we report the application of higher order statistical moments with decision tree and ensemble-based learning methodologies for the development of diagnostic algorithms for the optical diagnosis of cancer. The classification results were compared to those obtained with an independent feature extractor, linear discriminant analysis (LDA). The methodology, which uses higher order statistics with a boosting-based classifier, achieves higher specificity and sensitivity while being much faster than other time-frequency domain based methods.
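A hedged sketch of this kind of comparison, with synthetic signals standing in for the optical measurements and scikit-learn supplying the classifiers, is given below; the feature set (simple higher-order moments) and the data generator are assumptions made only for illustration.

```python
# Boosted decision trees vs. LDA on higher-order-moment features.
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(4)

def make_signals(n, shape):
    return rng.gamma(shape, size=(n, 256))      # classes differ in distribution shape

signals = np.vstack([make_signals(100, 2.0), make_signals(100, 2.6)])
labels = np.array([0] * 100 + [1] * 100)

features = np.column_stack([                    # higher-order statistical moments
    signals.mean(axis=1), signals.var(axis=1),
    skew(signals, axis=1), kurtosis(signals, axis=1),
])

boost = AdaBoostClassifier(DecisionTreeClassifier(max_depth=2), n_estimators=100)
lda = LinearDiscriminantAnalysis()
for name, clf in [("boosted trees", boost), ("LDA", lda)]:
    acc = cross_val_score(clf, features, labels, cv=5).mean()
    print(f"{name}: cross-validated accuracy = {acc:.2f}")
```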
Sáiz, Jorge; García-Roa, Roberto; Martín, José; Gómara, Belén
2017-09-08
Chemical signaling is a widespread mode of communication among living organisms that is used to establish social organization, territoriality and/or for mate choice. In lizards, femoral and precloacal glands are important sources of chemical signals. These glands protrude chemical secretions used to mark territories and also, to provide valuable information from the bearer to other individuals. Ecologists have studied these chemical secretions for decades in order to increase the knowledge of chemical communication in lizards. Although several studies have focused on the chemical analysis of these secretions, there is a lack of faster, more sensitive and more selective analytical methodologies for their study. In this work a new GC coupled to tandem triple quadrupole MS (GC-QqQ (MS/MS)) methodology is developed and proposed for the target study of 12 relevant compounds often found in lizard secretions (i.e. 1-hexadecanol, palmitic acid, 1-octadecanol, oleic acid, stearic acid, 1-tetracosanol, squalene, cholesta-3,5-diene, α-tocopherol, cholesterol, ergosterol and campesterol). The method baseline-separated the analytes in less than 7min, with instrumental limits of detection ranging from 0.04 to 6.0ng/mL. It was possible to identify differences in the composition of the samples from the lizards analyzed, which depended on the species, the habitat occupied and the diet of the individuals. Moreover, α-tocopherol has been determined for the first time in a lizard species, which was thought to lack its expression in chemical secretions. Globally, the methodology has been proven to be a valuable alternative to other published methods with important improvements in terms of analysis time, sensitivity, and selectivity. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Farhat, Charbel
1998-01-01
In this grant, we have proposed a three-year research effort focused on developing High Performance Computation and Communication (HPCC) methodologies for structural analysis on parallel processors and clusters of workstations, with emphasis on reducing the structural design cycle time. Besides consolidating and further improving the FETI solver technology to address plate and shell structures, we have proposed to tackle the following design related issues: (a) parallel coupling and assembly of independently designed and analyzed three-dimensional substructures with non-matching interfaces, (b) fast and smart parallel re-analysis of a given structure after it has undergone design modifications, (c) parallel evaluation of sensitivity operators (derivatives) for design optimization, and (d) fast parallel analysis of mildly nonlinear structures. While our proposal was accepted, support was provided only for one year.
Garg, Harish
2013-03-01
The main objective of the present paper is to propose a methodology for analyzing the behavior of complex repairable industrial systems. In real-life situations, it is difficult to find optimal design policies for MTBF (mean time between failures), MTTR (mean time to repair) and related costs by utilizing available resources and uncertain data. For this, an availability-cost optimization model has been constructed for determining the optimal design parameters for improving the system design efficiency. The uncertainties in the data related to each component of the system are estimated with the help of fuzzy and statistical methodology in the form of triangular fuzzy numbers. Using these data, the various reliability parameters that affect the system performance are obtained in the form of fuzzy membership functions by the proposed confidence-interval-based fuzzy Lambda-Tau (CIBFLT) methodology. The results computed by CIBFLT are compared with those of the existing fuzzy Lambda-Tau methodology. Sensitivity analysis of the system MTBF has also been addressed. The methodology has been illustrated through a case study of the washing unit, the main part of the paper industry. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
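The fuzzy part of such an analysis can be illustrated with alpha-cut interval arithmetic: steady-state availability MTBF/(MTBF + MTTR) propagated through triangular fuzzy numbers. The numbers below are invented, and the sketch omits the confidence-interval construction that distinguishes CIBFLT from the ordinary fuzzy Lambda-Tau approach.

```python
# Alpha-cut propagation of triangular fuzzy MTBF and MTTR into availability.
def alpha_cut(tfn, alpha):
    """Interval of a triangular fuzzy number (low, mode, high) at level alpha."""
    low, mode, high = tfn
    return low + alpha * (mode - low), high - alpha * (high - mode)

mtbf = (450.0, 500.0, 560.0)      # hours, triangular fuzzy number
mttr = (4.0, 6.0, 9.0)            # hours

for alpha in (0.0, 0.5, 1.0):
    bl, bu = alpha_cut(mtbf, alpha)
    rl, ru = alpha_cut(mttr, alpha)
    # Availability increases with MTBF and decreases with MTTR,
    # so interval endpoints pair up as below.
    a_low = bl / (bl + ru)
    a_high = bu / (bu + rl)
    print(f"alpha = {alpha:.1f}: availability in [{a_low:.4f}, {a_high:.4f}]")
```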
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-01
... an improved understanding of methodological challenges associated with integrating existing tools and... methodological challenges associated with integrating existing tools (e.g., climate models, downscaling... sensitivity to methodological choices such as different approaches for downscaling global climate change...
Hahn, K D; Cooper, G W; Ruiz, C L; Fehl, D L; Chandler, G A; Knapp, P F; Leeper, R J; Nelson, A J; Smelser, R M; Torres, J A
2014-04-01
We present a general methodology to determine the diagnostic sensitivity that is directly applicable to neutron-activation diagnostics fielded on a wide variety of neutron-producing experiments, which include inertial-confinement fusion (ICF), dense plasma focus, and ion beam-driven concepts. This approach includes a combination of several effects: (1) non-isotropic neutron emission; (2) the 1/r² decrease in neutron fluence in the activation material; (3) the spatially distributed neutron scattering, attenuation, and energy losses due to the fielding environment and activation material itself; and (4) temporally varying neutron emission. As an example, we describe the copper-activation diagnostic used to measure secondary deuterium-tritium fusion-neutron yields on ICF experiments conducted on the pulsed-power Z Accelerator at Sandia National Laboratories. Using this methodology along with results from absolute calibrations and Monte Carlo simulations, we find that for the diagnostic configuration on Z, the diagnostic sensitivity is 0.037% ± 17% counts/neutron per cm² and is ∼40% less sensitive than it would be in an ideal geometry due to neutron attenuation, scattering, and energy-loss effects.
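A back-of-envelope sketch of how these effects combine into a sensitivity figure is given below. Every number (distance, transmission, cross-section, counting efficiency) is an illustrative placeholder rather than a calibrated value for the copper diagnostic on Z, and a thin-target approximation stands in for the Monte Carlo transport used in the paper.

```python
# Ingredients of an activation diagnostic's sensitivity: 1/r^2 fluence,
# line-of-sight attenuation, thin-target activation, counting efficiency.
import numpy as np

source_neutrons = 1.0e12           # neutrons emitted (taken as isotropic here)
r = 30.0                           # cm, source-to-sample distance
transmission = 0.6                 # fraction surviving intervening hardware
atom_density = 8.5e22              # Cu atoms per cm^3
thickness, area = 0.5, 10.0        # cm, cm^2 sample dimensions
sigma = 0.5e-24                    # cm^2, activation cross-section (placeholder)
count_eff = 0.15                   # detector efficiency x branching x fraction of decays counted

fluence = source_neutrons / (4.0 * np.pi * r**2) * transmission   # n/cm^2 at the sample
n_atoms = atom_density * thickness * area                         # atoms exposed
activations = fluence * sigma * n_atoms                           # thin-target estimate
counts = activations * count_eff

print(f"fluence at sample ≈ {fluence:.2e} n/cm^2")
print(f"expected counts   ≈ {counts:.2e}")
print(f"sensitivity       ≈ {counts / fluence:.2e} counts per (n/cm^2)")
```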
Probabilistic sizing of laminates with uncertainties
NASA Technical Reports Server (NTRS)
Shah, A. R.; Liaw, D. G.; Chamis, C. C.
1993-01-01
A reliability-based design methodology for laminate sizing and configuration for a special case of composite structures is described. The methodology combines probabilistic composite mechanics with probabilistic structural analysis. The uncertainties of the constituent materials (fiber and matrix) are simulated using probabilistic theory to predict macroscopic behavior. Uncertainties in the degradation of composite material properties are included in this design methodology. A multi-factor interaction equation is used to evaluate load- and environment-dependent degradation of the composite material properties at the micromechanics level. The methodology is integrated into a computer code IPACS (Integrated Probabilistic Assessment of Composite Structures). Versatility of this design approach is demonstrated by performing a multi-level probabilistic analysis to size the laminates for design structural reliability of random type structures. The results show that laminate configurations can be selected to improve the structural reliability from three failures in 1000 to no failures in one million. Results also show that the laminates with the highest reliability are the least sensitive to the loading conditions.
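In the same spirit, the crude Monte Carlo sketch below propagates scatter in constituent strength, degradation, and applied load through a one-line limit state and counts failures for different ply counts. It is a placeholder for, not an implementation of, the IPACS micromechanics-plus-finite-element chain, and all distributions are invented.

```python
# Monte Carlo failure-probability estimate for a toy laminate limit state.
import numpy as np

rng = np.random.default_rng(5)
n = 1_000_000

def failure_probability(n_plies):
    fiber_strength = rng.normal(3500.0, 200.0, n)        # MPa, constituent strength scatter
    degradation = rng.uniform(0.85, 1.0, n)              # load/environment degradation factor
    load = rng.lognormal(np.log(300.0), 0.12, n)         # MPa, applied stress
    capacity = (n_plies / 8.0) * 0.12 * fiber_strength * degradation
    return np.mean(load > capacity)

for plies in (8, 10, 12):
    print(f"{plies} plies: P_failure ≈ {failure_probability(plies):.1e}")
```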
NASA Astrophysics Data System (ADS)
McKinney, D. C.; Cuellar, A. D.
2015-12-01
Climate change has accelerated glacial retreat in high altitude glaciated regions of Nepal, leading to the growth and formation of glacier lakes. Glacial lake outburst floods (GLOFs) are sudden events triggered by an earthquake, moraine failure or other shock that causes a sudden outflow of water. These floods are catastrophic because of their sudden onset, the difficulty predicting them, and the enormous quantity of water and debris rapidly flooding downstream areas. Imja Lake in the Himalaya of Nepal has experienced accelerated growth since it first appeared in the 1960s. Communities threatened by a flood from Imja Lake have advocated for projects to adapt to the increasing threat of a GLOF. Nonetheless, discussions surrounding projects for Imja have not included a rigorous analysis of the potential consequences of a flood, probability of an event, or costs of mitigation projects, in part because this information is unknown or uncertain. This work presents a demonstration of a decision making methodology developed to rationally analyze the risks posed by Imja Lake and the various adaptation projects proposed using available information. In this work the authors use decision analysis, data envelopment analysis (DEA), and sensitivity analysis to assess proposed adaptation measures that would mitigate damage in downstream communities from a GLOF. We use an existing hydrodynamic model of the at-risk area to determine how adaptation projects will affect downstream flooding and estimate fatalities using an empirical method developed for dam failures. The DEA methodology allows us to estimate the value of a statistical life implied by each project given the cost of the project and number of lives saved to determine which project is the most efficient. In contrast, the decision analysis methodology requires fatalities to be assigned a cost but allows the inclusion of uncertainty in the decision making process. We compare the output of these two methodologies and determine the sensitivity of the conclusions to changes in uncertain input parameters including project cost, value of a statistical life, and time to a GLOF event.
PCB congener analysis with Hall electrolytic conductivity detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edstrom, R.D.
1989-01-01
This work reports the development of an analytical methodology for the analysis of PCB congeners based on integrating relative retention data provided by other researchers. The retention data were transposed into a multiple retention marker system which provided good precision in the calculation of relative retention indices for PCB congener analysis. Analytical run times for the developed methodology were approximately one hour using a commercially available GC capillary column. A Tracor Model 700A Hall Electrolytic Conductivity Detector (HECD) was employed in the GC detection of Aroclor standards and environmental samples. Responses by the HECD provided good sensitivity and were reasonably predictable. Ten response factors were calculated based on the molar chlorine content of each homolog group. Homolog distributions were determined for Aroclors 1016, 1221, 1232, 1242, 1248, 1254, 1260, 1262 along with binary and ternary mixtures of the same. These distributions were compared with distributions reported by other researchers using electron capture detection as well as chemical ionization mass spectrometric methodologies. Homolog distributions acquired by the HECD methodology showed good correlation with the previously mentioned methodologies. The developed analytical methodology was used in the analysis of bluefish (Pomatomas saltatrix) and weakfish (Cynoscion regalis) collected from the York River, lower James River and lower Chesapeake Bay in Virginia. Total PCB concentrations were calculated and homolog distributions were constructed from the acquired data. Increases in total PCB concentrations were found in the analyzed fish samples during the fall of 1985 collected from the lower James River and lower Chesapeake Bay.
Scale/TSUNAMI Sensitivity Data for ICSBEP Evaluations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rearden, Bradley T; Reed, Davis Allan; Lefebvre, Robert A
2011-01-01
The Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) software developed at Oak Ridge National Laboratory (ORNL) as part of the Scale code system provides unique methods for code validation, gap analysis, and experiment design. For TSUNAMI analysis, sensitivity data are generated for each application and each existing or proposed experiment used in the assessment. The validation of diverse sets of applications requires potentially thousands of data files to be maintained and organized by the user, and a growing number of these files are available through the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE) distributed through the International Criticality Safety Benchmark Evaluation Program (ICSBEP). To facilitate the use of the IHECSBE benchmarks in rigorous TSUNAMI validation and gap analysis techniques, ORNL generated SCALE/TSUNAMI sensitivity data files (SDFs) for several hundred benchmarks for distribution with the IHECSBE. For the 2010 edition of IHECSBE, the sensitivity data were generated using 238-group cross-section data based on ENDF/B-VII.0 for 494 benchmark experiments. Additionally, ORNL has developed a quality assurance procedure to guide the generation of Scale inputs and sensitivity data, as well as a graphical user interface to facilitate the use of sensitivity data in identifying experiments and applying them in validation studies.
Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks
Arampatzis, Georgios; Katsoulakis, Markos A.; Pantazis, Yannis
2015-01-01
Existing sensitivity analysis approaches are not able to handle efficiently stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis for such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step of the proposed strategy, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis with eighty parameters demonstrate that the proposed strategy is able to quickly discover and discard the insensitive parameters and in the remaining potentially sensitive parameters it accurately estimates the sensitivities. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in “sloppy” systems. In particular, the computational acceleration is quantified by the ratio between the total number of parameters over the number of the sensitive parameters. PMID:26161544
A methodology was developed for deriving quantitative exposure criteria useful for comparing a site or watershed to a reference condition. The prototype method used indicators of exposures to oil contamination and combustion by-products, naphthalene and benzo(a)pyrene metabolites...
Toumi, Mondher; Motrunich, Anastasiia; Millier, Aurélie; Rémuzat, Cécile; Chouaid, Christos; Falissard, Bruno; Aballéa, Samuel
2017-01-01
ABSTRACT Background: Despite the guidelines for Economic and Public Health Assessment Committee (CEESP) submission having been available for nearly six years, the dossiers submitted continue to deviate from them, potentially impacting product prices. Objective: to review the reports published by CEESP, analyse deviations from the guidelines, and discuss their implications for the pricing and reimbursement process. Study design: CEESP reports published until January 2017 were reviewed, and deviations from the guidelines were extracted. The frequency of deviations was described by type of methodological concern (minor, important or major). Results: In 19 reports, we identified 243 methodological concerns, most often concerning modelling, measurement and valuation of health states and results presentation and sensitivity analyses; nearly 63% were minor, 33% were important and 4.5% were major. All reports included minor methodological concerns, and 17 (89%) included at least one important and/or major methodological concern. Global major methodological concerns completely invalidated the analysis in seven dossiers (37%). Conclusion: The CEESP submission dossiers fail to adhere to the guidelines, potentially invalidating the health economics analysis and resulting in pricing negotiations. As these negotiations tend to be unfavourable for the manufacturer, the industry should strive to improve the quality of the analyses submitted to CEESP. PMID:28804600
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Heng; Chen, Xingyuan; Ye, Ming
Sensitivity analysis is an important tool for quantifying uncertainty in the outputs of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study we developed a hierarchical sensitivity analysis method that (1) constructs an uncertainty hierarchy by analyzing the input uncertainty sources, and (2) accounts for the spatial correlation among parameters at each level of the hierarchy using geostatistical tools. The contribution of each uncertainty source at each hierarchy level is measured by sensitivity indices calculated using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally as driven by the dynamic interaction between groundwater and river water at the site. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially-distributed parameters.
Malá, Zdena; Gebauer, Petr
2017-10-06
Capillary isotachophoresis (ITP) is an electrophoretic technique offering high sensitivity due to permanent stacking of the migrating analytes. Its combination with electrospray-ionization mass-spectrometric (ESI-MS) detection is limited by the narrow spectrum of ESI-compatible components but can be compensated by experienced system architecture. This work describes a methodology for sensitive analysis of hydroxyderivatives of s-triazine herbicides, based on implementation of the concepts of moving-boundary isotachophoresis and of H+ as essential terminating component into cationic ITP with ESI-MS detection. Theoretical description of such kind of system is given and equations for zone-related boundary mobilities are derived, resulting in a much more general definition of the effective mobility of the terminating H+ zone than used so far. Explicit equations allowing direct calculation for selected simple systems are derived. The presented theory allows prediction of stacking properties of particular systems and easy selection of suitable electrolyte setups. A simple ESI-compatible system composed of acetic acid and ammonium with H+ and ammonium as a mixed terminator was selected for the analysis of 2-hydroxyatrazine and 2-hydroxyterbutylazine, degradation products of s-triazine herbicides. The proposed method was tested with direct injection without any sample pretreatment and provided excellent linearity and high sensitivity with limits of detection below 100 ng/L (0.5 nM). Example analyses of unspiked and spiked drinking and river water are shown. Copyright © 2017 Elsevier B.V. All rights reserved.
Culturally Sensitive Parent Education: A Critical Review of Quantitative Research.
ERIC Educational Resources Information Center
Gorman, Jean Cheng; Balter, Lawrence
1997-01-01
Critically reviews the quantitative literature on culturally sensitive parent education programs, discussing issues of research methodology and program efficacy in producing change among ethnic minority parents and their children. Culturally sensitive programs for African American and Hispanic families are described in detail. Methodological flaws…
A Methodological Review of US Budget-Impact Models for New Drugs.
Mauskopf, Josephine; Earnshaw, Stephanie
2016-11-01
A budget-impact analysis is required by many jurisdictions when adding a new drug to the formulary. However, previous reviews have indicated that adherence to methodological guidelines is variable. In this methodological review, we assess the extent to which US budget-impact analyses for new drugs use recommended practices. We describe recommended practice for seven key elements in the design of a budget-impact analysis. Targeted literature searches for US studies reporting estimates of the budget impact of a new drug were performed and we prepared a summary of how each study addressed the seven key elements. The primary finding from this review is that recommended practice is not followed in many budget-impact analyses. For example, we found that growth in the treated population size and/or changes in disease-related costs expected during the model time horizon for more effective treatments were not included in several analyses for chronic conditions. In addition, not all drug-related costs were captured in the majority of the models. Finally, for most studies, one-way sensitivity and scenario analyses were very limited, and the ranges used in one-way sensitivity analyses were frequently arbitrary percentages rather than being data driven. The conclusions from our review are that changes in population size, disease severity mix, and/or disease-related costs should be properly accounted for to avoid over- or underestimating the budget impact. Since each budget holder might have different perspectives and different values for many of the input parameters, it is also critical for published budget-impact analyses to include extensive sensitivity and scenario analyses based on realistic input values.
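As a companion to the seven design elements discussed above, the sketch below shows a minimal budget-impact calculation that includes a growing treated population, uptake of the new drug, and an offset in disease-related costs; every number in it is hypothetical.

```python
# Minimal budget-impact sketch; all inputs below are hypothetical.
years = range(1, 4)
treated_pop0 = 10_000                   # treated patients in year 1
pop_growth = 0.05                       # annual growth of the treated population
uptake = {1: 0.10, 2: 0.20, 3: 0.30}    # market share of the new drug by year
cost_current = 4_000                    # annual drug + administration cost, current mix
cost_new = 6_500                        # annual cost with the new drug
offset_new = -800                       # change in disease-related costs per patient on the new drug

for y in years:
    pop = treated_pop0 * (1 + pop_growth) ** (y - 1)
    n_new = pop * uptake[y]
    budget_impact = n_new * (cost_new - cost_current + offset_new)
    print(f"year {y}: {budget_impact:,.0f} (currency units)")
```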
The United States Environmental Protection Agency's National Exposure Research Laboratory has initiated a project to improve the methodology for modeling human exposure to motor vehicle emissions. The overall project goal is to develop improved methods for modeling the source t...
In response to growing health concerns related to atmospheric fine particles, EPA promulgated in 1997 a new particulate matter standard accompanied by new sampling methodology. Based on a review of pertinent literature, a new metric (PM2.5) was adopted and its measurement method...
False-Positive Tangible Outcomes of Functional Analyses
ERIC Educational Resources Information Center
Rooker, Griffin W.; Iwata, Brian A.; Harper, Jill M.; Fahmie, Tara A.; Camp, Erin M.
2011-01-01
Functional analysis (FA) methodology is the most precise method for identifying variables that maintain problem behavior. Occasionally, however, results of an FA may be influenced by idiosyncratic sensitivity to aspects of the assessment conditions. For example, data from several studies suggest that inclusion of a tangible condition during an FA…
Palmer, Kevin B; LaFon, William; Burford, Mark D
2017-09-22
Current analytical methodology for iodopropynyl butylcarbamate (IPBC) analysis focuses on the use of liquid chromatography coupled with mass spectrometry (LC-MS), but the high instrumentation and operator investment required has resulted in the need for a cost-effective alternative methodology. Past publications investigating gas chromatography with electron capture detection (GC-ECD) for IPBC quantitation proved largely unsuccessful, likely due to the preservative's limited thermal stability. Pulsed injection techniques commonly used for trace analysis of thermally labile pharmaceutical compounds were successfully adapted for IPBC analysis, exploiting the selectivity of GC-ECD detection. System optimization and sample preparation improvements resulted in substantial performance and reproducibility gains. Cosmetic formulations preserved with IPBC (50-100 ppm) were dissolved in toluene/isopropyl alcohol and quantified over the 0.3-1.3 μg/mL calibration range. The methodology was robust (relative standard deviation 4%), accurate (98% recovery), and sensitive (limit of detection 0.25 ng/mL) for use in routine testing of cosmetic formulation preservation. Copyright © 2017 Elsevier B.V. All rights reserved.
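The routine quantitation step described above amounts to a linear calibration over the stated 0.3-1.3 μg/mL range followed by back-calculation of sample concentrations; a minimal sketch with hypothetical detector responses follows.

```python
import numpy as np

# Hypothetical GC-ECD responses for IPBC standards over 0.3-1.3 ug/mL.
conc = np.array([0.3, 0.55, 0.8, 1.05, 1.3])       # ug/mL
area = np.array([1520, 2790, 4010, 5280, 6490])    # detector counts (made up)

slope, intercept = np.polyfit(conc, area, 1)       # least-squares line
r2 = np.corrcoef(conc, area)[0, 1] ** 2
print(f"area = {slope:.1f} * conc + {intercept:.1f}, R^2 = {r2:.4f}")

sample_area = 3600.0                               # hypothetical unknown
sample_conc = (sample_area - intercept) / slope
print(f"back-calculated sample concentration: {sample_conc:.2f} ug/mL")
```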
LC-MS based analysis of endogenous steroid hormones in human hair.
Gao, Wei; Kirschbaum, Clemens; Grass, Juliane; Stalder, Tobias
2016-09-01
The quantification of endogenous steroid hormone concentrations in hair is increasingly used as a method for obtaining retrospective information on long-term integrated hormone exposure. Several different analytical procedures have been employed for hair steroid analysis, with liquid chromatography-mass spectrometry (LC-MS) being recognized as a particularly powerful analytical tool. Several methodological aspects affect the performance of LC-MS systems for hair steroid analysis, including sample preparation and pretreatment, steroid extraction, post-incubation purification, LC methodology, ionization techniques and MS specifications. Here, we critically review the differential value of such protocol variants for hair steroid hormones analysis, focusing on both analytical quality and practical feasibility issues. Our results show that, when methodological challenges are adequately addressed, LC-MS protocols can not only yield excellent sensitivity and specificity but are also characterized by relatively simple sample processing and short run times. This makes LC-MS based hair steroid protocols particularly suitable as a high-quality option for routine application in research contexts requiring the processing of larger numbers of samples. Copyright © 2016 Elsevier Ltd. All rights reserved.
Application of design sensitivity analysis for greater improvement on machine structural dynamics
NASA Technical Reports Server (NTRS)
Yoshimura, Masataka
1987-01-01
Methodologies are presented for greatly improving machine structural dynamics by using design sensitivity analyses and evaluative parameters. First, design sensitivity coefficients and evaluative parameters of structural dynamics are described. Next, the relations between the design sensitivity coefficients and the evaluative parameters are clarified. Then, design improvement procedures of structural dynamics are proposed for the following three cases: (1) addition of elastic structural members, (2) addition of mass elements, and (3) substantial changes of joint design variables. Cases (1) and (2) correspond to changes of the initial framework or configuration, and (3) corresponds to the alteration of poor initial design variables. Finally, numerical examples are given to demonstrate the applicability of the proposed methods.
Recent approaches for enhancing sensitivity in enantioseparations by CE.
Sánchez-Hernández, Laura; García-Ruiz, Carmen; Luisa Marina, María; Luis Crego, Antonio
2010-01-01
This article reviews the latest methodological and instrumental improvements for enhancing sensitivity in chiral analysis by CE. The review covers literature from March 2007 until May 2009, that is, the works published after the appearance of the latest review article on the same topic by Sánchez-Hernández et al. [Electrophoresis 2008, 29, 237-251]. Off-line and on-line sample treatment techniques, on-line sample preconcentration strategies based on electrophoretic and chromatographic principles, and alternative detection systems to the widely employed UV/Vis detection in CE are the most relevant approaches discussed for improving sensitivity. Microchip technologies are also included since they can open up great possibilities to achieve sensitive and fast enantiomeric separations.
NASA Technical Reports Server (NTRS)
Giles, G. L.; Rogers, J. L., Jr.
1982-01-01
The methodology used to implement structural sensitivity calculations into a major, general-purpose finite-element analysis system (SPAR) is described. This implementation includes a generalized method for specifying element cross-sectional dimensions as design variables that can be used in analytically calculating derivatives of output quantities from static stress, vibration, and buckling analyses for both membrane and bending elements. Limited sample results for static displacements and stresses are presented to indicate the advantages of analytically calculating response derivatives compared to finite difference methods. Continuing developments to implement these procedures into an enhanced version of SPAR are also discussed.
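The advantage of analytical derivatives over finite differences noted above can be seen on a toy structural response. The sketch below is not SPAR; it uses the tip displacement of an axial bar, u = F L / (E A), whose exact derivative with respect to the cross-sectional area is available in closed form.

```python
# Toy illustration (not SPAR): tip displacement of an axial bar, u = F*L/(E*A).
F, L, E, A = 1.0e4, 2.0, 7.0e10, 4.0e-4   # load (N), length (m), modulus (Pa), area (m^2)

def u(area):
    return F * L / (E * area)

analytic = -F * L / (E * A ** 2)          # exact du/dA

for h in (1e-2 * A, 1e-4 * A, 1e-6 * A):
    fd = (u(A + h) - u(A)) / h            # forward finite difference, step-size dependent
    print(f"step {h:.1e}: FD = {fd:.6e}, analytic = {analytic:.6e}")
```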
From LCAs to simplified models: a generic methodology applied to wind power electricity.
Padey, Pierryves; Girard, Robin; le Boulch, Denis; Blanc, Isabelle
2013-02-05
This study presents a generic methodology to produce simplified models able to provide a comprehensive life cycle impact assessment of energy pathways. The methodology relies on the application of global sensitivity analysis to identify key parameters explaining the impact variability of systems over their life cycle. Simplified models are built upon the identification of such key parameters. The methodology is applied to one energy pathway: onshore wind turbines of medium size, considering a large sample of possible configurations representative of European conditions. Among several technological, geographical, and methodological parameters, we identified the turbine load factor and the wind turbine lifetime as the most influential parameters. Greenhouse gas (GHG) performance has been plotted as a function of these identified key parameters. Using these curves, the GHG performance of a specific wind turbine can be estimated, thus avoiding the undertaking of an extensive Life Cycle Assessment (LCA). This methodology should be useful for decision makers, providing them with a robust but simple support tool for assessing the environmental performance of energy systems.
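A minimal sketch of the kind of simplified model described above is given below: greenhouse-gas intensity expressed as embodied life-cycle emissions divided by lifetime electricity production, driven by the load factor and the turbine lifetime. The rated power and embodied emissions are illustrative assumptions, not values from the study.

```python
def ghg_per_kwh(load_factor, lifetime_yr,
                rated_kw=2000.0, embodied_kgco2eq=1.8e6):
    """Hypothetical simplified model: life-cycle emissions / lifetime output."""
    lifetime_kwh = rated_kw * load_factor * 8760.0 * lifetime_yr
    return 1000.0 * embodied_kgco2eq / lifetime_kwh    # gCO2-eq per kWh

for lf in (0.20, 0.26, 0.32):
    for life in (15, 20, 25):
        print(f"load factor {lf:.2f}, lifetime {life} yr: "
              f"{ghg_per_kwh(lf, life):.1f} gCO2-eq/kWh")
```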
Sensory and non-sensory factors and the concept of externality in obese subjects.
Gardner, R M; Brake, S J; Reyes, B; Maestas, D
1983-08-01
9 obese and 9 normal subjects performed a psychophysical task in which food- or non-food-related stimuli were briefly flashed tachistoscopically at a speed and intensity near the visual threshold. A signal was presented on one-half the trials and noise only on the other one-half of the trials. Using signal detection theory methodology, separate measures of sensory sensitivity (d') and response bias (beta) were calculated. No differences were noted between obese and normal subjects on measures of sensory sensitivity but significant differences on response bias. Obese subjects had consistently lower response criteria than normal ones. Analysis for subjects categorized by whether they were restrained or unrestrained eaters gave findings identical to those for obese and normal. The importance of using a methodology that separates sensory and non-sensory factors in research on obesity is discussed.
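The signal detection indices used above can be computed directly from trial counts; a minimal sketch with hypothetical counts follows (d' for sensory sensitivity, beta for response bias).

```python
from scipy.stats import norm

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    """Standard signal detection indices from trial counts."""
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    # Rates of exactly 0 or 1 would need a correction in practice.
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa                       # sensory sensitivity
    beta = norm.pdf(z_hit) / norm.pdf(z_fa)      # likelihood-ratio response bias
    return d_prime, beta

# Hypothetical counts for one subject (45 signal and 45 noise trials).
print(sdt_indices(hits=32, misses=13, false_alarms=20, correct_rejections=25))
```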
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.
1993-01-01
In this study involving advanced fluid flow codes, an incremental iterative formulation (also known as the delta or correction form) together with the well-known spatially-split approximate factorization algorithm, is presented for solving the very large sparse systems of linear equations which are associated with aerodynamic sensitivity analysis. For smaller 2D problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. Iterative methods are needed for larger 2D and future 3D applications, however, because direct methods require much more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioning of the coefficient matrix; this problem can be overcome when these equations are cast in the incremental form. These and other benefits are discussed. The methodology is successfully implemented and tested in 2D using an upwind, cell-centered, finite volume formulation applied to the thin-layer Navier-Stokes equations. Results are presented for two sample airfoil problems: (1) subsonic low Reynolds number laminar flow; and (2) transonic high Reynolds number turbulent flow.
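A generic illustration of the incremental (delta, or correction) form is sketched below: rather than solving A x = b directly, an approximate operator is solved repeatedly for a correction driven by the current residual. The diagonal preconditioner used here is a deliberately crude stand-in for the approximate-factorization operator discussed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
A = 4.0 * np.eye(n) + 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)  # well conditioned
b = rng.standard_normal(n)

# Incremental (delta / correction) form: M * dx = -(A x - b), then x <- x + dx,
# where M approximates A (here simply its diagonal).
M = np.diag(np.diag(A))
x = np.zeros(n)
for it in range(200):
    residual = A @ x - b
    if np.linalg.norm(residual) < 1e-10 * np.linalg.norm(b):
        break
    dx = np.linalg.solve(M, -residual)
    x += dx

print(f"converged in {it} iterations, final residual {np.linalg.norm(A @ x - b):.2e}")
```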
Reliability analysis of repairable systems using Petri nets and vague Lambda-Tau methodology.
Garg, Harish
2013-01-01
The main objective of the paper is to develop a methodology, named vague Lambda-Tau, for reliability analysis of repairable systems. A Petri net tool is applied to represent the asynchronous and concurrent processing of the system instead of fault tree analysis. To enhance the relevance of the reliability study, vague set theory is used for representing the failure rate and repair times instead of classical (crisp) or fuzzy set theory, because vague sets are characterized by a truth membership function and a false membership (non-membership) function such that the sum of both values is less than 1. The proposed methodology involves qualitative modeling using PN and quantitative analysis using the Lambda-Tau method of solution, with the basic events represented by intuitionistic fuzzy numbers with triangular membership functions. Sensitivity analysis has also been performed and the effects on system MTBF are addressed. The methodology addresses the shortcomings of the existing probabilistic approaches and gives a better understanding of the system behavior through its graphical representation. The washing unit of a paper mill situated in a northern part of India, producing approximately 200 tons of paper per day, has been considered to demonstrate the proposed approach. The results may be helpful to plant personnel for analyzing the system's behavior and improving its performance by adopting suitable maintenance strategies. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
Alexakis, Dimitrios D.; Mexis, Filippos-Dimitrios K.; Vozinaki, Anthi-Eirini K.; Daliakopoulos, Ioannis N.; Tsanis, Ioannis K.
2017-01-01
A methodology for elaborating multi-temporal Sentinel-1 and Landsat 8 satellite images for estimating topsoil Soil Moisture Content (SMC) to support hydrological simulation studies is proposed. After pre-processing the remote sensing data, backscattering coefficient, Normalized Difference Vegetation Index (NDVI), thermal infrared temperature and incidence angle parameters are assessed for their potential to infer ground measurements of SMC, collected at the top 5 cm. A non-linear approach using Artificial Neural Networks (ANNs) is tested. The methodology is applied in Western Crete, Greece, where a SMC gauge network was deployed during 2015. The performance of the proposed algorithm is evaluated using leave-one-out cross validation and sensitivity analysis. ANNs prove to be the most efficient in SMC estimation yielding R2 values between 0.7 and 0.9. The proposed methodology is used to support a hydrological simulation with the HEC-HMS model, applied at the Keramianos basin which is ungauged for SMC. Results and model sensitivity highlight the contribution of combining Sentinel-1 SAR and Landsat 8 images for improving SMC estimates and supporting hydrological studies. PMID:28635625
Alexakis, Dimitrios D; Mexis, Filippos-Dimitrios K; Vozinaki, Anthi-Eirini K; Daliakopoulos, Ioannis N; Tsanis, Ioannis K
2017-06-21
A methodology for elaborating multi-temporal Sentinel-1 and Landsat 8 satellite images for estimating topsoil Soil Moisture Content (SMC) to support hydrological simulation studies is proposed. After pre-processing the remote sensing data, backscattering coefficient, Normalized Difference Vegetation Index (NDVI), thermal infrared temperature and incidence angle parameters are assessed for their potential to infer ground measurements of SMC, collected at the top 5 cm. A non-linear approach using Artificial Neural Networks (ANNs) is tested. The methodology is applied in Western Crete, Greece, where a SMC gauge network was deployed during 2015. The performance of the proposed algorithm is evaluated using leave-one-out cross validation and sensitivity analysis. ANNs prove to be the most efficient in SMC estimation yielding R² values between 0.7 and 0.9. The proposed methodology is used to support a hydrological simulation with the HEC-HMS model, applied at the Keramianos basin which is ungauged for SMC. Results and model sensitivity highlight the contribution of combining Sentinel-1 SAR and Landsat 8 images for improving SMC estimates and supporting hydrological studies.
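A minimal sketch of the regression step is given below, using scikit-learn rather than the authors' setup: a small multilayer perceptron evaluated by leave-one-out cross-validation on synthetic predictors (backscatter, NDVI, thermal temperature, incidence angle) and synthetic SMC values.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Hypothetical predictors per station/date (not the study's data).
n = 60
X = np.column_stack([
    rng.uniform(-20, -5, n),      # Sentinel-1 backscatter (dB)
    rng.uniform(0.1, 0.8, n),     # Landsat-8 NDVI
    rng.uniform(285, 315, n),     # thermal-band temperature (K)
    rng.uniform(30, 45, n),       # incidence angle (deg)
])
# Synthetic "ground truth" SMC, for the sketch only.
smc = 0.05 + 0.01 * (X[:, 0] + 20) + 0.2 * X[:, 1] + rng.normal(0, 0.02, n)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                   random_state=0))
pred = cross_val_predict(model, X, smc, cv=LeaveOneOut())
print(f"leave-one-out R^2: {r2_score(smc, pred):.2f}")
```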
Rosa C. Goodman; Douglass F. Jacobs; Robert P. Karrfalt
2006-01-01
This paper discusses the potential to use X-ray image analysis as a rapid and nondestructive test of viability of northern red oak (Quercus rubra L.) acorns and the methodology to do so. Acorns are sensitive to desiccation and lose viability as moisture content (MC) decreases, so we examined X-ray images for cotyledon damage in dried acorns to...
Methodology for determining major constituents of ayahuasca and their metabolites in blood.
McIlhenny, Ethan H; Riba, Jordi; Barbanoj, Manel J; Strassman, Rick; Barker, Steven A
2012-03-01
There is an increasing interest in potential medical applications of ayahuasca, a South American psychotropic plant tea with a long cultural history of indigenous medical and religious use. Clinical research into ayahuasca will require specific, sensitive and comprehensive methods for the characterization and quantitation of these compounds and their metabolites in blood. A combination of two analytical techniques (high-performance liquid chromatography with ultraviolet and/or fluorescence detection and gas chromatography with nitrogen-phosphorus detection) has been used for the analysis of some of the constituents of ayahuasca in blood following its oral consumption. We report here a single methodology for the direct analysis of 14 of the major alkaloid components of ayahuasca, including several known and potential metabolites of N,N-dimethyltryptamine and the harmala alkaloids in blood. The method uses 96-well plate/protein precipitation/filtration for plasma samples, and analysis by HPLC-ion trap-ion trap-mass spectrometry using heated electrospray ionization to reduce matrix effects. The method expands the list of compounds capable of being monitored in blood following ayahuasca administration while providing a simplified approach to their analysis. The method has adequate sensitivity, specificity and reproducibility to make it useful for clinical research with ayahuasca. Copyright © 2011 John Wiley & Sons, Ltd.
DESIGN ANALYSIS FOR THE NAVAL SNF WASTE PACKAGE
DOE Office of Scientific and Technical Information (OSTI.GOV)
T.L. Mitchell
2000-05-31
The purpose of this analysis is to demonstrate the design of the naval spent nuclear fuel (SNF) waste package (WP) using the Waste Package Department's (WPD) design methodologies and processes described in the ''Waste Package Design Methodology Report'' (CRWMS M&O [Civilian Radioactive Waste Management System Management and Operating Contractor] 2000b). The calculations that support the design of the naval SNF WP will be discussed; however, only a sub-set of such analyses will be presented and shall be limited to those identified in the ''Waste Package Design Sensitivity Report'' (CRWMS M&O 2000c). The objective of this analysis is to describe the naval SNF WP design method and to show that the design of the naval SNF WP complies with the ''Naval Spent Nuclear Fuel Disposal Container System Description Document'' (CRWMS M&O 1999a) and Interface Control Document (ICD) criteria for Site Recommendation. Additional criteria for the design of the naval SNF WP have been outlined in Section 6.2 of the ''Waste Package Design Sensitivity Report'' (CRWMS M&O 2000c). The scope of this analysis is restricted to the design of the naval long WP containing one naval long SNF canister. This WP is representative of the WPs that will contain both naval short SNF and naval long SNF canisters. The following items are included in the scope of this analysis: (1) Providing a general description of the applicable design criteria; (2) Describing the design methodology to be used; (3) Presenting the design of the naval SNF waste package; and (4) Showing compliance with all applicable design criteria. The intended use of this analysis is to support Site Recommendation reports and assist in the development of WPD drawings. Activities described in this analysis were conducted in accordance with the technical product development plan (TPDP) ''Design Analysis for the Naval SNF Waste Package (CRWMS M&O 2000a).
Simulation of Attacks for Security in Wireless Sensor Network.
Diaz, Alvaro; Sanchez, Pablo
2016-11-18
The increasing complexity and low-power constraints of current Wireless Sensor Networks (WSN) require efficient methodologies for network simulation and embedded software performance analysis of nodes. In addition, security is also a very important feature that has to be addressed in most WSNs, since they may work with sensitive data and operate in hostile unattended environments. In this paper, a methodology for security analysis of Wireless Sensor Networks is presented. The methodology allows designing attack-aware embedded software/firmware or attack countermeasures to provide security in WSNs. The proposed methodology includes attacker modeling and attack simulation with performance analysis (node's software execution time and power consumption estimation). After an analysis of different WSN attack types, an attacker model is proposed. This model defines three different types of attackers that can emulate most WSN attacks. In addition, this paper presents a virtual platform that is able to model the node hardware, embedded software and basic wireless channel features. This virtual simulation analyzes the embedded software behavior and node power consumption while it takes into account the network deployment and topology. Additionally, this simulator integrates the previously mentioned attacker model. Thus, the impact of attacks on power consumption and software behavior/execution-time can be analyzed. This provides developers with essential information about the effects that one or multiple attacks could have on the network, helping them to develop more secure WSN systems. This WSN attack simulator is an essential element of the attack-aware embedded software development methodology that is also introduced in this work.
Poggi, L A; Malizia, A; Ciparisse, J F; Gaudio, P
2016-10-01
An open issue still under investigation by several international entities working in the safety and security field for the foreseen nuclear fusion reactors is the estimation of source terms that are a hazard for operators and the public, and for the machine itself in terms of efficiency and integrity in case of severe accident scenarios. Source term estimation is a crucial safety issue to be addressed in future reactor safety assessments, and the estimates available at this time are not sufficiently satisfactory. The lack of neutronic data, along with the insufficiently accurate methodologies used until now, calls for an integrated methodology for source term estimation that can provide predictions with adequate accuracy. This work proposes a complete methodology to estimate dust source terms, starting from broad information gathering. The large number of parameters that can influence dust source term production is reduced with statistical tools using a combination of screening, sensitivity analysis, and uncertainty analysis. Finally, a preliminary and simplified methodology for predicting dust source term production in future devices is presented.
Developments in Sensitivity Methodologies and the Validation of Reactor Physics Calculations
Palmiotti, Giuseppe; Salvatores, Massimo
2012-01-01
Sensitivity methodologies have a remarkable record of success in the reactor physics field. Sensitivity coefficients can be used for different objectives, such as uncertainty estimation, design optimization, determination of target accuracy requirements, adjustment of input parameters, and evaluation of the representativity of an experiment with respect to a reference design configuration. A review of the methods used is provided, and several examples illustrate the success of the methodology in reactor physics. A new application, the improvement of basic nuclear parameters using integral experiments, is also described.
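One of the quantities named above, the representativity of an experiment with respect to a reference design, has a standard form: the correlation between the two systems induced by the nuclear-data covariance matrix. The sketch below uses made-up sensitivity vectors and a made-up covariance matrix.

```python
import numpy as np

def representativity(s_exp, s_app, cov):
    """Correlation between experiment and application induced by data covariances."""
    num = s_exp @ cov @ s_app
    den = np.sqrt((s_exp @ cov @ s_exp) * (s_app @ cov @ s_app))
    return num / den

rng = np.random.default_rng(3)
n = 30                                    # number of cross-section parameters (toy size)
s_exp = rng.normal(0, 0.1, n)             # sensitivities of the experiment (made up)
s_app = s_exp + rng.normal(0, 0.03, n)    # sensitivities of the reference design (made up)
L = rng.normal(0, 0.05, (n, n))
cov = L @ L.T + 0.01 * np.eye(n)          # symmetric positive-definite covariance (made up)

print(f"representativity ~ {representativity(s_exp, s_app, cov):.3f}")
```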
Conceptualization of the Complex Outcomes of Sexual Abuse: A Signal Detection Analysis
ERIC Educational Resources Information Center
Pechtel, Pia; Evans, Ian M.; Podd, John V.
2011-01-01
Eighty-five New Zealand based practitioners experienced in treating adults with a history of child sexual abuse participated in an online judgment study of child sexual abuse outcomes using signal detection theory methodology. Participants' level of sensitivity was assessed independent of their degree of response bias when discriminating (a) known…
In an earlier study, Puente and Obregón [Water Resour. Res. 32(1996)2825] reported on the usage of a deterministic fractal–multifractal (FM) methodology to faithfully describe an 8.3 h high-resolution rainfall time series in Boston, gathered every 15 s ...
Prevailing methodologies in the analysis of gene expression data often neglect to incorporate full concentration and time response due to limitations in throughput and sensitivity with traditional microarray approaches. We have developed a high throughput assay suite using primar...
Geomatics for Maritime Parks and Preserved Areas
NASA Astrophysics Data System (ADS)
Lo Tauro, Agata
2009-11-01
The aim of this research is to use hyperspectral MIVIS data for the protection of sensitive cultural and natural resources, nature reserves and maritime parks. Knowledge of the distribution of submerged vegetation is useful for monitoring the health of ecosystems in coastal areas. The objective of this project was to develop a new methodology within a geomatics environment to facilitate analysis and application by local institutions that are not familiar with spatial analysis software, in order to implement new research activities in this field of study. Field controls may be carried out with the support of accurate and novel in situ analyses in order to determine the training sites for the tested classification. The methodology applied demonstrates that the combination of hyperspectral sensors and ESA remote sensing (RS) data can be used to produce thematic cartography of submerged vegetation and land use analysis for sustainable development. This project will be implemented for innovative educational and research programmes.
Analytical Round Robin for Elastic-Plastic Analysis of Surface Cracked Plates: Phase I Results
NASA Technical Reports Server (NTRS)
Wells, D. N.; Allen, P. A.
2012-01-01
An analytical round robin for the elastic-plastic analysis of surface cracks in flat plates was conducted with 15 participants. Experimental results from a surface crack tension test in 2219-T8 aluminum plate provided the basis for the inter-laboratory study (ILS). The study proceeded in a blind fashion given that the analysis methodology was not specified to the participants, and key experimental results were withheld. This approach allowed the ILS to serve as a current measure of the state of the art for elastic-plastic fracture mechanics analysis. The analytical results and the associated methodologies were collected for comparison, and sources of variability were studied and isolated. The results of the study revealed that the J-integral analysis methodology using the domain integral method is robust, providing reliable J-integral values without being overly sensitive to modeling details. General modeling choices such as analysis code, model size (mesh density), crack tip meshing, or boundary conditions, were not found to be sources of significant variability. For analyses controlled only by far-field boundary conditions, the greatest source of variability in the J-integral assessment is introduced through the constitutive model. This variability can be substantially reduced by using crack mouth opening displacements to anchor the assessment. Conclusions provide recommendations for analysis standardization.
Soller, Jeffrey A; Eftim, Sorina E; Nappier, Sharon P
2018-01-01
Understanding pathogen risks is a critically important consideration in the design of water treatment, particularly for potable reuse projects. As an extension to our published microbial risk assessment methodology to estimate infection risks associated with Direct Potable Reuse (DPR) treatment train unit process combinations, herein, we (1) provide an updated compilation of pathogen density data in raw wastewater and dose-response models; (2) conduct a series of sensitivity analyses to consider potential risk implications using updated data; (3) evaluate the risks associated with log credit allocations in the United States; and (4) identify reference pathogen reductions needed to consistently meet currently applied benchmark risk levels. Sensitivity analyses illustrated changes in cumulative annual risks estimates, the significance of which depends on the pathogen group driving the risk for a given treatment train. For example, updates to norovirus (NoV) raw wastewater values and use of a NoV dose-response approach, capturing the full range of uncertainty, increased risks associated with one of the treatment trains evaluated, but not the other. Additionally, compared to traditional log-credit allocation approaches, our results indicate that the risk methodology provides more nuanced information about how consistently public health benchmarks are achieved. Our results indicate that viruses need to be reduced by 14 logs or more to consistently achieve currently applied benchmark levels of protection associated with DPR. The refined methodology, updated model inputs, and log credit allocation comparisons will be useful to regulators considering DPR projects and design engineers as they consider which unit treatment processes should be employed for particular projects. Published by Elsevier Ltd.
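The building blocks of such a risk calculation can be sketched with generic QMRA formulas: an exponential dose-response model and aggregation of daily risks into an annual probability of infection. The densities, dose-response parameter and log reduction below are illustrative, not the study's inputs.

```python
import numpy as np

rng = np.random.default_rng(42)

r = 0.1                      # exponential dose-response parameter (illustrative)
volume_l = 2.0               # daily drinking-water consumption, litres
log_reduction = 12.0         # treatment-train credit for this pathogen (illustrative)
n_days = 365

# Hypothetical raw-wastewater densities (organisms/L), lognormally distributed.
raw = rng.lognormal(mean=np.log(1e4), sigma=1.0, size=n_days)
dose = raw * 10.0 ** (-log_reduction) * volume_l
p_daily = 1.0 - np.exp(-r * dose)              # exponential dose-response model
p_annual = 1.0 - np.prod(1.0 - p_daily)        # at least one infection in a year

print(f"annual probability of infection ~ {p_annual:.2e}")
```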
Imai, Kosuke; Jiang, Zhichao
2018-04-29
The matched-pairs design enables researchers to efficiently infer causal effects from randomized experiments. In this paper, we exploit the key feature of the matched-pairs design and develop a sensitivity analysis for missing outcomes due to truncation by death, in which the outcomes of interest (e.g., quality of life measures) are not even well defined for some units (e.g., deceased patients). Our key idea is that if 2 nearly identical observations are paired prior to the randomization of the treatment, the missingness of one unit's outcome is informative about the potential missingness of the other unit's outcome under an alternative treatment condition. We consider the average treatment effect among always-observed pairs (ATOP) whose units exhibit no missing outcome regardless of their treatment status. The naive estimator based on available pairs is unbiased for the ATOP if 2 units of the same pair are identical in terms of their missingness patterns. The proposed sensitivity analysis characterizes how the bounds of the ATOP widen as the degree of the within-pair similarity decreases. We further extend the methodology to the matched-pairs design in observational studies. Our simulation studies show that informative bounds can be obtained under some scenarios when the proportion of missing data is not too large. The proposed methodology is also applied to the randomized evaluation of the Mexican universal health insurance program. An open-source software package is available for implementing the proposed research. Copyright © 2018 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Siadaty, Moein; Kazazi, Mohsen
2018-04-01
Convective heat transfer, entropy generation and pressure drop of two water-based nanofluids (Cu-water and Al2O3-water) in horizontal annular tubes are scrutinized by means of computational fluid dynamics (CFD), response surface methodology (RSM) and sensitivity analysis. First, central composite design is used to set up a series of experiments with diameter ratio, length-to-diameter ratio, Reynolds number and solid volume fraction as factors. Then, CFD is used to calculate the Nusselt number, Euler number and entropy generation. After that, RSM is applied to fit second-order polynomials to the responses. Finally, sensitivity analysis is conducted for the above-mentioned parameters inside the tube. In total, 62 different cases are examined. CFD results show that Cu-water and Al2O3-water have the highest and lowest heat transfer rates, respectively. In addition, analysis of variance indicates that an increase in solid volume fraction increases the dimensionless pressure drop for Al2O3-water. Moreover, it has a significant negative effect on the Cu-water Nusselt number and an insignificant effect on its Euler number. Analysis of the Bejan number indicates that frictional and thermal entropy generation are the dominant irreversibilities in Al2O3-water and Cu-water flows, respectively. Sensitivity analysis indicates that the sensitivity of the dimensionless pressure drop to tube length for Cu-water is independent of the diameter ratio at different Reynolds numbers.
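The RSM step described above, fitting a second-order polynomial to the computed responses, can be sketched with ordinary least squares on two coded factors; the data below are synthetic stand-ins for the CFD results.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two coded factors (e.g., Reynolds number and solid volume fraction), synthetic response.
x1 = rng.uniform(-1, 1, 40)
x2 = rng.uniform(-1, 1, 40)
y = 5.0 + 1.2 * x1 - 0.6 * x2 + 0.8 * x1 * x2 + 0.5 * x1**2 + rng.normal(0, 0.05, 40)

# Full second-order model: 1, x1, x2, x1*x2, x1^2, x2^2.
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("fitted coefficients:", np.round(coef, 3))

# Local sensitivity of the fitted surface to x1 at the design centre:
# dy/dx1 = b1 + b3*x2 + 2*b4*x1, evaluated at x1 = x2 = 0.
print("dy/dx1 at the centre:", round(coef[1], 3))
```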
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ekmekcioglu, Mehmet, E-mail: meceng3584@yahoo.co; Kaya, Tolga; Kahraman, Cengiz
The use of fuzzy multiple criteria analysis (MCA) in solid waste management has the advantage of rendering subjective and implicit decision making more objective and analytical, with its ability to accommodate both quantitative and qualitative data. In this paper a modified fuzzy TOPSIS methodology is proposed for the selection of an appropriate disposal method and site for municipal solid waste (MSW). Our method is superior to existing methods since it has the capability of representing vague qualitative data and presenting all possible results with different degrees of membership. In the first stage of the proposed methodology, a set of criteria (cost, reliability, feasibility, pollution and emission levels, and waste and energy recovery) is used to determine the best MSW disposal method. Landfilling, composting, conventional incineration, and refuse-derived fuel (RDF) combustion are the alternatives considered. The weights of the selection criteria are determined by fuzzy pairwise comparison matrices of the Analytic Hierarchy Process (AHP). It is found that RDF combustion is the best disposal method alternative for Istanbul. In the second stage, the same methodology is used to determine the optimum RDF combustion plant location using adjacent land use, climate, road access and cost as the criteria. The results of this study illustrate the importance of the weights on the various factors in deciding the optimized location, with the best site located in Catalca. A sensitivity analysis is also conducted to monitor how sensitive our model is to changes in the various criteria weights.
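The closeness-coefficient calculation at the heart of TOPSIS can be sketched in its crisp form (the fuzzy variant in the paper adds membership functions but follows the same steps); the decision matrix and weights below are hypothetical.

```python
import numpy as np

# Rows: landfilling, composting, incineration, RDF combustion (alternatives).
# Columns: cost, reliability, feasibility, emissions, energy recovery (criteria).
X = np.array([[3.0, 5.0, 8.0, 2.0, 1.0],
              [5.0, 6.0, 6.0, 5.0, 3.0],
              [8.0, 7.0, 5.0, 4.0, 6.0],
              [7.0, 8.0, 6.0, 6.0, 8.0]])             # hypothetical scores
w = np.array([0.30, 0.20, 0.15, 0.20, 0.15])           # hypothetical AHP weights
benefit = np.array([False, True, True, False, True])   # cost and emissions are cost criteria

V = w * X / np.linalg.norm(X, axis=0)                  # weighted, vector-normalised matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_plus = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - anti, axis=1)
closeness = d_minus / (d_plus + d_minus)
print(np.round(closeness, 3))    # higher closeness coefficient is better
```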
Ultrasound for Distal Forearm Fracture: A Systematic Review and Diagnostic Meta-Analysis
Douma-den Hamer, Djoke; Blanker, Marco H.; Edens, Mireille A.; Buijteweg, Lonneke N.; Boomsma, Martijn F.; van Helden, Sven H.; Mauritz, Gert-Jan
2016-01-01
Study Objective To determine the diagnostic accuracy of ultrasound for detecting distal forearm fractures. Methods A systematic review and diagnostic meta-analysis was performed according to the PRISMA statement. We searched MEDLINE, Web of Science and the Cochrane Library from inception to September 2015. All prospective studies of the diagnostic accuracy of ultrasound versus radiography as the reference standard were included. We excluded studies with a retrospective design and those with evidence of verification bias. We assessed the methodological quality of the included studies with the QUADAS-2 tool. We performed a meta-analysis of studies evaluating ultrasound to calculate the pooled sensitivity and specificity with 95% confidence intervals (CI95%) using a bivariate model with random effects. Subgroup and sensitivity analyses were used to examine the effect of methodological differences and other study characteristics. Results Out of 867 publications we included 16 studies with 1,204 patients and 641 fractures. The pooled test characteristics for ultrasound were: sensitivity 97% (CI95% 93–99%), specificity 95% (CI95% 89–98%), positive likelihood ratio (LR) 20.0 (8.5–47.2) and negative LR 0.03 (0.01–0.08). The corresponding pooled diagnostic odds ratio (DOR) was 667 (142–3,133). Apparent differences were shown for method of viewing, with the 6-view method showing higher specificity, positive LR, and DOR, compared to the 4-view method. Conclusion The present meta-analysis showed that ultrasound has a high accuracy for the diagnosis of distal forearm fractures in children when a proper viewing method is used. Based on this, ultrasound should be considered a reliable alternative, which has the advantage of being radiation-free. PMID:27196439
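The per-study accuracy measures pooled in the review follow directly from a 2x2 table; a sketch with hypothetical counts is shown below (the pooled estimates themselves come from a bivariate random-effects model, which is beyond this sketch).

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Per-study accuracy measures that feed a diagnostic meta-analysis."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr_pos = sens / (1 - spec)            # positive likelihood ratio
    lr_neg = (1 - sens) / spec            # negative likelihood ratio
    dor = lr_pos / lr_neg                 # diagnostic odds ratio
    return sens, spec, lr_pos, lr_neg, dor

# Hypothetical single-study counts (not taken from the review).
print(diagnostic_metrics(tp=48, fp=3, fn=2, tn=60))
```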
NASA Astrophysics Data System (ADS)
Pommatau, Gilles
2014-06-01
The present paper deals with the industrial application, via software developed by Thales Alenia Space, of a new failure criterion named the "Tsai-Hill equivalent criterion" for composite structural parts of satellites. The first part of the paper briefly describes the main hypotheses and the failure analysis possibilities of the software. The second part recalls the quadratic and conservative nature of the new failure criterion, already presented at an ESA conference in a previous paper. The third part presents the statistical calculation possibilities of the software, and the associated sensitivity analysis, via results obtained on different composites. Then a methodology, proposed to customers and agencies, is presented with its limitations and advantages. It is concluded that this methodology is an efficient industrial way to perform mechanical analysis on quasi-isotropic composite parts.
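The "Tsai-Hill equivalent criterion" itself is specific to the software described, but the classical plane-stress Tsai-Hill index it builds on can be sketched directly; the ply stresses and strengths below are hypothetical.

```python
def tsai_hill_index(s1, s2, t12, X, Y, S):
    """Classical Tsai-Hill plane-stress failure index; failure is predicted at >= 1.
    X, Y, S are the longitudinal, transverse and shear strengths (use the
    tensile or compressive value matching the sign of the stress)."""
    return (s1 / X) ** 2 - (s1 * s2) / X ** 2 + (s2 / Y) ** 2 + (t12 / S) ** 2

# Hypothetical ply stresses (MPa) and strengths (MPa) for a carbon/epoxy ply.
print(tsai_hill_index(s1=600.0, s2=20.0, t12=35.0, X=1500.0, Y=40.0, S=70.0))
```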
NASA Astrophysics Data System (ADS)
Chen, Daniel; Chen, Damian; Yen, Ray; Cheng, Mingjen; Lan, Andy; Ghaskadvi, Rajesh
2008-11-01
Identifying hotspots--structures that limit the lithography process window--becomes increasingly important as the industry relies heavily on RET to print sub-wavelength designs. KLA-Tencor's patented Process Window Qualification (PWQ) methodology has been used for this purpose in various fabs. The PWQ methodology has three key advantages: (a) the PWQ layout, to obtain the best sensitivity; (b) Design Based Binning, for pattern repeater analysis; and (c) intelligent sampling, for the best DOI sampling rate. This paper evaluates two different analysis strategies for SEM review sampling successfully deployed at Inotera Memories, Inc. We propose a new approach combining location repeaters and pattern repeaters. Based on a recent case study, the new sampling flow reduces the data analysis and sampling time from 6 hours to 1.5 hours while maintaining the maximum DOI sample rate.
Collagen morphology and texture analysis: from statistics to classification
Mostaço-Guidolin, Leila B.; Ko, Alex C.-T.; Wang, Fei; Xiang, Bo; Hewko, Mark; Tian, Ganghong; Major, Arkady; Shiomi, Masashi; Sowa, Michael G.
2013-01-01
In this study we present an image analysis methodology capable of quantifying morphological changes in tissue collagen fibril organization caused by pathological conditions. Texture analysis based on first-order statistics (FOS) and second-order statistics such as the gray level co-occurrence matrix (GLCM) was explored to extract second-harmonic generation (SHG) image features that are associated with the structural and biochemical changes of tissue collagen networks. Based on these extracted quantitative parameters, multi-group classification of SHG images was performed. With combined FOS and GLCM texture values, we achieved reliable classification of SHG collagen images acquired from atherosclerotic arteries with >90% accuracy, sensitivity and specificity. The proposed methodology can be applied to a wide range of conditions involving collagen re-modeling, such as in skin disorders, different types of fibrosis and musculoskeletal diseases affecting ligaments and cartilage. PMID:23846580
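The GLCM features mentioned above can be computed with scikit-image; the sketch below uses a random image as a stand-in for an SHG micrograph and averages each property over four directions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # 'greycomatrix' in older scikit-image

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)   # stand-in for an SHG image

# Symmetric, normalised co-occurrence matrix at distance 1 px and four directions.
glcm = graycomatrix(img, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)

features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "correlation", "energy", "homogeneity")}
print(features)
```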
Malá, Zdena; Gebauer, Petr; Boček, Petr
2016-09-07
This article describes for the first time the combination of electrophoretic focusing on inverse electromigration dispersion (EMD) gradient, a new separation principle described in 2010, with electrospray-ionization (ESI) mass spectrometric detection. The separation of analytes along the electromigrating EMD profile proceeds so that each analyte is focused and concentrated within the profile at a particular position given by its pKa and ionic mobility. The proposed methodology combines this principle with the transport of the focused zones to the capillary end by superimposed electromigration, electroosmotic flow and ESI suction, and their detection by the MS detector. The designed electrolyte system based on maleic acid and 2,6-lutidine is suitable to create an inverse EMD gradient of required properties and its components are volatile enough to be compatible with the ESI interface. The characteristic properties of the proposed electrolyte system and of the formed inverse gradient are discussed in detail using calculated diagrams and computer simulations. It is shown that the system is surprisingly robust and allows sensitive analyses of trace amounts of weak acids in the pKa range between approx. 6 and 9. As a first practical application of electrophoretic focusing on inverse EMD gradient, the analysis of several sulfonamides in waters is reported. It demonstrates the potential of the developed methodology for fast and high-sensitivity analyses of ionic trace analytes, with reached LODs around 3 × 10^-9 M (0.8 ng/mL) of sulfonamides in spiked drinking water without any sample pretreatment. Copyright © 2016 Elsevier B.V. All rights reserved.
Sensitivity analysis of navy aviation readiness based sparing model
2017-09-01
variability. Figure 4 (research design flowchart) lays out the four steps of the methodology, starting in the upper left-hand... as a function of changes in key inputs. We develop NAVARM Experimental Designs (NED), a computational tool created by applying a state-of-the-art... experimental design to the NAVARM model. Statistical analysis of the resulting data identifies the most influential cost factors. Those are, in order of
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sung, Yixing; Adams, Brian M.; Secker, Jeffrey R.
2011-12-01
The CASL Level 1 Milestone CASL.P4.01, successfully completed in December 2011, aimed to 'conduct, using methodologies integrated into VERA, a detailed sensitivity analysis and uncertainty quantification of a crud-relevant problem with baseline VERA capabilities (ANC/VIPRE-W/BOA).' The VUQ focus area led this effort, in partnership with AMA, and with support from VRI. DAKOTA was coupled to existing VIPRE-W thermal-hydraulics and BOA crud/boron deposit simulations representing a pressurized water reactor (PWR) that previously experienced crud-induced power shift (CIPS). This work supports understanding of CIPS by exploring the sensitivity and uncertainty in BOA outputs with respect to uncertain operating and model parameters. This report summarizes work coupling the software tools, characterizing uncertainties, and analyzing the results of iterative sensitivity and uncertainty studies. These studies focused on sensitivity and uncertainty of CIPS indicators calculated by the current version of the BOA code used in the industry. Challenges with this kind of analysis are identified to inform follow-on research goals and VERA development targeting crud-related challenge problems.
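The sampling step of such a sensitivity/uncertainty study can be sketched generically with a Latin hypercube design; the sketch below uses SciPy's qmc module, and the parameter names and bounds are hypothetical, not the CASL inputs.

```python
import numpy as np
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=3, seed=0)
unit = sampler.random(n=50)                     # 50 samples in the unit hypercube

# Hypothetical uncertain inputs: core power (%), inlet temperature (K), boron (ppm).
lower = [98.0, 555.0, 800.0]
upper = [102.0, 565.0, 1200.0]
samples = qmc.scale(unit, lower, upper)

for run_id, x in enumerate(samples[:3]):
    print(f"run {run_id}: power={x[0]:.2f}%, T_in={x[1]:.1f} K, boron={x[2]:.0f} ppm")
```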
Ma, Feng-Li; Jiang, Bo; Song, Xiao-Xiao; Xu, An-Gao
2011-01-01
Background High Resolution Melting Analysis (HRMA) is becoming the preferred method for mutation detection. However, its accuracy in the individual clinical diagnostic setting is variable. To assess the diagnostic accuracy of HRMA for human mutations in comparison to DNA sequencing in different routine clinical settings, we have conducted a meta-analysis of published reports. Methodology/Principal Findings Out of 195 publications obtained from the initial search criteria, thirty-four studies assessing the accuracy of HRMA were included in the meta-analysis. We found that HRMA was a highly sensitive test for detecting disease-associated mutations in humans. Overall, the summary sensitivity was 97.5% (95% confidence interval (CI): 96.8–98.5; I2 = 27.0%). Subgroup analysis showed even higher sensitivity for non-HR-1 instruments (sensitivity 98.7% (95%CI: 97.7–99.3; I2 = 0.0%)) and an eligible sample size subgroup (sensitivity 99.3% (95%CI: 98.1–99.8; I2 = 0.0%)). HRMA specificity showed considerable heterogeneity between studies. Sensitivity of the techniques was influenced by sample size and instrument type but by not sample source or dye type. Conclusions/Significance These findings show that HRMA is a highly sensitive, simple and low-cost test to detect human disease-associated mutations, especially for samples with mutations of low incidence. The burden on DNA sequencing could be significantly reduced by the implementation of HRMA, but it should be recognized that its sensitivity varies according to the number of samples with/without mutations, and positive results require DNA sequencing for confirmation. PMID:22194806
Integrated Design Methodology for Highly Reliable Liquid Rocket Engine
NASA Astrophysics Data System (ADS)
Kuratani, Naoshi; Aoki, Hiroshi; Yasui, Masaaki; Kure, Hirotaka; Masuya, Goro
An integrated design methodology is strongly required at the conceptual design phase to achieve highly reliable space transportation systems, especially propulsion systems, not only in Japan but worldwide. In the past, catastrophic failures caused losses of mission and vehicle (LOM/LOV) in the operational phase and severely affected schedules and costs in the later development phases. A design methodology for a highly reliable liquid rocket engine is preliminarily established and investigated in this study. Sensitivity analysis is systematically performed to demonstrate the effectiveness of this methodology and, in particular, to clarify the correlation between the combustion chamber, turbopump and main valve as main components. This study describes the essential issues for understanding the stated correlations, the need to apply this methodology to the remaining critical failure modes in the whole engine system, and the perspective on engine development in the future.
Predicting Failure Progression and Failure Loads in Composite Open-Hole Tension Coupons
NASA Technical Reports Server (NTRS)
Arunkumar, Satyanarayana; Przekop, Adam
2010-01-01
Failure types and failure loads in carbon-epoxy [45n/90n/-45n/0n]ms laminate coupons with central circular holes subjected to tensile load are simulated using a progressive failure analysis (PFA) methodology. The progressive failure methodology is implemented using a VUMAT subroutine within the ABAQUS(TM)/Explicit nonlinear finite element code. The degradation model adopted in the present PFA methodology uses an instantaneous complete stress reduction (COSTR) approach to simulate damage at a material point when failure occurs. In-plane modeling parameters such as element size and shape are held constant in the finite element models, irrespective of laminate thickness and hole size, to predict failure loads and failure progression. Comparison to published test data indicates that this methodology accurately simulates brittle, pull-out and delamination failure types. The sensitivity of the failure progression and the failure load to analysis loading rates and solver precision is demonstrated.
NASA Astrophysics Data System (ADS)
Baietto, Oliviero; Amodeo, Francesco; Giorgis, Ilaria; Vitaliti, Martina
2017-04-01
The quantification of NOA (Naturally Occurring Asbestos) in a rock or soil matrix is complex and subject to numerous errors. The purpose of this study is to compare two fundamental methodologies used for the analysis: the first uses Phase Contrast Optical Microscopy (PCOM), while the second uses Scanning Electron Microscopy (SEM). The two methods, although they provide the same result, namely the ratio of asbestos mass to total mass, have completely different characteristics and both present pros and cons. The current legislation in Italy involves the use of SEM, XRD, FTIR or PCOM (DM 6/9/94) for the quantification of asbestos in bulk materials and soils, and the threshold beyond which the material is considered hazardous waste is an asbestos fibre concentration of 1000 mg/kg (DM 161/2012). The most widely used technology is SEM, which is the one among these with the best analytical sensitivity (120 mg/kg, DM 6/9/94). The fundamental differences among the analyses mainly concern: the amount of the sample portion analyzed, the representativeness of the sample, the resolution, the analytical precision, the uncertainty of the methodology, and operator errors. Because of the quantification limits of XRD and FTIR (1%, DM 6/9/94), our asbestos laboratory (DIATI, POLITO) has applied the PCOM methodology for more than twenty years and, in recent years, the SEM methodology for quantification of asbestos content. The aim of our research is to compare the results obtained from PCOM analysis with the results provided by SEM analysis on the basis of more than 100 natural samples, both from cores (tunnel-boring or exploratory drilling) and from tunnelling excavation. The results obtained show, in most cases, a good correlation between the two techniques. Of particular relevance is the fact that both techniques are reliable for very low quantities of asbestos, even lower than the analytical sensitivity. This work highlights the comparison between the two techniques, emphasizing the strengths and weaknesses of the two procedures, and suggests that an integrated approach, together with the skill and experience of the operator, may be the best way forward in order to obtain a constructive improvement of analysis techniques.
Healy, Sinead; McMahon, Jill; Owens, Peter; Dockery, Peter; FitzGerald, Una
2018-02-01
Image segmentation is often imperfect, particularly in complex image sets such as z-stack micrographs of slice cultures, and there is a need for sufficient detail on the parameters used in quantitative image analysis to allow independent repeatability and appraisal. For the first time, we have critically evaluated, quantified and validated the performance of different segmentation methodologies using z-stack images of ex vivo glial cells. The BioVoxxel toolbox plugin, available in FIJI, was used to measure the relative quality, accuracy, specificity and sensitivity of 16 global and 9 local automatic thresholding algorithms. Automatic thresholding yields improved binary representation of glial cells compared with the conventional user-chosen single threshold approach for confocal z-stacks acquired from ex vivo slice cultures. The performance of threshold algorithms varies considerably in quality, specificity, accuracy and sensitivity, with entropy-based thresholds scoring highest for fluorescent staining. We have used the BioVoxxel toolbox to correctly and consistently select the best automated threshold algorithm to segment z-projected images of ex vivo glial cells for downstream digital image analysis and to define segmentation quality. The automated OLIG2 cell count was validated using stereology. As image segmentation and feature extraction can quite critically affect the performance of successive steps in the image analysis workflow, it is becoming increasingly necessary to consider the quality of digital segmenting methodologies. Here, we have applied, validated and extended an existing performance-check methodology in the BioVoxxel toolbox to z-projected images of ex vivo glial cells. Copyright © 2017 Elsevier B.V. All rights reserved.
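An analogous comparison of automatic global thresholds can be sketched in Python with scikit-image (the study itself uses the BioVoxxel toolbox in FIJI); the test image below is a stand-in for a z-projected micrograph.

```python
import numpy as np
from skimage import data, filters

img = data.camera()    # built-in test image, standing in for a z-projected micrograph

methods = {
    "otsu": filters.threshold_otsu,
    "li": filters.threshold_li,
    "yen": filters.threshold_yen,          # entropy-based
    "triangle": filters.threshold_triangle,
}

for name, fn in methods.items():
    t = fn(img)
    frac = np.mean(img > t)                # foreground fraction after segmentation
    print(f"{name:>8}: threshold={t:.1f}, foreground fraction={frac:.3f}")
```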
Dynamic analysis of process reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shadle, L.J.; Lawson, L.O.; Noel, S.D.
1995-06-01
The approach and methodology of conducting a dynamic analysis are presented in this poster session in order to describe how this type of analysis can be used to evaluate the operation and control of process reactors. Dynamic analysis of the PyGas(TM) gasification process is used to illustrate the utility of this approach. PyGas(TM) is the gasifier being developed for the Gasification Product Improvement Facility (GPIF) by Jacobs-Siffine Engineering and Riley Stoker. In the first step of the analysis, process models are used to calculate the steady-state conditions and associated sensitivities for the process. For the PyGas(TM) gasifier, the process models are non-linear mechanistic models of the jetting fluidized-bed pyrolyzer and the fixed-bed gasifier. These process sensitivities are key inputs, in the form of gain parameters or transfer functions, to the dynamic engineering models.
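The hand-off described above, steady-state sensitivities entering the dynamic model as gains or transfer functions, can be sketched with a first-order transfer function step response; the gain and time constant below are hypothetical.

```python
import numpy as np
from scipy import signal

# Hypothetical linearised response of one gasifier output to a step in one input:
# first-order transfer function G(s) = K / (tau*s + 1).
K, tau = 1.8, 120.0                       # gain from a steady-state sensitivity; time constant (s)
sys = signal.TransferFunction([K], [tau, 1.0])

t, y = signal.step(sys, T=np.linspace(0.0, 600.0, 300))
print(f"response after 600 s: {y[-1]:.3f} (steady-state gain {K})")
```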
On Designing Multicore-Aware Simulators for Systems Biology Endowed with OnLine Statistics
Calcagno, Cristina; Coppo, Mario
2014-01-01
This paper discusses enabling methodologies for the design of a fully parallel, online, interactive tool aiming to support bioinformatics scientists. In particular, the features of these methodologies, supported by the FastFlow parallel programming framework, are shown on a simulation tool used to perform the modeling, tuning, and sensitivity analysis of stochastic biological models. A stochastic simulation needs thousands of independent simulation trajectories, which turn into big data that should be analysed by statistical and data mining tools. In the considered approach the two stages are pipelined in such a way that the simulation stage streams out the partial results of all simulation trajectories to the analysis stage, which immediately produces a partial result. The simulation-analysis workflow is validated for performance and effectiveness of the online analysis in capturing biological systems behavior on a multicore platform and representative proof-of-concept biological systems. The exploited methodologies include pattern-based parallel programming and data streaming, which provide key features to software designers such as performance portability and efficient in-memory (big) data management and movement. Two paradigmatic classes of biological systems exhibiting multistable and oscillatory behavior are used as a testbed. PMID:25050327
On designing multicore-aware simulators for systems biology endowed with OnLine statistics.
Aldinucci, Marco; Calcagno, Cristina; Coppo, Mario; Damiani, Ferruccio; Drocco, Maurizio; Sciacca, Eva; Spinella, Salvatore; Torquati, Massimo; Troina, Angelo
2014-01-01
This paper discusses enabling methodologies for the design of a fully parallel, online, interactive tool aiming to support bioinformatics scientists. In particular, the features of these methodologies, supported by the FastFlow parallel programming framework, are shown on a simulation tool used to perform the modeling, tuning, and sensitivity analysis of stochastic biological models. A stochastic simulation needs thousands of independent simulation trajectories, which turn into big data that should be analysed by statistical and data mining tools. In the considered approach the two stages are pipelined in such a way that the simulation stage streams out the partial results of all simulation trajectories to the analysis stage, which immediately produces a partial result. The simulation-analysis workflow is validated for performance and effectiveness of the online analysis in capturing biological systems behavior on a multicore platform and representative proof-of-concept biological systems. The exploited methodologies include pattern-based parallel programming and data streaming, which provide key features to software designers such as performance portability and efficient in-memory (big) data management and movement. Two paradigmatic classes of biological systems exhibiting multistable and oscillatory behavior are used as a testbed.
Mosely, Jackie A; Stokes, Peter; Parker, David; Dyer, Philip W; Messinis, Antonis M
2018-02-01
A novel method has been developed that enables chemical compounds to be transferred from an inert atmosphere glove box and into the atmospheric pressure ion source of a mass spectrometer whilst retaining a controlled chemical environment. This innovative method is simple and cheap to implement on some commercially available mass spectrometers. We have termed this approach inert atmospheric pressure solids analysis probe (iASAP) and demonstrate the benefit of this methodology for two air-/moisture-sensitive chemical compounds whose characterisation by mass spectrometry is now possible and easily achieved. The simplicity of the design means that moving between iASAP and standard ASAP is straightforward and quick, providing a highly flexible platform with rapid sample turnaround.
Sabharwal, Sanjeeve; Carter, Alexander; Darzi, Lord Ara; Reilly, Peter; Gupte, Chinmay M
2015-06-01
Approximately 76,000 people a year sustain a hip fracture in the UK and the estimated cost to the NHS is £1.4 billion a year. Health economic evaluations (HEEs) are one of the methods employed by decision makers to deliver healthcare policy supported by clinical and economic evidence. The objective of this study was to (1) identify and characterize HEEs for the management of patients with hip fractures, and (2) examine their methodological quality. A literature search was performed in MEDLINE, EMBASE and the NHS Economic Evaluation Database. Studies that met the specified definition for a HEE and evaluated hip fracture management were included. Methodological quality was assessed using the Consensus on Health Economic Criteria (CHEC). Twenty-seven publications met the inclusion criteria of this study and were included in our descriptive and methodological analysis. Domains of methodology that performed poorly included use of an appropriate time horizon (66.7% of studies), incremental analysis of costs and outcomes (63%), future discounting (44.4%), sensitivity analysis (40.7%), declaration of conflicts of interest (37%) and discussion of ethical considerations (29.6%). The number of published HEEs for patients with hip fractures has increased in recent years. Most of these studies fail to adopt a societal perspective, and key aspects of their methodology are poor. The development of future HEEs in this field must adhere to established principles of methodology, so that better quality research can be used to inform health policy on the management of patients with a hip fracture. Copyright © 2014 Royal College of Surgeons of Edinburgh (Scottish charity number SC005317) and Royal College of Surgeons in Ireland. Published by Elsevier Ltd. All rights reserved.
Advanced proteomic liquid chromatography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Fang; Smith, Richard D.; Shen, Yufeng
2012-10-26
Liquid chromatography coupled with mass spectrometry is the predominant platform used to analyze proteomics samples consisting of large numbers of proteins and their proteolytic products (e.g., truncated polypeptides) and spanning a wide range of relative concentrations. This review provides an overview of advanced capillary liquid chromatography techniques and methodologies that greatly improve separation resolving power and proteomics analysis coverage, sensitivity, and throughput.
ERIC Educational Resources Information Center
Wang, Yan Z.; Wiley, Angela R.; Zhou, Xiaobin
2007-01-01
This study used a mixed methodology to investigate reliability, validity, and analysis level with Chinese immigrant observational data. European-American and Chinese coders quantitatively rated 755 minutes of Chinese immigrant parent-toddler dinner interactions on parental sensitivity, intrusiveness, detachment, negative affect, positive affect,…
Applications of Quantum Cascade Laser Spectroscopy in the Analysis of Pharmaceutical Formulations.
Galán-Freyle, Nataly J; Pacheco-Londoño, Leonardo C; Román-Ospino, Andrés D; Hernandez-Rivera, Samuel P
2016-09-01
Quantum cascade laser spectroscopy was used to quantify active pharmaceutical ingredient content in a model formulation. The analyses were conducted in non-contact mode by mid-infrared diffuse reflectance. Measurements were carried out at a distance of 15 cm, covering the spectral range 1000-1600 cm(-1). Calibrations were generated by applying multivariate analysis using partial least squares models. Among the figures of merit of the proposed methodology are the high analytical sensitivity equivalent to 0.05% active pharmaceutical ingredient in the formulation, high repeatability (2.7%), high reproducibility (5.4%), and low limit of detection (1%). The relatively high power of the quantum-cascade-laser-based spectroscopic system resulted in the design of detection and quantification methodologies for pharmaceutical applications with high accuracy and precision that are comparable to those of methodologies based on near-infrared spectroscopy, attenuated total reflection mid-infrared Fourier transform infrared spectroscopy, and Raman spectroscopy. © The Author(s) 2016.
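As a rough illustration of how a partial least squares calibration of this kind is typically built from mid-infrared spectra, the sketch below fits a PLS model and reports basic figures of merit on synthetic data. The spectra, component count, and noise level are invented for the example and do not reproduce the paper's calibration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

# Synthetic "spectra": 60 samples x 300 channels, API content 0-10 %w/w.
n_samples, n_channels = 60, 300
y = rng.uniform(0, 10, n_samples)                      # reference API content
pure_band = np.exp(-0.5 * ((np.arange(n_channels) - 150) / 12.0) ** 2)
X = np.outer(y, pure_band) + rng.normal(0, 0.05, (n_samples, n_channels))

pls = PLSRegression(n_components=3)
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()     # cross-validated predictions

rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
r2 = 1 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"RMSECV = {rmsecv:.3f} %w/w, cross-validated R^2 = {r2:.3f}")
```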
Sensitivity analysis of reactive ecological dynamics.
Verdy, Ariane; Caswell, Hal
2008-08-01
Ecological systems with asymptotically stable equilibria may exhibit significant transient dynamics following perturbations. In some cases, these transient dynamics include the possibility of excursions away from the equilibrium before the eventual return; systems that exhibit such amplification of perturbations are called reactive. Reactivity is a common property of ecological systems, and the amplification can be large and long-lasting. The transient response of a reactive ecosystem depends on the parameters of the underlying model. To investigate this dependence, we develop sensitivity analyses for indices of transient dynamics (reactivity, the amplification envelope, and the optimal perturbation) in both continuous- and discrete-time models written in matrix form. The sensitivity calculations require expressions, some of them new, for the derivatives of equilibria, eigenvalues, singular values, and singular vectors, obtained using matrix calculus. Sensitivity analysis provides a quantitative framework for investigating the mechanisms leading to transient growth. We apply the methodology to a predator-prey model and a size-structured food web model. The results suggest predator-driven and prey-driven mechanisms for transient amplification resulting from multispecies interactions.
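For a continuous-time linearized system dx/dt = Ax, reactivity is the maximum instantaneous growth rate of perturbations, given by the dominant eigenvalue of the symmetric part of A, and the amplification envelope at time t is the largest singular value of the propagator exp(At). The sketch below computes those two indices numerically (not the sensitivity formulas derived in the paper); the community matrix is arbitrary.

```python
import numpy as np
from scipy.linalg import expm, eigvalsh, svdvals

A = np.array([[-1.0, 3.0],
              [ 0.2, -0.8]])   # arbitrary stable community matrix (illustrative)

# Reactivity: dominant eigenvalue of the Hermitian part H(A) = (A + A.T)/2.
reactivity = eigvalsh((A + A.T) / 2.0).max()

# Amplification envelope rho(t): largest singular value of exp(A t).
times = np.linspace(0.0, 10.0, 101)
rho = np.array([svdvals(expm(A * t)).max() for t in times])

print(f"reactivity = {reactivity:.3f} (positive => reactive system)")
print(f"max amplification = {rho.max():.3f} at t = {times[rho.argmax()]:.2f}")
```

Sensitivity analysis then asks how these indices change as entries of A change, which the paper addresses with matrix calculus.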
Li, Guoliang; Cui, Yanyan; You, Jinmao; Zhao, Xianen; Sun, Zhiwei; Xia, Lian; Suo, Yourui; Wang, Xiao
2011-04-01
Analysis of trace amino acids (AA) in physiological fluids has received increasing attention, because the analysis of these compounds can provide fundamental and important information for medical, biological, and clinical research. A more accurate method for the determination of these compounds is therefore highly desirable and valuable. In the present study, we developed a selective and sensitive method for trace AA determination in biological samples using 2-[2-(7H-dibenzo [a,g]carbazol-7-yl)-ethoxy] ethyl chloroformate (DBCEC) as labeling reagent by HPLC-FLD-MS/MS. Response surface methodology (RSM) was first employed to optimize the derivatization reaction between DBCEC and AA. Compared with a traditional single-factor design, RSM reduced labor, time, and reagent consumption. The complete derivatization can be achieved within 6.3 min at room temperature. In conjunction with a gradient elution, a baseline resolution of 20 AA containing acidic, neutral, and basic AA was achieved on a reversed-phase Hypersil BDS C(18) column. This method showed excellent reproducibility and correlation coefficients, and offered detection limits of 0.19-1.17 fmol/μL. The developed method was successfully applied to determine AA in human serum. The sensitivity and prognostic value of serum AA for liver diseases are also discussed.
Vázquez-Morón, Sonia; Ryan, Pablo; Ardizone-Jiménez, Beatriz; Martín, Dolores; Troya, Jesus; Cuevas, Guillermo; Valencia, Jorge; Jimenez-Sousa, María A; Avellón, Ana; Resino, Salvador
2018-01-30
Both hepatitis C virus (HCV) infection and human immunodeficiency virus (HIV) infection are underdiagnosed, particularly in low-income countries and in difficult-to-access populations. Our aim was to develop and evaluate a methodology for the detection of HCV and HIV infection based on capillary dry blood spot (DBS) samples taken under real-world conditions. We carried out a cross-sectional study of 139 individuals (31 healthy controls, 68 HCV-monoinfected patients, and 40 HCV/HIV-coinfected patients). ELISA was used for anti-HCV and anti-HIV antibody detection; and SYBR Green RT-PCR was used for HCV-RNA detection. The HIV serological analysis revealed 100% sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). The HCV serological analysis revealed a sensitivity of 92.6%, specificity of 100%, PPV of 100%, and NPV of 79.5%. Finally, the HCV-RNA detection test revealed a detection limit of 5 copies/µl with an efficiency of 100% and sensitivity of 99.1%, specificity of 100%, PPV of 100%, and NPV of 96.9%. In conclusion, our methodology was able to detect both HCV infection and HIV infection from the same DBS sample with good diagnostic performance. Screening for HCV and HIV using DBS might be a key strategy in the implementation of national programs for the control of both infections.
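The performance figures quoted above (sensitivity, specificity, PPV, NPV) all follow from a 2 x 2 table of test results against the reference standard. A minimal helper for computing them is sketched below with illustrative counts chosen to roughly match the serology figures quoted above; it is not the study's analysis code.

```python
def diagnostic_performance(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),   # true positives among infected
        "specificity": tn / (tn + fp),   # true negatives among uninfected
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Hypothetical counts for an anti-HCV DBS assay (108 infected, 31 controls):
print(diagnostic_performance(tp=100, fp=0, fn=8, tn=31))
```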
Simulation of Attacks for Security in Wireless Sensor Network
Diaz, Alvaro; Sanchez, Pablo
2016-01-01
The increasing complexity and low-power constraints of current Wireless Sensor Networks (WSN) require efficient methodologies for network simulation and embedded software performance analysis of nodes. In addition, security is also a very important feature that has to be addressed in most WSNs, since they may work with sensitive data and operate in hostile unattended environments. In this paper, a methodology for security analysis of Wireless Sensor Networks is presented. The methodology allows designing attack-aware embedded software/firmware or attack countermeasures to provide security in WSNs. The proposed methodology includes attacker modeling and attack simulation with performance analysis (node’s software execution time and power consumption estimation). After an analysis of different WSN attack types, an attacker model is proposed. This model defines three different types of attackers that can emulate most WSN attacks. In addition, this paper presents a virtual platform that is able to model the node hardware, embedded software and basic wireless channel features. This virtual simulation analyzes the embedded software behavior and node power consumption while it takes into account the network deployment and topology. Additionally, this simulator integrates the previously mentioned attacker model. Thus, the impact of attacks on power consumption and software behavior/execution-time can be analyzed. This provides developers with essential information about the effects that one or multiple attacks could have on the network, helping them to develop more secure WSN systems. This WSN attack simulator is an essential element of the attack-aware embedded software development methodology that is also introduced in this work. PMID:27869710
Lee, Juneyoung; Kim, Kyung Won; Choi, Sang Hyun; Huh, Jimi
2015-01-01
Meta-analysis of diagnostic test accuracy studies differs from the usual meta-analysis of therapeutic/interventional studies in that, it is required to simultaneously analyze a pair of two outcome measures such as sensitivity and specificity, instead of a single outcome. Since sensitivity and specificity are generally inversely correlated and could be affected by a threshold effect, more sophisticated statistical methods are required for the meta-analysis of diagnostic test accuracy. Hierarchical models including the bivariate model and the hierarchical summary receiver operating characteristic model are increasingly being accepted as standard methods for meta-analysis of diagnostic test accuracy studies. We provide a conceptual review of statistical methods currently used and recommended for meta-analysis of diagnostic test accuracy studies. This article could serve as a methodological reference for those who perform systematic review and meta-analysis of diagnostic test accuracy studies. PMID:26576107
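As a simplified illustration of why the paired outcomes need joint treatment, the sketch below computes study-level sensitivity/specificity pairs and a naive inverse-variance summary on the logit scale. This is a univariate fixed-effect shortcut, not the bivariate or HSROC models recommended in the review, and the study counts are invented.

```python
import numpy as np

# (TP, FN, FP, TN) per study -- invented counts for illustration.
studies = [(45, 5, 8, 42), (30, 10, 5, 55), (60, 12, 20, 80), (25, 3, 2, 40)]

def pooled_logit(events, totals):
    """Inverse-variance fixed-effect pooling on the logit scale."""
    p = np.array(events) / np.array(totals)
    logit = np.log(p / (1 - p))
    var = 1.0 / (np.array(totals) * p * (1 - p))   # delta-method variance
    w = 1.0 / var
    pooled = np.sum(w * logit) / np.sum(w)
    return 1.0 / (1.0 + np.exp(-pooled))

tp, fn, fp, tn = map(np.array, zip(*studies))
print("pooled sensitivity:", round(pooled_logit(tp, tp + fn), 3))
print("pooled specificity:", round(pooled_logit(tn, tn + fp), 3))
```

The bivariate model improves on this by modeling the between-study correlation of the two logits and their random effects jointly.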
Swider, Paweł; Lewtak, Jan P; Gryko, Daniel T; Danikiewicz, Witold
2013-10-01
Porphyrinoid chemistry is greatly dependent on the data obtained by mass spectrometry. For this reason, it is essential to determine the range of applicability of mass spectrometry ionization methods. In this study, the sensitivity of three different atmospheric pressure ionization techniques, electrospray ionization, atmospheric pressure chemical ionization and atmospheric pressure photoionization, was tested for several porphyrinoids and their metallocomplexes. The electrospray ionization method was shown to be the best ionization technique because of its high sensitivity for derivatives of cyanocobalamin, free-base corroles and porphyrins. In the case of metallocorroles and metalloporphyrins, atmospheric pressure photoionization with dopant proved to be the most sensitive ionization method. It was also shown that for relatively acidic compounds, particularly for corroles, the negative ion mode provides better sensitivity than the positive ion mode. The results supply relevant information on the methodology of porphyrinoid analysis carried out by mass spectrometry. This information can be useful in designing future MS or liquid chromatography-MS experiments. Copyright © 2013 John Wiley & Sons, Ltd.
Non-destructive fraud detection in rosehip oil by MIR spectroscopy and chemometrics.
Santana, Felipe Bachion de; Gontijo, Lucas Caixeta; Mitsutake, Hery; Mazivila, Sarmento Júnior; Souza, Leticia Maria de; Borges Neto, Waldomiro
2016-10-15
Rosehip oil (Rosa eglanteria L.) is an important oil in the food, pharmaceutical and cosmetic industries. However, due to its high added value, it is liable to adulteration with other cheaper or lower quality oils. With this perspective, this work provides a new simple, fast and accurate methodology using mid-infrared (MIR) spectroscopy and partial least squares discriminant analysis (PLS-DA) as a means to discriminate authentic rosehip oil from adulterated rosehip oil containing soybean, corn and sunflower oils in different proportions. The model showed excellent sensitivity and specificity with 100% correct classification. Therefore, the developed methodology is a viable alternative for use in the laboratory and industry for standard quality analysis of rosehip oil since it is fast, accurate and non-destructive. Copyright © 2016 Elsevier Ltd. All rights reserved.
Tsunami and shelf resonance on the northern Chile coast
NASA Astrophysics Data System (ADS)
Cortés, Pablo; Catalán, Patricio A.; Aránguiz, Rafael; Bellotti, Giorgio
2017-09-01
This work presents an analysis of long-wave resonance in two of the main cities along the northern coast of Chile, Arica and Iquique, where a large tsunamigenic potential remains despite recent earthquakes. By combining a modal analysis solving the equation of free surface oscillations with the analysis of background spectra derived from in situ measurements, the spatial and temporal structures of the modes are recovered. Comparison with spectra from three tsunamis of different characteristics shows that the modes found have been excited by past events. Moreover, the two locations show different response patterns. Arica is more sensitive to the characteristics of the tsunami source, whereas Iquique shows a smaller dependency and a similar response for different tsunami events. Results are further compared with other methodologies with good agreement. These findings are relevant in characterizing the tsunami hazard in the area, and the methodology can be further extended to other regions along the Chilean coast.
Li, Siying; Koch, Gary G; Preisser, John S; Lam, Diana; Sanchez-Kam, Matilde
2017-01-01
Dichotomous endpoints in clinical trials have only two possible outcomes, either directly or via categorization of an ordinal or continuous observation. It is common to have missing data for one or more visits during a multi-visit study. This paper presents a closed form method for sensitivity analysis of a randomized multi-visit clinical trial that possibly has missing not at random (MNAR) dichotomous data. Counts of missing data are redistributed to the favorable and unfavorable outcomes mathematically to address possibly informative missing data. Adjusted proportion estimates and their closed form covariance matrix estimates are provided. Treatment comparisons over time are addressed with Mantel-Haenszel adjustment for a stratification factor and/or randomization-based adjustment for baseline covariables. The application of such sensitivity analyses is illustrated with an example. An appendix outlines an extension of the methodology to ordinal endpoints.
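The abstract outlines the idea of redistributing missing counts to the favorable and unfavorable outcomes under a range of assumptions. A small sketch of that tipping-point style calculation is given below; the redistribution fractions and counts are invented, and the paper's closed-form covariance and Mantel-Haenszel adjustments are not reproduced.

```python
def adjusted_proportion(favorable, unfavorable, missing, pi_missing_favorable):
    """Adjusted favorable-outcome proportion when a fraction `pi_missing_favorable`
    of the missing observations is assumed (MNAR) to have been favorable."""
    n = favorable + unfavorable + missing
    return (favorable + pi_missing_favorable * missing) / n

# Hypothetical arm at one visit: 60 favorable, 30 unfavorable, 10 missing.
for pi in (0.0, 0.25, 0.5, 0.75, 1.0):
    p = adjusted_proportion(60, 30, 10, pi)
    print(f"assumed favorable fraction among missing = {pi:.2f} -> adjusted p = {p:.3f}")
```

Sweeping the assumed fraction in each arm shows how far the missing-data mechanism would have to depart from missing-at-random before the treatment comparison changes.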
Dimitrov, S; Detroyer, A; Piroird, C; Gomes, C; Eilstein, J; Pauloin, T; Kuseva, C; Ivanova, H; Popova, I; Karakolev, Y; Ringeissen, S; Mekenyan, O
2016-12-01
When searching for alternative methods to animal testing, confidently rescaling an in vitro result to the corresponding in vivo classification is still a challenging problem. Although one of the most important factors affecting good correlation is sample characteristics, they are very rarely integrated into correlation studies. Usually, in these studies, it is implicitly assumed that both compared values are error-free numbers, which they are not. In this work, we propose a general methodology to analyze and integrate data variability and thus confidence estimation when rescaling from one test to another. The methodology is demonstrated through the case study of rescaling the in vitro Direct Peptide Reactivity Assay (DPRA) reactivity to the in vivo Local Lymph Node Assay (LLNA) skin sensitization potency classifications. In a first step, a comprehensive statistical analysis evaluating the reliability and variability of LLNA and DPRA as such was done. These results allowed us to link the concept of gray zones and confidence probability, which in turn represents a new perspective for a more precise knowledge of the classification of chemicals within their in vivo OR in vitro test. Next, the novelty and practical value of our methodology introducing variability into the threshold optimization between the in vitro AND in vivo test resides in the fact that it attributes a confidence probability to the predicted classification. The methodology, classification and screening approach presented in this study are not restricted to skin sensitization only. They could be helpful also for fate, toxicity and health hazard assessment where plenty of in vitro and in chemico assays and/or QSAR models are available. Copyright © 2016 John Wiley & Sons, Ltd.
Wäscher, Sebastian; Salloch, Sabine; Ritter, Peter; Vollmann, Jochen; Schildmann, Jan
2017-05-01
This article describes a process of developing, implementing and evaluating a clinical ethics support service intervention with the goal of building up a context-sensitive structure of minimal clinical-ethics in an oncology department without prior clinical ethics structure. Scholars from different disciplines have called for an improvement in the evaluation of clinical ethics support services (CESS) for different reasons over several decades. However, while a lot has been said about the concepts and methodological challenges of evaluating CESS up to the present time, relatively few empirical studies have been carried out. The aim of this article is twofold. On the one hand, it describes a process of development, modifying and evaluating a CESS intervention as part of the ETHICO research project, using the approach of qualitative-formative evaluation. On the other hand, it provides a methodological analysis which specifies the contribution of qualitative empirical methods to the (formative) evaluation of CESS. We conclude with a consideration of the strengths and limitations of qualitative evaluation research with regards to the evaluation and development of context sensitive CESS. We further discuss our own approach in contrast to rather traditional consult or committee models. © 2017 John Wiley & Sons Ltd.
Feizizadeh, Bakhtiar; Jankowski, Piotr; Blaschke, Thomas
2014-03-01
GIS multicriteria decision analysis (MCDA) techniques are increasingly used in landslide susceptibility mapping for the prediction of future hazards, land use planning, as well as for hazard preparedness. However, the uncertainties associated with MCDA techniques are inevitable and model outcomes are open to multiple types of uncertainty. In this paper, we present a systematic approach to uncertainty and sensitivity analysis. We assess the uncertainty of landslide susceptibility maps produced with GIS-MCDA techniques. A new spatially-explicit approach and Dempster-Shafer Theory (DST) are employed to assess the uncertainties associated with two MCDA techniques, namely Analytical Hierarchical Process (AHP) and Ordered Weighted Averaging (OWA) implemented in GIS. The methodology is composed of three different phases. First, weights are computed to express the relative importance of factors (criteria) for landslide susceptibility. Next, the uncertainty and sensitivity of landslide susceptibility is analyzed as a function of weights using Monte Carlo Simulation and Global Sensitivity Analysis. Finally, the results are validated using a landslide inventory database and by applying DST. The comparisons of the obtained landslide susceptibility maps of both MCDA techniques with known landslides show that the AHP outperforms OWA. However, the OWA-generated landslide susceptibility map shows lower uncertainty than the AHP-generated map. The results demonstrate that further improvement in the accuracy of GIS-based MCDA can be achieved by employing an integrated uncertainty-sensitivity analysis approach, in which the uncertainty of the landslide susceptibility model is decomposed and attributed to the model's criteria weights.
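To make the "uncertainty and sensitivity of landslide susceptibility as a function of weights" step concrete, the sketch below perturbs AHP-style criteria weights with Monte Carlo sampling and records how often the ranking of two locations changes. The criteria scores, the weight perturbation model, and the switch-rate metric are assumptions for illustration, not the paper's GIS workflow.

```python
import numpy as np

rng = np.random.default_rng(42)

# Standardized criteria scores (rows: locations, cols: criteria such as slope,
# lithology, land cover ...) -- invented values for two candidate cells.
scores = np.array([[0.8, 0.3, 0.6, 0.5],
                   [0.6, 0.7, 0.4, 0.6]])
base_weights = np.array([0.40, 0.25, 0.20, 0.15])   # nominal AHP weights

def susceptibility(weights):
    return scores @ weights                          # weighted linear combination

switches = 0
n_draws = 10_000
for _ in range(n_draws):
    # Perturb weights with +/-20% relative noise, then renormalize to sum to 1.
    w = base_weights * rng.uniform(0.8, 1.2, base_weights.size)
    w /= w.sum()
    if np.argmax(susceptibility(w)) != np.argmax(susceptibility(base_weights)):
        switches += 1

print(f"ranking switch rate under weight uncertainty: {switches / n_draws:.3f}")
```

A global sensitivity analysis extends this by attributing the variance of the susceptibility output to each weight rather than only counting rank reversals.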
[Enzymatic analysis of the quality of foodstuffs].
Kolesnov, A Iu
1997-01-01
Enzymatic analysis is an independent and separate branch of enzymology and analytical chemistry. It has become one of the most important methodologies used in food analysis. Enzymatic analysis allows the quick, reliable determination of many food ingredients. Often these contents cannot be determined by conventional methods, or if methods are available, they are determined only with limited accuracy. Today, methods of enzymatic analysis are being increasingly used in the investigation of foodstuffs. Enzymatic measurement techniques are used in industry, scientific and food inspection laboratories for quality analysis. This article describes the requirements of an optimal analytical method: specificity, sample preparation, assay performance, precision, sensitivity, time requirement, analysis cost, safety of reagents.
PCR methodology as a valuable tool for identification of endodontic pathogens.
Siqueira, José F; Rôças, Isabela N
2003-07-01
This paper reviews the principles of polymerase chain reaction (PCR) methodology, its application in identification of endodontic pathogens and the perspectives regarding the knowledge to be reached with the use of this highly sensitive, specific and accurate methodology as a microbial identification test. Studies published in the medical, dental and biological literature. Evaluation of published epidemiological studies examining the endodontic microbiota through PCR methodology. PCR technology has enabled the detection of bacterial species that are difficult or even impossible to culture as well as cultivable bacterial strains showing a phenotypically divergent or convergent behaviour. Moreover, PCR is more rapid, much more sensitive, and more accurate when compared with culture. Its use in endodontics to investigate the microbiota associated with infected root canals has expanded the knowledge on the bacteria involved in the pathogenesis of periradicular diseases. For instance, Tannerella forsythensis (formerly Bacteroides forsythus), Treponema denticola, other Treponema species, Dialister pneumosintes, and Prevotella tannerae were detected in infected root canals for the first time and in high prevalence when using PCR analysis. The diversity of endodontic microbiota has been demonstrated by studies using PCR amplification, cloning and sequencing of the PCR products. Moreover, other fastidious bacterial species, such as Porphyromonas endodontalis, Porphyromonas gingivalis and some Eubacterium spp., have been reported in endodontic infections at a higher prevalence than those reported by culture procedures.
A Screening Method for Assessing Cumulative Impacts
Alexeeff, George V.; Faust, John B.; August, Laura Meehan; Milanes, Carmen; Randles, Karen; Zeise, Lauren; Denton, Joan
2012-01-01
The California Environmental Protection Agency (Cal/EPA) Environmental Justice Action Plan calls for guidelines for evaluating “cumulative impacts.” As a first step toward such guidelines, a screening methodology for assessing cumulative impacts in communities was developed. The method, presented here, is based on the working definition of cumulative impacts adopted by Cal/EPA [1]: “Cumulative impacts means exposures, public health or environmental effects from the combined emissions and discharges in a geographic area, including environmental pollution from all sources, whether single or multi-media, routinely, accidentally, or otherwise released. Impacts will take into account sensitive populations and socio-economic factors, where applicable and to the extent data are available.” The screening methodology is built on this definition as well as current scientific understanding of environmental pollution and its adverse impacts on health, including the influence of both intrinsic, biological factors and non-intrinsic socioeconomic factors in mediating the effects of pollutant exposures. It addresses disparities in the distribution of pollution and health outcomes. The methodology provides a science-based tool to screen places for relative cumulative impacts, incorporating both the pollution burden on a community (including exposures to pollutants and their public health and environmental effects) and community characteristics, specifically sensitivity and socioeconomic factors. The screening methodology provides relative rankings to distinguish more highly impacted communities from less impacted ones. It may also help identify which factors are the greatest contributors to a community’s cumulative impact. It is not designed to provide quantitative estimates of community-level health impacts. A pilot screening analysis is presented here to illustrate the application of this methodology. Once guidelines are adopted, the methodology can serve as a screening tool to help Cal/EPA programs prioritize their activities and target those communities with the greatest cumulative impacts. PMID:22470315
Assessing the risk profiles of potentially sensitive populations requires a 'tool chest' of methodological approaches to adequately characterize and evaluate these populations. At present, there is an extensive body of literature on methodologies that apply to the evaluation of...
Herrera, Melina E; Mobilia, Liliana N; Posse, Graciela R
2011-01-01
The objective of this study is to perform a comparative evaluation of the prediffusion and minimum inhibitory concentration (MIC) methods for the detection of sensitivity to colistin, and to detect Acinetobacter baumannii-calcoaceticus complex (ABC) heteroresistant isolates to colistin. We studied 75 isolates of ABC recovered from clinically significant samples obtained from various centers. Sensitivity to colistin was determined by prediffusion as well as by MIC. All the isolates were sensitive to colistin, with MIC = 2 µg/ml. The results were analyzed by dispersion graph and linear regression analysis, revealing that the prediffusion method did not correlate with the MIC values for isolates sensitive to colistin (r² = 0.2017). Detection of heteroresistance to colistin was determined by plaque efficiency of all the isolates with the same initial MICs of 2, 1, and 0.5 µg/ml, which revealed an increase in the MIC for 14 of them, greater than 8-fold in some cases. When the sensitivity of these resistant colonies was determined by prediffusion, the resulting dispersion graph and linear regression analysis yielded an r² = 0.604, which revealed a correlation between the methodologies used.
Fuzzy multicriteria disposal method and site selection for municipal solid waste.
Ekmekçioğlu, Mehmet; Kaya, Tolga; Kahraman, Cengiz
2010-01-01
The use of fuzzy multiple criteria analysis (MCA) in solid waste management has the advantage of rendering subjective and implicit decision making more objective and analytical, with its ability to accommodate both quantitative and qualitative data. In this paper a modified fuzzy TOPSIS methodology is proposed for the selection of appropriate disposal method and site for municipal solid waste (MSW). Our method is superior to existing methods since it has the capability of representing vague qualitative data and presenting all possible results with different degrees of membership. In the first stage of the proposed methodology, a set of criteria of cost, reliability, feasibility, pollution and emission levels, waste and energy recovery is optimized to determine the best MSW disposal method. Landfilling, composting, conventional incineration, and refuse-derived fuel (RDF) combustion are the alternatives considered. The weights of the selection criteria are determined by fuzzy pairwise comparison matrices of Analytic Hierarchy Process (AHP). It is found that RDF combustion is the best disposal method alternative for Istanbul. In the second stage, the same methodology is used to determine the optimum RDF combustion plant location using adjacent land use, climate, road access and cost as the criteria. The results of this study illustrate the importance of the weights on the various factors in deciding the optimized location, with the best site located in Catalca. A sensitivity analysis is also conducted to monitor how sensitive our model is to changes in the various criteria weights. 2010 Elsevier Ltd. All rights reserved.
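The crisp core of the TOPSIS step (before the fuzzy extension used in the paper) can be written in a few lines: normalize the decision matrix, weight it, and rank alternatives by closeness to the ideal and anti-ideal solutions. The scores and weights below are invented for the four disposal alternatives and do not come from the authors' fuzzy model.

```python
import numpy as np

alternatives = ["landfill", "composting", "incineration", "RDF combustion"]
# Columns: cost, reliability, feasibility, pollution, energy recovery (invented).
X = np.array([[3.0, 6.0, 8.0, 3.0, 2.0],
              [5.0, 5.0, 6.0, 6.0, 3.0],
              [8.0, 7.0, 5.0, 5.0, 6.0],
              [7.0, 8.0, 6.0, 4.0, 8.0]])
weights = np.array([0.25, 0.20, 0.15, 0.20, 0.20])
benefit = np.array([False, True, True, False, True])  # cost & pollution: lower is better

R = X / np.linalg.norm(X, axis=0)          # vector normalization
V = R * weights                            # weighted normalized matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_plus = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - anti, axis=1)
closeness = d_minus / (d_plus + d_minus)   # higher = closer to ideal

for name, c in sorted(zip(alternatives, closeness), key=lambda t: -t[1]):
    print(f"{name:15s} closeness = {c:.3f}")
```

The fuzzy variant replaces the crisp scores and weights with fuzzy numbers (e.g. triangular memberships) and defuzzifies the closeness coefficients before ranking.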
Biederman, J; Hammerness, P; Sadeh, B; Peremen, Z; Amit, A; Or-Ly, H; Stern, Y; Reches, A; Geva, A; Faraone, S V
2017-05-01
A previous small study suggested that Brain Network Activation (BNA), a novel ERP-based brain network analysis, may have diagnostic utility in attention deficit hyperactivity disorder (ADHD). In this study we examined the diagnostic capability of a new advanced version of the BNA methodology on a larger population of adults with and without ADHD. Subjects were unmedicated right-handed 18- to 55-year-old adults of both sexes with and without a DSM-IV diagnosis of ADHD. We collected EEG while the subjects were performing a response inhibition task (Go/NoGo) and then applied a spatio-temporal Brain Network Activation (BNA) analysis of the EEG data. This analysis produced a display of qualitative measures of brain states (BNA scores) providing information on cortical connectivity. This complex set of scores was then fed into a machine learning algorithm. The BNA analysis of the EEG data recorded during the Go/NoGo task demonstrated a high discriminative capacity between ADHD patients and controls (AUC = 0.92, specificity = 0.95, sensitivity = 0.86 for the Go condition; AUC = 0.84, specificity = 0.91, sensitivity = 0.76 for the NoGo condition). BNA methodology can help differentiate between ADHD and healthy controls based on functional brain connectivity. The data support the utility of the tool to augment clinical examinations by objective evaluation of electrophysiological changes associated with ADHD. Results also support a network-based approach to the study of ADHD.
Aramendía, Maite; Rello, Luis; Vanhaecke, Frank; Resano, Martín
2012-10-16
Collection of biological fluids on clinical filter papers shows important advantages from a logistic point of view, although analysis of these specimens is far from straightforward. Concerning urine analysis, and particularly when direct trace elemental analysis by laser ablation-inductively coupled plasma mass spectrometry (LA-ICPMS) is the aim, several problems arise, such as lack of sensitivity or different distribution of the analytes on the filter paper, rendering obtaining reliable quantitative results quite difficult. In this paper, a novel approach for urine collection is proposed, which circumvents many of these problems. This methodology consists of the use of precut filter paper discs where large amounts of sample can be retained upon a single deposition. This provides higher amounts of the target analytes and, thus, sufficient sensitivity, and allows addition of an adequate internal standard at the clinical lab prior to analysis, therefore making it suitable for a strategy based on unsupervised sample collection and subsequent analysis at referral centers. On the basis of this sampling methodology, an analytical method was developed for the direct determination of several elements in urine (Be, Bi, Cd, Co, Cu, Ni, Sb, Sn, Tl, Pb, and V) at the low μg L(-1) level by means of LA-ICPMS. The method developed provides good results in terms of accuracy and LODs (≤1 μg L(-1) for most of the analytes tested), with a precision in the range of 15%, fit-for-purpose for clinical control analysis.
Woo, Sungmin; Suh, Chong Hyun; Cho, Jeong Yeon; Kim, Sang Youn; Kim, Seung Hyup
2017-11-01
The purpose of this article is to systematically review and perform a meta-analysis of the diagnostic performance of CT for diagnosis of fat-poor angiomyolipoma (AML) in patients with renal masses. MEDLINE and EMBASE were systematically searched up to February 2, 2017. We included diagnostic accuracy studies that used CT for diagnosis of fat-poor AML in patients with renal masses, using pathologic examination as the reference standard. Two independent reviewers assessed the methodologic quality using the Quality Assessment of Diagnostic Accuracy Studies-2 tool. Sensitivity and specificity of included studies were calculated and were pooled and plotted in a hierarchic summary ROC plot. Sensitivity analyses using several clinically relevant covariates were performed to explore heterogeneity. Fifteen studies (2258 patients) were included. Pooled sensitivity and specificity were 0.67 (95% CI, 0.48-0.81) and 0.97 (95% CI, 0.89-0.99), respectively. Substantial and considerable heterogeneity was present with regard to sensitivity and specificity (I² = 91.21% and 78.53%, respectively). At sensitivity analyses, the specificity estimates were comparable and consistently high across all subgroups (0.93-1.00), but sensitivity estimates showed significant variation (0.14-0.82). Studies using pixel distribution analysis (n = 3) showed substantially lower sensitivity estimates (0.14; 95% CI, 0.04-0.40) compared with the remaining 12 studies (0.81; 95% CI, 0.76-0.85). CT shows moderate sensitivity and excellent specificity for diagnosis of fat-poor AML in patients with renal masses. When methods other than pixel distribution analysis are used, better sensitivity can be achieved.
Blending protein separation and peptide analysis through real-time proteolytic digestion.
Slysz, Gordon W; Schriemer, David C
2005-03-15
Typical liquid- or gel-based protein separations require enzymatic digestion as an important first step in generating protein identifications. Traditional protocols involve long-term proteolytic digestion of the separated protein, often leading to sample loss and reduced sensitivity. Previously, we presented a rapid method of proteolytic digestion that showed excellent digestion of resistant and low concentrations of protein without requiring reduction and alkylation. Here, we demonstrate on-line, real-time tryptic digestion in conjunction with reversed-phase protein separation. The studies were aimed at optimizing pH and ionic strength and the size of the digestion element, to produce maximal protein digestion with minimal effects on chromatographic integrity. Upon establishing optimal conditions, the digestion element was attached downstream from a capillary C4 reversed-phase column. A four-protein mixture was processed through the combined system, and the resulting peptides were analyzed on-line by electrospray mass spectrometry. Extracted ion chromatograms for protein chromatography based on peptide elution were generated. These were shown to emulate ion chromatograms produced in a subsequent run without the digestion element, based on protein elution. The methodology will enable rapid and sensitive analysis of liquid-based protein separations using the power of bottom-up proteomics methodologies.
Sensitivity of surface meteorological analyses to observation networks
NASA Astrophysics Data System (ADS)
Tyndall, Daniel Paul
A computationally efficient variational analysis system for two-dimensional meteorological fields is developed and described. This analysis approach is most efficient when the number of analysis grid points is much larger than the number of available observations, such as for large domain mesoscale analyses. The analysis system is developed using MATLAB software and can take advantage of multiple processors or processor cores. A version of the analysis system has been exported as a platform independent application (i.e., can be run on Windows, Linux, or Macintosh OS X desktop computers without a MATLAB license) with input/output operations handled by commonly available internet software combined with data archives at the University of Utah. The impact of observation networks on the meteorological analyses is assessed by utilizing a percentile ranking of individual observation sensitivity and impact, which is computed by using the adjoint of the variational surface assimilation system. This methodology is demonstrated using a case study of the analysis from 1400 UTC 27 October 2010 over the entire contiguous United States domain. The sensitivity of this approach to the dependence of the background error covariance on observation density is examined. Observation sensitivity and impact provide insight on the influence of observations from heterogeneous observing networks as well as serve as objective metrics for quality control procedures that may help to identify stations with significant siting, reporting, or representativeness issues.
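The variational analysis and its adjoint-based observation sensitivity can be summarized compactly. In standard data assimilation notation (not copied from the dissertation), the analysis minimizes a background-plus-observation cost function, the analysis increment is given by the gain matrix, and the sensitivity of any scalar analysis functional to the observations follows from the adjoint (transpose) of that gain:

```latex
J(\mathbf{x}) = \tfrac{1}{2}(\mathbf{x}-\mathbf{x}_b)^{\mathsf T}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b)
              + \tfrac{1}{2}(\mathbf{H}\mathbf{x}-\mathbf{y})^{\mathsf T}\mathbf{R}^{-1}(\mathbf{H}\mathbf{x}-\mathbf{y}),
\qquad
\mathbf{x}_a = \mathbf{x}_b + \mathbf{K}\,(\mathbf{y}-\mathbf{H}\mathbf{x}_b),
\qquad
\mathbf{K} = \mathbf{B}\mathbf{H}^{\mathsf T}\!\left(\mathbf{H}\mathbf{B}\mathbf{H}^{\mathsf T}+\mathbf{R}\right)^{-1},
\qquad
\frac{\partial J_a}{\partial \mathbf{y}} = \mathbf{K}^{\mathsf T}\,\frac{\partial J_a}{\partial \mathbf{x}_a}.
```

Here x_b is the background, y the observations, H the observation operator, and B and R the background and observation error covariances; ranking the elements of the observation-space sensitivity vector is what yields the percentile rankings of individual observation impact described above.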
Pool, Jan J. M.; van Tulder, Maurits W.; Riphagen, Ingrid I.; de Vet, Henrica C. W.
2006-01-01
Clinical provocative tests of the neck, which position the neck and arm in order to aggravate or relieve arm symptoms, are commonly used in clinical practice in patients with a suspected cervical radiculopathy. Their diagnostic accuracy, however, has never been examined in a systematic review. A comprehensive search was conducted in order to identify all possible studies fulfilling the inclusion criteria. A study was included if: (1) any provocative test of the neck for diagnosing cervical radiculopathy was identified; (2) any reference standard was used; (3) sensitivity and specificity were reported or could be (re-)calculated; and, (4) the publication was a full report. Two reviewers independently selected studies, and assessed methodological quality. Only six studies met the inclusion criteria, which evaluated five provocative tests. In general, Spurling’s test demonstrated low to moderate sensitivity and high specificity, as did traction/neck distraction, and Valsalva’s maneuver. The upper limb tension test (ULTT) demonstrated high sensitivity and low specificity, while the shoulder abduction test demonstrated low to moderate sensitivity and moderate to high specificity. Common methodological flaws included lack of an optimal reference standard, disease progression bias, spectrum bias, and review bias. Limitations include few primary studies, substantial heterogeneity, and numerous methodological flaws among the studies; therefore, a meta-analysis was not conducted. This review suggests that, when consistent with the history and other physical findings, a positive Spurling’s, traction/neck distraction, and Valsalva’s might be indicative of a cervical radiculopathy, while a negative ULTT might be used to rule it out. However, the lack of evidence precludes any firm conclusions regarding their diagnostic value, especially when used in primary care. More high quality studies are necessary in order to resolve this issue. PMID:17013656
Spatial pattern recognition of seismic events in South West Colombia
NASA Astrophysics Data System (ADS)
Benítez, Hernán D.; Flórez, Juan F.; Duque, Diana P.; Benavides, Alberto; Lucía Baquero, Olga; Quintero, Jiber
2013-09-01
Recognition of seismogenic zones in geographical regions supports seismic hazard studies. This recognition is usually based on visual, qualitative and subjective analysis of data. Spatial pattern recognition provides a well founded means to obtain relevant information from large amounts of data. The purpose of this work is to identify and classify spatial patterns in instrumental data of the South West Colombian seismic database. In this research, clustering tendency analysis validates whether the seismic database possesses a clustering structure. A non-supervised fuzzy clustering algorithm creates groups of seismic events. Given the sensitivity of fuzzy clustering algorithms to the initial centroid positions, we propose a centroid initialization methodology that generates partitions that are stable with respect to the initialization. As a result of this work, a public software tool provides the user with the routines developed for the clustering methodology. The analysis of the seismogenic zones obtained reveals meaningful spatial patterns in South-West Colombia. The clustering analysis provides a quantitative location and dispersion of seismogenic zones that facilitates seismological interpretations of seismic activities in South West Colombia.
Label-free SPR detection of gluten peptides in urine for non-invasive celiac disease follow-up.
Soler, Maria; Estevez, M-Carmen; Moreno, Maria de Lourdes; Cebolla, Angel; Lechuga, Laura M
2016-05-15
Motivated by the necessity of new and efficient methods for dietary gluten control of celiac patients, we have developed a simple and highly sensitive SPR biosensor for the detection of gluten peptides in urine. The sensing methodology enables rapid and label-free quantification of the gluten immunogenic peptides (GIP) by using G12 mAb. The overall performance of the biosensor has been in-depth optimized and evaluated in terms of sensitivity, selectivity and reproducibility, reaching a limit of detection of 0.33 ng mL(-1). Besides, the robustness and stability of the methodology permit the continuous use of the biosensor for more than 100 cycles with excellent repeatability. Special efforts have been focused on preventing and minimizing possible interferences coming from urine matrix enabling a direct analysis in this fluid without requiring extraction or purification procedures. Our SPR biosensor has proven to detect and identify gluten consumption by evaluating urine samples from healthy and celiac individuals with different dietary gluten conditions. This novel biosensor methodology represents a novel approach to quantify the digested gluten peptides in human urine with outstanding sensitivity in a rapid and non-invasive manner. Our technique should be considered as a promising opportunity to develop Point-of-Care (POC) devices for an efficient, simple and accurate gluten free diet (GFD) monitoring as well as therapy follow-up of celiac disease patients. Copyright © 2015 Elsevier B.V. All rights reserved.
Functional-diversity indices can be driven by methodological choices and species richness.
Poos, Mark S; Walker, Steven C; Jackson, Donald A
2009-02-01
Functional diversity is an important concept in community ecology because it captures information on functional traits absent in measures of species diversity. One popular method of measuring functional diversity is the dendrogram-based method, FD. To calculate FD, a variety of methodological choices are required, and it has been debated about whether biological conclusions are sensitive to such choices. We studied the probability that conclusions regarding FD were sensitive, and that patterns in sensitivity were related to alpha and beta components of species richness. We developed a randomization procedure that iteratively calculated FD by assigning species into two assemblages and calculating the probability that the community with higher FD varied across methods. We found evidence of sensitivity in all five communities we examined, ranging from a probability of sensitivity of 0 (no sensitivity) to 0.976 (almost completely sensitive). Variations in these probabilities were driven by differences in alpha diversity between assemblages and not by beta diversity. Importantly, FD was most sensitive when it was most useful (i.e., when differences in alpha diversity were low). We demonstrate that trends in functional-diversity analyses can be largely driven by methodological choices or species richness, rather than functional trait information alone.
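To make the FD calculation and its methodological choices concrete, the sketch below computes dendrogram-based functional diversity as the total branch length of a tree built from a species-by-trait matrix, and shows how the value shifts when the distance metric changes. The trait data are random, and this is only one of several possible FD implementations, not the exact procedure of the paper.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, to_tree
from scipy.spatial.distance import pdist

def functional_diversity(traits, metric="euclidean", method="average"):
    """Dendrogram-based FD: total branch length of the functional dendrogram."""
    Z = linkage(pdist(traits, metric=metric), method=method)
    total = 0.0
    def walk(node):
        nonlocal total
        if node.is_leaf():
            return
        for child in (node.left, node.right):
            total += node.dist - child.dist   # length of the edge to this child
            walk(child)
    walk(to_tree(Z))
    return total

rng = np.random.default_rng(1)
traits = rng.normal(size=(12, 4))             # 12 species x 4 functional traits

# The same assemblage yields different FD values under different choices,
# which is the sensitivity the paper quantifies across communities.
for metric in ("euclidean", "cityblock"):
    print(metric, round(functional_diversity(traits, metric=metric), 3))
```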
A Comparative Analysis of Disaster Risk, Vulnerability and Resilience Composite Indicators.
Beccari, Benjamin
2016-03-14
In the past decade significant attention has been given to the development of tools that attempt to measure the vulnerability, risk or resilience of communities to disasters. Particular attention has been given to the development of composite indices to quantify these concepts mirroring their deployment in other fields such as sustainable development. Whilst some authors have published reviews of disaster vulnerability, risk and resilience composite indicator methodologies, these have been of a limited nature. This paper seeks to dramatically expand these efforts by analysing 106 composite indicator methodologies to understand the breadth and depth of practice. An extensive search of the academic and grey literature was undertaken for composite indicator and scorecard methodologies that addressed multiple/all hazards; included social and economic aspects of risk, vulnerability or resilience; were sub-national in scope; explained the method and variables used; focussed on the present-day; and, had been tested or implemented. Information on the index construction, geographic areas of application, variables used and other relevant data was collected and analysed. Substantial variety in construction practices of composite indicators of risk, vulnerability and resilience were found. Five key approaches were identified in the literature, with the use of hierarchical or deductive indices being the most common. Typically variables were chosen by experts, came from existing statistical datasets and were combined by simple addition with equal weights. A minimum of 2 variables and a maximum of 235 were used, although approximately two thirds of methodologies used less than 40 variables. The 106 methodologies used 2298 unique variables, the most frequently used being common statistical variables such as population density and unemployment rate. Classification of variables found that on average 34% of the variables used in each methodology related to the social environment, 25% to the disaster environment, 20% to the economic environment, 13% to the built environment, 6% to the natural environment and 3% were other indices. However variables specifically measuring action to mitigate or prepare for disasters only comprised 12%, on average, of the total number of variables in each index. Only 19% of methodologies employed any sensitivity or uncertainty analysis and in only a single case was this comprehensive. A number of potential limitations of the present state of practice and how these might impact on decision makers are discussed. In particular the limited deployment of sensitivity and uncertainty analysis and the low use of direct measures of disaster risk, vulnerability and resilience could significantly limit the quality and reliability of existing methodologies. Recommendations for improvements to indicator development and use are made, as well as suggested future research directions to enhance the theoretical and empirical knowledge base for composite indicator development.
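The most common construction reported above (expert-chosen statistical variables, normalization, equal weights, simple addition) can be expressed in a few lines. The sketch below uses invented data for three regions and is only meant to show the mechanics of such an index, not any specific published methodology.

```python
import numpy as np

regions = ["A", "B", "C"]
# Invented raw variables: population density, unemployment rate, hospital beds per 1,000.
raw = np.array([[5200.0, 7.1, 2.4],
                [ 850.0, 4.3, 3.8],
                [2300.0, 9.8, 1.1]])
# Direction of each variable with respect to vulnerability (+1 increases it, -1 decreases it).
direction = np.array([+1, +1, -1])

# Min-max normalize to [0, 1], flip the variables that reduce vulnerability.
norm = (raw - raw.min(axis=0)) / (raw.max(axis=0) - raw.min(axis=0))
norm = np.where(direction > 0, norm, 1.0 - norm)

index = norm.mean(axis=1)                     # equal weights, simple addition (rescaled)
for r, v in sorted(zip(regions, index), key=lambda t: -t[1]):
    print(f"region {r}: composite vulnerability score = {v:.2f}")
```

A sensitivity or uncertainty analysis of such an index, which the review finds is rarely done, would repeat this calculation under alternative normalizations, weights, and aggregation rules and report how the rankings change.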
Pujol-Vila, F; Vigués, N; Díaz-González, M; Muñoz-Berbel, X; Mas, J
2015-05-15
Global urban and industrial growth, with the associated environmental contamination, is promoting the development of rapid and inexpensive general toxicity methods. Current microbial methodologies for general toxicity determination rely on either bioluminescent bacteria and specific medium solution (i.e. Microtox(®)) or low sensitivity and diffusion limited protocols (i.e. amperometric microbial respirometry). In this work, a fast and sensitive optical toxicity bioassay based on dual wavelength analysis of bacterial ferricyanide reduction kinetics is presented, using Escherichia coli as a bacterial model. Ferricyanide reduction kinetic analysis (variation of ferricyanide absorption with time), much more sensitive than single absorbance measurements, allowed for direct and fast toxicity determination without pre-incubation steps (assay time = 10 min) and minimizing biomass interference. Dual wavelength analysis at 405 (ferricyanide and biomass) and 550 nm (biomass), allowed for ferricyanide monitoring without interference of biomass scattering. On the other hand, refractive index (RI) matching with saccharose reduced bacterial light scattering around 50%, expanding the analytical linear range in the determination of absorbent molecules. With this method, different toxicants such as metals and organic compounds were analyzed with good sensitivities. Half maximal effective concentrations (EC50) obtained after 10 min bioassay, 2.9, 1.0, 0.7 and 18.3 mg L(-1) for copper, zinc, acetic acid and 2-phenylethanol respectively, were in agreement with previously reported values for longer bioassays (around 60 min). This method represents a promising alternative for fast and sensitive water toxicity monitoring, opening the possibility of quick in situ analysis. Copyright © 2014 Elsevier B.V. All rights reserved.
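The dual-wavelength correction and the EC50 estimation described above can be illustrated with a short numerical sketch. The absorbance traces, dose-response values, and Hill model below are invented for the example and do not come from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

# --- Dual-wavelength kinetic read-out (invented traces) -----------------------
t = np.linspace(0, 10, 11)                 # assay time, minutes
scatter = 0.30 + 0.002 * t                 # biomass scattering, present at both wavelengths
a550 = scatter                             # 550 nm: biomass only
a405 = 1.00 - 0.045 * t + scatter          # 405 nm: ferricyanide + biomass
corrected = a405 - a550                    # ferricyanide signal alone
rate = -linregress(t, corrected).slope     # reduction rate (absorbance units / min)
print(f"ferricyanide reduction rate = {rate:.3f} AU/min")

# --- EC50 from inhibition of that rate across toxicant doses ------------------
def hill(c, ec50, h):
    """Fractional inhibition of the reduction rate at concentration c."""
    return c**h / (ec50**h + c**h)

conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])            # mg/L, invented
inhibition = np.array([0.03, 0.10, 0.27, 0.52, 0.78, 0.93])  # invented response
(ec50, h), _ = curve_fit(hill, conc, inhibition, p0=[1.0, 1.0])
print(f"EC50 ~ {ec50:.2f} mg/L (Hill slope {h:.2f})")
```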
Simoens, Steven
2013-01-01
Objectives This paper aims to assess the methodological quality of economic evaluations included in Belgian reimbursement applications for Class 1 drugs. Materials and Methods For 19 reimbursement applications submitted during 2011 and Spring 2012, a descriptive analysis assessed the methodological quality of the economic evaluation, evaluated the assessment of that economic evaluation by the Drug Reimbursement Committee and the response to that assessment by the company. Compliance with methodological guidelines issued by the Belgian Healthcare Knowledge Centre was assessed using a detailed checklist of 23 methodological items. The rate of compliance was calculated based on the number of economic evaluations for which the item was applicable. Results Economic evaluations tended to comply with guidelines regarding perspective, target population, subgroup analyses, comparator, use of comparative clinical data and final outcome measures, calculation of costs, incremental analysis, discounting and time horizon. However, more attention needs to be paid to the description of limitations of indirect comparisons, the choice of an appropriate analytic technique, the expression of unit costs in values for the current year, the estimation and valuation of outcomes, the presentation of results of sensitivity analyses, and testing the face validity of model inputs and outputs. Also, a large variation was observed in the scope and depth of the quality assessment by the Drug Reimbursement Committee. Conclusions Although general guidelines exist, pharmaceutical companies and the Drug Reimbursement Committee would benefit from the existence of a more detailed checklist of methodological items that need to be reported in an economic evaluation. PMID:24386474
Alam, Maksudul; Deng, Xinwei; Philipson, Casandra; Bassaganya-Riera, Josep; Bisset, Keith; Carbo, Adria; Eubank, Stephen; Hontecillas, Raquel; Hoops, Stefan; Mei, Yongguo; Abedi, Vida; Marathe, Madhav
2015-01-01
Agent-based models (ABM) are widely used to study immune systems, providing a procedural and interactive view of the underlying system. The interaction of components and the behavior of individual objects is described procedurally as a function of the internal states and the local interactions, which are often stochastic in nature. Such models typically have complex structures and consist of a large number of modeling parameters. Determining the key modeling parameters which govern the outcomes of the system is very challenging. Sensitivity analysis plays a vital role in quantifying the impact of modeling parameters in massively interacting systems, including large complex ABM. The high computational cost of executing simulations impedes running experiments with exhaustive parameter settings. Existing techniques of analyzing such a complex system typically focus on local sensitivity analysis, i.e. one parameter at a time, or a close “neighborhood” of particular parameter settings. However, such methods are not adequate to measure the uncertainty and sensitivity of parameters accurately because they overlook the global impacts of parameters on the system. In this article, we develop novel experimental design and analysis techniques to perform both global and local sensitivity analysis of large-scale ABMs. The proposed method can efficiently identify the most significant parameters and quantify their contributions to outcomes of the system. We demonstrate the proposed methodology for ENteric Immune SImulator (ENISI), a large-scale ABM environment, using a computational model of immune responses to Helicobacter pylori colonization of the gastric mucosa. PMID:26327290
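To illustrate the local-versus-global distinction discussed above, the following sketch uses a cheap toy function as a stand-in for an expensive ABM run; the function, parameter ranges, sample size and the squared-correlation "global" measure are placeholder choices (a rough proxy for Sobol-type indices), not the experimental design developed in the article.

    import numpy as np

    rng = np.random.default_rng(0)

    def model(x1, x2, x3):
        # Toy stand-in for an expensive ABM outcome; includes an interaction term
        # that local one-at-a-time analysis tends to miss.
        return x1 + 0.1 * x2 + 2.0 * x1 * x3

    base = np.array([0.0, 0.5, 0.5])   # nominal parameter setting (x1 deliberately small)
    step = 0.05

    # Local (one-at-a-time) sensitivity around the nominal point.
    local = []
    for i in range(3):
        hi, lo = base.copy(), base.copy()
        hi[i] += step
        lo[i] -= step
        local.append((model(*hi) - model(*lo)) / (2 * step))

    # Crude global measure: sample the whole unit cube and attribute output variance
    # to each parameter via squared correlation (a rough stand-in for Sobol indices).
    X = rng.uniform(0, 1, size=(5000, 3))
    Y = model(X[:, 0], X[:, 1], X[:, 2])
    global_r2 = [np.corrcoef(X[:, i], Y)[0, 1] ** 2 for i in range(3)]

    print("local derivatives:", np.round(local, 3))
    print("global R^2 shares:", np.round(global_r2, 3))

With the nominal x1 near zero, the one-at-a-time derivative for x3 vanishes even though x3 clearly drives the output across the full parameter cube, which is the blind spot of purely local analysis that the abstract refers to.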
Feizizadeh, Bakhtiar; Jankowski, Piotr; Blaschke, Thomas
2014-01-01
GIS multicriteria decision analysis (MCDA) techniques are increasingly used in landslide susceptibility mapping for the prediction of future hazards, land use planning, as well as for hazard preparedness. However, the uncertainties associated with MCDA techniques are inevitable and model outcomes are open to multiple types of uncertainty. In this paper, we present a systematic approach to uncertainty and sensitivity analysis. We assess the uncertainty of landslide susceptibility maps produced with GIS-MCDA techniques. A new spatially-explicit approach and Dempster–Shafer Theory (DST) are employed to assess the uncertainties associated with two MCDA techniques, namely Analytical Hierarchical Process (AHP) and Ordered Weighted Averaging (OWA) implemented in GIS. The methodology is composed of three different phases. First, weights are computed to express the relative importance of factors (criteria) for landslide susceptibility. Next, the uncertainty and sensitivity of landslide susceptibility is analyzed as a function of weights using Monte Carlo Simulation and Global Sensitivity Analysis. Finally, the results are validated using a landslide inventory database and by applying DST. The comparisons of the obtained landslide susceptibility maps of both MCDA techniques with known landslides show that the AHP outperforms OWA. However, the OWA-generated landslide susceptibility map shows lower uncertainty than the AHP-generated map. The results demonstrate that further improvement in the accuracy of GIS-based MCDA can be achieved by employing an integrated uncertainty–sensitivity analysis approach, in which the uncertainty of the landslide susceptibility model is decomposed and attributed to the model's criteria weights. PMID:25843987
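A bare-bones sketch of propagating criteria-weight uncertainty through a weighted-linear-combination susceptibility score, in the spirit of the Monte Carlo step described above; the criterion values, nominal weights and Dirichlet concentration parameter are hypothetical, and the actual study works on full raster maps with AHP/OWA aggregation rather than this toy example.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical standardized criterion values for a handful of map cells
    # (rows = cells, columns = criteria such as slope, lithology, land cover).
    criteria = rng.uniform(0, 1, size=(6, 4))
    w_nominal = np.array([0.4, 0.3, 0.2, 0.1])    # e.g. AHP-derived weights

    # Monte Carlo: jitter the weights around their nominal values while keeping
    # them positive and summing to one, then recompute the susceptibility score.
    n_runs = 2000
    scores = np.empty((n_runs, criteria.shape[0]))
    for r in range(n_runs):
        w = rng.dirichlet(w_nominal * 50)         # concentration controls spread
        scores[r] = criteria @ w

    # Cell-wise uncertainty: spread of the score under weight uncertainty.
    print("mean score  :", np.round(scores.mean(axis=0), 3))
    print("std (uncert):", np.round(scores.std(axis=0), 3))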
SCALE 6.2 Continuous-Energy TSUNAMI-3D Capabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perfetti, Christopher M; Rearden, Bradley T
2015-01-01
The TSUNAMI (Tools for Sensitivity and UNcertainty Analysis Methodology Implementation) capabilities within the SCALE code system make use of sensitivity coefficients for an extensive number of criticality safety applications, such as quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different systems, quantifying computational biases, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved ease of use and fidelity and the desire to extend TSUNAMI analysis to advanced applications have motivated the development of a SCALE 6.2 module for calculating sensitivity coefficients using three-dimensional (3D) continuous-energy (CE) Monte Carlo methods: CE TSUNAMI-3D. This paper provides an overview of the theory, implementation, and capabilities of the CE TSUNAMI-3D sensitivity analysis methods. CE TSUNAMI contains two methods for calculating sensitivity coefficients in eigenvalue sensitivity applications: (1) the Iterated Fission Probability (IFP) method and (2) the Contributon-Linked eigenvalue sensitivity/Uncertainty estimation via Track length importance CHaracterization (CLUTCH) method. This work also presents the GEneralized Adjoint Response in Monte Carlo method (GEAR-MC), a first-of-its-kind approach for calculating adjoint-weighted, generalized response sensitivity coefficients, such as flux responses or reaction rate ratios, in CE Monte Carlo applications. The accuracy and efficiency of the CE TSUNAMI-3D eigenvalue sensitivity methods are assessed from a user perspective in a companion publication, and the accuracy and features of the CE TSUNAMI-3D GEAR-MC methods are detailed in this paper.
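For orientation, the quantity such tools compute is the relative sensitivity of the system eigenvalue k to a nuclear data parameter, for example a cross section Σ; in standard notation (a textbook definition, not a quotation from this record):

    S_{k,\Sigma} = \frac{\Sigma}{k}\,\frac{\partial k}{\partial \Sigma} \approx \frac{\delta k / k}{\delta \Sigma / \Sigma}

i.e. the fractional change in k per fractional change in Σ, with adjoint weighting used to estimate it from forward/adjoint information rather than by repeated perturbed calculations.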
NASA Technical Reports Server (NTRS)
Campbell, B. H.
1974-01-01
A methodology, developed for the balanced design of spacecraft subsystems, which interrelates cost, performance, safety, and schedule considerations, was refined. The methodology consists of a two-step process: the first step selects all hardware designs which satisfy the given performance and safety requirements; the second step estimates the cost and schedule required to design, build, and operate each spacecraft design. Using this methodology to develop a systems cost/performance model allows the user of such a model to establish specific designs and the related costs and schedule. The user is able to determine the sensitivity of design, costs, and schedules to changes in requirements. The resulting systems cost/performance model is described and implemented as a digital computer program.
Fines classification based on sensitivity to pore-fluid chemistry
Jang, Junbong; Santamarina, J. Carlos
2016-01-01
The 75-μm particle size is used to discriminate between fine and coarse grains. Further analysis of fine grains is typically based on the plasticity chart. Whereas pore-fluid-chemistry-dependent soil response is a salient and distinguishing characteristic of fine grains, pore-fluid chemistry is not addressed in current classification systems. Liquid limits obtained with electrically contrasting pore fluids (deionized water, 2-M NaCl brine, and kerosene) are combined to define the soil “electrical sensitivity.” Liquid limit and electrical sensitivity can be effectively used to classify fine grains according to their fluid-soil response into no-, low-, intermediate-, or high-plasticity fine grains of low, intermediate, or high electrical sensitivity. The proposed methodology benefits from the accumulated experience with liquid limit in the field and addresses the needs of a broader range of geotechnical engineering problems.
Modeling Single-Event Transient Propagation in a SiGe BiCMOS Direct-Conversion Receiver
NASA Astrophysics Data System (ADS)
Ildefonso, Adrian; Song, Ickhyun; Tzintzarov, George N.; Fleetwood, Zachary E.; Lourenco, Nelson E.; Wachter, Mason T.; Cressler, John D.
2017-08-01
The propagation of single-event transient (SET) signals in a silicon-germanium direct-conversion receiver carrying modulated data is explored. A theoretical analysis of transient propagation, verified by simulation, is presented. A new methodology to characterize and quantify the impact of SETs in communication systems carrying modulated data is proposed. The proposed methodology uses a pulsed radiation source to induce distortions in the signal constellation. The error vector magnitude due to SETs can then be calculated to quantify errors. Two different modulation schemes were simulated: QPSK and 16-QAM. The distortions in the constellation diagram agree with the presented circuit theory. Furthermore, the proposed methodology was applied to evaluate the improvements in the SET response due to a known radiation-hardening-by-design (RHBD) technique, where the common-base device of the low-noise amplifier was operated in inverse mode. The proposed methodology can be a valid technique to determine the most sensitive parts of a system carrying modulated data.
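A sketch of the error-vector-magnitude figure of merit that the proposed methodology relies on, computed from ideal and received complex constellation points; the QPSK symbol stream, noise level and the transient-like burst injected below are synthetic stand-ins, not simulated receiver data from the paper.

    import numpy as np

    rng = np.random.default_rng(2)

    # Ideal QPSK constellation points for a short synthetic symbol stream.
    symbols = rng.integers(0, 4, size=1000)
    ideal = np.exp(1j * (np.pi / 4 + np.pi / 2 * symbols))

    # Received symbols: small noise everywhere plus a large transient-like
    # perturbation on a short burst of symbols (a stand-in for an SET).
    received = ideal + 0.02 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))
    received[500:520] += 0.5 * np.exp(1j * rng.uniform(0, 2 * np.pi, 20))

    # RMS error vector magnitude, normalized to the RMS ideal symbol power.
    evm_rms = np.sqrt(np.mean(np.abs(received - ideal) ** 2) /
                      np.mean(np.abs(ideal) ** 2))
    print(f"EVM = {100 * evm_rms:.2f}%")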
Micro-resonator-based electric field sensors with long durations of sensitivity
NASA Astrophysics Data System (ADS)
Ali, Amir R.
2017-05-01
In this paper, we present a new fabrication method for the whispering gallery mode (WGM) micro-sphere based electric field sensor that allows for longer periods of sensitivity. Recently, a WGM-based photonic electric field sensor was proposed using a coupled dielectric microsphere-beam. The external electric field imposes an electrostriction force on the dielectric beam, deflecting it. The beam, in turn, compresses the sphere, causing a shift in its WGM. As part of the fabrication process, the PDMS micro-beams and the spheres are cured at high temperature (100 °C) and subsequently poled by exposure to a strong external electric field (~8 MV/m) for two hours. The poling process allows for the deposition of surface charges, thereby increasing the electrostriction effect. This methodology is called curing-then-poling (CTP). Although the sensors do become sufficiently sensitive to electric field, they start de-poling shortly (within 10 minutes) after poling, hence losing sensitivity. In an attempt to mitigate this problem and to lock the polarization for a longer period, we use an alternate methodology whereby the beam is poled and cured simultaneously (curing-while-poling, or CWP). The new fabrication method allows for the retention of polarization (and hence, sensitivity to electric field) for longer (~1500 minutes). An analysis is carried out along with preliminary experiments. Results show that electric fields as small as 100 V/m can be detected with a 300 μm diameter sphere sensor a day after poling.
Accelerated Insertion of Materials - Composites
2001-08-28
[Briefing excerpt, AIM-C, DARPA Workshop, Annapolis, August 27-28, 2001. Recoverable topics: configuration details; damage tolerance; repair; validation of analysis methodology; fatigue, static and acoustic testing; sensitivity to fatigue, adhesion and damage tolerance across all critical modes and environments; products including material specifications and B-Basis design allowables; requalification of polymer/composite parts following material changes.]
Probabilistic sensitivity analysis for NICE technology assessment: not an optional extra.
Claxton, Karl; Sculpher, Mark; McCabe, Chris; Briggs, Andrew; Akehurst, Ron; Buxton, Martin; Brazier, John; O'Hagan, Tony
2005-04-01
Recently the National Institute for Clinical Excellence (NICE) updated its methods guidance for technology assessment. One aspect of the new guidance is to require the use of probabilistic sensitivity analysis with all cost-effectiveness models submitted to the Institute. The purpose of this paper is to place the NICE guidance on dealing with uncertainty into a broader context of the requirements for decision making; to explain the general approach that was taken in its development; and to address each of the issues which have been raised in the debate about the role of probabilistic sensitivity analysis in general. The most appropriate starting point for developing guidance is to establish what is required for decision making. On the basis of these requirements, the methods and framework of analysis which can best meet these needs can then be identified. It will be argued that the guidance on dealing with uncertainty and, in particular, the requirement for probabilistic sensitivity analysis, is justified by the requirements of the type of decisions that NICE is asked to make. Given this foundation, the main issues and criticisms raised during and after the consultation process are reviewed. Finally, some of the methodological challenges posed by the need fully to characterise decision uncertainty and to inform the research agenda will be identified and discussed. Copyright (c) 2005 John Wiley & Sons, Ltd.
Holloway, Andrew J; Oshlack, Alicia; Diyagama, Dileepa S; Bowtell, David DL; Smyth, Gordon K
2006-01-01
Background Concerns are often raised about the accuracy of microarray technologies and the degree of cross-platform agreement, but there are yet no methods which can unambiguously evaluate precision and sensitivity for these technologies on a whole-array basis. Results A methodology is described for evaluating the precision and sensitivity of whole-genome gene expression technologies such as microarrays. The method consists of an easy-to-construct titration series of RNA samples and an associated statistical analysis using non-linear regression. The method evaluates the precision and responsiveness of each microarray platform on a whole-array basis, i.e., using all the probes, without the need to match probes across platforms. An experiment is conducted to assess and compare four widely used microarray platforms. All four platforms are shown to have satisfactory precision but the commercial platforms are superior for resolving differential expression for genes at lower expression levels. The effective precision of the two-color platforms is improved by allowing for probe-specific dye-effects in the statistical model. The methodology is used to compare three data extraction algorithms for the Affymetrix platforms, demonstrating poor performance for the commonly used proprietary algorithm relative to the other algorithms. For probes which can be matched across platforms, the cross-platform variability is decomposed into within-platform and between-platform components, showing that platform disagreement is almost entirely systematic rather than due to measurement variability. Conclusion The results demonstrate good precision and sensitivity for all the platforms, but highlight the need for improved probe annotation. They quantify the extent to which cross-platform measures can be expected to be less accurate than within-platform comparisons for predicting disease progression or outcome. PMID:17118209
Alonso, Monica; Cerdan, Laura; Godayol, Anna; Anticó, Enriqueta; Sanchez, Juan M
2011-11-11
Combining headspace (HS) sampling with a needle-trap device (NTD) to determine priority volatile organic compounds (VOCs) in water samples results in improved sensitivity and efficiency when compared to conventional static HS sampling. A 22 gauge stainless steel, 51-mm needle packed with Tenax TA and Carboxen 1000 particles is used as the NTD. Three different HS-NTD sampling methodologies are evaluated and all give limits of detection for the target VOCs in the ng L⁻¹ range. Active (purge-and-trap) HS-NTD sampling is found to give the best sensitivity but requires exhaustive control of the sampling conditions. The use of the NTD to collect the headspace gas sample results in a combined adsorption/desorption mechanism. The testing of different temperatures for the HS thermostating reveals a greater desorption effect when the sample is allowed to diffuse, whether passively or actively, through the sorbent particles. The limits of detection obtained in the simplest sampling methodology, static HS-NTD (5 mL aqueous sample in 20 mL HS vials, thermostating at 50 °C for 30 min with agitation), are sufficiently low as to permit its application to the analysis of 18 priority VOCs in natural and waste waters. In all cases compounds were detected below regulated levels. Copyright © 2011 Elsevier B.V. All rights reserved.
Penn, Alexandra S.; Knight, Christopher J. K.; Lloyd, David J. B.; Avitabile, Daniele; Kok, Kasper; Schiller, Frank; Woodward, Amy; Druckman, Angela; Basson, Lauren
2013-01-01
Fuzzy Cognitive Mapping (FCM) is a widely used participatory modelling methodology in which stakeholders collaboratively develop a ‘cognitive map’ (a weighted, directed graph), representing the perceived causal structure of their system. This can be directly transformed by a workshop facilitator into simple mathematical models to be interrogated by participants by the end of the session. Such simple models provide thinking tools which can be used for discussion and exploration of complex issues, as well as sense checking the implications of suggested causal links. They increase stakeholder motivation and understanding of whole systems approaches, but cannot be separated from an intersubjective participatory context. Standard FCM methodologies make simplifying assumptions, which may strongly influence results, presenting particular challenges and opportunities. We report on a participatory process, involving local companies and organisations, focussing on the development of a bio-based economy in the Humber region. The initial cognitive map generated consisted of factors considered key for the development of the regional bio-based economy and their directional, weighted, causal interconnections. A verification and scenario generation procedure, to check the structure of the map and suggest modifications, was carried out with a second session. Participants agreed on updates to the original map and described two alternate potential causal structures. In a novel analysis all map structures were tested using two standard methodologies usually used independently: linear and sigmoidal FCMs, demonstrating some significantly different results alongside some broad similarities. We suggest a development of FCM methodology involving a sensitivity analysis with different mappings and discuss the use of this technique in the context of our case study. Using the results and analysis of our process, we discuss the limitations and benefits of the FCM methodology in this case and in general. We conclude by proposing an extended FCM methodology, including multiple functional mappings within one participant-constructed graph. PMID:24244303
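The linear and sigmoidal mappings compared in the abstract can be sketched for an arbitrary weighted causal matrix as below; the three-factor matrix, the additive update rule (one common FCM variant) and the iteration count are illustrative choices, not the Humber case-study map or the authors' exact formulation.

    import numpy as np

    # Illustrative weighted, directed causal map (rows influence columns).
    W = np.array([[0.0, 0.6, -0.3],
                  [0.0, 0.0,  0.8],
                  [0.4, 0.0,  0.0]])

    def iterate_fcm(W, x0, squash, steps=50):
        # Repeatedly propagate activations through the causal links and apply
        # the chosen squashing function to each factor.
        x = np.array(x0, dtype=float)
        for _ in range(steps):
            x = squash(x @ W + x)      # additive update with self-memory (one common variant)
        return x

    linear = lambda z: np.clip(z, 0.0, 1.0)          # "linear" (clipped) mapping
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))     # sigmoidal mapping

    x0 = [0.5, 0.5, 0.5]
    print("linear FCM  :", np.round(iterate_fcm(W, x0, linear), 3))
    print("sigmoid FCM :", np.round(iterate_fcm(W, x0, sigmoid), 3))

Running both mappings on the same participant-constructed matrix is one simple way to expose how sensitive the qualitative conclusions are to this modelling choice.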
Domain decomposition for aerodynamic and aeroacoustic analyses, and optimization
NASA Technical Reports Server (NTRS)
Baysal, Oktay
1995-01-01
The overarching theme was the domain decomposition, which intended to improve the numerical solution technique for the partial differential equations at hand; in the present study, those that governed either the fluid flow, or the aeroacoustic wave propagation, or the sensitivity analysis for a gradient-based optimization. The role of the domain decomposition extended beyond the original impetus of discretizing geometrical complex regions or writing modular software for distributed-hardware computers. It induced function-space decompositions and operator decompositions that offered the valuable property of near independence of operator evaluation tasks. The objectives have gravitated about the extensions and implementations of either the previously developed or concurrently being developed methodologies: (1) aerodynamic sensitivity analysis with domain decomposition (SADD); (2) computational aeroacoustics of cavities; and (3) dynamic, multibody computational fluid dynamics using unstructured meshes.
Application of neural networks and sensitivity analysis to improved prediction of trauma survival.
Hunter, A; Kennedy, L; Henry, J; Ferguson, I
2000-05-01
The performance of trauma departments is widely audited by applying predictive models that assess probability of survival, and examining the rate of unexpected survivals and deaths. Although the TRISS methodology, a logistic regression modelling technique, is still the de facto standard, it is known that neural network models perform better. A key issue when applying neural network models is the selection of input variables. This paper proposes a novel form of sensitivity analysis, which is simpler to apply than existing techniques, and can be used for both numeric and nominal input variables. The technique is applied to the audit survival problem, and used to analyse the TRISS variables. The conclusions discuss the implications for the design of further improved scoring schemes and predictive models.
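A sketch of a perturbation-style input-sensitivity measure of the general kind discussed, handling a numeric and a nominal variable against a toy fitted survival model; the model, cohort and category-swapping scheme are hypothetical stand-ins rather than the authors' proposed technique or the TRISS variables.

    import numpy as np

    rng = np.random.default_rng(3)

    # Toy "fitted" survival model: probability of survival from two numeric inputs
    # (age, blood pressure) and one nominal input (injury mechanism).
    MECH_EFFECT = {"blunt": 0.3, "penetrating": -0.6, "burn": -0.2}

    def p_survival(age, bp, mech):
        eff = np.array([MECH_EFFECT[m] for m in mech])
        z = 2.0 - 0.03 * np.asarray(age) + 0.01 * np.asarray(bp) + eff
        return 1.0 / (1.0 + np.exp(-z))

    # Hypothetical reference cohort.
    ages = rng.uniform(18, 85, 500)
    bps = rng.uniform(80, 160, 500)
    mechs = rng.choice(list(MECH_EFFECT), 500)

    # Numeric variable: mean absolute change in predicted probability when the
    # variable is shifted by one standard deviation, all else held fixed.
    def numeric_sensitivity(delta_age=0.0, delta_bp=0.0):
        base = p_survival(ages, bps, mechs)
        pert = p_survival(ages + delta_age, bps + delta_bp, mechs)
        return np.mean(np.abs(pert - base))

    # Nominal variable: mean absolute change when each case is reassigned to a
    # different category (here, cycled to the next category).
    def nominal_sensitivity():
        cats = list(MECH_EFFECT)
        swapped = np.array([cats[(cats.index(m) + 1) % len(cats)] for m in mechs])
        return np.mean(np.abs(p_survival(ages, bps, swapped) - p_survival(ages, bps, mechs)))

    print("age sensitivity :", round(numeric_sensitivity(delta_age=ages.std()), 3))
    print("bp sensitivity  :", round(numeric_sensitivity(delta_bp=bps.std()), 3))
    print("mech sensitivity:", round(nominal_sensitivity(), 3))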
Characterizing performance of ultra-sensitive accelerometers
NASA Technical Reports Server (NTRS)
Sebesta, Henry
1990-01-01
An overview is given of methodology and test results pertaining to the characterization of ultra sensitive accelerometers. Two issues are of primary concern. The terminology ultra sensitive accelerometer is used to imply instruments whose noise floors and resolution are at the state of the art. Hence, the typical approach of verifying an instrument's performance by measuring it with a yet higher quality instrument (or standard) is not practical. Secondly, it is difficult to find or create an environment with sufficiently low background acceleration. The typical laboratory acceleration levels will be at several orders of magnitude above the noise floor of the most sensitive accelerometers. Furthermore, this background must be treated as unknown since the best instrument available is the one to be tested. A test methodology was developed in which two or more like instruments are subjected to the same but unknown background acceleration. Appropriately selected spectral analysis techniques were used to separate the sensors' output spectra into coherent components and incoherent components. The coherent part corresponds to the background acceleration being measured by the sensors being tested. The incoherent part is attributed to sensor noise and data acquisition and processing noise. The method works well for estimating noise floors that are 40 to 50 dB below the motion applied to the test accelerometers. The accelerometers being tested are intended for use as feedback sensors in a system to actively stabilize an inertial guidance component test platform.
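A sketch of the two-instrument coherence idea described above, using standard Welch spectral estimates to split one channel's spectrum into a coherent part (shared background motion) and an incoherent part (instrument plus acquisition noise); the synthetic signals and the filter used to mimic background acceleration are placeholders.

    import numpy as np
    from scipy import signal

    rng = np.random.default_rng(4)
    fs, n = 1000.0, 2 ** 16

    # Common background acceleration seen by both instruments, plus independent
    # instrument noise in each channel (all synthetic).
    common = signal.lfilter([1.0], [1.0, -0.95], rng.standard_normal(n))
    x = common + 0.2 * rng.standard_normal(n)
    y = common + 0.2 * rng.standard_normal(n)

    # Welch power spectral density of channel x and ordinary coherence between x and y.
    f, pxx = signal.welch(x, fs=fs, nperseg=4096)
    _, cxy = signal.coherence(x, y, fs=fs, nperseg=4096)

    coherent_part = cxy * pxx            # attributed to the shared background motion
    incoherent_part = (1.0 - cxy) * pxx  # attributed to sensor + acquisition noise

    print("median coherent/incoherent ratio:",
          round(np.median(coherent_part / incoherent_part), 1))

With two like instruments, the incoherent residual bounds the combined sensor and processing noise even though the background motion itself is unknown, which is what allows noise floors well below the ambient laboratory level to be estimated.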
Vestibular (dys)function in children with sensorineural hearing loss: a systematic review.
Verbecque, Evi; Marijnissen, Tessa; De Belder, Niels; Van Rompaey, Vincent; Boudewyns, An; Van de Heyning, Paul; Vereeck, Luc; Hallemans, Ann
2017-06-01
The objective of this study is to provide an overview of the prevalence of vestibular dysfunction in children with SNHL classified according to the applied test and its corresponding sensitivity and specificity. Data were gathered using a systematic search query including reference screening. Pubmed, Web of Science and Embase were searched. Strategy and reporting of this review were based on the Meta-analysis of Observational Studies in Epidemiology (MOOSE) guidelines. Methodological quality was assessed with the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) checklist. All studies, regardless of the applied vestibular test, showed that vestibular function differs significantly between children with hearing loss and normal hearing (p < 0.05). Compared with caloric testing, the sensitivity of the Rotational Chair Test (RCT) varies between 61 and 80% and specificity between 21 and 80%, whereas these were, respectively, 71-100% and 30-100% for cervical Vestibular Evoked Myogenic Potentials (cVEMP). Compared with RCT, the sensitivity was 88-100% and the specificity was 69-100% for the Dynamic Visual Acuity test, respectively 67-100% and 71-100% for the (video) Head Impulse Test, and 83% and 86% for the ocular VEMP. Currently, due to methodological shortcomings, the evidence on sensitivity and specificity of vestibular tests ranges from unknown to moderate. Future research should focus on adequate sample sizes (subgroups >30).
Tucker, Robin M; Kaiser, Kathryn A; Parman, Mariel A; George, Brandon J; Allison, David B; Mattes, Richard D
2017-01-01
Given the increasing evidence that supports the ability of humans to taste non-esterified fatty acids (NEFA), recent studies have sought to determine if relationships exist between oral sensitivity to NEFA (measured as thresholds), food intake and obesity. Published findings suggest there is either no association or an inverse association. A systematic review and meta-analysis was conducted to determine if differences in fatty acid taste sensitivity or intensity ratings exist between individuals who are lean or obese. A total of 7 studies that reported measurement of taste sensations to non-esterified fatty acids by psychophysical methods (e.g., studies using model systems rather than foods, with detection thresholds measured by a 3-alternative forced choice ascending methodology) were included in the meta-analysis. Two other studies that measured intensity ratings to graded suprathreshold NEFA concentrations were evaluated qualitatively. No significant differences in fatty acid taste thresholds or intensity were observed. Thus, differences in fatty acid taste sensitivity do not appear to precede or result from obesity.
NASA Astrophysics Data System (ADS)
Winkler, Julie A.; Palutikof, Jean P.; Andresen, Jeffrey A.; Goodess, Clare M.
1997-10-01
Empirical transfer functions have been proposed as a means for `downscaling' simulations from general circulation models (GCMs) to the local scale. However, subjective decisions made during the development of these functions may influence the ensuing climate scenarios. This research evaluated the sensitivity of a selected empirical transfer function methodology to 1) the definition of the seasons for which separate specification equations are derived, 2) adjustments for known departures of the GCM simulations of the predictor variables from observations, 3) the length of the calibration period, 4) the choice of function form, and 5) the choice of predictor variables. A modified version of the Climatological Projection by Model Statistics method was employed to generate control (1 × CO2) and perturbed (2 × CO2) scenarios of daily maximum and minimum temperature for two locations with diverse climates (Alcantarilla, Spain, and Eau Claire, Michigan). The GCM simulations used in the scenario development were from the Canadian Climate Centre second-generation model (CCC GCMII). Variations in the downscaling methodology were found to have a statistically significant impact on the 2 × CO2 climate scenarios, even though the 1 × CO2 scenarios for the different transfer function approaches were often similar. The daily temperature scenarios for Alcantarilla and Eau Claire were most sensitive to the decision to adjust for deficiencies in the GCM simulations, the choice of predictor variables, and the seasonal definitions used to derive the functions (i.e., fixed seasons, floating seasons, or no seasons). The scenarios were less sensitive to the choice of function form (i.e., linear versus nonlinear) and to an increase in the length of the calibration period. The results of Part I, which identified significant departures of the CCC GCMII simulations of two candidate predictor variables from observations, together with those presented here in Part II, 1) illustrate the importance of detailed comparisons of observed and GCM 1 × CO2 series of candidate predictor variables as an initial step in impact analysis, 2) demonstrate that decisions made when developing the transfer functions can have a substantial influence on the 2 × CO2 scenarios and their interpretation, 3) highlight the uncertainty in the appropriate criteria for evaluating transfer function approaches, and 4) suggest that automation of empirical transfer function methodologies is inappropriate because of differences in the performance of transfer functions between sites and because of spatial differences in the GCM's ability to adequately simulate the predictor variables used in the functions.
Visual sensitivity of river recreation to power plants
David H. Blau; Michael C. Bowie
1979-01-01
The consultants were asked by the Power Plant Siting Staff of the Minnesota Environmental Quality Council to develop a methodology for evaluating the sensitivity of river-related recreational activities to visual intrusion by large coal-fired power plants. The methodology, which is applicable to any major stream in the state, was developed and tested on a case study...
Sava, M Gabriela; Dolan, James G; May, Jerrold H; Vargas, Luis G
2018-07-01
Current colorectal cancer screening guidelines by the US Preventive Services Task Force endorse multiple options for average-risk patients and recommend that screening choices should be guided by individual patient preferences. Implementing these recommendations in practice is challenging because they depend on accurate and efficient elicitation and assessment of preferences from patients who are facing a novel task. To present a methodology for analyzing the sensitivity and stability of a patient's preferences regarding colorectal cancer screening options and to provide a starting point for a personalized discussion between the patient and the health care provider about the selection of the appropriate screening option. This research is a secondary analysis of patient preference data collected as part of a previous study. We propose new measures of preference sensitivity and stability that can be used to determine if additional information provided would result in a change to the initially most preferred colorectal cancer screening option. Illustrative results of applying the methodology to the preferences of 2 patients, of different ages, are provided. The results show that different combinations of screening options are viable for each patient and that the health care provider should emphasize different information during the medical decision-making process. Sensitivity and stability analysis can supply health care providers with key topics to focus on when communicating with a patient and the degree of emphasis to place on each of them to accomplish specific goals. The insights provided by the analysis can be used by health care providers to approach communication with patients in a more personalized way, by taking into consideration patients' preferences before adding their own expertise to the discussion.
Surface immobilized antibody orientation determined using ToF-SIMS and multivariate analysis.
Welch, Nicholas G; Madiona, Robert M T; Payten, Thomas B; Easton, Christopher D; Pontes-Braz, Luisa; Brack, Narelle; Scoble, Judith A; Muir, Benjamin W; Pigram, Paul J
2017-06-01
Antibody orientation at solid phase interfaces plays a critical role in the sensitive detection of biomolecules during immunoassays. Correctly oriented antibodies with solution-facing antigen binding regions have improved antigen capture as compared to their randomly oriented counterparts. Direct characterization of oriented proteins with surface analysis methods still remains a challenge; however, surface sensitive techniques such as Time-of-Flight Secondary Ion Mass Spectrometry (ToF-SIMS) provide information-rich data that can be used to probe antibody orientation. Diethylene glycol dimethyl ether plasma polymers (DGpp) functionalized with chromium (DGpp+Cr) have improved immunoassay performance that is indicative of preferential antibody orientation. Herein, ToF-SIMS data from proteolytic fragments of anti-EGFR antibody bound to DGpp and DGpp+Cr are used to construct artificial neural network (ANN) and principal component analysis (PCA) models indicative of correctly oriented systems. Whole antibody samples (IgG) tested against each of the models indicated preferential antibody orientation on DGpp+Cr. Cross-reference between ANN and PCA models yielded 20 mass fragments associated with the F(ab')2 region representing correct orientation, and 23 mass fragments associated with the Fc region representing incorrect orientation. Mass fragments were then compared to amino acid fragments and amino acid composition in the F(ab')2 and Fc regions. A ratio of the sum of the ToF-SIMS ion intensities from the F(ab')2 fragments to the Fc fragments demonstrated a 50% increase in intensity for IgG on DGpp+Cr as compared to DGpp. The systematic data analysis methodology employed herein offers a new approach for the investigation of antibody orientation applicable to a range of substrates. Controlled orientation of antibodies at solid phases is critical for maximizing antigen detection in biosensors and immunoassays. Surface-sensitive techniques (such as ToF-SIMS), capable of direct characterization of surface immobilized and oriented antibodies, are under-utilized in current practice. Selection of a small number of mass fragments for analysis, typically pertaining to amino acids, is commonplace in literature, leaving the majority of the information-rich spectra unanalyzed. The novelty of this work is the utilization of a comprehensive, unbiased mass fragment list and the employment of principal component analysis (PCA) and artificial neural network (ANN) models in a unique methodology to prove antibody orientation. This methodology is of significant and broad interest to the scientific community as it is applicable to a range of substrates and allows for direct, label-free characterization of surface bound proteins. Copyright © 2017 Acta Materialia Inc. All rights reserved.
Api, A M; Belsito, D; Bruze, M; Cadby, P; Calow, P; Dagli, M L; Dekant, W; Ellis, G; Fryer, A D; Fukayama, M; Griem, P; Hickey, C; Kromidas, L; Lalko, J F; Liebler, D C; Miyachi, Y; Politano, V T; Renskers, K; Ritacco, G; Salvito, D; Schultz, T W; Sipes, I G; Smith, B; Vitale, D; Wilcox, D K
2015-08-01
The Research Institute for Fragrance Materials, Inc. (RIFM) has been engaged in the generation and evaluation of safety data for fragrance materials since its inception over 45 years ago. Over time, RIFM's approach to gathering data, estimating exposure and assessing safety has evolved as the tools for risk assessment evolved. This publication is designed to update the RIFM safety assessment process, which follows a series of decision trees, reflecting advances in approaches in risk assessment and new and classical toxicological methodologies employed by RIFM over the past ten years. These changes include incorporating 1) new scientific information including a framework for choosing structural analogs, 2) consideration of the Threshold of Toxicological Concern (TTC), 3) the Quantitative Risk Assessment (QRA) for dermal sensitization, 4) the respiratory route of exposure, 5) aggregate exposure assessment methodology, 6) the latest methodology and approaches to risk assessments, 7) the latest alternatives to animal testing methodology and 8) environmental risk assessment. The assessment begins with a thorough analysis of existing data followed by in silico analysis, identification of 'read across' analogs, generation of additional data through in vitro testing as well as consideration of the TTC approach. If necessary, risk management may be considered. Copyright © 2014 Elsevier Ltd. All rights reserved.
A multicriteria decision making model for assessment and selection of an ERP in a logistics context
NASA Astrophysics Data System (ADS)
Pereira, Teresa; Ferreira, Fernanda A.
2017-07-01
The aim of this work is to apply a decision-support methodology based on a multicriteria decision analysis (MCDA) model that allows the assessment and selection of an Enterprise Resource Planning (ERP) system in a Portuguese logistics company by a Group Decision Maker (GDM). A Decision Support System (DSS) that implements an MCDA methodology, the Multicriteria Methodology for the Assessment and Selection of Information Systems / Information Technologies (MMASSI / IT), is used based on its features and the facility to change and adapt the model to a given scope. Using this DSS, the information system best suited to the decisional context was obtained, with this result evaluated through a sensitivity and robustness analysis.
Disposable Screen Printed Electrochemical Sensors: Tools for Environmental Monitoring
Hayat, Akhtar; Marty, Jean Louis
2014-01-01
Screen printing technology is a widely used technique for the fabrication of electrochemical sensors. This methodology is likely to underpin the progressive drive towards miniaturized, sensitive and portable devices, and has already established its route from “lab-to-market” for a plethora of sensors. The application of these sensors for analysis of environmental samples has been the major focus of research in this field. As a consequence, this work will focus on recent important advances in the design and fabrication of disposable screen printed sensors for the electrochemical detection of environmental contaminants. Special emphasis is given on sensor fabrication methodology, operating details and performance characteristics for environmental applications. PMID:24932865
Kaafarani, H M A; Hur, K; Campasano, M; Reda, D J; Itani, K M F
2010-06-01
Generic instruments used for the valuation of health states (e.g., EuroQol) often lack sensitivity to notable differences that are relevant to particular diseases or interventions. We developed a valuation methodology specifically for complications following ventral incisional herniorrhaphy (VIH). Between 2004 and 2006, 146 patients were prospectively randomized to undergo laparoscopic (n = 73) or open (n = 73) VIH. The primary outcome of the trial was complications at 8 weeks. A three-step methodology was used to assign severity weights to complications. First, each complication was graded using the Clavien classification. Second, five reviewers were asked to independently and directly rate their perception of the severity of each class using a non-categorized visual analog scale. Zero represented an uncomplicated postoperative course, while 100 represented postoperative death. Third, the median, lowest, and highest values assigned to each class of complications were used to derive weighted complication scores for open and laparoscopic VIH. Open VIH had more complications than laparoscopic VIH (47.9 vs. 31.5%, respectively; P = 0.026). However, complications of laparoscopic VIH were more severe than those of open VIH. Non-parametric analysis revealed a statistically higher weighted complication score for open VIH (interquartile range: 0-20 for open vs. 0-10 for laparoscopic; P = 0.049). In the sensitivity analysis, similar results were obtained using the median, highest, and lowest weights. We describe a new methodology for the valuation of complications following VIH that allows a direct outcome comparison of procedures with different complication profiles. Further testing of the validity, reliability, and generalizability of this method is warranted.
Systematic review of the evidence related to mandated nurse staffing ratios in acute hospitals.
Olley, Richard; Edwards, Ian; Avery, Mark; Cooper, Helen
2018-04-17
Objective The purpose of this systematic review was to evaluate and summarise available research on nurse staffing methods and relate these to outcomes under three overarching themes of: (1) management of clinical risk, quality and safety; (2) development of a new or innovative staffing methodology; and (3) equity of nursing workload. Methods The PRISMA method was used. Relevant articles were located by searching via the Griffith University Library electronic catalogue, including articles on PubMed, Cumulative Index to Nursing and Allied Health Literature (CINAHL) and Medline. Only English language publications published between 1 January 2010 and 30 April 2016 focusing on methodologies in acute hospital in-patient units were included in the present review. Results Two of the four staffing methods were found to have evidence-based articles from empirical studies within the parameters set for inclusion. Of the four staffing methodologies searched, supply and demand returned 10 studies and staffing ratios returned 11. Conclusions There is a need to develop an evidence-based nurse-sensitive outcomes measure upon which staffing for safety, quality and workplace equity can be based, as well as an instrument that reliably and validly projects nurse staffing requirements in a variety of clinical settings. Nurse-sensitive indicators reflect elements of patient care that are directly affected by nursing practice. In addition, these measures must take into account patient satisfaction, workload and staffing, clinical risks and other measures of the quality and safety of care and nurses' work satisfaction. What is known about the topic? Nurse staffing is a controversial topic that has significant patient safety, quality of care, human resources and financial implications. In acute care services, nursing accounts for approximately 70% of salaries and wages paid by health services budgets, and evidence as to the efficacy and effectiveness of any staffing methodology is required because it has workforce and industrial relations implications. Although there is significant literature available on the topic, there is a paucity of empirical evidence supporting claims of increased patient safety in the acute hospital setting, but some evidence exists relating to equity of workload for nurses. What does this paper add? This paper provides a contemporary qualitative analysis of empirical evidence using PRISMA methodology to conduct a systematic review of the available literature. It demonstrates a significant research gap to support claims of increased patient safety in the acute hospital setting. The paper calls for greatly improved datasets upon which research can be undertaken to determine any associations between mandated patient to nurse ratios and other staffing methodologies and patient safety and quality of care. What are the implications for practitioners? There is insufficient contemporary research to support staffing methodologies for appropriate staffing, balanced workloads and quality, safe care. Such research would include the establishment of nurse-sensitive patient outcomes measures, and more robust datasets are needed for empirical analysis to produce such evidence.
NASA Astrophysics Data System (ADS)
Fitton, N.; Datta, A.; Hastings, A.; Kuhnert, M.; Topp, C. F. E.; Cloy, J. M.; Rees, R. M.; Cardenas, L. M.; Williams, J. R.; Smith, K.; Chadwick, D.; Smith, P.
2014-09-01
The United Kingdom currently reports nitrous oxide emissions from agriculture using the IPCC default Tier 1 methodology. However, Tier 1 estimates have a large degree of uncertainty as they do not account for spatial variations in emissions. Therefore biogeochemical models such as DailyDayCent (DDC) are increasingly being used to provide a spatially disaggregated assessment of annual emissions. Prior to use, an assessment of the ability of the model to predict annual emissions should be undertaken, coupled with an analysis of how model inputs influence model outputs, and whether the modelled estimates are more robust than those derived from the Tier 1 methodology. The aims of the study were (a) to evaluate whether the DailyDayCent model can accurately estimate annual N2O emissions across nine different experimental sites, (b) to examine its sensitivity to different soil and climate inputs across a number of experimental sites and (c) to examine the influence of uncertainty in the measured inputs on modelled N2O emissions. DailyDayCent performed well across the range of cropland and grassland sites, particularly for fertilized fields, indicating that it is robust for UK conditions. The sensitivity of the model varied across the sites and also between fertilizer/manure treatments. Overall, our results showed that the sensitivity of N2O emissions was more strongly correlated with changes in soil pH and clay content than with the remaining input parameters used in this study. The lower the initial site values for soil pH and clay content, the more sensitive DDC was to changes from their initial value. When we compared modelled estimates with Tier 1 estimates for each site, we found that DailyDayCent provided a more accurate representation of the rate of annual emissions.
Tchepel, Oxana; Dias, Daniela
2011-06-01
This study is focused on the assessment of potential health benefits of meeting the air quality limit values (2008/50/CE) for short-term PM₁₀ exposure. For this purpose, the WHO methodology for Health Impact Assessment and the APHEIS guidelines for data collection were applied to the Porto Metropolitan Area, Portugal. Additionally, an improved methodology using population mobility data is proposed in this work to analyse the number of persons exposed. In order to obtain representative background concentrations, an innovative approach to processing air quality time series was implemented. The results provide the number of attributable cases prevented annually by reducing PM₁₀ concentration. An intercomparison of two approaches to processing input data for the health risk analysis provides information on the sensitivity of the applied methodology. The findings highlight the importance of taking into account spatial variability of the air pollution levels and population mobility in health impact assessment.
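A sketch of the attributable-cases arithmetic typical of WHO/APHEIS-style short-term health impact assessment, assuming a log-linear concentration-response function; the coefficient, baseline rate, population and concentration reduction below are placeholders, not the Porto inputs or results.

    import math

    # Hypothetical inputs (not values from the study).
    beta = 0.0006          # log-linear concentration-response coefficient per ug/m3
    baseline_rate = 0.009  # annual baseline rate of the health outcome (per person)
    population = 1_300_000 # exposed population
    delta_pm10 = 8.0       # reduction in PM10 needed to meet the limit value, ug/m3

    # Relative risk associated with the concentration change, then the
    # attributable fraction and the number of cases prevented per year.
    rr = math.exp(beta * delta_pm10)
    attributable_fraction = (rr - 1.0) / rr
    cases_prevented = attributable_fraction * baseline_rate * population
    print(f"RR = {rr:.4f}, cases prevented per year ~ {cases_prevented:.0f}")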
Discrete Adjoint-Based Design for Unsteady Turbulent Flows On Dynamic Overset Unstructured Grids
NASA Technical Reports Server (NTRS)
Nielsen, Eric J.; Diskin, Boris
2012-01-01
A discrete adjoint-based design methodology for unsteady turbulent flows on three-dimensional dynamic overset unstructured grids is formulated, implemented, and verified. The methodology supports both compressible and incompressible flows and is amenable to massively parallel computing environments. The approach provides a general framework for performing highly efficient and discretely consistent sensitivity analysis for problems involving arbitrary combinations of overset unstructured grids which may be static, undergoing rigid or deforming motions, or any combination thereof. General parent-child motions are also accommodated, and the accuracy of the implementation is established using an independent verification based on a complex-variable approach. The methodology is used to demonstrate aerodynamic optimizations of a wind turbine geometry, a biologically-inspired flapping wing, and a complex helicopter configuration subject to trimming constraints. The objective function for each problem is successfully reduced and all specified constraints are satisfied.
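As background for the adjoint formulation referenced here, the discrete adjoint sensitivity relations are commonly written as follows (generic notation, not necessarily that of the paper), where Q is the discrete flow state, D the design variables, R(Q, D) = 0 the discretized residual, f the objective, and Λ the adjoint variable:

    \left(\frac{\partial R}{\partial Q}\right)^{T} \Lambda = -\left(\frac{\partial f}{\partial Q}\right)^{T},
    \qquad
    \frac{d f}{d D} = \frac{\partial f}{\partial D} + \Lambda^{T} \frac{\partial R}{\partial D}

so the cost of computing the full design gradient is essentially independent of the number of design variables, which is what makes adjoint-based optimization attractive for unsteady, overset problems with many shape parameters.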
NASA Electronic Publishing System: Cost/benefit Methodology
NASA Technical Reports Server (NTRS)
Tuey, Richard C.
1994-01-01
The NASA Scientific and Technical Information Office was assigned the responsibility to examine the benefits of the utilization of electronic printing and duplicating systems throughout NASA Installations and Headquarters. The subject of this report is the documentation of the methodology used in justifying the acquisition of the most cost beneficial solution for the printing and duplicating requirements of a duplicating facility that is contemplating the acquisition of an electronic printing and duplicating system. Four alternatives are presented with each alternative costed out with its associated benefits. The methodology goes a step further than just a cost benefit analysis through its comparison of risks associated with each alternative, sensitivity to number of impressions and productivity gains on the selected alternative and finally the return on investment for the selected alternative. The report can be used in conjunction with the two earlier reports, NASA-TM-106242 and TM-106510 in guiding others in determining the cost effective duplicating alternative.
The art of spacecraft design: A multidisciplinary challenge
NASA Technical Reports Server (NTRS)
Abdi, F.; Ide, H.; Levine, M.; Austel, L.
1989-01-01
Actual design turn-around time has become shorter due to the use of optimization techniques which have been introduced into the design process. It seems that what, how and when to use these optimization techniques may be the key factor for future aircraft engineering operations. Another important aspect of this technique is that complex physical phenomena can be modeled by a simple mathematical equation. The new powerful multilevel methodology reduces time-consuming analysis significantly while maintaining the coupling effects. This simultaneous analysis method stems from the implicit function theorem and system sensitivity derivatives of input variables. Use of the Taylor's series expansion and finite differencing technique for sensitivity derivatives in each discipline makes this approach unique for screening dominant variables from nondominant variables. In this study, the current Computational Fluid Dynamics (CFD) aerodynamic and sensitivity derivative/optimization techniques are applied for a simple cone-type forebody of a high-speed vehicle configuration to understand basic aerodynamic/structure interaction in a hypersonic flight condition.
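A minimal sketch of the finite-difference sensitivity derivatives and normalized (Taylor-coefficient) screening mentioned above, with a toy response standing in for the aerodynamic/structural analyses; the function, design variables and step sizes are illustrative.

    import numpy as np

    def response(x):
        # Toy stand-in for a discipline analysis output (e.g. a drag-like quantity)
        # as a function of design variables such as cone angle, length, thickness.
        angle, length, thickness = x
        return 0.8 * angle ** 2 + 0.05 * length + 0.001 * thickness

    x0 = np.array([5.0, 20.0, 2.0])    # nominal design point
    h = 1e-4 * np.maximum(np.abs(x0), 1.0)

    # Central-difference first derivatives (the first-order Taylor coefficients).
    grad = np.empty_like(x0)
    for i in range(len(x0)):
        xp, xm = x0.copy(), x0.copy()
        xp[i] += h[i]
        xm[i] -= h[i]
        grad[i] = (response(xp) - response(xm)) / (2 * h[i])

    # Normalized (logarithmic) sensitivities used to rank dominant vs. non-dominant
    # variables: relative change in response per relative change in each variable.
    norm_sens = grad * x0 / response(x0)
    for name, s in zip(["angle", "length", "thickness"], norm_sens):
        print(f"{name:9s} normalized sensitivity = {s:+.4f}")

Ranking variables by such normalized coefficients is one simple way to screen dominant from nondominant variables before committing to a full multilevel optimization.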
Gruskin, Sofia; Ferguson, Laura; Kumar, Shubha; Nicholson, Alexandra; Ali, Moazzam; Khosla, Rajat
2017-01-01
The last few years have seen a rise in the number of global and national initiatives that seek to incorporate human rights into public health practice. Nonetheless, a lack of clarity persists regarding the most appropriate indicators to monitor rights concerns in these efforts. The objective of this work was to develop a systematic methodology for use in determining the extent to which indicators commonly used in public health capture human rights concerns, using contraceptive services and programmes as a case study. The approach used to identify, evaluate, select and review indicators for their human rights sensitivity built on processes undertaken in previous work led by the World Health Organization (WHO). With advice from an expert advisory group, an analytic framework was developed to identify and evaluate quantitative, qualitative, and policy indicators in relation to contraception for their sensitivity to human rights. To test the framework's validity, indicators were reviewed to determine their feasibility to provide human rights analysis with attention to specific rights principles and standards. This exercise resulted in the identification of indicators that could be used to monitor human rights concerns as well as key gaps where additional indicators are required. While indicators generally used to monitor contraception programmes have some degree of sensitivity to human rights, breadth and depth are lacking. The proposed methodology can be useful to practitioners, researchers, and policy makers working in any area of health who are interested in monitoring and evaluating attention to human rights in commonly used health indicators.
Assessment of cognitive safety in clinical drug development
Roiser, Jonathan P.; Nathan, Pradeep J.; Mander, Adrian P.; Adusei, Gabriel; Zavitz, Kenton H.; Blackwell, Andrew D.
2016-01-01
Cognitive impairment is increasingly recognised as an important potential adverse effect of medication. However, many drug development programmes do not incorporate sensitive cognitive measurements. Here, we review the rationale for cognitive safety assessment, and explain several basic methodological principles for measuring cognition during clinical drug development, including study design and statistical analysis, from Phase I through to postmarketing. The crucial issue of how cognition should be assessed is emphasized, especially the sensitivity of measurement. We also consider how best to interpret the magnitude of any identified effects, including comparison with benchmarks. We conclude by discussing strategies for the effective communication of cognitive risks. PMID:26610416
Strategic Technology Investment Analysis: An Integrated System Approach
NASA Technical Reports Server (NTRS)
Adumitroaie, V.; Weisbin, C. R.
2010-01-01
Complex technology investment decisions within NASA are increasingly difficult to make in a way that satisfies the technical objectives and all the organizational constraints. Due to a restricted science budget environment and numerous required technology developments, the investment decisions need to take into account not only the functional impact on the program goals, but also development uncertainties and cost variations along with maintaining a healthy workforce. This paper describes an approach for optimizing and qualifying technology investment portfolios from the perspective of an integrated system model. The methodology encompasses multi-attribute decision theory elements and sensitivity analysis. The evaluation of the degree of robustness of the recommended portfolio provides the decision-maker with an array of viable selection alternatives, which take into account input uncertainties and possibly satisfy nontechnical constraints. The methodology is presented in the context of assessing capability development portfolios for NASA technology programs.
NASA Astrophysics Data System (ADS)
Liu, Haixing; Savić, Dragan; Kapelan, Zoran; Zhao, Ming; Yuan, Yixing; Zhao, Hongbin
2014-07-01
Flow entropy is a measure of the uniformity of pipe flows in water distribution systems (WDSs). By maximizing flow entropy one can identify reliable layouts or connectivity in networks. In order to overcome the disadvantage of the common definition of flow entropy, which does not consider the impact of pipe diameter on reliability, an extended definition of flow entropy, termed diameter-sensitive flow entropy, is proposed. This new methodology is then assessed against other reliability methods, including Monte Carlo Simulation, a pipe failure probability model, and a surrogate measure (resilience index) integrated with water demand and pipe failure uncertainty. The reliability assessment is based on a sample of WDS designs derived from an optimization process for each of two benchmark networks. Correlation analysis is used to evaluate quantitatively the relationship between entropy and reliability. To ensure a fair comparison, a comparative analysis between the conventional flow entropy and the new method is conducted. The results demonstrate that the diameter-sensitive flow entropy shows a consistently much stronger correlation with the three reliability measures than simple flow entropy. Therefore, the new flow entropy method can be taken as a better surrogate measure for reliability and could potentially be integrated into the optimal design problem of WDSs. Sensitivity analysis results show that the velocity parameters used in the new flow entropy have no significant impact on the relationship between diameter-sensitive flow entropy and reliability.
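As a rough, illustrative sketch of the flow-entropy idea described above (the paper's diameter-sensitive formulation is not spelled out in the abstract, so the plain Shannon-style variant is used here), the snippet below computes a flow-uniformity entropy for a set of synthetic pipe-flow designs and checks its correlation with a synthetic reliability score; the toy flows and reliability values are assumptions made for illustration only.

```python
# Rough sketch: Shannon-style flow-uniformity entropy plus a correlation check
# against a reliability score for a set of candidate designs. The exact
# diameter-sensitive formulation is not given in the abstract, so this is the
# plain (diameter-free) variant and all numbers are synthetic.
import numpy as np

def flow_entropy(pipe_flows):
    q = np.asarray(pipe_flows, dtype=float)
    p = q / q.sum()                       # fraction of total flow in each pipe
    p = p[p > 0]
    return -(p * np.log(p)).sum()         # more uniform flows -> higher entropy

rng = np.random.default_rng(0)
n_designs, n_pipes = 200, 12
designs = rng.gamma(shape=2.0, scale=5.0, size=(n_designs, n_pipes))  # synthetic pipe flows
entropy = np.array([flow_entropy(d) for d in designs])

# Synthetic "reliability" that improves with flow uniformity plus noise,
# standing in for Monte Carlo / resilience-index results.
reliability = 0.5 + 0.1 * (entropy - entropy.mean()) + rng.normal(0, 0.02, n_designs)
print("correlation(entropy, reliability) =", round(np.corrcoef(entropy, reliability)[0, 1], 3))
```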
ERIC Educational Resources Information Center
Prime, Heather; Perlman, Michal; Tackett, Jennifer L.; Jenkins, Jennifer M.
2014-01-01
Research Findings: The goal of this study was to develop a construct of sibling cognitive sensitivity, which describes the extent to which children take their siblings' knowledge and cognitive abilities into account when working toward a joint goal. In addition, the study compared 2 coding methodologies for measuring the construct: a thin…
Mechanical modulation method for ultrasensitive phase measurements in photonics biosensing.
Patskovsky, S; Maisonneuve, M; Meunier, M; Kabashin, A V
2008-12-22
A novel polarimetry methodology for phase-sensitive measurements in single reflection geometry is proposed for applications in optical transduction-based biological sensing. The methodology uses alternating step-like chopper-based mechanical phase modulation for the orthogonal s- and p-polarizations of light reflected from the sensing interface and the extraction of phase information at different harmonics of the modulation. We show that even under a relatively simple experimental arrangement, the methodology provides a resolution of phase measurements as low as 0.007 deg. We also examine the proposed approach using Total Internal Reflection (TIR) and Surface Plasmon Resonance (SPR) geometries. For TIR geometry, the response appears to be strongly dependent on the prism material, with the best values for high refractive index Si. The detection limit for Si-based TIR is estimated as 10^-5 in terms of Refractive Index Units (RIU). SPR geometry offers a much stronger phase response due to a much sharper phase characteristic. With a detection limit of 3.2×10^-7 RIU, the proposed methodology provides one of the best sensitivities among phase-sensitive SPR devices. Advantages of the proposed method include high sensitivity, simplicity of the experimental setup and noise immunity as a result of the high-stability modulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tomlinson, E.T.; deSaussure, G.; Weisbin, C.R.
1977-03-01
The main purpose of the study is the determination of the sensitivity of TRX-2 thermal lattice performance parameters to nuclear cross section data, particularly the epithermal resonance capture cross section of 238U. An energy-dependent sensitivity profile was generated for each of the performance parameters, to the most important cross sections of the various isotopes in the lattice. Uncertainties in the calculated values of the performance parameters due to estimated uncertainties in the basic nuclear data, deduced in this study, were shown to be small compared to the uncertainties in the measured values of the performance parameter and compared to differences among calculations based upon the same data but with different methodologies.
Evaluation of digital real-time PCR assay as a molecular diagnostic tool for single-cell analysis.
Chang, Chia-Hao; Mau-Hsu, Daxen; Chen, Ke-Cheng; Wei, Cheng-Wey; Chiu, Chiung-Ying; Young, Tai-Horng
2018-02-21
In a single-cell study, isolating and identifying single cells are essential, but these processes often require a large investment of time or money. The aim of this study was to isolate and analyse single cells using a novel platform, the PanelChip™ Analysis System, which includes a 2500-microwell chip and a digital real-time polymerase chain reaction (dqPCR) assay, in comparison with a standard PCR (qPCR) assay. Through serial dilution of a standard of known concentration, namely pUC19, the accuracy and sensitivity levels of the two methodologies were compared. The two systems were tested on the basis of expression levels of the genetic markers vimentin, E-cadherin, N-cadherin and GAPDH in A549 lung carcinoma cells at two known concentrations. Furthermore, the influence of heparin, a known PCR inhibitor commonly found in blood samples, was evaluated in both methodologies. Finally, mathematical models were proposed and the separation method for single cells was verified; moreover, gene expression levels during epithelial-mesenchymal transition in single cells under TGFβ1 treatment were measured. We conclude that dqPCR performed using PanelChip™ is superior to standard qPCR in terms of sensitivity, precision, and heparin tolerance. The dqPCR assay is a potential tool for clinical diagnosis and single-cell applications.
Kristensen, Lasse S; Andersen, Gitte B; Hager, Henrik; Hansen, Lise Lotte
2012-01-01
Sensitive and specific mutation detection is of particular importance in cancer diagnostics, prognostics, and individualized patient treatment. However, the majority of molecular methodologies that have been developed with the aim of increasing the sensitivity of mutation testing have drawbacks in terms of specificity, convenience, or cost. Here, we have established a new method, Competitive Amplification of Differentially Melting Amplicons (CADMA), which allows very sensitive and specific detection of all mutation types. The principle of the method is to amplify wild-type and mutated sequences simultaneously using a three-primer system. A mutation-specific primer is designed to introduce melting temperature-decreasing mutations in the resulting mutated amplicon, while a second overlapping primer is designed to amplify both wild-type and mutated sequences. When combined with a third common primer, very sensitive mutation detection becomes possible using high-resolution melting (HRM) as the detection platform. The introduction of melting temperature-decreasing mutations in the mutated amplicon also allows for further mutation enrichment by fast co-amplification at lower denaturation temperature PCR (COLD-PCR). For proof of concept, we have designed CADMA assays for clinically relevant BRAF, EGFR, KRAS, and PIK3CA mutations, which are sensitive to between 0.025% and 0.25% mutated alleles in a wild-type background. In conclusion, CADMA enables highly sensitive and specific mutation detection by HRM analysis. © 2011 Wiley Periodicals, Inc.
Imaging modalities for characterising focal pancreatic lesions.
Best, Lawrence Mj; Rawji, Vishal; Pereira, Stephen P; Davidson, Brian R; Gurusamy, Kurinchi Selvan
2017-04-17
Increasing numbers of incidental pancreatic lesions are being detected each year. Accurate characterisation of pancreatic lesions into benign, precancerous, and cancer masses is crucial in deciding whether to use treatment or surveillance. Distinguishing benign lesions from precancerous and cancerous lesions can prevent patients from undergoing unnecessary major surgery. Despite the importance of accurately classifying pancreatic lesions, there is no clear algorithm for management of focal pancreatic lesions. To determine and compare the diagnostic accuracy of various imaging modalities in detecting cancerous and precancerous lesions in people with focal pancreatic lesions. We searched the CENTRAL, MEDLINE, Embase, and Science Citation Index until 19 July 2016. We searched the references of included studies to identify further studies. We did not restrict studies based on language or publication status, or whether data were collected prospectively or retrospectively. We planned to include studies reporting cross-sectional information on the index test (CT (computed tomography), MRI (magnetic resonance imaging), PET (positron emission tomography), EUS (endoscopic ultrasound), EUS elastography, and EUS-guided biopsy or FNA (fine-needle aspiration)) and reference standard (confirmation of the nature of the lesion was obtained by histopathological examination of the entire lesion by surgical excision, or histopathological examination for confirmation of precancer or cancer by biopsy and clinical follow-up of at least six months in people with negative index tests) in people with pancreatic lesions irrespective of language or publication status or whether the data were collected prospectively or retrospectively. Two review authors independently searched the references to identify relevant studies and extracted the data. We planned to use the bivariate analysis to calculate the summary sensitivity and specificity with their 95% confidence intervals and the hierarchical summary receiver operating characteristic (HSROC) to compare the tests and assess heterogeneity, but used simpler models (such as univariate random-effects model and univariate fixed-effect model) for combining studies when appropriate because of the sparse data. We were unable to compare the diagnostic performance of the tests using formal statistical methods because of sparse data. We included 54 studies involving a total of 3,196 participants evaluating the diagnostic accuracy of various index tests. In these 54 studies, eight different target conditions were identified with different final diagnoses constituting benign, precancerous, and cancerous lesions. None of the studies was of high methodological quality. None of the comparisons in which single studies were included was of sufficiently high methodological quality to warrant highlighting of the results. For differentiation of cancerous lesions from benign or precancerous lesions, we identified only one study per index test. The second analysis, of studies differentiating cancerous versus benign lesions, provided three tests in which meta-analysis could be performed. The sensitivities and specificities for diagnosing cancer were: EUS-FNA: sensitivity 0.79 (95% confidence interval (CI) 0.07 to 1.00), specificity 1.00 (95% CI 0.91 to 1.00); EUS: sensitivity 0.95 (95% CI 0.84 to 0.99), specificity 0.53 (95% CI 0.31 to 0.74); PET: sensitivity 0.92 (95% CI 0.80 to 0.97), specificity 0.65 (95% CI 0.39 to 0.84). 
The third analysis, of studies differentiating precancerous or cancerous lesions from benign lesions, only provided one test (EUS-FNA) in which meta-analysis was performed. EUS-FNA had moderate sensitivity for diagnosing precancerous or cancerous lesions (sensitivity 0.73 (95% CI 0.01 to 1.00) and high specificity 0.94 (95% CI 0.15 to 1.00), the extremely wide confidence intervals reflecting the heterogeneity between the studies). The fourth analysis, of studies differentiating cancerous (invasive carcinoma) from precancerous (dysplasia), provided three tests in which meta-analysis was performed. The sensitivities and specificities for diagnosing invasive carcinoma were: CT: sensitivity 0.72 (95% CI 0.50 to 0.87), specificity 0.92 (95% CI 0.81 to 0.97); EUS: sensitivity 0.78 (95% CI 0.44 to 0.94), specificity 0.91 (95% CI 0.61 to 0.98); EUS-FNA: sensitivity 0.66 (95% CI 0.03 to 0.99), specificity 0.92 (95% CI 0.73 to 0.98). The fifth analysis, of studies differentiating cancerous (high-grade dysplasia or invasive carcinoma) versus precancerous (low- or intermediate-grade dysplasia), provided six tests in which meta-analysis was performed. The sensitivities and specificities for diagnosing cancer (high-grade dysplasia or invasive carcinoma) were: CT: sensitivity 0.87 (95% CI 0.00 to 1.00), specificity 0.96 (95% CI 0.00 to 1.00); EUS: sensitivity 0.86 (95% CI 0.74 to 0.92), specificity 0.91 (95% CI 0.83 to 0.96); EUS-FNA: sensitivity 0.47 (95% CI 0.24 to 0.70), specificity 0.91 (95% CI 0.32 to 1.00); EUS-FNA carcinoembryonic antigen 200 ng/mL: sensitivity 0.58 (95% CI 0.28 to 0.83), specificity 0.51 (95% CI 0.19 to 0.81); MRI: sensitivity 0.69 (95% CI 0.44 to 0.86), specificity 0.93 (95% CI 0.43 to 1.00); PET: sensitivity 0.90 (95% CI 0.79 to 0.96), specificity 0.94 (95% CI 0.81 to 0.99). The sixth analysis, of studies differentiating cancerous (invasive carcinoma) from precancerous (low-grade dysplasia), provided no tests in which meta-analysis was performed. The seventh analysis, of studies differentiating precancerous or cancerous (intermediate- or high-grade dysplasia or invasive carcinoma) from precancerous (low-grade dysplasia), provided two tests in which meta-analysis was performed. The sensitivity and specificity for diagnosing cancer were: CT: sensitivity 0.83 (95% CI 0.68 to 0.92), specificity 0.83 (95% CI 0.64 to 0.93) and MRI: sensitivity 0.80 (95% CI 0.58 to 0.92), specificity 0.81 (95% CI 0.53 to 0.95), respectively. The eighth analysis, of studies differentiating precancerous or cancerous (intermediate- or high-grade dysplasia or invasive carcinoma) from precancerous (low-grade dysplasia) or benign lesions, provided no test in which meta-analysis was performed. There were no major alterations in the subgroup analysis of cystic pancreatic focal lesions (42 studies; 2086 participants). None of the included studies evaluated EUS elastography or sequential testing. We were unable to arrive at any firm conclusions because of the differences in the way that study authors classified focal pancreatic lesions into cancerous, precancerous, and benign lesions; the inclusion of few studies with wide confidence intervals for each comparison; poor methodological quality in the studies; and heterogeneity in the estimates within comparisons.
Wang, Li; Carnegie, Graeme K.
2013-01-01
Among methods to study protein-protein interaction inside cells, Bimolecular Fluorescence Complementation (BiFC) is relatively simple and sensitive. BiFC is based on the production of fluorescence using two non-fluorescent fragments of a fluorescent protein (Venus, a Yellow Fluorescent Protein variant, is used here). Non-fluorescent Venus fragments (VN and VC) are fused to two interacting proteins (in this case, AKAP-Lbc and PDE4D3), yielding fluorescence due to VN-AKAP-Lbc-VC-PDE4D3 interaction and the formation of a functional fluorescent protein inside cells. BiFC provides information on the subcellular localization of protein complexes and the strength of protein interactions based on fluorescence intensity. However, BiFC analysis using microscopy to quantify the strength of protein-protein interaction is time-consuming and somewhat subjective due to heterogeneity in protein expression and interaction. By coupling flow cytometric analysis with BiFC methodology, the fluorescent BiFC protein-protein interaction signal can be accurately measured for a large quantity of cells in a short time. Here, we demonstrate an application of this methodology to map regions in PDE4D3 that are required for the interaction with AKAP-Lbc. This high throughput methodology can be applied to screening factors that regulate protein-protein interaction. PMID:23979513
Wang, Li; Carnegie, Graeme K
2013-08-15
Among methods to study protein-protein interaction inside cells, Bimolecular Fluorescence Complementation (BiFC) is relatively simple and sensitive. BiFC is based on the production of fluorescence using two non-fluorescent fragments of a fluorescent protein (Venus, a Yellow Fluorescent Protein variant, is used here). Non-fluorescent Venus fragments (VN and VC) are fused to two interacting proteins (in this case, AKAP-Lbc and PDE4D3), yielding fluorescence due to VN-AKAP-Lbc-VC-PDE4D3 interaction and the formation of a functional fluorescent protein inside cells. BiFC provides information on the subcellular localization of protein complexes and the strength of protein interactions based on fluorescence intensity. However, BiFC analysis using microscopy to quantify the strength of protein-protein interaction is time-consuming and somewhat subjective due to heterogeneity in protein expression and interaction. By coupling flow cytometric analysis with BiFC methodology, the fluorescent BiFC protein-protein interaction signal can be accurately measured for a large quantity of cells in a short time. Here, we demonstrate an application of this methodology to map regions in PDE4D3 that are required for the interaction with AKAP-Lbc. This high throughput methodology can be applied to screening factors that regulate protein-protein interaction.
Usefulness of MLPA in the detection of SHOX deletions.
Funari, Mariana F A; Jorge, Alexander A L; Souza, Silvia C A L; Billerbeck, Ana E C; Arnhold, Ivo J P; Mendonca, Berenice B; Nishi, Mirian Y
2010-01-01
SHOX haploinsufficiency causes a wide spectrum of short stature phenotypes, such as Leri-Weill dyschondrosteosis (LWD) and disproportionate short stature (DSS). SHOX deletions are responsible for approximately two-thirds of isolated haploinsufficiency cases; therefore, it is important to determine the most appropriate methodology for detection of gene deletions. In this study, three methodologies for the detection of SHOX deletions were compared: fluorescence in situ hybridization (FISH), microsatellite analysis and multiplex ligation-dependent probe amplification (MLPA). Forty-four patients (8 LWD and 36 DSS) were analyzed. The cosmid LLNOYCO3'M'34F5 was used as a probe for the FISH analysis, and microsatellite analyses were performed using three intragenic microsatellite markers. MLPA was performed using commercial kits. Twelve patients (8 LWD and 4 DSS) had deletions in the SHOX area detected by MLPA, and 2 patients generated discordant results with the other methodologies. In the first case, the deletion was not detected by FISH. In the second case, both FISH and microsatellite analyses were unable to identify the intragenic deletion. In conclusion, MLPA was more sensitive, less expensive and less laborious; therefore, it should be used as the initial molecular method for the detection of SHOX gene deletions. Copyright © 2010 Elsevier Masson SAS. All rights reserved.
Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines
Kurç, Tahsin M.; Taveira, Luís F. R.; Melo, Alba C. M. A.; Gao, Yi; Kong, Jun; Saltz, Joel H.
2017-01-01
Motivation: Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. Results: The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Conclusions: Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Availability and Implementation: Source code: https://github.com/SBU-BMI/region-templates/. Contact: teodoro@unb.br. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28062445
Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines.
Teodoro, George; Kurç, Tahsin M; Taveira, Luís F R; Melo, Alba C M A; Gao, Yi; Kong, Jun; Saltz, Joel H
2017-04-01
Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Source code: https://github.com/SBU-BMI/region-templates/. Contact: teodoro@unb.br. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
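Because the workflow quality above is quantified with Dice and Jaccard metrics, a minimal sketch of those two overlap measures for binary segmentation masks is given below; the synthetic masks and the 5% pixel perturbation (standing in for the effect of a parameter change) are assumptions for illustration and do not reproduce the framework's parameter-search machinery.

```python
# Minimal sketch of the output-quality metrics named above (Dice and Jaccard)
# for two binary segmentation masks, the kind of quantities one would correlate
# with parameter changes in a sensitivity study. The masks here are synthetic.
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def jaccard(a, b):
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union

rng = np.random.default_rng(0)
reference = rng.random((256, 256)) > 0.5   # e.g. segmentation with default parameters
candidate = reference.copy()
flip = rng.random((256, 256)) < 0.05       # perturb 5% of pixels to mimic a parameter change
candidate[flip] = ~candidate[flip]

print(f"Dice    = {dice(reference, candidate):.3f}")
print(f"Jaccard = {jaccard(reference, candidate):.3f}")
```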
NASA Astrophysics Data System (ADS)
Shah, Syed Muhammad Saqlain; Batool, Safeera; Khan, Imran; Ashraf, Muhammad Usman; Abbas, Syed Hussnain; Hussain, Syed Adnan
2017-09-01
Automatic diagnosis of human diseases is mostly achieved through decision support systems. The performance of these systems is mainly dependent on the selection of the most relevant features. This becomes harder when the dataset contains missing values for the different features. Probabilistic Principal Component Analysis (PPCA) has a reputation for dealing with the problem of missing attribute values. This research presents a methodology which uses the results of medical tests as input, extracts a reduced-dimensional feature subset and provides a diagnosis of heart disease. The proposed methodology extracts high-impact features in a new projection by using Probabilistic Principal Component Analysis (PPCA). PPCA extracts projection vectors which contribute the highest covariance, and these projection vectors are used to reduce the feature dimension. The selection of projection vectors is done through Parallel Analysis (PA). The reduced-dimension feature subset is provided to a radial basis function (RBF) kernel-based Support Vector Machine (SVM). The RBF-based SVM serves the purpose of classification into two categories, i.e., Heart Patient (HP) and Normal Subject (NS). The proposed methodology is evaluated through accuracy, specificity and sensitivity over three UCI datasets, i.e., Cleveland, Switzerland and Hungarian. The statistical results achieved through the proposed technique are presented in comparison to existing research, showing its impact. The proposed technique achieved an accuracy of 82.18%, 85.82% and 91.30% for the Cleveland, Hungarian and Switzerland datasets, respectively.
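A minimal sketch of the kind of pipeline described above is shown below, assuming a hypothetical CSV of medical test results; sklearn's PCA is used as a stand-in for PPCA, the number of retained components (which Parallel Analysis would normally select) is fixed, and the hyperparameters are defaults, so this illustrates the structure rather than reproducing the authors' method.

```python
# Minimal sketch (not the authors' code): PPCA-style dimensionality reduction
# followed by an RBF-kernel SVM, evaluated with accuracy, sensitivity and
# specificity. sklearn's PCA stands in for PPCA, and the number of retained
# components is fixed for illustration. The CSV file and column names are
# hypothetical.
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

df = pd.read_csv("cleveland.csv")          # hypothetical file: features + binary label
X = df.drop(columns=["target"]).values     # medical test results
y = (df["target"].values > 0).astype(int)  # 1 = Heart Patient, 0 = Normal Subject

model = Pipeline([
    ("scale", StandardScaler()),
    ("reduce", PCA(n_components=5)),       # stand-in for PPCA + Parallel Analysis
    ("svm", SVC(kernel="rbf", C=1.0, gamma="scale")),
])

y_hat = cross_val_predict(model, X, y, cv=10)
tn, fp, fn, tp = confusion_matrix(y, y_hat).ravel()
print("accuracy   :", (tp + tn) / len(y))
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
```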
Poulsen, Nicklas N; Pedersen, Morten E; Østergaard, Jesper; Petersen, Nickolaj J; Nielsen, Christoffer T; Heegaard, Niels H H; Jensen, Henrik
2016-09-20
Detection of immune responses is important in the diagnosis of many diseases. For example, the detection of circulating autoantibodies against double-stranded DNA (dsDNA) is used in the diagnosis of Systemic Lupus Erythematosus (SLE). It is, however, difficult to reach satisfactory sensitivity, specificity, and accuracy with established assays. Also, existing methodologies for quantification of autoantibodies are challenging to transfer to a point-of-care setting. Here we present the use of flow-induced dispersion analysis (FIDA) for rapid (minutes) measurement of autoantibodies against dsDNA. The assay is based on Taylor dispersion analysis (TDA) and is fully automated with the use of standard capillary electrophoresis (CE) based equipment employing fluorescence detection. It is robust toward matrix effects as demonstrated by the direct analysis of samples composed of up to 85% plasma derived from human blood samples, and it allows for flexible exchange of the DNA sequences used to probe for the autoantibodies. Plasma samples from SLE positive patients were analyzed using the new FIDA methodology as well as by standard indirect immunofluorescence and solid-phase immunoassays. Interestingly, the patient antibodies bound DNA sequences with different affinities, suggesting pronounced heterogeneity among autoantibodies produced in SLE. The FIDA based methodology is a new approach for autoantibody detection and holds promise for being used for patient stratification and monitoring of disease activity.
Depellegrin, Daniel; Pereira, Paulo
2016-01-15
This study presents a series of oil spill indexes for the characterization of physical and biological sensitivity in unsheltered coastal environments. The case study extends over 237 km of Lithuanian-Russian coastal areas subjected to multiple oil spill threats. Results show that 180 km of shoreline have an environmental sensitivity index (ESI) score of 3. Natural clean-up processes depending on (a) shoreline sinuosity, (b) orientation and (c) wave exposure are favourable on 72 km of shoreline. Vulnerability analysis based on pre-existing oil spill scenarios for the Kravtsovskoye D6 platform indicates that 15.1 km of the Curonian Spit have a high impact probability. The highest seafloor sensitivity within the 20 m isobath is at the Vistula Spit and Curonian Spit, whereas biological sensitivity is moderate over the entire study area. The paper concludes by emphasizing the importance of harmonized datasets and methodologies for transboundary oil spill impact assessment. Copyright © 2015 Elsevier Ltd. All rights reserved.
[Experimental analysis of some determinants of inductive reasoning].
Ono, K
1989-02-01
Three experiments were conducted from a behavioral perspective to investigate the determinants of inductive reasoning and to compare some methodological differences. The dependent variable used in these experiments was the threshold of confident response (TCR), defined as "the minimal sample size required to establish generalization from instances." Experiment 1 examined the effects of population size on inductive reasoning, and the results from 35 college students showed that the TCR varied in proportion to the logarithm of population size. In Experiment 2, 30 subjects showed distinct sensitivity to both prior probability and base rate. The results from 70 subjects who participated in Experiment 3 showed that the TCR was affected by its consequences (risk condition) and, especially, that humans were sensitive to a loss situation. These results demonstrate the sensitivity of humans to statistical variables in inductive reasoning. Furthermore, the methodological comparison indicated that the experimentally observed values of the TCR were close to, but not as precise as, the optimal values predicted by Bayes' model. On the other hand, the subjective TCR estimated by the subjects was highly discrepant from the observed TCR. These findings suggest that various aspects of inductive reasoning can be fruitfully investigated not only through subjective estimations such as probability likelihood but also from an objective behavioral perspective.
Schlauch, Robert C.; Crane, Cory A.; Houston, Rebecca J.; Molnar, Danielle S.; Schlienz, Nicolas J.; Lang, Alan R.
2015-01-01
The current project sought to examine the psychometric properties of a personality-based measure (Substance Use Risk Profile Scale; SURPS: introversion-hopelessness, anxiety sensitivity, impulsivity, and sensation seeking) designed to differentially predict substance use preferences and patterns by matching primary personality-based motives for use to the specific effects of various psychoactive substances. Specifically, we sought to validate the SURPS in a clinical sample of substance users using cue reactivity methodology to assess current inclinations to consume a wide range of psychoactive substances. Using confirmatory factor analysis and correlational analyses, the SURPS demonstrated good psychometric properties and construct validity. Further, impulsivity and sensation seeking were associated with use of multiple substances but could be differentiated by motives for use and susceptibility to the reinforcing effects of stimulants (i.e., impulsivity) and alcohol (i.e., sensation seeking). In contrast, introversion-hopelessness and anxiety sensitivity demonstrated a pattern of use more focused on reducing negative affect, but were not differentiated based on specific patterns of use. Taken together, the results suggest that among those receiving inpatient treatment for substance use disorders, the SURPS is a valid instrument for measuring four distinct personality dimensions that may be sensitive to motivational susceptibilities to specific patterns of alcohol and drug use. PMID:26052180
Jahn, I; Foraita, R
2008-01-01
In Germany, gender-sensitive approaches are part of the guidelines for good epidemiological practice as well as of health reporting. They are increasingly demanded in order to implement the gender mainstreaming strategy in research funding by the federal government and the federal states. This paper focuses on methodological aspects of data analysis; the health report of Bremen, a population-based cross-sectional study, serves as the empirical example. Health reporting requires analysis and reporting methods that are able to uncover sex/gender issues in the questions posed, on the one hand, and to consider how results can be communicated adequately, on the other. The core question is: what consequences does the way the category sex is included in different statistical analyses for the identification of potential target groups have for the results? As evaluation methods, logistic regressions and a two-stage procedure were conducted exploratively. The latter combines graphical models with CHAID decision trees and allows complex results to be visualised. Both methods are analysed stratified by sex/gender as well as adjusted for sex/gender and compared with each other. As a result, only stratified analyses are able to detect differences between the sexes and within the sex/gender groups as long as one cannot resort to prior knowledge. Adjusted analyses can detect sex/gender differences only if interaction terms have been included in the model. Results are discussed from a statistical-epidemiological perspective as well as in the context of health reporting. In conclusion, the question of whether a statistical method is gender-sensitive can only be answered for concrete research questions and known conditions. Often, an appropriate statistical procedure can be chosen after conducting separate analyses for women and men. Future gender studies require innovative study designs as well as conceptual clarity with regard to the biological and sociocultural elements of the category sex/gender.
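A minimal sketch of the statistical point above, on simulated (hypothetical) data: an adjusted logistic model detects a sex/gender-specific effect only when an interaction term is included, whereas stratified models estimate the effect separately per group. The variable names and effect sizes are invented for illustration.

```python
# Minimal sketch: adjusted logistic models with and without a sex*exposure
# interaction, compared to sex/gender-stratified models, on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 4000
sex = rng.integers(0, 2, n)                      # 0 = men, 1 = women
x = rng.normal(size=n)                           # some exposure
logit_p = -1.0 + 0.1 * x + 1.0 * sex * x         # effect of x differs by sex
y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))
df = pd.DataFrame({"y": y, "x": x, "sex": sex})

# Adjusted model without interaction: averages the effect over both sexes.
print(smf.logit("y ~ x + sex", df).fit(disp=0).params)
# Adjusted model with interaction: recovers the sex-specific difference.
print(smf.logit("y ~ x * sex", df).fit(disp=0).params)
# Stratified analysis: one model per sex/gender group.
for s, part in df.groupby("sex"):
    print("sex =", s, smf.logit("y ~ x", part).fit(disp=0).params["x"])
```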
Risk Analysis of a Fuel Storage Terminal Using HAZOP and FTA
Baixauli-Pérez, Mª Piedad
2017-01-01
The size and complexity of industrial chemical plants, together with the nature of the products handled, mean that analysis and control of the risks involved are required. This paper presents a methodology for risk analysis in chemical and allied industries that is based on a combination of HAZard and OPerability analysis (HAZOP) and a quantitative analysis of the most relevant risks through the development of fault trees, fault tree analysis (FTA). Results from the FTA allow prioritizing the preventive and corrective measures to minimize the probability of failure. An analysis of a case study is performed; it consists of the terminal for unloading chemical and petroleum products, and the fuel storage facilities of two companies, in the port of Valencia (Spain). The HAZOP analysis shows that the loading and unloading areas are the most sensitive areas of the plant and that the most significant danger is a fuel spill. The FTA analysis indicates that the most likely event is a fuel spill in the tank truck loading area. A sensitivity analysis of the FTA results shows the importance of the human factor in all sequences of the possible accidents, so improving the training of plant staff should be mandatory. PMID:28665325
Risk Analysis of a Fuel Storage Terminal Using HAZOP and FTA.
Fuentes-Bargues, José Luis; González-Cruz, Mª Carmen; González-Gaya, Cristina; Baixauli-Pérez, Mª Piedad
2017-06-30
The size and complexity of industrial chemical plants, together with the nature of the products handled, mean that analysis and control of the risks involved are required. This paper presents a methodology for risk analysis in chemical and allied industries that is based on a combination of HAZard and OPerability analysis (HAZOP) and a quantitative analysis of the most relevant risks through the development of fault trees, fault tree analysis (FTA). Results from the FTA allow prioritizing the preventive and corrective measures to minimize the probability of failure. An analysis of a case study is performed; it consists of the terminal for unloading chemical and petroleum products, and the fuel storage facilities of two companies, in the port of Valencia (Spain). The HAZOP analysis shows that the loading and unloading areas are the most sensitive areas of the plant and that the most significant danger is a fuel spill. The FTA analysis indicates that the most likely event is a fuel spill in the tank truck loading area. A sensitivity analysis of the FTA results shows the importance of the human factor in all sequences of the possible accidents, so improving the training of plant staff should be mandatory.
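A minimal fault-tree sketch of the FTA idea used above is given below; the gate structure, basic events and probabilities are hypothetical stand-ins (not the plant model from the study), and a simple one-at-a-time sensitivity shows how each basic event, including human error, drives the top-event probability.

```python
# Minimal fault-tree sketch (hypothetical gates and probabilities): the top
# event "spill during tank-truck loading" is reached through OR/AND
# combinations of independent basic events, and halving each basic event in
# turn gives a crude sensitivity ranking.
def p_or(*ps):   # P(A or B or ...) for independent events
    q = 1.0
    for p in ps:
        q *= (1.0 - p)
    return 1.0 - q

def p_and(*ps):  # P(A and B and ...) for independent events
    q = 1.0
    for p in ps:
        q *= p
    return q

basic = {"hose_failure": 1e-3, "overfill": 5e-4,
         "human_error": 2e-2, "level_alarm_fails": 1e-2}

def top_event(b):
    overfill_spill = p_and(b["overfill"], b["level_alarm_fails"])     # AND gate
    return p_or(b["hose_failure"], b["human_error"], overfill_spill)  # OR gate

base = top_event(basic)
print(f"top event probability: {base:.4e}")
for name in basic:   # one-at-a-time sensitivity: halve each basic event
    reduced = dict(basic, **{name: basic[name] / 2})
    print(f"halving {name:18s} -> {top_event(reduced):.4e}")
```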
[Study on the automatic parameters identification of water pipe network model].
Jia, Hai-Feng; Zhao, Qi-Feng
2010-01-01
Based on an analysis of problems in the development and application of water pipe network models, the automatic identification of model parameters is regarded as a key bottleneck for applying such models in water supply enterprises. A methodology for the automatic identification of water pipe network model parameters based on GIS and SCADA databases is proposed. The kernel algorithm of automatic parameter identification is then studied: RSA (Regionalized Sensitivity Analysis) is used for the automatic recognition of sensitive parameters, and MCS (Monte-Carlo Sampling) is used for the automatic identification of parameter values; the detailed technical route based on RSA and MCS is presented. A module for the automatic identification of water pipe network model parameters was developed. Finally, a typical water pipe network was selected as a case, a case study on automatic parameter identification was conducted, and satisfactory results were achieved.
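A minimal sketch of the RSA/MCS combination described above, using a hypothetical stand-in for the hydraulic model, invented parameter ranges and a single synthetic observation: Monte Carlo samples are split into behavioural and non-behavioural sets against the observation, and a Kolmogorov-Smirnov distance per parameter serves as the regionalized sensitivity measure.

```python
# Minimal RSA/MCS sketch (hypothetical model and ranges, not the actual pipe
# network model): Monte Carlo sampling of candidate parameter sets, a
# behavioural/non-behavioural split against a SCADA-like observation, and a
# per-parameter Kolmogorov-Smirnov distance as the sensitivity measure.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

def model(theta):
    # stand-in for a hydraulic simulation: returns a predicted pressure
    c_rough, demand_factor, leak = theta
    return 50.0 - 8.0 / c_rough - 3.0 * demand_factor - 20.0 * leak

observed = 40.0
names = ["roughness", "demand_factor", "leakage"]
lo = np.array([0.5, 0.8, 0.00])
hi = np.array([2.0, 1.2, 0.10])

samples = lo + rng.random((5000, 3)) * (hi - lo)   # MCS over the ranges
preds = np.array([model(t) for t in samples])
behavioural = np.abs(preds - observed) < 1.5       # acceptance threshold

for j, name in enumerate(names):                   # RSA: compare distributions
    stat, _ = ks_2samp(samples[behavioural, j], samples[~behavioural, j])
    print(f"{name:14s} KS distance = {stat:.3f}")   # larger = more sensitive
```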
The role of modelling in prioritising and planning clinical trials.
Chilcott, J; Brennan, A; Booth, A; Karnon, J; Tappenden, P
2003-01-01
To identify the role of modelling in planning and prioritising trials. The review focuses on modelling methods used in the construction of disease models and on methods for their analysis and interpretation. Searches were initially developed in MEDLINE and then translated into other databases. Systematic reviews of the methodological and case study literature were undertaken. Search strategies focused on the intersection between three domains: modelling, health technology assessment and prioritisation. The review found that modelling can extend the validity of trials by: generalising from trial populations to specific target groups; generalising to other settings and countries; extrapolating trial outcomes to the longer term; linking intermediate outcome measures to final outcomes; extending analysis to the relevant comparators; adjusting for prognostic factors in trials; and synthesising research results. The review suggested that modelling may offer greatest benefits where the impact of a technology occurs over a long duration, where disease/technology characteristics are not observable, where there are long lead times in research, or for rapidly changing technologies. It was also found that modelling can inform the key parameters for research: sample size, trial duration and population characteristics. One-way, multi-way and threshold sensitivity analysis have been used in informing these aspects but are flawed. The payback approach has been piloted and while there have been weaknesses in its implementation, the approach does have potential. Expected value of information analysis is the only existing methodology that has been applied in practice and can address all these issues. The potential benefit of this methodology is that the value of research is directly related to its impact on technology commissioning decisions, and is demonstrated in real and absolute rather than relative terms; it assesses the technical efficiency of different types of research. Modelling is not a substitute for data collection. However, modelling can identify trial designs of low priority in informing health technology commissioning decisions. Good practice in undertaking and reporting economic modelling studies requires further dissemination and support, specifically in sensitivity analyses, model validation and the reporting of assumptions. Case studies of the payback approach using stochastic sensitivity analyses should be developed. Use of overall expected value of perfect information should be encouraged in modelling studies seeking to inform prioritisation and planning of health technology assessments. Research is required to assess if the potential benefits of value of information analysis can be realised in practice; on the definition of an adequate objective function; on methods for analysing computationally expensive models; and on methods for updating prior probability distributions.
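A minimal sketch of the expected value of perfect information (EVPI) calculation highlighted above, computed from probabilistic sensitivity analysis samples; the two-option net-benefit model, its distributions and the willingness-to-pay threshold are hypothetical.

```python
# Minimal EVPI sketch from probabilistic sensitivity analysis samples: the
# value of resolving all uncertainty is the gap between choosing the best
# option per realisation and choosing the best option on average.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
wtp = 20_000.0                                   # willingness to pay per QALY

# Simulated uncertainty in incremental effects and costs of treatment B vs A
d_qaly = rng.normal(0.05, 0.04, n)
d_cost = rng.normal(500.0, 300.0, n)

nb_a = np.zeros(n)                               # net benefit of comparator
nb_b = wtp * d_qaly - d_cost                     # incremental net benefit of B
nb = np.column_stack([nb_a, nb_b])

ev_current = nb.mean(axis=0).max()               # best option on average
ev_perfect = nb.max(axis=1).mean()               # best option per realisation
print(f"EVPI per decision = {ev_perfect - ev_current:.1f}")
```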
NASA Astrophysics Data System (ADS)
Morandage, Shehan; Schnepf, Andrea; Vanderborght, Jan; Javaux, Mathieu; Leitner, Daniel; Laloy, Eric; Vereecken, Harry
2017-04-01
Root traits are increasingly important in the breeding of new crop varieties. For example, longer and fewer lateral roots are suggested to improve the drought resistance of wheat. Thus, detailed root architectural parameters are important. However, classical field sampling of roots only provides more aggregated information such as root length density (coring), root counts per area (trenches) or root arrival curves at certain depths (rhizotubes). We investigate the possibility of obtaining information about the root system architecture of plants from classical field-based root sampling schemes, based on sensitivity analysis and inverse parameter estimation. This methodology was developed in a virtual experiment in which a root architectural model, parameterized for winter wheat, was used to simulate root system development in a field. This information provided the ground truth, which is normally unknown in a real field experiment. The three sampling schemes, coring, trenching, and rhizotubes, were virtually applied and the aggregated information computed. The Morris OAT global sensitivity analysis method was then performed to determine the most sensitive parameters of the root architecture model for the three different sampling methods. The estimated means and standard deviations of the elementary effects of a total of 37 parameters were evaluated. Upper and lower bounds of the parameters were obtained from the literature and published data on winter wheat root architectural parameters. Root length density profiles from coring, arrival curve characteristics observed in rhizotubes, and root counts in grids of the trench profile method were evaluated statistically to investigate the influence of each parameter using five different error functions. The number of branches, insertion angle, inter-nodal distance, and elongation rates are the most sensitive parameters, and the parameter sensitivity varies slightly with depth. Most parameters and their interactions with the other parameters show highly nonlinear effects on the model output. The most sensitive parameters will be subject to inverse estimation from the virtual field sampling data using the DREAMzs algorithm. The estimated parameters can then be compared with the ground truth in order to determine the suitability of the sampling schemes to identify specific traits or parameters of the root growth model.
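A minimal sketch of the elementary-effects idea behind the Morris OAT screening mentioned above, applied to a hypothetical stand-in model rather than the root architecture simulator; it uses a simplified radial one-at-a-time design (not full Morris trajectories) and invented parameter names and ranges, and reports the mean of absolute elementary effects (mu*) and their standard deviation (sigma) per parameter.

```python
# Simplified radial one-at-a-time (OAT) variant of Morris screening on a
# hypothetical stand-in model: perturb one parameter at a time from random
# base points and summarise the elementary effects per parameter.
import numpy as np

rng = np.random.default_rng(3)
names = ["branches", "insertion_angle", "internodal_dist", "elongation_rate"]
lo = np.array([2.0, 30.0, 0.5, 0.5])
hi = np.array([8.0, 90.0, 3.0, 2.0])

def model(x):
    # stand-in output, e.g. simulated root length density at some depth
    b, a, d, e = x
    return b * e / d + 0.01 * a * e

def elementary_effects(n_traj=50, delta_frac=0.1):
    k = len(lo)
    ee = np.zeros((n_traj, k))
    delta = delta_frac * (hi - lo)
    for t in range(n_traj):
        x = lo + rng.random(k) * (hi - lo - delta)   # keep x + delta in range
        y0 = model(x)
        for j in range(k):
            xp = x.copy()
            xp[j] += delta[j]
            ee[t, j] = (model(xp) - y0) / delta_frac  # effect per unit of scaled input
    return ee

ee = elementary_effects()
for j, name in enumerate(names):
    print(f"{name:16s} mu* = {np.abs(ee[:, j]).mean():8.3f}  sigma = {ee[:, j].std():8.3f}")
```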
Fast computation of derivative based sensitivities of PSHA models via algorithmic differentiation
NASA Astrophysics Data System (ADS)
Leövey, Hernan; Molkenthin, Christian; Scherbaum, Frank; Griewank, Andreas; Kuehn, Nicolas; Stafford, Peter
2015-04-01
Probabilistic seismic hazard analysis (PSHA) is the preferred tool for estimating the potential ground-shaking hazard due to future earthquakes at a site of interest. A modern PSHA represents a complex framework which combines different models with possibly many inputs. Sensitivity analysis is a valuable tool for quantifying changes of a model output as inputs are perturbed, identifying critical input parameters and obtaining insight into the model behavior. Differential sensitivity analysis relies on calculating first-order partial derivatives of the model output with respect to its inputs. Moreover, derivative-based global sensitivity measures (Sobol' & Kucherenko '09) can be practically used to detect non-essential inputs of the models, thus restricting the focus of attention to a possibly much smaller set of inputs. Nevertheless, obtaining first-order partial derivatives of complex models with traditional approaches can be very challenging, and usually increases the computational complexity linearly with the number of inputs appearing in the models. In this study we show how Algorithmic Differentiation (AD) tools can be used in a complex framework such as PSHA to successfully estimate derivative-based sensitivities, as is the case in various other domains such as meteorology or aerodynamics, without any significant increase in the computational complexity required for the original computations. First we demonstrate the feasibility of the AD methodology by comparing AD-derived sensitivities to analytically derived sensitivities for a basic PSHA case using a simple ground-motion prediction equation. In a second step, we derive sensitivities via AD for a more complex PSHA study using a ground motion attenuation relation based on a stochastic method to simulate strong motion. The presented approach is general enough to accommodate more advanced PSHA studies of higher complexity.
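To illustrate the AD principle discussed above (not the tools or models used in the study), the sketch below implements a tiny forward-mode AD via dual numbers and applies it to a hypothetical ground-motion-prediction-style relation; the functional form and coefficients are invented for illustration.

```python
# Minimal forward-mode AD via dual numbers, applied to a hypothetical
# GMPE-style function ln(PGA) = c1 + c2*M + c3*ln(R + c4). The coefficients
# and functional form are illustrative, not a published GMPE.
import math

class Dual:
    """Number carrying a value and a derivative (forward-mode AD)."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def dlog(x):
    return Dual(math.log(x.val), x.dot / x.val)

def ln_pga(m, r):
    c1, c2, c3, c4 = -1.0, 0.9, -1.3, 10.0
    return c1 + c2 * m + c3 * dlog(r + c4)

# Sensitivity of ln(PGA) w.r.t. magnitude: seed dM = 1, dR = 0.
print(ln_pga(Dual(6.5, 1.0), Dual(30.0, 0.0)).dot)   # -> 0.9
# Sensitivity w.r.t. distance: seed dR = 1, dM = 0.
print(ln_pga(Dual(6.5, 0.0), Dual(30.0, 1.0)).dot)   # -> c3 / (R + c4) = -0.0325
```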
Chan, T M Simon; Teram, Eli; Shaw, Ian
2017-01-01
Despite growing consideration of the needs of research participants in studies related to sensitive issues, discussions of alternative ways to design sensitive research are scarce. Structured as an exchange between two researchers who used different approaches in their studies with childhood sexual abuse survivors, in this article, we seek to advance understanding of methodological and ethical issues in designing sensitive research. The first perspective, which is termed protective, promotes the gradual progression of participants from a treatment phase into a research phase, with the ongoing presence of a researcher and a social worker in both phases. In the second perspective, which is termed minimalist, we argue for clear boundaries between research and treatment processes, limiting the responsibility of researchers to ensuring that professional support is available to participants who experience emotional difficulties. Following rebuttals, lessons are drawn for ethical balancing between methodological rigor and the needs of participants. © The Author(s) 2015.
Methodology for cost analysis of film-based and filmless portable chest systems
NASA Astrophysics Data System (ADS)
Melson, David L.; Gauvain, Karen M.; Beardslee, Brian M.; Kraitsik, Michael J.; Burton, Larry; Blaine, G. James; Brink, Gary S.
1996-05-01
Many studies analyzing the costs of film-based and filmless radiology have focused on multi-modality, hospital-wide solutions. Yet due to the enormous cost of converting an entire large radiology department or hospital to a filmless environment all at once, institutions often choose to eliminate film one area at a time. Narrowing the focus of cost analysis may be useful in making such decisions. This presentation will outline a methodology for analyzing the cost per exam of film-based and filmless solutions for providing portable chest exams to Intensive Care Units (ICUs). The methodology, unlike most in the literature, is based on parallel data collection from existing filmless and film-based ICUs, and is currently being utilized at our institution. Direct costs, taken from the perspective of the hospital, for portable computed radiography chest exams in one filmless and two film-based ICUs are identified. The major cost components are labor, equipment, materials, and storage. Methods for gathering and analyzing each of the cost components are discussed, including FTE-based and time-based labor analysis, incorporation of equipment depreciation, lease, and maintenance costs, and estimation of materials costs. Extrapolation of data from three ICUs to model hypothetical, hospital-wide film-based and filmless ICU imaging systems is described. Performance of sensitivity analysis on the filmless model to assess the impact of anticipated reductions in specific labor, equipment, and archiving costs is detailed. A number of indirect costs, which are not explicitly included in the analysis, are identified and discussed.
Borotikar, Bhushan S.; Sheehan, Frances T.
2017-01-01
Objectives To establish an in vivo, normative patellofemoral cartilage contact mechanics database acquired during voluntary muscle control using a novel dynamic magnetic resonance (MR) imaging-based computational methodology and validate the contact mechanics sensitivity to the known sub-millimeter methodological inaccuracies. Design Dynamic cine phase-contrast and multi-plane cine images were acquired while female subjects (n=20, sample of convenience) performed an open kinetic chain (knee flexion-extension) exercise inside a 3-Tesla MR scanner. Static cartilage models were created from high resolution three-dimensional static MR data and accurately placed in their dynamic pose at each time frame based on the cine-PC data. Cartilage contact parameters were calculated based on the surface overlap. Statistical analysis was performed using paired t-test and a one-sample repeated measures ANOVA. The sensitivity of the contact parameters to the known errors in the patellofemoral kinematics was determined. Results Peak mean patellofemoral contact area was 228.7±173.6mm2 at 40° knee angle. During extension, contact centroid and peak strain locations tracked medially on the femoral and patellar cartilage and were not significantly different from each other. At 30°, 35°, and 40° of knee extension, contact area was significantly different. Contact area and centroid locations were insensitive to rotational and translational perturbations. Conclusion This study is a first step towards unfolding the biomechanical pathways to anterior patellofemoral pain and OA using dynamic, in vivo, and accurate methodologies. The database provides crucial data for future studies and for validation of, or as an input to, computational models. PMID:24012620
Probabilistic Simulation of Stress Concentration in Composite Laminates
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Murthy, P. L. N.; Liaw, D. G.
1994-01-01
A computational methodology is described to probabilistically simulate the stress concentration factors (SCF's) in composite laminates. This new approach consists of coupling probabilistic composite mechanics with probabilistic finite element structural analysis. The composite mechanics is used to probabilistically describe all the uncertainties inherent in composite material properties, whereas the finite element analysis is used to probabilistically describe the uncertainties associated with methods to experimentally evaluate SCF's, such as loads, geometry, and supports. The effectiveness of the methodology is demonstrated by using it to simulate the SCF's in three different composite laminates. Simulated results match experimental data for probability density and for cumulative distribution functions. The sensitivity factors indicate that the SCF's are influenced by local stiffness variables, by load eccentricities, and by initial stress fields.
Advances in bioanalytical techniques to measure steroid hormones in serum.
French, Deborah
2016-06-01
Steroid hormones are measured clinically to determine if a patient has a pathological process occurring in the adrenal gland, or other hormone responsive organs. They are very similar in structure making them analytically challenging to measure. Additionally, these hormones have vast concentration differences in human serum adding to the measurement complexity. GC-MS was the gold standard methodology used to measure steroid hormones clinically, followed by radioimmunoassay, but that was replaced by immunoassay due to ease of use. LC-MS/MS has now become a popular alternative owing to simplified sample preparation than for GC-MS and increased specificity and sensitivity over immunoassay. This review will discuss these methodologies and some new developments that could simplify and improve steroid hormone analysis in serum.
The NBS Energy Model Assessment project: Summary and overview
NASA Astrophysics Data System (ADS)
Gass, S. I.; Hoffman, K. L.; Jackson, R. H. F.; Joel, L. S.; Saunders, P. B.
1980-09-01
The activities and technical reports for the project are summarized. The reports cover: assessment of the documentation of Midterm Oil and Gas Supply Modeling System; analysis of the model methodology characteristics of the input and other supporting data; statistical procedures undergirding construction of the model and sensitivity of the outputs to variations in input, as well as guidelines and recommendations for the role of these in model building and developing procedures for their evaluation.
Optimization Under Uncertainty for Electronics Cooling Design
NASA Astrophysics Data System (ADS)
Bodla, Karthik K.; Murthy, Jayathi Y.; Garimella, Suresh V.
Optimization under uncertainty is a powerful methodology used in design and optimization to produce robust, reliable designs. Such an optimization methodology, employed when the input quantities of interest are uncertain, produces output uncertainties, helping the designer choose input parameters that would result in satisfactory thermal solutions. Apart from providing basic statistical information such as mean and standard deviation in the output quantities, auxiliary data from an uncertainty based optimization, such as local and global sensitivities, help the designer decide the input parameter(s) to which the output quantity of interest is most sensitive. This helps the design of experiments based on the most sensitive input parameter(s). A further crucial output of such a methodology is the solution to the inverse problem - finding the allowable uncertainty range in the input parameter(s), given an acceptable uncertainty range in the output quantity of interest...
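A minimal sketch of forward uncertainty propagation with a simple sensitivity ranking, in the spirit of the methodology above; the junction-temperature model, the input distributions and the 85 degC limit are hypothetical and chosen purely for illustration.

```python
# Minimal sketch of uncertainty propagation for a hypothetical cooling model:
# T_j = T_amb + P * (R_cond + R_conv), with uncertain inputs. Outputs are the
# mean and standard deviation of T_j plus a crude correlation-based global
# sensitivity ranking. Model and distributions are illustrative only.
import numpy as np

rng = np.random.default_rng(7)
n = 50_000

t_amb = rng.normal(35.0, 2.0, n)        # ambient temperature, degC
power = rng.normal(20.0, 1.5, n)        # heat dissipation, W
r_cond = rng.normal(0.8, 0.05, n)       # conduction resistance, K/W
r_conv = rng.normal(1.2, 0.15, n)       # convection resistance, K/W

t_j = t_amb + power * (r_cond + r_conv)

print(f"mean T_j = {t_j.mean():.2f} degC, std = {t_j.std():.2f} degC")
print(f"P(T_j > 85 degC) = {(t_j > 85.0).mean():.4f}")

inputs = {"t_amb": t_amb, "power": power, "r_cond": r_cond, "r_conv": r_conv}
for name, x in inputs.items():          # |correlation| as a crude sensitivity measure
    rho = np.corrcoef(x, t_j)[0, 1]
    print(f"sensitivity of T_j to {name:7s}: |rho| = {abs(rho):.2f}")
```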
Fozooni, Tahereh; Ravan, Hadi; Sasan, Hosseinali
2017-12-01
Due to their unique properties, such as programmability, ligand-binding capability, and flexibility, nucleic acids can serve as analytes and/or recognition elements for biosensing. To improve the sensitivity of nucleic acid-based biosensing and hence the detection of a few copies of a target molecule, different modern amplification methodologies, namely target- and signal-based amplification strategies, have already been developed. These recent signal amplification technologies, which are capable of amplifying the signal intensity without changing the target's copy number, have resulted in fast, reliable, and sensitive methods for nucleic acid detection. Working in cell-free settings, researchers have been able to optimize a variety of complex and quantitative methods suitable for deployment in live-cell conditions. In this study, a comprehensive review of the signal amplification technologies for the detection of nucleic acids is provided. We classify the signal amplification methodologies into enzymatic and non-enzymatic strategies, with a primary focus on the methods that enable us to shift from in vitro detection to in vivo imaging. Finally, the future challenges and limitations of detection in cellular conditions are discussed.
Zhang, Liding; Wei, Qiujiang; Han, Qinqin; Chen, Qiang; Tai, Wenlin; Zhang, Jinyang; Song, Yuzhu; Xia, Xueshan
2018-01-01
Shigella is an important food-borne zoonotic bacterial pathogen of humans and can cause clinically severe diarrhea. There is an urgent need to develop a specific, sensitive, and rapid methodology for detection of this pathogen. In this study, loop-mediated isothermal amplification (LAMP) combined with a magnetic immunocapture assay (IC-LAMP) was developed for the first time for the detection of Shigella in pure culture, artificial milk, and clinical stool samples. This method exhibited a detection limit of 8.7 CFU/mL. Compared with polymerase chain reaction, IC-LAMP is sensitive, specific, and reliable for monitoring Shigella. Additionally, IC-LAMP is more convenient, efficient, and rapid than ordinary LAMP, as it more efficiently enriches pathogen cells without extraction of genomic DNA. Under isothermal conditions, the amplification curves and the green fluorescence were detected within 30 min in the presence of genomic DNA template. The overall analysis time was approximately 1 h, including the enrichment and lysis of the bacterial cells, a notably short detection time. Therefore, the IC-LAMP methodology described here is potentially useful for the efficient detection of Shigella in various samples. PMID:29467730
Methodologies for Removing/Desorbing and Transporting Particles from Surfaces to Instrumentation
NASA Astrophysics Data System (ADS)
Miller, Carla J.; Cespedes, Ernesto R.
2012-12-01
Explosive trace detection (ETD) continues to be a key technology supporting the fight against terrorist bombing threats. Very selective and sensitive ETD instruments have been developed to detect explosive threats concealed on personnel, in vehicles, in luggage, and in cargo containers, as well as for forensic analysis (e.g. post blast inspection, bomb-maker identification, etc.) in a broad range of homeland security, law enforcement, and military applications. A number of recent studies have highlighted the fact that significant improvements in ETD systems' capabilities will be achieved, not by increasing the selectivity/sensitivity of the sensors, but by improved techniques for particle/vapor sampling, pre-concentration, and transport to the sensors. This review article represents a compilation of studies focused on characterizing the adhesive properties of explosive particles, the methodologies for removing/desorbing these particles from a range of surfaces, and approaches for transporting them to the instrument. The objectives of this review are to summarize fundamental work in explosive particle characterization, to describe experimental work performed in harvesting and transport of these particles, and to highlight those approaches that indicate high potential for improving ETD capabilities.
Application of Adjoint Methodology in Various Aspects of Sonic Boom Design
NASA Technical Reports Server (NTRS)
Rallabhandi, Sriram K.
2014-01-01
One of the advances in computational design has been the development of adjoint methods allowing efficient calculation of sensitivities in gradient-based shape optimization. This paper discusses two new applications of adjoint methodology that have been developed to aid in sonic boom mitigation exercises. In the first, equivalent area targets are generated using adjoint sensitivities of selected boom metrics. These targets may then be used to drive the vehicle shape during optimization. The second application is the computation of adjoint sensitivities of boom metrics on the ground with respect to parameters such as flight conditions, propagation sampling rate, and selected inputs to the propagation algorithms. These sensitivities enable the designer to make more informed selections of flight conditions at which the chosen cost functionals are less sensitive.
Wu, Zhiyuan; Yuan, Hong; Zhang, Xinju; Liu, Weiwei; Xu, Jinhua; Zhang, Wei; Guan, Ming
2011-01-01
JAK2 V617F, a somatic point mutation that leads to constitutive JAK2 phosphorylation and kinase activation, has been incorporated into the WHO classification and diagnostic criteria of myeloid neoplasms. Although various approaches such as restriction fragment length polymorphism, the amplification refractory mutation system and real-time PCR have been developed for its detection, a generic, rapid, closed-tube method that can be run on routine genetic testing instruments with stability and cost-efficiency has not been described. Asymmetric PCR for detection of JAK2 V617F with a 3'-blocked unlabeled probe, a saturating dye and subsequent melting curve analysis was performed on a Rotor-Gene® Q real-time cycler to establish the methodology. We compared this method to the existing amplification refractory mutation system and direct sequencing. The broad applicability of this unlabeled probe melting method was then validated on three diverse real-time systems (Roche LightCycler® 480, Applied Biosystems ABI® 7500 and Eppendorf Mastercycler® ep realplex) in two different laboratories. The unlabeled probe melting analysis could genotype the JAK2 V617F mutation explicitly with a detection sensitivity of 3% mutation load. At a level of 5% mutation load, the intra- and inter-assay CVs of the probe-DNA heteroduplex (mutation/wild type) were 3.14%/3.55% and 1.72%/1.29%, respectively. The method could equally discriminate mutant from wild-type samples on the other three real-time instruments. With its high detection sensitivity, unlabeled probe melting curve analysis is better suited to disclosing the JAK2 V617F mutation than conventional methodologies. Verified by favorable inter- and intra-assay reproducibility, unlabeled probe melting analysis provides a generic mutation detection alternative for real-time instruments.
van Boxtel, Niels; Wolfs, Kris; Van Schepdael, Ann; Adams, Erwin
2015-12-18
The sensitivity of gas chromatography (GC) combined with the full evaporation technique (FET) for the analysis of aqueous samples is limited by the maximum tolerable sample volume in a headspace vial. Using an acetone acetal as a water scavenger prior to FET-GC analysis proved to be a useful and versatile tool for the analysis of high-boiling analytes in aqueous samples. 2,2-Dimethoxypropane (DMP) was used in this case, yielding methanol and acetone as reaction products with water. These solvents are relatively volatile and were easily removed by evaporation, enabling sample enrichment and leading to a 10-fold improvement in sensitivity compared to the standard 10 μL FET sample volume for a selection of typical high-boiling polar residual solvents in water. This could be improved even further if more sample is used. The method was applied to the determination of residual NMP in an aqueous solution of a cefotaxime analogue and proved to be considerably better than conventional static headspace (sHS) and the standard FET approach. The methodology was also applied to determine trace amounts of ethylene glycol (EG) in aqueous samples such as contact lens fluids, where scavenging of the water avoids laborious extraction prior to derivatization. During this experiment it was revealed that DMP reacts quantitatively with EG to form 2,2-dimethyl-1,3-dioxolane (2,2-DD) under the proposed reaction conditions. The relatively high volatility (bp 93°C) of 2,2-DD makes it possible to analyze EG using the sHS methodology, making additional derivatization reactions superfluous. Copyright © 2015 Elsevier B.V. All rights reserved.
Charpentier, R.R.; Klett, T.R.
2005-01-01
During the last 30 years, the methodology for assessment of undiscovered conventional oil and gas resources used by the U.S. Geological Survey has undergone considerable change. This evolution has been based on five major principles. First, the U.S. Geological Survey has responsibility for a wide range of U.S. and world assessments and requires a robust methodology suitable for immaturely explored as well as maturely explored areas. Second, the assessments should be based on as comprehensive a set of geological and exploration history data as possible. Third, the perils of methods that rely solely on statistical techniques without geological analysis are recognized. Fourth, the methodology and course of the assessment should be documented as transparently as possible, within the limits imposed by the inevitable use of subjective judgement. Fifth, the multiple uses of the assessments require a continuing effort to provide the documentation in such ways as to increase utility to the many types of users. Undiscovered conventional oil and gas resources are those recoverable volumes in undiscovered, discrete, conventional structural or stratigraphic traps. The USGS 2000 methodology for these resources is based on a framework of assessing numbers and sizes of undiscovered oil and gas accumulations and the associated risks. The input is standardized on a form termed the Seventh Approximation Data Form for Conventional Assessment Units. Volumes of resource are then calculated using a Monte Carlo program named Emc2, but an alternative analytic (non-Monte Carlo) program named ASSESS also can be used. The resource assessment methodology continues to change. Accumulation-size distributions are being examined to determine how sensitive the results are to size-distribution assumptions. The resource assessment output is changing to provide better applicability for economic analysis. The separate methodology for assessing continuous (unconventional) resources also has been evolving. Further studies of the relationship between geologic models of conventional and continuous resources will likely impact the respective resource assessment methodologies. © 2005 International Association for Mathematical Geology.
NASA Astrophysics Data System (ADS)
Moglia, Magnus; Sharma, Ashok K.; Maheepala, Shiroma
2012-07-01
Planning of regional and urban water resources, particularly with Integrated Urban Water Management approaches, often considers inter-relationships between human uses of water, the health of the natural environment and the cost of various management strategies. Decision makers hence typically need to consider a combination of social, environmental and economic goals. The types of strategies employed can include water efficiency measures, water sensitive urban design, stormwater management, or catchment management. Therefore, decision makers need to choose between different scenarios and to evaluate them against a number of criteria. This type of problem has a discipline devoted to it, Multi-Criteria Decision Analysis, which has often been applied in water management contexts. This paper describes the application of Subjective Logic in a basic Bayesian Network to a Multi-Criteria Decision Analysis problem. In doing so, it outlines a novel methodology that explicitly incorporates uncertainty and information reliability. Application of the methodology to a known case study context allows its behaviour to be explored. By making the uncertainty and reliability of assessments explicit, it allows the risks of various options to be assessed, which may help alleviate cognitive biases and support movement towards a well-formulated risk management policy.
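As a rough illustration of the kind of aggregation such a methodology enables, the sketch below encodes each criterion assessment as a subjective-logic opinion (belief, disbelief, uncertainty, base rate), takes its expected value, and combines criteria with weights while carrying the uncertainty alongside the score. The criteria, weights, opinion values and the simple weighted aggregation are illustrative assumptions, not the Bayesian Network model used in the paper.

```python
# Minimal sketch: scoring a decision option with subjective-logic opinions.
# An opinion (b, d, u, a) carries belief, disbelief, uncertainty and base rate,
# with b + d + u = 1; its expected probability is E = b + a*u.
# All criteria, weights and opinion values below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Opinion:
    b: float        # belief
    d: float        # disbelief
    u: float        # uncertainty (reflects information reliability)
    a: float = 0.5  # base rate

    def expected(self) -> float:
        return self.b + self.a * self.u

# Hypothetical assessments of one water-management option against three criteria.
weights = {"social": 0.3, "environmental": 0.4, "economic": 0.3}
assessments = {
    "social": Opinion(b=0.6, d=0.2, u=0.2),
    "environmental": Opinion(b=0.4, d=0.3, u=0.3),
    "economic": Opinion(b=0.7, d=0.2, u=0.1),
}

score = sum(w * assessments[c].expected() for c, w in weights.items())
uncertainty = sum(w * assessments[c].u for c, w in weights.items())
print(f"weighted expected score = {score:.2f}, weighted uncertainty = {uncertainty:.2f}")
```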
[Epistemological/methodological contributions to the fortification of an emancipatory con(science)].
Ferreira, Marcelo José Monteiro; Rigotto, Raquel Maria
2014-10-01
This article conducts a critical and reflective analysis of the paths of elaboration, systematization and communication of research results in conjunction with colleges, social movements and individuals in the territory under scrutiny. To this end, the article takes as its core analytical theme the process of shared production of knowledge, both in the epistemological-methodological field and with respect to its social destination. The case study was adopted as the methodology, preceded by the use of focus groups and in-depth interviews as techniques. To analyze the qualitative material, discourse analysis was adopted in line with the assumptions of in-depth hermeneutics. The results are presented in two stages. Firstly, the new possibilities for a paradigmatic reorientation are discussed on the basis of permanent and procedural interlocution with the empirical field and its different contexts and actors. Secondly, the article analyzes, in the praxiological dimension, the distinct ways in which the knowledge produced is appropriated in dialogue with the social movements and the individuals in the territory under scrutiny. It concludes by highlighting alternative and innovative paths to an edifying academic practice, one which stresses solidarity and is sensitive to vulnerable populations and their requests.
A Methodology for Robust Comparative Life Cycle Assessments Incorporating Uncertainty.
Gregory, Jeremy R; Noshadravan, Arash; Olivetti, Elsa A; Kirchain, Randolph E
2016-06-21
We propose a methodology for conducting robust comparative life cycle assessments (LCA) by leveraging uncertainty. The method evaluates a broad range of the possible scenario space in a probabilistic fashion while simultaneously considering uncertainty in input data. The method is intended to ascertain which scenarios have a definitive environmentally preferable choice among the alternatives being compared and the significance of the differences given uncertainty in the parameters, which parameters have the most influence on this difference, and how we can identify the resolvable scenarios (where one alternative in the comparison has a clearly lower environmental impact). This is accomplished via an aggregated probabilistic scenario-aware analysis, followed by an assessment of which scenarios have resolvable alternatives. Decision-tree partitioning algorithms are used to isolate meaningful scenario groups. In instances where the alternatives cannot be resolved for scenarios of interest, influential parameters are identified using sensitivity analysis. If those parameters can be refined, the process can be iterated using the refined parameters. We also present definitions of uncertainty quantities that have not been applied in the field of LCA and approaches for characterizing uncertainty in those quantities. We then demonstrate the methodology through a case study of pavements.
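The following sketch illustrates the core probabilistic comparison under uncertainty: sample uncertain impact parameters for two hypothetical alternatives within each scenario, estimate the probability that one alternative has lower impact, and flag a scenario as resolved when that probability is decisive. The distributions, scenario factors and the 0.9 resolution threshold are illustrative assumptions, not values from the study.

```python
# Minimal sketch of a probabilistic comparative assessment under parameter
# uncertainty. Two hypothetical pavement alternatives are compared across
# scenarios; a scenario is "resolved" if one alternative has lower impact in
# at least 90% of Monte Carlo draws. All numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
n_draws = 10_000
scenarios = {"low_traffic": 1.0, "high_traffic": 1.8}  # hypothetical scaling factors

for name, traffic in scenarios.items():
    # Uncertain unit impacts (e.g., kg CO2e per functional unit), lognormal by assumption.
    impact_a = rng.lognormal(mean=np.log(100.0), sigma=0.15, size=n_draws) * traffic
    impact_b = rng.lognormal(mean=np.log(110.0), sigma=0.25, size=n_draws) * traffic
    p_a_lower = np.mean(impact_a < impact_b)
    resolved = p_a_lower >= 0.9 or p_a_lower <= 0.1
    print(f"{name}: P(A < B) = {p_a_lower:.2f}, resolved = {resolved}")
```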
CORSSTOL: Cylinder Optimization of Rings, Skin, and Stringers with Tolerance sensitivity
NASA Technical Reports Server (NTRS)
Finckenor, J.; Bevill, M.
1995-01-01
Cylinder Optimization of Rings, Skin, and Stringers with Tolerance (CORSSTOL) sensitivity is a design optimization program incorporating a method to examine the effects of user-provided manufacturing tolerances on weight and failure. CORSSTOL gives designers a tool to determine tolerances based on need. This is a decisive way to choose the best design among several manufacturing methods with differing capabilities and costs. CORSSTOL initially optimizes a stringer-stiffened cylinder for weight without tolerances. The skin and stringer geometry are varied, subject to stress and buckling constraints. Then the same analysis and optimization routines are used to minimize the maximum material condition weight subject to the least favorable combination of tolerances. The adjusted optimum dimensions are provided with the weight and constraint sensitivities of each design variable. The designer can immediately identify critical tolerances. The safety of parts made out of tolerance can also be determined. During design and development of weight-critical systems, design/analysis tools that provide product-oriented results are of vital significance. The development of this program and methodology provides designers with an effective cost- and weight-saving design tool. The tolerance sensitivity method can be applied to any system defined by a set of deterministic equations.
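A minimal sketch of the tolerance-sensitivity idea is given below: evaluate a simplified cylinder weight model at nominal dimensions and at the maximum material condition, then report the weight penalty attributable to each tolerance. The weight equation, dimensions and tolerances are placeholders, not CORSSTOL's actual formulation.

```python
# Minimal sketch of tolerance sensitivity for a stiffened-cylinder weight model.
# The weight function and all values are simplified placeholders; the point is
# to evaluate the maximum-material-condition weight and the finite-difference
# sensitivity of weight to each manufacturing tolerance.
import math

def weight(skin_t, stringer_area, n_stringers=24, radius=7.5, length=15.0, rho=0.1):
    """Idealized weight (lb) of skin plus stringers; all units illustrative."""
    skin = 2 * math.pi * radius * length * skin_t * rho
    stringers = n_stringers * stringer_area * length * rho
    return skin + stringers

nominal = {"skin_t": 0.060, "stringer_area": 0.12}
tol = {"skin_t": 0.005, "stringer_area": 0.01}   # assumed manufacturing tolerances

w_nom = weight(**nominal)
w_mmc = weight(nominal["skin_t"] + tol["skin_t"],
               nominal["stringer_area"] + tol["stringer_area"])
print(f"nominal weight = {w_nom:.2f} lb, max-material-condition weight = {w_mmc:.2f} lb")

for var, t in tol.items():
    bumped = dict(nominal)
    bumped[var] += t
    dw = weight(**bumped) - w_nom
    print(f"weight sensitivity to {var} tolerance (+{t}): {dw:+.2f} lb")
```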
Sampling and sensitivity analyses tools (SaSAT) for computational modelling
Hoare, Alexander; Regan, David G; Wilson, David P
2008-01-01
SaSAT (Sampling and Sensitivity Analysis Tools) is a user-friendly software package for applying uncertainty and sensitivity analyses to mathematical and computational models of arbitrary complexity and context. The toolbox is built in Matlab®, a numerical mathematical software package, and utilises algorithms contained in the Matlab® Statistics Toolbox. However, Matlab® is not required to use SaSAT as the software package is provided as an executable file with all the necessary supplementary files. The SaSAT package is also designed to work seamlessly with Microsoft Excel but no functionality is forfeited if that software is not available. A comprehensive suite of tools is provided to enable the following tasks to be easily performed: efficient and equitable sampling of parameter space by various methodologies; calculation of correlation coefficients; regression analysis; factor prioritisation; and graphical output of results, including response surfaces, tornado plots, and scatterplots. Use of SaSAT is exemplified by application to a simple epidemic model. To our knowledge, a number of the methods available in SaSAT for performing sensitivity analyses have not previously been used in epidemiological modelling and their usefulness in this context is demonstrated. PMID:18304361
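The sketch below reproduces the flavour of that workflow with standard scientific Python: Latin hypercube sampling of a parameter space, evaluation of a toy epidemic model, and ranking of parameter importance by rank correlation. The SIR-like model, the parameter ranges, and the use of Spearman correlation in place of SaSAT's own partial rank correlation routine are illustrative assumptions.

```python
# Minimal sketch of an uncertainty/sensitivity workflow: Latin hypercube sampling,
# a toy epidemic model, and rank-correlation-based parameter importance.
import numpy as np
from scipy.stats import qmc, spearmanr

names = ["beta", "gamma", "i0"]
lower = np.array([0.1, 0.05, 1e-4])
upper = np.array([0.5, 0.30, 1e-2])

sampler = qmc.LatinHypercube(d=len(names), seed=1)
X = qmc.scale(sampler.random(n=500), lower, upper)

def final_size(beta, gamma, i0, days=200):
    """Crude discrete-time SIR epidemic; returns the cumulative attack rate."""
    s, i, r = 1.0 - i0, i0, 0.0
    for _ in range(days):
        new_inf = beta * s * i
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return r

Y = np.array([final_size(*row) for row in X])
for j, name in enumerate(names):
    rho, _ = spearmanr(X[:, j], Y)
    print(f"{name}: rank correlation with attack rate = {rho:+.2f}")
```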
Risk Factors for Low Back Pain in Childhood and Adolescence: A Systematic Review.
Calvo-Muñoz, Inmaculada; Kovacs, Francisco M; Roqué, Marta; Gago Fernández, Inés; Seco Calvo, Jesús
2018-05-01
To identify factors associated with low back pain (LBP) in children and adolescents. A systematic review was conducted (Prospero CRD42016038186). Observational studies analyzing LBP risk factors among participants aged between 9 and 16 were searched for in 13 electronic databases and 8 specialized journals until March 31, 2016, with no language restrictions. In addition, references in the identified studies were manually tracked. All identified studies that included ≥50 participants aged 9 to 16 were reviewed. Their methodological quality was assessed by 2 reviewers separately, using validated tools, which scored, from worst to best, 0 to 100 for cross-sectional and 0 to 12 for cohort studies. A sensitivity analysis included only studies that had adjusted for confounders, had ≥500 participants, and had a methodological score of ≥50%. A total of 5142 citations were screened and 61 studies, including 137,877 participants from 5 continents, were reviewed. Their mean (range) methodological scores were 74.56 (50 to 100) for cross-sectional studies and 7.36 (5 to 9) for cohort studies. The studies had assessed 35 demographic, clinical, biological, family, psychological, ergonomic, and lifestyle risk factors. The mean (range) prevalence of LBP ranged between 15.25% (3.20 to 57.00) for point prevalence and 38.98% (11.60 to 85.56) for lifetime prevalence. Results on the association between LBP and risk factors were inconsistent. In the sensitivity analysis, "older age" and "participation in competitive sports" showed a consistent association with LBP. Future studies should focus on muscle characteristics, the relationship between body and backpack weights, duration of carrying the backpack, characteristics of sport practice, and the factors associated specifically with chronic pain.
Using discrete choice experiments within a cost-benefit analysis framework: some considerations.
McIntosh, Emma
2006-01-01
A great advantage of the stated preference discrete choice experiment (SPDCE) approach to economic evaluation methodology is its immense flexibility within applied cost-benefit analyses (CBAs). However, while the use of SPDCEs in healthcare has increased markedly in recent years, there has been a distinct lack of equivalent CBAs in healthcare using such SPDCE-derived valuations. This article outlines specific issues and some practical suggestions relevant to the development of CBAs using SPDCE-derived benefits. The article shows that SPDCE-derived CBA can adopt recent developments in cost-effectiveness methodology, including the cost-effectiveness plane, appropriate consideration of uncertainty, the net-benefit framework and probabilistic sensitivity analysis methods, while maintaining the theoretical advantage of the SPDCE approach. The concept of a cost-benefit plane is no different in principle to the cost-effectiveness plane and can be a useful tool for reporting and presenting the results of CBAs. However, there are many challenging issues to address for the advancement of CBA methodology using SPDCEs within healthcare. Particular areas for development include the importance of accounting for uncertainty in SPDCE-derived willingness-to-pay values, the methodology of SPDCEs in clinical trial settings and economic models, measurement issues pertinent to using SPDCEs specifically in healthcare, and the importance of issues such as consideration of the dynamic nature of healthcare and the resulting impact this has on the validity of attribute definitions and context.
A multicriteria-based methodology for site prioritisation in sediment management.
Alvarez-Guerra, Manuel; Viguri, Javier R; Voulvoulis, Nikolaos
2009-08-01
Decision-making for sediment management is a complex task that incorporates the selection of areas for remediation and the assessment of options for any mitigation required. The application of Multicriteria Analysis (MCA) to rank different areas according to their need for sediment management provides a great opportunity for prioritisation, a first step in an integrated methodology that ultimately aims to assess and select suitable alternatives for managing the identified priority sites. This paper develops a methodology that starts with the delimitation of management units within areas of study, followed by the application of MCA methods that allow these management units to be ranked according to their need for remediation. The proposed process considers not only scientific evidence on sediment quality, but also other relevant aspects such as social and economic criteria associated with such decisions. The methodology is illustrated with its application to the case study area of the Bay of Santander, in northern Spain, highlighting some of the implications of utilising different MCA methods in the process. It also uses site-specific data to assess the subjectivity in the decision-making process, mainly reflected through the assignment of the criteria weights and uncertainties in the criteria scores. Analysis of the sensitivity of the results to these factors is used as a way to assess the stability and robustness of the ranking as a first step of the sediment management decision-making process.
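A minimal sketch of the ranking step, assuming a simple weighted-sum MCA rather than the specific MCA methods compared in the paper, is shown below; it ranks hypothetical management units and then perturbs the criteria weights to probe the stability of the top-priority unit.

```python
# Minimal sketch of ranking sediment management units by a weighted sum of
# normalized criteria, followed by a crude check of ranking stability under
# perturbed weights. Units, criteria scores and weights are illustrative.
import numpy as np

units = ["MU1", "MU2", "MU3", "MU4"]
criteria = ["contamination", "ecological_risk", "social_pressure", "cost"]
# Higher score = higher priority for remediation (cost already inverted).
scores = np.array([
    [0.9, 0.7, 0.4, 0.6],
    [0.5, 0.8, 0.7, 0.3],
    [0.3, 0.2, 0.9, 0.8],
    [0.7, 0.6, 0.5, 0.5],
])
weights = np.array([0.4, 0.3, 0.2, 0.1])

def rank(w):
    totals = scores @ (w / w.sum())
    return [units[i] for i in np.argsort(-totals)], totals

base_order, totals = rank(weights)
print("baseline ranking:", base_order, np.round(totals, 2))

# Perturb each weight by +/-25% and see whether the top-priority unit changes.
rng = np.random.default_rng(0)
changes = sum(rank(weights * rng.uniform(0.75, 1.25, size=4))[0][0] != base_order[0]
              for _ in range(1000))
print(f"top unit changed in {changes / 10:.1f}% of perturbed weightings")
```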
A new methodology to determine kinetic parameters for one- and two-step chemical models
NASA Technical Reports Server (NTRS)
Mantel, T.; Egolfopoulos, F. N.; Bowman, C. T.
1996-01-01
In this paper, a new methodology to determine kinetic parameters for simple chemical models and simple transport properties classically used in DNS of premixed combustion is presented. First, a one-dimensional code is used to compute steady, unstrained laminar methane-air flames in order to verify intrinsic features of laminar flames such as burning velocity and temperature and concentration profiles. Second, the flame response to steady and unsteady strain in the opposed-jet configuration is numerically investigated. It appears that, for a well-determined set of parameters, one- and two-step mechanisms reproduce the extinction limit of a laminar flame subjected to steady strain. Computations with the GRI-Mech mechanism (177 reactions, 39 species) and multicomponent transport properties are used to validate these simplified models. A sensitivity analysis of the preferential diffusion of heat and reactants when the Lewis number is close to unity indicates that the response of the flame to an oscillating strain is very sensitive to this number. As an application of this methodology, the interaction between a two-dimensional vortex pair and a premixed laminar flame is simulated by Direct Numerical Simulation (DNS) using the one- and two-step mechanisms. Comparison with the experimental results of Samaniego et al. (1994) shows a significant improvement in the description of the interaction when the two-step model is used.
A network-based analysis of CMIP5 "historical" experiments
NASA Astrophysics Data System (ADS)
Bracco, A.; Foudalis, I.; Dovrolis, C.
2012-12-01
In computer science, "complex network analysis" refers to a set of metrics, modeling tools and algorithms commonly used in the study of complex nonlinear dynamical systems. Its main premise is that the underlying topology or network structure of a system has a strong impact on its dynamics and evolution. By allowing local and non-local statistical interactions to be investigated, network analysis provides a powerful, but only marginally explored, framework for validating climate models and investigating teleconnections, assessing their strength, range, and impacts on the climate system. In this work we propose a new, fast, robust and scalable methodology to examine, quantify, and visualize climate sensitivity, while constraining general circulation model (GCM) outputs with observations. The goal of our novel approach is to uncover relations in the climate system that are not (or not fully) captured by more traditional methodologies used in climate science and often adopted from nonlinear dynamical systems analysis, and to explain known climate phenomena in terms of the network structure or its metrics. Our methodology is based on a solid theoretical framework and employs mathematical and statistical tools that have so far been exploited only tentatively in climate research. Suitably adapted to the climate problem, these tools can assist in visualizing the trade-offs in representing global links and teleconnections among different data sets. Here we present the methodology and compare network properties for different reanalysis data sets and a suite of CMIP5 coupled GCM outputs. With an extensive model intercomparison in terms of the climate network that each model leads to, we quantify how each model reproduces major teleconnections, rank model performances, and identify common or specific errors in comparing model outputs and observations.
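As a toy illustration of the network construction step, the sketch below correlates synthetic grid-point anomaly time series, thresholds the correlation matrix into an adjacency matrix, and uses node degree to flag potential teleconnection hubs. The random data and the 0.5 correlation threshold are placeholders for reanalysis or CMIP5 fields and for whatever significance criterion the actual methodology applies.

```python
# Minimal sketch of building a climate "network" from gridded anomaly time
# series: nodes are grid points, edges connect pairs whose correlation exceeds
# a threshold, and node degree highlights potential teleconnection hubs.
import numpy as np

rng = np.random.default_rng(7)
n_points, n_months = 200, 360             # e.g., coarse grid, 30 years of monthly data
field = rng.standard_normal((n_points, n_months))
field[:50] += 0.8 * rng.standard_normal(n_months)  # inject a shared mode (toy teleconnection)

corr = np.corrcoef(field)                 # n_points x n_points correlation matrix
adjacency = (np.abs(corr) > 0.5) & ~np.eye(n_points, dtype=bool)
degree = adjacency.sum(axis=1)

print("mean degree:", degree.mean())
print("most connected nodes (potential hubs):", np.argsort(-degree)[:5])
```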
NASA Astrophysics Data System (ADS)
Gambacorta, A.; Barnet, C.; Sun, F.; Goldberg, M.
2009-12-01
We investigate the water vapor component of the greenhouse effect in the tropical region using data from the Atmospheric InfraRed Sounder (AIRS). Unlike previous studies, which relied on the assumption of a constant lapse rate and performed coarse-layer or total-column sensitivity analyses, we exploit the high vertical resolution of AIRS to measure the sensitivity of the greenhouse effect to water vapor along the vertical column. We employ a "partial radiative perturbation" methodology and discriminate between two different dynamic regimes, convective and non-convective. This analysis provides useful insights into the occurrence and strength of the water vapor greenhouse effect and its sensitivity to spatial variations of surface temperature. By comparison with the clear-sky computations conducted in previous works, we attempt to constrain an estimate of the cloud contribution to the greenhouse effect. Our results compare well with the current literature, falling in the upper range of existing global circulation model estimates. We regard the results of this analysis as a useful reference to help discriminate among model simulations and improve our capability to make predictions about the future of our climate.
Comparative study on neutronics characteristics of a 1500 MWe metal fuel sodium-cooled fast reactor
Ohgama, Kazuya; Aliberti, Gerardo; Stauff, Nicolas E.; ...
2017-02-28
Under the cooperative effort of the Civil Nuclear Energy R&D Working Group within the framework of the U.S.-Japan bilateral, Argonne National Laboratory (ANL) and the Japan Atomic Energy Agency (JAEA) have been performing a benchmark study using the Japan Sodium-cooled Fast Reactor (JSFR) design with metal fuel. In this benchmark study, core characteristic parameters at the beginning of cycle were evaluated by the best-estimate deterministic and stochastic methodologies of ANL and JAEA. The results obtained by both institutions show a good agreement, with less than 200 pcm of discrepancy on the neutron multiplication factor, and less than 3% of discrepancy on the sodium void reactivity, Doppler reactivity, and control rod worth. The results from the stochastic and deterministic approaches were compared within each party to investigate the impacts of the deterministic approximations and to understand potential variations in the results due to the different calculation methodologies employed. From the detailed analysis of methodologies, it was found that the good agreement in multiplication factor from the deterministic calculations comes from the cancellation of the differences in methodology (0.4%) and nuclear data (0.6%). The different treatment of reflector cross-section generation was estimated to be the major cause of the discrepancy between the multiplication factors obtained by the JAEA and ANL deterministic methodologies. The impacts of the nuclear data libraries were also investigated using a sensitivity analysis methodology. Furthermore, the differences in the inelastic scattering cross sections of U-238, the ν values and fission cross sections of Pu-239, and the µ-average of Na-23 are the major contributors to the difference in the multiplication factors.
NASA Astrophysics Data System (ADS)
Rohmer, Jeremy
2016-04-01
Predicting the temporal evolution of landslides is typically supported by numerical modelling. Dynamic sensitivity analysis aims at assessing the influence of the landslide properties on the time-dependent predictions (e.g., time series of landslide displacements). Yet two major difficulties arise: 1. Global sensitivity analysis requires running the landslide model a large number of times (> 1000), which may become impracticable when the landslide model has a high computational cost (> several hours); 2. Landslide model outputs are not scalar but functions of time, i.e. they are n-dimensional vectors with n usually ranging from 100 to 1000. In this article, I explore the use of a basis set expansion, such as principal component analysis, to reduce the output dimensionality to a few components, each of them being interpreted as a dominant mode of variation in the overall structure of the temporal evolution. The computationally intensive calculation of the Sobol' indices for each of these components is then achieved through meta-modelling, i.e. by replacing the landslide model with a "costless-to-evaluate" approximation (e.g., a projection pursuit regression model). The methodology combining "basis set expansion - meta-model - Sobol' indices" is then applied to the La Frasse landslide to investigate the dynamic sensitivity of the surface horizontal displacements to the slip surface properties during pore pressure changes. I show how to extract information on the sensitivity of each main mode of temporal behaviour using a limited number (a few tens) of long-running simulations. In particular, I identify the parameters which trigger the occurrence of a turning point marking a shift between a regime of low landslide displacements and one of high values.
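The sketch below illustrates the dimension-reduction half of that chain: reduce time-dependent outputs of a toy model with PCA, then estimate a first-order sensitivity index of each component score to each input using a crude conditional-variance (binning) estimator standing in for Sobol' indices computed on a meta-model. The toy displacement model and parameter ranges are assumptions, not the La Frasse case.

```python
# Minimal sketch of "basis set expansion + sensitivity": PCA reduces the
# time series outputs to dominant modes, then a crude binning estimator of
# Var(E[y|x]) / Var(y) approximates first-order sensitivity for each mode.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
n_runs, n_times = 400, 200
t = np.linspace(0, 1, n_times)

# Toy model: displacement time series driven by two parameters (friction, stiffness).
friction = rng.uniform(0.2, 0.6, n_runs)
stiffness = rng.uniform(0.5, 2.0, n_runs)
X = np.column_stack([friction, stiffness])
Y = np.array([(1 - f) * np.log1p(10 * t) / k for f, k in X])   # shape (n_runs, n_times)

scores = PCA(n_components=2).fit_transform(Y)   # dominant modes of temporal variation

def first_order_index(x, y, n_bins=20):
    """Var(E[y | x]) / Var(y), estimated by binning x into quantile bins."""
    bins = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.digitize(x, bins[1:-1]), 0, n_bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(n_bins)])
    return cond_means.var() / y.var()

for j, name in enumerate(["friction", "stiffness"]):
    for c in range(2):
        s = first_order_index(X[:, j], scores[:, c])
        print(f"S1[{name} -> PC{c + 1}] ~ {s:.2f}")
```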
NASA Technical Reports Server (NTRS)
Rais-Rohani, Masoud
2001-01-01
This report describes the preliminary results of an investigation on component reliability analysis and reliability-based design optimization of thin-walled circular composite cylinders with average diameter and average length of 15 inches. Structural reliability is based on axial buckling strength of the cylinder. Both Monte Carlo simulation and First Order Reliability Method are considered for reliability analysis with the latter incorporated into the reliability-based structural optimization problem. To improve the efficiency of reliability sensitivity analysis and design optimization solution, the buckling strength of the cylinder is estimated using a second-order response surface model. The sensitivity of the reliability index with respect to the mean and standard deviation of each random variable is calculated and compared. The reliability index is found to be extremely sensitive to the applied load and elastic modulus of the material in the fiber direction. The cylinder diameter was found to have the third highest impact on the reliability index. Also the uncertainty in the applied load, captured by examining different values for its coefficient of variation, is found to have a large influence on cylinder reliability. The optimization problem for minimum weight is solved subject to a design constraint on element reliability index. The methodology, solution procedure and optimization results are included in this report.
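For orientation, the sketch below shows a crude Monte Carlo version of such a component reliability estimate: sample the random variables, evaluate a simplified axial buckling limit state, and convert the failure probability into a reliability index. The classical isotropic buckling formula, the knock-down factor and all distribution parameters are illustrative assumptions, not the second-order response surface or First Order Reliability Method used in the report.

```python
# Minimal sketch of a Monte Carlo buckling reliability estimate: limit state
# g = P_cr - P, failure probability pf = P(g < 0), reliability index
# beta = -Phi^{-1}(pf). All formulas and distributions are illustrative.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(11)
n = 200_000
nu = 0.3

E1 = rng.normal(20.0e6, 1.2e6, n)               # psi, modulus in fiber direction
t = rng.normal(0.08, 0.006, n)                  # in, wall thickness
P = rng.lognormal(np.log(100_000.0), 0.15, n)   # lb, applied axial load

# Classical axial buckling load of a thin cylinder, with an assumed 0.3 knock-down.
P_cr = 0.3 * 2.0 * np.pi * E1 * t**2 / np.sqrt(3.0 * (1.0 - nu**2))
pf = np.mean(P_cr - P < 0.0)

beta = -norm.ppf(pf) if 0.0 < pf < 1.0 else float("inf")
print(f"failure probability ~ {pf:.2e}, reliability index beta ~ {beta:.2f}")
```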
Ladstätter, Felix; Garrosa, Eva; Moreno-Jiménez, Bernardo; Ponsoda, Vicente; Reales Aviles, José Manuel; Dai, Junming
2016-01-01
Artificial neural networks are sophisticated modelling and prediction tools capable of extracting complex, non-linear relationships between predictor (input) and predicted (output) variables. This study explores this capacity by modelling non-linearities in the hardiness-modulated burnout process with a neural network. Specifically, two multi-layer feed-forward artificial neural networks are concatenated in an attempt to model the composite non-linear burnout process. Sensitivity analysis, a Monte Carlo-based global simulation technique, is then used to examine the first-order effects of the predictor variables on the burnout sub-dimensions and consequences. Results show that (1) this concatenated artificial neural network approach is feasible for modelling the burnout process, (2) sensitivity analysis is a fruitful method for studying the relative importance of predictor variables and (3) the relationships among variables involved in the development of burnout and its consequences are non-linear to differing degrees. Many relationships among variables (e.g., stressors and strains) are not linear, yet researchers use linear methods such as Pearson correlation or linear regression to analyse these relationships. Artificial neural network analysis is an innovative method for analysing non-linear relationships and, in combination with sensitivity analysis, is superior to linear methods.
Baxter, Siyan; Sanderson, Kristy; Venn, Alison J; Blizzard, C Leigh; Palmer, Andrew J
2014-01-01
To determine the relationship between return on investment (ROI) and quality of study methodology in workplace health promotion programs. Data were obtained through a systematic literature search of the National Health Service Economic Evaluation Database (NHS EED), Database of Abstracts of Reviews of Effects (DARE), Health Technology Assessment Database (HTA), Cost Effectiveness Analysis (CEA) Registry, EconLit, PubMed, Embase, Wiley, and Scopus. Included were articles written in English or German reporting cost(s) and benefit(s) of single- or multicomponent health promotion programs in working adults. Return-to-work and workplace injury prevention studies were excluded. Methodological quality was graded using the British Medical Journal Economic Evaluation Working Party checklist. Economic outcomes were presented as ROI, calculated as ROI = (benefits - costs of program)/costs of program. Results were weighted by study size and combined using meta-analysis techniques. Sensitivity analysis was performed using two additional methodological quality checklists. The influences of quality score and important study characteristics on ROI were explored. Fifty-one studies (61 intervention arms) published between 1984 and 2012 included 261,901 participants and 122,242 controls from nine industry types across 12 countries. Methodological quality scores were highly correlated between checklists (r = .84-.93). Methodological quality improved over time. Overall weighted ROI [mean ± standard deviation (confidence interval)] was 1.38 ± 1.97 (1.38-1.39), which indicates a 138% return on investment. When accounting for methodological quality, an inverse relationship to ROI was found. High-quality studies (n = 18) had a smaller mean ROI, 0.26 ± 1.74 (.23-.30), compared to moderate- (n = 16) 0.90 ± 1.25 (.90-.91) and low-quality (n = 27) 2.32 ± 2.14 (2.30-2.33) studies. Randomized controlled trials (RCTs) (n = 12) exhibited negative ROI, -0.22 ± 2.41 (-.27 to -.16). Financial returns became increasingly positive across quasi-experimental, nonexperimental, and modeled studies: 1.12 ± 2.16 (1.11-1.14), 1.61 ± 0.91 (1.56-1.65), and 2.05 ± 0.88 (2.04-2.06), respectively. Overall, the mean weighted ROI in workplace health promotion was positive. Higher-quality studies provided evidence of smaller financial returns. Methodological quality and study design are important determinants.
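The ROI definition and study-size weighting reduce to a few lines of arithmetic, sketched below with invented study data for illustration.

```python
# Minimal sketch of the ROI definition and a study-size-weighted mean ROI.
# The study data below are invented placeholders, not the reviewed studies.
def roi(benefits: float, costs: float) -> float:
    """ROI = (benefits - costs of program) / costs of program."""
    return (benefits - costs) / costs

# (participants, program costs, program benefits) -- hypothetical studies
studies = [
    (1_200, 50_000.0, 140_000.0),
    (300, 20_000.0, 18_000.0),
    (5_000, 400_000.0, 560_000.0),
]

weights = [n for n, _, _ in studies]
rois = [roi(b, c) for _, c, b in studies]
weighted_mean = sum(w * r for w, r in zip(weights, rois)) / sum(weights)
print([round(r, 2) for r in rois], f"weighted mean ROI = {weighted_mean:.2f}")
```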
High-Resolution Melting Analysis for Rapid Detection of Sequence Type 131 Escherichia coli.
Harrison, Lucas B; Hanson, Nancy D
2017-06-01
Escherichia coli isolates belonging to the sequence type 131 (ST131) clonal complex have been associated with the global distribution of fluoroquinolone and β-lactam resistance. Whole-genome sequencing and multilocus sequence typing identify sequence type but are expensive when evaluating large numbers of samples. This study was designed to develop a cost-effective screening tool using high-resolution melting (HRM) analysis to differentiate ST131 from non-ST131 E. coli in large sample populations in the absence of sequence analysis. The method was optimized using DNA from 12 E. coli isolates. Singleplex PCR was performed using 10 ng of DNA, Type-it HRM buffer, and multilocus sequence typing primers and was followed by multiplex PCR. The amplicon sizes ranged from 630 to 737 bp. Melt temperature peaks were determined by performing HRM analysis at 0.1°C resolution from 50 to 95°C on a Rotor-Gene Q 5-plex HRM system. Derivative melt curves were compared between sequence types and analyzed by principal component analysis. A blinded study of 191 E. coli isolates of ST131 and unknown sequence types validated this methodology. This methodology returned 99.2% specificity (124 true negatives and 1 false positive) and 100% sensitivity (66 true positives and 0 false negatives). This HRM methodology distinguishes ST131 from non-ST131 E. coli without sequence analysis. The analysis can be accomplished in about 3 h in any laboratory with an HRM-capable instrument and principal component analysis software. Therefore, this assay is a fast and cost-effective alternative to sequencing-based ST131 identification. Copyright © 2017 Harrison and Hanson.
Uncertainty Analysis of the Grazing Flow Impedance Tube
NASA Technical Reports Server (NTRS)
Brown, Martha C.; Jones, Michael G.; Watson, Willie R.
2012-01-01
This paper outlines a methodology to identify the measurement uncertainty of NASA Langley's Grazing Flow Impedance Tube (GFIT) over its operating range, and to identify the parameters that most significantly contribute to the acoustic impedance prediction. Two acoustic liners are used for this study. The first is a single-layer, perforate-over-honeycomb liner that is nonlinear with respect to sound pressure level. The second consists of a wire-mesh facesheet and a honeycomb core, and is linear with respect to sound pressure level. These liners allow for evaluation of the effects of measurement uncertainty on impedances educed with linear and nonlinear liners. In general, the measurement uncertainty is observed to be larger for the nonlinear liners, with the largest uncertainty occurring near anti-resonance. A sensitivity analysis of the aerodynamic parameters (Mach number, static temperature, and static pressure) used in the impedance eduction process is also conducted using a Monte-Carlo approach. This sensitivity analysis demonstrates that the impedance eduction process is virtually insensitive to each of these parameters.
Three-dimensional aerodynamic shape optimization of supersonic delta wings
NASA Technical Reports Server (NTRS)
Burgreen, Greg W.; Baysal, Oktay
1994-01-01
A recently developed three-dimensional aerodynamic shape optimization procedure AeSOP(sub 3D) is described. This procedure incorporates some of the most promising concepts from the area of computational aerodynamic analysis and design, specifically, discrete sensitivity analysis, a fully implicit 3D Computational Fluid Dynamics (CFD) methodology, and 3D Bezier-Bernstein surface parameterizations. The new procedure is demonstrated in the preliminary design of supersonic delta wings. Starting from a symmetric clipped delta wing geometry, a Mach 1.62 asymmetric delta wing and two Mach 1.5 cranked delta wings were designed subject to various aerodynamic and geometric constraints.
Landscape sensitivity in a dynamic environment
NASA Astrophysics Data System (ADS)
Lin, Jiun-Chuan; Jen, Chia-Horn
2010-05-01
Landscape sensitivity at different scales and for different topics is presented in this study. The paper is mainly methodological in approach. According to environmental records in south-eastern Asia, environmental change is closely related to five factors: the scale of the influenced area, the background environmental characteristics, the magnitude and frequency of events, the thresholds at which hazards occur, and the influence of time. This paper attempts to demonstrate the above five points using historical and present-day data. It is found that landscape sensitivity is highly related to the degree of vulnerability of the land and to the processes acting on the ground, including human activities. The scale of sensitivity and the evaluation of sensitivities are demonstrated with data from across East Asia. The classification methods are based mainly on the analysis of environmental data and hazard records. Trends in rainfall records, rainfall intensity and temperature change, together with the magnitude and frequency of earthquakes, dust storms, days of drought and the number of hazards, show considerable coincidence with landscape sensitivities. In conclusion, landscape sensitivity can be classified into four groups: physically stable, physically unstable, unstable, and extremely unstable. This paper explains these differences.
2009-03-01
making process (Skinner, 2001, 9). According to Clemen, before we can begin to apply any methodology to a specific decision problem, the analyst...it is possible to work with them to determine the values and objectives that relate to the decision in question (Clemen, 2001, 21). Clemen...value hierarchy is constructed, Clemen and Reilly suggest that a trade-off is made between varying objectives. They introduce weights to determine
NASA Astrophysics Data System (ADS)
Crevelin, Eduardo J.; Salami, Fernanda H.; Alves, Marcela N. R.; De Martinis, Bruno S.; Crotti, Antônio E. M.; Moraes, Luiz A. B.
2016-05-01
Amphetamine-type stimulants (ATS) are among illicit stimulant drugs that are most often used worldwide. A major challenge is to develop a fast and efficient methodology involving minimal sample preparation to analyze ATS in biological fluids. In this study, a urine pool solution containing amphetamine, methamphetamine, ephedrine, sibutramine, and fenfluramine at concentrations ranging from 0.5 pg/mL to 100 ng/mL was prepared and analyzed by atmospheric solids analysis probe tandem mass spectrometry (ASAP-MS/MS) and multiple reaction monitoring (MRM). A urine sample and saliva collected from a volunteer contributor (V1) were also analyzed. The limit of detection of the tested compounds ranged between 0.002 and 0.4 ng/mL in urine samples; the signal-to-noise ratio was 5. These results demonstrated that the ASAP-MS/MS methodology is applicable for the fast detection of ATS in urine samples with great sensitivity and specificity, without the need for cleanup, preconcentration, or chromatographic separation. Thus ASAP-MS/MS could potentially be used in clinical and forensic toxicology applications.
Ryu, Hyeuk; Luco, Nicolas; Baker, Jack W.; Karaca, Erdem
2008-01-01
A methodology was recently proposed for the development of hazard-compatible building fragility models using parameters of capacity curves and damage state thresholds from HAZUS (Karaca and Luco, 2008). In that methodology, HAZUS curvilinear capacity curves were used to define nonlinear dynamic SDOF models that were subjected to nonlinear time history analysis instead of the capacity spectrum method. In this study, we construct a multilinear capacity curve with negative stiffness after an ultimate (capping) point for the nonlinear time history analysis, as an alternative to the curvilinear model provided in HAZUS. As an illustration, we propose parameter values of the multilinear capacity curve for a moderate-code low-rise steel moment-resisting frame building (labeled S1L in HAZUS). To determine the final parameter values, we perform nonlinear time history analyses of SDOF systems with various parameter values and investigate their effects on the resulting fragility functions through sensitivity analysis. The findings improve capacity curves, and thereby fragility and/or vulnerability models, for generic types of structures.
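A minimal sketch of such a multilinear backbone, with an elastic branch, hardening to a capping point, a negative-stiffness branch and a residual plateau, is given below; the yield, cap and residual values are invented for illustration and are not the parameters proposed for the S1L building type.

```python
# Minimal sketch of a multilinear capacity (backbone) curve with an ultimate
# "capping" point followed by negative stiffness, as an alternative to a
# curvilinear capacity curve. All numerical parameters are illustrative.
import numpy as np

def multilinear_backbone(d, d_y=2.0, a_y=0.25, d_cap=8.0, a_cap=0.40,
                         d_res=20.0, a_res=0.10):
    """Spectral acceleration (g) vs. spectral displacement (in): elastic branch,
    hardening to the capping point, negative-stiffness branch, residual plateau."""
    d = np.asarray(d, dtype=float)
    elastic = a_y * d / d_y
    hardening = a_y + (a_cap - a_y) * (d - d_y) / (d_cap - d_y)
    degrading = a_cap + (a_res - a_cap) * (d - d_cap) / (d_res - d_cap)
    return np.where(d <= d_y, elastic,
           np.where(d <= d_cap, hardening,
           np.where(d <= d_res, degrading, a_res)))

disp = np.linspace(0.0, 25.0, 6)
print(np.round(multilinear_backbone(disp), 3))
```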
Jahn, Ingeborg; Börnhorst, Claudia; Günther, Frauke; Brand, Tilman
2017-02-15
During the last decades, sex and gender biases have been identified in various areas of biomedical and public health research, leading to compromised validity of research findings. As a response, methodological requirements were developed, but these are rarely translated into research practice. The aim of this study is to provide good practice examples of sex/gender sensitive health research. We conducted a systematic search of research articles published in JECH between 2006 and 2014. An instrument was constructed to evaluate sex/gender sensitivity in four stages of the research process (background, study design, statistical analysis, discussion). In total, 37 articles covering diverse topics were included. Of these, 22 were evaluated as good practice examples in at least one stage; two articles achieved the highest ratings across all stages. Good examples of the background referred to available knowledge on sex/gender differences and sex/gender informed theoretical frameworks. Related to the study design, good examples calculated sample sizes to be able to detect sex/gender differences, selected sex/gender sensitive outcome/exposure indicators, or chose different cut-off values for male and female participants. Good examples of statistical analyses used interaction terms with sex/gender or different shapes of the estimated relationship for men and women. Examples of good discussions interpreted their findings in relation to social and biological explanatory models or questioned the statistical methods used to detect sex/gender differences. The identified good practice examples may inspire researchers to critically reflect on the relevance of sex/gender issues in their studies and help them translate methodological recommendations of sex/gender sensitivity into research practice.
Space Station man-machine automation trade-off analysis
NASA Technical Reports Server (NTRS)
Zimmerman, W. F.; Bard, J.; Feinberg, A.
1985-01-01
The man-machine automation tradeoff methodology presented here is one of four research tasks comprising the Autonomous Spacecraft System Technology (ASST) project. ASST was established to identify and study system-level design problems for autonomous spacecraft. Using the Space Station as an example of a spacecraft system requiring a certain level of autonomous control, a system-level man-machine automation tradeoff methodology is presented that: (1) optimizes man-machine mixes for different ground and on-orbit crew functions subject to cost, safety, weight, power, and reliability constraints, and (2) plots the best incorporation plan for new, emerging technologies by weighing cost, relative availability, reliability, safety, importance to out-year missions, and ease of retrofit. Although the methodology takes a fairly straightforward approach to valuing human productivity, it remains sensitive to the important subtleties associated with designing a well-integrated man-machine system. These subtleties include considerations such as crew preference to retain certain spacecraft control functions, or valuing human integration/decision capabilities over equivalent hardware/software where appropriate.
An eco-balance of a recycling plant for spent lead-acid batteries.
Salomone, Roberta; Mondello, Fabio; Lanuzza, Francesco; Micali, Giuseppe
2005-02-01
This study applies Life Cycle Assessment (LCA) methodology to present an eco-balance of a recycling plant that treats spent lead-acid batteries. The recycling plant uses pyrometallurgical treatment to obtain lead from spent batteries. The application of LCA methodology (ISO 14040 series) enabled us to assess the potential environmental impacts arising from the recycling plant's operations. Thus, net emissions of greenhouse gases as well as other major environmental consequences were examined, and hot spots inside the recycling plant were identified. A sensitivity analysis was also performed on certain variables to evaluate their effect on the LCA study. The LCA presented here shows that this methodology allows all of the major environmental consequences associated with lead recycling via the pyrometallurgical process to be examined. The study highlights areas in which environmental improvements are easily achievable by a business, providing a basis for suggestions to minimize the environmental impact of its production phases and improve process and company performance in environmental terms.
Using Design-Based Research in Gifted Education
ERIC Educational Resources Information Center
Jen, Enyi; Moon, Sidney; Samarapungavan, Ala
2015-01-01
Design-based research (DBR) is a new methodological framework that was developed in the context of the learning sciences; however, it has not been used very often in the field of gifted education. Compared with other methodologies, DBR is more process-oriented and context-sensitive. In this methodological brief, the authors introduce DBR and…
A methodological investigation of hominoid craniodental morphology and phylogenetics.
Bjarnason, Alexander; Chamberlain, Andrew T; Lockwood, Charles A
2011-01-01
The evolutionary relationships of extant great apes and humans have been largely resolved by molecular studies, yet morphology-based phylogenetic analyses continue to provide conflicting results. In order to further investigate this discrepancy we present bootstrap clade support of morphological data based on two quantitative datasets, one dataset consisting of linear measurements of the whole skull from 5 hominoid genera and the second dataset consisting of 3D landmark data from the temporal bone of 5 hominoid genera, including 11 sub-species. Using similar protocols for both datasets, we were able to 1) compare distance-based phylogenetic methods to cladistic parsimony of quantitative data converted into discrete character states, 2) vary outgroup choice to observe its effect on phylogenetic inference, and 3) analyse male and female data separately to observe the effect of sexual dimorphism on phylogenies. Phylogenetic analysis was sensitive to methodological decisions, particularly outgroup selection, where designation of Pongo as an outgroup and removal of Hylobates resulted in greater congruence with the proposed molecular phylogeny. The performance of distance-based methods also justifies their use in phylogenetic analysis of morphological data. It is clear from our analyses that hominoid phylogenetics ought not to be used as an example of conflict between the morphological and molecular, but as an example of how outgroup and methodological choices can affect the outcome of phylogenetic analysis. Copyright © 2010 Elsevier Ltd. All rights reserved.
Batchu, Sudha Rani; Ramirez, Cesar E; Gardinali, Piero R
2015-05-01
Because of its widespread consumption and its persistence during wastewater treatment, the artificial sweetener sucralose has gained considerable interest as a proxy to detect wastewater intrusion into usable water resources. The molecular resilience of this compound dictates that coastal and oceanic waters are the final recipient of this compound with unknown effects on ecosystems. Furthermore, no suitable methodologies have been reported for routine, ultra-trace detection of sucralose in seawater as the sensitivity of traditional liquid chromatography-tandem mass spectrometry (LC-MS/MS) analysis is limited by a low yield of product ions upon collision-induced dissociation (CID). In this work, we report the development and field test of an alternative analysis tool for sucralose in environmental waters, with enough sensitivity for the proper quantitation and confirmation of this analyte in seawater. The methodology is based on automated online solid-phase extraction (SPE) and high-resolving-power orbitrap MS detection. Operating in full scan (no CID), detection of the unique isotopic pattern (100:96:31 for [M-H](-), [M-H+2](-), and [M-H+4](-), respectively) was used for ultra-trace quantitation and analyte identification. The method offers fast analysis (14 min per run) and low sample consumption (10 mL per sample) with method detection and confirmation limits (MDLs and MCLs) of 1.4 and 5.7 ng/L in seawater, respectively. The methodology involves low operating costs due to virtually no sample preparation steps or consumables. As an application example, samples were collected from 17 oceanic and estuarine sites in Broward County, FL, with varying salinity (6-40 PSU). Samples included the ocean outfall of the Southern Regional Wastewater Treatment Plant (WWTP) that serves Hollywood, FL. Sucralose was detected above MCL in 78% of the samples at concentrations ranging from 8 to 148 ng/L, with the exception of the WWTP ocean outfall (at pipe end, 28 m below the surface) where the measured concentration was 8418 ± 3813 ng/L. These results demonstrate the applicability of this monitoring tool for the trace-level detection of this wastewater marker in very dilute environmental waters.
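A minimal sketch of the isotope-pattern confirmation step is shown below: the measured intensities of the [M-H]-, [M-H+2]- and [M-H+4]- peaks are normalized to the base peak and compared against the expected 100:96:31 ratio within a tolerance. The example peak intensities and the 15% tolerance are illustrative assumptions, not method parameters.

```python
# Minimal sketch of confirming sucralose by its chlorine isotope pattern in
# full-scan data: compare measured intensities of [M-H]-, [M-H+2]- and
# [M-H+4]- against the expected 100:96:31 ratio within a relative tolerance.
EXPECTED = (100.0, 96.0, 31.0)   # relative abundances for the Cl3 cluster

def matches_pattern(intensities, expected=EXPECTED, rel_tol=0.15):
    base = intensities[0]
    if base <= 0:
        return False
    measured = [100.0 * i / base for i in intensities]
    return all(abs(m - e) <= rel_tol * e for m, e in zip(measured, expected))

# Hypothetical extracted peak intensities for the three isotopologue peaks.
peaks = (1.8e6, 1.7e6, 5.4e5)
print("isotope pattern consistent with sucralose:", matches_pattern(peaks))
```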
Corso, Phaedra S.; Ingels, Justin B.; Kogan, Steven M.; Foster, E. Michael; Chen, Yi-Fu; Brody, Gene H.
2013-01-01
Programmatic cost analyses of preventive interventions commonly have a number of methodological difficulties. To determine the mean total costs and properly characterize variability, one often has to deal with small sample sizes, skewed distributions, and especially missing data. Standard approaches for dealing with missing data such as multiple imputation may suffer from a small sample size, a lack of appropriate covariates, or too few details around the method used to handle the missing data. In this study, we estimate total programmatic costs for a prevention trial evaluating the Strong African American Families-Teen program. This intervention focuses on the prevention of substance abuse and risky sexual behavior. To account for missing data in the assessment of programmatic costs we compare multiple imputation to probabilistic sensitivity analysis. The latter approach uses collected cost data to create a distribution around each input parameter. We found that with the multiple imputation approach, the mean (95% confidence interval) incremental difference was $2149 ($397, $3901). With the probabilistic sensitivity analysis approach, the incremental difference was $2583 ($778, $4346). Although the true cost of the program is unknown, probabilistic sensitivity analysis may be a more viable alternative for capturing variability in estimates of programmatic costs when dealing with missing data, particularly with small sample sizes and the lack of strong predictor variables. Further, the larger standard errors produced by the probabilistic sensitivity analysis method may signal its ability to capture more of the variability in the data, thus better informing policymakers on the potentially true cost of the intervention. PMID:23299559
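The probabilistic sensitivity analysis idea sketched below assigns a distribution to each cost input, draws from all of them simultaneously, and summarizes the incremental cost difference with a mean and a 95% interval; the cost components, distribution choices and values are invented placeholders, not the trial data.

```python
# Minimal sketch of probabilistic sensitivity analysis for programmatic costs:
# each input gets its own distribution, all are sampled jointly, and the
# incremental difference is summarized with a mean and 95% interval.
import numpy as np

rng = np.random.default_rng(2013)
n = 20_000

def total_cost(staff_mu, staff_sd, materials_lo, materials_hi, travel_mean):
    staff = rng.normal(staff_mu, staff_sd, n)            # e.g., facilitator time
    materials = rng.uniform(materials_lo, materials_hi, n)
    travel = rng.gamma(2.0, travel_mean / 2.0, n)
    return staff + materials + travel

intervention = total_cost(4_000, 600, 800, 1_200, 500)
control = total_cost(2_200, 400, 300, 500, 250)
incremental = intervention - control

mean = incremental.mean()
lo, hi = np.percentile(incremental, [2.5, 97.5])
print(f"incremental cost ~ ${mean:,.0f} (95% interval ${lo:,.0f} to ${hi:,.0f})")
```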
Donovan, Wilberta; Leavitt, Lewis; Taylor, Nicole
2005-09-01
The impact of differences in maternal self-efficacy and infant difficulty on mothers' sensitivity to small changes in the fundamental frequency of an audiotaped infant's cry was explored in 2 experiments. The experiments share in common experimental manipulations of infant difficulty, a laboratory derived measure of maternal efficacy (low, moderate, and high illusory control), and the use of signal detection methodology to measure maternal sensory sensitivity. In Experiment 1 (N = 72), easy and difficult infant temperament was manipulated by varying the amount of crying (i.e., frequency of cry termination) in a simulated child-care task. In Experiment 2 (N = 51), easy and difficult infant temperament was manipulated via exposure to the solvable or unsolvable pretreatment of a learned helplessness task to mirror mothers' ability to soothe a crying infant. In both experiments, only mothers with high illusory control showed reduced sensory sensitivity under the difficult infant condition compared with the easy infant condition. Copyright 2005 APA, all rights reserved.
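For readers unfamiliar with signal detection methodology, the sketch below computes sensory sensitivity (d') and response bias from hit and false-alarm counts; the counts and the log-linear correction for extreme rates are illustrative assumptions, not details taken from the study above.

```python
# Signal detection sketch: d' and criterion c from a 2x2 table of responses.
from scipy.stats import norm

def dprime(hits, misses, false_alarms, correct_rejections):
    # Log-linear correction avoids infinite z-scores when a rate is 0 or 1.
    hr = (hits + 0.5) / (hits + misses + 1.0)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z_hr, z_far = norm.ppf(hr), norm.ppf(far)
    d = z_hr - z_far              # sensory sensitivity
    c = -0.5 * (z_hr + z_far)     # response bias (criterion)
    return d, c

d, c = dprime(hits=42, misses=8, false_alarms=12, correct_rejections=38)
print(f"d' = {d:.2f}, criterion c = {c:.2f}")
```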
NASA Astrophysics Data System (ADS)
Demaria, Eleonora M.; Nijssen, Bart; Wagener, Thorsten
2007-06-01
Current land surface models use increasingly complex descriptions of the processes that they represent. Increase in complexity is accompanied by an increase in the number of model parameters, many of which cannot be measured directly at large spatial scales. A Monte Carlo framework was used to evaluate the sensitivity and identifiability of ten parameters controlling surface and subsurface runoff generation in the Variable Infiltration Capacity model (VIC). Using the Monte Carlo Analysis Toolbox (MCAT), parameter sensitivities were studied for four U.S. watersheds along a hydroclimatic gradient, based on a 20-year data set developed for the Model Parameter Estimation Experiment (MOPEX). Results showed that simulated streamflows are sensitive to three parameters when evaluated with different objective functions. Sensitivity of the infiltration parameter (b) and the drainage parameter (exp) were strongly related to the hydroclimatic gradient. The placement of vegetation roots played an important role in the sensitivity of model simulations to the thickness of the second soil layer (thick2). Overparameterization was found in the base flow formulation indicating that a simplified version could be implemented. Parameter sensitivity was more strongly dictated by climatic gradients than by changes in soil properties. Results showed how a complex model can be reduced to a more parsimonious form, leading to a more identifiable model with an increased chance of successful regionalization to ungauged basins. Although parameter sensitivities are strictly valid for VIC, this model is representative of a wider class of macroscale hydrological models. Consequently, the results and methodology will have applicability to other hydrological models.
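The Monte Carlo sensitivity idea can be illustrated with a toy sampler of the kind below. The parameter names echo the abstract, but the ranges, the stand-in model, and the rank-correlation index are assumptions for illustration only; the MCAT/VIC analysis itself evaluates the full hydrological model against observed streamflow with several objective functions.

```python
# Toy Monte Carlo sensitivity analysis: sample parameters uniformly, evaluate a
# stand-in response, and rank parameters by rank correlation with that response.
import numpy as np

rng = np.random.default_rng(0)
param_ranges = {"b_infilt": (0.001, 0.5), "exp_drain": (1.0, 30.0), "thick2": (0.1, 2.0)}

def run_model(params):
    # Placeholder standing in for a streamflow error metric from the real model.
    return (2.0 * params["b_infilt"] + 0.3 * np.log(params["exp_drain"])
            + 0.1 * params["thick2"] + rng.normal(0, 0.05))

n_samples = 2000
samples = {k: rng.uniform(lo, hi, n_samples) for k, (lo, hi) in param_ranges.items()}
objective = np.array([run_model({k: v[i] for k, v in samples.items()})
                      for i in range(n_samples)])

def ranks(a):
    return np.argsort(np.argsort(a))

for name, values in samples.items():
    rho = np.corrcoef(ranks(values), ranks(objective))[0, 1]
    print(f"{name:10s} rank correlation with objective: {rho:+.2f}")
```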
Smith, Toby O; Drew, Benjamin T; Toms, Andoni P
2012-07-01
Magnetic resonance imaging (MRI) and magnetic resonance arthrography (MRA) have gained increasing favour in the assessment of patients with suspected glenoid labral injuries. The purpose of this study was to determine the diagnostic accuracy of MRI or MRA in the detection of glenoid labral lesions. A systematic review was undertaken of the electronic databases Cochrane Central Register of Controlled Trials, MEDLINE, EMBASE, AMED and CINAHL, in addition to a search of unpublished literature databases. All studies which compared the ability of MRI or MRA (index test) to assess glenoid labral tears or lesions, when verified with a surgical procedure (arthroscopy or open surgery; reference test), were included. Data extraction and methodological appraisal using the QUADAS tool were both conducted by two reviewers independently. Data were analysed through a summary receiver operating characteristic curve, and pooled sensitivity and specificity were calculated with 95% confidence intervals. Sixty studies including 4,667 shoulders from 4,574 patients were reviewed. There appeared to be slightly greater diagnostic test accuracy for MRA over MRI for the detection of overall glenoid labral lesions (MRA: sensitivity 88%, specificity 93% vs. MRI: sensitivity 76%, specificity 87%). Methodologically, studies recruited and identified their samples appropriately and clearly defined the radiological procedures. In general, it was not clearly defined why patients were lost during the study, and studies were poor at recording whether the same clinical data were available to the radiologist interpreting the MRI or MRA as would be available in clinical practice. Most studies did not state whether the surgeon interpreting the arthroscopic procedure was blinded to the results of the MR or MRA imaging. Based on the available literature, overall MRA appeared marginally superior to MRI for the detection of glenohumeral labral lesions. Level 2a.
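A simple illustration of pooling sensitivity and specificity across studies is given below, assuming invented 2x2 counts and Wilson 95% intervals; the review itself pooled far more studies, and a bivariate random-effects or HSROC model would normally be preferred in practice.

```python
# Naive pooling of diagnostic accuracy across studies (counts are invented).
import math

def wilson_ci(successes, n, z=1.96):
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Per study: (true positives, false negatives, true negatives, false positives)
studies = [(45, 5, 38, 4), (30, 8, 25, 3), (52, 10, 40, 6)]
tp = sum(s[0] for s in studies); fn = sum(s[1] for s in studies)
tn = sum(s[2] for s in studies); fp = sum(s[3] for s in studies)

sens, spec = tp / (tp + fn), tn / (tn + fp)
s_lo, s_hi = wilson_ci(tp, tp + fn)
p_lo, p_hi = wilson_ci(tn, tn + fp)
print(f"Pooled sensitivity {sens:.1%} (95% CI {s_lo:.1%}-{s_hi:.1%})")
print(f"Pooled specificity {spec:.1%} (95% CI {p_lo:.1%}-{p_hi:.1%})")
```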
Abderrahim, Mohamed; Arribas, Silvia M; Condezo-Hoyos, Luis
2017-05-01
Pyrogallol red (PGR) was identified as a novel optical probe for the detection of hydrogen peroxide (H2O2) based on horseradish peroxidase (HRP)-catalyzed oxidation. Response surface methodology (RSM) was applied as a tool to optimize the concentrations of PGR (100 µmol/L), HRP (1 U/mL) and H2O2 (250 µmol/L) and used to develop a sensitive PGR-based catalase (CAT) activity assay (PGR-CAT assay). N-ethylmaleimide (NEM, 102 mmol/L) was used to avoid interference produced by thiol groups while protecting CAT activity. The incubation time (30 min) for samples or for CAT used as standard and H2O2, as well as signal stability (stable between 5 and 60 min), were also evaluated. The PGR-CAT assay was linear within the range of 0-4 U/mL (R2 = 0.993) and very sensitive, with a limit of detection (LOD) of 0.005 U/mL and a limit of quantitation (LOQ) of 0.01 U/mL. The PGR-CAT assay showed adequate intra-day RSD of 0.6-9.5% and inter-day RSD of 2.4-8.9%. Bland-Altman analysis and Passing-Bablok and Pearson correlation analyses showed good agreement between CAT activity as measured by the PGR-CAT assay and the Amplex Red assay. The PGR-CAT assay is more sensitive than all the other colorimetric assays reported, particularly the Amplex Red assay, and the cost of PGR is a small fraction (about 1/1000) of that of an Amplex Red probe, so it can be expected to find wide use among scientists studying CAT activity in biological samples. Copyright © 2017 Elsevier B.V. All rights reserved.
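As an illustration of how detection and quantitation limits can be derived from a linear calibration, the sketch below applies the common 3.3·σ/slope and 10·σ/slope convention to invented calibration points; the paper may have estimated its LOD/LOQ differently (e.g., from blank replicates).

```python
# LOD/LOQ sketch from a linear calibration curve (calibration data invented).
import numpy as np

cat_activity = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0])        # U/mL
signal       = np.array([0.02, 0.11, 0.21, 0.40, 0.61, 0.79])  # response (a.u.)

slope, intercept = np.polyfit(cat_activity, signal, 1)
residuals = signal - (slope * cat_activity + intercept)
sigma = residuals.std(ddof=2)        # SD of residuals about the fit

lod = 3.3 * sigma / slope
loq = 10.0 * sigma / slope
r2 = 1 - residuals.var() / signal.var()
print(f"slope={slope:.3f}, R2={r2:.3f}, LOD={lod:.3f} U/mL, LOQ={loq:.3f} U/mL")
```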
Targeted Quantification of Isoforms of a Thylakoid-Bound Protein: MRM Method Development.
Bru-Martínez, Roque; Martínez-Márquez, Ascensión; Morante-Carriel, Jaime; Sellés-Marchart, Susana; Martínez-Esteso, María José; Pineda-Lucas, José Luis; Luque, Ignacio
2018-01-01
Targeted mass spectrometric methods such as selected/multiple reaction monitoring (SRM/MRM) have found intense application in protein detection and quantification which competes with classical immunoaffinity techniques. It provides a universal procedure to develop a fast, highly specific, sensitive, accurate, and cheap methodology for targeted detection and quantification of proteins based on the direct analysis of their surrogate peptides typically generated by tryptic digestion. This methodology can be advantageously applied in the field of plant proteomics and particularly for non-model species since immunoreagents are scarcely available. Here, we describe the issues to take into consideration in order to develop a MRM method to detect and quantify isoforms of the thylakoid-bound protein polyphenol oxidase from the non-model and database underrepresented species Eriobotrya japonica Lindl.
Probabilistic simulation of stress concentration in composite laminates
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Murthy, P. L. N.; Liaw, L.
1993-01-01
A computational methodology is described to probabilistically simulate the stress concentration factors in composite laminates. This new approach consists of coupling probabilistic composite mechanics with probabilistic finite element structural analysis. The probabilistic composite mechanics is used to probabilistically describe all the uncertainties inherent in composite material properties, while the probabilistic finite element analysis is used to probabilistically describe the uncertainties associated with methods to experimentally evaluate stress concentration factors, such as loads, geometry, and supports. The effectiveness of the methodology is demonstrated by using it to simulate the stress concentration factors in composite laminates made from three different composite systems. Simulated results match experimental data for probability density and for cumulative distribution functions. The sensitivity factors indicate that the stress concentration factors are influenced by local stiffness variables, by load eccentricities and by initial stress fields.
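The probabilistic simulation idea reduces to Monte Carlo propagation of uncertain inputs through a response function, as sketched below. The closed-form stress concentration factor for a finite-width plate with a circular hole is only a stand-in for the probabilistic composite mechanics / finite element chain described in the abstract, and the input distributions are hypothetical.

```python
# Monte Carlo propagation of uncertain geometry into a stress concentration factor.
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

# Hypothetical uncertain inputs: hole diameter d and plate width w (meters).
d = rng.normal(6.0e-3, 0.2e-3, n)
w = rng.normal(36.0e-3, 0.5e-3, n)

def kt_circular_hole(d, w):
    # Heywood-type finite-width approximation for a plate with a central hole.
    r = d / w
    return 2.0 + (1.0 - r) ** 3

kt = kt_circular_hole(d, w)
print(f"Kt mean={kt.mean():.3f}, std={kt.std():.3f}")
print("Kt percentiles 5/50/95:", np.percentile(kt, [5, 50, 95]).round(3))
```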
Quantifying Drosophila food intake: comparative analysis of current methodology
Deshpande, Sonali A.; Carvalho, Gil B.; Amador, Ariadna; Phillips, Angela M.; Hoxha, Sany; Lizotte, Keith J.; Ja, William W.
2014-01-01
Food intake is a fundamental parameter in animal studies. Despite the prevalent use of Drosophila in laboratory research, precise measurements of food intake remain challenging in this model organism. Here, we compare several common Drosophila feeding assays: the Capillary Feeder (CAFE), food-labeling with a radioactive tracer or a colorimetric dye, and observations of proboscis extension (PE). We show that the CAFE and radioisotope-labeling provide the most consistent results, have the highest sensitivity, and can resolve differences in feeding that dye-labeling and PE fail to distinguish. We conclude that performing the radiolabeling and CAFE assays in parallel is currently the best approach for quantifying Drosophila food intake. Understanding the strengths and limitations of food intake methodology will greatly advance Drosophila studies of nutrition, behavior, and disease. PMID:24681694
1984-12-01
total sum of squares at the center points minus the correction factor for the mean at the center points (SSpe = Y'Y - n1*Ybar^2), where n1 is the number of... (SSlack = SSres - SSpe). The sum of squares due to pure error estimates σ^2, and the sum of squares due to lack of fit estimates σ^2 plus a bias term if... ANOVA for Response Surface Methodology (Source, d.f., SS, MS): Regression, p, b'X'Y, b'X'Y/p; Residual, n - p, Y'Y - b'X'Y, (Y'Y - b'X'Y)/(n - p); Pure Error, n1 - 1, Y'Y - n1*Ybar^2, SSpe/(n1 - 1).
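For context, the snippet above appears to describe the standard lack-of-fit decomposition used with replicated centre points. A conventional statement of it is given below, with n runs, p model terms and n1 centre-point replicates; this notation is chosen here for illustration and may differ slightly from the report's own symbols.

```latex
% Lack-of-fit decomposition with replicated centre points (illustrative
% notation: Y_c and \bar{Y}_c are the centre-point responses and their mean).
\begin{align*}
  SS_{\mathrm{res}}  &= \mathbf{Y}'\mathbf{Y} - \mathbf{b}'\mathbf{X}'\mathbf{Y},\\
  SS_{\mathrm{pe}}   &= \mathbf{Y}_c'\mathbf{Y}_c - n_1\,\bar{Y}_c^{\,2},\qquad
  SS_{\mathrm{lack}}  = SS_{\mathrm{res}} - SS_{\mathrm{pe}},\\
  F &= \frac{SS_{\mathrm{lack}}/\bigl[(n-p)-(n_1-1)\bigr]}{SS_{\mathrm{pe}}/(n_1-1)}.
\end{align*}
```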
NASA Astrophysics Data System (ADS)
Liberal, Iñigo; Engheta, Nader
2018-02-01
Quantum emitters interacting through a waveguide setup have been proposed as a promising platform for basic research on light-matter interactions and quantum information processing. We propose to augment waveguide setups with the use of multiport devices. Specifically, we demonstrate theoretically the possibility of exciting N-qubit subradiant, maximally entangled states with the use of suitably designed N-port devices. Our general methodology is then applied to two different devices: an epsilon-and-mu-near-zero waveguide hub and a nonreciprocal circulator. A sensitivity analysis is carried out to assess the robustness of the system against a number of nonidealities. These findings link and merge the designs of devices for quantum state engineering with classical communication network methodologies.
Banzato, Tommaso; Fiore, Enrico; Morgante, Massimo; Manuali, Elisabetta; Zotti, Alessandro
2016-10-01
Hepatic lipidosis is the most widespread hepatic disease in the lactating cow. A new methodology to estimate the degree of fatty infiltration of the liver in lactating cows by means of texture analysis of B-mode ultrasound images is proposed. B-mode ultrasonography of the liver was performed in 48 Holstein Friesian cows using standardized ultrasound parameters. Liver biopsies to determine the triacylglycerol content of the liver (TAGqa) were obtained from each animal. A large number of texture parameters were calculated on the ultrasound images by means of free software. Based on the TAGqa content of the liver, 29 samples were classified as mild (TAGqa < 50 mg/g), 6 as moderate (50 mg/g
Rombach, Ines; Rivero-Arias, Oliver; Gray, Alastair M; Jenkinson, Crispin; Burke, Órlaith
2016-07-01
Patient-reported outcome measures (PROMs) are designed to assess patients' perceived health states or health-related quality of life. However, PROMs are susceptible to missing data, which can affect the validity of conclusions from randomised controlled trials (RCTs). This review aims to assess current practice in the handling, analysis and reporting of missing PROMs outcome data in RCTs compared to contemporary methodology and guidance. This structured review of the literature includes RCTs with a minimum of 50 participants per arm. Studies using the EQ-5D-3L, EORTC QLQ-C30, SF-12 and SF-36 were included if published in 2013; those using the less commonly implemented HUI, OHS, OKS and PDQ were included if published between 2009 and 2013. The review included 237 records (4-76 per relevant PROM). Complete case analysis and single imputation were commonly used, in 33% and 15% of publications, respectively. Multiple imputation was reported for 9% of the PROMs reviewed. The majority of publications (93%) failed to describe the assumed missing data mechanism, while low numbers of papers reported methods to minimise missing data (23%), performed sensitivity analyses (22%) or discussed the potential influence of missing data on results (16%). Considerable discrepancy exists between approved methodology and current practice in handling, analysis and reporting of missing PROMs outcome data in RCTs. Greater awareness is needed for the potential biases introduced by inappropriate handling of missing data, as well as the importance of sensitivity analysis and clear reporting to enable appropriate assessments of treatment effects and conclusions from RCTs.
Methodological and ethical issues related to qualitative telephone interviews on sensitive topics.
Mealer, Meredith; Jones Rn, Jacqueline
2014-03-01
To explore the methodological and ethical issues of conducting qualitative telephone interviews about personal or professional trauma with critical care nurses. The most common method for conducting interviews is face-to-face. However, there is evidence to support telephone interviewing on a variety of sensitive topics including post-traumatic stress disorder (PTSD). Qualitative telephone interviews can limit emotional distress because of the comfort experienced through virtual communication. Critical care nurses are at increased risk of developing PTSD due to the cumulative exposure to work-related stress in the intensive care unit. We explored the methodological and ethical issues of conducting qualitative telephone interviews, drawing on our experiences communicating with a group of critical care nurses. Qualitative research interviews with 27 critical care nurses. Fourteen of the nurses met the diagnostic criteria for PTSD; 13 did not and had scores consistent with high levels of resilience. This is a methodology paper on the authors' experiences of interviewing critical care nurses on sensitive topics via the telephone. The authors found that establishing rapport and connections with the participants and the therapeutic use of non-verbal communication were essential, and fostered trust and compassion. The ethical issues of this mode of communication include protecting the privacy and confidentiality associated with the disclosure of sensitive information, and minimising the risk of psychological harm to the researcher and participants. Qualitative telephone interviews are a valuable method of collecting information on sensitive topics. This paper explores a method of interviewing in the workplace. It will help inform interventions to promote healthy adaptation following trauma exposure in the intensive care unit.
ERIC Educational Resources Information Center
Barden, Sejal M.; Shannonhouse, Laura; Mobley, Keith
2015-01-01
Scholars (e.g., Bemak & Chung, 2004) underscore the need for group workers to be culturally sensitive. One group training strategy, cultural immersion, is often employed to develop cultural sensitivity. However, no studies have utilized quasi-experimental methodologies to assess differences in cultural sensitivity between trainees that immerse…
NASA Astrophysics Data System (ADS)
Pathak, Maharshi
City administrators and real-estate developers have been setting rather aggressive energy efficiency targets. This, in turn, has led building science research groups across the globe to focus on urban-scale building performance studies and the level of abstraction associated with such simulations. The increasing maturity of stakeholders towards energy efficiency and creating comfortable working environments has led researchers to develop methodologies and tools for addressing policy-driven interventions, whether for urban-level energy systems, buildings' operational optimization or retrofit guidelines. Typically, these large-scale simulations are carried out by grouping buildings based on their design similarities, i.e. standardization of the buildings. Such an approach does not necessarily produce working inputs that make decision-making effective. To address this, a novel approach is proposed in the present study. The principal objective of this study is to propose, define and evaluate a methodology for using machine learning algorithms to define representative building archetypes for Stock-level Building Energy Modeling (SBEM) based on an operational parameter database. The study uses the Phoenix-climate CBECS-2012 survey microdata for analysis and validation. Using the database, parameter correlations are studied to understand the relation between input parameters and energy performance. Contrary to precedent, the study establishes that energy performance is better explained by non-linear models. The non-linear behavior is captured by advanced learning algorithms. Based on these algorithms, the buildings under study are grouped into meaningful clusters. The cluster "medoids" (the building that can best be taken to represent the center of each cluster) are established statistically to identify the level of abstraction that is acceptable for whole-building energy simulations and, subsequently, for retrofit decision-making. Further, the methodology is validated by conducting Monte Carlo simulations on 13 key input simulation parameters. The sensitivity analysis of these 13 parameters is used to identify the optimum retrofits. From the sample analysis, the envelope parameters are found to be the most sensitive with respect to the EUI of the building, and thus retrofit packages should also be directed to maximize the energy usage reduction.
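A minimal sketch of the clustering step is shown below, using a simple k-medoids (PAM-style) iteration so that each archetype is an actual building rather than an averaged centroid. The feature matrix is random stand-in data and the cluster count is arbitrary; neither is taken from the CBECS-2012 analysis.

```python
# Tiny k-medoids clustering for building archetypes (stand-in data).
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 5))          # 200 buildings x 5 normalized features

def k_medoids(X, k=4, n_iter=50):
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(dist[:, medoids], axis=1)
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.where(labels == j)[0]
            if len(members) == 0:
                continue
            # New medoid: the member minimizing total distance to the others.
            within = dist[np.ix_(members, members)].sum(axis=1)
            new_medoids[j] = members[np.argmin(within)]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, np.argmin(dist[:, medoids], axis=1)

medoids, labels = k_medoids(X, k=4)
print("Archetype (medoid) building indices:", medoids)
print("Cluster sizes:", np.bincount(labels, minlength=4))
```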
Hanna, George B.
2018-01-01
Proton transfer reaction time-of-flight mass spectrometry (PTR-ToF-MS) is a direct injection MS technique allowing for the sensitive and real-time detection, identification, and quantification of volatile organic compounds. When aiming to employ PTR-ToF-MS for targeted volatile organic compound analysis, some methodological questions must be addressed, such as the need to correctly identify product ions, or evaluating the quantitation accuracy. This work proposes a workflow for PTR-ToF-MS method development, addressing the main issues affecting the reliable identification and quantification of target compounds. We determined the fragmentation patterns of 13 selected compounds (aldehydes, fatty acids, phenols). Experiments were conducted under breath-relevant conditions (100% humid air) and within an extended range of reduced electric field values (E/N = 48–144 Td), obtained by changing the drift tube voltage. Reactivity was inspected using H3O+, NO+, and O2+ as primary ions. The results show that a relatively low E/N (<90 Td) often reduces fragmentation, enhancing sensitivity and identification capabilities, particularly in the case of aldehydes using NO+, where a 4-fold increase in sensitivity is obtained by reducing the drift voltage. We developed a novel calibration methodology relying on diffusion tubes used as gravimetric standards. For each of the tested compounds, it was possible to define suitable conditions whereby the experimental error, defined as the difference between gravimetric measurements and calculated concentrations, was 8% or lower. PMID:29336521
NASA Astrophysics Data System (ADS)
Rana, Sachin; Ertekin, Turgay; King, Gregory R.
2018-05-01
Reservoir history matching is frequently viewed as an optimization problem which involves minimizing the misfit between simulated and observed data. Many gradient- and evolutionary-strategy-based optimization algorithms have been proposed to solve this problem, which typically require a large number of numerical simulations to find feasible solutions. Therefore, a new methodology referred to as GP-VARS is proposed in this study, which uses forward and inverse Gaussian process (GP) based proxy models combined with a novel application of variogram analysis of response surface (VARS) based sensitivity analysis to efficiently solve high dimensional history matching problems. An empirical Bayes approach is proposed to optimally train GP proxy models for any given data. The history matching solutions are found via Bayesian optimization (BO) on forward GP models and via predictions of the inverse GP model in an iterative manner. An uncertainty quantification method using MCMC sampling in conjunction with the GP model is also presented to obtain a probabilistic estimate of reservoir properties and estimated ultimate recovery (EUR). An application of the proposed GP-VARS methodology to the PUNQ-S3 reservoir is presented, in which it is shown that GP-VARS provides history match solutions with approximately four times fewer numerical simulations than the differential evolution (DE) algorithm. Furthermore, a comparison of uncertainty quantification results obtained by GP-VARS, EnKF and other previously published methods shows that the P50 estimate of oil EUR obtained by GP-VARS is in close agreement with the true values for the PUNQ-S3 reservoir.
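The proxy-model component can be illustrated with a toy Gaussian-process surrogate and an expected-improvement acquisition, as below. The one-dimensional misfit function, kernel, and sampling choices are assumptions for illustration; the sketch omits the inverse-GP and VARS sensitivity components of GP-VARS.

```python
# GP surrogate + expected improvement on a toy 1-D history-matching misfit.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def misfit(x):                      # toy objective to minimize
    return (x - 0.6) ** 2 + 0.05 * np.sin(12 * x)

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, 6).reshape(-1, 1)   # a handful of "simulator" runs
y = misfit(X).ravel()

gp = GaussianProcessRegressor(ConstantKernel(1.0) * RBF(0.2), normalize_y=True)
gp.fit(X, y)

grid = np.linspace(0, 1, 500).reshape(-1, 1)
mu, sd = gp.predict(grid, return_std=True)
best = y.min()
imp = best - mu                     # improvement over incumbent (minimization)
z = imp / np.maximum(sd, 1e-12)
ei = imp * norm.cdf(z) + sd * norm.pdf(z)
x_next = grid[np.argmax(ei), 0]
print(f"Next simulation suggested at x = {x_next:.3f}")
```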
Schuff, M M; Gore, J P; Nauman, E A
2013-12-01
The treatment of cancerous tumors is dependent upon the delivery of therapeutics through the blood by means of the microcirculation. Differences in the vasculature of normal and malignant tissues have been recognized, but it is not fully understood how these differences affect transport and the applicability of existing mathematical models has been questioned at the microscale due to the complex rheology of blood and fluid exchange with the tissue. In addition to determining an appropriate set of governing equations it is necessary to specify appropriate model parameters based on physiological data. To this end, a two stage sensitivity analysis is described which makes it possible to determine the set of parameters most important to the model's calibration. In the first stage, the fluid flow equations are examined and a sensitivity analysis is used to evaluate the importance of 11 different model parameters. Of these, only four substantially influence the intravascular axial flow providing a tractable set that could be calibrated using red blood cell velocity data from the literature. The second stage also utilizes a sensitivity analysis to evaluate the importance of 14 model parameters on extravascular flux. Of these, six exhibit high sensitivity and are integrated into the model calibration using a response surface methodology and experimental intra- and extravascular accumulation data from the literature (Dreher et al. in J Natl Cancer Inst 98(5):335-344, 2006). The model exhibits good agreement with the experimental results for both the mean extravascular concentration and the penetration depth as a function of time for inert dextran over a wide range of molecular weights.
Use of SCALE Continuous-Energy Monte Carlo Tools for Eigenvalue Sensitivity Coefficient Calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perfetti, Christopher M; Rearden, Bradley T
2013-01-01
The TSUNAMI code within the SCALE code system makes use of eigenvalue sensitivity coefficients for an extensive number of criticality safety applications, such as quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different critical systems, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved fidelity and the desire to extend TSUNAMI analysis to advanced applications has motivated the development of a methodology for calculating sensitivity coefficients in continuous-energy (CE) Monte Carlo applications. The CLUTCH and Iterated Fission Probability (IFP) eigenvalue sensitivity methods were recently implemented in the CE KENO framework to generate the capability for TSUNAMI-3D to perform eigenvalue sensitivity calculations in continuous-energy applications. This work explores the improvements in accuracy that can be gained in eigenvalue and eigenvalue sensitivity calculations through the use of the SCALE CE KENO and CE TSUNAMI continuous-energy Monte Carlo tools as compared to multigroup tools. The CE KENO and CE TSUNAMI tools were used to analyze two difficult models of critical benchmarks, and produced eigenvalue and eigenvalue sensitivity coefficient results that showed a marked improvement in accuracy. The CLUTCH sensitivity method in particular excelled in terms of efficiency and computational memory requirements.
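For readers outside the criticality-safety community, the quantity being computed is the relative eigenvalue sensitivity coefficient, which also feeds the usual first-order uncertainty ("sandwich") propagation; the generic notation below is illustrative rather than taken from the SCALE documentation.

```latex
% Relative eigenvalue sensitivity for a data parameter \Sigma, and first-order
% propagation of the relative nuclear-data covariance matrix C to \Delta k/k:
\[
  S_{k,\Sigma} \;=\; \frac{\partial k / k}{\partial \Sigma / \Sigma}
               \;=\; \frac{\Sigma}{k}\,\frac{\partial k}{\partial \Sigma},
  \qquad
  \left(\frac{\Delta k}{k}\right)^{2} \;\approx\; \mathbf{S}^{\mathsf{T}}\,\mathbf{C}\,\mathbf{S}.
\]
```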
ERIC Educational Resources Information Center
Diamond, Michael Jay; Shapiro, Jerrold Lee
This paper proposes a model for the long-term scientific study of encounter, T-, and sensitivity groups. The authors see the need for overcoming major methodological and design inadequacies of such research. They discuss major methodological flaws in group outcome research as including: (1) lack of adequate base rate or pretraining measures; (2)…
Jongeneel, W P; Delmaar, J E; Bokkers, B G H
2018-06-08
A methodology to assess the health impact of skin sensitizers is introduced, which consists of the comparison of the probabilistic aggregated exposure with a probabilistic (individual) human sensitization or elicitation induction dose. The health impact of potential policy measures aimed at reducing the concentration of a fragrance allergen, geraniol, in consumer products is analysed in a simulated population derived from multiple product use surveys. Our analysis shows that current dermal exposure to geraniol from personal care and household cleaning products leads to new cases of contact allergy and induces clinical symptoms for those already sensitized. We estimate that this exposure results yearly in 34 new cases of geraniol contact allergy per million consumers in Western and Northern Europe, mainly due to exposure to household cleaning products. About twice as many consumers (60 per million) are projected to suffer from clinical symptoms due to re-exposure to geraniol. Policy measures restricting geraniol concentrations to <0.01% will noticeably reduce new cases of sensitization and decrease the number of people with clinical symptoms as well as the frequency of occurrence of these clinical symptoms. The estimated numbers should be interpreted with caution and provide only a rough indication of the health impact. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
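The probabilistic comparison of aggregate exposure with an individual induction dose can be sketched as a simple Monte Carlo exceedance calculation; the lognormal distributions and every parameter value below are purely illustrative and are not those of the geraniol assessment.

```python
# Schematic exceedance calculation: exposure vs. individual induction dose.
import numpy as np

rng = np.random.default_rng(11)
n = 1_000_000                                   # simulated consumers

# Aggregate daily dermal exposure (µg/cm2/day), lognormal across consumers.
exposure = rng.lognormal(mean=np.log(0.05), sigma=1.0, size=n)

# Individual sensitization induction dose (µg/cm2/day), also lognormal.
induction_dose = rng.lognormal(mean=np.log(50.0), sigma=1.2, size=n)

new_cases_per_million = (exposure > induction_dose).mean() * 1e6
print(f"Projected new sensitization cases: {new_cases_per_million:.0f} per million consumers")
```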
High-sensitivity chemical derivatization NMR analysis for condition monitoring of aged elastomers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Assink, Roger Alan; Celina, Mathias C.; Skutnik, Julie Michelle
2004-06-01
An aged polybutadiene-based elastomer was reacted with trifluoroacetic anhydride (TFAA) and subsequently analyzed via 19F NMR spectroscopy. Derivatization between the TFAA and hydroxyl functionalities produced during thermo-oxidative aging was achieved, resulting in the formation of trifluoroester groups on the polymer. Primary and secondary alcohols were confirmed to be the main oxidation products of this material, and the total percent oxidation correlated with data obtained from oxidation rate measurements. The chemical derivatization appears to be highly sensitive and can be used to establish the presence and identity of oxidation products in aged polymeric materials. This methodology represents a novel condition monitoring approach for the detection of chemical changes that are otherwise difficult to analyze.
Derivatization of peptides as quaternary ammonium salts for sensitive detection by ESI-MS.
Cydzik, Marzena; Rudowska, Magdalena; Stefanowicz, Piotr; Szewczuk, Zbigniew
2011-06-01
A series of model peptides in the form of quaternary ammonium salts at the N-terminus was efficiently prepared by the solid-phase synthesis. Tandem mass spectrometric analysis of the peptide quaternary ammonium derivatives was shown to provide sequence confirmation and enhanced detection. We designed the 2-(1,4-diazabicyclo[2.2.2] octylammonium)acetyl quaternary ammonium group which does not suffer from neutral losses during MS/MS experiments. The presented quaternization of 1,4-diazabicyclo[2.2.2]octane (DABCO) by iodoacetylated peptides is relatively easy and compatible with standard solid-phase peptide synthesis. This methodology offers a novel sensitive approach to analyze peptides and other compounds. Copyright © 2011 European Peptide Society and John Wiley & Sons, Ltd.
How often do sensitivity analyses for economic parameters change cost-utility analysis conclusions?
Schackman, Bruce R; Gold, Heather Taffet; Stone, Patricia W; Neumann, Peter J
2004-01-01
There is limited evidence about the extent to which sensitivity analysis has been used in the cost-effectiveness literature. Sensitivity analyses for health-related QOL (HR-QOL), cost and discount rate economic parameters are of particular interest because they measure the effects of methodological and estimation uncertainties. To investigate the use of sensitivity analyses in the pharmaceutical cost-utility literature in order to test whether a change in economic parameters could result in a different conclusion regarding the cost effectiveness of the intervention analysed. Cost-utility analyses of pharmaceuticals identified in a prior comprehensive audit (70 articles) were reviewed and further audited. For each base case for which sensitivity analyses were reported (n = 122), up to two sensitivity analyses for HR-QOL (n = 133), cost (n = 99), and discount rate (n = 128) were examined. Article mentions of thresholds for acceptable cost-utility ratios were recorded (total 36). Cost-utility ratios were denominated in US dollars for the year reported in each of the original articles in order to determine whether a different conclusion would have been indicated at the time the article was published. Quality ratings from the original audit for articles where sensitivity analysis results crossed the cost-utility ratio threshold above the base-case result were compared with those that did not. The most frequently mentioned cost-utility thresholds were $US20,000/QALY, $US50,000/QALY, and $US100,000/QALY. The proportions of sensitivity analyses reporting quantitative results that crossed the threshold above the base-case results (or where the sensitivity analysis result was dominated) were 31% for HR-QOL sensitivity analyses, 20% for cost-sensitivity analyses, and 15% for discount-rate sensitivity analyses. Almost half of the discount-rate sensitivity analyses did not report quantitative results. Articles that reported sensitivity analyses where results crossed the cost-utility threshold above the base-case results (n = 25) were of somewhat higher quality, and were more likely to justify their sensitivity analysis parameters, than those that did not (n = 45), but the overall quality rating was only moderate. Sensitivity analyses for economic parameters are widely reported and often identify whether choosing different assumptions leads to a different conclusion regarding cost effectiveness. Changes in HR-QOL and cost parameters should be used to test alternative guideline recommendations when there is uncertainty regarding these parameters. Changes in discount rates less frequently produce results that would change the conclusion about cost effectiveness. Improving the overall quality of published studies and describing the justifications for parameter ranges would allow more meaningful conclusions to be drawn from sensitivity analyses.
Identification of elastic, dielectric, and piezoelectric constants in piezoceramic disks.
Perez, Nicolas; Andrade, Marco A B; Buiochi, Flavio; Adamowski, Julio C
2010-12-01
Three-dimensional modeling of piezoelectric devices requires a precise knowledge of piezoelectric material parameters. The commonly used piezoelectric materials belong to the 6mm symmetry class, which have ten independent constants. In this work, a methodology to obtain precise material constants over a wide frequency band through finite element analysis of a piezoceramic disk is presented. Given an experimental electrical impedance curve and a first estimate for the piezoelectric material properties, the objective is to find the material properties that minimize the difference between the electrical impedance calculated by the finite element method and that obtained experimentally by an electrical impedance analyzer. The methodology consists of four basic steps: experimental measurement, identification of vibration modes and their sensitivity to material constants, a preliminary identification algorithm, and final refinement of the material constants using an optimization algorithm. The application of the methodology is exemplified using a hard lead zirconate titanate piezoceramic. The same methodology is applied to a soft piezoceramic. The errors in the identification of each parameter are statistically estimated in both cases, and are less than 0.6% for elastic constants, and less than 6.3% for dielectric and piezoelectric constants.
Sexuality Research in Iran: A Focus on Methodological and Ethical Considerations.
Rahmani, Azam; Merghati-Khoei, Effat; Moghaddam-Banaem, Lida; Zarei, Fatemeh; Montazeri, Ali; Hajizadeh, Ebrahim
2015-07-01
Research on sensitive topics, such as sexuality, can raise technical, methodological, ethical, political, and legal challenges. The aim of this paper was to describe the methodological challenges that the authors confronted during sexuality research with a young population in the Iranian culture. This study was an exploratory mixed-methods study conducted in 2013-14. We interviewed 63 young women aged 18-34 yr in the qualitative phase and 265 young women in the quantitative phase, in (university and non-university) dormitories and in an Adolescent Friendly Center. Data were collected using focus group discussions and individual interviews in the qualitative phase. We employed conventional content analysis to analyze the data. To enhance the rigor of the data, multiple data collection methods, maximum variation sampling, and peer checks were applied. Five main themes emerged from the data: interaction with the opposite sex, sexual risk, sexual protection, sex education, and sexual vulnerability. The challenges encountered while conducting sex research are discussed; these included the assumption of promiscuity, the language of silence and privacy concerns, and the sex segregation policy. We describe the strategies applied in our study and the rationale for each strategy. The strategies applied in the present study can be employed in contexts with similar methodological and moral concerns.
Toward quantifying the effectiveness of water trading under uncertainty.
Luo, B; Huang, G H; Zou, Y; Yin, Y Y
2007-04-01
This paper presents a methodology for quantifying the effectiveness of water-trading under uncertainty, by developing an optimization model based on the interval-parameter two-stage stochastic program (TSP) technique. In the study, the effectiveness of a water-trading program is measured by the water volume that can be released through trading from a statistical point of view. The methodology can also deal with recourse water allocation problems generated by randomness in water availability and, at the same time, tackle uncertainties expressed as intervals in the trading system. The developed methodology was tested with a hypothetical water-trading program in an agricultural system in the Swift Current Creek watershed, Canada. Study results indicate that the methodology can effectively measure the effectiveness of a trading program through estimating the water volume being released through trading in a long-term view. A sensitivity analysis was also conducted to analyze the effects of different trading costs on the trading program. It shows that the trading efforts would become ineffective when the trading costs are too high. The case study also demonstrates that the trading program is more effective in a dry season when total water availability is in shortage.
Analysis of Multiple Cracks in an Infinite Functionally Graded Plate
NASA Technical Reports Server (NTRS)
Shbeeb, N. I.; Binienda, W. K.; Kreider, K. L.
1999-01-01
A general methodology was constructed to develop the fundamental solution for a crack embedded in an infinite non-homogeneous material in which the shear modulus varies exponentially with the y coordinate. The fundamental solution was used to generate a solution to fully interactive multiple crack problems for stress intensity factors and strain energy release rates. Parametric studies were conducted for two crack configurations. The model displayed sensitivity to crack distance, relative angular orientation, and to the coefficient of nonhomogeneity.
2015-09-01
continue to occur in the Peruvian Andes and the low-lying Amazon basin in the environmentally sensitive and protected region of Madre de Dios. In 2013... PwC stated, "six mining companies and the small producers of the region of Madre de Dios concentrate 62% of [gold] production" (PwC, 2013b, p. 16)... illegal mining operations occur throughout Madre de Dios without attempts at formalization. In Madre de Dios, forests are clear-cut of vegetation
Du, Xiaojiao; Jiang, Ding; Hao, Nan; Qian, Jing; Dai, Liming; Zhou, Lei; Hu, Jianping; Wang, Kun
2016-10-04
The development of novel detection methodologies with simplicity and ultrasensitivity in the electrochemiluminescence (ECL) aptasensor field is essential for constructing biosensing architectures. Herein, a facile, specific, and sensitive methodology was developed for the first time for quantitative detection of microcystin-LR (MC-LR), based on three-dimensional boron and nitrogen codoped graphene hydrogels (BN-GHs) assisting a steric hindrance amplifying effect between the aptamer and target analytes. The recognition reaction was monitored by quartz crystal microbalance (QCM) to validate the possible steric hindrance effect. First, the BN-GHs were synthesized via a self-assembled hydrothermal method and then applied as the Ru(bpy)3(2+) immobilization platform for further loading the biomolecule aptamers, owing to their nanoporous structure and large specific surface area. Interestingly, we discovered for the first time that, without the aid of a conventional double-stranded DNA configuration, such three-dimensional nanomaterials can directly amplify the steric hindrance effect between the aptamer and target analytes to a detectable level, and this facile methodology could serve as an exquisite assay. With MC-LR as a model, this novel ECL biosensor showed high sensitivity and a wide linear range. This strategy supplies a simple and versatile platform for specific and sensitive determination of a wide range of aptamer-related targets, implying that three-dimensional nanomaterials would play a crucial role in engineering and developing novel detection methodologies for the ECL aptasensing field.
Detection of white matter lesion regions in MRI using SLIC0 and convolutional neural network.
Diniz, Pedro Henrique Bandeira; Valente, Thales Levi Azevedo; Diniz, João Otávio Bandeira; Silva, Aristófanes Corrêa; Gattass, Marcelo; Ventura, Nina; Muniz, Bernardo Carvalho; Gasparetto, Emerson Leandro
2018-04-19
White matter lesions are non-static brain lesions that have a prevalence rate of up to 98% in the elderly population. Because they may be associated with several brain diseases, it is important that they are detected as soon as possible. Magnetic Resonance Imaging (MRI) provides three-dimensional data with the possibility to detect and emphasize contrast differences in soft tissues, providing rich information about human soft tissue anatomy. However, the amount of data provided by these images is far too great for manual analysis/interpretation, representing a difficult and time-consuming task for specialists. This work presents a computational methodology capable of detecting regions of white matter lesions of the brain in MRI of the FLAIR modality. The techniques highlighted in this methodology are SLIC0 clustering for candidate segmentation and convolutional neural networks for candidate classification. The methodology proposed here consists of four steps: (1) image acquisition, (2) image preprocessing, (3) candidate segmentation and (4) candidate classification. The methodology was applied to 91 magnetic resonance images provided by DASA, and achieved an accuracy of 98.73%, specificity of 98.77% and sensitivity of 78.79%, with 0.005 false positives, without any false-positive reduction technique, in the detection of white matter lesion regions. The feasibility of analyzing brain MRI using SLIC0 and convolutional neural network techniques to successfully detect white matter lesion regions is demonstrated. Copyright © 2018. Published by Elsevier B.V.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-07
... information that is sensitive or proprietary, such as detailed process designs or site plans. Because the... Inputs to Emission Equations X Calculation Methodology and Methodological Tier X Data Elements Reported...
Jebaseelan, D Davidson; Jebaraj, C; Yoganandan, Narayan; Rajasekaran, S; Kanna, Rishi M
2012-05-01
The objective of the study was to determine the sensitivity of material properties of the juvenile spine to its external and internal responses using a finite element model under compression, and flexion-extension bending moments. The methodology included exercising the 8-year-old juvenile lumbar spine using parametric procedures. The model included the vertebral centrum, growth plates, laminae, pedicles, transverse processes and spinous processes; disc annulus and nucleus; and various ligaments. The sensitivity analysis was conducted by varying the modulus of elasticity for various components. The first simulation was done using mean material properties. Additional simulations were done for each component corresponding to low and high material property variations. External displacement/rotation and internal stress-strain responses were determined under compression and flexion-extension bending. Results indicated that, under compression, disc properties were more sensitive than bone properties, implying an elevated role of the disc under this mode. Under flexion-extension moments, ligament properties were more dominant than the other components, suggesting that various ligaments of the juvenile spine play a key role in modulating bending behaviors. Changes in the growth plate stress associated with ligament properties explained the importance of the growth plate in the pediatric spine with potential implications in progressive deformities.
Accuracy and sensitivity analysis on seismic anisotropy parameter estimation
NASA Astrophysics Data System (ADS)
Yan, Fuyong; Han, De-Hua
2018-04-01
There is significant uncertainty in measuring Thomsen's parameter δ in the laboratory, even though the dimensions and orientations of the rock samples are known. It is expected that more challenges will be encountered in estimating seismic anisotropy parameters from field seismic data. Based on Monte Carlo simulation of a vertical transversely isotropic layer-cake model, using a database of laboratory anisotropy measurements from the literature, we apply the commonly used quartic non-hyperbolic reflection moveout equation to estimate the seismic anisotropy parameters and test its accuracy and sensitivity to the source-receiver offset, vertical interval velocity error and time picking error. The testing results show that the methodology works perfectly for noise-free synthetic data with short spread length. However, this method is extremely sensitive to the time picking error caused by mild random noise, and it requires the spread length to be greater than the depth of the reflection event. The uncertainties increase rapidly for the deeper layers, and the estimated anisotropy parameters can be very unreliable for a layer with more than five overlying layers. It is possible for an isotropic formation to be misinterpreted as a strongly anisotropic formation. The sensitivity analysis should provide useful guidance on how to group the reflection events and build a suitable geological model for anisotropy parameter inversion.
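A commonly used form of the quartic non-hyperbolic moveout equation for a VTI medium is reproduced below to make the parameter dependence explicit; the study may use a variant, so this is shown only as background.

```latex
% Alkhalifah-Tsvankin quartic (nonhyperbolic) moveout approximation, with NMO
% velocity V_nmo and anellipticity \eta from Thomsen's \varepsilon and \delta:
\[
  t^{2}(x) \;=\; t_{0}^{2} + \frac{x^{2}}{V_{\mathrm{nmo}}^{2}}
  \;-\; \frac{2\,\eta\,x^{4}}
             {V_{\mathrm{nmo}}^{2}\!\left[t_{0}^{2}V_{\mathrm{nmo}}^{2} + (1+2\eta)\,x^{2}\right]},
  \qquad
  \eta \;=\; \frac{\varepsilon - \delta}{1 + 2\delta}.
\]
```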
Veress, Alexander I.; Klein, Gregory; Gullberg, Grant T.
2013-01-01
The objectives of the following research were to evaluate the utility of a deformable image registration technique known as hyperelastic warping for the measurement of local strains in the left ventricle through the analysis of clinical, gated PET image datasets. Two normal human male subjects were sequentially imaged with PET and tagged MRI imaging. Strain predictions were made for systolic contraction using warping analyses of the PET images and HARP-based strain analyses of the MRI images. Coefficient of determination (R2) values were computed for the comparison of circumferential and radial strain predictions produced by each methodology. There was good correspondence between the methodologies, with R2 values of 0.78 for the radial strains of both hearts and R2 values of 0.81 and 0.83 for the circumferential strains. The strain predictions were not statistically different (P ≤ 0.01). A series of sensitivity results indicated that the methodology was relatively insensitive to alterations in image intensity, random image noise, and alterations in fiber structure. This study demonstrated that warping was able to provide strain predictions of systolic contraction of the LV consistent with those provided by tagged MRI.
Irei, Satoshi
2016-01-01
Molecular marker analysis of environmental samples often requires time consuming preseparation steps. Here, analysis of low-volatile nonpolar molecular markers (5-6 ring polycyclic aromatic hydrocarbons or PAHs, hopanoids, and n-alkanes) without the preseparation procedure is presented. Analysis of artificial sample extracts was directly conducted by gas chromatography-mass spectrometry (GC-MS). After every sample injection, a standard mixture was also analyzed to make a correction on the variation of instrumental sensitivity caused by the unfavorable matrix contained in the extract. The method was further validated for the PAHs using the NIST standard reference materials (SRMs) and then applied to airborne particulate matter samples. Tests with the SRMs showed that overall our methodology was validated with the uncertainty of ~30%. The measurement results of airborne particulate matter (PM) filter samples showed a strong correlation between the PAHs, implying the contributions from the same emission source. Analysis of size-segregated PM filter samples showed that their size distributions were found to be in the PM smaller than 0.4 μm aerodynamic diameter. The observations were consistent with our expectation of their possible sources. Thus, the method was found to be useful for molecular marker studies. PMID:27127511
Frempong, Samuel N; Sutton, Andrew J; Davenport, Clare; Barton, Pelham
2018-02-01
There is little specific guidance on the implementation of cost-effectiveness modelling at the early stage of test development. The aim of this study was to review the literature in this field to examine the methodologies and tools that have been employed to date. Areas Covered: A systematic review to identify relevant studies in established literature databases. Five studies were identified and included for narrative synthesis. These studies revealed that there is no consistent approach in this growing field. The perspective of patients and the potential for value of information (VOI) to provide information on the value of future research is often overlooked. Test accuracy is an essential consideration, with most studies having described and included all possible test results in their analysis, and conducted extensive sensitivity analyses on important parameters. Headroom analysis was considered in some instances but at the early development stage (not the concept stage). Expert commentary: The techniques available to modellers that can demonstrate the value of conducting further research and product development (i.e. VOI analysis, headroom analysis) should be better utilized. There is the need for concerted efforts to develop rigorous methodology in this growing field to maximize the value and quality of such analysis.
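Headroom analysis, mentioned above, reduces to simple arithmetic of the kind sketched below; all figures are hypothetical placeholders, and whether the comparator test cost is added depends on whether the new test replaces the existing one.

```python
# Back-of-envelope headroom calculation for an early-stage diagnostic test:
# the maximum per-test price at which the test could still be cost-effective.
# Every number is a hypothetical placeholder.
qaly_gain_per_patient = 0.02      # incremental QALYs from improved diagnosis
threshold = 20_000                # willingness to pay per QALY (currency units)
downstream_cost_offset = 150      # downstream treatment costs avoided per patient
comparator_test_cost = 40         # cost of the test being replaced (if replaced)

headroom = qaly_gain_per_patient * threshold + downstream_cost_offset + comparator_test_cost
print(f"Maximum justifiable price per test: {headroom:.0f}")
```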
Veisten, Knut; Houwing, Sjoerd; Mathijssen, M P M René; Akhtar, Juned
2013-03-01
Road users driving under the influence of psychoactive substances may be at much higher relative risk (RR) in road traffic than the average driver. Legislation banning blood alcohol concentrations above certain threshold levels, combined with roadside breath-testing for alcohol, has been in place for decades in many countries, but new legislation and testing of drivers for drug use have recently been implemented in some countries. In this article we present a methodology for cost-benefit analysis (CBA) of increased law enforcement of roadside drug screening. This is an analysis of the profitability for society, where costs of control are weighed against the reduction in injuries expected from fewer drugged drivers on the roads. We specify assumptions regarding costs and the effect of the specificity of the drug screening device, and quantify a deterrence effect related to the sensitivity of the device, yielding the benefit estimates. Three European countries with different current enforcement levels were studied, yielding benefit-cost ratios in the approximate range of 0.5-5 for a tripling of current levels of enforcement, with costs of about 4000 EUR per conviction and in the range of 1.5-13 million EUR per prevented fatality. The applied methodology for CBA involved a simplistic behavioural response to increases in enforcement and control efficiency. Although this methodology should be developed further, it is clearly indicated that the cost-efficiency of increased law enforcement of drug driving offences is dependent on the baseline situation of drug use in traffic and on the current level of enforcement, as well as the RR and prevalence of drugs in road traffic. Copyright © 2012 Elsevier B.V. All rights reserved.
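The benefit-cost arithmetic behind such an analysis can be sketched as below; every figure is an invented placeholder, since the study derives its inputs from country-specific prevalence, relative risk, deterrence and cost assumptions.

```python
# Simple benefit-cost ratio arithmetic for scaled-up roadside drug screening
# (all inputs invented for illustration).
extra_tests_per_year = 100_000
cost_per_test = 25.0                      # device + police time (EUR)
conviction_rate = 0.01                    # share of tests leading to conviction
processing_cost_per_conviction = 1_500.0  # legal handling (EUR)

crashes_prevented = 12                    # via deterrence (fewer drugged drivers)
value_per_prevented_crash = 300_000.0     # monetized injury savings (EUR)

total_cost = (extra_tests_per_year * cost_per_test
              + extra_tests_per_year * conviction_rate * processing_cost_per_conviction)
total_benefit = crashes_prevented * value_per_prevented_crash

print(f"Benefit-cost ratio: {total_benefit / total_cost:.2f}")
print(f"Cost per conviction: {total_cost / (extra_tests_per_year * conviction_rate):.0f} EUR")
```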
ERIC Educational Resources Information Center
Tabatadze, Shalva; Gorgadze, Natia
2018-01-01
Purpose: The purpose of this paper is to assess the intercultural sensitivity of students in teacher educational programs at higher education institutes (HEIs) in Georgia. Design/methodology/approach: This research explored the intercultural sensitivity among 355 randomly selected students in teacher education programs at higher education…
Study of quiet turbofan STOL aircraft for short-haul transportation. Volume 6: Systems analysis
NASA Technical Reports Server (NTRS)
1973-01-01
A systems analysis of the quiet turbofan aircraft for short-haul transportation was conducted. The purpose of the study was to integrate the representative data generated by aircraft, market, and economic analyses. Activities of the study were to develop the approach and to refine the methodologies for analytic tradeoff, and sensitivity studies of propulsive lift conceptual aircraft and their performance in simulated regional airlines. The operations of appropriate airlines in each of six geographic regions of the United States were simulated. The offshore domestic regions were evaluated to provide a complete domestic evaluation of the STOL concept applicability.
Recent Advances in Clinical Glycoproteomics of Immunoglobulins (Igs).
Plomp, Rosina; Bondt, Albert; de Haan, Noortje; Rombouts, Yoann; Wuhrer, Manfred
2016-07-01
Antibody glycosylation analysis has seen methodological progress resulting in new findings with regard to antibody glycan structure and function in recent years. For example, antigen-specific IgG glycosylation analysis is now applicable for clinical samples because of the increased sensitivity of measurements, and this has led to new insights in the relationship between IgG glycosylation and various diseases. Furthermore, many new methods have been developed for the purification and analysis of IgG Fc glycopeptides, notably multiple reaction monitoring for high-throughput quantitative glycosylation analysis. In addition, new protocols for IgG Fab glycosylation analysis were established revealing autoimmune disease-associated changes. Functional analysis has shown that glycosylation of IgA and IgE is involved in transport across the intestinal epithelium and receptor binding, respectively. © 2016 by The American Society for Biochemistry and Molecular Biology, Inc.
Efficient Gradient-Based Shape Optimization Methodology Using Inviscid/Viscous CFD
NASA Technical Reports Server (NTRS)
Baysal, Oktay
1997-01-01
The formerly developed preconditioned-biconjugate-gradient (PBCG) solvers for the analysis and the sensitivity equations had resulted in very large error reductions per iteration; quadratic convergence was achieved whenever the solution entered the domain of attraction to the root. Its memory requirement was also lower as compared to a direct inversion solver. However, this memory requirement was high enough to preclude the realistic, high grid-density design of a practical 3D geometry. This limitation served as the impetus to the first-year activity (March 9, 1995 to March 8, 1996). Therefore, the major activity for this period was the development of the low-memory methodology for the discrete-sensitivity-based shape optimization. This was accomplished by solving all the resulting sets of equations using an alternating-direction-implicit (ADI) approach. The results indicated that shape optimization problems which required large numbers of grid points could be resolved with a gradient-based approach. Therefore, to better utilize the computational resources, it was recommended that a number of coarse grid cases, using the PBCG method, should initially be conducted to better define the optimization problem and the design space, and obtain an improved initial shape. Subsequently, a fine grid shape optimization, which necessitates using the ADI method, should be conducted to accurately obtain the final optimized shape. The other activity during this period was the interaction with the members of the Aerodynamic and Aeroacoustic Methods Branch of Langley Research Center during one stage of their investigation to develop an adjoint-variable sensitivity method using the viscous flow equations. This method had algorithmic similarities to the variational sensitivity methods and the control-theory approach. However, unlike the prior studies, it was considered for the three-dimensional, viscous flow equations. The major accomplishment in the second period of this project (March 9, 1996 to March 8, 1997) was the extension of the shape optimization methodology to the Thin-Layer Navier-Stokes (TLNS) equations. Both the Euler-based and the TLNS-based analyses compared well with the analyses obtained using the CFL3D code. The sensitivities, again from both levels of the flow equations, also compared very well with the finite-differenced sensitivities. A fairly large set of shape optimization cases was conducted to study a number of issues previously not well understood. The testbed for these cases was the shaping of an arrow wing in Mach 2.4 flow. All the final shapes, obtained either from a coarse-grid-based or a fine-grid-based optimization, using either an Euler-based or a TLNS-based analysis, were all re-analyzed using a fine-grid TLNS solution for their function evaluations. This allowed for a fairer comparison of their relative merits. From the aerodynamic performance standpoint, the fine-grid TLNS-based optimization produced the best shape, and the fine-grid Euler-based optimization produced the lowest cruise efficiency.
Guide to context sensitive solutions.
DOT National Transportation Integrated Search
2006-06-01
Context sensitive solutions are being implemented by the New Mexico Department of Transportation (NMDOT) in its transportation planning and project delivery processes. The NMDOT seeks to incorporate CSS methodologies and techniques into its planning,...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cacuci, Dan G.; Favorite, Jeffrey A.
2018-04-06
This work presents an application of Cacuci’s Second-Order Adjoint Sensitivity Analysis Methodology (2nd-ASAM) to the simplified Boltzmann equation that models the transport of uncollided particles through a medium to compute efficiently and exactly all of the first- and second-order derivatives (sensitivities) of a detector’s response with respect to the system’s isotopic number densities, microscopic cross sections, source emission rates, and detector response function. The off-the-shelf PARTISN multigroup discrete ordinates code is employed to solve the equations underlying the 2nd-ASAM. The accuracy of the results produced using PARTISN is verified by using the results of three test configurations: (1) a homogeneous sphere, for which the response is the exactly known total uncollided leakage, (2) a multiregion two-dimensional (r-z) cylinder, and (3) a two-region sphere for which the response is a reaction rate. For the homogeneous sphere, results for the total leakage as well as for the respective first- and second-order sensitivities are in excellent agreement with the exact benchmark values. For the nonanalytic problems, the results obtained by applying the 2nd-ASAM to compute sensitivities are in excellent agreement with central-difference estimates. The efficiency of the 2nd-ASAM is underscored by the fact that, for the cylinder, only 12 adjoint PARTISN computations were required by the 2nd-ASAM to compute all of the benchmark’s 18 first-order sensitivities and 224 second-order sensitivities, in contrast to the 877 PARTISN calculations needed to compute the respective sensitivities using central finite differences, and this number does not include the additional calculations that were required to find appropriate values of the perturbations to use for the central differences.
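To make the verification idea above concrete, the following is a minimal sketch (not the PARTISN/2nd-ASAM computation itself) that compares exact first- and second-order sensitivities of a toy uncollided-transmission response with their central-difference estimates; the response form, source strength, path length, and cross-section value are all assumed for illustration.

```python
import numpy as np

# Toy uncollided-transmission response R = S * exp(-sigma * t): a stand-in
# for a detector response, NOT the PARTISN/2nd-ASAM model itself.
S, t = 1.0e6, 4.0          # hypothetical source strength and path length (cm)

def response(sigma):
    return S * np.exp(-sigma * t)

sigma0 = 0.25              # hypothetical macroscopic cross section (1/cm)

# Exact (adjoint-like) first- and second-order sensitivities dR/dsigma, d2R/dsigma2
dR_exact  = -t * response(sigma0)
d2R_exact =  t**2 * response(sigma0)

# Central-difference estimates, as used to verify the adjoint-based results
h = 1.0e-3
dR_cd  = (response(sigma0 + h) - response(sigma0 - h)) / (2.0 * h)
d2R_cd = (response(sigma0 + h) - 2.0 * response(sigma0) + response(sigma0 - h)) / h**2

print(f"dR/dsigma   exact {dR_exact:.6e}  central-diff {dR_cd:.6e}")
print(f"d2R/dsigma2 exact {d2R_exact:.6e}  central-diff {d2R_cd:.6e}")
```

The central differences here require several response evaluations per parameter, which is the cost the adjoint methodology avoids for large parameter sets.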
Hu, Yaqin; Yu, Hiaxia; Dong, Kaicheng; Yang, Shuibing; Ye, Xingqian; Chen, Shiguo
2014-10-01
Due to its unique structure, jumbo squid (Dosidicus gigas) meat is sensitive to heat treatment, which makes traditional squid products tough and hard in texture. This study aimed to tenderise jumbo squid meat through ultrasonic treatment. Response surface methodology (RSM) was used to predict the tenderising effect of various treatment conditions. According to the results of the RSM, the optimal conditions appeared to be a power of 186.9 W, a frequency of 25.6 kHz, and a time of 30.8 min, and the predicted values of flexibility and firmness under these optimal conditions were 2.40 mm and 435.1 g, respectively. Protein degradation and a broken muscle fibre structure were observed through histological assay and SDS-PAGE, which suggests a satisfactory tenderisation effect. Copyright © 2014. Published by Elsevier Ltd.
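As a rough illustration of how a response surface is fitted and its optimum located, the sketch below fits a second-order model to a hypothetical two-factor (coded power and time) design and searches it for the settings that maximize flexibility; the design points, responses, and factor ranges are invented and do not reproduce the study's three-factor optimization.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical central-composite-style design in two coded factors
# (ultrasonic power, treatment time); responses are made-up illustrations.
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [-1.41, 0], [1.41, 0], [0, -1.41], [0, 1.41],
              [0, 0], [0, 0], [0, 0]], dtype=float)
y = np.array([1.6, 2.0, 1.9, 2.3, 1.5, 2.1, 1.7, 2.2, 2.35, 2.30, 2.38])  # flexibility, mm

def quad_terms(x1, x2):
    # Full second-order model: intercept, linear, interaction, and square terms
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

beta, *_ = np.linalg.lstsq(quad_terms(X[:, 0], X[:, 1]), y, rcond=None)

def predicted(v):
    return quad_terms(np.array([v[0]]), np.array([v[1]]))[0] @ beta

# Search the design region for the factor settings that maximise flexibility
res = minimize(lambda v: -predicted(v), x0=[0.0, 0.0],
               bounds=[(-1.41, 1.41), (-1.41, 1.41)])
print("optimal coded settings:", res.x, " predicted flexibility (mm):", -res.fun)
```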
Rogers, Stephen C.; Gibbons, Lindsey B.; Griffin, Sherraine; Doctor, Allan
2012-01-01
This chapter summarizes the principles of RSNO measurement in the gas phase, utilizing ozone-based chemiluminescence and the copper cysteine (2C) ± carbon monoxide (3C) reagent. Although an indirect method for quantifying RSNOs, this assay represents one of the most robust methodologies available. It exploits the NO• detection sensitivity of ozone-based chemiluminescence, which is within the range required to detect physiological concentrations of RSNO metabolites. Additionally, the specificity of the copper cysteine (2C and 3C) reagent for RSNOs negates the need for sample pretreatment, thereby minimizing the likelihood of sample contamination (false positive results), NO species inter-conversion, or the loss of certain highly labile RSNO species. Herein, we outline the principles of this methodology, summarizing key issues, potential pitfalls and corresponding solutions. PMID:23116707
Financial analysis of technology acquisition using fractionated lasers as a model.
Jutkowitz, Eric; Carniol, Paul J; Carniol, Alan R
2010-08-01
Ablative fractional lasers are among the most advanced and costly devices on the market. Yet, there is a dearth of published literature on the cost and potential return on investment (ROI) of such devices. The objective of this study was to provide a methodological framework for physicians to evaluate ROI. To facilitate this analysis, we conducted a case study on the potential ROI of eight ablative fractional lasers. In the base case analysis, a 5-year lease and a 3-year lease were assumed as the purchase option with a $0 down payment and 3-month payment deferral. In addition to lease payments, service contracts, labor cost, and disposables were included in the total cost estimate. Revenue was estimated as price per procedure multiplied by total number of procedures in a year. Sensitivity analyses were performed to account for variability in model assumptions. Based on the assumptions of the model, all lasers had higher ROI under the 5-year lease agreement compared with that for the 3-year lease agreement. When comparing results between lasers, those with lower operating and purchase cost delivered a higher ROI. Sensitivity analysis indicates the model is most sensitive to purchase method. If physicians opt to purchase the device rather than lease, they can significantly enhance ROI. ROI analysis is an important tool for physicians who are considering making an expensive device acquisition. However, physicians should not rely solely on ROI and must also consider the clinical benefits of a laser. (c) Thieme Medical Publishers.
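A minimal sketch of the kind of lease-based ROI arithmetic described above is given below; every figure (lease payment, service contract, per-procedure costs, price, and volume) is a hypothetical placeholder rather than a value from the study.

```python
# Illustrative ROI model for a leased aesthetic laser; every figure below is a
# hypothetical assumption, not data from the study.
monthly_lease       = 3500.0      # $/month on a 5-year lease, $0 down, deferred 3 months
service_contract    = 12000.0     # $/year
disposables_per_tx  = 100.0       # $ per procedure
labor_per_tx        = 150.0       # $ per procedure
price_per_tx        = 1200.0      # price charged per procedure
procedures_per_year = 120

annual_revenue = price_per_tx * procedures_per_year
annual_cost = (monthly_lease * 12 + service_contract
               + (disposables_per_tx + labor_per_tx) * procedures_per_year)
annual_profit = annual_revenue - annual_cost
roi = annual_profit / annual_cost
print(f"annual profit: ${annual_profit:,.0f}   ROI: {roi:.1%}")

# Simple sensitivity sweep: how ROI responds to procedure volume
for n in (60, 120, 180, 240):
    rev = price_per_tx * n
    cost = monthly_lease * 12 + service_contract + (disposables_per_tx + labor_per_tx) * n
    print(f"{n:4d} procedures/yr -> ROI {(rev - cost) / cost:6.1%}")
```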
Glycoprotein Enrichment Analytical Techniques: Advantages and Disadvantages.
Zhu, R; Zacharias, L; Wooding, K M; Peng, W; Mechref, Y
2017-01-01
Protein glycosylation is one of the most important posttranslational modifications. Numerous biological functions are related to protein glycosylation. However, analytical challenges remain in the glycoprotein analysis. To overcome the challenges associated with glycoprotein analysis, many analytical techniques were developed in recent years. Enrichment methods were used to improve the sensitivity of detection, while HPLC and mass spectrometry methods were developed to facilitate the separation of glycopeptides/proteins and enhance detection, respectively. Fragmentation techniques applied in modern mass spectrometers allow the structural interpretation of glycopeptides/proteins, while automated software tools started replacing manual processing to improve the reliability and throughput of the analysis. In this chapter, the current methodologies of glycoprotein analysis were discussed. Multiple analytical techniques are compared, and advantages and disadvantages of each technique are highlighted. © 2017 Elsevier Inc. All rights reserved.
Causality analysis in business performance measurement system using system dynamics methodology
NASA Astrophysics Data System (ADS)
Yusof, Zainuridah; Yusoff, Wan Fadzilah Wan; Maarof, Faridah
2014-07-01
One of the main components of the Balanced Scorecard (BSC) that differentiates it from any other performance measurement system (PMS) is the Strategy Map with its unidirectional causality feature. Despite its apparent popularity, criticisms of this causality assumption have been rigorously discussed by earlier researchers. In seeking empirical evidence of causality, propositions based on the service profit chain theory were developed and tested using an econometric analysis, the Granger causality test, on 45 data points. However, the causality models were found to be insufficiently established, as only 40% of the causal linkages were supported by the data. Expert knowledge was suggested for use in situations where historical data are insufficient. The Delphi method was therefore selected to obtain consensus on the existence of causality among 15 selected experts through 3 rounds of questionnaires. The study revealed that only 20% of the propositions were not supported. Both methods indicated the existence of bidirectional causality, which demonstrates significant dynamic environmental complexity arising from interaction among measures. A computer model and simulation using the System Dynamics (SD) methodology was therefore developed as an experimental platform to identify how policies impact business performance in such environments. Reproduction, sensitivity, and extreme-condition tests were conducted on the developed SD model to ensure its ability to mimic reality and its robustness and validity as a causality analysis platform. This study applied a theoretical service management model within the BSC domain to a practical situation using the SD methodology, where very limited work has been done.
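For readers unfamiliar with the econometric step, the following is a minimal sketch of a Granger causality test on two synthetic series of 45 points (standing in for two BSC measures), using the statsmodels grangercausalitytests routine; the data and lag choice are assumptions made only for illustration.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# Synthetic stand-ins for two BSC measures (e.g., an employee measure driving a
# customer measure with a one-period lag); 45 points mirrors the study size.
rng = np.random.default_rng(0)
n = 45
employee = rng.normal(size=n).cumsum()
customer = np.roll(employee, 1) * 0.8 + rng.normal(scale=0.5, size=n)

# grangercausalitytests checks whether the SECOND column helps predict the FIRST
data = pd.DataFrame({"customer": customer[1:], "employee": employee[1:]})
results = grangercausalitytests(data[["customer", "employee"]], maxlag=2)
for lag, res in results.items():
    fstat, pval = res[0]["ssr_ftest"][:2]
    print(f"lag {lag}: F = {fstat:.2f}, p = {pval:.4f}")
```

A small p-value at a given lag is the empirical evidence of a causal linkage that the propositions above were tested against.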
Do modern total knee replacements offer better value for money? A health economic analysis.
Hamilton, David F; Clement, Nicholas D; Burnett, Richard; Patton, James T; Moran, Mathew; Howie, Colin R; Simpson, A H R W; Gaston, Paul
2013-11-01
Cost effectiveness is an increasingly important factor in today's healthcare environment, and selection of arthroplasty implant is not exempt from such concerns. Quality adjusted life years (QALYs) are the typical tool for this type of evaluation. Using this methodology, joint arthroplasty has been shown to be cost effective; however, studies directly comparing differing prostheses are lacking. Data was gathered in a single-centre prospective double-blind randomised controlled trial comparing the outcome of modern and traditional knee implants, using the Short Form 6 dimensional (SF-6D) score and quality adjusted life year (QALY) methodology. There was significant improvement in the SF-6D score for both groups at one year (p < 0.0001). The calculated overall life expectancy for the study cohort was 15.1 years, resulting in an overall QALY gain of 2.144 (95% CI 1.752-2.507). The modern implant group demonstrated a small improvement in SF-6D score compared to the traditional design at one year (0.141 versus 0.143, p = 0.94). This difference resulted in the modern implant costing £298 less per QALY at one year. This study demonstrates that modern implant technology does not influence the cost-effectiveness of TKA using the SF-6D and QALY methodology. This type of analysis however assesses health status, and is not sensitive to joint specific function. Evolutionary design changes in implant technology are thus unlikely to influence QALY analysis following joint replacement, which has important implications for implant procurement.
Acoustics based assessment of respiratory diseases using GMM classification.
Mayorga, P; Druzgalski, C; Morelos, R L; Gonzalez, O H; Vidales, J
2010-01-01
The focus of this paper is to present a method utilizing lung sounds for a quantitative assessment of patient health as it relates to respiratory disorders. In order to accomplish this, applicable traditional techniques within the speech processing domain were utilized to evaluate lung sounds obtained with a digital stethoscope. Traditional methods utilized in the evaluation of asthma involve auscultation and spirometry, but utilization of the more sensitive electronic stethoscopes that are currently available, together with the application of quantitative signal analysis methods, offers opportunities for improved diagnosis. In particular, we propose an acoustic evaluation methodology based on Gaussian Mixture Models (GMM), which should assist in broader analysis, identification, and diagnosis of asthma based on frequency domain analysis of wheezing and crackles.
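A minimal sketch of GMM-based classification in this spirit is shown below: one Gaussian mixture is fitted per class to synthetic two-dimensional acoustic feature vectors (stand-ins for features extracted from lung-sound recordings), and test frames are assigned to the class with the higher log-likelihood; the feature distributions and component counts are assumed.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic 2-D "acoustic feature" vectors standing in for spectral features
# extracted from lung sounds; real features would come from digital stethoscope
# recordings, which are outside this sketch.
rng = np.random.default_rng(1)
normal_feats = rng.normal(loc=[0.0, 0.0], scale=0.8, size=(200, 2))
wheeze_feats = rng.normal(loc=[2.5, 1.5], scale=0.8, size=(200, 2))

# One GMM per class, as in classic GMM classification: score a new frame under
# each model and pick the class with the higher log-likelihood.
gmm_normal = GaussianMixture(n_components=3, random_state=0).fit(normal_feats)
gmm_wheeze = GaussianMixture(n_components=3, random_state=0).fit(wheeze_feats)

test = np.vstack([rng.normal([0.0, 0.0], 0.8, (50, 2)),
                  rng.normal([2.5, 1.5], 0.8, (50, 2))])
truth = np.array([0] * 50 + [1] * 50)
pred = (gmm_wheeze.score_samples(test) > gmm_normal.score_samples(test)).astype(int)
print("frame-level accuracy:", (pred == truth).mean())
```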
Space station data system analysis/architecture study. Task 3: Trade studies, DR-5, volume 1
NASA Technical Reports Server (NTRS)
1985-01-01
The primary objective of Task 3 is to provide additional analysis and insight necessary to support key design/programmatic decisions for options quantification and selection for system definition. This includes: (1) the identification of key trade study topics; (2) the definition of a trade study procedure for each topic (issues to be resolved, key inputs, criteria/weighting, methodology); (3) the conduct of tradeoff and sensitivity analyses; and (4) the review/verification of results within the context of evolving system design and definition. The trade study topics addressed in this volume include space autonomy and function automation, software transportability, system network topology, communications standardization, onboard local area networking, distributed operating system, software configuration management, and the software development environment facility.
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Geogdzhayev, Igor V.; Cairns, Brian; Rossow, William B.; Lacis, Andrew A.
1999-01-01
This paper outlines the methodology of interpreting channel 1 and 2 AVHRR radiance data over the oceans and describes a detailed analysis of the sensitivity of monthly averages of retrieved aerosol parameters to the assumptions made in different retrieval algorithms. The analysis is based on using real AVHRR data and exploiting accurate numerical techniques for computing single and multiple scattering and spectral absorption of light in the vertically inhomogeneous atmosphere-ocean system. We show that two-channel algorithms can be expected to provide significantly more accurate and less biased retrievals of the aerosol optical thickness than one-channel algorithms and that imperfect cloud screening and calibration uncertainties are by far the largest sources of errors in the retrieved aerosol parameters. Both underestimating and overestimating aerosol absorption as well as the potentially strong variability of the real part of the aerosol refractive index may lead to regional and/or seasonal biases in optical thickness retrievals. The Angstrom exponent appears to be the most invariant aerosol size characteristic and should be retrieved along with optical thickness as the second aerosol parameter.
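As a small worked example of the particle-size information carried by the two channels, the sketch below evaluates the Angstrom exponent from aerosol optical thicknesses at two nominal AVHRR channel wavelengths; the wavelength and optical-thickness values are assumed, and the full two-channel retrieval (radiative transfer, cloud screening, calibration) is not represented.

```python
import numpy as np

# Angstrom exponent from aerosol optical thickness at the two AVHRR channels.
# Channel wavelengths are nominal values; the optical thicknesses below are
# made-up examples, not retrieved AVHRR values.
lam1, lam2 = 0.63, 0.83          # approximate channel wavelengths, micrometres
tau1, tau2 = 0.20, 0.15          # aerosol optical thickness in channels 1 and 2

# tau(lambda) ~ lambda**(-alpha)  =>  alpha = -ln(tau1/tau2) / ln(lam1/lam2)
alpha = -np.log(tau1 / tau2) / np.log(lam1 / lam2)
print(f"Angstrom exponent: {alpha:.2f}")
# Larger alpha indicates smaller particles; alpha near zero indicates coarse aerosol.
```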
Rapid Analysis of Trace Drugs and Metabolites Using a Thermal Desorption DART-MS Configuration.
Sisco, Edward; Forbes, Thomas P; Staymates, Matthew E; Gillen, Greg
2016-01-01
The need to analyze trace narcotic samples rapidly for screening or confirmatory purposes is of increasing interest to the forensic, homeland security, and criminal justice sectors. This work presents a novel method for the detection and quantification of trace drugs and metabolites from a swipe material using a thermal desorption direct analysis in real time mass spectrometry (TD-DART-MS) configuration. A variation on traditional DART, this configuration allows for desorption of the sample into a confined tube, completely independent of the DART source, allowing for more efficient and thermally precise analysis of material present on a swipe. Over thirty trace samples of narcotics, metabolites, and cutting agents deposited onto swipes were rapidly differentiated using this methodology. The non-optimized method led to sensitivities ranging from single nanograms to hundreds of picograms. Direct comparison to traditional DART with a subset of the samples highlighted an improvement in sensitivity by a factor of twenty to thirty and an improvement in sample-to-sample reproducibility from approximately 45 % RSD to less than 15 % RSD. Rapid extraction-less quantification was also possible.
Liu, Xiaoyan; Zhang, Xiaoyun; Zhang, Haixia; Liu, Mancang
2008-08-01
A sensitive method for the analysis of bisphenol A and 4-nonylphenol is developed by means of the optimization of solid-phase microextraction using Uniform Experimental Design methodology, followed by high-performance liquid chromatographic analysis with fluorescence detection. The optimal extraction conditions are determined based on the relationship between the parameters and the peak area. The calibration curves are linear (r² ≥ 0.9980) over the concentration ranges of 1.25-125 ng/mL for bisphenol A and 2.59-202.96 ng/mL for 4-nonylphenol, respectively. The detection limits, based on a signal-to-noise ratio of 3, are 0.097 ng/mL for bisphenol A and 0.27 ng/mL for 4-nonylphenol, respectively. The validity of the proposed method is demonstrated by the analysis of the investigated analytes in real water samples, and the sensitivity of the optimized method is verified by comparing results with those obtained by previous methods using the same commercial solid-phase microextraction fiber.
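The calibration and detection-limit logic can be sketched as follows, using hypothetical concentration/peak-area pairs and an assumed baseline-noise standard deviation; the LOD is taken as three times the noise divided by the calibration slope, consistent with the signal-to-noise ratio of 3 used above.

```python
import numpy as np

# Hypothetical calibration data for bisphenol A (concentration in ng/mL vs. peak
# area); the values are illustrative only, not the study's measurements.
conc = np.array([1.25, 5, 12.5, 25, 50, 125])
area = np.array([2.6, 10.3, 25.9, 51.8, 104.0, 259.5])

slope, intercept = np.polyfit(conc, area, 1)
r2 = np.corrcoef(conc, area)[0, 1] ** 2
print(f"calibration: area = {slope:.3f}*c + {intercept:.3f}, r^2 = {r2:.4f}")

# Detection limit from a signal-to-noise criterion of 3:
# LOD = 3 * (standard deviation of the blank/baseline noise) / slope
sigma_noise = 0.067            # hypothetical baseline noise, peak-area units
lod = 3 * sigma_noise / slope
print(f"estimated LOD: {lod:.3f} ng/mL")
```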
Rigatti, Fabiane; Tizotti, Maísa Kraulich; Hörner, Rosmari; Domingues, Vanessa Oliveira; Martini, Rosiéli; Mayer, Letícia Eichstaedt; Khun, Fábio Teixeira; de França, Chirles Araújo; da Costa, Mateus Matiuzzi
2010-01-01
This study aimed to characterize the prevalence and susceptibility profile of oxacillin-resistant coagulase-negative Staphylococci (CoNS) strains isolated from blood cultures in a teaching hospital located in Santa Maria, RS. In addition, different methodologies for phenotypic characterization of mecA-mediated oxacillin resistance were compared with genotypic reference testing. After identification (MicroScan - Siemens), the isolates were tested for antimicrobial sensitivity using disk diffusion and automation (MicroScan - Siemens). The presence of the mecA gene was identified by the polymerase chain reaction (PCR). The most common species was Staphylococcus epidermidis (n=40, 67%). The mecA gene was detected in 54 (90%) strains, while analysis of the sensitivity profiles revealed a high rate of resistance to multiple classes of antimicrobial drugs. However, all isolates were uniformly sensitive to vancomycin and tigecycline. The cefoxitin disk was the phenotypic method that best correlated with the gold standard. Analysis of the clinical significance of CoNS isolated from blood cultures and the precise detection of oxacillin resistance represent decisive factors for the correct choice of antibiotic therapy. Although vancomycin constitutes the standard treatment in most Brazilian hospitals, a reduction in its use is recommended.
Kondou, Youichi; Manickavelu, Alagu; Komatsu, Kenji; Arifi, Mujiburahman; Kawashima, Mika; Ishii, Takayoshi; Hattori, Tomohiro; Iwata, Hiroyoshi; Tsujimoto, Hisashi; Ban, Tomohiro; Matsui, Minami
2016-01-01
This study was carried out with the aim of developing a methodology to determine elemental composition in wheat and identify the best germplasm for further research. Orphan and genetically diverse Afghan wheat landraces were chosen, and energy-dispersive X-ray fluorescence (EDXRF) was used to measure the content of selected elements and establish the elemental composition in grains of 266 landraces using 10 reference lines. Four elements, K, Mg, P, and Fe, were measured by standardizing sample preparation. The results of hierarchical cluster analysis using the elemental composition data sets indicated that the Fe content has a pattern opposite to that of the other elements, especially K. By systematic analysis, the best wheat germplasms for P content and Fe content were identified. To assess the sensitivity of EDXRF, the ICP method was also used, and the similar results obtained confirmed the EDXRF methodology. The sampling method for measurement using EDXRF was optimized, resulting in high-throughput profiling of elemental composition in wheat grains at low cost. Using this method, we have characterized the Afghan wheat landraces and isolated the best genotypes that have high elemental content and the potential to be used in crop improvement. PMID:28163583
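A minimal sketch of the hierarchical clustering step is given below for a handful of hypothetical landraces; the K, Mg, P, and Fe values are invented, chosen only so that Fe shows the opposite pattern to the other elements, as reported in the abstract.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

# Hypothetical K, Mg, P, Fe contents for six landraces (arbitrary units);
# the real study profiled 266 Afghan landraces with EDXRF.
elements = ["K", "Mg", "P", "Fe"]
grain = np.array([
    [4.2, 1.3, 3.9, 0.030],
    [4.0, 1.2, 3.7, 0.032],
    [3.1, 0.9, 2.8, 0.055],
    [3.0, 1.0, 2.9, 0.060],
    [4.5, 1.4, 4.1, 0.028],
    [2.9, 0.8, 2.6, 0.058],
])

Z = linkage(zscore(grain, axis=0), method="ward")      # cluster the landraces
labels = fcluster(Z, t=2, criterion="maxclust")
print("landrace cluster labels:", labels)

# Correlation of Fe with the other elements (negative values reproduce the
# "opposite pattern" reported for Fe versus K)
corr = np.corrcoef(zscore(grain, axis=0), rowvar=False)
for i, el in enumerate(elements[:-1]):
    print(f"corr(Fe, {el}) = {corr[3, i]:+.2f}")
```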
Broda, Agnieszka; Nikolayevskyy, Vlad; Casali, Nicki; Khan, Huma; Bowker, Richard; Blackwell, Gemma; Patel, Bhakti; Hume, James; Hussain, Waqar; Drobniewski, Francis
2018-04-20
Tuberculosis (TB) remains one of the most deadly infections, with approximately a quarter of cases not being identified and/or treated, mainly due to a lack of resources. Rapid detection of TB or drug-resistant TB enables timely adequate treatment and is a cornerstone of effective TB management. We evaluated the analytical performance of a single-tube assay for multidrug-resistant TB (MDR-TB) on an experimental platform utilising RT-PCR and melting curve analysis that could potentially be operated as a point-of-care (PoC) test in resource-constrained settings with a high burden of TB. Firstly, we developed and evaluated the prototype MDR-TB assay using specimens extracted from well-characterised TB isolates with a variety of distinct rifampicin and isoniazid resistance conferring mutations and nontuberculous Mycobacteria (NTM) strains. Secondly, we validated the experimental platform using 98 clinical sputum samples from pulmonary TB patients collected in high MDR-TB settings. The sensitivity of the platform for TB detection in clinical specimens was 75% for smear-negative and 92.6% for smear-positive sputum samples. The sensitivity of detection for rifampicin and isoniazid resistance was 88.9 and 96.0%, and specificity was 87.5 and 100%, respectively. Observed limitations in sensitivity and specificity could be resolved by adjusting the sample preparation methodology and melting curve recognition algorithm. Overall, the technology could be considered a promising PoC methodology, especially in resource-constrained settings, based on its combined accuracy, convenience, simplicity, speed, and cost characteristics.
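For reference, the sensitivity and specificity figures quoted above follow from a standard 2x2 comparison against the reference method; the sketch below uses hypothetical counts chosen only to land near the reported rifampicin values.

```python
# Sensitivity/specificity from a 2x2 table against the reference method.
# The counts below are hypothetical, chosen only to reproduce figures close to
# those reported for rifampicin resistance (88.9% / 87.5%).
tp, fn = 24, 3      # resistant by reference: detected / missed
tn, fp = 21, 3      # susceptible by reference: correctly negative / false positive

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
```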
NASA Astrophysics Data System (ADS)
Kalyanapu, A. J.; Thames, B. A.
2013-12-01
Dam breach modeling often includes application of models that are sophisticated, yet computationally intensive, to compute flood propagation at high temporal and spatial resolutions. This results in a significant need for computational capacity that requires the development of newer flood models using multi-processor and graphics processing techniques. Recently, a comprehensive benchmark exercise, the 12th Benchmark Workshop on Numerical Analysis of Dams, was organized by the International Commission on Large Dams (ICOLD) to evaluate the performance of the various tools used for dam break risk assessment. The ICOLD workshop is focused on estimating the consequences of failure of a hypothetical dam near a hypothetical populated area with complex demographics and economic activity. The current study uses this hypothetical case study and focuses on evaluating the effects of dam breach methodologies on consequence estimation and analysis. It uses the ICOLD hypothetical data, including the topography, dam geometric and construction information, and land use/land cover data, along with socio-economic and demographic data. The objective of this study is to evaluate the impacts of using four different dam breach methods on the consequence estimates used in the risk assessments. The four methodologies used are: i) Froehlich (1995), ii) MacDonald and Langridge-Monopolis 1984 (MLM), iii) Von Thun and Gillette 1990 (VTG), and iv) Froehlich (2008). To achieve this objective, three different modeling components were used. First, using HEC-RAS v.4.1, dam breach discharge hydrographs are developed. These hydrographs are then provided as flow inputs into a two-dimensional flood model named Flood2D-GPU, which leverages the computer's graphics card for much improved computational performance. Lastly, outputs from Flood2D-GPU, including inundated areas, depth grids, velocity grids, and flood wave arrival time grids, are input into HEC-FIA, which provides the consequence assessment for the solution to the problem statement. For the four breach methodologies, a sensitivity analysis of four breach parameters, breach side slope (SS), breach width (Wb), breach invert elevation (Elb), and time of failure (tf), is conducted. Up to 68 simulations were computed to produce breach hydrographs in HEC-RAS for input into Flood2D-GPU. The Flood2D-GPU simulation results were then post-processed in HEC-FIA to evaluate: Total Population at Risk (PAR), 14-yr and Under PAR (PAR14-), 65-yr and Over PAR (PAR65+), Loss of Life (LOL), and Direct Economic Impact (DEI). The MLM approach resulted in wide variability in the simulated minimum and maximum values of the PAR, PAR65+, and LOL estimates. For PAR14- and DEI, Froehlich (1995) resulted in lower values while MLM resulted in higher estimates. This preliminary study demonstrated the relative performance of four commonly used dam breach methodologies and their impacts on consequence estimation.
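As a deliberately simplified illustration of the breach-parameter sensitivity sweep (not the HEC-RAS/Flood2D-GPU/HEC-FIA chain used in the study), the sketch below varies breach width and invert elevation and estimates peak breach outflow with a broad-crested-weir relation; the weir coefficient, elevations, and widths are all assumed.

```python
import numpy as np

# Simplified sensitivity sweep over breach parameters. A broad-crested-weir
# relation Q_peak ~ C * Wb * Hw**1.5 stands in for the HEC-RAS breach hydrograph
# computation; it only illustrates how peak outflow responds to breach width and
# invert elevation. All values are hypothetical.
C = 1.7                      # SI weir coefficient (assumed)
water_surface_el = 100.0     # reservoir water-surface elevation, m (assumed)

breach_widths = np.array([50.0, 100.0, 150.0, 200.0])   # Wb, m
invert_els    = np.array([60.0, 70.0, 80.0])             # Elb, m

for elb in invert_els:
    head = water_surface_el - elb                         # Hw, m
    q_peak = C * breach_widths * head ** 1.5              # m^3/s
    print(f"Elb = {elb:5.1f} m -> Q_peak = " +
          ", ".join(f"{q:,.0f}" for q in q_peak))
```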
Finite element model updating and damage detection for bridges using vibration measurement.
DOT National Transportation Integrated Search
2013-12-01
In this report, the results of a study on developing a damage detection methodology based on Statistical Pattern Recognition are presented. This methodology uses a new damage-sensitive feature developed in this study that relies entirely on modal ...
Metrics for Evaluating the Accuracy of Solar Power Forecasting: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, J.; Hodge, B. M.; Florita, A.
2013-10-01
Forecasting solar energy generation is a challenging task due to the variety of solar power systems and weather regimes encountered. Forecast inaccuracies can result in substantial economic losses and power system reliability issues. This paper presents a suite of generally applicable and value-based metrics for solar forecasting for a comprehensive set of scenarios (i.e., different time horizons, geographic locations, applications, etc.). In addition, a comprehensive framework is developed to analyze the sensitivity of the proposed metrics to three types of solar forecasting improvements using a design of experiments methodology, in conjunction with response surface and sensitivity analysis methods. The results show that the developed metrics can efficiently evaluate the quality of solar forecasts, and assess the economic and reliability impact of improved solar forecasting.
NASA Technical Reports Server (NTRS)
Hoffman, R. N.; Leidner, S. M.; Henderson, J. M.; Atlas, R.; Ardizzone, J. V.; Bloom, S. C.; Atlas, Robert (Technical Monitor)
2001-01-01
In this study, we apply a two-dimensional variational analysis method (2d-VAR) to select a wind solution from NASA Scatterometer (NSCAT) ambiguous winds. 2d-VAR determines a "best" gridded surface wind analysis by minimizing a cost function. The cost function measures the misfit to the observations, the background, and the filtering and dynamical constraints. The ambiguity closest in direction to the minimizing analysis is selected. The 2d-VAR method, its sensitivity, and its numerical behavior are described. 2d-VAR is compared to statistical interpolation (OI) by examining the response of both systems to a single ship observation and to a swath of unique scatterometer winds. 2d-VAR is used with both NSCAT ambiguities and NSCAT backscatter values. Results are roughly comparable. When the background field is poor, 2d-VAR ambiguity removal often selects low-probability ambiguities. To avoid this behavior, an initial 2d-VAR analysis, using only the two most likely ambiguities, provides the first guess for an analysis using all the ambiguities or the backscatter data. 2d-VAR and median-filter selected ambiguities usually agree. Both methods require horizontal consistency, so disagreements occur in clumps or as linear features. In these cases, 2d-VAR ambiguities are often more meteorologically reasonable and more consistent with satellite imagery.
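A toy, single-point version of the variational idea is sketched below: a two-term cost function balances a background wind against a speed observation, and the NSCAT-style ambiguity closest in direction to the minimizing analysis is selected; the winds, error variances, and ambiguity set are invented, and the filtering and dynamical constraints of the full 2d-VAR are omitted.

```python
import numpy as np
from scipy.optimize import minimize

# Toy single-point variational analysis: find the wind (u, v) that balances a
# background estimate against a speed observation, then pick the ambiguity
# closest in direction to that analysis. All numbers are made up.
u_b, v_b = 4.0, 3.0             # background wind components (m/s)
sigma_b, sigma_o = 2.0, 1.0     # background / observation error std devs
speed_obs = 7.0                 # observed wind speed (m/s)

def cost(x):
    u, v = x
    jb = ((u - u_b) ** 2 + (v - v_b) ** 2) / sigma_b ** 2        # background term
    jo = (np.hypot(u, v) - speed_obs) ** 2 / sigma_o ** 2         # observation term
    return jb + jo

analysis = minimize(cost, x0=[u_b, v_b]).x

# NSCAT-style ambiguities: similar speeds, directions roughly 90/180 degrees apart
ambiguities = np.array([[5.0, 4.5], [-5.0, -4.5], [4.5, -5.0], [-4.5, 5.0]])
direction = lambda w: np.arctan2(w[1], w[0])
diffs = [abs(np.angle(np.exp(1j * (direction(w) - direction(analysis)))))
         for w in ambiguities]
selected = ambiguities[int(np.argmin(diffs))]
print("analysis wind:", np.round(analysis, 2), " selected ambiguity:", selected)
```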
NASA Technical Reports Server (NTRS)
Sreekantamurthy, Tham; Gaspar, James L.; Mann, Troy; Behun, Vaughn; Pearson, James C., Jr.; Scarborough, Stephen
2007-01-01
Ultra-light weight and ultra-thin membrane inflatable antenna concepts are fast evolving to become the state-of-the-art antenna concepts for deep-space applications. NASA Langley Research Center has been involved in structural dynamics research on antenna structures. One of the goals of the research is to develop structural analysis methodology for prediction of the static and dynamic response characteristics of the inflatable antenna concepts. This research is focused on computational studies that use nonlinear large-deformation finite element analysis to characterize the ultra-thin membrane responses of the antennas. Recently, structural analyses have been performed on a few parabolic reflector antennas of varying size and shape, which are referred to in the paper as the 0.3-meter subscale, 2-meter half-scale, and 4-meter full-scale antennas. The various aspects studied included nonlinear analysis methodology and solution techniques, ways to speed convergence in iterative methods, the sensitivities of responses with respect to structural loads, such as inflation pressure, gravity, and pretension loads in the ground and in-space conditions, and the ultra-thin membrane wrinkling characteristics. Several such intrinsic aspects studied have provided valuable insight into the evaluation of structural characteristics of such antennas. While analyzing these structural characteristics, a quick study was also made to assess the applicability of dynamic scaling of the half-scale antenna. This paper presents the details of the nonlinear structural analysis results and discusses the insight gained from the studies on the various intrinsic aspects of the analysis methodology. The predicted reflector surface characteristics of the three inflatable ultra-thin membrane parabolic reflector antenna concepts are presented as easily observable displacement fringe patterns with associated maximum values, and normal mode shapes and associated frequencies. Wrinkling patterns are presented to show how surface wrinkles progress with increasing tension loads. Antenna reflector surface accuracies were found to be very much dependent on the type and size of the antenna, the reflector surface curvature, the reflector membrane supports in terms of spacing of catenaries, as well as the amount of applied load.
Ghimire, Santosh R; Johnston, John M
2017-09-01
We propose a modified eco-efficiency (EE) framework and novel sustainability analysis methodology for green infrastructure (GI) practices used in water resource management. Green infrastructure practices such as rainwater harvesting (RWH), rain gardens, porous pavements, and green roofs are emerging as viable strategies for climate change adaptation. The modified framework includes 4 economic, 11 environmental, and 3 social indicators. Using 6 indicators from the framework, at least 1 from each dimension of sustainability, we demonstrate the methodology to analyze RWH designs. We use life cycle assessment and life cycle cost assessment to calculate the sustainability indicators of 20 design configurations as Decision Management Objectives (DMOs). Five DMOs emerged as relatively more sustainable along the EE analysis Tradeoff Line, and we used Data Envelopment Analysis (DEA), a widely applied statistical approach, to quantify the modified EE measures as DMO sustainability scores. We also addressed the subjectivity and sensitivity analysis requirements of sustainability analysis, and we evaluated the performance of 10 weighting schemes that included classical DEA, equal weights, National Institute of Standards and Technology's stakeholder panel, Eco-Indicator 99, Sustainable Society Foundation's Sustainable Society Index, and 5 derived schemes. We improved upon classical DEA by applying the weighting schemes to identify sustainability scores that ranged from 0.18 to 1.0, avoiding the nonuniqueness problem and revealing the least to most sustainable DMOs. Our methodology provides a more comprehensive view of water resource management and is generally applicable to GI and industrial, environmental, and engineered systems to explore the sustainability space of alternative design configurations. Integr Environ Assess Manag 2017;13:821-831. Published 2017. This article is a US Government work and is in the public domain in the USA. Integrated Environmental Assessment and Management published by Wiley Periodicals, Inc. on behalf of Society of Environmental Toxicology & Chemistry (SETAC).
NASA Astrophysics Data System (ADS)
Nanus, L.; Williams, M. W.; Campbell, D. H.
2005-12-01
Atmospheric deposition of pollutants threatens pristine environments around the world. However, scientifically based decisions regarding management of these environments have been confounded by the spatial variability of atmospheric deposition, particularly across the regional scales at which resource management is typically considered. A statistically based methodology coupled with GIS is presented that builds from the scale of small alpine lake and sub-alpine catchments to identify deposition-sensitive lakes across larger watershed and regional scales. The sensitivity of 874 alpine and subalpine lakes to acidification from atmospheric deposition of nitrogen and sulfur was estimated using statistical models relating water quality and landscape attributes in Glacier National Park, Yellowstone National Park, Grand Teton National Park, Rocky Mountain National Park and Great Sand Dunes National Park and Preserve. Water-quality data measured during synoptic lake surveys were used to calibrate statistical models of lake sensitivity. In the case of nitrogen deposition, water quality data were supplemented with dual isotopic measurements of δ15N and δ18O of nitrate. Landscape attributes for the lake basins were derived from GIS and include the following explanatory variables: topography (basin slope, basin aspect, basin elevation), bedrock type, vegetation type, and soil type. Using multivariate logistic regression analysis, probability estimates were developed for acid-neutralizing capacity, nitrate, sulfate and DOC concentrations, and lakes with a high probability of being sensitive to atmospheric deposition were identified. Water-quality data collected at 60 lakes during fall 2004 were used to validate the statistical models. Relationships between landscape attributes and water quality vary by constituent, due to spatial variability in landscape attributes and spatial variation in the atmospheric deposition of pollutants within and among the five National Parks. Predictive ability, model fit and sensitivity were first assessed for each of the five National Parks individually, to evaluate the utility of this methodology for prediction of alpine and sub-alpine lake sensitivity at the catchment scale. A similar assessment was then performed, treating the five parks as a group. Validation results showed that 85 percent of lakes sampled were accurately identified by the model as having a greater than 60 percent probability of acid-neutralizing capacity concentrations less than 200 microequivalents per liter. Preliminary findings indicate good predictive ability and reasonable model fit and sensitivity, suggesting that logistic regression modeling coupled with a GIS framework is an appropriate approach for remote identification of deposition-sensitive lakes across the Rocky Mountain region. To assist resource management decisions regarding alpine and sub-alpine lakes across this region, screening procedures were developed based on terrain and landscape attribute information available to all participating parks. Since the screening procedure is based on publicly available data, our methodology and similar screening procedures may be applicable to other National Parks with deposition-sensitive surface waters.
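A minimal sketch of the multivariate logistic regression step is shown below, predicting the probability that a lake basin is deposition-sensitive (ANC below 200 microequivalents per liter) from a few landscape attributes; the attributes, coefficients, and data are synthetic stand-ins, not values from the five parks.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic landscape attributes (basin elevation in km, mean basin slope in
# degrees, fraction of reactive bedrock) and a binary sensitivity flag
# (ANC < 200 microequivalents per liter). All values are illustrative only.
rng = np.random.default_rng(42)
n = 300
elev  = rng.uniform(2.5, 3.8, n)
slope = rng.uniform(5, 40, n)
rock  = rng.uniform(0, 1, n)
logit = -13 + 3.5 * elev + 0.03 * slope - 2.0 * rock      # assumed "true" model
sensitive = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([elev, slope, rock])
model = LogisticRegression(max_iter=1000).fit(X, sensitive)

# Probability that a new, unsampled lake basin is deposition-sensitive
new_basin = np.array([[3.5, 25.0, 0.2]])
p = model.predict_proba(new_basin)[0, 1]
print(f"predicted probability ANC < 200 ueq/L: {p:.2f}  (flag if > 0.60)")
```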
Pharmacokinetics Application in Biophysics Experiments
NASA Astrophysics Data System (ADS)
Millet, Philippe; Lemoigne, Yves
Among the available computerised tomography devices, positron emission tomography (PET) has the advantage of being sensitive to picomolar concentrations of radiotracers inside living matter. Devices adapted to small animal imaging are now commercially available and allow us to study the function rather than the structure of living tissues by in vivo analysis. PET methodology, from the physics of electron-positron annihilation to the biophysics involved in tracers, is treated by other authors in this book. The basics of coincidence detection, image reconstruction, spatial resolution and sensitivity are discussed in the paper by R. Ott. The use of compartment analysis combined with pharmacokinetics is described here to illustrate an application to neuroimaging and to show how parametric imaging can bring insight into the in vivo bio-distribution of a radioactive tracer with small animal PET scanners. After reporting on the use of an intracerebral β+ radiosensitive probe (βP), we describe a small animal PET experiment used to measure the density of 5-HT1A receptors in the rat brain.
Cost of Equity Estimation in Fuel and Energy Sector Companies Based on CAPM
NASA Astrophysics Data System (ADS)
Kozieł, Diana; Pawłowski, Stanisław; Kustra, Arkadiusz
2018-03-01
The article presents cost of equity estimation for capital groups from the fuel and energy sector, listed on the Warsaw Stock Exchange, based on the Capital Asset Pricing Model (CAPM). The objective of the article was to perform a valuation of equity with the application of CAPM, based on actual financial data and stock exchange data, and to carry out a sensitivity analysis of this cost depending on the financing structure of the entity. The objective of the article formulated in this manner has determined its structure. It focuses on presenting substantive analyses related to the essence of equity and methods of estimating its cost, with special attention given to the CAPM. In the practical section, cost estimation was performed according to the CAPM methodology, based on the example of leading fuel and energy companies, such as Tauron GE and PGE. Simultaneously, a sensitivity analysis of this cost was performed depending on the structure of financing the company's operations.
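A minimal sketch of the CAPM calculation, with a Hamada-style relevering of beta to expose the sensitivity of the cost of equity to the financing (debt-to-equity) structure, is given below; the risk-free rate, equity risk premium, unlevered beta, and tax rate are hypothetical placeholders, not Tauron or PGE inputs.

```python
# CAPM cost of equity with a Hamada-style sensitivity to capital structure.
# All inputs are hypothetical placeholders, not Tauron or PGE data.
rf = 0.03                 # risk-free rate (e.g., long-term government bond yield)
erp = 0.055               # equity risk premium, E[rm] - rf
beta_unlevered = 0.70     # asset (unlevered) beta of the business
tax_rate = 0.19

def cost_of_equity(debt_to_equity):
    # Hamada relevering: beta_L = beta_U * (1 + (1 - t) * D/E)
    beta_levered = beta_unlevered * (1 + (1 - tax_rate) * debt_to_equity)
    # CAPM: re = rf + beta_L * (E[rm] - rf)
    return rf + beta_levered * erp

for de in (0.0, 0.5, 1.0, 1.5):
    print(f"D/E = {de:>3.1f} -> cost of equity = {cost_of_equity(de):.2%}")
```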
NASA Astrophysics Data System (ADS)
Wang, Ting; Plecháč, Petr
2017-12-01
Stochastic reaction networks that exhibit bistable behavior are common in systems biology, materials science, and catalysis. Sampling of stationary distributions is crucial for understanding and characterizing the long-time dynamics of bistable stochastic dynamical systems. However, simulations are often hindered by the insufficient sampling of rare transitions between the two metastable regions. In this paper, we apply the parallel replica method for a continuous time Markov chain in order to improve sampling of the stationary distribution in bistable stochastic reaction networks. The proposed method uses parallel computing to accelerate the sampling of rare transitions. Furthermore, it can be combined with the path-space information bounds for parametric sensitivity analysis. With the proposed methodology, we study three bistable biological networks: the Schlögl model, the genetic switch network, and the enzymatic futile cycle network. We demonstrate the algorithmic speedup achieved in these numerical benchmarks. More significant acceleration is expected when multi-core or graphics processing unit computer architectures and programming tools such as CUDA are employed.
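For orientation, the sketch below runs a plain (serial) Gillespie stochastic simulation of the Schlögl model, i.e., the building block whose rare metastable transitions the parallel replica method is designed to accelerate; the rate constants and buffer populations are commonly used literature values adopted here as assumptions, and the ParRep machinery itself is not shown.

```python
import numpy as np

# Plain Gillespie SSA for the Schlogl model. Rate constants and buffer
# populations are commonly used literature values, assumed for illustration.
c1, c2, c3, c4 = 3e-7, 1e-4, 1e-3, 3.5
B1, B2 = 1e5, 2e5

def propensities(x):
    return np.array([c1 * B1 * x * (x - 1) / 2.0,       # B1 + 2X -> 3X
                     c2 * x * (x - 1) * (x - 2) / 6.0,  # 3X -> 2X + B1
                     c3 * B2,                           # B2 -> X
                     c4 * x])                           # X -> B2

change = np.array([+1, -1, +1, -1])

def ssa(x0=250, t_end=5.0, seed=0):
    rng, x, t = np.random.default_rng(seed), x0, 0.0
    while t < t_end:
        a = propensities(x)
        a0 = a.sum()
        t += rng.exponential(1.0 / a0)          # time to the next reaction
        x += change[rng.choice(4, p=a / a0)]    # which reaction fires
    return x

# Independent replicas sample the (bimodal) long-time distribution
samples = [ssa(seed=s) for s in range(8)]
print("final copy numbers:", samples)
```

Starting near the unstable fixed point, different replicas settle into the low or high metastable state, which is exactly the bistability whose rare transitions make direct sampling of the stationary distribution expensive.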
Biffi, E; Menegon, A; Regalia, G; Maida, S; Ferrigno, G; Pedrocchi, A
2011-08-15
Modern drug discovery for Central Nervous System pathologies has recently focused its attention on in vitro neuronal networks as models for the study of neuronal activities. Micro Electrode Arrays (MEAs), a widely recognized tool for pharmacological investigations, enable the simultaneous study of the spiking activity of discrete regions of a neuronal culture, providing an insight into the dynamics of networks. Taking advantage of MEA features and making the most of cross-correlation analysis to assess internal parameters of a neuronal system, we provide an efficient method for the evaluation of comprehensive neuronal network activity. We developed an intra-network burst correlation algorithm, evaluated its sensitivity, and explored its potential use in pharmacological studies. Our results demonstrate the high sensitivity of this algorithm and the efficacy of this methodology in pharmacological dose-response studies, with the advantage of analyzing the effect of drugs on the comprehensive correlative properties of integrated neuronal networks. Copyright © 2011 Elsevier B.V. All rights reserved.
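The basic ingredient of such a burst-correlation analysis, a normalized cross-correlogram between the binned spike trains of two channels, can be sketched as follows; the spike trains are synthetic, with one channel constructed to follow the other at a 10 ms lag.

```python
import numpy as np

# Cross-correlation between two binned spike trains, the basic ingredient of a
# burst-correlation analysis across MEA channels. Spike trains are synthetic:
# channel B roughly follows channel A with a 10 ms lag.
rng = np.random.default_rng(3)
n_bins = 10000                                   # 1 ms bins, 10 s of recording
a = (rng.random(n_bins) < 0.02).astype(float)    # ~20 Hz Poisson-like train
b = np.roll(a, 10) * (rng.random(n_bins) < 0.8)  # follower train, 10 ms lag

max_lag = 50
lags = np.arange(-max_lag, max_lag + 1)
az, bz = a - a.mean(), b - b.mean()
cc = np.array([np.dot(az, np.roll(bz, -k)) for k in lags])
cc /= np.sqrt(np.dot(az, az) * np.dot(bz, bz))   # normalized to [-1, 1]

print("peak correlation %.2f at lag %d ms" % (cc.max(), lags[np.argmax(cc)]))
```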
CRISPR-UMI: single-cell lineage tracing of pooled CRISPR-Cas9 screens.
Michlits, Georg; Hubmann, Maria; Wu, Szu-Hsien; Vainorius, Gintautas; Budusan, Elena; Zhuk, Sergei; Burkard, Thomas R; Novatchkova, Maria; Aichinger, Martin; Lu, Yiqing; Reece-Hoyes, John; Nitsch, Roberto; Schramek, Daniel; Hoepfner, Dominic; Elling, Ulrich
2017-12-01
Pooled CRISPR screens are a powerful tool for assessments of gene function. However, conventional analysis is based exclusively on the relative abundance of integrated single guide RNAs (sgRNAs) between populations, which does not discern distinct phenotypes and editing outcomes generated by identical sgRNAs. Here we present CRISPR-UMI, a single-cell lineage-tracing methodology for pooled screening to account for cell heterogeneity. We generated complex sgRNA libraries with unique molecular identifiers (UMIs) that allowed for screening of clonally expanded, individually tagged cells. A proof-of-principle CRISPR-UMI negative-selection screen provided increased sensitivity and robustness compared with conventional analysis by accounting for underlying cellular and editing-outcome heterogeneity and detection of outlier clones. Furthermore, a CRISPR-UMI positive-selection screen uncovered new roadblocks in reprogramming mouse embryonic fibroblasts as pluripotent stem cells, distinguishing reprogramming frequency and speed (i.e., effect size and probability). CRISPR-UMI boosts the predictive power, sensitivity, and information content of pooled CRISPR screens.
Ciccimaro, Eugene; Ranasinghe, Asoka; D'Arienzo, Celia; Xu, Carrie; Onorato, Joelle; Drexler, Dieter M; Josephs, Jonathan L; Poss, Michael; Olah, Timothy
2014-12-02
Due to observed collision induced dissociation (CID) fragmentation inefficiency, developing sensitive liquid chromatography tandem mass spectrometry (LC-MS/MS) assays for CID resistant compounds is especially challenging. As an alternative to traditional LC-MS/MS, we present here a methodology that preserves the intact analyte ion for quantification by selectively filtering ions while reducing chemical noise. Utilizing a quadrupole-Orbitrap MS, the target ion is selectively isolated while interfering matrix components undergo MS/MS fragmentation by CID, allowing noise-free detection of the analyte's surviving molecular ion. In this manner, CID affords additional selectivity during high resolution accurate mass analysis by elimination of isobaric interferences, a fundamentally different concept than the traditional approach of monitoring a target analyte's unique fragment following CID. This survivor-selected ion monitoring (survivor-SIM) approach has allowed sensitive and specific detection of disulfide-rich cyclic peptides extracted from plasma.
Giménez, Estela; Juan, M Emília; Calvo-Melià, Sara; Planas, Joana M
2017-08-15
Table olives are especially rich in pentacyclic triterpenic compounds, which exert several biological activities. A crucial step in order to know if these compounds could contribute to the beneficial and healthy properties of this food is their measurement in blood. Therefore, the present study describes a simple and accurate liquid-liquid extraction followed by LC-QqQ-MS analysis for the simultaneous determination of the main pentacyclic triterpenes from Olea europaea L. in rat plasma. The method was validated by the analysis of blank plasma samples spiked with pure compounds, obtaining a linear correlation, adequate sensitivity with a limit of quantification ranging from 1nM for maslinic acid to 10nM for uvaol. Precision and accuracy were lower than 10% in all cases and recoveries were between 95 and 104%. The oral administration of olives to rats and its determination in plasma verified that the established methodology is appropriate for bioavailability studies. Copyright © 2017 Elsevier Ltd. All rights reserved.
Recent activities within the Aeroservoelasticity Branch at the NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Noll, Thomas E.; Perry, Boyd, III; Gilbert, Michael G.
1989-01-01
The objective of research in aeroservoelasticity at the NASA Langley Research Center is to enhance the modeling, analysis, and multidisciplinary design methodologies for obtaining multifunction digital control systems for application to flexible flight vehicles. Recent accomplishments are discussed, and a status report on current activities within the Aeroservoelasticity Branch is presented. In the area of modeling, improvements to the Minimum-State Method of approximating unsteady aerodynamics are shown to provide precise, low-order aeroservoelastic models for design and simulation activities. Analytical methods based on Matched Filter Theory and Random Process Theory to provide efficient and direct predictions of the critical gust profile and the time-correlated gust loads for linear structural design considerations are also discussed. Two research projects leading towards improved design methodology are summarized. The first program is developing an integrated structure/control design capability based on hierarchical problem decomposition, multilevel optimization and analytical sensitivities. The second program provides procedures for obtaining low-order, robust digital control laws for aeroelastic applications. In terms of methodology validation and application the current activities associated with the Active Flexible Wing project are reviewed.
Measurement, methods, and divergent patterns: Reassessing the effects of same-sex parents.
Cheng, Simon; Powell, Brian
2015-07-01
Scholars have noted that survey analysis of small subsamples-for example, same-sex parent families-is sensitive to researchers' analytical decisions, and even small differences in coding can profoundly shape empirical patterns. As an illustration, we reassess the findings of a recent article by Regnerus regarding the implications of being raised by gay and lesbian parents. Taking a close look at the New Family Structures Study (NFSS), we demonstrate the potential for misclassifying a non-negligible number of respondents as having been raised by parents who had a same-sex romantic relationship. We assess the implications of these possible misclassifications, along with other methodological considerations, by reanalyzing the NFSS in seven steps. The reanalysis offers evidence that the empirical patterns showcased in the original Regnerus article are fragile-so fragile that they appear largely a function of these possible misclassifications and other methodological choices. Our replication and reanalysis of Regnerus's study offer a cautionary illustration of the importance of double checking and critically assessing the implications of measurement and other methodological decisions in our and others' research. Copyright © 2015 Elsevier Inc. All rights reserved.
Determining radiated sound power of building structures by means of laser Doppler vibrometry
NASA Astrophysics Data System (ADS)
Roozen, N. B.; Labelle, L.; Rychtáriková, M.; Glorieux, C.
2015-06-01
This paper introduces a methodology that makes use of laser Doppler vibrometry to assess the acoustic insulation performance of a building element. The sound power radiated by the surface of the element is numerically determined from the vibrational pattern, offering an alternative for classical microphone measurements. Compared to the latter the proposed analysis is not sensitive to room acoustical effects. This allows the proposed methodology to be used at low frequencies, where the standardized microphone based approach suffers from a high uncertainty due to a low acoustic modal density. Standardized measurements as well as laser Doppler vibrometry measurements and computations have been performed on two test panels, a light-weight wall and a gypsum block wall and are compared and discussed in this paper. The proposed methodology offers an adequate solution for the assessment of the acoustic insulation of building elements at low frequencies. This is crucial in the framework of recent proposals of acoustic standards for measurement approaches and single number sound insulation performance ratings to take into account frequencies down to 50 Hz.
Kaimakamis, Evangelos; Tsara, Venetia; Bratsas, Charalambos; Sichletidis, Lazaros; Karvounis, Charalambos; Maglaveras, Nikolaos
2016-01-01
Obstructive Sleep Apnea (OSA) is a common sleep disorder requiring time- and money-consuming polysomnography for diagnosis. Alternative methods for initial evaluation are sought. Our aim was the prediction of the Apnea-Hypopnea Index (AHI) in patients potentially suffering from OSA based on nonlinear analysis of respiratory biosignals during sleep, a method that is related to the pathophysiology of the disorder. Patients referred to a Sleep Unit (n = 135) underwent full polysomnography. Three nonlinear indices (Largest Lyapunov Exponent, Detrended Fluctuation Analysis and Approximate Entropy) extracted from two biosignals (airflow from a nasal cannula, thoracic movement) and one linear index derived from oxygen saturation provided input to a data mining application with contemporary classification algorithms for the creation of predictive models for AHI. A linear regression model presented a correlation coefficient of 0.77 in predicting AHI. With a cutoff value of AHI = 8, the sensitivity and specificity were 93% and 71.4% in the discrimination between patients and normal subjects. The decision tree for the discrimination between patients and normal subjects had a sensitivity and specificity of 91% and 60%, respectively. Certain obtained nonlinear values correlated significantly with commonly accepted physiological parameters of people suffering from OSA. We developed a predictive model for the presence/severity of OSA using a simple linear equation and additional decision trees with nonlinear features extracted from 3 respiratory recordings. The accuracy of the methodology is high and the findings provide insight into the underlying pathophysiology of the syndrome. Reliable predictions of OSA are possible using linear and nonlinear indices from only 3 respiratory signals during sleep. The proposed models could lead to a better study of the pathophysiology of OSA and facilitate initial evaluation/follow-up of patients with suspected OSA utilizing a practical, low-cost methodology. ClinicalTrials.gov NCT01161381.
A Risk-Based Approach for Aerothermal/TPS Analysis and Testing
NASA Technical Reports Server (NTRS)
Wright, Michael J.; Grinstead, Jay H.; Bose, Deepak
2007-01-01
The current status of aerothermal and thermal protection system modeling for civilian entry missions is reviewed. For most such missions, the accuracy of our simulations is limited not by the tools and processes currently employed, but rather by reducible deficiencies in the underlying physical models. Improving the accuracy of and reducing the uncertainties in these models will enable a greater understanding of the system level impacts of a particular thermal protection system and of the system operation and risk over the operational life of the system. A strategic plan will be laid out by which key modeling deficiencies can be identified via mission-specific gap analysis. Once these gaps have been identified, the driving component uncertainties are determined via sensitivity analyses. A Monte-Carlo based methodology is presented for physics-based probabilistic uncertainty analysis of aerothermodynamics and thermal protection system material response modeling. These data are then used to advocate for and plan focused testing aimed at reducing key uncertainties. The results of these tests are used to validate or modify existing physical models. Concurrently, a testing methodology is outlined for thermal protection materials. The proposed approach is based on using the results of uncertainty/sensitivity analyses discussed above to tailor ground testing so as to best identify and quantify system performance and risk drivers. A key component of this testing is understanding the relationship between the test and flight environments. No existing ground test facility can simultaneously replicate all aspects of the flight environment, and therefore good models for traceability to flight are critical to ensure a low risk, high reliability thermal protection system design. Finally, the role of flight testing in the overall thermal protection system development strategy is discussed.
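A schematic of the Monte Carlo uncertainty/sensitivity step might look like the sketch below, which propagates assumed input uncertainties through a simple surrogate heating relation and ranks inputs by their correlation with the output; the surrogate form and all distributions are placeholders, not the aerothermal or material-response models referenced above.

```python
import numpy as np

# Monte Carlo propagation of modeling uncertainties through a surrogate heating
# relation q ~ k * sqrt(rho / Rn) * V**3, used purely as a placeholder for the
# real aerothermal/material-response codes. All distributions are assumed.
rng = np.random.default_rng(7)
n = 20000

k   = rng.normal(1.0, 0.08, n)                   # correlation constant (normalized)
rho = rng.lognormal(np.log(3e-4), 0.10, n)       # freestream density, kg/m^3
V   = rng.normal(7500.0, 75.0, n)                # velocity, m/s
Rn  = rng.normal(1.0, 0.02, n)                   # nose radius, m

q = k * np.sqrt(rho / Rn) * V ** 3               # heating indicator (arbitrary units)

print(f"mean q = {q.mean():.3e}, 95th percentile = {np.percentile(q, 95):.3e}")

# Simple sensitivity ranking: correlation of each uncertain input with the output
for name, x in [("k", k), ("rho", rho), ("V", V), ("Rn", Rn)]:
    print(f"corr(q, {name}) = {np.corrcoef(q, x)[0, 1]:+.2f}")
```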
NASA Astrophysics Data System (ADS)
Bell, A.; Tang, G.; Yang, P.; Wu, D.
2017-12-01
Due to their high spatial and temporal coverage, cirrus clouds have a profound role in regulating the Earth's energy budget. Variability of their radiative, geometric, and microphysical properties can pose significant uncertainties in global climate model simulations if not adequately constrained. Thus, the development of retrieval methodologies able to accurately retrieve ice cloud properties and present associated uncertainties is essential. The effectiveness of cirrus cloud retrievals relies on accurate a priori understanding of ice radiative properties, as well as the current state of the atmosphere. Current studies have implemented information content theory analyses prior to retrievals to quantify the amount of information that should be expected on parameters to be retrieved, as well as the relative contribution of information provided by certain measurement channels. Through this analysis, retrieval algorithms can be designed in a way that maximizes the information in measurements, and therefore ensures enough information is present to retrieve ice cloud properties. In this study, we present such an information content analysis to quantify the amount of information to be expected in retrievals of cirrus ice water path and particle effective diameter using sub-millimeter and thermal infrared radiometry. Preliminary results show these bands to be sensitive to changes in ice water path and effective diameter, and thus lend confidence in their ability to simultaneously retrieve these parameters. Further quantification of the sensitivity and the information provided by these bands can then be used to design an optimal retrieval scheme. While this information content analysis is employed on a theoretical retrieval combining simulated radiance measurements, the methodology could in general be applicable to any instrument or retrieval approach.
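The information content quantities mentioned above can be computed in the usual optimal-estimation (Rodgers-style) way from a Jacobian and the a priori and measurement-error covariances; the sketch below uses a made-up two-parameter (ice water path, effective diameter) Jacobian rather than simulated sub-millimeter/infrared radiances.

```python
import numpy as np

# Rodgers-style information content for a two-parameter (ice water path,
# effective diameter) retrieval. The Jacobian K and the covariances below are
# made-up placeholders, not simulated radiances.
K  = np.array([[1.2, 0.4],      # d(channel radiance)/d(state), 3 channels x 2 params
               [0.8, 0.9],
               [0.2, 1.5]])
Sa = np.diag([1.0, 1.0])        # a priori covariance (normalized state)
Se = np.diag([0.1, 0.1, 0.1])   # measurement + forward-model error covariance

Sa_inv, Se_inv = np.linalg.inv(Sa), np.linalg.inv(Se)
S_hat = np.linalg.inv(K.T @ Se_inv @ K + Sa_inv)        # posterior covariance
A = S_hat @ K.T @ Se_inv @ K                            # averaging kernel

dofs = np.trace(A)                                      # degrees of freedom for signal
H = 0.5 * np.log(np.linalg.det(Sa) / np.linalg.det(S_hat))   # Shannon info (nats)
print(f"DOFS = {dofs:.2f}, information content = {H / np.log(2):.2f} bits")
```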
Cichy, Radoslaw Martin; Teng, Santani
2017-02-19
In natural environments, visual and auditory stimulation elicit responses across a large set of brain regions in a fraction of a second, yielding representations of the multimodal scene and its properties. The rapid and complex neural dynamics underlying visual and auditory information processing pose major challenges to human cognitive neuroscience. Brain signals measured non-invasively are inherently noisy, the format of neural representations is unknown, and transformations between representations are complex and often nonlinear. Further, no single non-invasive brain measurement technique provides a spatio-temporally integrated view. In this opinion piece, we argue that progress can be made by a concerted effort based on three pillars of recent methodological development: (i) sensitive analysis techniques such as decoding and cross-classification, (ii) complex computational modelling using models such as deep neural networks, and (iii) integration across imaging methods (magnetoencephalography/electroencephalography, functional magnetic resonance imaging) and models, e.g. using representational similarity analysis. We showcase two recent efforts that have been undertaken in this spirit and provide novel results about visual and auditory scene analysis. Finally, we discuss the limits of this perspective and sketch a concrete roadmap for future research. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Authors.
Quantitative mass spectrometry of unconventional human biological matrices
NASA Astrophysics Data System (ADS)
Dutkiewicz, Ewelina P.; Urban, Pawel L.
2016-10-01
The development of sensitive and versatile mass spectrometric methodology has fuelled interest in the analysis of metabolites and drugs in unconventional biological specimens. Here, we discuss the analysis of eight human matrices - hair, nail, breath, saliva, tears, meibum, nasal mucus and skin excretions (including sweat) - by mass spectrometry (MS). The use of such specimens brings a number of advantages, the most important being non-invasive sampling, the limited risk of adulteration and the ability to obtain information that complements blood and urine tests. The most often studied matrices are hair, breath and saliva. This review primarily focuses on endogenous (e.g. potential biomarkers, hormones) and exogenous (e.g. drugs, environmental contaminants) small molecules. The majority of analytical methods used chromatographic separation prior to MS; however, such a hyphenated methodology greatly limits analytical throughput. On the other hand, the mass spectrometric methods that exclude chromatographic separation are fast but suffer from matrix interferences. To enable development of quantitative assays for unconventional matrices, it is desirable to standardize the protocols for the analysis of each specimen and create appropriate certified reference materials. Overcoming these challenges will make analysis of unconventional human biological matrices more common in a clinical setting. This article is part of the themed issue 'Quantitative mass spectrometry'.
NASA Astrophysics Data System (ADS)
Chen, Yuzhen; Xie, Fugui; Liu, Xinjun; Zhou, Yanhua
2014-07-01
Parallel robots with SCARA (selective compliance assembly robot arm) motions are widely used in high-speed pick-and-place manipulation. Error modeling for these robots generally simplifies the parallelogram structures they contain to a single link. Because such an error model fails to reflect the error features of the parallelogram structures, the effectiveness of accuracy design and kinematic calibration based on it is undermined. An error modeling methodology is proposed to establish an error model of parallel robots with parallelogram structures. The error model can embody the geometric errors of all joints, including the joints of the parallelogram structures, and thus captures more exhaustively the factors that reduce the accuracy of the robot. Based on the error model and some sensitivity indices defined in the statistical sense, a sensitivity analysis is carried out. Accordingly, atlases are depicted to express each geometric error's influence on the moving platform's pose errors. From these atlases, the geometric errors that have the greatest impact on the accuracy of the moving platform are identified, and sensitive areas where the pose errors of the moving platform are extremely sensitive to the geometric errors are also figured out. By taking into account error factors that are generally neglected in existing modeling methods, the proposed modeling method can thoroughly disclose the process of error transmission and enhance the efficacy of accuracy design and calibration.
The relationship between urban forests and income: A meta-analysis.
Gerrish, Ed; Watkins, Shannon Lea
2018-02-01
Urban trees provide substantial public health and public environmental benefits. However, scholarly works suggest that urban trees may be unequally distributed among poor and minority urban communities, meaning that these communities are potentially being deprived of public environmental benefits, a form of environmental injustice. The evidence of this problem is not uniform, however, and evidence of inequity varies in size and significance across studies. This variation in results suggests the need for a research synthesis and meta-analysis. We employed a systematic literature search to identify original studies which examined the relationship between urban forest cover and income (n=61) and coded each effect size (n=332). We used meta-analytic techniques to estimate the average (unconditional) relationship between urban forest cover and income and to estimate the impact that methodological choices, measurement, publication characteristics, and study site characteristics had on the magnitude of that relationship. We leveraged variation in study methodology to evaluate the extent to which results were sensitive to methodological choices often debated in the geographic and environmental justice literature but not yet evaluated in environmental amenities research. We found evidence of income-based inequity in urban forest cover (unconditional mean effect size = 0.098; s.e. = 0.017) that was robust across most measurement and methodological strategies in original studies, and results did not differ systematically with study site characteristics. Studies that controlled for spatial autocorrelation, a violation of independent errors, found evidence of substantially less urban forest inequity; future research in this area should test and correct for spatial autocorrelation.
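The unconditional mean effect size reported above is the kind of quantity produced by inverse-variance-weighted meta-analysis. The sketch below shows a minimal random-effects (DerSimonian-Laird) pooling step in Python with made-up effect sizes and standard errors; it is not the authors' moderator analysis, only the core calculation.

```python
import numpy as np

# Hypothetical effect sizes (forest cover vs. income) and standard errors from k studies.
effects = np.array([0.12, 0.05, 0.20, 0.09, 0.03])
se = np.array([0.04, 0.03, 0.06, 0.05, 0.02])

w = 1.0 / se**2                                   # fixed-effect (inverse-variance) weights
fixed_mean = np.sum(w * effects) / np.sum(w)

# DerSimonian-Laird estimate of the between-study variance tau^2.
Q = np.sum(w * (effects - fixed_mean) ** 2)
df = len(effects) - 1
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - df) / c)

w_re = 1.0 / (se**2 + tau2)                       # random-effects weights
re_mean = np.sum(w_re * effects) / np.sum(w_re)
re_se = np.sqrt(1.0 / np.sum(w_re))

print(f"random-effects mean = {re_mean:.3f} (s.e. = {re_se:.3f}), tau^2 = {tau2:.4f}")
```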
Caillaud, Amandine; de la Iglesia, Pablo; Darius, H Taiana; Pauillac, Serge; Aligizaki, Katerina; Fraga, Santiago; Chinain, Mireille; Diogène, Jorge
2010-06-14
Ciguatera fish poisoning (CFP) occurs mainly when humans ingest finfish contaminated with ciguatoxins (CTXs). The complexity and variability of such toxins have made it difficult to develop reliable methods to routinely monitor CFP with specificity and sensitivity. This review aims to describe the methodologies available for CTX detection, including those based on the toxicological, biochemical, chemical, and pharmaceutical properties of CTXs. Selecting any of these methodological approaches for routine monitoring of ciguatera may be dependent upon the applicability of the method. However, identifying a reference validation method for CTXs is a critical and urgent issue, and is dependent upon the availability of certified CTX standards and the coordinated action of laboratories. Reports of CFP cases in European hospitals have been described in several countries, and are mostly due to travel to CFP endemic areas. Additionally, the recent detection of the CTX-producing tropical genus Gambierdiscus in the eastern Atlantic Ocean of the northern hemisphere and in the Mediterranean Sea, as well as the confirmation of CFP in the Canary Islands and possibly in Madeira, constitute other reasons to study the onset of CFP in Europe [1]. The question of the possible contribution of climate change to the distribution of toxin-producing microalgae and ciguateric fish is raised. The impact of ciguatera onset on European Union (EU) policies will be discussed with respect to EU regulations on marine toxins in seafood. Critical analysis and availability of methodologies for CTX determination is required for a rapid response to suspected CFP cases and to conduct sound CFP risk analysis.
Gascón, Fernando; de la Fuente, David; Puente, Javier; Lozano, Jesús
2007-11-01
The aim of this paper is to develop a methodology that is useful for analyzing, from a macroeconomic perspective, the aggregate demand and the aggregate supply features of the market of pharmaceutical generics. In order to determine the potential consumption and the potential production of pharmaceutical generics in different countries, two fuzzy decision support systems, both based on the Mamdani model, are proposed. These systems, generated by the Matlab Toolbox 'Fuzzy' (v. 2.0), are able to determine the potential of a country for the manufacturing or the consumption of pharmaceutical generics, and they make use of three macroeconomic input variables. In an empirical application of our proposed methodology, the potential for consumption and manufacturing in Holland, Sweden, Italy and Spain has been estimated from national indicators. Cross-country comparisons are made and graphical surfaces are analyzed in order to interpret the results. The main contribution of this work is the development of a methodology that is useful for analyzing aggregate demand and aggregate supply characteristics of pharmaceutical generics. The methodology is valid for carrying out a systematic analysis of the potential that generics have at a macro level in different countries. The main advantages of the use of fuzzy decision support systems in the context of pharmaceutical generics are the flexibility in the construction of the system, the speed in interpreting the results offered by the inference and surface maps, and the ease with which a sensitivity analysis of the potential behavior of a given country may be performed.
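For readers unfamiliar with Mamdani-type systems, the sketch below shows the basic inference chain (fuzzification, rule evaluation with min/max operators, aggregation, centroid defuzzification) in plain numpy. The membership functions, input variables and rules are invented for illustration; they are not the calibrated Matlab Fuzzy Toolbox systems or macroeconomic indicators used in the paper.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with apex at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

# Universe of discourse for the output: "potential for generics consumption" in [0, 100].
z = np.linspace(0, 100, 501)
out_low, out_med, out_high = tri(z, -50, 0, 50), tri(z, 25, 50, 75), tri(z, 50, 100, 150)

def mamdani_potential(health_spend, price_regulation):
    """Minimal two-input Mamdani system (illustrative only, inputs scaled to [0, 1])."""
    # Fuzzify the inputs.
    spend_low, spend_high = tri(health_spend, -1.0, 0.0, 0.6), tri(health_spend, 0.4, 1.0, 2.0)
    reg_weak, reg_strong = tri(price_regulation, -1.0, 0.0, 0.6), tri(price_regulation, 0.4, 1.0, 2.0)

    # Rules: min for AND, max for OR, min-implication clips the output set.
    r1 = np.minimum(min(spend_high, reg_strong), out_high)  # high spend AND strong regulation -> high
    r2 = np.minimum(min(spend_low, reg_weak), out_low)      # low spend AND weak regulation  -> low
    r3 = np.minimum(max(spend_low, reg_strong), out_med)    # low spend OR strong regulation -> medium
    aggregated = np.maximum.reduce([r1, r2, r3])             # max aggregation

    # Centroid defuzzification.
    return np.sum(z * aggregated) / (np.sum(aggregated) + 1e-12)

print("potential score:", round(mamdani_potential(0.7, 0.8), 1))
```

Sweeping the two inputs over a grid and plotting the output reproduces the kind of surface maps the paper uses for cross-country comparison and sensitivity analysis.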
Optimal search strategies for detecting health services research studies in MEDLINE
Wilczynski, Nancy L.; Haynes, R. Brian; Lavis, John N.; Ramkissoonsingh, Ravi; Arnold-Oatley, Alexandra E.
2004-01-01
Background: Evidence from health services research (HSR) is currently thinly spread through many journals, making it difficult for health services researchers, managers and policy-makers to find research on clinical practice guidelines and the appropriateness, process, outcomes, cost and economics of health care services. We undertook to develop and test search terms to retrieve from the MEDLINE database HSR articles meeting minimum quality standards. Methods: The retrieval performance of 7445 methodologic search terms and phrases in MEDLINE (the test) was compared with a hand search of the literature (the gold standard) for each issue of 68 journal titles for the year 2000 (a total of 25 936 articles). We determined the sensitivity, specificity and precision (the positive predictive value) of the MEDLINE search strategies. Results: A majority of the articles that were classified as outcome assessment, but fewer than half of those in the other categories, were considered methodologically acceptable (no methodologic criteria were applied for cost studies). Combining individual search terms to maximize sensitivity, while keeping specificity at 50% or more, led to sensitivities in the range of 88.1% to 100% for several categories (specificities ranged from 52.9% to 97.4%). When terms were combined to maximize specificity while keeping sensitivity at 50% or more, specificities of 88.8% to 99.8% were achieved. When terms were combined to maximize sensitivity and specificity while minimizing the differences between the 2 measurements, most strategies for HSR categories achieved sensitivity and specificity of at least 80%. Interpretation: Sensitive and specific search strategies were validated for retrieval of HSR literature from MEDLINE. These strategies have been made available for public use by the US National Library of Medicine at www.nlm.nih.gov/nichsr/hedges/search.html. PMID:15534310
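As a reminder of how the retrieval metrics above are defined against the hand-search gold standard, here is a minimal Python sketch using hypothetical confusion-matrix counts (the numbers are not from the study):

```python
# Hypothetical 2x2 counts: search strategy result vs. hand-search gold standard.
tp, fp, fn, tn = 820, 4100, 110, 20906   # made-up numbers for illustration

sensitivity = tp / (tp + fn)       # fraction of relevant articles retrieved
specificity = tn / (tn + fp)       # fraction of non-relevant articles excluded
precision   = tp / (tp + fp)       # positive predictive value of the retrieved set

print(f"sensitivity={sensitivity:.1%}, specificity={specificity:.1%}, precision={precision:.1%}")
```

The trade-off the authors optimize is visible here: loosening a strategy raises tp (sensitivity) but also fp, which lowers specificity and precision.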
Zajac, Zuzanna; Stith, Bradley M.; Bowling, Andrea C.; Langtimm, Catherine A.; Swain, Eric D.
2015-01-01
Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust decisions.
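A minimal illustration of the UA/GSA idea, not the authors' framework or Everglades models: propagate assumed input distributions through a toy HSI function by Monte Carlo sampling, summarize the output spread, and rank input importance with a simple rank-correlation measure (numpy and scipy assumed available).

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

def hsi(salinity, depth, temperature):
    """Hypothetical HSI: geometric mean of three suitability curves, each in [0, 1]."""
    s_sal = np.clip(1.0 - np.abs(salinity - 15.0) / 15.0, 0.0, 1.0)
    s_dep = np.clip(1.0 - np.abs(depth - 1.0) / 2.0, 0.0, 1.0)
    s_tmp = np.clip(1.0 - np.abs(temperature - 25.0) / 10.0, 0.0, 1.0)
    return (s_sal * s_dep * s_tmp) ** (1.0 / 3.0)

# Uncertainty analysis: sample the uncertain inputs (assumed distributions).
n = 10_000
sal = rng.normal(18.0, 4.0, n)
dep = rng.normal(1.2, 0.3, n)
tmp = rng.normal(26.0, 2.0, n)

scores = hsi(sal, dep, tmp)
print(f"HSI mean = {scores.mean():.2f}, 90% interval = "
      f"[{np.percentile(scores, 5):.2f}, {np.percentile(scores, 95):.2f}]")

# Global sensitivity ranking via rank correlation between each input and the output.
for name, x in [("salinity", sal), ("depth", dep), ("temperature", tmp)]:
    rho, _ = spearmanr(x, scores)
    print(f"{name:12s} Spearman rho = {rho:+.2f}")
```

Variance-based indices (e.g. Sobol) would replace the rank correlations in a full GSA, but the workflow of sampling inputs, running the habitat model, and attributing output variance to inputs is the same.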
Ngonghala, Calistus N; Teboh-Ewungkem, Miranda I; Ngwa, Gideon A
2015-06-01
We derive and study a deterministic compartmental model for malaria transmission with varying human and mosquito populations. Our model considers disease-related deaths, asymptomatic immune humans who are also infectious, as well as mosquito demography, reproduction and feeding habits. Analysis of the model reveals the existence of a backward bifurcation and persistent limit cycles whose period and size are determined by two threshold parameters: the vectorial basic reproduction number Rm, and the disease basic reproduction number R0, whose size can be reduced by reducing Rm. We conclude that malaria dynamics are indeed oscillatory when the methodology of explicitly incorporating the mosquito's demography, feeding and reproductive patterns is considered in modeling the mosquito population dynamics. A sensitivity analysis reveals important control parameters that can affect the magnitudes of Rm and R0, threshold quantities to be taken into consideration when designing control strategies. Both Rm and the intrinsic period of oscillation are shown to be highly sensitive to the mosquito's birth constant λm and the mosquito's feeding success probability pw. Control of λm can be achieved by spraying, eliminating breeding sites or moving them away from human habitats, while pw can be controlled via the use of mosquito repellant and insecticide-treated bed-nets. The disease threshold parameter R0 is shown to be highly sensitive to pw, and the intrinsic period of oscillation is also sensitive to the rate at which reproducing mosquitoes return to breeding sites. A global sensitivity and uncertainty analysis reveals that the ability of the mosquito to reproduce and uncertainties in the estimations of the rates at which exposed humans become infectious and infectious humans recover from malaria are critical in generating uncertainties in the disease classes.
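Sensitivity of threshold quantities such as R0 to model parameters is often summarized with the normalized forward sensitivity index S_p = (p/R0)(dR0/dp). The sketch below computes these indices by finite differences for an illustrative Ross-Macdonald-style R0 expression; the formula and parameter values are stand-ins, not the compartmental model analysed in the paper.

```python
import numpy as np

def R0(params):
    """Illustrative Ross-Macdonald-style basic reproduction number (not the paper's model)."""
    m, a, b, c, mu_m, gamma_h = (params[k] for k in ("m", "a", "b", "c", "mu_m", "gamma_h"))
    return np.sqrt(m * a**2 * b * c / (mu_m * gamma_h))

base = {"m": 10.0, "a": 0.3, "b": 0.5, "c": 0.5, "mu_m": 0.1, "gamma_h": 0.05}

def sensitivity_index(name, rel_step=1e-4):
    """Normalized forward sensitivity index S_p = (p / R0) * dR0/dp via finite difference."""
    p = dict(base)
    h = base[name] * rel_step
    p[name] = base[name] + h
    dR0 = (R0(p) - R0(base)) / h
    return (base[name] / R0(base)) * dR0

for name in base:
    print(f"S_{name} = {sensitivity_index(name):+.3f}")
```

For this toy expression the indices recover the analytic values (e.g. +1 for the biting rate a, +0.5 for m, -0.5 for the mortality and recovery rates), which is a useful check before applying the same procedure to a full compartmental model.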
NASA Astrophysics Data System (ADS)
Tiwari, Vaibhav
2018-07-01
The population analysis and estimation of merger rates of compact binaries is one of the important topics in gravitational wave astronomy. The primary ingredient in these analyses is the population-averaged sensitive volume. Typically, the sensitive volume of a given search for a given simulated source population is estimated by drawing signals from the population model and adding them to the detector data as injections. Subsequently, the injections, which are simulated gravitational waveforms, are searched for by the search pipelines, and their signal-to-noise ratio (SNR) is determined. Sensitive volume is estimated, using Monte-Carlo (MC) integration, from the total number of injections added to the data, the number of injections that cross a chosen threshold on SNR, and the astrophysical volume in which the injections are placed. So far, only fixed population models have been used in the estimation of binary black hole (BBH) merger rates. However, as the scope of population analysis broadens in terms of the methodologies and source properties considered, due to an increase in the number of observed gravitational wave (GW) signals, the procedure will need to be repeated multiple times at a large computational cost. In this letter we address the problem by performing a weighted MC integration. We show how a single set of generic injections can be weighted to estimate the sensitive volume for multiple population models, thereby greatly reducing the computational cost. The weights in this MC integral are the ratios of the output probabilities, determined by the population model and standard cosmology, and the injection probability, determined by the distribution function of the generic injections. Unlike analytical/semi-analytical methods, which usually estimate sensitive volume using single detector sensitivity, the method is accurate within statistical errors, comes at no added cost and requires minimal computational resources.
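The reweighting idea can be illustrated in a few lines of numpy: each found injection carries a weight equal to the (possibly unnormalized) target population density divided by the injection density, and a self-normalized Monte Carlo sum gives the population-averaged sensitive volume. The detection model, parameter ranges and population models below are invented for illustration and are not those of the letter.

```python
import numpy as np

rng = np.random.default_rng(1)

# Generic injection set: one source parameter (a mass) drawn from a broad injection distribution.
n_inj = 200_000
m1 = rng.uniform(5.0, 80.0, n_inj)                 # p_inj(m1) = 1/75 on [5, 80]
p_inj = np.full(n_inj, 1.0 / 75.0)

# Toy detection model: heavier systems are more likely to be "found" (illustrative only).
found = rng.random(n_inj) < np.clip(m1 / 100.0, 0.0, 1.0)

V_total = 50.0  # Gpc^3, astrophysical volume in which the injections were placed (assumed)

def sensitive_volume(pop_density):
    """Weighted, self-normalized MC estimate of the population-averaged sensitive volume."""
    w = pop_density(m1) / p_inj                    # importance weights p_pop / p_inj
    return V_total * np.sum(w * found) / np.sum(w)

# Two candidate population models reusing the same injection set.
powerlaw = lambda m: np.where((m >= 5) & (m <= 80), m**-2.3, 0.0)
flat_log = lambda m: np.where((m >= 5) & (m <= 80), 1.0 / m, 0.0)

print(f"<V> power law  : {sensitive_volume(powerlaw):.2f} Gpc^3")
print(f"<V> flat-in-log: {sensitive_volume(flat_log):.2f} Gpc^3")
```

Because the normalization of the population density cancels in the self-normalized sum, the same found/missed injection record can be reused for any number of population models without rerunning the search pipeline.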
Fast classification of hazelnut cultivars through portable infrared spectroscopy and chemometrics
NASA Astrophysics Data System (ADS)
Manfredi, Marcello; Robotti, Elisa; Quasso, Fabio; Mazzucco, Eleonora; Calabrese, Giorgio; Marengo, Emilio
2018-01-01
The authentication and traceability of hazelnuts are very important for both the consumer and the food industry, to safeguard the protected varieties and the food quality. This study investigates the use of a portable FTIR spectrometer coupled to multivariate statistical analysis for the classification of raw hazelnuts. The method discriminates hazelnuts from different origins/cultivars based on differences in the signal intensities of their IR spectra. The multivariate classification methods, namely principal component analysis (PCA) followed by linear discriminant analysis (LDA) and partial least squares discriminant analysis (PLS-DA), with or without variable selection, allowed a very good discrimination among the groups, with PLS-DA coupled to variable selection providing the best results. Owing to its fast analysis, high sensitivity, simplicity and lack of sample preparation, the proposed analytical methodology could be used to verify the cultivar of hazelnuts, and the analysis can be performed quickly and directly on site.
NASA Technical Reports Server (NTRS)
Hill, Geoffrey A.; Olson, Erik D.
2004-01-01
Due to the growing problem of noise in today's air transportation system, the need has arisen to incorporate noise considerations into the conceptual design of revolutionary aircraft. Through the use of response surfaces, complex noise models may be converted into polynomial equations for rapid and simplified evaluation. This conversion allows many of the commonly used response surface-based trade space exploration methods to be applied to noise analysis. This methodology is demonstrated using a noise model of a notional 300-passenger Blended-Wing-Body (BWB) transport. Response surfaces are created relating source noise levels of the BWB vehicle to its corresponding FAR-36 certification noise levels, and the resulting trade space is explored. Methods demonstrated include: single point analysis, parametric study, an optimization technique for inverse analysis, sensitivity studies, and probabilistic analysis. Extended applications of response surface-based methods in noise analysis are also discussed.
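The response-surface step amounts to fitting a low-order polynomial to samples of the expensive noise model and then using the polynomial for trade-space exploration. A hedged sketch with a hypothetical two-variable quadratic standing in for the noise model (not the BWB model or its actual design variables):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical "expensive" noise model: certification noise level (EPNdB) as a function of
# two source-noise design variables x1, x2 (stand-ins only), with a little evaluation noise.
def noise_model(x1, x2):
    return 95.0 + 3.0 * x1 + 1.5 * x2 + 0.8 * x1 * x2 - 0.5 * x1**2 \
           + rng.normal(0, 0.05, np.shape(x1))

# Design of experiments: a simple grid of sample points in normalized design space.
x1, x2 = np.meshgrid(np.linspace(-1, 1, 7), np.linspace(-1, 1, 7))
x1, x2 = x1.ravel(), x2.ravel()
y = noise_model(x1, x2)

# Full quadratic basis: 1, x1, x2, x1^2, x2^2, x1*x2, fitted by least squares.
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)

def response_surface(a, b):
    """Cheap polynomial surrogate that replaces the expensive model in trade studies."""
    return coeffs @ np.array([1.0, a, b, a**2, b**2, a * b])

print("fitted coefficients:", np.round(coeffs, 3))
print("RSE prediction at (0.2, -0.4):", round(response_surface(0.2, -0.4), 2), "EPNdB")
```

Once the coefficients are in hand, parametric studies, inverse optimization and Monte Carlo probabilistic analysis all run against the polynomial rather than the original noise code.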
A curved ultrasonic actuator optimized for spherical motors: design and experiments.
Leroy, Edouard; Lozada, José; Hafez, Moustapha
2014-08-01
Multi-degree-of-freedom angular actuators are commonly used in numerous mechatronic areas such as omnidirectional robots, robot articulations or inertially stabilized platforms. The conventional method of designing these devices consists of placing multiple actuators in parallel or in series using gimbals, which are bulky and difficult to miniaturize. Motors using a spherical rotor are interesting for miniature multi-degree-of-freedom actuators. In this paper, a new actuator is proposed. It is based on a curved piezoelectric element whose inner contact surface is adapted to the diameter of the rotor. This adaptation allows spherical motors to be built with a fully constrained rotor and without the need for an additional guiding system. The work presents a design methodology based on modal finite element analysis. A methodology for mode selection is proposed, and a sensitivity analysis of the final geometry to uncertainties and added masses is discussed. Finally, experimental results that validate the actuator concept on a single degree-of-freedom ultrasonic motor set-up are presented. Copyright © 2014 Elsevier B.V. All rights reserved.
Hyde, J M; Cerezo, A; Williams, T J
2009-04-01
Statistical analysis of atom probe data has improved dramatically in the last decade and it is now possible to determine the size, the number density and the composition of individual clusters or precipitates such as those formed in reactor pressure vessel (RPV) steels during irradiation. However, the characterisation of the onset of clustering or co-segregation is more difficult and has traditionally focused on the use of composition frequency distributions (for detecting clustering) and contingency tables (for detecting co-segregation). In this work, the authors investigate the possibility of directly examining the neighbourhood of each individual solute atom as a means of identifying the onset of solute clustering and/or co-segregation. The methodology involves comparing the mean observed composition around a particular type of solute with that expected from the overall composition of the material. The methodology has been applied to atom probe data obtained from several irradiated RPV steels. The results show that the new approach is more sensitive to fine scale clustering and co-segregation than that achievable using composition frequency distribution and contingency table analyses.
Conforto, Egle; Joguet, Nicolas; Buisson, Pierre; Vendeville, Jean-Eudes; Chaigneau, Carine; Maugard, Thierry
2015-02-01
The aim of this paper is to describe an optimized methodology for studying the surface characteristics and internal structure of biopolymer capsules using scanning electron microscopy (SEM) in environmental mode. The main advantage of this methodology is that no preparation is required and, significantly, no metallic coating is deposited on the surface of the specimen, thus preserving the original capsule shape and its surface morphology. This avoids introducing preparation artefacts which could modify the capsule surface and mask information concerning important features such as porosity or roughness. Using this method, gelatin and, in particular, fatty coatings, which are difficult to analyze by standard SEM techniques, unambiguously show fine details of their surface morphology without damage. Furthermore, chemical contrast is preserved in backscattered electron images of unprepared samples, allowing visualization of the internal organization of the capsule, the quality of the envelope, and so on. This study provides pointers on how to obtain optimal conditions for the analysis of biological or sensitive materials, which are not always studied using appropriate techniques. A reliable evaluation of the parameters used in capsule elaboration for research and industrial applications, as well as of capsule functionality, is provided by this methodology, which is essential for technological progress in this domain. Copyright © 2014 Elsevier B.V. All rights reserved.
Improved operation of magnetic bearings for flywheel energy storage system
NASA Technical Reports Server (NTRS)
Zmood, R. B.; Pang, D.; Anand, D. K.; Kirk, J. A.
1990-01-01
Analysis and operation of a prototype 500-Wh flywheel at low speeds have shown that many factors affect the correct functioning of the magnetic bearings. An examination is made of a number of these, including magnetic bearing control system nonlinearities and displacement transducer positioning, and their effects upon the successful operation of the suspension system. It is observed that the bearing control system is extremely sensitive to actuator parameters such as coil inductance. As a consequence of the analysis of bearing relaxation oscillations, the bearing actuator design methodology that has previously been used, in which coil parameter selection is based upon static considerations, has been revised. Displacement transducers that overcome the collocation problem are discussed.
Dupont, Anne-Laurence; Seemann, Agathe; Lavédrine, Bertrand
2012-01-30
A methodology for capillary electrophoresis/electrospray ionisation mass spectrometry (CE/ESI-MS) was developed for the simultaneous analysis of degradation products from paper among two families of compounds: low molar mass aliphatic organic acids, and aromatic (phenolic and furanic) compounds. The work comprises the optimisation of the CE separation and the ESI-MS parameters for improved sensitivity with model compounds using two successive designs of experiments. The method was applied to the analysis of lignocellulosic paper at different stages of accelerated hygrothermal ageing. The compounds of interest were identified. Most of them could be quantified and several additional analytes were separated. Copyright © 2011 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Chanda, Sandip; De, Abhinandan
2016-12-01
A social welfare optimization technique is proposed in this paper, together with a state-space-based model and bifurcation analysis, to offer a substantial stability margin even in the most adverse states of power system networks. The restoration of the dynamic price equilibrium of the power market is addressed by forming the Jacobian of the sensitivity matrix to regulate the state variables, so as to standardize the quality of the solution in the worst possible contingencies of the network, even with the co-option of intermittent renewable energy sources. The model has been tested on the IEEE 30-bus system, and particle swarm optimization has assisted the fusion of the proposed model and methodology.
Coupled reactors analysis: New needs and advances using Monte Carlo methodology
Aufiero, M.; Palmiotti, G.; Salvatores, M.; ...
2016-08-20
Coupled reactors and the coupling features of large or heterogeneous core reactors can be investigated with the Avery theory, which allows a physics understanding of the main features of these systems. However, the complex geometries that are often encountered in association with coupled reactors require a detailed geometry description that can be easily provided by modern Monte Carlo (MC) codes. This implies an MC calculation of the coupling parameters defined by Avery and of the sensitivity coefficients that allow further detailed physics analysis. The results presented in this paper show that the MC code SERPENT has been successfully modified to meet the required capabilities.
A Decomposition of Hospital Profitability: An Application of DuPont Analysis to the US Market.
Turner, Jason; Broom, Kevin; Elliott, Michael; Lee, Jen-Fu
2015-01-01
This paper evaluates the drivers of profitability for a large sample of U.S. hospitals. Following a methodology frequently used by financial analysts, we use a DuPont analysis as a framework to evaluate the quality of earnings. By decomposing returns on equity (ROE) into profit margin, total asset turnover, and capital structure, the DuPont analysis reveals what drives overall profitability. Profit margin, the efficiency with which services are rendered (total asset turnover), and capital structure are calculated for 3,255 U.S. hospitals between 2007 and 2012 using data from the Centers for Medicare & Medicaid Services' Healthcare Cost Report Information System (CMS Form 2552). The sample is then stratified by ownership, size, system affiliation, teaching status, critical access designation, and urban or non-urban location. Those hospital characteristics and interaction terms are then regressed (OLS) against the ROE and the respective DuPont components. Sensitivity to regression methodology is also investigated using a seemingly unrelated regression. When the sample is stratified by hospital characteristics, the results indicate that investor-owned hospitals have higher profit margins, higher efficiency, and are substantially more leveraged. Hospitals in systems are found to have higher ROE, margins, and efficiency but are associated with less leverage. In addition, a number of important and significant interactions between teaching status, ownership, location, critical access designation, and inclusion in a system are documented. Many of the significant relationships, most notably not-for-profit ownership, lose significance or are predominately associated with one interaction effect when interaction terms are introduced as explanatory variables. Results are not sensitive to the alternative methodology. The results of the DuPont analysis suggest that although there appears to be convergence in the behavior of NFP and IO hospitals, significant financial differences remain depending on their respective hospital characteristics. Those differences are tempered or exacerbated by location, size, teaching status, system affiliation, and critical access designation. With the exception of cost-based reimbursement for critical access hospitals, emerging payment systems are placing additional financial pressures on hospitals. The financial pressures being applied treat hospitals as a monolithic category and, given the delicate and often negative ROE for many hospitals, the long-term stability of the healthcare facility infrastructure may be negatively impacted.
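The DuPont identity itself is a simple arithmetic decomposition, ROE = profit margin x total asset turnover x equity multiplier. A worked example with invented figures (not CMS data):

```python
# Minimal DuPont decomposition for one hypothetical hospital-year (illustrative figures).
net_income   = 12.0e6
revenue      = 300.0e6
total_assets = 400.0e6
equity       = 160.0e6

profit_margin  = net_income / revenue          # operating profitability
asset_turnover = revenue / total_assets        # efficiency of asset use
equity_mult    = total_assets / equity         # leverage (capital structure)

roe = profit_margin * asset_turnover * equity_mult
assert abs(roe - net_income / equity) < 1e-12  # the decomposition reproduces ROE exactly

print(f"margin={profit_margin:.1%}, turnover={asset_turnover:.2f}x, "
      f"leverage={equity_mult:.2f}x, ROE={roe:.1%}")
```

Because the identity is exact, differences in ROE across hospital strata can be attributed unambiguously to margin, turnover, or leverage before any regression modeling is applied.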
Ignition sensitivity study of an energetic train configuration using experiments and simulation
NASA Astrophysics Data System (ADS)
Kim, Bohoon; Yu, Hyeonju; Yoh, Jack J.
2018-06-01
A full scale hydrodynamic simulation intended for the accurate description of shock-induced detonation transition was conducted as part of an ignition sensitivity analysis of an energetic component system. The system is composed of an exploding foil initiator (EFI), a donor explosive unit, a stainless steel gap, and an acceptor explosive. A series of velocity interferometer system for any reflector (VISAR) measurements was used to validate the hydrodynamic simulations based on the reactive flow model that describes the initiation of energetic materials arranged in a train configuration. A numerical methodology with ignition and growth mechanisms for tracking multi-material boundary interactions, as well as severely transient fluid-structure coupling between the high explosive charges and the metal gap, is described. The free surface velocity measurement is used to evaluate the sensitivity of energetic components that are subjected to strong pressure waves. Then, the full scale hydrodynamic simulation is performed on the flyer-impacted initiation of an EFI-driven pyrotechnical system.
High-Performance Piezoresistive MEMS Strain Sensor with Low Thermal Sensitivity
Mohammed, Ahmed A. S.; Moussa, Walied A.; Lou, Edmond
2011-01-01
This paper presents the experimental evaluation of a new piezoresistive MEMS strain sensor. Geometric characteristics of the sensor silicon carrier have been employed to improve the sensor sensitivity. Surface features or trenches have been introduced in the vicinity of the sensing elements. These features create stress concentration regions (SCRs) and as a result, the strain/stress field was altered. The improved sensing sensitivity compensated for the signal loss. The feasibility of this methodology was proved in a previous work using Finite Element Analysis (FEA). This paper provides the experimental part of the previous study. The experiments covered a temperature range from −50 °C to +50 °C. The MEMS sensors are fabricated using five different doping concentrations. FEA is also utilized to investigate the effect of material properties and layer thickness of the bonding adhesive on the sensor response. The experimental findings are compared to the simulation results to guide selection of bonding adhesive and installation procedure. Finally, FEA was used to analyze the effect of rotational/alignment errors. PMID:22319384
Yap, H Y; Nixon, J D
2015-12-01
Energy recovery from municipal solid waste plays a key role in sustainable waste management and energy security. However, there are numerous technologies that vary in suitability for different economic and social climates. This study sets out to develop and apply a multi-criteria decision making methodology that can be used to evaluate the trade-offs between the benefits, opportunities, costs and risks of alternative energy from waste technologies in both developed and developing countries. The technologies considered are mass burn incineration, refuse derived fuel incineration, gasification, anaerobic digestion and landfill gas recovery. By incorporating qualitative and quantitative assessments, a preference ranking of the alternative technologies is produced. The effect of variations in decision criteria weightings is analysed in a sensitivity analysis. The methodology is applied principally to compare and assess energy recovery from waste options in the UK and India. These two countries have been selected as they could both benefit from further development of their waste-to-energy strategies, but have different technical and socio-economic challenges to consider. It is concluded that gasification is the preferred technology for the UK, whereas anaerobic digestion is the preferred technology for India. We believe that the presented methodology will be of particular value for waste-to-energy decision-makers in both developed and developing countries. Copyright © 2015 Elsevier Ltd. All rights reserved.
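To make the weighting sensitivity analysis concrete, the sketch below scores the five technologies with an ordinary weighted sum and counts how often each ranks first when the criterion weights are randomly perturbed around a baseline. The scores and weights are invented, and the weighted sum is a simplification of whatever multi-criteria scheme (e.g. BOCR-style scoring) the study actually applied.

```python
import numpy as np

rng = np.random.default_rng(3)

techs = ["mass burn", "RDF incineration", "gasification", "anaerobic digestion", "landfill gas"]
criteria = ["benefits", "opportunities", "costs", "risks"]

# Hypothetical normalized scores in [0, 1]; cost and risk scores are entered as
# "higher is better" (already inverted) so a plain weighted sum applies.
scores = np.array([
    [0.6, 0.5, 0.4, 0.5],
    [0.6, 0.5, 0.5, 0.5],
    [0.8, 0.7, 0.5, 0.6],
    [0.7, 0.8, 0.7, 0.7],
    [0.4, 0.4, 0.8, 0.6],
])
weights = np.array([0.3, 0.2, 0.3, 0.2])          # assumed baseline weights, summing to 1

baseline_rank = np.argsort(-(scores @ weights))
print("baseline ranking:", [techs[i] for i in baseline_rank])

# Sensitivity analysis: perturb the weights and count how often each technology ranks first.
wins = np.zeros(len(techs))
for _ in range(5000):
    w = rng.dirichlet(20 * weights)               # random weights centred on the baseline
    wins[np.argmax(scores @ w)] += 1
for t, frac in zip(techs, wins / wins.sum()):
    print(f"{t:20s} ranked first in {frac:.1%} of weight draws")
```

A technology that stays on top across most weight draws is robust to disagreements among stakeholders about how much the benefit, opportunity, cost and risk criteria should count.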
A rapid and sensitive analytical method for the determination of 14 pyrethroids in water samples.
Feo, M L; Eljarrat, E; Barceló, D
2010-04-09
A simple, efficient and environmentally friendly analytical methodology is proposed for extracting and preconcentrating pyrethroids from water samples prior to gas chromatography-negative ion chemical ionization mass spectrometry (GC-NCI-MS) analysis. Fourteen pyrethroids were selected for this work: bifenthrin, cyfluthrin, lambda-cyhalothrin, cypermethrin, deltamethrin, esfenvalerate, fenvalerate, fenpropathrin, tau-fluvalinate, permethrin, phenothrin, resmethrin, tetramethrin and tralomethrin. The method is based on ultrasound-assisted emulsification-extraction (UAEE) of a water-immiscible solvent in an aqueous medium. Chloroform was used as extraction solvent in the UAEE technique. Target analytes were quantitatively extracted, achieving an enrichment factor of 200 when a 20 mL aliquot of pure water spiked with pyrethroid standards was extracted. The method was also evaluated with tap water and river water samples. Method detection limits (MDLs) ranged from 0.03 to 35.8 ng L-1, with RSD values ≤ 3-25% (n=5). The coefficients of estimation of the calibration curves obtained following the proposed methodology were ≥ 0.998. Recovery values were in the range of 45-106%, showing satisfactory robustness of the method for analyzing pyrethroids in water samples. The proposed methodology was applied to the analysis of river water samples. Cypermethrin was detected at concentration levels ranging from 4.94 to 30.5 ng L-1. Copyright 2010 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Potter, Christopher
2013-01-01
The Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) methodology was applied to detect changes in perennial vegetation cover at marshland sites in Northern California reported to have undergone restoration between 1999 and 2009. Results showed extensive contiguous areas of restored marshland plant cover at 10 of the 14 sites selected. Gains from either woody shrub cover and/or recovery of herbaceous cover that remains productive and evergreen on a year-round basis could be mapped from the image results. However, LEDAPS may not be highly sensitive to changes in wetlands that have been restored mainly with seasonal herbaceous cover (e.g., vernal pools), due to the ephemeral nature of the plant greenness signal. Based on this evaluation, the LEDAPS methodology would be capable of fulfilling a pressing need for consistent, continual, low-cost monitoring of changes in marshland ecosystems of the Pacific Flyway.
A new scenario-based approach to damage detection using operational modal parameter estimates
NASA Astrophysics Data System (ADS)
Hansen, J. B.; Brincker, R.; López-Aenlle, M.; Overgaard, C. F.; Kloborg, K.
2017-09-01
In this paper a vibration-based damage localization and quantification method, based on natural frequencies and mode shapes, is presented. The proposed technique is inspired by a damage assessment methodology based solely on the sensitivity of mass-normalized, experimentally determined mode shapes. The present method differs by being based on modal data extracted by means of Operational Modal Analysis (OMA), combined with a reasonable Finite Element (FE) representation of the test structure, and implemented in a scenario-based framework. Besides a review of the basic methodology, this paper addresses fundamental theoretical as well as practical considerations which are crucial to the applicability of a given vibration-based damage assessment configuration. Lastly, the technique is demonstrated on an experimental test case using automated OMA. Both the numerical study and the experimental test case presented in this paper are restricted to perturbations involving mass change.
Soldatini, Cecilia; Albores-Barajas, Yuri Vladimir; Lovato, Tomas; Andreon, Adriano; Torricelli, Patrizia; Montemaggiori, Alessandro; Corsa, Cosimo; Georgalas, Vyron
2011-01-01
The presence of wildlife in airport areas poses substantial hazards to aviation. Wildlife aircraft collisions (hereafter wildlife strikes) cause losses in terms of human lives and direct monetary losses for the aviation industry. In recent years, wildlife strikes have increased in parallel with increasing air traffic and species habituation to anthropic areas. In this paper, we applied an ecological approach to wildlife strike risk assessment to eight Italian international airports. The main achievement is a site-specific analysis that avoids flattening wildlife strike events on a large scale while maintaining comparable airport risk assessments. This second version of the Birdstrike Risk Index (BRI2) is a sensitive tool that provides results on different time scales, allowing appropriate management planning. The methodology applied has been developed in accordance with the Italian Civil Aviation Authority, which recognizes it as a national standard implemented in the advisory circular ENAC APT-01B.
Integrated Evaluation of Reliability and Power Consumption of Wireless Sensor Networks.
Dâmaso, Antônio; Rosa, Nelson; Maciel, Paulo
2017-11-05
Power consumption is a primary concern in Wireless Sensor Networks (WSNs), and a large number of strategies have been proposed to evaluate it. However, those approaches usually consider neither reliability issues nor the power consumption of applications executing in the network. A central concern is the lack of consolidated solutions that enable us to evaluate the power consumption of applications and the network stack while also considering their reliability. To solve this problem, we introduce a fully automatic solution for designing power-consumption-aware WSN applications and communication protocols. The solution presented in this paper comprises a methodology to evaluate the power consumption based on the integration of formal models, a set of power consumption and reliability models, a sensitivity analysis strategy to select WSN configurations, and a toolbox named EDEN to fully support the proposed methodology. This solution allows the power consumption of WSN applications and the network stack to be estimated accurately in an automated way.
Evaluation of errors in quantitative determination of asbestos in rock
NASA Astrophysics Data System (ADS)
Baietto, Oliviero; Marini, Paola; Vitaliti, Martina
2016-04-01
The quantitative determination of the asbestos content of rock matrices is a complex operation which is susceptible to important errors. The principal methodologies for the analysis are Scanning Electron Microscopy (SEM) and Phase Contrast Optical Microscopy (PCOM). Although the resolution of PCOM is inferior to that of SEM, PCOM analysis has several advantages, including greater representativity of the analyzed sample, more effective recognition of chrysotile and a lower cost. The DIATI LAA internal methodology for PCOM analysis is based on a mild grinding of a rock sample, its subdivision into 5-6 grain size classes smaller than 2 mm and a subsequent microscopic analysis of a portion of each class. PCOM is based on the optical properties of asbestos and of the liquids of known refractive index in which the particles under analysis are immersed. The error evaluation in the analysis of rock samples, contrary to the analysis of airborne filters, cannot be based on a statistical distribution. For airborne filters a binomial (Poisson) distribution, which theoretically defines the variation in the count of fibers resulting from the observation of analysis fields chosen randomly on the filter, can be applied. The analysis of rock matrices, instead, cannot lean on any statistical distribution, because the most important object of the analysis is the size of the asbestiform fibers and fiber bundles observed and the resulting ratio between the weight of the fibrous component and that of the granular one. The error estimate generally provided by public and private institutions varies between 50 and 150 percent, but there are no specific studies that discuss the origin of the error or that link it to the asbestos content. Our work aims to provide a reliable estimation of the error in relation to the applied methodologies and to the total asbestos content, especially for values close to the legal limits. The error assessment must be made through repetition of the same analysis on the same sample, to estimate both the error on the representativeness of the sample and the error related to the sensitivity of the operator, in order to provide a sufficiently reliable uncertainty for the method. We used about 30 natural rock samples with different asbestos contents, performing 3 analyses on each sample to obtain a trend sufficiently representative of the percentage. Furthermore, on one chosen sample we performed 10 repetitions of the analysis to try to define more specifically the error of the methodology.
White, J M L; McFadden, J P; White, I R
2008-03-01
Active patch test sensitization is an uncommon phenomenon which may have undesirable consequences for those undergoing this gold-standard investigation for contact allergy. Our aim was to perform a retrospective analysis of the results of 241 subjects who were patch tested twice in a single centre evaluating approximately 1500 subjects per year. Positivity to 11 common allergens in the recommended Baseline Series of contact allergens (European) was analysed: nickel sulphate; Myroxylon pereirae; fragrance mix I; para-phenylenediamine; colophonium; epoxy resin; neomycin; quaternium-15; thiuram mix; sesquiterpene lactone mix; and para-tert-butylphenol resin. Only fragrance mix I gave a statistically significant, increased rate of positivity on the second reading compared with the first (P=0.011). This trend was maintained when separately analysing a subgroup of 42 subjects who had been repeat patch tested within 1 year; this analysis was done to minimize the potential confounding factor of increased usage of fragrances with a wide interval between the two tests. To reduce the confounding effect of age on our data, we calculated expected frequencies of positivity to fragrance mix I based on previously published data from our centre. This showed a marked excess of observed cases over predicted ones, particularly in women aged 40-60 years. We suspect that active sensitization to fragrance mix I may occur. A similar published analysis from another large group using standard methodology supports our data.
Autonomous Aerobraking: Thermal Analysis and Response Surface Development
NASA Technical Reports Server (NTRS)
Dec, John A.; Thornblom, Mark N.
2011-01-01
A high-fidelity thermal model of the Mars Reconnaissance Orbiter was developed for use in an autonomous aerobraking simulation study. Response surface equations were derived from the high-fidelity thermal model and integrated into the autonomous aerobraking simulation software. The high-fidelity thermal model was developed using the Thermal Desktop software and used in all phases of the analysis. The exclusive use of Thermal Desktop represented a change from previously developed aerobraking thermal analysis methodologies. Comparisons were made between the Thermal Desktop solutions and those developed for the previous aerobraking thermal analyses performed on the Mars Reconnaissance Orbiter during aerobraking operations. A variable sensitivity screening study was performed to reduce the number of variables carried in the response surface equations. Thermal analysis and response surface equation development were performed for autonomous aerobraking missions at Mars and Venus.
Identifying Talent in Youth Sport: A Novel Methodology Using Higher-Dimensional Analysis.
Till, Kevin; Jones, Ben L; Cobley, Stephen; Morley, David; O'Hara, John; Chapman, Chris; Cooke, Carlton; Beggs, Clive B
2016-01-01
Prediction of adult performance from early age talent identification in sport remains difficult. Talent identification research has generally been performed using univariate analysis, which ignores multivariate relationships. To address this issue, this study used a novel higher-dimensional model to orthogonalize multivariate anthropometric and fitness data from junior rugby league players, with the aim of differentiating future career attainment. Anthropometric and fitness data from 257 Under-15 rugby league players were collected. Players were grouped retrospectively according to their future career attainment (i.e., amateur, academy, professional). Players were blindly and randomly divided into an exploratory (n = 165) and validation dataset (n = 92). The exploratory dataset was used to develop and optimize a novel higher-dimensional model, which combined singular value decomposition (SVD) with receiver operating characteristic analysis. Once optimized, the model was tested using the validation dataset. SVD analysis revealed that 60 m sprint and agility 505 performance were the most influential characteristics in distinguishing future professional players from amateur and academy players. The exploratory dataset model was able to distinguish between future amateur and professional players with a high degree of accuracy (sensitivity = 85.7%, specificity = 71.1%; p<0.001), although it could not distinguish between future professional and academy players. The validation dataset model was able to distinguish future professionals from the rest with reasonable accuracy (sensitivity = 83.3%, specificity = 63.8%; p = 0.003). Through the use of SVD analysis it was possible to objectively identify criteria to distinguish future career attainment with a sensitivity over 80% using anthropometric and fitness data alone. As such, this suggests that SVD analysis may be a useful analysis tool for research and practice within talent identification.
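The general pipeline described above (orthogonalize the variables with SVD, then choose a classification threshold on a leading component via ROC analysis) can be sketched as follows with synthetic data. sklearn's roc_curve is assumed available, and the data, shift pattern and threshold rule (Youden's J) are illustrative rather than the authors' exact optimization.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(4)

# Synthetic stand-in data: 200 players x 8 standardized anthropometric/fitness variables,
# with the "future professional" group (label 1) shifted on a few variables.
n_players, n_vars = 200, 8
y = (rng.random(n_players) < 0.2).astype(int)
shift = np.array([0.0, 0.8, 0.6, 0.0, 0.0, 0.5, 0.0, 0.0])
X = rng.normal(size=(n_players, n_vars)) + np.outer(y, shift)
X = (X - X.mean(axis=0)) / X.std(axis=0)           # z-score each variable

# Orthogonalize the data with SVD and project players onto the leading component.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
score = X @ Vt[0]
if np.corrcoef(score, y)[0, 1] < 0:                # orient the component consistently
    score = -score

# ROC analysis on the component score; choose the threshold maximizing Youden's J.
fpr, tpr, thresholds = roc_curve(y, score)
best = np.argmax(tpr - fpr)
print(f"sensitivity = {tpr[best]:.1%}, specificity = {1 - fpr[best]:.1%}, "
      f"threshold = {thresholds[best]:.2f}")
```

In practice the threshold would be chosen on an exploratory subset and then evaluated on a held-out validation subset, as the study does.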
NASA Astrophysics Data System (ADS)
Batzias, Fragiskos; Kopsidas, Odysseas
2012-12-01
The optimal concentration Copt of a pollutant in the environment can be determined as an equilibrium point in the trade-off between (i) environmental cost, due to impact on man/ecosystem/economy, and (ii) economic cost for environmental protection, as expressed by a Pigouvian tax. These two conflicting variables are internalized within the same techno-economic objective function of total cost, which is minimized. In this work, the first conflicting variable is represented by a Willingness To Pay (WTP) index. A methodology is developed for the estimation of this index by using fuzzy sets to account for uncertainty. Implementation of this methodology is presented, concerning odor pollution of the air around an olive pomace oil mill. The ASTM E544-99 (2004) 'Standard Practice for Referencing Suprathreshold Odor Intensity' has been modified to serve as a basis for testing, while a network of the quality standards required for the realization/application of this 'Practice' is also presented. Finally, sensitivity analysis of Copt with respect to (i) an increase in environmental information/sensitization and (ii) a decrease in the interest rate reveals a shift of Copt to lower and higher values, respectively; certain positive and negative implications (i.e., shifting of Copt to lower and higher values, respectively) caused by socio-economic parameters are also discussed.
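The equilibrium described here is simply the minimizer of a total-cost function that adds a rising environmental-damage term to a falling protection-cost term. A hedged sketch with illustrative convex cost curves (not the paper's WTP-based or fuzzy estimates), using scipy:

```python
from scipy.optimize import minimize_scalar

# Illustrative cost curves as functions of pollutant concentration C (arbitrary units).
def environmental_cost(c):      # damage / WTP-based cost rises with concentration
    return 4.0 * c**2

def protection_cost(c):         # abatement (Pigouvian-tax-like) cost falls with concentration
    return 50.0 / (c + 0.1)

total = lambda c: environmental_cost(c) + protection_cost(c)

res = minimize_scalar(total, bounds=(0.01, 10.0), method="bounded")
c_opt = res.x
print(f"C_opt = {c_opt:.3f}, total cost at optimum = {total(c_opt):.2f}")

# Crude sensitivity check: greater environmental sensitization (a steeper damage curve)
# shifts C_opt downward, mirroring the direction of the paper's sensitivity result.
steeper = lambda c: 8.0 * c**2 + protection_cost(c)
print("C_opt with doubled damage slope:",
      round(minimize_scalar(steeper, bounds=(0.01, 10.0), method="bounded").x, 3))
```

At the optimum the marginal environmental cost equals the marginal protection cost, which is the economic condition the trade-off formulation encodes.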
Wing-section optimization for supersonic viscous flow
NASA Technical Reports Server (NTRS)
Item, Cem C.; Baysal, Oktay (Editor)
1995-01-01
To improve the shape of a supersonic wing, an automated method that also includes higher fidelity to the flow physics is desirable. With this impetus, an aerodynamic optimization methodology incorporating thin-layer Navier-Stokes equations and sensitivity analysis had previously been developed. Prior to embarking upon the wing design task, the present investigation concentrated on testing the feasibility of the methodology, and the identification of adequate problem formulations, by defining two-dimensional, cost-effective test cases. Starting with two distinctly different initial airfoils, two independent shape optimizations resulted in shapes with similar features: slightly cambered, parabolic profiles with sharp leading- and trailing-edges. Secondly, the normal section to the subsonic portion of the leading edge, which had a high normal angle-of-attack, was considered. The optimization resulted in a shape with twist and camber which eliminated the adverse pressure gradient, hence exploiting the leading-edge thrust. The wing section shapes obtained in all the test cases had the features predicted by previous studies. Therefore, it was concluded that the flowfield analyses and sensitivity coefficients were computed and fed to the present gradient-based optimizer correctly. Also, as a result of the present two-dimensional study, suggestions were made for the problem formulations which should contribute to an effective wing shape optimization.
Jager, Marieke F; Ott, Christian; Kaplan, Christopher J; Kraus, Peter M; Neumark, Daniel M; Leone, Stephen R
2018-01-01
We present an extreme ultraviolet (XUV) transient absorption apparatus tailored to attosecond and femtosecond measurements on bulk solid-state thin-film samples, specifically when the sample dynamics are sensitive to heating effects. The setup combines methodology for stabilizing sub-femtosecond time-resolution measurements over 48 h and techniques for mitigating heat buildup in temperature-dependent samples. Single-point beam stabilization in pump and probe arms and periodic time-zero reference measurements are described for accurate timing and stabilization. A hollow-shaft motor configuration for rapid sample rotation, raster scanning capability, and additional diagnostics are described for heat mitigation. Heat transfer simulations performed using a finite element analysis allow comparison of sample rotation and traditional raster scanning techniques for 100 Hz pulsed laser measurements on vanadium dioxide, a material that undergoes an insulator-to-metal transition at a modest temperature of 340 K. Experimental results are presented confirming that the vanadium dioxide (VO 2 ) sample cannot cool below its phase transition temperature between laser pulses without rapid rotation, in agreement with the simulations. The findings indicate the stringent conditions required to perform rigorous broadband XUV time-resolved absorption measurements on bulk solid-state samples, particularly those with temperature sensitivity, and elucidate a clear methodology to perform them.
Renewable Energy used in State Renewable Portfolio Standards Yielded
[Entry garbled during extraction; the recoverable text indicates an NREL analysis of state Renewable Portfolio Standards that also reports national water withdrawals and water consumption relative to fossil-fuel generation, notes that states could perform their own more-detailed assessments, and presents results as ranges because the models and methodologies used are sensitive to multiple parameters.]
VFMA: Topographic Analysis of Sensitivity Data From Full-Field Static Perimetry
Weleber, Richard G.; Smith, Travis B.; Peters, Dawn; Chegarnov, Elvira N.; Gillespie, Scott P.; Francis, Peter J.; Gardiner, Stuart K.; Paetzold, Jens; Dietzsch, Janko; Schiefer, Ulrich; Johnson, Chris A.
2015-01-01
Purpose: To analyze static visual field sensitivity with topographic models of the hill of vision (HOV), and to characterize several visual function indices derived from the HOV volume. Methods: A software application, Visual Field Modeling and Analysis (VFMA), was developed for static perimetry data visualization and analysis. Three-dimensional HOV models were generated for 16 healthy subjects and 82 retinitis pigmentosa patients. Volumetric visual function indices, which are measures of quantity and comparable regardless of perimeter test pattern, were investigated. Cross-validation, reliability, and cross-sectional analyses were performed to assess this methodology and compare the volumetric indices to conventional mean sensitivity and mean deviation. Floor effects were evaluated by computer simulation. Results: Cross-validation yielded an overall R2 of 0.68 and index of agreement of 0.89, which were consistent among subject groups, indicating good accuracy. Volumetric and conventional indices were comparable in terms of test–retest variability and discriminability among subject groups. Simulated floor effects did not negatively impact the repeatability of any index, but large floor changes altered the discriminability for regional volumetric indices. Conclusions: VFMA is an effective tool for clinical and research analyses of static perimetry data. Topographic models of the HOV aid the visualization of field defects, and topographically derived indices quantify the magnitude and extent of visual field sensitivity. Translational Relevance: VFMA assists with the interpretation of visual field data from any perimetric device and any test location pattern. Topographic models and volumetric indices are suitable for diagnosis, monitoring of field loss, patient counseling, and endpoints in therapeutic trials. PMID:25938002
NASA Astrophysics Data System (ADS)
Guadagnini, A.; Riva, M.; Dell'Oca, A.
2017-12-01
We propose to ground sensitivity of uncertain parameters of environmental models on a set of indices based on the main (statistical) moments, i.e., mean, variance, skewness and kurtosis, of the probability density function (pdf) of a target model output. This enables us to perform Global Sensitivity Analysis (GSA) of a model in terms of multiple statistical moments and yields a quantification of the impact of model parameters on features driving the shape of the pdf of model output. Our GSA approach includes the possibility of being coupled with the construction of a reduced complexity model that allows approximating the full model response at a reduced computational cost. We demonstrate our approach through a variety of test cases. These include a commonly used analytical benchmark, a simplified model representing pumping in a coastal aquifer, a laboratory-scale tracer experiment, and the migration of fracturing fluid through a naturally fractured reservoir (source) to reach an overlying formation (target). Our strategy allows discriminating the relative importance of model parameters to the four statistical moments considered. We also provide an appraisal of the error associated with the evaluation of our sensitivity metrics by replacing the original system model through the selected surrogate model. Our results suggest that one might need to construct a surrogate model with increasing level of accuracy depending on the statistical moment considered in the GSA. The methodological framework we propose can assist the development of analysis techniques targeted to model calibration, design of experiment, uncertainty quantification and risk assessment.
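A bare-bones Monte Carlo version of the moment-based GSA idea described above is sketched below: for each uncertain parameter, conditional moments of the output are compared with their unconditional values. The Ishigami test function stands in for the environmental model, and the simple binning/averaging scheme is an illustrative choice rather than the authors' formulation.

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(1)
N, n_bins = 20000, 20
X = rng.uniform(-np.pi, np.pi, size=(N, 3))                                    # uncertain parameters
Y = np.sin(X[:, 0]) + 7*np.sin(X[:, 1])**2 + 0.1*X[:, 2]**4*np.sin(X[:, 0])    # Ishigami benchmark

def moments(y):
    return np.array([y.mean(), y.var(), skew(y), kurtosis(y)])

ref = moments(Y)
for i in range(3):
    edges = np.quantile(X[:, i], np.linspace(0, 1, n_bins + 1))
    which = np.digitize(X[:, i], edges[1:-1])          # bin index 0..n_bins-1 for each sample
    cond = np.array([moments(Y[which == b]) for b in range(n_bins)])
    # Moment-based sensitivity: average absolute shift of the conditional moments away from
    # the unconditional ones (normalization omitted for brevity)
    print(f"x{i+1}  mean/var/skew/kurt sensitivity:",
          np.round(np.mean(np.abs(cond - ref), axis=0), 3))
```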
NASA Astrophysics Data System (ADS)
Döpking, Sandra; Plaisance, Craig P.; Strobusch, Daniel; Reuter, Karsten; Scheurer, Christoph; Matera, Sebastian
2018-01-01
In the last decade, first-principles-based microkinetic modeling has been developed into an important tool for a mechanistic understanding of heterogeneous catalysis. A commonly known, but hitherto barely analyzed issue in this kind of modeling is the presence of sizable errors from the use of approximate Density Functional Theory (DFT). We here address the propagation of these errors to the catalytic turnover frequency (TOF) by global sensitivity and uncertainty analysis. Both analyses require the numerical quadrature of high-dimensional integrals. To achieve this efficiently, we utilize and extend an adaptive sparse grid approach and exploit the confinement of the strongly non-linear behavior of the TOF to local regions of the parameter space. We demonstrate the methodology on a model of the oxygen evolution reaction at the Co3O4 (110)-A surface, using a maximum entropy error model that imposes nothing but reasonable bounds on the errors. For this setting, the DFT errors lead to an absolute uncertainty of several orders of magnitude in the TOF. We nevertheless find that it is still possible to draw conclusions from such uncertain models about the atomistic aspects controlling the reactivity. A comparison with derivative-based local sensitivity analysis instead reveals that this more established approach provides incomplete information. Since the adaptive sparse grids allow for the evaluation of the integrals with only a modest number of function evaluations, this approach opens the way for a global sensitivity analysis of more complex models, for instance, models based on kinetic Monte Carlo simulations.
NASA Astrophysics Data System (ADS)
Hanoca, P.; Ramakrishna, H. V.
2018-03-01
This work develops a methodology to model and simulate TEHD (thermo-elastohydrodynamic lubrication) using the sequential application of CFD and CSD. The FSI analyses are carried out using ANSYS Workbench. In this analysis, the steady-state, 3D Navier-Stokes equations are solved along with the energy equation. Liquid properties are introduced in which the viscosity and density are functions of pressure and temperature. The cavitation phenomenon is included in the analysis. Numerical analyses have been carried out at different speeds and surface temperatures. During the analysis, it was found that as speed increases, hydrodynamic pressures also increase. The pressure profile obtained from the Roelands equation is more sensitive to temperature than that from the Barus equation. The stress distributions identify the critical locations in the bearing structure. The developed method is capable of giving new insight into the physics of elastohydrodynamic lubrication.
Stalpers, Dewi; de Brouwer, Brigitte J M; Kaljouw, Marian J; Schuurmans, Marieke J
2015-04-01
To systematically review the literature on relationships between characteristics of the nurse work environment and five nurse-sensitive patient outcomes in hospitals. The search was performed in Medline (PubMed), Cochrane, Embase, and CINAHL. Included were quantitative studies published from 2004 to 2012 that examined associations between work environment and the following patient outcomes: delirium, malnutrition, pain, patient falls and pressure ulcers. The Dutch version of Cochrane's critical appraisal instrument was used to assess the methodological quality of the included studies. Of the initial 1120 studies, 29 were included in the review. Nurse staffing was inversely related to patient falls; more favorable staffing hours were associated with fewer fall incidents. Mixed results were shown for nurse staffing in relation to pressure ulcers. Characteristics of work environment other than nurse staffing that showed significant effects were: (i) collaborative relationships; positively perceived communication between nurses and physicians was associated with fewer patient falls and lower rates of pressure ulcers, (ii) nurse education; higher levels of education were related to fewer patient falls and (iii) nursing experience; lower levels of experience were related to more patient falls and higher rates of pressure ulcers. No eligible studies were found regarding delirium and malnutrition, and only one study found that favorable staffing was related to better pain management. Our findings show that there is evidence on associations between work environment and nurse-sensitive patient outcomes. However, the results are equivocal and studies often do not provide clear conclusions. A quantitative meta-analysis was not feasible due to methodological issues in the primary studies (for example, poorly described samples). The diversity in outcome measures and the majority of cross-sectional designs make quantitative analysis even more difficult. In the future, well-described research designs of a longitudinal character will be needed in this field of work environment and nursing quality. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Bressan, Lucas P.; do Nascimento, Paulo Cícero; Schmidt, Marcella E. P.; Faccin, Henrique; de Machado, Leandro Carvalho; Bohrer, Denise
2017-02-01
A novel method was developed to determine low molecular weight polycyclic aromatic hydrocarbons in aqueous leachates from soils and sediments using a salting-out assisted liquid-liquid extraction, synchronous fluorescence spectrometry and a multivariate calibration technique. Several experimental parameters were controlled and the optimum conditions were: sodium carbonate as the salting-out agent at a concentration of 2 mol L-1, 3 mL of acetonitrile as extraction solvent, 6 mL of aqueous leachate, vortexing for 5 min and centrifuging at 4000 rpm for 5 min. The partial least squares calibration was optimized to the lowest values of root mean squared error and five latent variables were chosen for each of the targeted compounds. The regression coefficients for the true versus predicted concentrations were higher than 0.99. Figures of merit for the multivariate method were calculated, namely sensitivity, multivariate detection limit and multivariate quantification limit. The selectivity was also evaluated and other polycyclic aromatic hydrocarbons did not interfere in the analysis. Likewise, high performance liquid chromatography was used as a comparative methodology, and the regression analysis between the methods showed no statistical difference (t-test). The proposed methodology was applied to soils and sediments of a Brazilian river and the recoveries ranged from 74.3% to 105.8%. Overall, the proposed methodology was suitable for the targeted compounds, showing that the extraction method can be applied to spectrofluorometric analysis and that the multivariate calibration is also suitable for these compounds in leachates from real samples.
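The multivariate-calibration step can be illustrated with a small scikit-learn sketch using partial least squares with five latent variables, as in the study; the synthetic "spectra" and the cross-validation layout are assumptions for demonstration only.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(2)
# Placeholder synchronous-fluorescence spectra: rows = leachate extracts, columns = wavelengths
wavelengths = np.arange(120)
y = rng.uniform(1, 50, size=40)                        # concentration of one target PAH (arbitrary units)
band = np.exp(-0.5 * ((wavelengths - 60) / 8.0) ** 2)  # concentration-dependent emission band
X = np.outer(y, band) + rng.normal(0, 0.5, size=(40, 120))

pls = PLSRegression(n_components=5)                    # five latent variables, as selected in the study
y_cv = cross_val_predict(pls, X, y, cv=5).ravel()
print("cross-validated RMSE:", np.sqrt(mean_squared_error(y, y_cv)))
print("R2 (true vs. predicted):", r2_score(y, y_cv))
```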
Motivating Students for Project-based Learning for Application of Research Methodology Skills.
Tiwari, Ranjana; Arya, Raj Kumar; Bansal, Manoj
2017-12-01
Project-based learning (PBL) motivates students to learn research methodology skills. It is a way to engage them and give them ownership over their own learning. The aim of this study is to use PBL for the application of research methodology skills for better learning by encouraging an all-inclusive approach to teaching and learning rather than an individualized, tailored approach. The present study was carried out for MBBS 6th- and 7th-semester students of community medicine. Students and faculty were sensitized to PBL and the components of research methodology skills. They worked in small groups. The students were asked to fill in a student feedback questionnaire and the faculty were asked to fill in a faculty feedback questionnaire. Both questionnaires were assessed on a 5-point Likert scale. After the projects were submitted, document analysis was done. A total of 99 students of the 6th and 7th semesters participated in PBL. About 90.91% of students agreed that PBL should be continued in subsequent batches; 73.74% felt satisfied and motivated by PBL, whereas 76.77% felt that they would be able to use research methodology in the near future. PBL requires considerable knowledge, effort, persistence, and self-regulation on the part of the students. They need to devise plans, gather information, and evaluate both their findings and their approach. The facilitator plays a critical role in helping students in the process by shaping opportunities for learning, guiding students' thinking, and helping them construct new understanding.
NASA Astrophysics Data System (ADS)
Klein, Iris M.; Rousseau, Alain N.; Frigon, Anne; Freudiger, Daphné; Gagnon, Patrick
2016-06-01
Probable maximum snow accumulation (PMSA) is one of the key variables used to estimate the spring probable maximum flood (PMF). A robust methodology for evaluating the PMSA is imperative so the ensuing spring PMF is a reasonable estimation. This is of particular importance in times of climate change (CC) since it is known that solid precipitation in Nordic landscapes will in all likelihood change over the next century. In this paper, a PMSA methodology based on simulated data from regional climate models is developed. Moisture maximization represents the core concept of the proposed methodology; precipitable water being the key variable. Results of stationarity tests indicate that CC will affect the monthly maximum precipitable water and, thus, the ensuing ratio to maximize important snowfall events. Therefore, a non-stationary approach is used to describe the monthly maximum precipitable water. Outputs from three simulations produced by the Canadian Regional Climate Model were used to give first estimates of potential PMSA changes for southern Quebec, Canada. A sensitivity analysis of the computed PMSA was performed with respect to the number of time-steps used (so-called snowstorm duration) and the threshold for a snowstorm to be maximized or not. The developed methodology is robust and a powerful tool to estimate the relative change of the PMSA. Absolute results are in the same order of magnitude as those obtained with the traditional method and observed data; but are also found to depend strongly on the climate projection used and show spatial variability.
NASA Technical Reports Server (NTRS)
Melcher, Kevin J.; Cruz, Jose A.; Johnson, Stephen B.; Lo, Yunnhon
2015-01-01
This paper describes a quantitative methodology for bounding the false positive (FP) and false negative (FN) probabilities associated with a human-rated launch vehicle abort trigger (AT) that includes sensor data qualification (SDQ). In this context, an AT is a hardware and software mechanism designed to detect the existence of a specific abort condition. Also, SDQ is an algorithmic approach used to identify sensor data suspected of being corrupt so that suspect data does not adversely affect an AT's detection capability. The FP and FN methodologies presented here were developed to support estimation of the probabilities of loss of crew and loss of mission for the Space Launch System (SLS) which is being developed by the National Aeronautics and Space Administration (NASA). The paper provides a brief overview of system health management as being an extension of control theory; and describes how ATs and the calculation of FP and FN probabilities relate to this theory. The discussion leads to a detailed presentation of the FP and FN methodology and an example showing how the FP and FN calculations are performed. This detailed presentation includes a methodology for calculating the change in FP and FN probabilities that result from including SDQ in the AT architecture. To avoid proprietary and sensitive data issues, the example incorporates a mixture of open literature and fictitious reliability data. Results presented in the paper demonstrate the effectiveness of the approach in providing quantitative estimates that bound the probability of a FP or FN abort determination.
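The flavor of the FP/FN bounding calculation, and of the effect of adding SDQ, can be conveyed by a toy redundancy model. The 2-of-3 voting scheme, the per-channel probabilities and the worst-case treatment of corrupt data are all assumptions of this sketch, not the SLS abort-trigger architecture.

```python
# Fictitious per-channel probabilities, echoing the paper's mix of open-literature and
# fictitious reliability data (all values below are placeholders)
p_spurious = 1e-4   # channel falsely indicates the abort condition
p_miss     = 1e-3   # channel fails to indicate a real abort condition
p_corrupt  = 5e-4   # channel delivers corrupt data

def two_of_three(q):
    """P(at least 2 of 3 independent channels indicate an abort), each with probability q."""
    return 3 * q**2 * (1 - q) + q**3

def fp_fn(sdq_enabled):
    # Assumption: a corrupt channel never yields a valid true indication; without SDQ its
    # corrupt reading may also look like a spurious abort indication, with SDQ it is excluded.
    q_false = p_spurious * (1 - p_corrupt) if sdq_enabled else p_spurious + p_corrupt
    q_true = (1 - p_miss) * (1 - p_corrupt)
    return two_of_three(q_false), 1 - two_of_three(q_true)

for flag in (False, True):
    fp, fn = fp_fn(flag)
    print(f"SDQ={flag}:  FP = {fp:.3e}   FN = {fn:.3e}")
```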
Cost of diabetic eye, renal and foot complications: a methodological review.
Schirr-Bonnans, Solène; Costa, Nadège; Derumeaux-Burel, Hélène; Bos, Jérémy; Lepage, Benoît; Garnault, Valérie; Martini, Jacques; Hanaire, Hélène; Turnin, Marie-Christine; Molinier, Laurent
2017-04-01
Diabetic retinopathy (DR), diabetic kidney disease (DKD) and diabetic foot ulcer (DFU) represent a public health and economic concern that may be assessed with cost-of-illness (COI) studies. (1) To review COI studies published between 2000 and 2015, about DR, DKD and DFU; (2) to analyse methods used. Disease definition, epidemiological approach, perspective, type of costs, activity data sources, cost valuation, sensitivity analysis, cost discounting and presentation of costs may be described in COI studies. Each reviewed study was assessed with a methodological grid including these nine items. The five following items have been detailed in the reviewed studies: epidemiological approach (59 % of studies described it), perspective (75 %), type of costs (98 %), activity data sources (91 %) and cost valuation (59 %). The disease definition and the presentation of results were detailed in fewer studies (respectively 50 and 46 %). In contrast, sensitivity analysis was only performed in 14 % of studies and cost discounting in 7 %. Considering the studies showing an average cost per patient and per year with a societal perspective, DR cost estimates were US $2297 (range 5-67,486), DKD cost ranged from US $1095 to US $16,384, and DFU cost was US $10,604 (range 1444-85,718). This review reinforces the need to adequately describe the method to facilitate literature comparisons and projections. It also recalls that COI studies represent complementary tools to cost-effectiveness studies to help decision makers in the allocation of economic resources for the management of DR, DKD and DFU.
Lemonakis, Nikolaos; Skaltsounis, Alexios-Leandros; Tsarbopoulos, Anthony; Gikas, Evagelos
2016-01-15
A multistage optimization of all the parameters affecting detection/response in an LTQ-orbitrap analyzer was performed, using a design of experiments methodology. The signal intensity, a critical issue for mass analysis, was investigated and the optimization process was completed in three successive steps, taking into account the three main regions of an orbitrap, the ion generation, the ion transmission and the ion detection regions. Oleuropein and hydroxytyrosol were selected as the model compounds. Overall, applying this methodology, the sensitivity was increased by more than 24% and the resolution by more than 6.5%, whereas the elapsed scan time was reduced to nearly half. A high-resolution LTQ Orbitrap Discovery mass spectrometer was used for the determination of the analytes of interest. Thus, oleuropein and hydroxytyrosol were infused via the instrument's syringe pump and analyzed employing electrospray ionization (ESI) in the negative high-resolution full-scan ion mode. The parameters of the three main regions of the LTQ-orbitrap were independently optimized in terms of maximum sensitivity. In this context, factorial design, response surface model and Plackett-Burman experiments were performed, and analysis of variance was carried out to evaluate the validity of the statistical model and to determine the most significant parameters for signal intensity. The optimum MS conditions for each analyte were summarized and the method's optimum condition was achieved by maximizing the desirability function. Our observations showed good agreement between the predicted optimum response and the responses collected at the predicted optimum conditions. Copyright © 2015 Elsevier B.V. All rights reserved.
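A generic sketch of the workflow described above (designed experiment, fitted response surface, desirability maximization) is shown below. The two coded factors, the simulated instrument response and the desirability limits are placeholders, not the actual LTQ-Orbitrap tune parameters.

```python
import numpy as np
from itertools import product
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Hypothetical two-factor tune (factors coded to [-1, 1]); stand-in for measured signal intensity
def run_instrument(x1, x2):
    return 100 - 15*(x1 - 0.3)**2 - 10*(x2 + 0.2)**2 + 4*x1*x2 + rng.normal(0, 1)

# Full factorial design with a centre point
design = np.array(sorted(product([-1, 0, 1], repeat=2)))
y = np.array([run_instrument(*d) for d in design])

# Fit a quadratic response surface by least squares
Z = np.column_stack([np.ones(len(design)), design[:, 0], design[:, 1],
                     design[:, 0]**2, design[:, 1]**2, design[:, 0]*design[:, 1]])
beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
surface = lambda x: beta @ np.array([1.0, x[0], x[1], x[0]**2, x[1]**2, x[0]*x[1]])

# Larger-is-better desirability, maximized over the coded factor region
desirability = lambda x: np.clip((surface(x) - 60.0) / (105.0 - 60.0), 0, 1)
res = minimize(lambda x: -desirability(x), x0=[0.0, 0.0], bounds=[(-1, 1), (-1, 1)])
print("optimum (coded factors):", np.round(res.x, 2), "predicted desirability:", round(-res.fun, 2))
```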
NASA Technical Reports Server (NTRS)
Bast, Callie Corinne Scheidt
1994-01-01
This thesis presents the on-going development of methodology for a probabilistic material strength degradation model. The probabilistic model, in the form of a postulated randomized multifactor equation, provides for quantification of uncertainty in the lifetime material strength of aerospace propulsion system components subjected to a number of diverse random effects. This model is embodied in the computer program entitled PROMISS, which can include up to eighteen different effects. Presently, the model includes four effects that typically reduce lifetime strength: high temperature, mechanical fatigue, creep, and thermal fatigue. Statistical analysis was conducted on experimental Inconel 718 data obtained from the open literature. This analysis provided regression parameters for use as the model's empirical material constants, thus calibrating the model specifically for Inconel 718. Model calibration was carried out for four variables, namely, high temperature, mechanical fatigue, creep, and thermal fatigue. Methodology to estimate standard deviations of these material constants for input into the probabilistic material strength model was developed. Using the current version of PROMISS, entitled PROMISS93, a sensitivity study for the combined effects of mechanical fatigue, creep, and thermal fatigue was performed. Results, in the form of cumulative distribution functions, illustrated the sensitivity of lifetime strength to any current value of an effect. In addition, verification studies comparing a combination of mechanical fatigue and high temperature effects by model to the combination by experiment were conducted. Thus, for Inconel 718, the basic model assumption of independence between effects was evaluated. Results from this limited verification study strongly supported this assumption.
Mehri, M
2012-12-01
An artificial neural network (ANN) approach was used to develop feed-forward multilayer perceptron models to estimate the nutritional requirements of digestible lysine (dLys), methionine (dMet), and threonine (dThr) in broiler chicks. Sixty data lines representing response of the broiler chicks during 3 to 16 d of age to dietary levels of dLys (0.88-1.32%), dMet (0.42-0.58%), and dThr (0.53-0.87%) were obtained from literature and used to train the networks. The prediction values of ANN were compared with those of response surface methodology to evaluate the fitness of these 2 methods. The models were tested using R(2), mean absolute deviation, mean absolute percentage error, and absolute average deviation. The random search algorithm was used to optimize the developed ANN models to estimate the optimal values of dietary dLys, dMet, and dThr. The ANN models were used to assess the relative importance of each dietary input on the bird performance using sensitivity analysis. The statistical evaluations revealed the higher accuracy of ANN to predict the bird performance compared with response surface methodology models. The optimization results showed that the maximum BW gain may be obtained with dietary levels of 1.11, 0.51, and 0.78% of dLys, dMet, and dThr, respectively. Minimum feed conversion ratio may be achieved with dietary levels of 1.13, 0.54, 0.78% of dLys, dMet, and dThr, respectively. The sensitivity analysis on the models indicated that dietary Lys is the most important variable in the growth performance of the broiler chicks, followed by dietary Thr and Met. The results of this research revealed that the experimental data of a response-surface-methodology design could be successfully used to develop the well-designed ANN for pattern recognition of bird growth and optimization of nutritional requirements. The comparison between the 2 methods also showed that the statistical methods may have little effect on the ideal ratios of dMet and dThr to dLys in broiler chicks using multivariate optimization.
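A compact sketch of the ANN-plus-random-search step is given below using scikit-learn. The synthetic dose-response surface, network size and search ranges are assumptions standing in for the 60 literature data lines used in the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)

# Synthetic stand-in for the literature data: inputs are dietary dLys, dMet, dThr (%)
X = np.column_stack([rng.uniform(0.88, 1.32, 60),
                     rng.uniform(0.42, 0.58, 60),
                     rng.uniform(0.53, 0.87, 60)])
# Placeholder BW-gain response with an interior optimum (not the published data)
bwg = (600 - 800*(X[:, 0] - 1.11)**2 - 2000*(X[:, 1] - 0.51)**2
       - 900*(X[:, 2] - 0.78)**2 + rng.normal(0, 5, 60))

net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, bwg)

# Random search over the admissible diet space for the maximum predicted BW gain
cand = np.column_stack([rng.uniform(0.88, 1.32, 20000),
                        rng.uniform(0.42, 0.58, 20000),
                        rng.uniform(0.53, 0.87, 20000)])
best = cand[np.argmax(net.predict(cand))]
print("estimated optimum dLys/dMet/dThr (%):", np.round(best, 3))
```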
ERIC Educational Resources Information Center
Lindberg, Lene; Fransson, Mari; Forslund, Tommie; Springer, Lydia; Granqvist, Pehr
2017-01-01
Background: Scientific knowledge on the quality of caregiving/maternal sensitivity among mothers with mild intellectual disabilities (ID) is limited and subject to many methodological shortcomings, but seems to suggest that these mothers are less sensitive than mothers without intellectual disabilities. Methods: In this matched-comparison study…
The sperm motility pattern in ecotoxicological tests. The CRYO-Ecotest as a case study.
Fabbrocini, Adele; D'Adamo, Raffaele; Del Prete, Francesco; Maurizio, Daniela; Specchiulli, Antonietta; Oliveira, Luis F J; Silvestri, Fausto; Sansone, Giovanni
2016-01-01
Changes in environmental stressors inevitably lead to an increasing need for innovative and more flexible monitoring tools. The aim of this work has been the characterization of the motility pattern of cryopreserved sea bream semen after exposure to a dumpsite leachate sample, in order to identify the most representative parameters to be used as endpoints in an ecotoxicological bioassay. Sperm motility has been evaluated both by visual and by computer-assisted analysis; parameters concerning motility on activation and those describing it in the period after activation (duration parameters) have been assessed and compared in terms of sensitivity, reliability and methodology of assessment by means of multivariate analyses. The EC50 values of the evaluated endpoints ranged between 2.3 and 4.5 ml/L, except for the total motile percentage (aTM, 7.0 ml/L), which proved to be the least sensitive among all the tested parameters. According to the multivariate analyses, a difference in sensitivity between "activation" endpoints and "duration" ones can be inferred; by contrast, endpoints seem to be equally informative whether describing total motile sperm or the rapid sub-population, and the assessment methodology does not appear to be discriminating. In conclusion, the CRYO-Ecotest is a multi-endpoint bioassay that can be considered a promising and innovative ecotoxicological tool, characterized by high plasticity, as its endpoints can be easily tailored each time to the different needs of environmental quality assessment programs. Copyright © 2015 Elsevier Inc. All rights reserved.
López-Cortés, Rubén; Formigo, Jacobo; Reboiro-Jato, Miguel; Fdez-Riverola, Florentino; Blanco, Francisco J; Lodeiro, Carlos; Oliveira, Elisabete; Capelo, J L; Santos, H M
2016-04-01
The aim of this work is to develop a nanoparticle-based methodology to identify diagnostic biomarkers for knee osteoarthritis (KOA) through the use of matrix-assisted laser desorption ionization time-of-flight mass spectrometry profiling. Urine samples used for this study were obtained from KOA patients (42 patients), patients with prostheses (58 patients), and controls (36 individuals) with no history of joint disease. Gold-nanoparticle MALDI-based urine profiling was optimized and then applied to the 136 individuals. The Jaccard index and 10 different classifiers applied to the MALDI MS datasets were used to identify potential biomarkers. Then, the specificity and sensitivity of the method were evaluated. The presence of ten m/z signals as potential biomarkers in the healthy versus non-healthy comparison suggests that patients (KOA and prosthesis) are differentiable from the healthy volunteers through profiling. The automatic diagnostic study confirmed these preliminary conclusions. The sensitivity and the specificity of the urine profiling criteria reported here, achieved by the C4.5 classifier, are 97% and 69%, respectively. Thus, the utility of the method proposed in this work as an additional fast, inexpensive and robust test for KOA diagnosis is confirmed. When the proposed method is compared with those used in common practice, its sensitivity is the highest, implying a low false-negative rate for diagnosing KOA patients in the population studied. Its specificity is lower but within the range accepted for diagnostic purposes. Copyright © 2016. Published by Elsevier B.V.
Cieslak, Wendy; Pap, Kathleen; Bunch, Dustin R; Reineks, Edmunds; Jackson, Raymond; Steinle, Roxanne; Wang, Sihe
2013-02-01
Chromium (Cr), a trace metal element, is implicated in diabetes and cardiovascular disease. A hypochromic state has been associated with poor blood glucose control and unfavorable lipid metabolism. Sensitive and accurate measurement of blood chromium is very important to assess the chromium nutritional status. However, interferents in biological matrices and contamination make the sensitive analysis challenging. The primary goal of this study was to develop a highly sensitive method for quantification of total Cr in whole blood by inductively coupled plasma mass spectrometry (ICP-MS) and to validate the reference interval in a local healthy population. This method was developed on an ICP-MS with a collision/reaction cell. Interference was minimized using both kinetic energy discrimination between the quadrupole and hexapole and a selective collision gas (helium). Reference interval was validated in whole blood samples (n=51) collected in trace element free EDTA tubes from healthy adults (12 males, 39 females), aged 19-64 years (38.8±12.6), after a minimum of 8 h fasting. Blood samples were aliquoted into cryogenic vials and stored at -70 °C until analysis. The assay linearity was 3.42 to 1446.59 nmol/L with an accuracy of 87.7 to 99.8%. The high sensitivity was achieved by minimization of interference through selective kinetic energy discrimination and selective collision using helium. The reference interval for total Cr using a non-parametric method was verified to be 3.92 to 7.48 nmol/L. This validated ICP-MS methodology is highly sensitive and selective for measuring total Cr in whole blood. Copyright © 2012 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
Mathes, Tim; Jacobs, Esther; Morfeld, Jana-Carina; Pieper, Dawid
2013-09-30
The number of Health Technology Assessment (HTA) agencies is increasing. One component of HTAs is economic aspects. To incorporate economic aspects, economic evaluations are commonly performed. A convergence of recommendations for methods of health economic evaluations between international HTA agencies would facilitate the adaptation of results to different settings and avoid unnecessary expense. A first step in this direction is a detailed analysis of existing similarities and differences in recommendations to identify potential for harmonization. The objective is to provide an overview and comparison of the methodological recommendations of international HTA agencies for economic evaluations. The webpages of 127 international HTA agencies were searched for guidelines containing recommendations on methods for the preparation of economic evaluations. Additionally, the HTA agencies were asked to provide information on methods for economic evaluations. Recommendations of the included guidelines were extracted into standardized tables according to 13 methodological aspects. All process steps were performed independently by two reviewers. Finally, 25 publications from 14 HTA agencies were included in the analysis. Methods for economic evaluations vary widely. The greatest accordance could be found for the type of analysis and the comparator: cost-utility analyses or cost-effectiveness analyses are recommended, and the comparator should consistently be usual care. The greatest differences were found in the recommendations on the measurement/sources of effects, on discounting and on sensitivity analysis. The main difference regarding effects is the focus on either efficacy or effectiveness. Recommended discount rates range from 1.5%-5% for effects and 3%-5% for costs, whereby it is mostly recommended to use the same rate for costs and effects. With respect to sensitivity analysis, the main difference is that oftentimes either the probabilistic or the deterministic approach is recommended exclusively. Methods for modeling are only described vaguely, mainly with the rationale that the "appropriate model" depends on the decision problem. For all other aspects a comparison is challenging, as recommendations vary in their level of detail and the issues addressed. There is considerable unexplained variance in recommendations. Further effort is needed to harmonize methods for preparing economic evaluations.
NASA Astrophysics Data System (ADS)
Jiang, Shan; Wang, Fang; Shen, Luming; Liao, Guiping; Wang, Lin
2017-03-01
Spectrum technology has been widely used in non-destructive crop testing and diagnosis for crop information acquisition. Since the spectrum covers a wide range of bands, it is of critical importance to extract the sensitive bands. In this paper, we propose a methodology to extract the sensitive spectrum bands of rapeseed using multiscale multifractal detrended fluctuation analysis. The obtained sensitive bands are relatively robust in the range of 534 nm-574 nm. Further, by using the multifractal parameter (Hurst exponent) of the extracted sensitive bands, we propose a prediction model to forecast the Soil and Plant Analyzer Development (SPAD) values (often used as a parameter to indicate chlorophyll content) and an identification model to distinguish different planting patterns. Three vegetation indices (VIs) based on previous work are used for comparison. Three evaluation indicators, namely the root mean square error, the correlation coefficient, and the relative error, employed in the SPAD value prediction model all demonstrate that our Hurst exponent has the best performance. Four rapeseed compound planting factors, namely seeding method, planting density, fertilizer type, and weed control method, are considered in the identification model. The Youden indices calculated by the random decision forest method and the K-nearest neighbor method show that our Hurst exponent is superior to the other three VIs, and to their combination, for the seeding method factor. In addition, there is no significant difference among the five features for the other three planting factors. This interesting finding suggests that transplanting and direct seeding make a notable difference in the growth of rapeseed.
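The Hurst-exponent feature at the heart of the method can be illustrated with a first-order detrended fluctuation analysis (DFA) sketch; the reflectance series is synthetic and the single-scale DFA below is a simplification of the multiscale multifractal version used in the paper.

```python
import numpy as np

def dfa_hurst(series, scales=(8, 16, 32, 64)):
    """First-order detrended fluctuation analysis; returns a Hurst-type scaling exponent."""
    profile = np.cumsum(series - np.mean(series))
    flucts = []
    for s in scales:
        n_seg = len(profile) // s
        f2 = []
        for k in range(n_seg):
            seg = profile[k*s:(k+1)*s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear detrending
            f2.append(np.mean((seg - trend)**2))
        flucts.append(np.sqrt(np.mean(f2)))
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope

# Placeholder reflectance sequence standing in for samples of the 534-574 nm sensitive band
rng = np.random.default_rng(5)
signal = np.cumsum(rng.normal(size=512))   # strongly correlated series; DFA exponent near 1.5
print("estimated Hurst-type exponent:", round(dfa_hurst(signal), 2))
```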
Laserson, K F; Petralanda, I; Hamlin, D M; Almera, R; Fuentes, M; Carrasquel, A; Barker, R H
1994-02-01
We have examined the reproducibility, sensitivity, and specificity of detecting Plasmodium falciparum using the polymerase chain reaction (PCR) and the species-specific probe pPF14 under field conditions in the Venezuelan Amazon. Up to eight samples were field collected from each of 48 consenting Amerindians presenting with symptoms of malaria. Sample processing and analysis were performed at the Centro Amazonico para la Investigacion y Control de Enfermedades Tropicales Simon Bolivar. A total of 229 samples from 48 patients were analyzed by PCR methods using four different P. falciparum-specific probes and one P. vivax-specific probe, and by conventional microscopy. Samples in which results from PCR and microscopy differed were reanalyzed at a higher sensitivity by microscopy. Results suggest that microscopy-negative, PCR-positive samples are true positives, and that microscopy-positive and PCR-negative samples are true negatives. The sensitivity of the DNA probe/PCR method was 78% and its specificity was 97%. The positive predictive value of the PCR method was 88%, and the negative predictive value was 95%. Through the analysis of multiple blood samples from each individual, the DNA probe/PCR methodology was found to have an inherent reproducibility that was highly statistically significant.
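The reported diagnostic figures follow from a standard 2x2 table. The sketch below shows the arithmetic on a hypothetical breakdown of the 229 samples; the per-sample counts are placeholders chosen only so the derived values land near those reported, since the abstract does not give the raw table.

```python
# Hypothetical 2x2 confusion table (counts are placeholders, not the study's raw data)
tp, fn = 39, 11     # PCR-positive / PCR-negative among truly positive samples
fp, tn = 5, 174     # PCR-positive / PCR-negative among truly negative samples

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)            # positive predictive value
npv = tn / (tn + fn)            # negative predictive value
print(f"sensitivity={sensitivity:.0%} specificity={specificity:.0%} PPV={ppv:.0%} NPV={npv:.0%}")
```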
NASA Astrophysics Data System (ADS)
Melchiorre, C.; Castellanos Abella, E. A.; van Westen, C. J.; Matteucci, M.
2011-04-01
This paper describes a procedure for landslide susceptibility assessment based on artificial neural networks, and focuses on the estimation of the prediction capability, robustness, and sensitivity of susceptibility models. The study is carried out in the Guantanamo Province of Cuba, where 186 landslides were mapped using photo-interpretation. Twelve conditioning factors were mapped, including geomorphology, geology, soils, land use, slope angle, slope direction, internal relief, drainage density, distance from roads and faults, rainfall intensity, and ground peak acceleration. A methodology was used that subdivided the database into 3 subsets: a training set was used for updating the weights, a validation set was used to stop the training procedure when the network started losing generalization capability, and a test set was used to calculate the performance of the network. A 10-fold cross-validation was performed in order to show that the results are repeatable. The prediction capability, the robustness analysis, and the sensitivity analysis were tested on 10 mutually exclusive datasets. The results show that by means of artificial neural networks it is possible to obtain models with high prediction capability and high robustness, and that an exploration of the effect of the individual variables is possible, even though such networks are often considered black-box models.
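The three-subset training scheme with early stopping and 10-fold cross-validation can be sketched with scikit-learn as below; the synthetic conditioning factors and network size are assumptions, not the Guantanamo dataset or the authors' architecture.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)
X = rng.normal(size=(1000, 12))                                          # 12 conditioning factors (synthetic)
y = (X[:, 0] + 0.8*X[:, 3] + rng.normal(0, 1, 1000) > 1).astype(int)     # landslide / no landslide

aucs = []
for train, test in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(X, y):
    # early_stopping carves an internal validation set out of the training data,
    # mirroring the training/validation/test subdivision described above
    clf = MLPClassifier(hidden_layer_sizes=(10,), early_stopping=True,
                        validation_fraction=0.2, max_iter=2000, random_state=0)
    clf.fit(X[train], y[train])
    aucs.append(roc_auc_score(y[test], clf.predict_proba(X[test])[:, 1]))
print("10-fold AUC: %.2f +/- %.2f" % (np.mean(aucs), np.std(aucs)))
```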
An incremental strategy for calculating consistent discrete CFD sensitivity derivatives
NASA Technical Reports Server (NTRS)
Korivi, Vamshi Mohan; Taylor, Arthur C., III; Newman, Perry A.; Hou, Gene W.; Jones, Henry E.
1992-01-01
In this preliminary study involving advanced computational fluid dynamics (CFD) codes, an incremental formulation, also known as the 'delta' or 'correction' form, is presented for solving the very large sparse systems of linear equations associated with aerodynamic sensitivity analysis. For typical problems in 2D, a direct solution method can be applied to these linear equations in either the standard or the incremental form, in which case the two are equivalent. Iterative methods appear to be needed for future 3D applications, however, because direct solver methods require much more computer memory than is currently available. Iterative methods for solving these equations in the standard form result in certain difficulties, such as ill-conditioning of the coefficient matrix, which can be overcome when these equations are cast in the incremental form; these and other benefits are discussed. The methodology is successfully implemented and tested in 2D using an upwind, cell-centered, finite volume formulation applied to the thin-layer Navier-Stokes equations. Results are presented for two laminar sample problems: (1) transonic flow through a double-throat nozzle; and (2) flow over an isolated airfoil.
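The incremental ('delta' or 'correction') form can be demonstrated on a small dense system: rather than solving A x = b directly, an approximate operator is applied repeatedly to the residual. The test matrix and Jacobi-style approximate operator below are illustrative stand-ins for the CFD sensitivity equations.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50
A = np.eye(n) * 4 + rng.normal(0, 0.05, (n, n))   # stand-in for the sensitivity-equation matrix
b = rng.normal(size=n)                            # right-hand side
M = np.diag(np.diag(A))                           # cheap approximate operator (Jacobi-style)

# Incremental ("delta"/correction) form: repeatedly solve M * dx = b - A x and update x
x = np.zeros(n)
for it in range(200):
    r = b - A @ x                  # current residual of the sensitivity equations
    x += np.linalg.solve(M, r)     # correction step using the approximate operator
    if np.linalg.norm(r) < 1e-10 * np.linalg.norm(b):
        break

x_direct = np.linalg.solve(A, b)
print("iterations:", it + 1,
      "relative error vs. direct solve:", np.linalg.norm(x - x_direct) / np.linalg.norm(x_direct))
```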
Novel methodology for pharmaceutical expenditure forecast.
Vataire, Anne-Lise; Cetinsoy, Laurent; Aballéa, Samuel; Rémuzat, Cécile; Urbinati, Duccio; Kornfeld, Åsa; Mzoughi, Olfa; Toumi, Mondher
2014-01-01
The value appreciation of new drugs across countries today features a disruption that is making the historical data that are used for forecasting pharmaceutical expenditure poorly reliable. Forecasting methods rarely addressed uncertainty. The objective of this project was to propose a methodology to perform pharmaceutical expenditure forecasting that integrates expected policy changes and uncertainty (developed for the European Commission as the 'EU Pharmaceutical expenditure forecast'; see http://ec.europa.eu/health/healthcare/key_documents/index_en.htm). 1) Identification of all pharmaceuticals going off-patent and new branded medicinal products over a 5-year forecasting period in seven European Union (EU) Member States. 2) Development of a model to estimate direct and indirect impacts (based on health policies and clinical experts) on savings of generics and biosimilars. Inputs were originator sales value, patent expiry date, time to launch after marketing authorization, price discount, penetration rate, time to peak sales, and impact on brand price. 3) Development of a model for new drugs, which estimated sales progression in a competitive environment. Clinical expected benefits as well as commercial potential were assessed for each product by clinical experts. Inputs were development phase, marketing authorization dates, orphan condition, market size, and competitors. 4) Separate analysis of the budget impact of products going off-patent and new drugs according to several perspectives, distribution chains, and outcomes. 5) Addressing uncertainty surrounding estimations via deterministic and probabilistic sensitivity analysis. This methodology has proven to be effective by 1) identifying the main parameters impacting the variations in pharmaceutical expenditure forecasting across countries: generics discounts and penetration, brand price after patent loss, reimbursement rate, the penetration of biosimilars and discount price, distribution chains, and the time to reach peak sales for new drugs; 2) estimating the statistical distribution of the budget impact; and 3) testing different pricing and reimbursement policy decisions on health expenditures. This methodology was independent of historical data and appeared to be highly flexible and adapted to test robustness and provide probabilistic analysis to support policy decision making.
Characterizing crown fuel distribution for conifers in the interior western United States
Seth Ex; Frederick W. Smith; Tara Keyser
2015-01-01
Canopy fire hazard evaluation is essential for prioritizing fuel treatments and for assessing potential risk to firefighters during suppression activities. Fire hazard is usually expressed as predicted potential fire behavior, which is sensitive to the methodology used to quantitatively describe fuel profiles: methodologies that assume that fuel is distributed...
Teng, Santani
2017-01-01
In natural environments, visual and auditory stimulation elicit responses across a large set of brain regions in a fraction of a second, yielding representations of the multimodal scene and its properties. The rapid and complex neural dynamics underlying visual and auditory information processing pose major challenges to human cognitive neuroscience. Brain signals measured non-invasively are inherently noisy, the format of neural representations is unknown, and transformations between representations are complex and often nonlinear. Further, no single non-invasive brain measurement technique provides a spatio-temporally integrated view. In this opinion piece, we argue that progress can be made by a concerted effort based on three pillars of recent methodological development: (i) sensitive analysis techniques such as decoding and cross-classification, (ii) complex computational modelling using models such as deep neural networks, and (iii) integration across imaging methods (magnetoencephalography/electroencephalography, functional magnetic resonance imaging) and models, e.g. using representational similarity analysis. We showcase two recent efforts that have been undertaken in this spirit and provide novel results about visual and auditory scene analysis. Finally, we discuss the limits of this perspective and sketch a concrete roadmap for future research. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044019
ASC-AD penetration modeling FY05 status report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kistler, Bruce L.; Ostien, Jakob T.; Chiesa, Michael L.
2006-04-01
Sandia currently lacks a high fidelity method for predicting loads on and subsequent structural response of earth penetrating weapons. This project seeks to test, debug, improve and validate methodologies for modeling earth penetration. Results of this project will allow us to optimize and certify designs for the B61-11, Robust Nuclear Earth Penetrator (RNEP), PEN-X and future nuclear and conventional penetrator systems. Since this is an ASC Advanced Deployment project, the primary goal of the work is to test, debug, verify and validate new Sierra (and Nevada) tools. Also, since this project is part of the V&V program within ASC, uncertainty quantification (UQ), optimization using DAKOTA [1] and sensitivity analysis are an integral part of the work. This project evaluates, verifies and validates new constitutive models, penetration methodologies and Sierra/Nevada codes. In FY05 the project focused mostly on PRESTO [2] using the Spherical Cavity Expansion (SCE) [3,4] and PRESTO Lagrangian analysis with a preformed hole (Pen-X) methodologies. Modeling penetration tests using PRESTO with a pilot hole was also attempted to evaluate constitutive models. Future years' work would include the Alegra/SHISM [5] and Alegra/EP (Earth Penetration) methodologies when they are ready for validation testing. Constitutive models such as Soil-and-Foam, the Sandia Geomodel [6], and the K&C Concrete model [7] were also tested and evaluated. This report is submitted to satisfy annual documentation requirements for the ASC Advanced Deployment program. This report summarizes FY05 work performed in the Penetration Mechanical Response (ASC-APPS) and Penetration Mechanics (ASC-V&V) projects. A single report is written to document the two projects because of the significant amount of technical overlap.
Churilov, Leonid; Liu, Daniel; Ma, Henry; Christensen, Soren; Nagakane, Yoshinari; Campbell, Bruce; Parsons, Mark W; Levi, Christopher R; Davis, Stephen M; Donnan, Geoffrey A
2013-04-01
The appropriateness of a software platform for rapid MRI assessment of the amount of salvageable brain tissue after stroke is critical for both the validity of the Extending the Time for Thrombolysis in Emergency Neurological Deficits (EXTEND) Clinical Trial of stroke thrombolysis beyond 4.5 hours and for stroke patient care outcomes. The objective of this research is to develop and implement a methodology for selecting the acute stroke imaging software platform most appropriate for the setting of a multi-centre clinical trial. A multi-disciplinary decision making panel formulated the set of preferentially independent evaluation attributes. Alternative Multi-Attribute Value Measurement methods were used to identify the best imaging software platform followed by sensitivity analysis to ensure the validity and robustness of the proposed solution. Four alternative imaging software platforms were identified. RApid processing of PerfusIon and Diffusion (RAPID) software was selected as the most appropriate for the needs of the EXTEND trial. A theoretically grounded generic multi-attribute selection methodology for imaging software was developed and implemented. The developed methodology assured both a high quality decision outcome and a rational and transparent decision process. This development contributes to stroke literature in the area of comprehensive evaluation of MRI clinical software. At the time of evaluation, RAPID software presented the most appropriate imaging software platform for use in the EXTEND clinical trial. The proposed multi-attribute imaging software evaluation methodology is based on sound theoretical foundations of multiple criteria decision analysis and can be successfully used for choosing the most appropriate imaging software while ensuring both robust decision process and outcomes. © 2012 The Authors. International Journal of Stroke © 2012 World Stroke Organization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Xiangqi; Zhang, Yingchen
This paper presents an optimal voltage control methodology with coordination among different voltage-regulating resources, including controllable loads, distributed energy resources such as energy storage and photovoltaics (PV), and utility voltage-regulating devices such as voltage regulators and capacitors. The proposed methodology could effectively tackle the overvoltage and voltage regulation device distortion problems brought by high penetrations of PV to improve grid operation reliability. A voltage-load sensitivity matrix and voltage-regulator sensitivity matrix are used to deploy the resources along the feeder to achieve the control objectives. Mixed-integer nonlinear programming is used to solve the formulated optimization control problem. The methodology has been tested on the IEEE 123-feeder test system, and the results demonstrate that the proposed approach could actively tackle the voltage problem brought about by high penetrations of PV and improve the reliability of distribution system operation.
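The role of the voltage-load sensitivity matrix can be shown with a toy feeder: given an estimated dV/dP matrix, the injection changes that pull voltages back inside limits follow from a linear solve. The matrix values and limits below are made up, and the mixed-integer coordination of the full method is not reproduced.

```python
import numpy as np

# Toy 4-node feeder: made-up voltage-to-injection sensitivity matrix dV/dP (p.u. per MW)
S = np.array([[0.010, 0.006, 0.004, 0.002],
              [0.006, 0.012, 0.007, 0.003],
              [0.004, 0.007, 0.014, 0.005],
              [0.002, 0.003, 0.005, 0.016]])
v = np.array([1.02, 1.05, 1.07, 1.06])   # current voltages (p.u.); nodes 3-4 exceed the 1.05 limit
v_max = 1.05

# Required voltage reduction at each node, and the active-power adjustments that achieve it
dv_needed = np.minimum(v_max - v, 0.0)                 # negative where the limit is violated
dp, *_ = np.linalg.lstsq(S, dv_needed, rcond=None)     # least-squares injection changes (MW)
print("injection changes (MW, negative = curtail):", np.round(dp, 2))
print("predicted voltages (p.u.):", np.round(v + S @ dp, 4))
```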
Updating finite element dynamic models using an element-by-element sensitivity methodology
NASA Technical Reports Server (NTRS)
Farhat, Charbel; Hemez, Francois M.
1993-01-01
A sensitivity-based methodology for improving the finite element model of a given structure using test modal data and a few sensors is presented. The proposed method searches for both the location and sources of the mass and stiffness errors and does not interfere with the theory behind the finite element model while correcting these errors. The updating algorithm is derived from the unconstrained minimization of the squared L2 norms of the modal dynamic residuals via an iterative two-step staggered procedure. At each iteration, the measured mode shapes are first expanded assuming that the model is error free, then the model parameters are corrected assuming that the expanded mode shapes are exact. The numerical algorithm is implemented in an element-by-element fashion and is capable of 'zooming' in on the detected error locations. Several simulation examples which demonstrate the potential of the proposed methodology are discussed.
Sonic Boom Mitigation Through Aircraft Design and Adjoint Methodology
NASA Technical Reports Server (NTRS)
Rallabhandi, Sriram K.; Diskin, Boris; Nielsen, Eric J.
2012-01-01
This paper presents a novel approach to design of the supersonic aircraft outer mold line (OML) by optimizing the A-weighted loudness of sonic boom signature predicted on the ground. The optimization process uses the sensitivity information obtained by coupling the discrete adjoint formulations for the augmented Burgers Equation and Computational Fluid Dynamics (CFD) equations. This coupled formulation links the loudness of the ground boom signature to the aircraft geometry thus allowing efficient shape optimization for the purpose of minimizing the impact of loudness. The accuracy of the adjoint-based sensitivities is verified against sensitivities obtained using an independent complex-variable approach. The adjoint based optimization methodology is applied to a configuration previously optimized using alternative state of the art optimization methods and produces additional loudness reduction. The results of the optimizations are reported and discussed.
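The abstract mentions verifying adjoint-based sensitivities against an independent complex-variable approach; the sketch below shows the generic complex-step derivative check on a placeholder scalar function (not the coupled Burgers/CFD system).

```python
import numpy as np

def f(x):
    # Placeholder smooth objective standing in for loudness as a function of one shape parameter
    return np.exp(x) * np.sin(3 * x) / (1 + x**2)

x0, h = 0.7, 1e-30
complex_step = f(x0 + 1j * h).imag / h             # complex-step derivative: no subtractive cancellation
central_fd = (f(x0 + 1e-6) - f(x0 - 1e-6)) / 2e-6  # central finite difference for comparison
print("complex-step derivative:", complex_step)
print("finite-difference derivative:", central_fd)
```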
NASA Technical Reports Server (NTRS)
Ulvestad, J. S.; Thurman, S. W.
1992-01-01
An error covariance analysis methodology is used to investigate different weighting schemes for two-way (coherent) Doppler data in the presence of transmission-media and observing-platform calibration errors. The analysis focuses on orbit-determination performance in the interplanetary cruise phase of deep-space missions. Analytical models for the Doppler observable and for transmission-media and observing-platform calibration errors are presented, drawn primarily from previous work. Previously published analytical models were improved upon by the following: (1) considering the effects of errors in the calibration of radio signal propagation through the troposphere and ionosphere as well as station-location errors; (2) modelling the spacecraft state transition matrix using a more accurate piecewise-linear approximation to represent the evolution of the spacecraft trajectory; and (3) incorporating Doppler data weighting functions that are functions of elevation angle, which reduce the sensitivity of the estimated spacecraft trajectory to troposphere and ionosphere calibration errors. The analysis is motivated by the need to develop suitable weighting functions for two-way Doppler data acquired at 8.4 GHz (X-band) and 32 GHz (Ka-band). This weighting is likely to be different from that in the weighting functions currently in use; the current functions were constructed originally for use with 2.3 GHz (S-band) Doppler data, which are affected much more strongly by the ionosphere than are the higher frequency data.
Almansa, Carmen; Martínez-Paz, José M
2011-03-01
Cost-benefit analysis is a standard methodological platform for public investment evaluation. In high environmental impact projects, with a long-term effect on future generations, the choice of discount rate and time horizon is of particular relevance, because it can lead to very different profitability assessments. This paper describes some recent approaches to environmental discounting and applies them, together with a number of classical procedures, to the economic evaluation of a plant for the desalination of irrigation return water from intensive farming, aimed at halting the degradation of an area of great ecological value, the Mar Menor, in South Eastern Spain. A Monte Carlo procedure is used in four CBA approaches and three time horizons to carry out a probabilistic sensitivity analysis designed to integrate the views of an international panel of experts in environmental discounting with the uncertainty affecting the market price of the project's main output, i.e., irrigation water for a water-deprived area. The results show which discounting scenarios most accurately estimate the socio-environmental profitability of the project while also considering the risk associated with these two key parameters. The analysis also provides some methodological findings regarding ways of assessing financial and environmental profitability in decisions concerning public investment in the environment. Copyright © 2010 Elsevier B.V. All rights reserved.
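The probabilistic sensitivity analysis over discounting scenarios can be sketched with a simple Monte Carlo NPV calculation; the discount-rate range, water price, volumes and costs below are invented placeholders, not the Mar Menor project figures.

```python
import numpy as np

rng = np.random.default_rng(8)
n, horizon = 100_000, 50                        # simulations and time horizon (years)

# Placeholder inputs: discount rate spanning an expert-panel range, uncertain water price
rate = rng.triangular(0.01, 0.03, 0.06, n)      # per-year social discount rate
price = rng.normal(0.35, 0.08, n)               # EUR per m3 of desalinated return water
volume, env_benefit, capex, opex = 5e6, 1.2e6, 2.0e7, 1.5e6   # m3/yr, EUR/yr, EUR, EUR/yr

years = np.arange(1, horizon + 1)
discount = (1 + rate[:, None]) ** -years                        # shape (n, horizon)
annual_net = price[:, None] * volume + env_benefit - opex       # EUR/yr, per simulation
npv = -capex + (annual_net * discount).sum(axis=1)

print("P(NPV > 0) = %.2f" % (npv > 0).mean())
print("median NPV (MEUR): %.1f   5th-95th percentile: %.1f to %.1f"
      % (np.median(npv) / 1e6, np.percentile(npv, 5) / 1e6, np.percentile(npv, 95) / 1e6))
```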
Kahale, Lara A; Diab, Batoul; Brignardello-Petersen, Romina; Agarwal, Arnav; Mustafa, Reem A; Kwong, Joey; Neumann, Ignacio; Li, Ling; Lopes, Luciane Cruz; Briel, Matthias; Busse, Jason W; Iorio, Alfonso; Vandvik, Per Olav; Alexander, Paul Elias; Guyatt, Gordon; Akl, Elie A
2018-07-01
To describe how systematic review authors report and address categories of trial participants with potential missing outcome data. Methodological survey of systematic reviews reporting a group-level meta-analysis. We included a random sample of 50 Cochrane and 50 non-Cochrane systematic reviews. Of these, 25 reported in their methods section a plan to consider at least one of the 10 categories of missing outcome data; 42 reported data in their results for at least one category of missing data. The most reported category in the methods and results sections was "unexplained loss to follow-up" (n = 34 in the methods section and n = 6 in the results section). Only 19 reported a method to handle missing data in their primary analyses, which was most often complete case analysis. Few reviews (n = 9) reported in the methods section conducting a sensitivity analysis to judge the risk of bias associated with missing outcome data at the level of the meta-analysis, and only five of them presented the results of these analyses in the results section. Most systematic reviews do not explicitly report sufficient information on categories of trial participants with potential missing outcome data or address missing data in their primary analyses. Copyright © 2018 Elsevier Inc. All rights reserved.
Extinction, survival or recovery of large predatory fishes
Myers, Ransom A.; Worm, Boris
2005-01-01
Large predatory fishes have long played an important role in marine ecosystems and fisheries. Overexploitation, however, is gradually diminishing this role. Recent estimates indicate that exploitation has depleted large predatory fish communities worldwide by at least 90% over the past 50–100 years. We demonstrate that these declines are general, independent of methodology, and even higher for sensitive species such as sharks. We also attempt to predict the future prospects of large predatory fishes. (i) An analysis of maximum reproductive rates predicts the collapse and extinction of sensitive species under current levels of fishing mortality. Sensitive species occur in marine habitats worldwide and have to be considered in most management situations. (ii) We show that to ensure the survival of sensitive species in the northwest Atlantic fishing mortality has to be reduced by 40–80%. (iii) We show that rapid recovery of community biomass and diversity usually occurs when fishing mortality is reduced. However, recovery is more variable for single species, often because of the influence of species interactions. We conclude that management of multi-species fisheries needs to be tailored to the most sensitive, rather than the more robust species. This requires reductions in fishing effort, reduction in bycatch mortality and protection of key areas to initiate recovery of severely depleted communities. PMID:15713586
Extinction, survival or recovery of large predatory fishes.
Myers, Ransom A; Worm, Boris
2005-01-29
Large predatory fishes have long played an important role in marine ecosystems and fisheries. Overexploitation, however, is gradually diminishing this role. Recent estimates indicate that exploitation has depleted large predatory fish communities worldwide by at least 90% over the past 50-100 years. We demonstrate that these declines are general, independent of methodology, and even higher for sensitive species such as sharks. We also attempt to predict the future prospects of large predatory fishes. (i) An analysis of maximum reproductive rates predicts the collapse and extinction of sensitive species under current levels of fishing mortality. Sensitive species occur in marine habitats worldwide and have to be considered in most management situations. (ii) We show that to ensure the survival of sensitive species in the northwest Atlantic fishing mortality has to be reduced by 40-80%. (iii) We show that rapid recovery of community biomass and diversity usually occurs when fishing mortality is reduced. However, recovery is more variable for single species, often because of the influence of species interactions. We conclude that management of multi-species fisheries needs to be tailored to the most sensitive, rather than the more robust species. This requires reductions in fishing effort, reduction in bycatch mortality and protection of key areas to initiate recovery of severely depleted communities.
Impact of the time scale of model sensitivity response on coupled model parameter estimation
NASA Astrophysics Data System (ADS)
Liu, Chang; Zhang, Shaoqing; Li, Shan; Liu, Zhengyu
2017-11-01
That a model has sensitivity responses to parameter uncertainties is a key concept in implementing model parameter estimation using filtering theory and methodology. Depending on the nature of associated physics and characteristic variability of the fluid in a coupled system, the response time scales of a model to parameters can be different, from hourly to decadal. Unlike state estimation, where the update frequency is usually linked with observational frequency, the update frequency for parameter estimation must be associated with the time scale of the model sensitivity response to the parameter being estimated. Here, with a simple coupled model, the impact of model sensitivity response time scales on coupled model parameter estimation is studied. The model includes characteristic synoptic to decadal scales by coupling a long-term varying deep ocean with a slow-varying upper ocean forced by a chaotic atmosphere. Results show that, using the update frequency determined by the model sensitivity response time scale, both the reliability and quality of parameter estimation can be improved significantly, and thus the estimated parameters make the model more consistent with the observation. These simple model results provide a guideline for when real observations are used to optimize the parameters in a coupled general circulation model for improving climate analysis and prediction initialization.
Cross-cultural validation of the moral sensitivity questionnaire-revised Chinese version.
Huang, Fei Fei; Yang, Qing; Zhang, Jie; Zhang, Qing Hua; Khoshnood, Kaveh; Zhang, Jing Ping
2016-11-01
Ethical issues pose challenges for nurses who are increasingly caring for patients in complicated situations. Ethical sensitivity is a prerequisite for nurses to make decisions in the best interest of their patients in daily practice. Currently, there is no tool for assessing ethical sensitivity in the Chinese language, and no empirical studies of ethical sensitivity among Chinese nurses. The study was conducted to translate the Moral Sensitivity Questionnaire-Revised Version (MSQ-R) into Chinese and establish the psychometric properties of its Chinese version (MSQ-R-CV). This research was a methodological and descriptive study. The MSQ-R was translated into Chinese using Brislin's model, and the Translation Validity Index was evaluated. The MSQ-R-CV was then distributed along with a demographic questionnaire to 360 nurses working at tertiary and municipal hospitals in Changsha, China. This study was approved by the Institutional Review Boards of Yale University and Central South University. The MSQ-R-CV achieved a Cronbach's alpha of 0.82, a Spearman-Brown coefficient of 0.75, significant item discrimination (p < 0.001), and item-total correlation values ranging from 0.524 to 0.717. A two-factor structure was illustrated by exploratory factor analysis, and further confirmed by confirmatory factor analysis. Chinese nurses had a mean total score of 40.22 ± 7.08 on the MSQ-R-CV, and sub-scores of 23.85 ± 4.4 for moral responsibility and strength and 16.37 ± 3.75 for sense of moral burden. The findings of this study were compared with studies from other countries to examine the structure and meaningful implications of ethical sensitivity in Chinese nurses. The two-factor MSQ-R-CV (moral responsibility and strength, and sense of moral burden) is a linguistically and culturally appropriate instrument for assessing ethical sensitivity among Chinese nurses. © The Author(s) 2015.
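The internal-consistency figure quoted above (Cronbach's alpha) can be computed directly from an item-score matrix; a minimal sketch with invented Likert responses, not the study's data, is shown below.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 5-point Likert responses from 6 nurses on 4 items.
scores = np.array([[4, 5, 4, 5],
                   [3, 4, 4, 3],
                   [5, 5, 4, 4],
                   [2, 3, 3, 2],
                   [4, 4, 5, 4],
                   [3, 3, 4, 3]])
print(round(cronbach_alpha(scores), 3))
```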
Real-time subsecond voltammetric analysis of Pb in aqueous environmental samples.
Yang, Yuanyuan; Pathirathna, Pavithra; Siriwardhane, Thushani; McElmurry, Shawn P; Hashemi, Parastoo
2013-08-06
Lead (Pb) pollution is an important environmental and public health concern. Rapid Pb transport during stormwater runoff significantly impairs surface water quality. The ability to characterize and model Pb transport during these events is critical to mitigating its impact on the environment. However, Pb analysis is limited by the lack of analytical methods that can afford rapid, sensitive measurements in situ. While electrochemical methods have previously shown promise for rapid Pb analysis, they are currently limited in two ways. First, because of Pb's limited solubility, test solutions that are representative of environmental systems are not typically employed in laboratory characterizations. Second, concerns about traditional Hg electrode toxicity, stability, and low temporal resolution have dampened opportunities for in situ analyses with traditional electrochemical methods. In this paper, we describe two novel methodological advances that bypass these limitations. Using geochemical models, we first create an environmentally relevant test solution that can be used for electrochemical method development and characterization. Second, we develop a fast-scan cyclic voltammetry (FSCV) method for Pb detection on Hg-free carbon fiber microelectrodes. We assess the method's sensitivity and stability, taking into account Pb speciation, and utilize it to characterize rapid Pb fluctuations in real environmental samples. We thus present a novel real-time electrochemical tool for Pb analysis in both model and authentic environmental solutions.
Schedl, A; Zweckmair, T; Kikul, F; Bacher, M; Rosenau, T; Potthast, A
2018-03-01
Widening the methodology of chromophore analysis in pulp and paper science, a sensitive gas-chromatographic approach with electron-capture detection is presented and applied to model samples and real-world historic paper material. Trifluoroacetic anhydride was used for derivatization of the chromophore target compounds. The derivative formation was confirmed by NMR and accurate mass analysis. The method successfully detects and quantifies hydroxyquinones, which are key chromophores in cellulosic matrices. The analytical figures of merit appeared to be in an acceptable range, with an LOD down to approx. 60 ng/g for each key chromophore, which allows for their successful detection in historic sample material. Copyright © 2017 Elsevier B.V. All rights reserved.
High-performance parallel analysis of coupled problems for aircraft propulsion
NASA Technical Reports Server (NTRS)
Felippa, C. A.; Farhat, C.; Lanteri, S.; Gumaste, U.; Ronaghi, M.
1994-01-01
Applications of high-performance parallel computation are described for the analysis of complete jet engines, treated as a multidiscipline coupled problem. The coupled problem involves the interaction of structures with gas dynamics, heat conduction and heat transfer in aircraft engines. The methodology issues addressed include: consistent discrete formulation of coupled problems with emphasis on coupling phenomena; the effect of partitioning strategies, augmentation and temporal solution procedures; sensitivity of the response to problem parameters; and methods for interfacing multiscale discretizations in different single fields. The computer implementation issues addressed include: parallel treatment of coupled systems; domain decomposition and mesh partitioning strategies; data representation in object-oriented form and mapping to hardware-driven representation; and tradeoff studies between partitioning schemes and fully coupled treatment.
Thermodynamic and economic analysis of a gas turbine combined cycle plant with oxy-combustion
NASA Astrophysics Data System (ADS)
Kotowicz, Janusz; Job, Marcin
2013-12-01
This paper presents a gas turbine combined cycle plant with oxy-combustion and carbon dioxide capture. The gas turbine part of the unit with its operating parameters is presented. The methodology and results of optimization by means of a genetic algorithm for the steam parts in three variants of the plant are shown. The variants of the plant differ by the heat recovery steam generator (HRSG) construction: the single-pressure HRSG (1P), the double-pressure HRSG with reheating (2PR), and the triple-pressure HRSG with reheating (3PR). For the obtained results, an economic evaluation was performed in all variants. The break-even prices of electricity were determined and a sensitivity analysis with respect to the most significant economic factors was performed.
A Learning Framework for Control-Oriented Modeling of Buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rubio-Herrero, Javier; Chandan, Vikas; Siegel, Charles M.
Buildings consume a significant amount of energy worldwide. Several building optimization and control use cases require models of energy consumption which are control oriented, have high predictive capability, impose minimal data pre-processing requirements, and have the ability to be adapted continuously to account for changing conditions as new data become available. Data-driven modeling techniques that have been investigated so far, while promising in the context of buildings, have been unable to simultaneously satisfy all the requirements mentioned above. In this context, deep learning techniques such as Recurrent Neural Networks (RNNs) hold promise, empowered by advanced computational capabilities and big data opportunities. In this paper, we propose a deep learning based methodology for the development of control oriented models for building energy management and test it on data from a real building. Results show that the proposed methodology outperforms other data-driven modeling techniques significantly. We perform a detailed analysis of the proposed methodology along dimensions such as topology, sensitivity, and downsampling. Lastly, we conclude by envisioning a building analytics suite empowered by the proposed deep framework that can drive several use cases related to building energy management.
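A minimal sketch of a control-oriented recurrent model of the kind the abstract describes, written here with PyTorch as an assumed framework (the abstract does not name one); the network size, features and data below are placeholders, not the authors' architecture.

```python
import torch
import torch.nn as nn

class EnergyRNN(nn.Module):
    """Toy GRU regressor: past weather/usage features -> next-step energy use."""
    def __init__(self, n_features=4, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time, n_features)
        out, _ = self.rnn(x)
        return self.head(out[:, -1, :])   # predict from the last hidden state

model = EnergyRNN()
x = torch.randn(8, 24, 4)                 # 8 sequences of 24 hourly steps
y = torch.randn(8, 1)                     # hypothetical consumption targets
loss = nn.MSELoss()(model(x), y)
loss.backward()                           # gradients for one training step
print(float(loss))
```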
Sokkar, Pandian; Mohandass, Shylajanaciyar; Ramachandran, Murugesan
2011-07-01
We present a comparative account of 3D structures of the human type-1 receptor (AT1) for angiotensin II (AngII), modeled using three different methodologies. AngII activates a wide spectrum of signaling responses via the AT1 receptor, which mediates physiological control of blood pressure and diverse pathological actions in cardiovascular, renal, and other cell types. Availability of a 3D model of the AT1 receptor would significantly enhance the development of new drugs for cardiovascular diseases. However, the low sequence similarity of available AT1 receptor templates increases the complexity of straightforward homology modeling, and hence there is a need to evaluate different modeling methodologies in order to use the models for sensitive applications such as rational drug design. Three models were generated for the AT1 receptor by (1) homology modeling with bovine rhodopsin as the template, (2) homology modeling with multiple templates, and (3) threading using the I-TASSER web server. Molecular dynamics (MD) simulation (15 ns) of the models in an explicit membrane-water system, Ramachandran plot analysis and molecular docking with antagonists led to the conclusion that multiple-template-based homology modeling outperforms the other methodologies for AT1 modeling.
An Evaluation of Aircraft Emissions Inventory Methodology by Comparisons with Reported Airline Data
NASA Technical Reports Server (NTRS)
Daggett, D. L.; Sutkus, D. J.; DuBois, D. P.; Baughcum, S. L.
1999-01-01
This report provides results of work done to evaluate the calculation methodology used in generating aircraft emissions inventories. Results from the inventory calculation methodology are compared to actual fuel consumption data. Results are also presented that show the sensitivity of calculated emissions to aircraft payload factors. Comparisons of departures made, ground track miles flown and total fuel consumed by selected air carriers were made between U.S. Dept. of Transportation (DOT) Form 41 data reported for 1992 and results of simplified aircraft emissions inventory calculations. These comparisons provide an indication of the magnitude of error that may be present in aircraft emissions inventories. To determine some of the factors responsible for the errors quantified in the DOT Form 41 analysis, a comparative study of in-flight fuel flow data for a specific operator's 747-400 fleet was conducted. Fuel consumption differences between the studied aircraft and the inventory calculation results may be attributable to several factors. Among these are longer flight times, greater actual aircraft weight and performance deterioration effects for the in-service aircraft. Results of a parametric study on the variation in fuel use and NOx emissions as a function of aircraft payload for different aircraft types are also presented.
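To make the inventory arithmetic concrete: a fuel-based emissions estimate multiplies segment fuel burn by an emission index. The sketch below is a generic illustration, not the inventory's calculation; the fuel burn and NOx emission index are placeholders, while 3.16 kg CO2 per kg of jet fuel is a commonly cited stoichiometric factor.

```python
# Hypothetical numbers for one flight segment; EI values are placeholders,
# not the inventory's actual coefficients.
fuel_burn_kg = 12_500.0          # fuel consumed on the segment
ei_nox_g_per_kg = 14.0           # assumed NOx emission index (g NOx / kg fuel)
ei_co2_kg_per_kg = 3.16          # approximate CO2 per kg of jet fuel

nox_kg = fuel_burn_kg * ei_nox_g_per_kg / 1000.0
co2_kg = fuel_burn_kg * ei_co2_kg_per_kg
print(f"NOx: {nox_kg:.1f} kg, CO2: {co2_kg:.0f} kg")
```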
NASA Astrophysics Data System (ADS)
Waters, Daniel F.; Cadou, Christopher P.
2014-02-01
A unique requirement of underwater vehicles' power/energy systems is that they remain neutrally buoyant over the course of a mission. Previous work published in the Journal of Power Sources reported gross as opposed to neutrally-buoyant energy densities of an integrated solid oxide fuel cell/Rankine-cycle based power system based on the exothermic reaction of aluminum with seawater. This paper corrects this shortcoming by presenting a model for estimating system mass and using it to update the key findings of the original paper in the context of the neutral buoyancy requirement. It also presents an expanded sensitivity analysis to illustrate the influence of various design and modeling assumptions. While energy density is very sensitive to turbine efficiency (sensitivity coefficient in excess of 0.60), it is relatively insensitive to all other major design parameters (sensitivity coefficients < 0.15) like compressor efficiency, inlet water temperature, scaling methodology, etc. The neutral buoyancy requirement introduces a significant (∼15%) energy density penalty but overall the system still appears to offer factors of five to eight improvements in energy density (i.e., vehicle range/endurance) over present battery-based technologies.
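The sensitivity coefficients quoted above are normalized sensitivities; a minimal sketch of how such a coefficient can be computed by finite differences is shown below (the linear energy-density model and nominal efficiency are placeholders, not the paper's system model).

```python
def sensitivity_coefficient(f, p0, rel_step=0.01):
    """Normalized sensitivity (dE/E)/(dp/p) of output f(p) at nominal p0."""
    e0 = f(p0)
    p1 = p0 * (1.0 + rel_step)
    return ((f(p1) - e0) / e0) / rel_step

# Hypothetical energy-density model: linear in turbine efficiency.
energy_density = lambda eta_turbine: 800.0 * eta_turbine   # Wh/L, placeholder
print(round(sensitivity_coefficient(energy_density, 0.70), 2))   # -> 1.0
```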
Chatziprodromidou, I P; Apostolou, T
2018-04-01
The aim of the study was to estimate the sensitivity and specificity of enzyme-linked immunosorbent assay (ELISA) and immunoblot (IB) for detecting antibodies against Neospora caninum in dairy cows, in the absence of a gold standard. The study complies with the STRADAS-paratuberculosis guidelines for reporting the accuracy of the tests. We first tried to apply Bayesian models that do not require conditional independence of the tests under evaluation, but as convergence problems appeared, we used a Bayesian methodology that does not assume conditional dependence of the tests. Informative prior probability distributions were constructed based on scientific inputs regarding the sensitivity and specificity of the IB test and the prevalence of disease in the studied populations. IB sensitivity and specificity were estimated to be 98.8% and 91.3%, respectively, while the respective estimates for ELISA were 60% and 96.7%. A sensitivity analysis, in which modified prior probability distributions concerning IB diagnostic accuracy were applied, showed a limited effect on the posterior assessments. We concluded that ELISA can be used to screen bulk milk and, secondly, that IB can be used whenever needed.
Breath Analysis in Disease Diagnosis: Methodological Considerations and Applications
Lourenço, Célia; Turner, Claire
2014-01-01
Breath analysis is a promising field with great potential for non-invasive diagnosis of a number of disease states. The concentrations of volatile organic compounds (VOCs) in breath are assessed with acceptable accuracy by using analytical techniques with high sensitivity, accuracy, precision, low response time, and low detection limit, which are desirable characteristics for the detection of VOCs in human breath. “Breath fingerprinting”, indicative of a specific clinical status, relies on the use of multivariate statistics methods with powerful in-built algorithms. The need for standardisation of sample collection and analysis is the main issue concerning breath analysis, blocking the introduction of breath tests into clinical practice. This review describes recent scientific developments in basic research and clinical applications, namely issues concerning sampling and biochemistry, highlighting the diagnostic potential of breath analysis for disease diagnosis. Several considerations that need to be taken into account in breath analysis are documented here, including the growing need for metabolomics to deal with breath profiles. PMID:24957037
Breath analysis in disease diagnosis: methodological considerations and applications.
Lourenço, Célia; Turner, Claire
2014-06-20
Breath analysis is a promising field with great potential for non-invasive diagnosis of a number of disease states. The concentrations of volatile organic compounds (VOCs) in breath are assessed with acceptable accuracy by using analytical techniques with high sensitivity, accuracy, precision, low response time, and low detection limit, which are desirable characteristics for the detection of VOCs in human breath. "Breath fingerprinting", indicative of a specific clinical status, relies on the use of multivariate statistics methods with powerful in-built algorithms. The need for standardisation of sample collection and analysis is the main issue concerning breath analysis, blocking the introduction of breath tests into clinical practice. This review describes recent scientific developments in basic research and clinical applications, namely issues concerning sampling and biochemistry, highlighting the diagnostic potential of breath analysis for disease diagnosis. Several considerations that need to be taken into account in breath analysis are documented here, including the growing need for metabolomics to deal with breath profiles.
ON IDENTIFIABILITY OF NONLINEAR ODE MODELS AND APPLICATIONS IN VIRAL DYNAMICS
MIAO, HONGYU; XIA, XIAOHUA; PERELSON, ALAN S.; WU, HULIN
2011-01-01
Ordinary differential equations (ODE) are a powerful tool for modeling dynamic processes with wide applications in a variety of scientific fields. Over the last 2 decades, ODEs have also emerged as a prevailing tool in various biomedical research fields, especially in infectious disease modeling. In practice, it is important and necessary to determine unknown parameters in ODE models based on experimental data. Identifiability analysis is the first step in determining unknown parameters in ODE models, and such analysis techniques for nonlinear ODE models are still under development. In this article, we review identifiability analysis methodologies for nonlinear ODE models developed in the past one to two decades, including structural identifiability analysis, practical identifiability analysis and sensitivity-based identifiability analysis. Some advanced topics and ongoing research are also briefly reviewed. Finally, some examples from modeling viral dynamics of HIV, influenza and hepatitis viruses are given to illustrate how to apply these identifiability analysis methods in practice. PMID:21785515
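A small sketch of the sensitivity-based identifiability idea mentioned in the review, using a deliberately non-identifiable toy model (dx/dt = -(a + b)x) so that the finite-difference sensitivity matrix has nearly collinear columns; this is an illustration under invented parameters, not an example from the article itself.

```python
import numpy as np
from scipy.integrate import solve_ivp

def simulate(theta, t_obs):
    """Toy model dx/dt = -(a + b) * x: a and b cannot be separated from data."""
    a, b = theta
    sol = solve_ivp(lambda t, x: [-(a + b) * x[0]], (0.0, t_obs[-1]), [10.0],
                    t_eval=t_obs)
    return sol.y[0]

t_obs = np.linspace(0.5, 5.0, 10)
theta0 = np.array([0.3, 0.2])
y0 = simulate(theta0, t_obs)

# Finite-difference sensitivity matrix S[i, j] = d y(t_i) / d theta_j
S = np.zeros((len(t_obs), len(theta0)))
for j in range(len(theta0)):
    pert = theta0.copy()
    pert[j] *= 1.01
    S[:, j] = (simulate(pert, t_obs) - y0) / (0.01 * theta0[j])

# Nearly collinear columns (a tiny singular value, a huge condition number)
# flag a practically non-identifiable parameter combination.
print(np.linalg.svd(S, compute_uv=False), np.linalg.cond(S))
```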
The use of copula functions for predictive analysis of correlations between extreme storm tides
NASA Astrophysics Data System (ADS)
Domino, Krzysztof; Błachowicz, Tomasz; Ciupak, Maurycy
2014-11-01
In this paper we present a method for the quantitative description of weakly predictable extreme hydrological events at an inland sea. Correlations between variations at individual measuring points were investigated using combined statistical methods. As the main tool for this analysis we used a two-dimensional copula function sensitive to correlated extreme effects. Additionally, a newly proposed methodology, based on Detrended Fluctuation Analysis (DFA) and Anomalous Diffusion (AD), was used for the prediction of negative and positive auto-correlations and the associated optimum choice of copula functions. As a practical example we analysed maximum storm-tide data recorded at five spatially separated locations on the Baltic Sea. For the analysis we used Gumbel, Clayton, and Frank copula functions and introduced the reversed Clayton copula. The application of our research model is associated with modelling the risk of high storm tides and possible storm flooding.
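For readers unfamiliar with copulas, the sketch below samples from a Clayton copula by conditional inversion; it is a generic illustration, not the authors' fitting procedure, and the dependence parameter is arbitrary.

```python
import numpy as np

def sample_clayton(n, theta, rng=None):
    """Draw n pairs (u, v) from a Clayton copula via conditional inversion."""
    rng = rng or np.random.default_rng(0)
    u = rng.uniform(size=n)
    w = rng.uniform(size=n)
    v = (u ** (-theta) * (w ** (-theta / (1 + theta)) - 1) + 1) ** (-1 / theta)
    return u, v

# The Clayton family captures lower-tail dependence; a "reversed" Clayton,
# as mentioned in the abstract, can be obtained by flipping the margins
# to (1 - u, 1 - v), which moves the dependence to the upper tail.
u, v = sample_clayton(5000, theta=2.0)
print(np.corrcoef(u, v)[0, 1])     # positive dependence between the margins
```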
Economic Efficiency and Investment Timing for Dual Water Systems
NASA Astrophysics Data System (ADS)
Leconte, Robert; Hughes, Trevor C.; Narayanan, Rangesan
1987-10-01
A general methodology to evaluate the economic feasibility of dual water systems is presented. In a first step, a static analysis (evaluation at a single point in time) is developed. The analysis requires the evaluation of consumers' and producer's surpluses from water use and the capital cost of the dual (outdoor) system. The analysis is then extended to a dynamic approach where the water demand increases with time (as a result of a population increase) and where the dual system is allowed to expand. The model determines whether construction of a dual system represents a net benefit, and if so, what is the best time to initiate the system (corresponding to maximization of social welfare). Conditions under which an analytic solution is possible are discussed and results of an application are summarized (including sensitivity to different parameters). The analysis allows identification of key parameters influencing attractiveness of dual water systems.
Laboratory and field testing of commercial rotational seismometers
Nigbor, R.L.; Evans, J.R.; Hutt, C.R.
2009-01-01
There are a small number of commercially available sensors to measure rotational motion in the frequency and amplitude ranges appropriate for earthquake motions on the ground and in structures. However, the performance of these rotational seismometers has not been rigorously and independently tested and characterized for earthquake monitoring purposes as is done for translational strong- and weak-motion seismometers. Quantities such as sensitivity, frequency response, resolution, and linearity are needed for the understanding of recorded rotational data. To address this need, we, with assistance from colleagues in the United States and Taiwan, have been developing performance test methodologies and equipment for rotational seismometers. In this article the performance testing methodologies are applied to samples of a commonly used commercial rotational seismometer, the eentec model R-1. Several examples were obtained for various test sequences in 2006, 2007, and 2008. Performance testing of these sensors consisted of measuring: (1) sensitivity and frequency response; (2) clip level; (3) self noise and resolution; and (4) cross-axis sensitivity, both rotational and translational. These sensor-specific results will assist in understanding the performance envelope of the R-1 rotational seismometer, and the test methodologies can be applied to other rotational seismometers.
Durán, Gema M; Contento, Ana M; Ríos, Ángel
2013-11-01
Based on the highly sensitive fluorescence change of water-soluble CdSe/ZnS core-shell quantum dots (QDs) caused by the paraquat herbicide, a simple, rapid and reproducible methodology was developed to selectively determine paraquat (PQ) in water samples. The methodology enabled the use of a simple pretreatment procedure based on the water solubilization of CdSe/ZnS QDs with hydrophilic heterobifunctional thiol ligands, such as 3-mercaptopropionic acid (3-MPA), using microwave irradiation. The resulting water-soluble QDs exhibit a strong fluorescence emission at 596 nm with high and reproducible photostability. The proposed analytical method thus satisfies the need for a simple, sensitive and rapid methodology to determine residues of paraquat in water samples, as required by the increasingly strict regulations for health protection introduced in recent years. The sensitivity of the method, expressed as the detection limit, was as low as 3.0 ng L(-1). The linear range was between 10 and 5×10(3) ng L(-1). RSD values in the range of 71-102% were obtained. The analytical applicability of the proposed method was demonstrated by analyzing water samples of different provenance. Copyright © 2013 Elsevier B.V. All rights reserved.
Analysis of fluorinated proteins by mass spectrometry.
Luck, Linda A
2014-01-01
(19)F NMR has been used as a probe for investigating bioorganic and biological systems for three decades. Recent reviews have touted this nucleus for its unique characteristics that allow probing in vivo biological systems without endogenous signals. (19)F nucleus is exceptionally sensitive to molecular and microenvironmental changes and thus can be exploited to explore structure, dynamics, and changes in a protein or molecule in the cellular environment. We show how mass spectrometry can be used to assess and characterize the incorporation of fluorine into proteins. This methodology can be applied to a number of systems where (19)F NMR is used.
Data Mining Applied to Analysis of Contraceptive Methods Among College Students.
Simões, Priscyla Waleska; Cesconetto, Samuel; Dalló, Eduardo Daminelli; de Souza Pires, Maria Marlene; Comunello, Eros; Borges Tomaz, Felipe; Xavier, Eduardo Pícolo; da Rosa Brunel Alves, Pedro Antonio; Ceretta, Luciane Bisognin; Manenti, Sandra Aparecida
2017-01-01
The aim of this study was to use data mining to analyze the profile of contraceptive method use in a university population. We used a database on sexuality compiled from a university population in southern Brazil. The results obtained from the generated rules are largely in line with the literature and worldwide epidemiology, showing significant points of vulnerability in the university population. Validation measures of the study, such as accuracy, sensitivity, specificity, and area under the ROC curve, were higher than or at least similar to those of recent studies using the same methodology.
NASA Astrophysics Data System (ADS)
Alfano, M.; Bisagni, C.
2017-01-01
The objective of the ongoing EU project DESICOS (New Robust DESign Guideline for Imperfection Sensitive COmposite Launcher Structures) is to formulate an improved shell design methodology in order to meet the demand of the aerospace industry for lighter structures. Within the project, this article discusses a probability-based methodology developed at Politecnico di Milano. It is based on the combination of the Stress-Strength Interference Method and the Latin Hypercube Method, with the aim of predicting the buckling response of three sandwich composite cylindrical shells, assuming a loading condition of pure compression. The three shells are made of the same material, but have different stacking sequences and geometric dimensions. One of them presents three circular cut-outs. Different types of input imperfections, treated as random variables, are taken into account independently and in combination: variability in the longitudinal Young's modulus, ply misalignment, geometric imperfections, and boundary imperfections. The methodology enables a first assessment of the structural reliability of the shells through the calculation of a probabilistic buckling factor for a specified level of probability. The factor depends strongly on the reliability level, on the number of adopted samples, and on the assumptions made in modeling the input imperfections. The main advantage of the developed procedure is its versatility, as it can be applied to the buckling analysis of laminated composite shells and sandwich composite shells including different types of imperfections.
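A minimal stress-strength/Latin-Hypercube sketch in the spirit of the abstract; all distributions, load levels and the reliability target below are invented placeholders, not DESICOS results.

```python
import numpy as np
from scipy.stats import norm, qmc

# Minimal stress-strength sketch: P(failure) = P(buckling load < applied load),
# with Latin Hypercube samples of two assumed-normal inputs.
n = 10_000
lhs = qmc.LatinHypercube(d=2, seed=1).sample(n)

# Hypothetical distributions (kN): buckling resistance and applied compression.
strength = norm.ppf(lhs[:, 0], loc=950.0, scale=60.0)
stress = norm.ppf(lhs[:, 1], loc=700.0, scale=80.0)

p_fail = np.mean(strength < stress)
factor = np.percentile(strength, 1) / 950.0   # load factor at ~99% reliability
print(f"P(failure) ~ {p_fail:.4f}, probabilistic buckling factor ~ {factor:.2f}")
```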
Nyvold, Charlotte Guldborg
2015-05-01
Hematological malignancies are a heterogeneous group of cancers with respect to both presentation and prognosis, and many subtypes are nowadays associated with aberrations that make up excellent molecular targets for the quantification of minimal residual disease. The quantitative PCR methodology is outstanding in terms of sensitivity, specificity and reproducibility and thus an excellent choice for minimal residual disease assessment. However, the methodology still has pitfalls that should be carefully considered when the technique is integrated in a clinical setting.
Multidisciplinary optimization of an HSCT wing using a response surface methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giunta, A.A.; Grossman, B.; Mason, W.H.
1994-12-31
Aerospace vehicle design is traditionally divided into three phases: conceptual, preliminary, and detailed. Each of these design phases entails a particular level of accuracy and computational expense. While there are several computer programs which perform inexpensive conceptual-level aircraft multidisciplinary design optimization (MDO), aircraft MDO remains prohibitively expensive using preliminary- and detailed-level analysis tools. This occurs due to the expense of computational analyses and because gradient-based optimization requires the analysis of hundreds or thousands of aircraft configurations to estimate design sensitivity information. A further hindrance to aircraft MDO is the problem of numerical noise which occurs frequently in engineering computations. Computer models produce numerical noise as a result of the incomplete convergence of iterative processes, round-off errors, and modeling errors. Such numerical noise is typically manifested as a high frequency, low amplitude variation in the results obtained from the computer models. Optimization attempted using noisy computer models may result in the erroneous calculation of design sensitivities and may slow or prevent convergence to an optimal design.
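Response surface methodology addresses this noise problem by replacing noisy analyses with a smooth polynomial surrogate whose gradients are well behaved; the sketch below fits a full quadratic in two design variables to noisy samples (a generic illustration, not the HSCT wing study itself).

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical noisy response (e.g., wing weight vs. two design variables):
# a smooth quadratic plus low-amplitude "numerical noise".
x1, x2 = rng.uniform(-1, 1, (2, 40))
y = (5 + 2*x1 - 3*x2 + 1.5*x1*x2 + 4*x1**2 + 2*x2**2
     + 0.05*rng.standard_normal(40))

# Least-squares fit of a full quadratic response surface in two variables.
X = np.column_stack([np.ones_like(x1), x1, x2, x1*x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(coef, 2))   # smooth surrogate; its gradients are noise-free
```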
2011-01-01
Background It was still unclear whether the methodological reporting quality of randomized controlled trials (RCTs) in major hepato-gastroenterology journals improved after the Consolidated Standards of Reporting Trials (CONSORT) Statement was revised in 2001. Methods RCTs in five major hepato-gastroenterology journals published in 1998 or 2008 were retrieved from MEDLINE using a high-sensitivity search method, and their reporting quality of methodological details was evaluated based on the CONSORT Statement and the Cochrane Handbook for Systematic Reviews of Interventions. Changes in the methodological reporting quality between 2008 and 1998 were calculated as risk ratios with 95% confidence intervals. Results A total of 107 RCTs published in 2008 and 99 RCTs published in 1998 were found. Compared to those in 1998, the proportion of RCTs that reported sequence generation (RR, 5.70; 95%CI 3.11-10.42), allocation concealment (RR, 4.08; 95%CI 2.25-7.39), sample size calculation (RR, 3.83; 95%CI 2.10-6.98), incomplete outcome data addressed (RR, 1.81; 95%CI, 1.03-3.17), and intention-to-treat analyses (RR, 3.04; 95%CI 1.72-5.39) increased in 2008. Blinding and intention-to-treat analysis were reported better in multi-center trials than in single-center trials. The reporting of allocation concealment and blinding was better in industry-sponsored trials than in publicly funded trials. Compared with historical studies, the methodological reporting quality improved with time. Conclusion Although the reporting of several important methodological aspects improved in 2008 compared with 1998, which may indicate that researchers had increased awareness of and compliance with the revised CONSORT statement, some items were still reported badly. There is much room for future improvement. PMID:21801429
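The risk ratios with 95% confidence intervals reported above follow the standard log-scale Wald construction; a small sketch with invented counts (the raw 2 x 2 counts are not given in the abstract) is shown below.

```python
import math

def risk_ratio_ci(a, n1, b, n2, z=1.96):
    """Risk ratio of event proportions a/n1 vs b/n2 with a 95% Wald CI."""
    rr = (a / n1) / (b / n2)
    se_log = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)
    lo, hi = (rr * math.exp(s * z * se_log) for s in (-1, 1))
    return rr, lo, hi

# Hypothetical counts: 60/107 RCTs reporting an item in 2008 vs 11/99 in 1998.
print(tuple(round(v, 2) for v in risk_ratio_ci(60, 107, 11, 99)))
```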
Kinase Pathway Dependence in Primary Human Leukemias Determined by Rapid Inhibitor Screening
Tyner, Jeffrey W.; Yang, Wayne F.; Bankhead, Armand; Fan, Guang; Fletcher, Luke B.; Bryant, Jade; Glover, Jason M.; Chang, Bill H.; Spurgeon, Stephen E.; Fleming, William H.; Kovacsovics, Tibor; Gotlib, Jason R.; Oh, Stephen T.; Deininger, Michael W.; Zwaan, C. Michel; Den Boer, Monique L.; van den Heuvel-Eibrink, Marry M.; O’Hare, Thomas; Druker, Brian J.; Loriaux, Marc M.
2012-01-01
Kinases are dysregulated in most cancers, but the frequency of specific kinase mutations is low, indicating a complex etiology in kinase dysregulation. Here we report a strategy to rapidly identify functionally important kinase targets, irrespective of the etiology of kinase pathway dysregulation, ultimately enabling a correlation of patient genetic profiles to clinically effective kinase inhibitors. Our methodology assessed the sensitivity of primary leukemia patient samples to a panel of 66 small-molecule kinase inhibitors over 3 days. Screening of 151 leukemia patient samples revealed a wide diversity of drug sensitivities, with 70% of the clinical specimens exhibiting hypersensitivity to one or more drugs. From this data set, we developed an algorithm to predict kinase pathway dependence based on analysis of inhibitor sensitivity patterns. Applying this algorithm correctly identified pathway dependence in proof-of-principle specimens with known oncogenes, including a rare FLT3 mutation outside regions covered by standard molecular diagnostic tests. Interrogation of all 151 patient specimens with this algorithm identified a diversity of gene targets and signaling pathways that could aid prioritization of deep sequencing data sets, permitting a cumulative analysis to understand kinase pathway dependence within leukemia subsets. In a proof-of-principle case, we showed that in vitro drug sensitivity could predict both a clinical response and the development of drug resistance. Taken together, our results suggested that drug target scores derived from a comprehensive kinase inhibitor panel could predict pathway dependence in cancer cells while simultaneously identifying potential therapeutic options. PMID:23087056
NASA Astrophysics Data System (ADS)
Sahoo, Madhumita; Sahoo, Satiprasad; Dhar, Anirban; Pradhan, Biswajeet
2016-10-01
Groundwater vulnerability assessment has been an accepted practice to identify the zones with relatively increased potential for groundwater contamination. DRASTIC is the most popular secondary-information-based vulnerability assessment approach. The original DRASTIC approach considers the relative importance of features/sub-features based on subjective weighting/rating values. However, the variability of features at a smaller scale is not reflected in this subjective vulnerability assessment process. In contrast to the subjective approach, objective weighting-based methods provide flexibility in weight assignment depending on the variation of the local system. However, experts' opinion is not directly considered in the objective weighting-based methods. Thus, the effectiveness of both subjective and objective weighting-based approaches needs to be evaluated. In the present study, three methods - the entropy information method (E-DRASTIC), the fuzzy pattern recognition method (F-DRASTIC) and single-parameter sensitivity analysis (SA-DRASTIC) - were used to modify the weights of the original DRASTIC features to include local variability. Moreover, a grey incidence analysis was used to evaluate the relative performance of the subjective (DRASTIC and SA-DRASTIC) and objective (E-DRASTIC and F-DRASTIC) weighting-based methods. The performance of the developed methodology was tested in an urban area of Kanpur City, India. The relative performance of the subjective and objective methods varies with the choice of water quality parameters. This methodology can be applied elsewhere with or without suitable modification. These evaluations establish the potential applicability of the methodology for general vulnerability assessment in an urban context.
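The entropy-weighting idea behind E-DRASTIC can be sketched in a few lines: features whose ratings vary more across grid cells receive larger objective weights. The ratings below are invented placeholders, not values from the Kanpur study.

```python
import numpy as np

def entropy_weights(X):
    """Objective criterion weights from the entropy method.

    X is an (n_cells, n_features) matrix of non-negative rated DRASTIC-type
    features; features that vary more across cells get larger weights.
    """
    X = np.asarray(X, dtype=float)
    P = X / X.sum(axis=0)
    P = np.where(P == 0, 1e-12, P)               # avoid log(0)
    e = -(P * np.log(P)).sum(axis=0) / np.log(len(X))
    d = 1.0 - e                                   # degree of diversification
    return d / d.sum()

# Hypothetical ratings of 5 grid cells on 3 features (e.g., depth, recharge, soil).
ratings = np.array([[9, 6, 5],
                    [7, 6, 5],
                    [3, 6, 4],
                    [9, 7, 5],
                    [1, 6, 4]])
print(np.round(entropy_weights(ratings), 3))
```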
Quantitation of DNA adducts by stable isotope dilution mass spectrometry
Tretyakova, Natalia; Goggin, Melissa; Janis, Gregory
2012-01-01
Exposure to endogenous and exogenous chemicals can lead to the formation of structurally modified DNA bases (DNA adducts). If not repaired, these nucleobase lesions can cause polymerase errors during DNA replication, leading to heritable mutations potentially contributing to the development of cancer. Due to their critical role in cancer initiation, DNA adducts represent mechanism-based biomarkers of carcinogen exposure, and their quantitation is particularly useful for cancer risk assessment. DNA adducts are also valuable in mechanistic studies linking tumorigenic effects of environmental and industrial carcinogens to specific electrophilic species generated from their metabolism. While multiple experimental methodologies have been developed for DNA adduct analysis in biological samples – including immunoassay, HPLC, and 32P-postlabeling – isotope dilution high performance liquid chromatography-electrospray ionization-tandem mass spectrometry (HPLC-ESI-MS/MS) generally has superior selectivity, sensitivity, accuracy, and reproducibility. As typical DNA adduct concentrations in biological samples are between 0.01 and 10 adducts per 10⁸ normal nucleotides, ultrasensitive HPLC-ESI-MS/MS methodologies are required for their analysis. Recent developments in analytical separations and biological mass spectrometry – especially nanoflow HPLC, nanospray ionization MS, chip-MS, and high resolution MS – have pushed the limits of analytical HPLC-ESI-MS/MS methodologies for DNA adducts, allowing researchers to accurately measure their concentrations in biological samples from patients treated with DNA alkylating drugs and in populations exposed to carcinogens from urban air, drinking water, cooked food, alcohol, and cigarette smoke. PMID:22827593
Capozzi, Vittorio; Yener, Sine; Khomenko, Iuliia; Farneti, Brian; Cappellin, Luca; Gasperi, Flavia; Scampicchio, Matteo; Biasioli, Franco
2017-05-11
Proton Transfer Reaction (PTR), combined with a Time-of-Flight (ToF) Mass Spectrometer (MS) is an analytical approach based on chemical ionization that belongs to the Direct-Injection Mass Spectrometric (DIMS) technologies. These techniques allow the rapid determination of volatile organic compounds (VOCs), assuring high sensitivity and accuracy. In general, PTR-MS requires neither sample preparation nor sample destruction, allowing real time and non-invasive analysis of samples. PTR-MS are exploited in many fields, from environmental and atmospheric chemistry to medical and biological sciences. More recently, we developed a methodology based on coupling PTR-ToF-MS with an automated sampler and tailored data analysis tools, to increase the degree of automation and, consequently, to enhance the potential of the technique. This approach allowed us to monitor bioprocesses (e.g. enzymatic oxidation, alcoholic fermentation), to screen large sample sets (e.g. different origins, entire germoplasms) and to analyze several experimental modes (e.g. different concentrations of a given ingredient, different intensities of a specific technological parameter) in terms of VOC content. Here, we report the experimental protocols exemplifying different possible applications of our methodology: i.e. the detection of VOCs released during lactic acid fermentation of yogurt (on-line bioprocess monitoring), the monitoring of VOCs associated with different apple cultivars (large-scale screening), and the in vivo study of retronasal VOC release during coffee drinking (nosespace analysis).
Schrieks, Ilse C; Heil, Annelijn L J; Hendriks, Henk F J; Mukamal, Kenneth J; Beulens, Joline W J
2015-04-01
Moderate alcohol consumption is associated with a reduced risk of type 2 diabetes. This reduced risk might be explained by improved insulin sensitivity or improved glycemic status, but results of intervention studies on this relation are inconsistent. The purpose of this study was to conduct a systematic review and meta-analysis of intervention studies investigating the effect of alcohol consumption on insulin sensitivity and glycemic status. PubMed and Embase were searched up to August 2014. Intervention studies on the effect of alcohol consumption on biological markers of insulin sensitivity or glycemic status of at least 2 weeks' duration were included. Investigators extracted data on study characteristics, outcome measures, and methodological quality. Fourteen intervention studies were included in a meta-analysis of six glycemic end points. Alcohol consumption did not influence estimated insulin sensitivity (standardized mean difference [SMD] 0.08 [-0.09 to 0.24]) or fasting glucose (SMD 0.07 [-0.11 to 0.24]) but reduced HbA1c (SMD -0.62 [-1.01 to -0.23]) and fasting insulin concentrations (SMD -0.19 [-0.35 to -0.02]) compared with the control condition. Alcohol consumption among women reduced fasting insulin (SMD -0.23 [-0.41 to -0.04]) and tended to improve insulin sensitivity (SMD 0.16 [-0.04 to 0.37]) but not among men. Results were similar after excluding studies with high alcohol dosages (>40 g/day) and were not influenced by dosage and duration of the intervention. Although the studies had small sample sizes and were of short duration, the current evidence suggests that moderate alcohol consumption may decrease fasting insulin and HbA1c concentrations among nondiabetic subjects. Alcohol consumption might improve insulin sensitivity among women but did not do so overall. © 2015 by the American Diabetes Association. Readers may use this article as long as the work is properly cited, the use is educational and not for profit, and the work is not altered.
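Pooled standardized mean differences of the kind reported above are commonly obtained with a DerSimonian-Laird random-effects model; the sketch below uses invented study effects and variances, not the trial data from this meta-analysis.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Pooled SMD and 95% CI under a DerSimonian-Laird random-effects model."""
    y, v = np.asarray(effects), np.asarray(variances)
    w = 1.0 / v
    q = np.sum(w * (y - np.sum(w * y) / w.sum()) ** 2)
    c = w.sum() - np.sum(w ** 2) / w.sum()
    tau2 = max(0.0, (q - (len(y) - 1)) / c)       # between-study variance
    w_star = 1.0 / (v + tau2)
    pooled = np.sum(w_star * y) / w_star.sum()
    se = np.sqrt(1.0 / w_star.sum())
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Hypothetical SMDs and variances from a handful of alcohol-intervention trials.
smd = [-0.40, -0.10, -0.25, 0.05, -0.30]
var = [0.04, 0.02, 0.05, 0.03, 0.06]
print(tuple(round(v, 2) for v in dersimonian_laird(smd, var)))
```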
Tsushima, Yoko; Brient, Florent; Klein, Stephen A.; ...
2017-11-27
The CFMIP Diagnostic Codes Catalogue assembles cloud metrics, diagnostics and methodologies, together with programs to diagnose them from general circulation model (GCM) outputs written by various members of the CFMIP community. This aims to facilitate use of the diagnostics by the wider community studying climate and climate change. Here, this paper describes the diagnostics and metrics which are currently in the catalogue, together with examples of their application to model evaluation studies and a summary of some of the insights these diagnostics have provided into the main shortcomings in current GCMs. Analysis of outputs from CFMIP and CMIP6 experiments will also be facilitated by the sharing of diagnostic codes via this catalogue. Any code which implements diagnostics relevant to analysing clouds – including cloud–circulation interactions and the contribution of clouds to estimates of climate sensitivity in models – and which is documented in peer-reviewed studies, can be included in the catalogue. We very much welcome additional contributions to further support community analysis of CMIP6 outputs.
NASA Astrophysics Data System (ADS)
Synek, Petr; Zemánek, Miroslav; Kudrle, Vít; Hoder, Tomáš
2018-04-01
Electrical current measurements in corona or barrier microdischarges are a challenge as they require both high temporal resolution and a large dynamic range of the current probe used. In this article, we apply a simple self-assembled current probe and compare it to commercial ones. An analysis in the time and frequency domain is carried out. Moreover, an improved methodology is presented, enabling both temporal resolution in sub-nanosecond times and current sensitivity in the order of tens of micro-amperes. Combining this methodology with a high-tech oscilloscope and self-developed software, a unique statistical analysis of currents in volume barrier discharge driven in atmospheric-pressure air is made for over 80 consecutive periods of a 15 kHz applied voltage. We reveal the presence of repetitive sub-critical current pulses and conclude that these can be identified with the discharging of surface charge microdomains. Moreover, extremely low, long-lasting microsecond currents were detected which are caused by ion flow, and are analysed in detail. The statistical behaviour presented gives deeper insight into the discharge physics of these usually undetectable current signals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsushima, Yoko; Brient, Florent; Klein, Stephen A.
The CFMIP Diagnostic Codes Catalogue assembles cloud metrics, diagnostics and methodologies, together with programs to diagnose them from general circulation model (GCM) outputs written by various members of the CFMIP community. This aims to facilitate use of the diagnostics by the wider community studying climate and climate change. Here, this paper describes the diagnostics and metrics which are currently in the catalogue, together with examples of their application to model evaluation studies and a summary of some of the insights these diagnostics have provided into the main shortcomings in current GCMs. Analysis of outputs from CFMIP and CMIP6 experiments will also be facilitated by the sharing of diagnostic codes via this catalogue. Any code which implements diagnostics relevant to analysing clouds – including cloud–circulation interactions and the contribution of clouds to estimates of climate sensitivity in models – and which is documented in peer-reviewed studies, can be included in the catalogue. We very much welcome additional contributions to further support community analysis of CMIP6 outputs.
Filip, Xenia; Borodi, Gheorghe; Filip, Claudiu
2011-10-28
A solid state structural investigation of ethoxzolamide is performed on microcrystalline powder by using a multi-technique approach that combines X-ray powder diffraction (XRPD) data analysis based on direct space methods with information from (13)C((15)N) solid-state Nuclear Magnetic Resonance (SS-NMR) and molecular modeling. Quantum chemical computations of the crystal were employed for geometry optimization and chemical shift calculations based on the Gauge Including Projector Augmented-Wave (GIPAW) method, whereas a systematic search in the conformational space was performed on the isolated molecule using a molecular mechanics (MM) approach. The applied methodology proved useful for: (i) removing ambiguities in the XRPD crystal structure determination process and further refining the derived structure solutions, and (ii) getting important insights into the relationship between the complex network of non-covalent interactions and the induced supra-molecular architectures/crystal packing patterns. It was found that ethoxzolamide provides an ideal case study for testing the accuracy with which this methodology allows to distinguish between various structural features emerging from the analysis of the powder diffraction data. This journal is © the Owner Societies 2011
Mycotoxin Analysis of Human Urine by LC-MS/MS: A Comparative Extraction Study
Escrivá, Laura; Font, Guillermina
2017-01-01
The lower mycotoxin levels detected in urine make the development of sensitive and accurate analytical methods essential. Three extraction methods, namely salting-out liquid–liquid extraction (SALLE), miniQuEChERS (quick, easy, cheap, effective, rugged, and safe), and dispersive liquid–liquid microextraction (DLLME), were evaluated and compared based on analytical parameters for the quantitative LC-MS/MS measurement of 11 mycotoxins (AFB1, AFB2, AFG1, AFG2, OTA, ZEA, BEA, EN A, EN B, EN A1 and EN B1) in human urine. DLLME was selected as the most appropriate methodology, as it produced better validation results for recovery (79–113%), reproducibility (RSDs < 12%), and repeatability (RSDs < 15%) than miniQuEChERS (71–109%, RSDs <14% and <24%, respectively) and SALLE (70–108%, RSDs < 14% and < 24%, respectively). Moreover, the lowest limits of detection (LODs) and quantitation (LOQs) were achieved with DLLME (LODs: 0.005–2 μg L−1, LOQs: 0.1–4 μg L−1). The DLLME methodology was used for the analysis of 10 real urine samples from healthy volunteers showing the presence of ENs B, B1 and A1 at low concentrations. PMID:29048356
How to estimate green house gas (GHG) emissions from an excavator by using CAT's performance chart
NASA Astrophysics Data System (ADS)
Hajji, Apif M.; Lewis, Michael P.
2017-09-01
Construction equipment activities are a major part of many infrastructure projects. This type of equipment typically releases large quantities of green house gas (GHG) emissions. GHG emissions may come from fuel consumption. Furthermore, equipment productivity affects fuel consumption. Thus, an estimating tool based on the construction equipment productivity rate is able to accurately assess the GHG emissions resulting from the equipment activities. This paper proposes a methodology to estimate the environmental impact of a common construction activity, and delivers a sensitivity analysis and a case study for an excavator based on a trench excavation activity. The methodology delivered in this study can be applied as a stand-alone model, or as a module that is integrated with other emissions estimators. GHG emissions are highly correlated with diesel fuel use, which releases approximately 10.15 kilograms (kg) of CO2 per gallon of diesel fuel. The results showed that the productivity rate model resulting from the multiple regression analysis can be used as the basis for estimating GHG emissions, and also as a framework for developing emissions footprints and understanding the environmental impact of construction equipment activities.
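A minimal sketch of the productivity-to-fuel-to-CO2 chain the abstract describes; apart from the 10.15 kg CO2 per gallon figure quoted above, every number below is a placeholder, not a value from the paper.

```python
# Minimal sketch of the abstract's chain: productivity -> fuel -> CO2.
trench_volume_m3 = 400.0            # hypothetical job size
productivity_m3_per_hr = 55.0       # e.g., from a regression-based model
fuel_gal_per_hr = 6.5               # assumed hourly fuel use for the excavator

hours = trench_volume_m3 / productivity_m3_per_hr
fuel_gal = hours * fuel_gal_per_hr
co2_kg = fuel_gal * 10.15           # kg CO2 per gallon of diesel (from abstract)
print(f"{hours:.1f} h, {fuel_gal:.1f} gal diesel, {co2_kg:.0f} kg CO2")
```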
Analytical methods in sphingolipidomics: Quantitative and profiling approaches in food analysis.
Canela, Núria; Herrero, Pol; Mariné, Sílvia; Nadal, Pedro; Ras, Maria Rosa; Rodríguez, Miguel Ángel; Arola, Lluís
2016-01-08
In recent years, sphingolipidomics has emerged as an interesting omic science that encompasses the study of the full sphingolipidome characterization, content, structure and activity in cells, tissues or organisms. Like other omics, it has the potential to impact biomarker discovery, drug development and systems biology knowledge. Concretely, dietary food sphingolipids have gained considerable importance due to their extensively reported bioactivity. Because of the complexity of this lipid family and their diversity among foods, powerful analytical methodologies are needed for their study. The analytical tools developed in the past have been improved with the enormous advances made in recent years in mass spectrometry (MS) and chromatography, which allow the convenient and sensitive identification and quantitation of sphingolipid classes and form the basis of current sphingolipidomics methodologies. In addition, novel hyphenated nuclear magnetic resonance (NMR) strategies, new ionization strategies, and MS imaging are outlined as promising technologies to shape the future of sphingolipid analyses. This review traces the analytical methods of sphingolipidomics in food analysis concerning sample extraction, chromatographic separation, the identification and quantification of sphingolipids by MS and their structural elucidation by NMR. Copyright © 2015 Elsevier B.V. All rights reserved.
Electrochemical biosensing strategies for DNA methylation analysis.
Hossain, Tanvir; Mahmudunnabi, Golam; Masud, Mostafa Kamal; Islam, Md Nazmul; Ooi, Lezanne; Konstantinov, Konstantin; Hossain, Md Shahriar Al; Martinac, Boris; Alici, Gursel; Nguyen, Nam-Trung; Shiddiky, Muhammad J A
2017-08-15
DNA methylation is one of the key epigenetic modifications of DNA that results from the enzymatic addition of a methyl group at the fifth carbon of the cytosine base. It plays a crucial role in cellular development, genomic stability and gene expression. Aberrant DNA methylation is responsible for the pathogenesis of many diseases including cancers. Over the past several decades, many methodologies have been developed to detect DNA methylation. These methodologies range from classical molecular biology and optical approaches, such as bisulfite sequencing, microarrays, quantitative real-time PCR, colorimetry, Raman spectroscopy to the more recent electrochemical approaches. Among these, electrochemical approaches offer sensitive, simple, specific, rapid, and cost-effective analysis of DNA methylation. Additionally, electrochemical methods are highly amenable to miniaturization and possess the potential to be multiplexed. In recent years, several reviews have provided information on the detection strategies of DNA methylation. However, to date, there is no comprehensive evaluation of electrochemical DNA methylation detection strategies. Herein, we address the recent developments of electrochemical DNA methylation detection approaches. Furthermore, we highlight the major technical and biological challenges involved in these strategies and provide suggestions for the future direction of this important field. Copyright © 2017 Elsevier B.V. All rights reserved.
A Decomposition of Hospital Profitability
Broom, Kevin; Elliott, Michael; Lee, Jen-Fu
2015-01-01
Objectives: This paper evaluates the drivers of profitability for a large sample of U.S. hospitals. Following a methodology frequently used by financial analysts, we use a DuPont analysis as a framework to evaluate the quality of earnings. By decomposing returns on equity (ROE) into profit margin, total asset turnover, and capital structure, the DuPont analysis reveals what drives overall profitability. Methods: Profit margin, the efficiency with which services are rendered (total asset turnover), and capital structure are calculated for 3,255 U.S. hospitals between 2007 and 2012 using data from the Centers for Medicare & Medicaid Services’ Healthcare Cost Report Information System (CMS Form 2552). The sample is then stratified by ownership, size, system affiliation, teaching status, critical access designation, and urban or non-urban location. Those hospital characteristics and interaction terms are then regressed (OLS) against the ROE and the respective DuPont components. Sensitivity to regression methodology is also investigated using a seemingly unrelated regression. Results: When the sample is stratified by hospital characteristics, the results indicate investor-owned hospitals have higher profit margins, higher efficiency, and are substantially more leveraged. Hospitals in systems are found to have higher ROE, margins, and efficiency but are associated with less leverage. In addition, a number of important and significant interactions between teaching status, ownership, location, critical access designation, and inclusion in a system are documented. Many of the significant relationships, most notably not-for-profit ownership, lose significance or are predominately associated with one interaction effect when interaction terms are introduced as explanatory variables. Results are not sensitive to the alternative methodology. Conclusion: The results of the DuPont analysis suggest that although there appears to be convergence in the behavior of not-for-profit (NFP) and investor-owned (IO) hospitals, significant financial differences remain depending on their respective hospital characteristics. Those differences are tempered or exacerbated by location, size, teaching status, system affiliation, and critical access designation. With the exception of cost-based reimbursement for critical access hospitals, emerging payment systems are placing additional financial pressures on hospitals. The financial pressures being applied treat hospitals as a monolithic category and, given the delicate and often negative ROE for many hospitals, the long-term stability of the healthcare facility infrastructure may be negatively impacted. PMID:28462258
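For readers unfamiliar with the DuPont identity used above, the short Python sketch below (an illustration with invented round numbers, not figures from the CMS cost-report sample) decomposes return on equity into profit margin, total asset turnover and an equity multiplier, the capital-structure term.

    def dupont(net_income, revenue, total_assets, total_equity):
        profit_margin = net_income / revenue              # profitability
        asset_turnover = revenue / total_assets           # efficiency of asset use
        equity_multiplier = total_assets / total_equity   # capital structure (leverage)
        roe = profit_margin * asset_turnover * equity_multiplier
        return profit_margin, asset_turnover, equity_multiplier, roe

    pm, at, em, roe = dupont(net_income=4.0, revenue=100.0, total_assets=80.0, total_equity=32.0)
    print(f"margin={pm:.3f}, turnover={at:.3f}, leverage={em:.3f}, ROE={roe:.1%}")

With these inputs ROE is 0.04 x 1.25 x 2.5 = 12.5%, and the decomposition shows which of the three components drives it, which is exactly the comparison the paper makes across hospital strata.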
Evaluating linguistic equivalence of patient-reported outcomes in a cancer clinical trial.
Hahn, Elizabeth A; Bode, Rita K; Du, Hongyan; Cella, David
2006-01-01
In order to make meaningful cross-cultural or cross-linguistic comparisons of health-related quality of life (HRQL) or to pool international research data, it is essential to create unbiased measures that can detect clinically important differences. When HRQL scores differ between cultural/linguistic groups, it is important to determine whether this reflects real group differences, or is the result of systematic measurement variability. To investigate the linguistic measurement equivalence of a cancer-specific HRQL questionnaire, and to conduct a sensitivity analysis of treatment differences in HRQL in a clinical trial. Patients with newly diagnosed chronic myelogenous leukemia (n = 1049) completed serial HRQL assessments in an international Phase III trial. Two types of differential item functioning (uniform and non-uniform) were evaluated using item response theory and classical test theory approaches. A sensitivity analysis was conducted to compare HRQL between treatment arms using items without evidence of differential functioning. Among 27 items, nine (33%) did not exhibit any evidence of differential functioning in both linguistic comparisons (English versus French, English versus German). Although 18 items functioned differently, there was no evidence of systematic bias. In a sensitivity analysis, adjustment for differential functioning affected the magnitude, but not the direction or interpretation of clinical trial treatment arm differences. Sufficient sample sizes were available for only three of the eight language groups. Identification of differential functioning in two-thirds of the items suggests that current psychometric methods may be too sensitive. Enhanced methodologies are needed to differentiate trivial from substantive differential item functioning. Systematic variability in HRQL across different groups can be evaluated for its effect upon clinical trial results; a practice recommended when data are pooled across cultural or linguistic groups to make conclusions about treatment effects.
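One common screen for uniform differential item functioning, in the spirit of the analysis described above, regresses a single dichotomous item response on the matching variable plus a group indicator; a clearly non-zero group coefficient at equal trait level signals uniform DIF. The Python sketch below uses simulated data and scikit-learn and is a generic illustration only, not the exact IRT/classical test theory procedure applied in the trial.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 400
    group = rng.integers(0, 2, n)        # 0 = English, 1 = French (hypothetical labels)
    trait = rng.normal(0, 1, n)          # stand-in for the matching variable (e.g. total score)
    logit = 1.2 * trait - 0.8 * group    # simulate an item uniformly harder for group 1
    response = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

    model = LogisticRegression(max_iter=1000).fit(np.column_stack([trait, group]), response)
    print("group coefficient (uniform DIF if clearly non-zero):", round(model.coef_[0][1], 2))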
High-Speed Real-Time Resting-State fMRI Using Multi-Slab Echo-Volumar Imaging
Posse, Stefan; Ackley, Elena; Mutihac, Radu; Zhang, Tongsheng; Hummatov, Ruslan; Akhtari, Massoud; Chohan, Muhammad; Fisch, Bruce; Yonas, Howard
2013-01-01
We recently demonstrated that ultra-high-speed real-time fMRI using multi-slab echo-volumar imaging (MEVI) significantly increases sensitivity for mapping task-related activation and resting-state networks (RSNs) compared to echo-planar imaging (Posse et al., 2012). In the present study we characterize the sensitivity of MEVI for mapping RSN connectivity dynamics, comparing independent component analysis (ICA) and a novel seed-based connectivity analysis (SBCA) that combines sliding-window correlation analysis with meta-statistics. This SBCA approach is shown to minimize the effects of confounds, such as movement, and CSF and white matter signal changes, and enables real-time monitoring of RSN dynamics at time scales of tens of seconds. We demonstrate highly sensitive mapping of eloquent cortex in the vicinity of brain tumors and arterio-venous malformations, and detection of abnormal resting-state connectivity in epilepsy. In patients with motor impairment, resting-state fMRI provided focal localization of sensorimotor cortex compared with more diffuse activation in task-based fMRI. The fast acquisition speed of MEVI enabled segregation of cardiac-related signal pulsation using ICA, which revealed distinct regional differences in pulsation amplitude and waveform, elevated signal pulsation in patients with arterio-venous malformations and a trend toward reduced pulsatility in gray matter of patients compared with healthy controls. Mapping cardiac pulsation in cortical gray matter may carry important functional information that distinguishes healthy from diseased tissue vasculature. This novel fMRI methodology is particularly promising for mapping eloquent cortex in patients with neurological disease who have variable degrees of cooperation in task-based fMRI. In conclusion, ultra-high-speed real-time fMRI enhances the sensitivity of mapping the dynamics of resting-state connectivity and cerebro-vascular pulsatility for clinical and neuroscience research applications. PMID:23986677
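The sliding-window correlation at the heart of the SBCA approach can be written in a few lines; the sketch below computes windowed Pearson correlations between a seed time course and a target time course on simulated data, with an arbitrary 30 s window. The published method adds meta-statistics and confound regression that are not reproduced here.

    import numpy as np

    def sliding_window_corr(seed, target, window):
        """Pearson correlation in overlapping windows of `window` samples."""
        return np.array([np.corrcoef(seed[i:i + window], target[i:i + window])[0, 1]
                         for i in range(len(seed) - window + 1)])

    rng = np.random.default_rng(1)
    t = np.arange(600) * 0.1                                   # 0.1 s sampling interval (illustrative)
    seed = np.sin(2 * np.pi * 0.05 * t) + 0.5 * rng.normal(size=t.size)
    target = np.sin(2 * np.pi * 0.05 * t) + 0.5 * rng.normal(size=t.size)
    r = sliding_window_corr(seed, target, window=300)          # 30 s windows
    print("windowed correlation range:", round(r.min(), 2), "to", round(r.max(), 2))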
NASA Technical Reports Server (NTRS)
Smith, Andrew; LaVerde, Bruce; Fulcher, Clay; Hunt, Ron
2012-01-01
An approach for predicting the vibration, strain, and force responses of a flight-like vehicle panel assembly to acoustic pressures is presented. Important validation for the approach is provided by comparison to ground test measurements in a reverberant chamber. The test article and the corresponding analytical model were assembled in several configurations to demonstrate the suitability of the approach for response predictions when the vehicle panel is integrated with equipment. Critical choices in the analysis necessary for convergence of the predicted and measured responses are illustrated through sensitivity studies. The methodology includes representation of spatial correlation of the pressure field over the panel surface. Therefore, it is possible to demonstrate the effects of hydrodynamic coincidence in the response. The sensitivity to pressure patch density clearly illustrates the onset of coincidence effects on the panel response predictions.
Analysis and Implementation of Methodologies for the Monitoring of Changes in Eye Fundus Images
NASA Astrophysics Data System (ADS)
Gelroth, A.; Rodríguez, D.; Salvatelli, A.; Drozdowicz, B.; Bizai, G.
2011-12-01
We present a support system for change detection in fundus images of the same patient taken at different time intervals. This process is useful for monitoring pathologies lasting for long periods of time, as ophthalmologic pathologies usually do. We propose a flow of preprocessing, processing and postprocessing applied to a set of images selected from a public database, showing pathological progression. A test interface was developed to select the images to be compared, apply the different methods developed, and display the results. We measure the system performance in terms of sensitivity, specificity and computation times. We have obtained good results, higher than 84% for the first two parameters and processing times lower than 3 seconds for 512x512 pixel images. For the specific case of detection of changes associated with bleeding, the system responds with sensitivity and specificity over 98%.
Enhanced sensitivity in a butterfly gyroscope with a hexagonal oblique beam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, Dingbang; Cao, Shijie; Hou, Zhanqiang, E-mail: houzhanqiang@nudt.edu.cn
2015-04-15
A new approach to improve the performance of a butterfly gyroscope is developed. The methodology provides a simple way to improve the gyroscope’s sensitivity and stability, by reducing the resonant frequency mismatch between the drive and sense modes. This method was verified by simulations and theoretical analysis. The size of the hexagonal section oblique beam is the major factor that influences the resonant frequency mismatch. A prototype, which has the appropriately sized oblique beam, was fabricated using precise, time-controlled multilayer pre-buried masks. The performance of this prototype was compared with a non-tuned gyroscope. The scale factor of the prototype reaches 30.13 mV/°/s, which is 15 times larger than that obtained from the non-tuned gyroscope. The bias stability of the prototype is 0.8 °/h, which is better than the 5.2 °/h of the non-tuned devices.
New Trends in Food Allergens Detection: Toward Biosensing Strategies.
Alves, Rita C; Barroso, M Fátima; González-García, María Begoña; Oliveira, M Beatriz P P; Delerue-Matos, Cristina
2016-10-25
Food allergens are a real threat to sensitized individuals. Although food labeling is crucial to provide information to consumers with food allergies, accidental exposure to allergenic proteins may result from undeclared allergenic substances by means of food adulteration, fraud or uncontrolled cross-contamination. Allergens detection in foodstuffs can be a very hard task, due to their presence usually in trace amounts, together with the natural interference of the matrix. Methods for allergens analysis can be mainly divided in two large groups: the immunological assays and the DNA-based ones. Mass spectrometry has also been used as a confirmatory tool. Recently, biosensors appeared as innovative, sensitive, selective, environmentally friendly, cheaper and fast techniques (especially when automated and/or miniaturized), able to effectively replace the classical methodologies. In this review, we present the advances in the field of food allergens detection toward the biosensing strategies and discuss the challenges and future perspectives of this technology.
Thermodynamics-based Metabolite Sensitivity Analysis in metabolic networks.
Kiparissides, A; Hatzimanikatis, V
2017-01-01
The increasing availability of large metabolomics datasets enhances the need for computational methodologies that can organize the data in a way that can lead to the inference of meaningful relationships. Knowledge of the metabolic state of a cell and how it responds to various stimuli and extracellular conditions can offer significant insight in the regulatory functions and how to manipulate them. Constraint based methods, such as Flux Balance Analysis (FBA) and Thermodynamics-based flux analysis (TFA), are commonly used to estimate the flow of metabolites through genome-wide metabolic networks, making it possible to identify the ranges of flux values that are consistent with the studied physiological and thermodynamic conditions. However, unless key intracellular fluxes and metabolite concentrations are known, constraint-based models lead to underdetermined problem formulations. This lack of information propagates as uncertainty in the estimation of fluxes and basic reaction properties such as the determination of reaction directionalities. Therefore, knowledge of which metabolites, if measured, would contribute the most to reducing this uncertainty can significantly improve our ability to define the internal state of the cell. In the present work we combine constraint based modeling, Design of Experiments (DoE) and Global Sensitivity Analysis (GSA) into the Thermodynamics-based Metabolite Sensitivity Analysis (TMSA) method. TMSA ranks metabolites comprising a metabolic network based on their ability to constrain the gamut of possible solutions to a limited, thermodynamically consistent set of internal states. TMSA is modular and can be applied to a single reaction, a metabolic pathway or an entire metabolic network. This is, to our knowledge, the first attempt to use metabolic modeling in order to provide a significance ranking of metabolites to guide experimental measurements. Copyright © 2016 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.
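The constraint-based setting that TMSA builds on can be illustrated with a toy flux balance analysis problem: maximize an objective flux subject to steady-state mass balance and flux bounds. The three-reaction network below is invented for illustration, and the thermodynamic constraints and DoE/GSA-based metabolite ranking that distinguish TMSA are not reproduced.

    import numpy as np
    from scipy.optimize import linprog

    # Reactions: R1: -> A, R2: A -> B, R3: B -> biomass (toy network)
    S = np.array([[1, -1,  0],    # metabolite A balance
                  [0,  1, -1]])   # metabolite B balance
    bounds = [(0, 10), (0, 10), (0, 10)]   # flux bounds
    c = [0, 0, -1]                         # maximize v3 (linprog minimizes)
    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
    print("optimal fluxes:", res.x, "objective flux:", -res.fun)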
Zhang, Peige; Zhang, Li; Zheng, Shaoping; Yu, Cheng; Xie, Mingxing; Lv, Qing
2016-01-01
To evaluate the overall performance of acoustic radiation force impulse imaging (ARFI) in differentiating between benign and malignant lymph nodes (LNs) by conducting a meta-analysis. PubMed, Embase, Web of Science, the Cochrane Library and the China National Knowledge Infrastructure were comprehensively searched for potential studies through August 13th, 2016. Studies that investigated the diagnostic power of ARFI for the differential diagnosis of benign and malignant LNs by using virtual touch tissue quantification (VTQ) or virtual touch tissue imaging quantification (VTIQ) were collected. The included articles were published in English or Chinese. Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) was used to evaluate the methodological quality. The pooled sensitivity, specificity, and the area under the summary receiver operating characteristic (SROC) curve (AUC) were calculated by means of a bivariate mixed-effects regression model. Meta-regression analysis was performed to identify the potential sources of between study heterogeneity. Fagan plot analysis was used to explore the clinical utilities. Publication bias was assessed using Deek's funnel plot. Nine studies involving 1084 LNs from 929 patients were identified to analyze in the meta-analysis. The summary sensitivity and specificity of ARFI in detecting malignant LNs were 0.87 (95% confidence interval [CI], 0.83-0.91) and 0.88 (95% CI, 0.82-0.92), respectively. The AUC was 0.93 (95% CI, 0.90-0.95). The pooled DOR was 49.59 (95% CI, 26.11-94.15). Deek's funnel plot revealed no significant publication bias. ARFI is a promising tool for the differentiation of benign and malignant LNs with high sensitivity and specificity.
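As a quick plausibility check of the figures above, the diagnostic odds ratio (DOR) implied by the summary sensitivity and specificity can be computed directly; the published pooling used a bivariate mixed-effects model, which this one-line calculation does not reproduce.

    sens, spec = 0.87, 0.88                              # summary estimates reported above
    dor = (sens / (1 - sens)) / ((1 - spec) / spec)
    print(f"DOR implied by the summary estimates: {dor:.1f}")   # ~49, close to the reported 49.59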
Aydin, Ümit; Vorwerk, Johannes; Küpper, Philipp; Heers, Marcel; Kugel, Harald; Galka, Andreas; Hamid, Laith; Wellmer, Jörg; Kellinghaus, Christoph; Rampp, Stefan; Wolters, Carsten Hermann
2014-01-01
To increase the reliability for the non-invasive determination of the irritative zone in presurgical epilepsy diagnosis, we introduce here a new experimental and methodological source analysis pipeline that combines the complementary information in EEG and MEG, and apply it to data from a patient suffering from refractory focal epilepsy. Skull conductivity parameters in a six-compartment finite element head model with brain anisotropy, constructed from individual MRI data, are estimated in a calibration procedure using somatosensory evoked potential (SEP) and field (SEF) data. These data are measured in a single run before acquisition of further runs of spontaneous epileptic activity. Our results show that even for single interictal spikes, volume conduction effects dominate over noise and need to be taken into account for accurate source analysis. While cerebrospinal fluid and brain anisotropy influence both modalities, only EEG is sensitive to skull conductivity, and conductivity calibration significantly reduces the difference between the two modalities, especially in depth localization, emphasizing its importance for combining EEG and MEG source analysis. On the other hand, localization differences which are due to the distinct sensitivity profiles of EEG and MEG persist. In the case of a moderate error in skull conductivity, combined source analysis results can still profit from the different sensitivity profiles of EEG and MEG to accurately determine location, orientation and strength of the underlying sources. Conversely, significant errors in skull modeling are reflected in EEG reconstruction errors and could reduce the goodness of fit to combined datasets. For combined EEG and MEG source analysis, we therefore recommend calibrating skull conductivity using additionally acquired SEP/SEF data. PMID:24671208
Staub, Leonardo Jönck; Biscaro, Roberta Rodolfo Mazzali; Kaszubowski, Erikson; Maurici, Rosemeri
2018-03-01
To assess the accuracy of the chest ultrasonography for the emergency diagnosis of traumatic pneumothorax and haemothorax in adults. Systematic review and meta-analysis. PubMed, EMBASE, Scopus, Web of Science and LILACS (up to 2016) were systematically searched for prospective studies on the diagnostic accuracy of ultrasonography for pneumothorax and haemothorax in adult trauma patients. The references of other systematic reviews and the included studies were checked for further articles. The characteristics and results of the studies were extracted using a standardised form, and their methodological quality was assessed using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2). Primary analysis was performed considering each hemithorax as an independent unit, while secondary analysis considered each patient. The global diagnostic accuracy of the chest ultrasonography was estimated using the Rutter-Gatsonis hierarchical summary ROC method. Moreover, Reitsma's bivariate model was used to estimate the sensitivity, specificity, positive likelihood ratio (LR+) and negative likelihood ratio (LR-) of each sonographic sign. This review was previously registered (PROSPERO CRD42016048085). Nineteen studies were included in the review, 17 assessing pneumothorax and 5 assessing haemothorax. The reference standard was always chest tomography, alone or in parallel with chest radiography and observation of the chest tube. The overall methodological quality of the studies was low. The diagnostic accuracy of chest ultrasonography had an area under the curve (AUC) of 0.979 for pneumothorax. The absence of lung sliding and comet-tail artefacts was the most reported sonographic sign of pneumothorax, with a sensitivity of 0.81 (95% confidence interval [95%CI], 0.71-0.88), specificity of 0.98 (95%CI, 0.97-0.99), LR+ of 67.9 (95%CI, 26.3-148) and LR- of 0.18 (95%CI, 0.11-0.29). An echo-poor or anechoic area in the pleural space was the only sonographic sign for haemothorax, with a sensitivity of 0.60 (95%CI, 0.31-0.86), specificity of 0.98 (95%CI, 0.94-0.99), LR+ of 37.5 (95%CI, 5.26-207.5), LR- of 0.40 (95%CI, 0.17-0.72) and AUC of 0.953. Notwithstanding the limitations of the included studies, this systematic review and meta-analysis suggested that chest ultrasonography is an accurate tool for the diagnostic assessment of traumatic pneumothorax and haemothorax in adults. Copyright © 2018 Elsevier Ltd. All rights reserved.
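The likelihood ratios quoted above follow from the standard definitions LR+ = sensitivity/(1 - specificity) and LR- = (1 - sensitivity)/specificity. The sketch below applies them to the summary estimates for the pneumothorax sign; because the review's pooled LRs come from Reitsma's bivariate model rather than from the summary sensitivity and specificity, plugging those summaries into the formulas gives only a rough approximation of the reported values.

    def likelihood_ratios(sens, spec):
        return sens / (1 - spec), (1 - sens) / spec

    lr_pos, lr_neg = likelihood_ratios(0.81, 0.98)   # absent lung sliding / comet-tail artefacts
    print(f"LR+ ~ {lr_pos:.1f}, LR- ~ {lr_neg:.2f}")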
Self-Knowledge, Capacity and Sensitivity: Prerequisites to Authentic Leadership by School Principals
ERIC Educational Resources Information Center
Begley, Paul T.
2006-01-01
Purpose: The article proposes three prerequisites to authentic leadership by school principals: self-knowledge, a capacity for moral reasoning, and sensitivity to the orientations of others. Design/methodology/approach: A conceptual framework, based on research on the valuation processes of school principals and their strategic responses to…
Recent Advances in the Measurement of Arsenic, Cadmium, and Mercury in Rice and Other Foods
Punshon, Tracy
2015-01-01
Trace element analysis of foods is of increasing importance because of raised consumer awareness and the need to evaluate and establish regulatory guidelines for toxic trace metals and metalloids. This paper reviews recent advances in the analysis of trace elements in food, including challenges, state-of-the-art methods, and use of spatially resolved techniques for localizing the distribution of As and Hg within rice grains. Total elemental analysis of foods is relatively well-established, but the push for ever lower detection limits requires that methods be robust against potential matrix interferences, which can be particularly severe for food. Inductively coupled plasma mass spectrometry (ICP-MS) is the method of choice, allowing for multi-element and highly sensitive analyses. For arsenic, speciation analysis is necessary because the inorganic forms are more likely to be subject to regulatory limits. Chromatographic techniques coupled to ICP-MS are most often used for arsenic speciation and a range of methods now exist for a variety of different arsenic species in different food matrices. Speciation and spatial analysis of foods, especially rice, can also be achieved with synchrotron techniques. Sensitive analytical techniques and methodological advances provide robust methods for the assessment of several metals in animal and plant-based foods, in particular for arsenic, cadmium and mercury in rice and arsenic speciation in foodstuffs. PMID:25938012
Lucid dreaming incidence: A quality effects meta-analysis of 50 years of research.
Saunders, David T; Roe, Chris A; Smith, Graham; Clegg, Helen
2016-07-01
We report a quality effects meta-analysis on studies from the period 1966-2016 measuring either (a) lucid dreaming prevalence (one or more lucid dreams in a lifetime); (b) frequent lucid dreaming (one or more lucid dreams in a month) or both. A quality effects meta-analysis allows for the minimisation of the influence of study methodological quality on overall model estimates. Following sensitivity analysis, a heterogeneous lucid dreaming prevalence data set of 34 studies yielded a mean estimate of 55%, 95% C. I. [49%, 62%] for which moderator analysis showed no systematic bias for suspected sources of variability. A heterogeneous lucid dreaming frequency data set of 25 studies yielded a mean estimate of 23%, 95% C. I. [20%, 25%], moderator analysis revealed no suspected sources of variability. These findings are consistent with earlier estimates of lucid dreaming prevalence and frequent lucid dreaming in the population but are based on more robust evidence. Copyright © 2016 Elsevier Inc. All rights reserved.
Nadeau, Geneviève; Lippel, Katherine
2014-09-10
Emerging fields such as environmental health have been challenged, in recent years, to answer the growing methodological calls for a finer integration of sex and gender in health-related research and policy-making. Through a descriptive examination of 25 peer-reviewed social science papers published between 1996 and 2011, we explore, by examining methodological designs and theoretical standpoints, how the social sciences have integrated gender sensitivity in empirical work on Multiple Chemical Sensitivities (MCS). MCS is a "diagnosis" associated with sensitivities to chronic and low-dose chemical exposures, which remains contested in both the medical and institutional arenas, and is reported to disproportionately affect women. We highlighted important differences between papers that did integrate a gender lens and those that did not. These included characteristics of the authorship, purposes, theoretical frameworks and methodological designs of the studies. Reviewed papers that integrated gender tended to focus on the gender roles and identity of women suffering from MCS, emphasizing personal strategies of adaptation. More generally, terminological confusions in the use of sex and gender language and concepts, such as a conflation of women and gender, were observed. Although some men were included in most of the study samples reviewed, specific data relating to men were underreported in results and only one paper discussed issues specifically experienced by men suffering from MCS. Papers that overlooked gender dimensions generally addressed more systemic social issues such as the dynamics of expertise and the medical codification of MCS, from more consistently outlined theoretical frameworks. Results highlight the place for a critical, systematic and reflexive problematization of gender and for the development of methodological and theoretical tools on how to integrate gender in research designs when looking at both micro and macro social dimensions of environmental health conditions. This paper contributes to a discussion on the methodological and policy implications of taking sex and gender into account appropriately in order to contribute to better equity in health, especially where the critical social contexts of definition and medico-legal recognition play a major role such as in the case of MCS.
ERIC Educational Resources Information Center
Aarnio, Pauliina; Kulmala, Teija
2016-01-01
Self-interview methods such as audio computer-assisted self-interviewing (ACASI) are used to improve the accuracy of interview data on sensitive topics in large trials. Small field studies on sensitive topics would benefit from methodological alternatives. In a study on male involvement in antenatal HIV testing in a largely illiterate population…
Siontorou, Christina G; Batzias, Fragiskos A
2014-03-01
Biosensor technology began in the 1960s to revolutionize instrumentation and measurement. Despite the market success of the glucose sensor, which revolutionized medical diagnostics, and the promise of the artificial pancreas, currently at the approval stage, the industry is reluctant to capitalize on other relevant university-produced knowledge and innovation. On the other hand, the scientific literature is extensive and persisting, while the number of university-hosted biosensor groups is growing. Considering the limited marketability of biosensors compared to the available research output, the biosensor field has been used by the present authors as a suitable paradigm for developing a combined methodological framework for "roadmapping" university research output in this discipline. This framework adopts the basic principles of the Analytic Hierarchy Process (AHP), replacing the lower level of technology alternatives with internal barriers (drawbacks, limitations, disadvantages), modeled through fault tree analysis (FTA) relying on fuzzy reasoning to account for uncertainty. The proposed methodology is validated retrospectively using ion selective field effect transistor (ISFET)-based biosensors as a case example, and then implemented prospectively for membrane biosensors, putting an emphasis on manufacturability issues. The analysis projected the trajectory of membrane platforms differently from the available market roadmaps, which, considering the vast industrial experience in tailoring and handling crystalline forms, suggest a technology path of biomimetic and synthetic materials. The results presented herein indicate that future trajectories lie with nanotechnology, especially nanofabrication and nano-bioinformatics, and focus more on the science path, that is, on controlling the natural process of self-assembly and the thermodynamics of bioelement-lipid interaction. This retains the nature-derived sensitivity of the biosensor platform, pointing out the differences between the scope of academic research and the market viewpoint.
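The AHP backbone of the framework described above reduces, at each level, to deriving a priority vector from a pairwise comparison matrix, typically via its principal eigenvector. The sketch below shows that core step for an invented 3x3 comparison of internal barriers; the fuzzy fault-tree layer of the published framework is not reproduced.

    import numpy as np

    A = np.array([[1.0, 3.0, 5.0],       # illustrative pairwise comparison matrix
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                      # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                     # normalized priority vector
    ci = (eigvals[k].real - len(A)) / (len(A) - 1)   # consistency index
    print("priorities:", np.round(w, 3), "consistency index:", round(ci, 3))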
NASA Astrophysics Data System (ADS)
Greco, R.; Sorriso-Valvo, M.
2013-09-01
Several authors, following different methodological approaches, have employed logistic regression (LR), a multivariate statistical analysis adopted to assess the spatial probability of landslides, even though its fundamental principles have remained unaltered. This study aims at assessing the influence of some of these methodological approaches on the performance of LR, through a series of sensitivity analyses developed over a test area of about 300 km2 in Calabria (southern Italy). In particular, four types of sampling (1 - the whole study area; 2 - transects running parallel to the general slope direction of the study area with a total surface of about 1/3 of the whole study area; 3 - buffers surrounding the phenomena with a 1/1 ratio between the stable and the unstable area; 4 - buffers surrounding the phenomena with a 1/2 ratio between the stable and the unstable area), two variable coding modes (1 - grouped variables; 2 - binary variables), and two types of elementary land units (1 - cell units; 2 - slope units) have been tested. The obtained results must be considered as statistically relevant in all cases (Aroc values > 70%), thus confirming the soundness of the LR analysis, which maintains high predictive capacities notwithstanding the features of the input data. As for the area under investigation, the best performing methodological choices are the following: (i) as for sampling, transects produced the best results (0 < P(y) ≤ 93.4%; Aroc = 79.5%); (ii) as for variable coding modes, binary variables (0 < P(y) ≤ 98.3%; Aroc = 80.7%) provide better performance than grouped (ordinal) variables; (iii) as for the choice of elementary land units, slope units (0 < P(y) ≤ 100%; Aroc = 84.2%) have obtained better results than cell units.
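The core LR susceptibility workflow evaluated above can be sketched in a few lines: fit a logistic regression of landslide presence/absence on predictor layers defined over the chosen land units, then score the resulting probability map with the area under the ROC curve (the Aroc used in the study). The predictors and labels below are simulated; the sampling scheme, variable coding and land-unit choice are precisely the design decisions whose influence the paper tests.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 2000
    slope = rng.uniform(0, 45, n)                 # hypothetical predictor per land unit (degrees)
    lithology = rng.integers(0, 2, n)             # binary-coded predictor
    logit = 0.08 * slope + 0.9 * lithology - 3.0
    y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)   # simulated landslide presence

    X = np.column_stack([slope, lithology])
    p = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]   # P(y)
    print("Aroc =", round(roc_auc_score(y, p), 3))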
DOE Office of Scientific and Technical Information (OSTI.GOV)
Economopoulou, M.A.; Economopoulou, A.A.; Economopoulos, A.P., E-mail: eco@otenet.gr
2013-11-15
Highlights: • A two-step (strategic and detailed optimal planning) methodology is used for solving complex MSW management problems. • A software package is outlined, which can be used for generating detailed optimal plans. • Sensitivity analysis compares alternative scenarios that address objections and/or wishes of local communities. • A case study shows the application of the above procedure in practice and demonstrates the results and benefits obtained. - Abstract: The paper describes a software system capable of formulating alternative optimal Municipal Solid Wastes (MSWs) management plans, each of which meets a set of constraints that may reflect selected objections and/or wishes of local communities. The objective function to be minimized in each plan is the sum of the annualized capital investment and annual operating cost of all transportation, treatment and final disposal operations involved, taking into consideration the possible income from the sale of products and any other financial incentives or disincentives that may exist. For each plan formulated, the system generates several reports that define the plan, analyze its cost elements and yield an indicative profile of selected types of installations, as well as data files that facilitate the geographic representation of the optimal solution in maps through the use of GIS. A number of these reports compare the technical and economic data from all scenarios considered at the study area, municipality and installation level constituting in effect sensitivity analysis. The generation of alternative plans offers local authorities the opportunity of choice and the results of the sensitivity analysis allow them to choose wisely and with consensus. The paper presents also an application of this software system in the capital Region of Attica in Greece, for the purpose of developing an optimal waste transportation system in line with its approved waste management plan. The formulated plan was able to: (a) serve 113 Municipalities and Communities that generate nearly 2 million t/y of comingled MSW with distinctly different waste collection patterns, (b) take into consideration several existing waste transfer stations (WTS) and optimize their use within the overall plan, (c) select the most appropriate sites among the potentially suitable (new and in use) ones, (d) generate the optimal profile of each WTS proposed, and (e) perform sensitivity analysis so as to define the impact of selected sets of constraints (limitations in the availability of sites and in the capacity of their installations) on the design and cost of the ensuing optimal waste transfer system. The results show that optimal planning offers significant economic savings to municipalities, while reducing at the same time the present levels of traffic, fuel consumptions and air emissions in the congested Athens basin.
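At its core, the optimal-planning step described above is a cost-minimizing allocation problem. The deliberately tiny transportation LP below (two municipalities, two candidate transfer stations, invented costs, tonnages and capacities) shows the shape of that formulation; the real system additionally handles siting decisions, station profiles, income terms and the scenario-based sensitivity analysis.

    from scipy.optimize import linprog

    # decision variables x = [x11, x12, x21, x22]: t/y shipped from municipality i to station j
    cost = [12, 20, 18, 10]                  # cost per tonne (transport + handling), illustrative
    A_eq = [[1, 1, 0, 0],                    # municipality 1 must ship all of its waste
            [0, 0, 1, 1]]                    # municipality 2 must ship all of its waste
    b_eq = [50_000, 70_000]
    A_ub = [[1, 0, 1, 0],                    # station 1 capacity
            [0, 1, 0, 1]]                    # station 2 capacity
    b_ub = [80_000, 60_000]
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 4)
    print("minimum annual cost:", res.fun, "allocation:", res.x)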
SU-F-T-294: The Analysis of Gamma Criteria for Delta4 Dosimetry Using Statistical Process Control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cho, S; Ahn, S; Kim, J
Purpose: To evaluate the sensitivity of gamma criteria for patient-specific volumetric modulated arc therapy (VMAT) quality assurance with the Delta4 dosimetry system using the statistical process control (SPC) methodology. Methods: The authors selected 20 patient-specific VMAT QA cases which had been verified with MapCHECK and ArcCHECK with gamma pass rates better than 97%. The QA data were collected with the Delta4 Phantom+ on an Elekta Agility six-megavolt beam without using an angle incrementer. The gamma index (GI) was calculated in 2D planes with deviations normalized to the local dose (local gamma). The sensitivity of the GI methodology using criteria of 3%/3mm, 3%/2mm and 2%/3mm was analyzed using process acceptability indices. We used the local confidence (LC) level and the upper control limit (UCL) and lower control limit (LCL) of the I-MR chart for the process capability index (Cp) and the process acceptability index (Cpk). Results: The lower local confidence levels for 3%/3mm, 3%/2mm and 2%/3mm were 92.0%, 83.6% and 78.8%, respectively. All of the Cp and Cpk values calculated with the LC level were under 1.0 in this study. The calculated LCLs of the I-MR charts were 89.5%, 79.0% and 70.5%, respectively, and the corresponding capability indices were higher than 1.0, which indicates good QA quality. For the generally used lower limit of 90%, we obtained a Cp value over 1.3 for the 3%/3mm gamma criterion and values lower than 1.0 for the remaining criteria. Conclusion: We applied the SPC methodology to evaluate the sensitivity of gamma criteria and could establish the lower control limits of VMAT QA for Delta4 dosimetry; the results show that Delta4 Phantom+ dosimetry is more affected by position error and that the I-MR chart derived values are more suitable for establishing lower limits. Acknowledgement: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (No. 2015R1D1A1A01060463)
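The SPC quantities referred to above can be reproduced for a toy series of gamma pass rates: individuals/moving-range (I-MR) control limits and a one-sided capability index against a lower action limit. Conventions for capability indices with a single specification limit vary between papers, so the sketch follows a common textbook form, and the pass rates are simulated rather than taken from the study.

    import numpy as np

    rates = np.array([98.5, 97.8, 99.1, 96.9, 98.2, 97.5, 99.0, 96.4, 98.8, 97.1])  # simulated pass rates (%)
    xbar = rates.mean()
    mr_bar = np.abs(np.diff(rates)).mean()       # average moving range of consecutive QAs
    sigma_hat = mr_bar / 1.128                   # d2 constant for a moving range of size 2
    ucl, lcl = xbar + 2.66 * mr_bar, xbar - 2.66 * mr_bar   # I-chart control limits

    LSL = 90.0                                   # lower action limit on the pass rate (%)
    cpl = (xbar - LSL) / (3 * sigma_hat)         # one-sided capability (Cpk analogue)
    print(f"I chart: LCL = {lcl:.1f}%, UCL = {ucl:.1f}%, Cpl = {cpl:.2f}")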
Food allergy and risk assessment: Current status and future directions
NASA Astrophysics Data System (ADS)
Remington, Benjamin C.
2017-09-01
Risk analysis is a three part, interactive process that consists of a scientific risk assessment, a risk management strategy and an exchange of information through risk communication. Quantitative risk assessment methodologies are now available and widely used for assessing risks regarding the unintentional consumption of major, regulated allergens but new or modified proteins can also pose a risk of de-novo sensitization. The risks due to de-novo sensitization to new food allergies are harder to quantify. There is a need for a systematic, comprehensive battery of tests and assessment strategy to identify and characterise de-novo sensitization to new proteins and the risks associated with them. A risk assessment must be attuned to answer the risk management questions and needs. Consequently, the hazard and risk assessment methods applied and the desired information are determined by the requested outcome for risk management purposes and decisions to be made. The COST Action network (ImpARAS, www.imparas.eu) has recently started to discuss these risk management criteria from first principles and will continue with the broader subject of improving strategies for allergen risk assessment throughout 2016-2018/9.
Chaves, Gabriela Costa
2007-01-01
Objective: This study aims to propose a framework for measuring the degree of public health-sensitivity of patent legislation reformed after the World Trade Organization’s TRIPS (Trade-Related Aspects of Intellectual Property Rights) Agreement entered into force. Methods: The methodology for establishing and testing the proposed framework involved three main steps: (1) a literature review on TRIPS flexibilities related to the protection of public health and provisions considered “TRIPS-plus”; (2) content validation through consensus techniques (an adaptation of Delphi method); and (3) an analysis of patent legislation from nineteen Latin American and Caribbean countries. Findings: The results show that the framework detected relevant differences in countries’ patent legislation, allowing for country comparisons. Conclusion: The framework’s potential usefulness in monitoring patent legislation changes arises from its clear parameters for measuring patent legislation’s degree of health sensitivity. Nevertheless, it can be improved by including indicators related to government and organized society initiatives that minimize free-trade agreements’ negative effects on access to medicines. PMID:17242758
Evaluation of Aspergillus PCR protocols for testing serum specimens.
White, P Lewis; Mengoli, Carlo; Bretagne, Stéphane; Cuenca-Estrella, Manuel; Finnstrom, Niklas; Klingspor, Lena; Melchers, Willem J G; McCulloch, Elaine; Barnes, Rosemary A; Donnelly, J Peter; Loeffler, Juergen
2011-11-01
A panel of human serum samples spiked with various amounts of Aspergillus fumigatus genomic DNA was distributed to 23 centers within the European Aspergillus PCR Initiative to determine analytical performance of PCR. Information regarding specific methodological components and PCR performance was requested. The information provided was made anonymous, and meta-regression analysis was performed to determine any procedural factors that significantly altered PCR performance. Ninety-seven percent of protocols were able to detect a threshold of 10 genomes/ml on at least one occasion, with 83% of protocols reproducibly detecting this concentration. Sensitivity and specificity were 86.1% and 93.6%, respectively. Positive associations between sensitivity and the use of larger sample volumes, an internal control PCR, and PCR targeting the internal transcribed spacer (ITS) region were shown. Negative associations between sensitivity and the use of larger elution volumes (≥100 μl) and PCR targeting the mitochondrial genes were demonstrated. Most Aspergillus PCR protocols used to test serum generate satisfactory analytical performance. Testing serum requires less standardization, and the specific recommendations shown in this article will only improve performance.
Multi-scale landslide hazard assessment: Advances in global and regional methodologies
NASA Astrophysics Data System (ADS)
Kirschbaum, Dalia; Peters-Lidard, Christa; Adler, Robert; Hong, Yang
2010-05-01
The increasing availability of remotely sensed surface data and precipitation provides a unique opportunity to explore how smaller-scale landslide susceptibility and hazard assessment methodologies may be applicable at larger spatial scales. This research first considers an emerging satellite-based global algorithm framework, which evaluates how the landslide susceptibility and satellite derived rainfall estimates can forecast potential landslide conditions. An analysis of this algorithm using a newly developed global landslide inventory catalog suggests that forecasting errors are geographically variable due to improper weighting of surface observables, resolution of the current susceptibility map, and limitations in the availability of landslide inventory data. These methodological and data limitation issues can be more thoroughly assessed at the regional level, where available higher resolution landslide inventories can be applied to empirically derive relationships between surface variables and landslide occurrence. The regional empirical model shows improvement over the global framework in advancing near real-time landslide forecasting efforts; however, there are many uncertainties and assumptions surrounding such a methodology that decreases the functionality and utility of this system. This research seeks to improve upon this initial concept by exploring the potential opportunities and methodological structure needed to advance larger-scale landslide hazard forecasting and make it more of an operational reality. Sensitivity analysis of the surface and rainfall parameters in the preliminary algorithm indicates that surface data resolution and the interdependency of variables must be more appropriately quantified at local and regional scales. Additionally, integrating available surface parameters must be approached in a more theoretical, physically-based manner to better represent the physical processes underlying slope instability and landslide initiation. Several rainfall infiltration and hydrological flow models have been developed to model slope instability at small spatial scales. This research investigates the potential of applying a more quantitative hydrological model to larger spatial scales, utilizing satellite and surface data inputs that are obtainable over different geographic regions. Due to the significant role that data and methodological uncertainties play in the effectiveness of landslide hazard assessment outputs, the methodology and data inputs are considered within an ensemble uncertainty framework in order to better resolve the contribution and limitations of model inputs and to more effectively communicate the model skill for improved landslide hazard assessment.
Probabilistic Micromechanics and Macromechanics for Ceramic Matrix Composites
NASA Technical Reports Server (NTRS)
Murthy, Pappu L. N.; Mital, Subodh K.; Shah, Ashwin R.
1997-01-01
The properties of ceramic matrix composites (CMCs) are known to display a considerable amount of scatter due to variations in fiber/matrix properties, interphase properties, interphase bonding, amount of matrix voids, and many geometry- or fabrication-related parameters, such as ply thickness and ply orientation. This paper summarizes preliminary studies in which formal probabilistic descriptions of the material-behavior- and fabrication-related parameters were incorporated into micromechanics and macromechanics for CMCs. In this process two existing methodologies, namely CMC micromechanics and macromechanics analysis and a fast probability integration (FPI) technique, are synergistically coupled to obtain the probabilistic composite behavior or response. Preliminary results in the form of cumulative probability distributions and information on the probability sensitivities of the response to primitive variables for a unidirectional silicon carbide/reaction-bonded silicon nitride (SiC/RBSN) CMC are presented. The cumulative distribution functions are computed for composite moduli, thermal expansion coefficients, thermal conductivities, and longitudinal tensile strength at room temperature. The variations in the constituent properties that directly affect these composite properties are accounted for via assumed probabilistic distributions. Collectively, the results show that the present technique provides valuable information about the composite properties and sensitivity factors, which is useful to design or test engineers. Furthermore, the present methodology is computationally more efficient than a standard Monte Carlo simulation technique; and the agreement between the two solutions is excellent, as shown via select examples.
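To make the contrast with Monte Carlo simulation concrete, the sketch below propagates assumed scatter in constituent properties through the longitudinal rule of mixtures and reads off percentiles of the resulting modulus distribution. The distribution parameters are illustrative rather than the SiC/RBSN values used in the paper, and the paper's fast probability integration approach obtains comparable cumulative distributions at far lower computational cost.

    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000
    Ef = rng.normal(400e9, 20e9, n)      # fiber modulus, Pa (illustrative scatter)
    Em = rng.normal(110e9, 10e9, n)      # matrix modulus, Pa
    Vf = rng.normal(0.30, 0.02, n)       # fiber volume fraction
    E11 = Vf * Ef + (1 - Vf) * Em        # longitudinal modulus, rule of mixtures

    for p in (1, 50, 99):
        print(f"{p}th percentile of E11: {np.percentile(E11, p) / 1e9:.1f} GPa")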