Sample records for detailed sensitivity analysis

  1. Hyperspectral data analysis procedures with reduced sensitivity to noise

    NASA Technical Reports Server (NTRS)

    Landgrebe, David A.

    1993-01-01

    Multispectral sensor systems have steadily improved over the years in their ability to deliver increased spectral detail. With the advent of hyperspectral sensors, including imaging spectrometers, this technology is taking a large leap forward, promising delivery of much more detailed information. However, this direction of development has drawn even more attention to noise and other deleterious effects in the data, because reducing the fundamental limitations that spectral detail places on information collection raises the limitations presented by noise to even greater importance. Much current effort in remote sensing research is thus devoted to adjusting the data to mitigate the effects of noise and other deleterious effects. A parallel approach to the problem is to look for analysis approaches and procedures that have reduced sensitivity to such effects. We discuss some of the fundamental principles that define analysis algorithm characteristics providing such reduced sensitivity. One such analysis procedure, including an example analysis of a data set, is described to illustrate this effect.

  2. Probabilistic sensitivity analysis incorporating the bootstrap: an example comparing treatments for the eradication of Helicobacter pylori.

    PubMed

    Pasta, D J; Taylor, J L; Henning, J M

    1999-01-01

    Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, the results can be much more powerful and useful than deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.

  3. Preparation for implementation of the mechanistic-empirical pavement design guide in Michigan : part 2 - evaluation of rehabilitation fixes.

    DOT National Transportation Integrated Search

    2013-08-01

    The overall goal of Global Sensitivity Analysis (GSA) is to determine sensitivity of pavement performance prediction models to the variation in the design input values. The main difference between GSA and detailed sensitivity analyses is the way the ...

  4. SCALE Continuous-Energy Eigenvalue Sensitivity Coefficient Calculations

    DOE PAGES

    Perfetti, Christopher M.; Rearden, Bradley T.; Martin, William R.

    2016-02-25

    Sensitivity coefficients describe the fractional change in a system response that is induced by changes to system parameters and nuclear data. The Tools for Sensitivity and UNcertainty Analysis Methodology Implementation (TSUNAMI) code within the SCALE code system makes use of eigenvalue sensitivity coefficients for an extensive number of criticality safety applications, including quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different critical systems, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved fidelity and the desire to extend TSUNAMI analysis to advanced applications has motivated the development of a methodology for calculating sensitivity coefficients in continuous-energy (CE) Monte Carlo applications. The Contributon-Linked eigenvalue sensitivity/Uncertainty estimation via Tracklength importance CHaracterization (CLUTCH) and Iterated Fission Probability (IFP) eigenvalue sensitivity methods were recently implemented in the CE-KENO framework of the SCALE code system to enable TSUNAMI-3D to perform eigenvalue sensitivity calculations using continuous-energy Monte Carlo methods. This work provides a detailed description of the theory behind the CLUTCH method and describes its implementation in detail. This work explores the improvements in eigenvalue sensitivity coefficient accuracy that can be gained through the use of continuous-energy sensitivity methods and also compares several sensitivity methods in terms of computational efficiency and memory requirements.
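    The opening definition can be made concrete with a small finite-difference sketch: the sensitivity coefficient S = (p/R)(dR/dp) of a toy dominant-eigenvalue "response" to one parameter. The matrix and parameter below are invented stand-ins, not a transport calculation:

```python
import numpy as np

def k_eff(sigma):
    """Toy 'system response': dominant eigenvalue of a small matrix whose
    entries depend on a parameter sigma (a stand-in for a cross section)."""
    A = np.array([[2.0 + sigma, 0.5],
                  [0.4, 1.0 + 0.2 * sigma]])
    return np.max(np.linalg.eigvals(A).real)

def sensitivity_coefficient(f, p, rel_step=1e-6):
    """Fractional change in response per fractional change in parameter:
    S = (p / R) * dR/dp, estimated by central differences."""
    h = p * rel_step
    dRdp = (f(p + h) - f(p - h)) / (2.0 * h)
    return p * dRdp / f(p)

S = sensitivity_coefficient(k_eff, 1.0)
```

Adjoint and CLUTCH/IFP methods estimate the same quantity without perturbing and re-solving the system for every parameter.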

  5. Remote sensing of selective logging in Amazonia: Assessing limitations based on detailed field observations, Landsat ETM+, and textural analysis.

    Treesearch

    Gregory P. Asner; Michael Keller; Rodrigo Pereira; Johan C. Zweede

    2002-01-01

    We combined a detailed field study of forest canopy damage with calibrated Landsat 7 Enhanced Thematic Mapper Plus (ETM+) reflectance data and texture analysis to assess the sensitivity of basic broadband optical remote sensing to selective logging in Amazonia. Our field study encompassed measurements of ground damage and canopy gap fractions along a chronosequence of...

  6. Saugus River and Tributaries Flood Damage Reduction Study: Lynn, Malden, Revere and Saugus, Massachusetts. Section 1. Feasibility Report.

    DTIC Science & Technology

    1989-12-01

    57 Table 5 Sensitivity Analysis - Point of Pines LPP 61 Table 6 Plan Comparison 64 Table 7 NED Plan Project Costs 96 Table 8 Estimated Operation...Costs 99 Table 13 Selected Plan/Estimated Annual Benefits 101 Table 14 Comparative Impacts - NED Regional Floodgate Plan 102 Table 15 Economic Analysis ...Includes detailed descriptions, plans and profiles and design considerations of the selected plan; coastal analysis of the shorefront; detailed project

  7. Simulations of the HDO and H2O-18 atmospheric cycles using the NASA GISS general circulation model - Sensitivity experiments for present-day conditions

    NASA Technical Reports Server (NTRS)

    Jouzel, Jean; Koster, R. D.; Suozzo, R. J.; Russell, G. L.; White, J. W. C.

    1991-01-01

    Incorporating the full geochemical cycles of stable water isotopes (HDO and H2O-18) into an atmospheric general circulation model (GCM) allows an improved understanding of global delta-D and delta-O-18 distributions and might even allow an analysis of the GCM's hydrological cycle. A detailed sensitivity analysis using the NASA/Goddard Institute for Space Studies (GISS) model II GCM is presented that examines the nature of isotope modeling. The tests indicate that delta-D and delta-O-18 values in nonpolar regions are not strongly sensitive to details in the model precipitation parameterizations. This result, while implying that isotope modeling has limited potential use in the calibration of GCM convection schemes, also suggests that certain necessarily arbitrary aspects of these schemes are adequate for many isotope studies. Deuterium excess, a second-order variable, does show some sensitivity to precipitation parameterization and thus may be more useful for GCM calibration.

  8. Probing 6D operators at future e - e + colliders

    NASA Astrophysics Data System (ADS)

    Chiu, Wen Han; Leung, Sze Ching; Liu, Tao; Lyu, Kun-Feng; Wang, Lian-Tao

    2018-05-01

    We explore the sensitivities at future e - e + colliders to probe a set of six-dimensional operators which can modify the SM predictions for Higgs physics and electroweak precision measurements. We consider the case in which the operators are turned on simultaneously. Such an analysis yields a "conservative" interpretation of the collider sensitivities, complementary to the "optimistic" scenario in which the operators are probed individually. After a detailed analysis at CEPC in both the "conservative" and "optimistic" scenarios, we also consider the sensitivities for FCC-ee and ILC. As an illustration of the potential for constraining new physics models, we apply the sensitivity analysis to two benchmarks: the holographic composite Higgs model and the littlest Higgs model.

  9. Space shuttle navigation analysis

    NASA Technical Reports Server (NTRS)

    Jones, H. L.; Luders, G.; Matchett, G. A.; Sciabarrasi, J. E.

    1976-01-01

    A detailed analysis of space shuttle navigation for each of the major mission phases is presented. A covariance analysis program for prelaunch IMU calibration and alignment for the orbital flight tests (OFT) is described, and a partial error budget is presented. The ascent, orbital operations and deorbit maneuver study considered GPS-aided inertial navigation in the Phase III GPS (1984+) time frame. The entry and landing study evaluated navigation performance for the OFT baseline system. Detailed error budgets and sensitivity analyses are provided for both the ascent and entry studies.

  10. Automatic network coupling analysis for dynamical systems based on detailed kinetic models.

    PubMed

    Lebiedz, Dirk; Kammerer, Julia; Brandt-Pollmann, Ulrich

    2005-10-01

    We introduce a numerical complexity reduction method for the automatic identification and analysis of dynamic network decompositions in (bio)chemical kinetics based on error-controlled computation of a minimal model dimension represented by the number of (locally) active dynamical modes. Our algorithm exploits a generalized sensitivity analysis along state trajectories and subsequent singular value decomposition of sensitivity matrices for the identification of these dominant dynamical modes. It allows for a dynamic coupling analysis of (bio)chemical species in kinetic models that can be exploited for the piecewise computation of a minimal model on small time intervals and offers valuable functional insight into highly nonlinear reaction mechanisms and network dynamics. We present results for the identification of network decompositions in a simple oscillatory chemical reaction, time scale separation based model reduction in a Michaelis-Menten enzyme system and network decomposition of a detailed model for the oscillatory peroxidase-oxidase enzyme system.
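    A minimal sketch of the core idea, assuming a toy three-species kinetic model with two artificially fast modes: compute a local sensitivity (Jacobian) matrix numerically, take its singular value decomposition, and count the singular values above an error-controlled cutoff as the locally active dynamical modes:

```python
import numpy as np

# Toy kinetic right-hand side: one slow mode and two fast relaxation modes
# (an illustrative stand-in for the paper's (bio)chemical networks).
def rhs(y):
    y1, y2, y3 = y
    return np.array([-y1,
                     100.0 * (y1 - y2),    # fast mode
                     100.0 * (y2 - y3)])   # fast mode

def jacobian(y, eps=1e-7):
    """Numerical Jacobian, used here as a local sensitivity matrix."""
    n = len(y)
    J = np.empty((n, n))
    f0 = rhs(y)
    for j in range(n):
        yp = y.copy()
        yp[j] += eps
        J[:, j] = (rhs(yp) - f0) / eps
    return J

y = np.array([1.0, 1.0, 1.0])
s = np.linalg.svd(jacobian(y), compute_uv=False)
# Number of 'active' modes = singular values above an error-controlled cutoff.
active = int(np.sum(s > 1e-2 * s[0]))
```

The gap between the retained and discarded singular values is what licenses the piecewise minimal model on each time interval.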

  11. Improved numerical solutions for chaotic-cancer-model

    NASA Astrophysics Data System (ADS)

    Yasir, Muhammad; Ahmad, Salman; Ahmed, Faizan; Aqeel, Muhammad; Akbar, Muhammad Zubair

    2017-01-01

    In the biological sciences, the dynamical system of the cancer model is well known for its sensitivity and chaoticity. The present work provides a detailed computational study of the cancer model by counterbalancing its sensitive dependence on initial conditions and parameter values. The chaotic cancer model is discretized into a system of nonlinear equations that are solved using the well-known Successive-Over-Relaxation (SOR) method with proven convergence. This technique makes it possible to solve large systems and provides a more accurate approximation, which is illustrated through tables, time-history maps and phase portraits with detailed analysis.
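    A minimal, generic SOR sketch (the solver family named above, shown here on a small diagonally dominant linear system rather than the paper's discretized cancer model):

```python
import numpy as np

def sor(A, b, omega=1.25, tol=1e-10, max_iter=10_000):
    """Successive-Over-Relaxation for Ax = b.  A should be, e.g., strictly
    diagonally dominant or SPD with 0 < omega < 2 so the iteration converges."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # x[:i] already holds updated values; x[i+1:] holds old ones.
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([2.0, 4.0, 10.0])
x = sor(A, b)
```

For a nonlinear discretization as in the paper, the same sweep is applied inside an outer linearization loop.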

  12. Adjoint-Based Sensitivity and Uncertainty Analysis for Density and Composition: A User’s Guide

    DOE PAGES

    Favorite, Jeffrey A.; Perko, Zoltan; Kiedrowski, Brian C.; ...

    2017-03-01

    The ability to perform sensitivity analyses using adjoint-based first-order sensitivity theory has existed for decades. This paper provides guidance on how adjoint sensitivity methods can be used to predict the effect of material density and composition uncertainties in critical experiments, including when these uncertain parameters are correlated or constrained. Two widely used Monte Carlo codes, MCNP6 (Ref. 2) and SCALE 6.2 (Ref. 3), are both capable of computing isotopic density sensitivities in continuous energy and angle. Additionally, Perkó et al. have shown how individual isotope density sensitivities, easily computed using adjoint methods, can be combined to compute constrained first-order sensitivities that may be used in the uncertainty analysis. This paper provides details on how the codes are used to compute first-order sensitivities and how the sensitivities are used in an uncertainty analysis. Constrained first-order sensitivities are computed in a simple example problem.
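    One first-order identity behind combining individual isotope density sensitivities can be sketched numerically: by the chain rule, when all isotope densities scale together (a uniform density change at fixed composition), the relative sensitivity to total density is the sum of the individual relative sensitivities. The response function below is invented for illustration:

```python
import numpy as np

# Toy response depending on two isotope densities (illustrative only).
def response(n1, n2):
    return np.log(1.0 + 2.0 * n1 + 0.5 * n2)

n1, n2 = 0.8, 0.2

def rel_sens(f, args, i, h=1e-6):
    """First-order relative sensitivity S_i = (n_i / R) dR/dn_i."""
    a = list(args)
    a[i] += h
    up = f(*a)
    a[i] -= 2 * h
    dn = f(*a)
    return args[i] * (up - dn) / (2 * h) / f(*args)

s1 = rel_sens(response, (n1, n2), 0)
s2 = rel_sens(response, (n1, n2), 1)

# Uniform density scaling n_i -> n_i * (1 + t): the chain rule gives
# dR/dt / R at t = 0 equal to s1 + s2.
t = 1e-6
s_total = (response(n1 * (1 + t), n2 * (1 + t)) -
           response(n1 * (1 - t), n2 * (1 - t))) / (2 * t) / response(n1, n2)
```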

  13. Detailed Functional and Proteomic Characterization of Fludarabine Resistance in Mantle Cell Lymphoma Cells

    PubMed Central

    Lorkova, Lucie; Scigelova, Michaela; Arrey, Tabiwang Ndipanquang; Vit, Ondrej; Pospisilova, Jana; Doktorova, Eliska; Klanova, Magdalena; Alam, Mahmudul; Vockova, Petra; Maswabi, Bokang

    2015-01-01

    Mantle cell lymphoma (MCL) is a chronically relapsing aggressive type of B-cell non-Hodgkin lymphoma considered incurable by currently used treatment approaches. Fludarabine is a purine analog clinically still widely used in the therapy of relapsed MCL. Molecular mechanisms of fludarabine resistance have not, however, been studied in the setting of MCL so far. We therefore derived fludarabine-resistant MCL cells (Mino/FR) and performed their detailed functional and proteomic characterization compared to the original fludarabine-sensitive cells (Mino). We demonstrated that Mino/FR were highly cross-resistant to other antinucleosides (cytarabine, cladribine, gemcitabine) and to the Bruton tyrosine kinase (BTK) inhibitor ibrutinib. Sensitivity to other types of anti-lymphoma agents was altered only mildly (methotrexate, doxorubicin, bortezomib) or remained unaffected (cisplatin, bendamustine). The detailed proteomic analysis of Mino/FR compared to Mino cells unveiled over 300 differentially expressed proteins. Mino/FR were characterized by the marked downregulation of deoxycytidine kinase (dCK) and BTK (thus explaining the observed cross-resistance to antinucleosides and ibrutinib), but also by the upregulation of several enzymes of de novo nucleotide synthesis, as well as the upregulation of numerous proteins of DNA repair and replication. The significant upregulation of the key antiapoptotic protein Bcl-2 in Mino/FR cells was associated with the markedly increased sensitivity of the fludarabine-resistant MCL cells to the Bcl-2-specific inhibitor ABT199 compared to fludarabine-sensitive cells. Our data thus demonstrate that a detailed molecular analysis of drug-resistant tumor cells can indeed open a way to personalized therapy of resistant malignancies. PMID:26285204

  14. Skeletal mechanism generation for surrogate fuels using directed relation graph with error propagation and sensitivity analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niemeyer, Kyle E.; Sung, Chih-Jen; Raju, Mandhapati P.

    2010-09-15

    A novel implementation for the skeletal reduction of large detailed reaction mechanisms using the directed relation graph with error propagation and sensitivity analysis (DRGEPSA) is developed and presented with examples for three hydrocarbon components, n-heptane, iso-octane, and n-decane, relevant to surrogate fuel development. DRGEPSA integrates two previously developed methods, directed relation graph-aided sensitivity analysis (DRGASA) and directed relation graph with error propagation (DRGEP), by first applying DRGEP to efficiently remove many unimportant species prior to a sensitivity analysis that further removes unimportant species, producing an optimally small skeletal mechanism for a given error limit. It is illustrated that the combination of the DRGEP and DRGASA methods allows the DRGEPSA approach to overcome the weaknesses of each, specifically that DRGEP cannot identify all unimportant species and that DRGASA shields unimportant species from removal. Skeletal mechanisms for n-heptane and iso-octane generated using the DRGEP, DRGASA, and DRGEPSA methods are presented and compared to illustrate the improvement of DRGEPSA. From a detailed reaction mechanism for n-alkanes covering n-octane to n-hexadecane with 2115 species and 8157 reactions, two skeletal mechanisms for n-decane generated using DRGEPSA, one covering a comprehensive range of temperature, pressure, and equivalence ratio conditions for autoignition and the other limited to high temperatures, are presented and validated. The comprehensive skeletal mechanism consists of 202 species and 846 reactions and the high-temperature skeletal mechanism consists of 51 species and 256 reactions. Both mechanisms are further demonstrated to well reproduce the results of the detailed mechanism in perfectly-stirred reactor and laminar flame simulations over a wide range of conditions. The comprehensive and high-temperature n-decane skeletal mechanisms are included as supplementary material with this article.
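    The error-propagation idea of DRGEP can be sketched compactly: species form a directed graph whose edge weights are interaction coefficients, an overall importance is the maximum over paths of the product of edge weights, and species falling below a threshold are removed. The graph and coefficients below are invented, not taken from the n-decane mechanism:

```python
import heapq

# Hypothetical species graph; edge weights are direct interaction coefficients.
edges = {
    "fuel": {"O2": 0.9, "rad1": 0.6},
    "O2":   {"rad1": 0.8},
    "rad1": {"rad2": 0.05, "prod": 0.7},
    "rad2": {"prod": 0.01},
    "prod": {},
}

def drgep_importance(target):
    """Max-product path coefficient from the target to every species,
    computed with a Dijkstra-like best-first search."""
    R = {s: 0.0 for s in edges}
    R[target] = 1.0
    heap = [(-1.0, target)]
    while heap:
        negr, s = heapq.heappop(heap)
        r = -negr
        if r < R[s]:
            continue            # stale heap entry
        for t, c in edges[s].items():
            if r * c > R[t]:
                R[t] = r * c
                heapq.heappush(heap, (-R[t], t))
    return R

R = drgep_importance("fuel")
skeletal = {s for s, r in R.items() if r >= 0.1}   # threshold epsilon = 0.1
```

DRGEPSA then runs a sensitivity analysis only on the species that survive this graph pass.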

  15. Basic research for the geodynamics program

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The mathematical models of space very long base interferometry (VLBI) observables suitable for least squares covariance analysis were derived and estimatability problems inherent in the space VLBI system were explored, including a detailed rank defect analysis and sensitivity analysis. An important aim is to carry out a comparative analysis of the mathematical models of the ground-based VLBI and space VLBI observables in order to describe the background in detail. Computer programs were developed in order to check the relations, assess errors, and analyze sensitivity. In order to investigate the estimatability of different geodetic and geodynamic parameters from the space VLBI observables, the mathematical models for time delay and time delay rate observables of space VLBI were analytically derived along with the partial derivatives with respect to the parameters. Rank defect analysis was carried out both by analytical and numerical testing of linear dependencies between the columns of the normal matrix thus formed. Definite conclusions were formed about the rank defects in the system.
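    The numerical side of such a rank-defect test can be sketched in a few lines: build a design matrix with a deliberate linear dependence between parameter columns, form the normal matrix, and compare its numerical rank with its dimension. The matrix here is synthetic, not a VLBI observation model:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 4))   # partial-derivative (design) matrix
A[:, 2] = 2.0 * A[:, 0]            # deliberately dependent parameter columns

N = A.T @ A                        # normal matrix of the least-squares problem
rank = np.linalg.matrix_rank(N)
defect = N.shape[0] - rank         # number of unestimable parameter combinations
```

A nonzero defect flags linear combinations of parameters that the observables cannot separate, which is exactly what the analytical dependency tests confirm.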

  16. Spatial variability in sensitivity of reference crop ET to accuracy of climate data in the Texas High Plains

    USDA-ARS?s Scientific Manuscript database

    A detailed sensitivity analysis was conducted to determine the relative effects of measurement errors in climate data input parameters on the accuracy of calculated reference crop evapotranspiration (ET) using the ASCE-EWRI Standardized Reference ET Equation. Data for the period of 1995 to 2008, fro...
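    A one-at-a-time version of such an analysis can be sketched with the simpler Hargreaves equation standing in for the ASCE-EWRI Standardized Reference ET Equation; the baseline values and the assumed ±1 °C measurement errors are illustrative:

```python
import math

def hargreaves_et0(tmax, tmin, ra=12.0):
    """Hargreaves reference ET (mm/day): 0.0023 * Ra * (Tmean + 17.8) *
    sqrt(Tmax - Tmin).  Ra is extraterrestrial radiation in mm/day equivalent;
    used here as a simple stand-in for the ASCE-EWRI equation."""
    tmean = (tmax + tmin) / 2.0
    return 0.0023 * ra * (tmean + 17.8) * math.sqrt(tmax - tmin)

base = dict(tmax=32.0, tmin=18.0)

# One-at-a-time sensitivity: perturb each climate input by an assumed
# measurement error and record the relative change in computed ET.
sens = {}
for var, err in [("tmax", 1.0), ("tmin", 1.0)]:
    hi = dict(base); hi[var] += err
    lo = dict(base); lo[var] -= err
    sens[var] = (hargreaves_et0(**hi) - hargreaves_et0(**lo)) / (
        2.0 * hargreaves_et0(**base))
```

Repeating this at many stations is what exposes the spatial variability in sensitivity that the study reports.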

  17. Grid sensitivity for aerodynamic optimization and flow analysis

    NASA Technical Reports Server (NTRS)

    Sadrehaghighi, I.; Tiwari, S. N.

    1993-01-01

    After reviewing relevant literature, it is apparent that one aspect of aerodynamic sensitivity analysis, namely grid sensitivity, has not been investigated extensively. The grid sensitivity algorithms in most of these studies are based on structural design models. Such models, although sufficient for preliminary or conceptual design, are not acceptable for detailed design analysis. Careless grid sensitivity evaluations would introduce gradient errors within the sensitivity module, thereby infecting the overall optimization process. Development of an efficient and reliable grid sensitivity module with special emphasis on aerodynamic applications appears essential. The organization of this study is as follows. The physical and geometric representations of a typical model are derived in chapter 2. The grid generation algorithm and boundary grid distribution are developed in chapter 3. Chapter 4 discusses the theoretical formulation and aerodynamic sensitivity equation. The method of solution is provided in chapter 5. The results are presented and discussed in chapter 6. Finally, some concluding remarks are provided in chapter 7.

  18. SEP thrust subsystem performance sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Atkins, K. L.; Sauer, C. G., Jr.; Kerrisk, D. J.

    1973-01-01

    This is a two-part report on solar electric propulsion (SEP) performance sensitivity analysis. The first part describes the preliminary analysis of the SEP thrust system performance for an Encke rendezvous mission. A detailed description of thrust subsystem hardware tolerances on mission performance is included together with nominal spacecraft parameters based on these tolerances. The second part describes the method of analysis and graphical techniques used in generating the data for Part 1. Included is a description of both the trajectory program used and the additional software developed for this analysis. Part 2 also includes a comprehensive description of the use of the graphical techniques employed in this performance analysis.

  19. Sensitivity analysis of the space shuttle to ascent wind profiles

    NASA Technical Reports Server (NTRS)

    Smith, O. E.; Austin, L. D., Jr.

    1982-01-01

    A parametric sensitivity analysis of the space shuttle ascent flight to the wind profile is presented. Engineering systems parameters are obtained by flight simulations using wind profile models and samples of detailed (Jimsphere) wind profile measurements. The wind models used are the synthetic vector wind model, with and without the design gust, and a model of the vector wind change with respect to time. From these comparison analyses an insight is gained on the contribution of winds to ascent subsystems flight parameters.

  20. Reduction and Uncertainty Analysis of Chemical Mechanisms Based on Local and Global Sensitivities

    NASA Astrophysics Data System (ADS)

    Esposito, Gaetano

    Numerical simulations of critical reacting flow phenomena in hypersonic propulsion devices require accurate representation of finite-rate chemical kinetics. The chemical kinetic models available for hydrocarbon fuel combustion are rather large, involving hundreds of species and thousands of reactions. As a consequence, they cannot be used in multi-dimensional computational fluid dynamic calculations in the foreseeable future due to the prohibitive computational cost. In addition to the computational difficulties, it is also known that some fundamental chemical kinetic parameters of detailed models have a significant level of uncertainty due to limited experimental data available and to poor understanding of interactions among kinetic parameters. In the present investigation, local and global sensitivity analysis techniques are employed to develop a systematic approach to reducing and analyzing detailed chemical kinetic models. Unlike previous studies in which skeletal model reduction was based on the separate analysis of simple cases, in this work a novel strategy based on Principal Component Analysis of local sensitivity values is presented. This new approach is capable of simultaneously taking into account all the relevant canonical combustion configurations over different composition, temperature and pressure conditions. Moreover, the procedure developed in this work represents the first documented inclusion of non-premixed extinction phenomena, which is of great relevance in hypersonic combustors, in an automated reduction algorithm. The application of the skeletal reduction to a detailed kinetic model consisting of 111 species and 784 reactions is demonstrated. The resulting reduced skeletal model of 37-38 species showed that the global ignition/propagation/extinction phenomena of ethylene-air mixtures can be predicted within an accuracy of 2% of the full detailed model.
    The problems of both understanding non-linear interactions between kinetic parameters and identifying sources of uncertainty affecting relevant reaction pathways are usually addressed by resorting to Global Sensitivity Analysis (GSA) techniques. In particular, the most sensitive reactions controlling combustion phenomena are first identified using the Morris Method and then analyzed under the Random Sampling -- High Dimensional Model Representation (RS-HDMR) framework. The HDMR decomposition shows that 10% of the variance seen in the extinction strain rate of non-premixed flames is due to second-order effects between parameters, whereas the maximum concentration of acetylene, a key soot precursor, is affected by mostly only first-order contributions. Moreover, the analysis of the global sensitivity indices demonstrates that improving the accuracy of reaction rates involving the vinyl radical, C2H3, can drastically reduce the uncertainty of predicting targeted flame properties. Finally, the back-propagation of the experimental uncertainty of the extinction strain rate to the parameter space is also performed. This exercise, achieved by recycling the numerical solutions of the RS-HDMR, shows that some regions of the parameter space have a high probability of reproducing the experimental value of the extinction strain rate within its own uncertainty bounds. Therefore this study demonstrates that the uncertainty analysis of bulk flame properties can effectively provide information on relevant chemical reactions.
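    The Morris screening step mentioned above can be sketched in a few lines: elementary effects are one-at-a-time finite differences taken from random base points, and parameters are ranked by the mean absolute effect (mu*). The three-parameter model is invented for illustration, not an ethylene mechanism:

```python
import numpy as np

# Toy 3-parameter model: parameter 0 dominates, parameter 2 barely matters.
def model(x):
    return 5.0 * x[0] + np.sin(3.0 * x[1]) + 0.01 * x[2]

rng = np.random.default_rng(2)
d, r, delta = 3, 50, 0.1         # dimensions, trajectories, step size

ee = [[] for _ in range(d)]
for _ in range(r):
    x = rng.uniform(0.0, 1.0 - delta, size=d)   # random base point
    f0 = model(x)
    for i in range(d):           # one-at-a-time step in each direction
        xp = x.copy()
        xp[i] += delta
        ee[i].append((model(xp) - f0) / delta)

# Morris mu*: mean absolute elementary effect, used to rank parameters.
mu_star = [float(np.mean(np.abs(e))) for e in ee]
```

Only the parameters with large mu* are then carried into the more expensive RS-HDMR variance decomposition.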

  1. Raman imaging from microscopy to macroscopy: Quality and safety control of biological materials

    USDA-ARS?s Scientific Manuscript database

    Raman imaging can analyze biological materials by generating detailed chemical images. Over the last decade, tremendous advancements in Raman imaging and data analysis techniques have overcome problems such as long data acquisition and analysis times and poor sensitivity. This review article introdu...

  2. Detailed Analysis of the Binding Mode of Vanilloids to Transient Receptor Potential Vanilloid Type I (TRPV1) by a Mutational and Computational Study

    PubMed Central

    Mori, Yoshikazu; Ogawa, Kazuo; Warabi, Eiji; Yamamoto, Masahiro; Hirokawa, Takatsugu

    2016-01-01

    Transient receptor potential vanilloid type 1 (TRPV1) is a non-selective cation channel and a multimodal sensor protein. Since the precise structure of TRPV1 was obtained by electron cryo-microscopy, the binding mode of representative agonists such as capsaicin and resiniferatoxin (RTX) has been extensively characterized; however, detailed information on the binding mode of other vanilloids remains lacking. In this study, mutational analysis of human TRPV1 was performed, and four agonists (capsaicin, RTX, [6]-shogaol and [6]-gingerol) were used to identify amino acid residues involved in ligand binding and/or modulation of proton sensitivity. The detailed binding mode of each ligand was then simulated by computational analysis. As a result, three amino acids (L518, F591 and L670) were newly identified as being involved in ligand binding and/or modulation of proton sensitivity. In addition, in silico docking simulation and a subsequent mutational study suggested that [6]-gingerol might bind to and activate TRPV1 in a unique manner. These results provide novel insights into the binding mode of various vanilloids to the channel and will be helpful in developing a TRPV1 modulator. PMID:27606946

  3. Pressure sensitivity of low permeability sandstones

    USGS Publications Warehouse

    Kilmer, N.H.; Morrow, N.R.; Pitman, Janet K.

    1987-01-01

    Detailed core analysis has been carried out on 32 tight sandstones with permeabilities ranging over four orders of magnitude (0.0002 to 4.8 mD at 5000 psi confining pressure). Relationships between gas permeability and net confining pressure were measured for cycles of loading and unloading. For some samples, permeabilities were measured both along and across bedding planes. Large variations in stress sensitivity of permeability were observed from one sample to another. The ratio of permeability at a nominal confining pressure of 500 psi to that at 5000 psi was used to define a stress sensitivity ratio. For a given sample, confining pressure vs permeability followed a linear log-log relationship, the slope of which provided an index of pressure sensitivity. This index, as obtained for first unloading data, was used in testing relationships between stress sensitivity and other measured rock properties. Pressure sensitivity tended to increase with increase in carbonate content and depth, and with decrease in porosity, permeability and sodium feldspar content. However, scatter in these relationships increased as permeability decreased. Tests for correlations between pressure sensitivity and various linear combinations of variables are reported. Details of pore structure related to diagenetic changes appear to be of much greater significance to pressure sensitivity than mineral composition. © 1987.
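    The two summary quantities used above can be sketched directly: the pressure-sensitivity index as the slope of a log-log fit of permeability against confining pressure, and the stress sensitivity ratio as k(500 psi)/k(5000 psi). The data below are synthetic, generated to follow an exactly linear log-log trend:

```python
import numpy as np

# Synthetic permeability vs net confining pressure (values invented to
# follow the linear log-log relationship described in the abstract).
pressure = np.array([500.0, 1000.0, 2000.0, 5000.0])   # psi
perm = 0.05 * (pressure / 500.0) ** -0.35               # mD

# Pressure-sensitivity index: slope of log10(k) vs log10(p).
slope, intercept = np.polyfit(np.log10(pressure), np.log10(perm), 1)

# Stress sensitivity ratio: k at 500 psi over k at 5000 psi.
ratio = perm[0] / perm[-1]
```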

  4. Material and morphology parameter sensitivity analysis in particulate composite materials

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoyu; Oskay, Caglar

    2017-12-01

    This manuscript presents a novel parameter sensitivity analysis framework for damage and failure modeling of particulate composite materials subjected to dynamic loading. The proposed framework employs global sensitivity analysis to study the variance in the failure response as a function of model parameters. In view of the computational complexity of performing thousands of detailed microstructural simulations to characterize sensitivities, Gaussian process (GP) surrogate modeling is incorporated into the framework. In order to capture the discontinuity in response surfaces, the GP models are integrated with a support vector machine classification algorithm that identifies the discontinuities within response surfaces. The proposed framework is employed to quantify variability and sensitivities in the failure response of polymer bonded particulate energetic materials under dynamic loads to material properties and morphological parameters that define the material microstructure. Particular emphasis is placed on the identification of sensitivity to interfaces between the polymer binder and the energetic particles. The proposed framework has been demonstrated to identify the most consequential material and morphological parameters under vibrational and impact loads.
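    The surrogate construction can be sketched in miniature: a discontinuous one-dimensional "response" is split into regimes by a classifier, and a Gaussian-process surrogate is fit per regime. Here the GP is a bare-bones RBF interpolator and a simple threshold stands in for the framework's support vector machine; the response function is invented:

```python
import numpy as np

# Discontinuous 1-D "failure response" with a jump at x = 0.5 (illustrative).
def response(x):
    return np.where(x < 0.5, np.sin(4.0 * x), 2.0 + 0.5 * x)

def gp_fit_predict(xtr, ytr, xte, ell=0.15, jitter=1e-8):
    """GP posterior mean with an RBF kernel (zero prior mean, tiny jitter)."""
    k = lambda a, b: np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)
    K = k(xtr, xtr) + jitter * np.eye(len(xtr))
    return k(xte, xtr) @ np.linalg.solve(K, ytr)

xtr = np.linspace(0.0, 1.0, 11)      # training designs
ytr = response(xtr)

x_star = np.array([0.3, 0.8])        # points to predict
regime = x_star >= 0.5               # classifier step (threshold stand-in)
pred = np.empty_like(x_star)
for r in (False, True):
    tr = (xtr >= 0.5) == r           # train only on same-regime samples
    sel = regime == r
    if sel.any():
        pred[sel] = gp_fit_predict(xtr[tr], ytr[tr], x_star[sel])
```

Fitting a separate smooth surrogate on each side of the classified discontinuity is what keeps the GP from smearing the jump.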

  5. Accelerated Insertion of Materials - Composites

    DTIC Science & Technology

    2001-08-28

    Details • Damage Tolerance • Repair • Validation of Analysis Methodology • Fatigue • Static • Acoustic • Configuration Details • Damage Tolerance...Sensitivity – Fatigue – Adhesion – Damage Tolerance – All critical modes and environments Products: Material Specifications, B-Basis Design Allowables...Demonstrate damage tolerance AIM-C DARPA DARPA Workshop, Annapolis, August 27-28, 2001 Requalification of Polymer / Composite Parts • Material Changes – Raw

  6. Simulation on a car interior aerodynamic noise control based on statistical energy analysis

    NASA Astrophysics Data System (ADS)

    Chen, Xin; Wang, Dengfeng; Ma, Zhengdong

    2012-09-01

    How to simulate interior aerodynamic noise accurately is an important question in car interior noise reduction. The unsteady aerodynamic pressure on body surfaces is shown to be the key factor in car interior aerodynamic noise control at high frequency and high speed. In this paper, a detailed statistical energy analysis (SEA) model is built, and the vibro-acoustic power inputs are loaded on the model to obtain valid results for car interior noise analysis. The model is the solid foundation for further optimization of car interior noise control. After comprehensive SEA analysis identifies the subsystems whose power contributions to car interior noise are most sensitive, the sound pressure level of car interior aerodynamic noise can be reduced by improving their sound and damping characteristics. Further vehicle testing shows that the interior acoustic performance can be improved by using the detailed SEA model, which comprises more than 80 subsystems, together with the unsteady aerodynamic pressure calculation on body surfaces and improvement of the materials' sound/damping properties. A reduction of more than 2 dB is achieved at the central frequencies in the spectrum above 800 Hz. The proposed optimization method can serve as a reference for car interior aerodynamic noise control using a detailed SEA model integrated with unsteady computational fluid dynamics (CFD) and sensitivity analysis of acoustic contributions.

  7. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications [computational fluid dynamics]

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1992-01-01

    Fundamental equations of aerodynamic sensitivity analysis and approximate analysis for the two-dimensional thin-layer Navier-Stokes equations are reviewed, and special boundary condition considerations necessary to apply these equations to isolated lifting airfoils on 'C' and 'O' meshes are discussed in detail. An efficient strategy based on the finite element method and an elastic membrane representation of the computational domain is successfully tested; it circumvents the costly 'brute force' method of obtaining grid sensitivity derivatives and is also useful in mesh regeneration. The issue of turbulence modeling is addressed in a preliminary study. Aerodynamic shape sensitivity derivatives are efficiently calculated, and their accuracy is validated on two viscous test problems: (1) internal flow through a double-throat nozzle, and (2) external flow over a NACA 4-digit airfoil. An automated aerodynamic design optimization strategy is outlined which includes the use of a design optimization program, an aerodynamic flow analysis code, an aerodynamic sensitivity and approximate analysis code, and a mesh regeneration and grid sensitivity analysis code. Application of the optimization methodology to the two test problems in each case resulted in a new design having significantly improved performance in the aerodynamic response of interest.

  8. Benchmark Data Set for Wheat Growth Models: Field Experiments and AgMIP Multi-Model Simulations.

    NASA Technical Reports Server (NTRS)

    Asseng, S.; Ewert, F.; Martre, P.; Rosenzweig, C.; Jones, J. W.; Hatfield, J. L.; Ruane, A. C.; Boote, K. J.; Thorburn, P.J.; Rotter, R. P.

    2015-01-01

    The data set includes a current representative management treatment from detailed, quality-tested sentinel field experiments with wheat from four contrasting environments: Australia, The Netherlands, India, and Argentina. Measurements include local daily climate data (solar radiation, maximum and minimum temperature, precipitation, surface wind, dew point temperature, relative humidity, and vapor pressure), soil characteristics, frequent growth measurements, nitrogen in crop and soil, crop and soil water, and yield components. Simulations include results from 27 wheat models and a sensitivity analysis with 26 models over 30 years (1981-2010) for each location, for elevated atmospheric CO2 and temperature changes, a heat stress sensitivity analysis at anthesis, and a sensitivity analysis with soil and crop management variations and a Global Climate Model end-century scenario.

  9. Navigation Design and Analysis for the Orion Cislunar Exploration Missions

    NASA Technical Reports Server (NTRS)

    D'Souza, Christopher; Holt, Greg; Gay, Robert; Zanetti, Renato

    2014-01-01

    This paper details the design and analysis of the cislunar optical navigation system being proposed for the Orion Earth-Moon (EM) missions. In particular, it presents the mathematics of the navigation filter. It also presents the sensitivity analysis that has been performed to understand the performance of the proposed system, with particular attention paid to entry flight path angle constraints and Delta-V performance.

  10. Sensitivity Tests Between Vs30 and Detailed Shear Wave Profiles Using 1D and 3D Site Response Analysis, Las Vegas Valley

    NASA Astrophysics Data System (ADS)

    West, Loyd Travis

    Site characterization is an essential aspect of hazard analysis, and the time-averaged shear-wave velocity to 30 m depth (Vs30) used for site classification has become a critical parameter in site-specific and probabilistic hazard analysis. Yet the general applicability of Vs30 can be ambiguous, and much debate and research surround its application. In 2007, in part to mitigate the uncertainty associated with the use of Vs30 in Las Vegas Valley, the Clark County Building Department (CCBD), in collaboration with the Nevada System of Higher Education (NSHE), embarked on an endeavor to map Vs30 using a geophysical methods approach for a site-class microzonation map of over 500 square miles (1500 km2) in southern Nevada. The resulting dataset, described by Pancha et al. (2017), contains over 10,700 1D shear-wave-velocity-depth profiles (SWVP) that constitute a rich database of 3D shear-wave velocity structure that is both laterally and vertically heterogeneous. This study capitalizes on the uniquely detailed and spatially dense CCBD database to carry out sensitivity tests on the detailed shear-wave velocity profiles and on Vs30, utilizing 1D and 3D site-response approaches. Sensitivity tests are derived from the 1D response of a single-degree-of-freedom oscillator and from 3D finite-difference deterministic simulations up to 15 Hz using similar model parameters. Results demonstrate that the detailed SWVPs amplify ground motions by roughly 50% over the simple Vs30 models above 4.6 Hz. Numerical simulations also depict significant lateral resonance, focusing, and scattering of seismic energy attributed to the 3D small-scale heterogeneities of the shear-wave velocity profiles, resulting in a 70% increase in peak ground velocity. Additionally, PGV ratio maps clearly establish that the increased amplification from the detailed SWVPs is consistent throughout the model space. As a corollary, this study demonstrates the use of finite-difference numerical methods to simulate ground motions at high frequencies, up to 15 Hz.
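Vs30 itself is a simple quantity: the travel-time-averaged shear-wave velocity over the top 30 m of a layered profile. A minimal sketch (the layered profile is hypothetical, not from the CCBD database):

```python
def vs30(layers):
    """Time-averaged shear-wave velocity to 30 m depth.

    layers: list of (thickness_m, vs_m_per_s) tuples, top-down.
    Vs30 = 30 / sum(h_i / v_i) over the uppermost 30 m.
    """
    depth, travel_time = 0.0, 0.0
    for h, v in layers:
        use = min(h, 30.0 - depth)   # clip the layer at 30 m depth
        travel_time += use / v
        depth += use
        if depth >= 30.0:
            break
    if depth < 30.0:
        raise ValueError("profile shallower than 30 m")
    return 30.0 / travel_time

# Hypothetical 1D profile: slow near-surface soil over stiffer layers.
profile = [(5.0, 200.0), (10.0, 350.0), (20.0, 600.0)]
site_vs30 = vs30(profile)
```

The averaging is over travel time, not thickness, so a thin slow surface layer pulls Vs30 down disproportionately, which is one reason a single Vs30 value can misrepresent a detailed profile.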

  11. The Active for Life Year 5 (AFLY5) school-based cluster randomised controlled trial protocol: detailed statistical analysis plan.

    PubMed

    Lawlor, Debbie A; Peters, Tim J; Howe, Laura D; Noble, Sian M; Kipping, Ruth R; Jago, Russell

    2013-07-24

    The Active For Life Year 5 (AFLY5) randomised controlled trial protocol was published in this journal in 2011. It provided a summary analysis plan. This publication is an update of that protocol and provides a detailed analysis plan of the effectiveness and cost-effectiveness of the AFLY5 intervention. The plan includes details of how variables will be quality-control checked and the criteria used to define derived variables. Details of four key analyses are provided: (a) effectiveness analysis 1 (the effect of the AFLY5 intervention on primary and secondary outcomes at the end of the school year in which the intervention is delivered); (b) mediation analyses (secondary analyses examining the extent to which any effects of the intervention are mediated via self-efficacy, parental support and knowledge, through which the intervention is theoretically believed to act); (c) effectiveness analysis 2 (the effect of the AFLY5 intervention on primary and secondary outcomes 12 months after the end of the intervention); and (d) cost-effectiveness analysis (the cost-effectiveness of the AFLY5 intervention). The details include how the intention-to-treat and per-protocol analyses were defined, and planned sensitivity analyses for dealing with missing data. A set of dummy tables is provided in Additional file 1. This detailed analysis plan was written before any analyst had access to any data and was approved by the AFLY5 Trial Steering Committee. Its publication will ensure that analyses are in accordance with an a priori plan related to the trial objectives and not driven by knowledge of the data. ISRCTN50133740.

  12. Sensitivity Analysis of Nuclide Importance to One-Group Neutron Cross Sections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sekimoto, Hiroshi; Nemoto, Atsushi; Yoshimura, Yoshikane

    The importance of nuclides is useful when investigating nuclide characteristics in a given neutron spectrum. However, it is derived using one-group microscopic cross sections, which may contain large errors or uncertainties. The sensitivity coefficient shows the effect of these errors or uncertainties on the importance. The equations for calculating sensitivity coefficients of importance to one-group nuclear constants are derived using the perturbation method. Numerical values are also evaluated for some important cases for fast and thermal reactor systems. Many characteristics of the sensitivity coefficients are derived from the derived equations and numerical results. The matrix of sensitivity coefficients appears diagonally dominant; however, this is not always satisfied in a detailed structure. The detailed structure of the matrix and the characteristics of the coefficients are given. Using the obtained sensitivity coefficients, some demonstration calculations have been performed. The effects of error and uncertainty in nuclear data, and of changes in the one-group cross-section input caused by fuel design changes through the neutron spectrum, are investigated. These calculations show that the sensitivity coefficient is useful when evaluating the error or uncertainty of nuclide importance caused by cross-section data error or uncertainty, and when checking the effectiveness of fuel cell or core design changes for improving neutron economy.
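A normalized sensitivity coefficient of this kind, S = (x/I)(dI/dx), can be approximated generically by a small perturbation; the two-cross-section "importance" functional below is a toy stand-in, not the paper's model:

```python
def sensitivity_coefficient(f, x, i, rel_step=1e-4):
    """Normalized sensitivity of f(x) to component x[i]:
    S = (x_i / f(x)) * df/dx_i, via a central finite difference."""
    h = x[i] * rel_step
    xp = list(x); xp[i] += h
    xm = list(x); xm[i] -= h
    dfdx = (f(xp) - f(xm)) / (2 * h)
    return x[i] / f(x) * dfdx

# Toy functional: a ratio of production to absorption one-group
# cross sections (illustrative only).
def importance(sig):          # sig = [sigma_f, sigma_a]
    return sig[0] / sig[1]

S_f = sensitivity_coefficient(importance, [1.5, 2.0], 0)
S_a = sensitivity_coefficient(importance, [1.5, 2.0], 1)
```

For this ratio the coefficients are +1 and -1: a 1% change in either cross section moves the quantity by 1%, with opposite signs.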

  13. Idealized simulation of the Colorado hailstorm case: comparison of bulk and detailed microphysics

    NASA Astrophysics Data System (ADS)

    Geresdi, I.

    One of the purposes of the Fourth Cloud Modeling Workshop was to compare different microphysical treatments. In this paper, the results of a widely used bulk treatment and five versions of a detailed microphysical model are presented. A sensitivity analysis was performed to investigate the effects of the bulk parametrization, the ice initiation technique, the CCN concentration, and the collision efficiency of rimed ice crystal-drop collisions. The results show that: (i) the mixing ratios of the different hydrometeor species calculated by the bulk model and one of the detailed models show some similarity; however, the processes of hail/graupel formation differ between the bulk and detailed models; (ii) with different ice initiation techniques in the detailed models, different processes became important in hail and graupel formation; (iii) at higher CCN concentration, the mixing ratios of liquid water, hail, and graupel were more sensitive to the value of the collision efficiency of rimed ice crystal-drop collisions; (iv) the Bergeron-Findeisen process does not operate in the updraft core of a convective cloud: the vapor content was always above water saturation, and the supersaturation gradually increased after the appearance of precipitating ice particles.

  14. Skeletal Mechanism Generation of Surrogate Jet Fuels for Aeropropulsion Modeling

    NASA Astrophysics Data System (ADS)

    Sung, Chih-Jen; Niemeyer, Kyle E.

    2010-05-01

    A novel implementation for the skeletal reduction of large detailed reaction mechanisms using the directed relation graph with error propagation and sensitivity analysis (DRGEPSA) is developed and presented with skeletal reductions of two important hydrocarbon components, n-heptane and n-decane, relevant to surrogate jet fuel development. DRGEPSA integrates two previously developed methods, directed relation graph-aided sensitivity analysis (DRGASA) and directed relation graph with error propagation (DRGEP), by first applying DRGEP to efficiently remove many unimportant species prior to sensitivity analysis to further remove unimportant species, producing an optimally small skeletal mechanism for a given error limit. It is illustrated that the combination of the DRGEP and DRGASA methods allows the DRGEPSA approach to overcome the weaknesses of each previous method, specifically that DRGEP cannot identify all unimportant species and that DRGASA shields unimportant species from removal.
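The error-propagation step in DRGEP is at heart a graph search: the overall interaction coefficient between the target and each species is the maximum over all paths of the product of direct interaction coefficients along the path. A sketch under that definition (the graph and cutoff are invented, not from a real mechanism):

```python
import heapq

def drgep_coefficients(direct, target):
    """Overall interaction coefficients R_AB = max over paths from
    `target` of the product of direct coefficients along the path.
    `direct` maps species -> {neighbor: r in [0, 1]}.  Because every
    edge weight is <= 1, a Dijkstra-style max-product search works."""
    R = {target: 1.0}
    heap = [(-1.0, target)]          # max-heap via negated values
    while heap:
        negr, s = heapq.heappop(heap)
        r = -negr
        if r < R.get(s, 0.0):        # stale heap entry
            continue
        for nbr, edge in direct.get(s, {}).items():
            cand = r * edge
            if cand > R.get(nbr, 0.0):
                R[nbr] = cand
                heapq.heappush(heap, (-cand, nbr))
    return R

# Tiny illustrative graph (coefficients are made up):
graph = {
    "fuel": {"A": 0.9, "B": 0.2},
    "A":    {"C": 0.5},
    "B":    {"C": 0.9},
}
R = drgep_coefficients(graph, "fuel")
# Species below a cutoff would be dropped from the skeletal mechanism:
skeletal = {s for s, r in R.items() if r >= 0.3}
```

Here C is kept because the strong path fuel→A→C (0.9 × 0.5 = 0.45) dominates the weak path through B; DRGEPSA would then run sensitivity analysis on the borderline survivors.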

  15. Sensitive and comprehensive analysis of O-glycosylation in biotherapeutics: a case study of novel erythropoiesis stimulating protein.

    PubMed

    Kim, Unyong; Oh, Myung Jin; Seo, Youngsuk; Jeon, Yinae; Eom, Joon-Ho; An, Hyun Joo

    2017-09-01

    Glycosylation of recombinant human erythropoietins (rhEPOs) is significantly associated with the drug's quality and potency. Thus, comprehensive characterization of glycosylation is vital to assess biotherapeutic quality and establish the equivalency of biosimilar rhEPOs. However, current glycan analysis mainly focuses on the N-glycans, owing to the absence of analytical tools that can liberate O-glycans with high sensitivity. We developed a selective and sensitive method to profile native O-glycans on rhEPOs. O-glycosylation on rhEPO, including O-acetylation on a sialic acid, was comprehensively characterized. Details such as O-glycan structure and O-acetyl modification site were obtained from tandem MS. This method may be applied to QC and batch analysis not only of rhEPOs but also of other biotherapeutics bearing multiple O-glycosylations.

  16. A new sensitivity analysis for structural optimization of composite rotor blades

    NASA Technical Reports Server (NTRS)

    Venkatesan, C.; Friedmann, P. P.; Yuan, Kuo-An

    1993-01-01

    This paper presents a detailed mathematical derivation of the sensitivity derivatives for the structural dynamic, aeroelastic stability, and response characteristics of a rotor blade in hover and forward flight. The formulation is denoted by the term semi-analytical approach, because certain derivatives have to be evaluated by a finite difference scheme. Using the present formulation, sensitivity derivatives for the structural dynamic and aeroelastic stability characteristics were evaluated for both isotropic and composite rotor blades. Based on the results, useful conclusions are obtained regarding the relative merits of the semi-analytical approach for calculating sensitivity derivatives, when compared to a pure finite difference approach.

  17. A detailed analysis of the erythropoietic control system in the human, squirrel monkey, rat and mouse

    NASA Technical Reports Server (NTRS)

    Nordheim, A. W.

    1985-01-01

    The erythropoiesis modeling performed in support of the Body Fluid and Blood Volume Regulation tasks is described. The mathematical formulation of the species independent model, the solutions to the steady state and dynamic versions of the model, and the individual species specific models for the human, squirrel monkey, rat and mouse are outlined. A detailed sensitivity analysis of the species independent model response to parameter changes and how those responses change from species to species is presented. The species to species response to a series of simulated stresses directly related to blood volume regulation during space flight is analyzed.

  18. Image encryption based on a delayed fractional-order chaotic logistic system

    NASA Astrophysics Data System (ADS)

    Wang, Zhen; Huang, Xia; Li, Ning; Song, Xiao-Na

    2012-05-01

    A new image encryption scheme is proposed based on a delayed fractional-order chaotic logistic system. In the process of generating a key stream, the time-varying delay and fractional derivative are embedded in the proposed scheme to improve the security. Such a scheme is described in detail with security analyses including correlation analysis, information entropy analysis, run statistic analysis, mean-variance gray value analysis, and key sensitivity analysis. Experimental results show that the newly proposed image encryption scheme possesses high security.
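Key sensitivity of a chaotic-keystream cipher can be quantified as the fraction of ciphertext bytes that change under a tiny key perturbation. The sketch below uses a plain logistic map rather than the paper's delayed fractional-order system, so it illustrates only the test, not the proposed scheme:

```python
def logistic_keystream(x0, r, n, burn=100):
    """Keystream from the logistic map x <- r*x*(1-x); one byte per
    iterate, after a burn-in that lets nearby keys diverge."""
    x = x0
    for _ in range(burn):
        x = r * x * (1 - x)
    out = []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)
    return bytes(out)

def encrypt(data, x0, r=3.99):
    """XOR stream cipher keyed by the map's initial condition x0."""
    ks = logistic_keystream(x0, r, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

# Key sensitivity: a 1e-10 change in the key should alter most bytes.
img = bytes(range(256)) * 16                 # stand-in "image" data
c1 = encrypt(img, 0.3141592653)
c2 = encrypt(img, 0.3141592653 + 1e-10)
npcr = sum(a != b for a, b in zip(c1, c2)) / len(c1)
```

Because the map is chaotic, the two keystreams decorrelate completely after the burn-in, so `npcr` lands near 1 - 1/256; a low value would indicate a weak key schedule.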

  19. Sensitivity of Combustion-Acoustic Instabilities to Boundary Conditions for Premixed Gas Turbine Combustors

    NASA Technical Reports Server (NTRS)

    Darling, Douglas; Radhakrishnan, Krishnan; Oyediran, Ayo

    1995-01-01

    Premixed combustors, which are being considered for low NOx engines, are susceptible to instabilities due to feedback between pressure perturbations and combustion. This feedback can cause damaging mechanical vibrations of the system as well as degrade the emissions characteristics and combustion efficiency. In a lean combustor instabilities can also lead to blowout. A model was developed to perform linear combustion-acoustic stability analysis using detailed chemical kinetic mechanisms. The Lewis Kinetics and Sensitivity Analysis Code, LSENS, was used to calculate the sensitivities of the heat release rate to perturbations in density and temperature. In the present work, an assumption was made that the mean flow velocity was small relative to the speed of sound. Results of this model showed the regions of growth of perturbations to be most sensitive to the reflectivity of the boundary when reflectivities were close to unity.

  20. iTOUGH2 v7.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    FINSTERLE, STEFAN; JUNG, YOOJIN; KOWALSKY, MICHAEL

    2016-09-15

    iTOUGH2 (inverse TOUGH2) provides inverse modeling capabilities for TOUGH2, a simulator for multi-dimensional, multi-phase, multi-component, non-isothermal flow and transport in fractured porous media. iTOUGH2 performs sensitivity analyses, data-worth analyses, parameter estimation, and uncertainty propagation analyses in geosciences, reservoir engineering, and other application areas. iTOUGH2 supports a number of different combinations of fluids and components (equation-of-state (EOS) modules). In addition, the optimization routines implemented in iTOUGH2 can also be used for sensitivity analysis, automatic model calibration, and uncertainty quantification of any external code that uses text-based input and output files, via the PEST protocol. iTOUGH2 solves the inverse problem by minimizing a non-linear objective function of the weighted differences between model output and the corresponding observations. Multiple minimization algorithms (derivative-free, gradient-based, and second-order; local and global) are available. iTOUGH2 also performs Latin Hypercube Monte Carlo simulations for uncertainty propagation analyses. A detailed residual and error analysis is provided. This upgrade includes (a) global sensitivity analysis methods, (b) dynamic memory allocation, (c) additional input features and output analyses, (d) increased forward simulation capabilities, (e) parallel execution on multicore PCs and Linux clusters, and (f) bug fixes. More details can be found at http://esd.lbl.gov/iTOUGH2.
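Latin hypercube sampling, as used here for uncertainty propagation, stratifies each parameter range into n equal-probability bins and draws from each bin exactly once, with bins randomly paired across parameters. A minimal sketch (the parameter bounds are placeholders, not iTOUGH2 inputs):

```python
import random

def latin_hypercube(n_samples, bounds, rng=None):
    """Latin hypercube sample: each parameter's range is split into
    n_samples equal strata; each stratum is sampled exactly once and
    the strata are shuffled independently per dimension."""
    rng = rng or random.Random(0)
    dims = len(bounds)
    sample = [[0.0] * dims for _ in range(n_samples)]
    for d, (lo, hi) in enumerate(bounds):
        strata = list(range(n_samples))
        rng.shuffle(strata)
        for i, k in enumerate(strata):
            u = (k + rng.random()) / n_samples   # point inside stratum k
            sample[i][d] = lo + u * (hi - lo)
    return sample

# Hypothetical 2-parameter uncertainty space (e.g. permeability, porosity):
pts = latin_hypercube(10, [(1e-14, 1e-12), (0.05, 0.3)])
```

Compared with plain Monte Carlo, the stratification guarantees the marginal distribution of every parameter is covered evenly even for small sample counts.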

  1. Methods for comparative evaluation of propulsion system designs for supersonic aircraft

    NASA Technical Reports Server (NTRS)

    Tyson, R. M.; Mairs, R. Y.; Halferty, F. D., Jr.; Moore, B. E.; Chaloff, D.; Knudsen, A. W.

    1976-01-01

    The propulsion system comparative evaluation study was conducted to define a rapid, approximate method for evaluating the effects of propulsion system changes for an advanced supersonic cruise airplane, and to verify the approximate method by comparing its mission performance results with those from a more detailed analysis. A table look-up computer program was developed to determine nacelle drag increments for a range of parametric nacelle shapes and sizes. Aircraft sensitivities to propulsion parameters were defined. Nacelle shapes, installed weights, and installed performance were determined for four study engines selected from the NASA supersonic cruise aircraft research (SCAR) engine studies program. Both the rapid evaluation method (using sensitivities) and traditional preliminary design methods were then used to assess the four engines. The method was found to compare well with the more detailed analyses.

  2. Revisiting photon-statistics effects on multiphoton ionization

    NASA Astrophysics Data System (ADS)

    Mouloudakis, G.; Lambropoulos, P.

    2018-05-01

    We present a detailed analysis of the effects of photon statistics on multiphoton ionization. Through a detailed study of the role of intermediate states, we evaluate the conditions under which the premise of nonresonant processes is valid. The limitations of its validity are manifested in the dependence of the process on the stochastic properties of the radiation and found to be quite sensitive to the intensity. The results are quantified through detailed calculations for coherent, chaotic, and squeezed vacuum radiation. Their significance in the context of recent developments in radiation sources such as the short-wavelength free-electron laser and squeezed vacuum radiation is also discussed.

  3. Autonomous Mars ascent and orbit rendezvous for earth return missions

    NASA Technical Reports Server (NTRS)

    Edwards, H. C.; Balmanno, W. F.; Cruz, Manuel I.; Ilgen, Marc R.

    1991-01-01

    The details of the assessment of autonomous Mars ascent and orbit rendezvous for Earth return missions are presented. Analyses addressing navigation system assessments, trajectory planning, targeting approaches, flight control guidance strategies, and performance sensitivities are included. Tradeoffs in the analysis and design process are discussed.

  4. SENSITIVITY OF BLIND PULSAR SEARCHES WITH THE FERMI LARGE AREA TELESCOPE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dormody, M.; Johnson, R. P.; Atwood, W. B.

    2011-12-01

    We quantitatively establish the sensitivity to the detection of young to middle-aged, isolated, gamma-ray pulsars through blind searches of Fermi Large Area Telescope (LAT) data using a Monte Carlo simulation. We detail a sensitivity study of the time-differencing blind search code used to discover gamma-ray pulsars in the first year of observations. We simulate 10,000 pulsars across a broad parameter space and distribute them across the sky. We replicate the analysis in the Fermi LAT First Source Catalog to localize the sources, and the blind search analysis to find the pulsars. We analyze the results and discuss the effect of positional error and spin frequency on gamma-ray pulsar detections. Finally, we construct a formula to determine the sensitivity of the blind search and present a sensitivity map assuming a standard set of pulsar parameters. The results of this study can be applied to population studies and are useful in characterizing unidentified LAT sources.

  5. Comparison between two methodologies for urban drainage decision aid.

    PubMed

    Moura, P M; Baptista, M B; Barraud, S

    2006-01-01

    The objective of the present work is to compare two methodologies based on multicriteria analysis for the evaluation of stormwater systems. The first methodology was developed in Brazil and is based on performance-cost analysis, the second one is ELECTRE III. Both methodologies were applied to a case study. Sensitivity and robustness analyses were then carried out. These analyses demonstrate that both methodologies have equivalent results, and present low sensitivity and high robustness. These results prove that the Brazilian methodology is consistent and can be used safely in order to select a good solution or a small set of good solutions that could be compared with more detailed methods afterwards.

  6. Sensitive Multi-Species Emissions Monitoring: Infrared Laser-Based Detection of Trace-Level Contaminants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steill, Jeffrey D.; Huang, Haifeng; Hoops, Alexandra A.

    This report summarizes our development of spectroscopic chemical analysis techniques and spectral modeling for trace-gas measurements of highly regulated, low-concentration species present in flue gas emissions from utility coal boilers, such as HCl under conditions of high humidity. Detailed spectral modeling of HCl and other important combustion and atmospheric species such as H2O, CO2, N2O, NO2, SO2, and CH4 demonstrates that IR-laser spectroscopy is a sensitive multi-component analysis strategy. Experimental measurements from techniques based on IR laser spectroscopy are presented that demonstrate sub-ppm sensitivity levels to these species. Photoacoustic infrared spectroscopy is used to detect and quantify HCl at ppm levels with extremely high signal-to-noise even under conditions of high relative humidity. Additionally, cavity ring-down IR spectroscopy is used to achieve extremely high sensitivity to combustion trace gases in this spectral region; ppm-level CH4 is one demonstrated example. The importance of spectral resolution in the sensitivity of a trace-gas measurement is examined by spectral modeling in the mid- and near-IR, and efforts to improve measurement resolution through novel instrument development are described. While previous project reports focused on the benefits and complexities of the dual-etalon cavity ring-down infrared spectrometer, here details on steps taken to implement this unique and potentially revolutionary instrument are described. This report also illustrates and critiques the general strategy of IR-laser photodetection of trace gases, leading to the conclusion that mid-IR laser spectroscopy techniques provide a promising basis for further instrument development and implementation that will enable cost-effective, sensitive detection of multiple key contaminant species simultaneously.

  7. The rise of multiple imputation: a review of the reporting and implementation of the method in medical research.

    PubMed

    Hayati Rezvan, Panteha; Lee, Katherine J; Simpson, Julie A

    2015-04-07

    Missing data are common in medical research, which can lead to a loss in statistical power and potentially biased results if not handled appropriately. Multiple imputation (MI) is a statistical method, widely adopted in practice, for dealing with missing data. Many academic journals now emphasise the importance of reporting information regarding missing data and proposed guidelines for documenting the application of MI have been published. This review evaluated the reporting of missing data, the application of MI including the details provided regarding the imputation model, and the frequency of sensitivity analyses within the MI framework in medical research articles. A systematic review of articles published in the Lancet and New England Journal of Medicine between January 2008 and December 2013 in which MI was implemented was carried out. We identified 103 papers that used MI, with the number of papers increasing from 11 in 2008 to 26 in 2013. Nearly half of the papers specified the proportion of complete cases or the proportion with missing data by each variable. In the majority of the articles (86%) the imputed variables were specified. Of the 38 papers (37%) that stated the method of imputation, 20 used chained equations, 8 used multivariate normal imputation, and 10 used alternative methods. Very few articles (9%) detailed how they handled non-normally distributed variables during imputation. Thirty-nine papers (38%) stated the variables included in the imputation model. Less than half of the papers (46%) reported the number of imputations, and only two papers compared the distribution of imputed and observed data. Sixty-six papers presented the results from MI as a secondary analysis. Only three articles carried out a sensitivity analysis following MI to assess departures from the missing at random assumption, with details of the sensitivity analyses only provided by one article. 
This review outlined deficiencies in the documenting of missing data and the details provided about imputation. Furthermore, only a few articles performed sensitivity analyses following MI even though this is strongly recommended in guidelines. Authors are encouraged to follow the available guidelines and provide information on missing data and the imputation process.
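Multiple imputation itself follows a fixed recipe: impute m times, analyze each completed data set, then pool the estimates with Rubin's rules. A deliberately crude sketch (random hot-deck draws stand in for chained equations, and the data are invented):

```python
import random
import statistics

def impute_once(data, rng):
    """One stochastic imputation: fill each missing value (None) with a
    random draw from the observed values (crude hot-deck scheme)."""
    observed = [x for x in data if x is not None]
    return [x if x is not None else rng.choice(observed) for x in data]

def rubin_pool(estimates, variances):
    """Pool m completed-data estimates with Rubin's rules:
    total variance = within-imputation + (1 + 1/m) * between-imputation."""
    m = len(estimates)
    qbar = statistics.fmean(estimates)       # pooled point estimate
    ubar = statistics.fmean(variances)       # within-imputation variance
    b = statistics.variance(estimates)       # between-imputation variance
    return qbar, ubar + (1 + 1 / m) * b

rng = random.Random(1)
data = [3.1, None, 2.8, 3.5, None, 3.0, 2.9, 3.3, None, 3.2]
ests, vars_ = [], []
for _ in range(20):                          # m = 20 imputations
    comp = impute_once(data, rng)
    ests.append(statistics.fmean(comp))
    vars_.append(statistics.variance(comp) / len(comp))  # var of the mean
qbar, tvar = rubin_pool(ests, vars_)
```

The between-imputation term is what a single imputation throws away; omitting it understates uncertainty, which is one reason the review emphasizes reporting the number of imputations and sensitivity analyses.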

  8. Are LOD and LOQ Reliable Parameters for Sensitivity Evaluation of Spectroscopic Methods?

    PubMed

    Ershadi, Saba; Shayanfar, Ali

    2018-03-22

    The limit of detection (LOD) and the limit of quantification (LOQ) are common parameters to assess the sensitivity of analytical methods. In this study, the LOD and LOQ of previously reported terbium sensitized analysis methods were calculated by different methods, and the results were compared with sensitivity parameters [lower limit of quantification (LLOQ)] of U.S. Food and Drug Administration guidelines. The details of the calibration curve and standard deviation of blank samples of three different terbium-sensitized luminescence methods for the quantification of mycophenolic acid, enrofloxacin, and silibinin were used for the calculation of LOD and LOQ. A comparison of LOD and LOQ values calculated by various methods and LLOQ shows a considerable difference. The significant difference of the calculated LOD and LOQ with various methods and LLOQ should be considered in the sensitivity evaluation of spectroscopic methods.
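The conventional calculation the paper scrutinizes is the ICH-style one: LOD = 3.3σ/S and LOQ = 10σ/S, with σ the standard deviation of blank responses and S the calibration slope. A sketch with invented calibration data (not values from the cited assays):

```python
import statistics

def calibration_slope(concs, signals):
    """Least-squares slope of signal versus concentration."""
    cbar, sbar = statistics.fmean(concs), statistics.fmean(signals)
    num = sum((c - cbar) * (s - sbar) for c, s in zip(concs, signals))
    den = sum((c - cbar) ** 2 for c in concs)
    return num / den

def lod_loq(blank_signals, slope):
    """ICH-style estimates: LOD = 3.3*sigma/S, LOQ = 10*sigma/S."""
    sigma = statistics.stdev(blank_signals)
    return 3.3 * sigma / slope, 10.0 * sigma / slope

# Made-up calibration data in arbitrary units:
concs   = [0.0, 1.0, 2.0, 5.0, 10.0]
signals = [0.02, 1.05, 2.01, 5.10, 9.98]
blanks  = [0.018, 0.022, 0.020, 0.025, 0.015]
slope = calibration_slope(concs, signals)
lod, loq = lod_loq(blanks, slope)
```

The paper's point is that estimates like these can differ considerably from an experimentally verified LLOQ, so the choice of calculation method should be stated explicitly.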

  9. Coupled reactors analysis: New needs and advances using Monte Carlo methodology

    DOE PAGES

    Aufiero, M.; Palmiotti, G.; Salvatores, M.; ...

    2016-08-20

    Coupled reactors and the coupling features of large or heterogeneous core reactors can be investigated with the Avery theory, which allows a physical understanding of the main features of these systems. However, the complex geometries often encountered in association with coupled reactors require a detailed geometry description that can be easily provided by modern Monte Carlo (MC) codes. This implies an MC calculation of the coupling parameters defined by Avery and of the sensitivity coefficients that allow further detailed physics analysis. The results presented in this paper show that the MC code SERPENT has been successfully modified to meet the required capabilities.

  10. Laser synthesized super-hydrophobic conducting carbon with broccoli-type morphology as a counter-electrode for dye sensitized solar cells

    NASA Astrophysics Data System (ADS)

    Gokhale, Rohan; Agarkar, Shruti; Debgupta, Joyashish; Shinde, Deodatta; Lefez, Benoit; Banerjee, Abhik; Jog, Jyoti; More, Mahendra; Hannoyer, Beatrice; Ogale, Satishchandra

    2012-10-01

    A laser photochemical process is introduced to realize superhydrophobic conducting carbon coatings with broccoli-type hierarchical morphology for use as a metal-free counter electrode in a dye sensitized solar cell. The process involves pulsed excimer laser irradiation of a thin layer of the liquid haloaromatic organic solvent o-dichlorobenzene (DCB). The coating reflects a carbon nanoparticle self-assembled, process-controlled morphology that yields a solar-to-electric power conversion efficiency of 5.1%, as opposed to the 6.2% obtained with the conventional Pt-based electrode. Electronic supplementary information (ESI) available: materials and equipment details, solar cell fabrication protocol, electrolyte spreading time measurement details, XPS spectra, electronic study, film adhesion test detailed analysis, and field emission results. See DOI: 10.1039/c2nr32082g

  11. CASL L1 Milestone report : CASL.P4.01, sensitivity and uncertainty analysis for CIPS with VIPRE-W and BOA.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sung, Yixing; Adams, Brian M.; Secker, Jeffrey R.

    2011-12-01

    The CASL Level 1 Milestone CASL.P4.01, successfully completed in December 2011, aimed to 'conduct, using methodologies integrated into VERA, a detailed sensitivity analysis and uncertainty quantification of a crud-relevant problem with baseline VERA capabilities (ANC/VIPRE-W/BOA).' The VUQ focus area led this effort, in partnership with AMA and with support from VRI. DAKOTA was coupled to existing VIPRE-W thermal-hydraulics and BOA crud/boron deposit simulations representing a pressurized water reactor (PWR) that previously experienced crud-induced power shift (CIPS). This work supports understanding of CIPS by exploring the sensitivity and uncertainty in BOA outputs with respect to uncertain operating and model parameters. This report summarizes work coupling the software tools, characterizing uncertainties, and analyzing the results of iterative sensitivity and uncertainty studies. These studies focused on the sensitivity and uncertainty of CIPS indicators calculated by the current version of the BOA code used in industry. Challenges with this kind of analysis are identified to inform follow-on research goals and VERA development targeting crud-related challenge problems.

  12. Monolayer Graphene Bolometer as a Sensitive Far-IR Detector

    NASA Technical Reports Server (NTRS)

    Karasik, Boris S.; McKitterick, Christopher B.; Prober, Daniel E.

    2014-01-01

    In this paper we give a detailed analysis of the expected sensitivity and operating conditions in the power detection mode of a hot-electron bolometer (HEB) made from a few μm² of monolayer graphene (MLG) flake which can be embedded into either a planar antenna or waveguide circuit via NbN (or NbTiN) superconducting contacts with critical temperature approx. 14 K. Recent data on the strength of the electron-phonon coupling are used in the present analysis and the contribution of the readout noise to the Noise Equivalent Power (NEP) is explicitly computed. The readout scheme utilizes Johnson Noise Thermometry (JNT) allowing for Frequency-Domain Multiplexing (FDM) using narrowband filter coupling of the HEBs. In general, the filter bandwidth and the summing amplifier noise have a significant effect on the overall system sensitivity.

  13. Hamiltonian Markov Chain Monte Carlo Methods for the CUORE Neutrinoless Double Beta Decay Sensitivity

    NASA Astrophysics Data System (ADS)

    Graham, Eleanor; Cuore Collaboration

    2017-09-01

    The CUORE experiment is a large-scale bolometric detector seeking to observe the never-before-seen process of neutrinoless double beta decay. Predictions for CUORE's sensitivity to neutrinoless double beta decay allow for an understanding of the half-life ranges that the detector can probe and for an evaluation of the relative importance of different detector parameters. Currently, CUORE uses a Bayesian analysis based in BAT, which uses Metropolis-Hastings Markov Chain Monte Carlo, for its sensitivity studies. My work evaluates the viability and potential improvements of switching the Bayesian analysis to Hamiltonian Monte Carlo, realized through the program Stan and its Morpho interface. I demonstrate that the BAT study can be successfully recreated in Stan, and perform a detailed comparison between the results and computation times of the two methods.
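    The Metropolis-Hastings sampler underlying the BAT-based study can be illustrated with a minimal random-walk sketch. This is a toy example with a standard-normal target; the step size and function names are illustrative, not CUORE's actual likelihood:

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_steps, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings over a 1D log-posterior (toy sketch)."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_post(x0)
    chain = np.empty(n_steps)
    for i in range(n_steps):
        prop = x + step * rng.normal()            # symmetric random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept with prob min(1, ratio)
            x, lp = prop, lp_prop
        chain[i] = x                              # rejected moves repeat x
    return chain

# Toy target: standard normal "posterior" for a single parameter.
chain = metropolis_hastings(lambda x: -0.5 * x**2, x0=0.0, n_steps=20000)
print(chain[5000:].mean(), chain[5000:].std())    # ~0 and ~1 after burn-in
```

    Hamiltonian Monte Carlo, as realized in Stan, replaces the blind random walk with gradient-guided trajectories, which is what typically reduces the chain's autocorrelation and motivates the comparison above.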

  14. Mechanical performance and parameter sensitivity analysis of 3D braided composites joints.

    PubMed

    Wu, Yue; Nan, Bo; Chen, Liang

    2014-01-01

    3D braided composite joints are important components in CFRP trusses, with significant influence on the reliability and lightweight design of structures. To investigate the mechanical performance of 3D braided composite joints, a numerical method based on microscopic mechanics is put forward; the modeling technologies, including the selection of material constants, element type, grid size, and boundary conditions, are discussed in detail. Second, a method for determining the ultimate bearing capacity, which accounts for strength failure, is established. Finally, the effect of load parameters, geometric parameters, and process parameters on the ultimate bearing capacity of the joints is analyzed by the global sensitivity analysis method. The results show that the ultimate bearing capacity N is most sensitive to the main-pipe diameter-to-thickness ratio γ, the main-pipe diameter D, and the braiding angle α.

  15. Sensitivity analysis of infectious disease models: methods, advances and their application

    PubMed Central

    Wu, Jianyong; Dhingra, Radhika; Gambhir, Manoj; Remais, Justin V.

    2013-01-01

    Sensitivity analysis (SA) can aid in identifying influential model parameters and optimizing model structure, yet infectious disease modelling has yet to adopt advanced SA techniques that are capable of providing considerable insights over traditional methods. We investigate five global SA methods—scatter plots, the Morris and Sobol’ methods, Latin hypercube sampling-partial rank correlation coefficient and the sensitivity heat map method—and detail their relative merits and pitfalls when applied to a microparasite (cholera) and macroparasite (schistosomiasis) transmission model. The methods investigated yielded similar results with respect to identifying influential parameters, but offered specific insights that vary by method. The classical methods differed in their ability to provide information on the quantitative relationship between parameters and model output, particularly over time. The heat map approach provides information about the group sensitivity of all model state variables, and the parameter sensitivity spectrum obtained using this method reveals the sensitivity of all state variables to each parameter over the course of the simulation period, especially valuable for expressing the dynamic sensitivity of a microparasite epidemic model to its parameters. A summary comparison is presented to aid infectious disease modellers in selecting appropriate methods, with the goal of improving model performance and design. PMID:23864497
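    One of the surveyed techniques, Latin hypercube sampling with partial rank correlation coefficients (LHS-PRCC), can be sketched in a few lines. The three-parameter model below is a hypothetical stand-in for a transmission model, not one of the paper's models:

```python
import numpy as np
from scipy.stats import qmc, rankdata

def prcc(X, y):
    """Partial rank correlation coefficient of each column of X with output y."""
    R = np.column_stack([rankdata(c) for c in X.T])   # rank-transform inputs
    ry = rankdata(y)                                  # rank-transform output
    out = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        # Regress out the other parameters' ranks from both x_j and y,
        # then correlate the residuals.
        A = np.column_stack([np.ones(len(y)), np.delete(R, j, axis=1)])
        res_x = R[:, j] - A @ np.linalg.lstsq(A, R[:, j], rcond=None)[0]
        res_y = ry - A @ np.linalg.lstsq(A, ry, rcond=None)[0]
        out[j] = np.corrcoef(res_x, res_y)[0, 1]
    return out

# Latin hypercube design over three hypothetical parameters in [0.1, 1].
sampler = qmc.LatinHypercube(d=3, seed=1)
X = qmc.scale(sampler.random(500), [0.1, 0.1, 0.1], [1.0, 1.0, 1.0])
# Hypothetical model: output driven strongly by p0, weakly by p1, not by p2.
y = X[:, 0] ** 2 + 0.1 * X[:, 1]
s = prcc(X, y)
print(np.round(s, 2))
```

    The Morris and Sobol' methods differ mainly in sampling design and in estimating elementary effects or variance fractions rather than rank correlations.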

  16. Detailed mechanism of benzene oxidation

    NASA Technical Reports Server (NTRS)

    Bittker, David A.

    1987-01-01

    A detailed quantitative mechanism for the oxidation of benzene in both argon- and nitrogen-diluted systems is presented. Computed ignition delay times for argon-diluted mixtures are in satisfactory agreement with experimental results for a wide range of initial conditions. An experimental temperature versus time profile for a nitrogen-diluted oxidation was accurately matched and several concentration profiles were matched qualitatively. Application of sensitivity analysis has given approximate rate constant expressions for the two dominant heat release reactions, the oxidation of C6H5 and C5H5 radicals by molecular oxygen.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schultz, A.W.; Leavitt, W.Z.

    A preliminary study was conducted to determine the feasibility of using neutron radiography for examining ordnance materials. Depleted U and 4140 steel castings in various thicknesses were examined. In addition, a thin section of Be, a grasshopper and a cricket were radiographed to illustrate the ability of neutrons to detect defects in low-density materials. For uranium thicknesses up to 5 in., a contrast sensitivity between 1.5 and 7.5% was obtained using thermal neutrons, In foil, and industrial x-ray film transfer techniques. For 6 in. of U, a contrast sensitivity between 6.3 and 12.5% was obtained. For U thicknesses up to 6 in., a detail sensitivity of 2.1% was obtained. For thicknesses up to 4 in. of steel, a contrast sensitivity between 1.9 and 9.4% and a detail sensitivity between 0.8 and 3.1% were obtained. For 5.125 and 6 in. of steel, contrast sensitivities between 14.6 and 29.2% and 14.6 and 39.6%, respectively, and detail sensitivities of no less than 4.9% were obtained. Both a contrast and a detail sensitivity of about 5% were obtained in examining a thin section of Be. Recommendations for further studies are given. (auth)

  18. Biochemical analysis of force-sensitive responses using a large-scale cell stretch device.

    PubMed

    Renner, Derrick J; Ewald, Makena L; Kim, Timothy; Yamada, Soichiro

    2017-09-03

    Physical force has emerged as a key regulator of tissue homeostasis, and plays an important role in embryogenesis, tissue regeneration, and disease progression. Currently, the details of protein interactions under elevated physical stress are largely missing, which prevents a fundamental, molecular understanding of mechano-transduction. This is in part due to the difficulty of isolating large quantities of cell lysates exposed to force-bearing conditions for biochemical analysis. We designed a simple, easy-to-fabricate, large-scale cell stretch device for the analysis of force-sensitive cell responses. Using proximal biotinylation (BioID) analysis or phospho-specific antibodies, we detected force-sensitive biochemical changes in cells exposed to prolonged cyclic substrate stretch. For example, using promiscuous biotin ligase BirA* tagged α-catenin, the biotinylation of myosin IIA increased with stretch, suggesting the close proximity of myosin IIA to α-catenin under a force-bearing condition. Furthermore, using phospho-specific antibodies, Akt phosphorylation was reduced upon stretch while Src phosphorylation was unchanged. Interestingly, phosphorylation of GSK3β, a downstream effector of the Akt pathway, was also reduced with stretch, while the phosphorylation of other Akt effectors was unchanged. These data suggest that the Akt-GSK3β pathway is force-sensitive. This simple cell stretch device enables biochemical analysis of force-sensitive responses and has potential to uncover molecules underlying mechano-transduction.

  19. Analysis of single quantum-dot mobility inside 1D nanochannel devices

    NASA Astrophysics Data System (ADS)

    Hoang, H. T.; Segers-Nolten, I. M.; Tas, N. R.; van Honschoten, J. W.; Subramaniam, V.; Elwenspoek, M. C.

    2011-07-01

    We visualized individual quantum dots using a combination of a confining nanochannel and an ultra-sensitive microscope system, equipped with a high numerical aperture lens and a highly sensitive camera. The diffusion coefficients of the confined quantum dots were determined from the experimentally recorded trajectories according to the classical diffusion theory for Brownian motion in two dimensions. The calculated diffusion coefficients were three times smaller than those in bulk solution. These observations confirm and extend the results of Eichmann et al (2008 Langmuir 24 714-21) to smaller particle diameters and more narrow confinement. A detailed analysis shows that the observed reduction in mobility cannot be explained by conventional hydrodynamic theory.
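    The extraction of a diffusion coefficient from recorded trajectories via classical 2D Brownian-motion theory (MSD(τ) = 4Dτ) can be sketched as follows. The trajectory here is simulated with hypothetical parameters, not the quantum-dot data of the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
D_true, dt, n = 0.5, 0.01, 20000   # hypothetical D (um^2/s), frame time (s), steps
# Simulated 2D Brownian trajectory: each displacement has variance 2*D*dt per axis.
steps = rng.normal(scale=np.sqrt(2 * D_true * dt), size=(n, 2))
traj = np.cumsum(steps, axis=0)

# Mean-squared displacement at each lag; MSD = 4*D*tau for free 2D diffusion.
lags = np.arange(1, 50)
msd = np.array([np.mean(np.sum((traj[l:] - traj[:-l]) ** 2, axis=1)) for l in lags])
D_est = np.polyfit(lags * dt, msd, 1)[0] / 4.0     # slope of MSD vs tau, over 4
print(D_est)                                        # close to D_true = 0.5
```

    In the paper, the confinement reduces the measured D about threefold relative to bulk; in this sketch one would see that as a fitted slope smaller than the bulk value.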

  20. Application of sensitivity-analysis techniques to the calculation of topological quantities

    NASA Astrophysics Data System (ADS)

    Gilchrist, Stuart

    2017-08-01

    Magnetic reconnection in the corona occurs preferentially at sites where the magnetic connectivity is either discontinuous or has a large spatial gradient. Hence there is a general interest in computing quantities (like the squashing factor) that characterize the gradient in the field-line mapping function. Here we present an algorithm for calculating certain (quasi-)topological quantities using mathematical techniques from the field of sensitivity analysis. The method is based on the calculation of a three-dimensional field-line mapping Jacobian from which all the topological quantities of interest can be derived. We will present the algorithm and the details of a publicly available set of libraries that implement it.
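    As a sketch of the idea (not the authors' library), the squashing factor of a 2D footpoint mapping can be computed from a finite-difference mapping Jacobian using the standard definition Q = (a² + b² + c² + d²)/|ad − bc|:

```python
def squashing_factor(mapping, x, y, h=1e-5):
    """Squashing factor Q from a finite-difference field-line mapping Jacobian.
    mapping(x, y) -> (X, Y) is the footpoint-to-footpoint field-line map."""
    Xpx, Ypx = mapping(x + h, y); Xmx, Ymx = mapping(x - h, y)
    Xpy, Ypy = mapping(x, y + h); Xmy, Ymy = mapping(x, y - h)
    a = (Xpx - Xmx) / (2 * h); b = (Xpy - Xmy) / (2 * h)   # dX/dx, dX/dy
    c = (Ypx - Ymx) / (2 * h); d = (Ypy - Ymy) / (2 * h)   # dY/dx, dY/dy
    return (a * a + b * b + c * c + d * d) / abs(a * d - b * c)

# Hypothetical shear mapping X = x + 3y, Y = y: Jacobian [[1, 3], [0, 1]],
# determinant 1, so Q = 1 + 9 + 0 + 1 = 11.
Q_val = squashing_factor(lambda x, y: (x + 3 * y, y), 0.0, 0.0)
print(Q_val)
```

    Large Q marks quasi-separatrix layers; the sensitivity-analysis machinery in the paper supplies the Jacobian itself rather than finite differences.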

  1. Photoanode Thickness Optimization and Impedance Spectroscopic Analysis of Dye-Sensitized Solar Cells based on a Carbazole-Containing Ruthenium Dye

    NASA Astrophysics Data System (ADS)

    Choi, Jongwan; Kim, Felix Sunjoo

    2018-03-01

    We studied the influence of photoanode thickness on the photovoltaic characteristics and impedance responses of dye-sensitized solar cells based on a ruthenium dye containing a hexyloxyl-substituted carbazole unit (Ru-HCz). As the photoanode thickness increases from 4.2 μm to 14.8 μm, the dye-loading amount increases and the efficiency rises up to an optimal thickness; beyond it, the device shows a decrease in efficiency due to the higher probability of recombination of electron-hole pairs before charge extraction. We also analyzed the electron-transfer and recombination characteristics as a function of photoanode thickness through detailed electrochemical impedance spectroscopy analysis.

  2. iTOUGH2 V6.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finsterle, Stefan A.

    2010-11-01

    iTOUGH2 (inverse TOUGH2) provides inverse modeling capabilities for TOUGH2, a simulator for multi-dimensional, multi-phase, multi-component, non-isothermal flow and transport in fractured porous media. It performs sensitivity analysis, parameter estimation, and uncertainty propagation analysis in geosciences, reservoir engineering, and other application areas. It supports a number of different combinations of fluids and components [equation-of-state (EOS) modules]. In addition, the optimization routines implemented in iTOUGH2 can also be used for sensitivity analysis, automatic model calibration, and uncertainty quantification of any external code that uses text-based input and output files. This link is achieved by means of the PEST application programming interface. iTOUGH2 solves the inverse problem by minimizing a non-linear objective function of the weighted differences between model output and the corresponding observations. Multiple minimization algorithms (derivative-free, gradient-based, and second-order; local and global) are available. iTOUGH2 also performs Latin hypercube Monte Carlo simulation for uncertainty propagation analysis. A detailed residual and error analysis is provided. This upgrade includes new EOS modules (specifically EOS7c, ECO2N and TMVOC), hysteretic relative permeability and capillary pressure functions, and the PEST API. More details can be found at http://esd.lbl.gov/iTOUGH2 and the publications cited there. Hardware Req.: Multi-platform; Related/auxiliary software: PVM (if running in parallel).

  3. Numerical Analysis of the Trailblazer Inlet Flowfield for Hypersonic Mach Numbers

    NASA Technical Reports Server (NTRS)

    Steffen, C. J., Jr.; DeBonis, J. R.

    1999-01-01

    A study of the Trailblazer vehicle inlet was conducted using the General Aerodynamic Simulation Program (GASP) code for flight Mach numbers ranging from 4 to 12. Both perfect-gas and finite-rate chemistry analyses were performed with the intention of making detailed comparisons between the two sets of results. Inlet performance was assessed using total pressure recovery and kinetic energy efficiency. These assessments were based upon a one-dimensional stream-thrust average of the axisymmetric flowfield. Flow visualization was utilized to examine the detailed shock structures internal to this mixed-compression inlet. Kinetic energy efficiency appeared to be the least sensitive to differences between the perfect-gas and finite-rate chemistry results. Total pressure recovery appeared to be the most sensitive discriminator between the perfect-gas and finite-rate chemistry results for flight Mach numbers above Mach 6. Adiabatic wall temperature was consistently overpredicted by the perfect-gas model for flight Mach numbers above Mach 4. The predicted shock structures were noticeably different for Mach numbers from 6 to 12. At Mach 4, the perfect-gas and finite-rate chemistry models collapse to the same result.

  4. Multidisciplinary optimization of an HSCT wing using a response surface methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giunta, A.A.; Grossman, B.; Mason, W.H.

    1994-12-31

    Aerospace vehicle design is traditionally divided into three phases: conceptual, preliminary, and detailed. Each of these design phases entails a particular level of accuracy and computational expense. While there are several computer programs which perform inexpensive conceptual-level aircraft multidisciplinary design optimization (MDO), aircraft MDO remains prohibitively expensive using preliminary- and detailed-level analysis tools. This occurs due to the expense of computational analyses and because gradient-based optimization requires the analysis of hundreds or thousands of aircraft configurations to estimate design sensitivity information. A further hindrance to aircraft MDO is the problem of numerical noise which occurs frequently in engineering computations. Computer models produce numerical noise as a result of the incomplete convergence of iterative processes, round-off errors, and modeling errors. Such numerical noise is typically manifested as a high frequency, low amplitude variation in the results obtained from the computer models. Optimization attempted using noisy computer models may result in the erroneous calculation of design sensitivities and may slow or prevent convergence to an optimal design.
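    The response-surface idea can be sketched in one dimension: fit a smooth low-order polynomial to noisy analysis outputs, then optimize the surrogate instead of the noisy model. Everything below is a hypothetical stand-in for the expensive aerodynamic codes:

```python
import numpy as np

rng = np.random.default_rng(3)

def noisy_model(x):
    """Hypothetical analysis code: smooth quadratic trend plus high-frequency,
    low-amplitude numerical noise (incomplete convergence, round-off)."""
    return (x - 2.0) ** 2 + 0.05 * np.sin(400 * x) + 0.01 * rng.normal(size=np.shape(x))

# Response surface: least-squares quadratic fit over 25 design points.
x = np.linspace(0.0, 4.0, 25)
coef = np.polyfit(x, noisy_model(x), 2)     # smooth surrogate c2*x^2 + c1*x + c0
x_opt = -coef[1] / (2 * coef[0])            # minimize the surrogate analytically
print(x_opt)                                # near the true optimum at x = 2
```

    Gradient-based optimization applied directly to noisy_model would see spurious local gradients from the noise term; the fitted surface averages that noise out, which is precisely the motivation given above.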

  5. A three-dimensional cohesive sediment transport model with data assimilation: Model development, sensitivity analysis and parameter estimation

    NASA Astrophysics Data System (ADS)

    Wang, Daosheng; Cao, Anzhou; Zhang, Jicai; Fan, Daidu; Liu, Yongzhi; Zhang, Yue

    2018-06-01

    Based on the theory of inverse problems, a three-dimensional sigma-coordinate cohesive sediment transport model with the adjoint data assimilation is developed. In this model, the physical processes of cohesive sediment transport, including deposition, erosion and advection-diffusion, are parameterized by corresponding model parameters. These parameters are usually poorly known and have traditionally been assigned empirically. By assimilating observations into the model, the model parameters can be estimated using the adjoint method; meanwhile, the data misfit between model results and observations can be decreased. The model developed in this work contains numerous parameters; therefore, it is necessary to investigate the parameter sensitivity of the model, which is assessed by calculating a relative sensitivity function and the gradient of the cost function with respect to each parameter. The results of parameter sensitivity analysis indicate that the model is sensitive to the initial conditions, inflow open boundary conditions, suspended sediment settling velocity and resuspension rate, while the model is insensitive to horizontal and vertical diffusivity coefficients. A detailed explanation of the pattern of sensitivity analysis is also given. In ideal twin experiments, constant parameters are estimated by assimilating 'pseudo' observations. The results show that the sensitive parameters are estimated more easily than the insensitive parameters. The conclusions of this work can provide guidance for the practical applications of this model to simulate sediment transport in the study area.
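    A relative sensitivity function of the kind described can be sketched with finite differences (the adjoint method used in the paper delivers the same gradient far more cheaply when there are many parameters). The cost function below is a hypothetical stand-in for the model-observation misfit:

```python
import numpy as np

def relative_sensitivity(cost, p, i, eps=1e-6):
    """Relative sensitivity (p_i / J) * dJ/dp_i via central differences."""
    pp, pm = p.copy(), p.copy()
    pp[i] += eps * p[i]
    pm[i] -= eps * p[i]
    dJ = (cost(pp) - cost(pm)) / (2 * eps * p[i])
    return p[i] * dJ / cost(p)

# Hypothetical misfit: a 'settling velocity'-like parameter p0 matters strongly,
# a 'diffusivity'-like parameter p1 barely at all.
cost = lambda p: (p[0] - 1.0) ** 2 + 1e-4 * p[1] ** 2 + 0.5
p = np.array([2.0, 2.0])
sens = [relative_sensitivity(cost, p, i) for i in (0, 1)]
print(sens)
```

    Parameters with large relative sensitivity are the ones the twin experiments recover easily; near-zero values flag the insensitive parameters noted above.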

  6. Parameterization and Sensitivity Analysis of a Complex Simulation Model for Mosquito Population Dynamics, Dengue Transmission, and Their Control

    PubMed Central

    Ellis, Alicia M.; Garcia, Andres J.; Focks, Dana A.; Morrison, Amy C.; Scott, Thomas W.

    2011-01-01

    Models can be useful tools for understanding the dynamics and control of mosquito-borne disease. More detailed models may be more realistic and better suited for understanding local disease dynamics; however, evaluating model suitability, accuracy, and performance becomes increasingly difficult with greater model complexity. Sensitivity analysis is a technique that permits exploration of complex models by evaluating the sensitivity of the model to changes in parameters. Here, we present results of sensitivity analyses of two interrelated complex simulation models of mosquito population dynamics and dengue transmission. We found that dengue transmission may be influenced most by survival in each life stage of the mosquito, mosquito biting behavior, and duration of the infectious period in humans. The importance of these biological processes for vector-borne disease models and the overwhelming lack of knowledge about them make acquisition of relevant field data on these biological processes a top research priority. PMID:21813844

  7. Terrain-analysis procedures for modeling radar backscatter

    USGS Publications Warehouse

    Schaber, Gerald G.; Pike, Richard J.; Berlin, Graydon Lennis

    1978-01-01

    The collection and analysis of detailed information on the surface of natural terrain are important aspects of radar-backscatter modeling. Radar is especially sensitive to surface-relief changes at the millimeter-to-decimeter scale for conventional K-band (~1-cm wavelength) to L-band (~25-cm wavelength) radar systems. Surface roughness statistics that characterize these changes in detail have been generated from sets of field measurements by a comprehensive set of seven programmed calculations for radar-backscatter modeling. The seven programs are: 1) formatting of the data in a form readable by the subsequent topographic analysis programs; 2) relief analysis; 3) power spectral analysis; 4) power spectrum plots; 5) slope angle between slope reversals; 6) slope angle against slope interval plots; and 7) base-length slope angle and curvature. This complete Fortran IV software package, 'Terrain Analysis', is here presented for the first time. It was originally developed a decade ago for investigations of lunar morphology and surface trafficability for the Apollo Lunar Roving Vehicle.

  8. Supersonic molecular beam-hyperthermal surface ionisation coupled with time-of-flight mass spectrometry applied to trace level detection of polynuclear aromatic hydrocarbons in drinking water for reduced sample preparation and analysis time.

    PubMed

    Davis, S C; Makarov, A A; Hughes, J D

    1999-01-01

    Analysis of sub-ppb levels of polynuclear aromatic hydrocarbons (PAHs) in drinking water by high performance liquid chromatography (HPLC) fluorescence detection typically requires large water samples and lengthy extraction procedures. The detection itself, although selective, does not give compound identity confirmation. Benchtop gas chromatography/mass spectrometry (GC/MS) systems operating in the more sensitive selected ion monitoring (SIM) acquisition mode discard spectral information and, when operating in scanning mode, are less sensitive and scan too slowly. The selectivity of hyperthermal surface ionisation (HSI), the high column flow rate capacity of the supersonic molecular beam (SMB) GC/MS interface, and the high acquisition rate of time-of-flight (TOF) mass analysis, are combined here to facilitate a rapid, specific and sensitive technique for the analysis of trace levels of PAHs in water. This work reports the advantages gained by using the GC/HSI-TOF system over the HPLC fluorescence method, and discusses in some detail the nature of the instrumentation used.

  9. Space shuttle entry and landing navigation analysis

    NASA Technical Reports Server (NTRS)

    Jones, H. L.; Crawford, B. S.

    1974-01-01

    An aided-inertial navigation system for the entry phase of a Space Shuttle mission, which uses a Kalman filter to mix IMU data with data derived from external navigation aids, is evaluated. A drag pseudo-measurement used during radio blackout is treated as an additional external aid. A comprehensive truth model with 101 states is formulated and used to generate detailed error budgets at several significant time points: end of blackout, start of final approach, over the runway threshold, and touchdown. Sensitivity curves illustrating the effect of variations in the size of individual error sources on navigation accuracy are presented. The sensitivity of the navigation system performance to filter modifications is analyzed. The projected overall performance is shown in the form of time histories of position and velocity error components. The detailed results are summarized and interpreted, and suggestions are made concerning possible software improvements.
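    The aided-inertial blending such a Kalman filter performs can be sketched in scalar form; the noise levels and "truth" value below are hypothetical, not Shuttle numbers:

```python
import numpy as np

def kalman_1d(z, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter: blend a drifting inertial estimate with noisy aids.
    q: process-noise variance (inertial drift), r: measurement-noise variance."""
    x, P, out = x0, p0, []
    for zk in z:
        P += q                    # predict: covariance grows with inertial drift
        K = P / (P + r)           # Kalman gain weighs estimate vs. external aid
        x += K * (zk - x)         # update with the navigation-aid measurement
        P *= (1.0 - K)
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(4)
truth = 10.0                                     # hypothetical position component
z = truth + rng.normal(scale=2.0, size=200)      # noisy nav-aid measurements
est = kalman_1d(z, q=1e-4, r=4.0)
print(est[-1])                                   # converges near the truth value
```

    The full Shuttle filter carries many coupled states (position, velocity, attitude, sensor biases); the 101-state truth model above is what exposes error sources a reduced-order flight filter cannot estimate.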

  10. A new image encryption algorithm based on the fractional-order hyperchaotic Lorenz system

    NASA Astrophysics Data System (ADS)

    Wang, Zhen; Huang, Xia; Li, Yu-Xia; Song, Xiao-Na

    2013-01-01

    We propose a new image encryption algorithm on the basis of the fractional-order hyperchaotic Lorenz system. While in the process of generating a key stream, the system parameters and the derivative order are embedded in the proposed algorithm to enhance the security. Such an algorithm is detailed in terms of security analyses, including correlation analysis, information entropy analysis, run statistic analysis, mean-variance gray value analysis, and key sensitivity analysis. The experimental results demonstrate that the proposed image encryption scheme has the advantages of large key space and high security for practical image encryption.
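    A key-sensitivity check of the kind reported can be sketched by encrypting the same image under two keys differing in a single character and measuring the pixel-change rate. The toy XOR stream cipher below is keyed by SHA-256 purely for illustration; it is not the fractional-order hyperchaotic Lorenz stream of the paper:

```python
import hashlib
import numpy as np

def keystream_encrypt(img, key: bytes):
    """Toy XOR stream cipher: keystream from SHA-256 in counter mode."""
    ks = b""
    counter = 0
    while len(ks) < img.size:
        ks += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    ks = np.frombuffer(ks[: img.size], dtype=np.uint8).reshape(img.shape)
    return img ^ ks

rng = np.random.default_rng(5)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
c1 = keystream_encrypt(img, b"key-00")
c2 = keystream_encrypt(img, b"key-01")   # one-character change in the key
rate = np.mean(c1 != c2)
print(rate)   # close to 1 - 1/256 ~ 0.996 for a key-sensitive cipher
```

    A rate far below this would indicate that nearby keys yield related keystreams, i.e. a key-sensitivity failure of the kind the analysis is designed to detect.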

  11. Transfusion Indication Threshold Reduction (TITRe2) randomized controlled trial in cardiac surgery: statistical analysis plan.

    PubMed

    Pike, Katie; Nash, Rachel L; Murphy, Gavin J; Reeves, Barnaby C; Rogers, Chris A

    2015-02-22

    The Transfusion Indication Threshold Reduction (TITRe2) trial is the largest randomized controlled trial to date to compare red blood cell transfusion strategies following cardiac surgery. This update presents the statistical analysis plan, detailing how the study will be analyzed and presented. The statistical analysis plan has been written following recommendations from the International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use, prior to database lock and the final analysis of trial data. Outlined analyses are in line with the Consolidated Standards of Reporting Trials (CONSORT). The study aims to randomize 2000 patients from 17 UK centres. Patients are randomized to either a restrictive (transfuse if haemoglobin concentration <7.5 g/dl) or liberal (transfuse if haemoglobin concentration <9 g/dl) transfusion strategy. The primary outcome is a binary composite outcome of any serious infectious or ischaemic event in the first 3 months following randomization. The statistical analysis plan details how non-adherence with the intervention, withdrawals from the study, and the study population will be derived and dealt with in the analysis. The planned analyses of the trial primary and secondary outcome measures are described in detail, including approaches taken to deal with multiple testing, model assumptions not being met and missing data. Details of planned subgroup and sensitivity analyses and pre-specified ancillary analyses are given, along with potential issues that have been identified with such analyses and possible approaches to overcome such issues. ISRCTN70923932.

  12. Enhanced photovoltaic performance of a quantum dot-sensitized solar cell using a Nb-doped TiO2 electrode.

    PubMed

    Jiang, Lei; You, Ting; Deng, Wei-Qiao

    2013-10-18

    In this work, Nb-doped anatase TiO2 nanocrystals are used as the photoanode of quantum-dot-sensitized solar cells. A solar cell based on 2.5 mol% Nb-doped anatase TiO2 nanocrystals co-sensitized with CdS/CdSe quantum dots can achieve a photovoltaic conversion efficiency of 3.3%, which is almost twice the 1.7% obtained by a cell based on undoped TiO2 nanocrystals. The incident photon-to-current conversion efficiency can reach as high as 91%, which is a record among quantum-dot-sensitized solar cells. Detailed analysis shows that such an enhancement is due to the improved lifetime and diffusion length of electrons in the solar cell.

  13. LSENS: A General Chemical Kinetics and Sensitivity Analysis Code for homogeneous gas-phase reactions. Part 3: Illustrative test problems

    NASA Technical Reports Server (NTRS)

    Bittker, David A.; Radhakrishnan, Krishnan

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 3 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 3 explains the kinetics and kinetics-plus-sensitivity analysis problems supplied with LSENS and presents sample results. These problems illustrate the various capabilities of, and reaction models that can be solved by, the code and may provide a convenient starting point for the user to construct the problem data file required to execute LSENS. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions.
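    The sensitivity coefficients LSENS computes can be illustrated on a one-reaction toy problem by the forward-sensitivity method: augment the ODE with s = dx/dk and integrate both together. LSENS itself is a Fortran code; this Python sketch uses hypothetical values:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy kinetics dx/dt = -k*x. Differentiating the ODE with respect to the rate
# coefficient k gives the sensitivity equation ds/dt = -x - k*s for s = dx/dk.
def rhs(t, y, k):
    x, s = y
    return [-k * x, -x - k * s]

k, x0, t1 = 2.0, 1.0, 1.0                  # hypothetical rate, initial value, time
sol = solve_ivp(rhs, [0.0, t1], [x0, 0.0], args=(k,), rtol=1e-10, atol=1e-12)
x_end, s_end = sol.y[0, -1], sol.y[1, -1]
# Analytic check: x = x0*exp(-k t), so dx/dk = -x0*t*exp(-k t).
print(x_end, s_end, -x0 * t1 * np.exp(-k * t1))
```

    For a real mechanism the augmented system carries one sensitivity copy per (species, parameter) pair, which is why dedicated solvers such as LSENS organize the computation around the shared Jacobian.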

  14. Validation of a next-generation sequencing assay for clinical molecular oncology.

    PubMed

    Cottrell, Catherine E; Al-Kateb, Hussam; Bredemeyer, Andrew J; Duncavage, Eric J; Spencer, David H; Abel, Haley J; Lockwood, Christina M; Hagemann, Ian S; O'Guin, Stephanie M; Burcea, Lauren C; Sawyer, Christopher S; Oschwald, Dayna M; Stratman, Jennifer L; Sher, Dorie A; Johnson, Mark R; Brown, Justin T; Cliften, Paul F; George, Bijoy; McIntosh, Leslie D; Shrivastava, Savita; Nguyen, Tudung T; Payton, Jacqueline E; Watson, Mark A; Crosby, Seth D; Head, Richard D; Mitra, Robi D; Nagarajan, Rakesh; Kulkarni, Shashikant; Seibert, Karen; Virgin, Herbert W; Milbrandt, Jeffrey; Pfeifer, John D

    2014-01-01

    Currently, oncology testing includes molecular studies and cytogenetic analysis to detect genetic aberrations of clinical significance. Next-generation sequencing (NGS) allows rapid analysis of multiple genes for clinically actionable somatic variants. The WUCaMP assay uses targeted capture for NGS analysis of 25 cancer-associated genes to detect mutations at actionable loci. We present clinical validation of the assay and a detailed framework for design and validation of similar clinical assays. Deep sequencing of 78 tumor specimens (≥ 1000× average unique coverage across the capture region) achieved high sensitivity for detecting somatic variants at low allele fraction (AF). Validation revealed sensitivities and specificities of 100% for detection of single-nucleotide variants (SNVs) within coding regions, compared with SNP array sequence data (95% CI = 83.4-100.0 for sensitivity and 94.2-100.0 for specificity) or whole-genome sequencing (95% CI = 89.1-100.0 for sensitivity and 99.9-100.0 for specificity) of HapMap samples. Sensitivity for detecting variants at an observed 10% AF was 100% (95% CI = 93.2-100.0) in HapMap mixes. Analysis of 15 masked specimens harboring clinically reported variants yielded concordant calls for 13/13 variants at AF of ≥ 15%. The WUCaMP assay is a robust and sensitive method to detect somatic variants of clinical significance in molecular oncology laboratories, with reduced time and cost of genetic analysis allowing for strategic patient management. Copyright © 2014 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
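    The exact binomial confidence intervals quoted for sensitivity and specificity can be reproduced with the Clopper-Pearson construction. The 20/20 count below is hypothetical, chosen to show that even 100% observed sensitivity carries a wide interval at modest sample size:

```python
from scipy.stats import beta

def clopper_pearson(x, n, alpha=0.05):
    """Exact (Clopper-Pearson) two-sided CI for a binomial proportion,
    as used for assay sensitivity/specificity estimates."""
    lo = 0.0 if x == 0 else beta.ppf(alpha / 2, x, n - x + 1)
    hi = 1.0 if x == n else beta.ppf(1 - alpha / 2, x + 1, n - x)
    return lo, hi

# Hypothetical validation: 20 of 20 known variants detected.
lo, hi = clopper_pearson(20, 20)
print(round(100 * lo, 1), round(100 * hi, 1))   # lower bound ~ 83%, upper 100%
```

    This is why validation studies report the interval alongside the point estimate: a perfect detection rate over few specimens still leaves substantial statistical uncertainty.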

  15. Method of confidence domains in the analysis of noise-induced extinction for tritrophic population system

    NASA Astrophysics Data System (ADS)

    Bashkirtseva, Irina; Ryashko, Lev; Ryazanova, Tatyana

    2017-09-01

    The problem of analyzing noise-induced extinction in multidimensional population systems is considered. To investigate the conditions for extinction caused by random disturbances, a new approach based on the stochastic sensitivity function technique and confidence domains is suggested and applied to a tritrophic population model of interacting prey, predator and top predator. This approach allows us to analyze constructively the probabilistic mechanisms of the transition to noise-induced extinction from both equilibrium and oscillatory regimes of coexistence. In this analysis, a method of principal directions for reducing the dimension of the confidence domains is suggested. In the dispersion of random states, the principal subspace is defined by the ratio of eigenvalues of the stochastic sensitivity matrix. A detailed analysis of two scenarios of noise-induced extinction in dependence on the parameters of the considered tritrophic system is carried out.

  16. Who Suffers during Recessions? NBER Working Paper No. 17951

    ERIC Educational Resources Information Center

    Hoynes, Hilary W.; Miller, Douglas L.; Schaller, Jessamyn

    2012-01-01

    In this paper we examine how business cycles affect labor market outcomes in the United States. We conduct a detailed analysis of how cycles affect outcomes differentially across persons of differing age, education, race, and gender, and we compare the cyclical sensitivity during the Great Recession to that in the early 1980s recession. We present…

  17. ON THE USE OF SHOT NOISE FOR PHOTON COUNTING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zmuidzinas, Jonas, E-mail: jonas@caltech.edu

    Lieu et al. have recently claimed that it is possible to substantially improve the sensitivity of radio-astronomical observations. In essence, their proposal is to make use of the intensity of the photon shot noise as a measure of the photon arrival rate. Lieu et al. provide a detailed quantum-mechanical calculation of a proposed measurement scheme that uses two detectors and conclude that this scheme avoids the sensitivity degradation that is associated with photon bunching. If correct, this result could have a profound impact on radio astronomy. Here I present a detailed analysis of the sensitivity attainable using shot-noise measurement schemes that use either one or two detectors, and demonstrate that neither scheme can avoid the photon bunching penalty. I perform both semiclassical and fully quantum calculations of the sensitivity, obtaining consistent results, and provide a formal proof of the equivalence of these two approaches. These direct calculations are furthermore shown to be consistent with an indirect argument based on a correlation method that establishes an independent limit to the sensitivity of shot-noise measurement schemes. Furthermore, these calculations are directly applicable to the regime of interest identified by Lieu et al. Collectively, these results conclusively demonstrate that the photon-bunching sensitivity penalty applies to shot-noise measurement schemes just as it does to ordinary photon counting, in contradiction to the fundamental claim made by Lieu et al. The source of this contradiction is traced to a logical fallacy in their argument.

  18. Development and Sensitivity Analysis of a Frost Risk model based primarily on freely distributed Earth Observation data

    NASA Astrophysics Data System (ADS)

    Louka, Panagiota; Petropoulos, George; Papanikolaou, Ioannis

    2015-04-01

    The ability to map the spatiotemporal distribution of extreme climatic conditions, such as frost, is a significant tool in successful agricultural management and decision making. Nowadays, with the development of Earth Observation (EO) technology, it is possible to obtain accurate, timely and cost-effective information on the spatiotemporal distribution of frost conditions, particularly over large and otherwise inaccessible areas. The present study aimed at developing and evaluating a frost risk prediction model, exploiting primarily EO data from the MODIS and ASTER sensors and ancillary ground observation data. For the evaluation of our model, a region in north-western Greece was selected as the test site and a detailed sensitivity analysis was implemented. The agreement between the model predictions and the observed (remotely sensed) frost frequency obtained by the MODIS sensor was evaluated thoroughly. Also, detailed comparisons of the model predictions were performed against reference frost ground observations acquired from the Greek Agricultural Insurance Organization (ELGA) over a period of 10 years (2000-2010). Overall, results evidenced the ability of the model to reproduce the frost conditions reasonably well, following largely explainable patterns with respect to the study site and local weather characteristics. Implementation of our proposed frost risk model is based primarily on satellite imagery provided nowadays globally at no cost. It is also straightforward and computationally inexpensive, requiring much less effort than, for example, field surveying. Finally, the method is adjustable and can potentially be integrated with other high-resolution data available from both commercial and non-commercial vendors. Keywords: Sensitivity analysis, frost risk mapping, GIS, remote sensing, MODIS, Greece

  19. SCALE 6.2 Continuous-Energy TSUNAMI-3D Capabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perfetti, Christopher M; Rearden, Bradley T

    2015-01-01

    The TSUNAMI (Tools for Sensitivity and UNcertainty Analysis Methodology Implementation) capabilities within the SCALE code system make use of sensitivity coefficients for an extensive number of criticality safety applications, such as quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different systems, quantifying computational biases, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved ease of use and fidelity and the desire to extend TSUNAMI analysis to advanced applications have motivated the development of a SCALE 6.2 module for calculating sensitivity coefficients using three-dimensional (3D) continuous-energy (CE) Monte Carlo methods: CE TSUNAMI-3D. This paper provides an overview of the theory, implementation, and capabilities of the CE TSUNAMI-3D sensitivity analysis methods. CE TSUNAMI contains two methods for calculating sensitivity coefficients in eigenvalue sensitivity applications: (1) the Iterated Fission Probability (IFP) method and (2) the Contributon-Linked eigenvalue sensitivity/Uncertainty estimation via Track length importance CHaracterization (CLUTCH) method. This work also presents the GEneralized Adjoint Response in Monte Carlo method (GEAR-MC), a first-of-its-kind approach for calculating adjoint-weighted, generalized response sensitivity coefficients—such as flux responses or reaction rate ratios—in CE Monte Carlo applications. The accuracy and efficiency of the CE TSUNAMI-3D eigenvalue sensitivity methods are assessed from a user perspective in a companion publication, and the accuracy and features of the CE TSUNAMI-3D GEAR-MC methods are detailed in this paper.

  20. A computer program for detailed analysis of the takeoff and approach performance capabilities of transport category aircraft

    NASA Technical Reports Server (NTRS)

    Foss, W. E., Jr.

    1979-01-01

    The takeoff and approach performance of an aircraft is calculated in accordance with the airworthiness standards of the Federal Aviation Regulations. The aircraft and flight constraints are represented in sufficient detail to permit realistic sensitivity studies in terms of either configuration modifications or changes in operational procedures. The program may be used to investigate advanced operational procedures for noise alleviation such as programmed throttle and flap controls. Extensive profile time history data are generated and are placed on an interface file which can be input directly to the NASA aircraft noise prediction program (ANOPP).

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chinthavali, Madhu Sudhan; Wang, Zhiqiang

    This paper presents a detailed parametric sensitivity analysis for a wireless power transfer (WPT) system in an electric vehicle application. Specifically, several key parameters for sensitivity analysis of a series-parallel (SP) WPT system are first derived based on an analytical modeling approach, including the equivalent input impedance, active/reactive power, and DC voltage gain. Based on this derivation, the impact of the primary-side compensation capacitance, coupling coefficient, transformer leakage inductance, and different load conditions on the DC voltage gain curve and power curve is studied and analyzed. It is shown that the desired power can be achieved by changing only frequency or voltage, depending on the design value of the coupling coefficient. However, in some cases both have to be modified in order to achieve the required power transfer.

  2. Practical applications of surface analytic tools in tribology

    NASA Technical Reports Server (NTRS)

    Ferrante, J.

    1980-01-01

    Many of the currently available, widely used tools for surface analysis are described. Those which are most applicable to elemental and/or compound analysis for problems of interest in tribology, and which are truly surface sensitive (that is, probing fewer than 10 atomic layers), are presented. The latter group is evaluated in detail in terms of strengths and weaknesses. Emphasis is placed on post facto analysis of experiments performed under 'real' conditions (e.g., in air with lubricants). It is further indicated that such equipment could be used for screening and quality control.

  3. Calibrating Detailed Chemical Analysis of M dwarfs

    NASA Astrophysics Data System (ADS)

    Veyette, Mark; Muirhead, Philip Steven; Mann, Andrew; Brewer, John; Allard, France; Homeier, Derek

    2018-01-01

    The ability to perform detailed chemical analysis of Sun-like F-, G-, and K-type stars is a powerful tool with many applications, including studying the chemical evolution of the Galaxy, assessing membership in stellar kinematic groups, and constraining planet formation theories. Unfortunately, complications in modeling cooler stellar atmospheres have hindered similar analysis of M-dwarf stars. Large surveys of FGK abundances play an important role in developing methods to measure the compositions of M dwarfs by providing benchmark FGK stars that have widely separated M dwarf companions. These systems allow us to empirically calibrate metallicity-sensitive features in M dwarf spectra. However, current methods to measure metallicity in M dwarfs from moderate-resolution spectra are limited to measuring overall metallicity and largely rely on astrophysical abundance correlations in stellar populations. In this talk, I will discuss how large, homogeneous catalogs of precise FGK abundances are crucial to advancing chemical analysis of M dwarfs beyond overall metallicity to direct measurements of individual elemental abundances. I will present a new method to analyze high-resolution, NIR spectra of M dwarfs that employs an empirical calibration of synthetic M dwarf spectra to infer effective temperature, Fe abundance, and Ti abundance. This work is a step toward detailed chemical analysis of M dwarfs at a precision similar to that achieved for FGK stars.

  4. A Model of Network Porosity

    DTIC Science & Technology

    2016-11-09

    the model does not become a full probabilistic attack graph analysis of the network, whose data requirements are currently unrealistic. The second...flow. – Untrustworthy persons may intentionally try to exfiltrate known sensitive data to external networks. People may also unintentionally leak...section will provide details on the components, procedures, data requirements, and parameters required to instantiate the network porosity model. These

  5. Rotating permanent magnet excitation for blood flow measurement.

    PubMed

    Nair, Sarath S; Vinodkumar, V; Sreedevi, V; Nagesh, D S

    2015-11-01

    A compact, portable and improved blood flow measurement system for an extracorporeal circuit having a rotating permanent-magnet excitation scheme is described in this paper. The system consists of a set of permanent magnets rotating near blood or any conductive fluid to create a high-intensity alternating magnetic field in it, inducing a sinusoidally varying voltage across the column of fluid. The induced voltage signal is acquired, conditioned and processed to determine the flow rate. Performance analysis shows that a sensitivity of more than 250 mV/lpm can be obtained, which is more than five times higher than that of conventional flow measurement systems. The choice of a rotating permanent magnet instead of an electromagnetic core generates an alternating magnetic field of smooth sinusoidal nature, which in turn reduces switching and interference noise. This greatly reduces the complex electronic circuitry required for processing the signal and enables the flow measuring device to be much less costly, portable and lightweight. The signal remains steady even with changes in environmental conditions and has an accuracy of greater than 95%. This paper also describes the construction details of the prototype, the factors affecting sensitivity and a detailed performance analysis at various operating conditions.

  6. Performance Model and Sensitivity Analysis for a Solar Thermoelectric Generator

    NASA Astrophysics Data System (ADS)

    Rehman, Naveed Ur; Siddiqui, Mubashir Ali

    2017-03-01

    In this paper, a regression model for evaluating the performance of solar concentrated thermoelectric generators (SCTEGs) is established and the significance of contributing parameters is discussed in detail. The model is based on several natural, design and operational parameters of the system, including the thermoelectric generator (TEG) module and its intrinsic material properties, the connected electrical load, concentrator attributes, heat transfer coefficients, solar flux, and ambient temperature. The model is developed by fitting a response curve, using the least-squares method, to the results. The sample points for the model were obtained by simulating a thermodynamic model, also developed in this paper, over a range of values of input variables. These samples were generated employing the Latin hypercube sampling (LHS) technique using a realistic distribution of parameters. The coefficient of determination was found to be 99.2%. The proposed model is validated by comparing the predicted results with those in the published literature. In addition, based on the elasticity for parameters in the model, sensitivity analysis was performed and the effects of parameters on the performance of SCTEGs are discussed in detail. This research will contribute to the design and performance evaluation of any SCTEG system for a variety of applications.
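
    The Latin hypercube sampling step described above can be sketched in a few lines; the parameter ranges shown (solar flux, ambient temperature) are illustrative placeholders of ours, not the distributions used by the authors:

```python
import numpy as np

# Minimal Latin hypercube sampling sketch: one stratified sample per
# equal-probability interval in each dimension, with each column shuffled
# to decorrelate the dimensions. The ranges below are hypothetical.
def latin_hypercube(n_samples, bounds, rng):
    d = len(bounds)
    # row i starts in stratum i of [0, 1); shuffling permutes rows per column
    u = (rng.random((n_samples, d)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(d):
        rng.shuffle(u[:, j])
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

rng = np.random.default_rng(42)
# hypothetical ranges: solar flux (W/m^2) and ambient temperature (K)
samples = latin_hypercube(10, [(600.0, 1000.0), (280.0, 310.0)], rng)
# every one of the 10 equal-width strata of each parameter holds one sample
```

    Each column places exactly one sample in each of the ten equal-probability strata, which is what makes LHS more space-filling than plain random sampling at the same budget.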

  7. Radiological performance assessment for the E-Area Vaults Disposal Facility. Appendices A through M

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cook, J.R.

    1994-04-15

    This document contains appendices A-M for the performance assessment. They are A: details of models and assumptions, B: computer codes, C: data tabulation, D: geochemical interactions, E: hydrogeology of the Savannah River Site, F: software QA plans, G: completeness review guide, H: performance assessment peer review panel recommendations, I: suspect soil performance analysis, J: sensitivity/uncertainty analysis, K: vault degradation study, L: description of naval reactor waste disposal, M: porflow input file. (GHH)

  8. Analysis of Composite Panels Subjected to Thermo-Mechanical Loads

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Peters, Jeanne M.

    1999-01-01

    The results of a detailed study of the effect of cutout on the nonlinear response of curved unstiffened panels are presented. The panels are subjected to combined temperature gradient through-the-thickness combined with pressure loading and edge shortening or edge shear. The analysis is based on a first-order, shear deformation, Sanders-Budiansky-type shell theory with the effects of large displacements, moderate rotations, transverse shear deformation, and laminated anisotropic material behavior included. A mixed formulation is used with the fundamental unknowns consisting of the generalized displacements and the stress resultants of the panel. The nonlinear displacements, strain energy, principal strains, transverse shear stresses, transverse shear strain energy density, and their hierarchical sensitivity coefficients are evaluated. The hierarchical sensitivity coefficients measure the sensitivity of the nonlinear response to variations in the panel parameters, as well as in the material properties of the individual layers. Numerical results are presented for cylindrical panels and show the effects of variations in the loading and the size of the cutout on the global and local response quantities as well as their sensitivity to changes in the various panel, layer, and micromechanical parameters.

  9. Mechanism reduction for multicomponent surrogates: A case study using toluene reference fuels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niemeyer, Kyle E.; Sung, Chih-Jen

    Strategies and recommendations for performing skeletal reductions of multicomponent surrogate fuels are presented, through the generation and validation of skeletal mechanisms for a three-component toluene reference fuel. Using the directed relation graph with error propagation and sensitivity analysis method followed by a further unimportant reaction elimination stage, skeletal mechanisms valid over comprehensive and high-temperature ranges of conditions were developed at varying levels of detail. These skeletal mechanisms were generated based on autoignition simulations, and validation using ignition delay predictions showed good agreement with the detailed mechanism in the target range of conditions. When validated using phenomena other than autoignition, such as perfectly stirred reactor and laminar flame propagation, tight error control or more restrictions on the reduction during the sensitivity analysis stage were needed to ensure good agreement. In addition, tight error limits were needed for close prediction of ignition delay when varying the mixture composition away from that used for the reduction. In homogeneous compression-ignition engine simulations, the skeletal mechanisms closely matched the point of ignition and accurately predicted species profiles for lean to stoichiometric conditions. Furthermore, the efficacy of generating a multicomponent skeletal mechanism was compared to combining skeletal mechanisms produced separately for neat fuel components; using the same error limits, the latter resulted in a larger skeletal mechanism size that also lacked important cross reactions between fuel components. Based on the present results, general guidelines for reducing detailed mechanisms for multicomponent fuels are discussed.

  10. Mechanism reduction for multicomponent surrogates: A case study using toluene reference fuels

    DOE PAGES

    Niemeyer, Kyle E.; Sung, Chih-Jen

    2014-11-01

    Strategies and recommendations for performing skeletal reductions of multicomponent surrogate fuels are presented, through the generation and validation of skeletal mechanisms for a three-component toluene reference fuel. Using the directed relation graph with error propagation and sensitivity analysis method followed by a further unimportant reaction elimination stage, skeletal mechanisms valid over comprehensive and high-temperature ranges of conditions were developed at varying levels of detail. These skeletal mechanisms were generated based on autoignition simulations, and validation using ignition delay predictions showed good agreement with the detailed mechanism in the target range of conditions. When validated using phenomena other than autoignition, such as perfectly stirred reactor and laminar flame propagation, tight error control or more restrictions on the reduction during the sensitivity analysis stage were needed to ensure good agreement. In addition, tight error limits were needed for close prediction of ignition delay when varying the mixture composition away from that used for the reduction. In homogeneous compression-ignition engine simulations, the skeletal mechanisms closely matched the point of ignition and accurately predicted species profiles for lean to stoichiometric conditions. Furthermore, the efficacy of generating a multicomponent skeletal mechanism was compared to combining skeletal mechanisms produced separately for neat fuel components; using the same error limits, the latter resulted in a larger skeletal mechanism size that also lacked important cross reactions between fuel components. Based on the present results, general guidelines for reducing detailed mechanisms for multicomponent fuels are discussed.

  11. BEATBOX v1.0: Background Error Analysis Testbed with Box Models

    NASA Astrophysics Data System (ADS)

    Knote, Christoph; Barré, Jérôme; Eckl, Max

    2018-02-01

    The Background Error Analysis Testbed (BEATBOX) is a new data assimilation framework for box models. Based on the BOX Model eXtension (BOXMOX) to the Kinetic Pre-Processor (KPP), this framework allows users to conduct performance evaluations of data assimilation experiments, sensitivity analyses, and detailed chemical scheme diagnostics from an observation simulation system experiment (OSSE) point of view. The BEATBOX framework incorporates an observation simulator and a data assimilation system with the possibility of choosing ensemble, adjoint, or combined sensitivities. A user-friendly, Python-based interface allows for the tuning of many parameters for atmospheric chemistry and data assimilation research as well as for educational purposes, for example observation error, model covariances, ensemble size, perturbation distribution in the initial conditions, and so on. In this work, the testbed is described and two case studies are presented to illustrate the design of a typical OSSE experiment, data assimilation experiments, a sensitivity analysis, and a method for diagnosing model errors. BEATBOX is released as an open source tool for the atmospheric chemistry and data assimilation communities.

  12. Sensitivity Analysis of earth and environmental models: a systematic review to guide scientific advancement

    NASA Astrophysics Data System (ADS)

    Wagener, Thorsten; Pianosi, Francesca

    2016-04-01

    Sensitivity Analysis (SA) investigates how the variation in the output of a numerical model can be attributed to variations of its input factors. SA is increasingly being used in earth and environmental modelling for a variety of purposes, including uncertainty assessment, model calibration and diagnostic evaluation, dominant control analysis and robust decision-making. Here we provide some practical advice regarding best practice in SA and discuss important open questions based on a detailed recent review of the existing body of work in SA. Open questions relate to the consideration of input factor interactions, methods for factor mapping and the formal inclusion of discrete factors in SA (for example for model structure comparison). We will analyse these questions using relevant examples and discuss possible ways forward. We aim at stimulating the discussion within the community of SA developers and users regarding the setting of good practices and on defining priorities for future research.
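
    The opening question (attributing output variation to input factors) can be made concrete with a minimal variance-based sketch; the two-input model and the binning estimator below are illustrative choices of ours, not methods prescribed by the review:

```python
import numpy as np

# Toy model whose output depends strongly on x1 and weakly on x2
# (an assumption made purely for illustration).
def model(x1, x2):
    return np.sin(x1) + 0.1 * x2

rng = np.random.default_rng(0)
n = 100_000
x1 = rng.uniform(-np.pi, np.pi, n)
x2 = rng.uniform(-np.pi, np.pi, n)
y = model(x1, x2)

def first_order_index(x, y, bins=50):
    """Estimate S_i = Var(E[Y | X_i]) / Var(Y) by binning X_i."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return cond_means.var() / y.var()

s1 = first_order_index(x1, y)   # large: x1 is the dominant control
s2 = first_order_index(x2, y)   # small: x2 is a minor control
```

    The ratio Var(E[Y | X_i]) / Var(Y) is the first-order sensitivity index; factor interactions, which this estimator ignores, are among the open questions the review discusses.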

  13. Simulation analysis of an integrated model for dynamic cellular manufacturing system

    NASA Astrophysics Data System (ADS)

    Hao, Chunfeng; Luan, Shichao; Kong, Jili

    2017-05-01

    Application of a dynamic cellular manufacturing system (DCMS) is a well-known strategy to improve manufacturing efficiency in production environments with high variety and low volume of production. Often, neither the trade-off between inter- and intra-cell material movements nor the trade-off between hiring and firing of operators is examined in detail. This paper presents simulation results of an integrated mixed-integer model, including sensitivity analysis, for several numerical examples. The comprehensive model includes cell formation, inter- and intracellular materials handling, inventory and backorder holding, operator assignment (including resource adjustment) and flexible production routing. The model considers multi-period production planning with flexible resources (machines and operators), where each period has different demands. The results verify the validity and sensitivity of the proposed model using a genetic algorithm.

  14. Allergic contact dermatitis pattern in Kuwait: nickel leads the pack. In-depth analysis of nickel allergy based on the results from a large prospective patch test series report

    PubMed Central

    Almutawa, Fahad

    2017-01-01

    Introduction Contact dermatitis is a relatively common dermatosis reported among several population groups from around the globe. However, data from Kuwait are unavailable. Patch tests are essential for the diagnosis of contact sensitization. Aim To determine the relative frequency and pattern of sensitization to different allergens in patients with suspected contact dermatitis in Kuwait, and also to study the role of the commonest sensitizer in detail. Material and methods Patch tests were performed in 2461 consecutive patients with a clinical diagnosis of contact dermatitis seen at our hospital between September 1, 2014 and August 31, 2015. Out of the total of 1381 (56.1%) patients with positive patch test results to at least one allergen, 546 (22.2%) patients with a single positive reaction to nickel only (the single largest sensitizer) were selected as the study population for further detailed analysis. Results At least one positive patch test reaction was found in 1381 (56.12%) patients. Nickel was the most common sensitizer, seen in 546 (40%) patients. The mean age was 37.3 ±13.8 years and the mean duration of disease was 27.3 ±13.8 months. Most sensitized patients (387/546) were female. The forearms/hands and wrists were the most prevalent sites (52.56% of the participants). In 58.91% of women, dermatitis was more often confined to other sites, mostly the ears and the neck, due to earrings and necklaces. Just over half (51.09%) of nickel-allergic patients were in the age group of 15–25 years. Hairdressers/beauticians were the most affected group, followed by house workers (housewives, cleaners, housekeepers). Conclusions Nickel is the single most common sensitizer found in our patients, and female sex, young age, and occupations involving long hours of contact with nickel are high risk factors. We recommend that a directive limiting the release of nickel from products with extended skin contact be approved in Kuwait. PMID:28670248

  15. Information transfer satellite concept study. Volume 1: Summary

    NASA Technical Reports Server (NTRS)

    Bergin, P.; Kincade, C.; Kurpiewski, D.; Leinhaupel, F.; Millican, F.; Onstad, R.

    1971-01-01

    A wide range of information transfer demands were identified and analyzed. They were then combined into an appropriate set of requirements for satellite communication services. In this process the demands were ranked and combined into single and multipurpose satellite systems. A detailed analysis was performed on each satellite system to determine: total system cost, including both ground and space segments; sensitivities of the systems to various system tradeoffs; and forcing functions which control the system variations. A listing of candidate missions for detailed study is presented, along with a description of the conceptual system design and an identification of the technology developments required to bring these systems to fruition.

  16. Analysis of world terror networks from the reduced Google matrix of Wikipedia

    NASA Astrophysics Data System (ADS)

    El Zant, Samer; Frahm, Klaus M.; Jaffrès-Runser, Katia; Shepelyansky, Dima L.

    2018-01-01

    We apply the reduced Google matrix method to analyze interactions between 95 terrorist groups and determine their relationships and influence on 64 world countries. This is done on the basis of the Google matrix of the English Wikipedia (2017), composed of 5 416 537 articles which accumulate a large part of global human knowledge. The reduced Google matrix takes into account the direct and hidden links between a selection of 159 nodes (articles) appearing due to all paths of a random surfer moving over the whole network. As a result, we obtain the network structure of terrorist groups and their relations with selected countries, including hidden indirect links. Using the sensitivity of PageRank to a weight variation of specific links, we determine the geopolitical sensitivity and influence of specific terrorist groups on world countries. World maps of the sensitivity of various countries to the influence of specific terrorist groups are obtained. We argue that this approach can find useful application for more extensive and detailed database analysis.
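
    The PageRank link-weight sensitivity used here can be illustrated with a plain power-iteration PageRank; the 4-node graph and the perturbation size below are a toy stand-in of ours, not the reduced Google matrix of Wikipedia:

```python
import numpy as np

def pagerank(A, damping=0.85, tol=1e-12):
    """Power iteration on the column-stochastic Google matrix.

    A[i, j] is the weight of the link from node j to node i.
    """
    n = A.shape[0]
    S = A / A.sum(axis=0, keepdims=True)   # column-normalize adjacency
    G = damping * S + (1 - damping) / n    # Google matrix with teleportation
    p = np.full(n, 1.0 / n)
    while True:
        p_new = G @ p
        if np.abs(p_new - p).sum() < tol:
            return p_new
        p = p_new

A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
base = pagerank(A)
A_pert = A.copy()
A_pert[3, 0] = 0.2                         # slightly strengthen link 0 -> 3
sens = (pagerank(A_pert) - base) / 0.2     # finite-difference sensitivity
# node 3 gains PageRank from the added inbound link weight
```

    Repeating this finite-difference perturbation over the links of interest, node by node, is the basic mechanism behind the sensitivity maps described in the abstract.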

  17. 3D MR flow analysis in realistic rapid-prototyping model systems of the thoracic aorta: comparison with in vivo data and computational fluid dynamics in identical vessel geometries.

    PubMed

    Canstein, C; Cachot, P; Faust, A; Stalder, A F; Bock, J; Frydrychowicz, A; Küffer, J; Hennig, J; Markl, M

    2008-03-01

    The knowledge of local vascular anatomy and function in the human body is of high interest for the diagnosis and treatment of cardiovascular disease. A comprehensive analysis of the hemodynamics in the thoracic aorta is presented based on the integration of flow-sensitive 4D MRI with state-of-the-art rapid prototyping technology and computational fluid dynamics (CFD). Rapid prototyping was used to transform aortic geometries as measured by contrast-enhanced MR angiography into realistic vascular models with large anatomical coverage. Integration into a flow circuit with patient-specific pulsatile in-flow conditions and application of flow-sensitive 4D MRI permitted detailed analysis of local and global 3D flow dynamics in a realistic vascular geometry. Visualization of characteristic 3D flow patterns and quantitative comparisons of the in vitro experiments with in vivo data and CFD simulations in identical vascular geometries were performed to evaluate the accuracy of vascular model systems. The results indicate the potential of such patient-specific model systems for detailed experimental simulation of realistic vascular hemodynamics. Further studies are warranted to examine the influence of refined boundary conditions of the human circulatory system such as fluid-wall interaction and their effect on normal and pathological blood flow characteristics associated with vascular geometry. (c) 2008 Wiley-Liss, Inc.

  18. On the topological structure of multinationals network

    NASA Astrophysics Data System (ADS)

    Joyez, Charlie

    2017-05-01

    This paper uses a weighted network analysis to examine the structure of the network of countries in which multinationals are implanted. Based on a French firm-level dataset of multinational enterprises (MNEs), the network analysis provides information on each country's position in the network and in the internationalization strategies of French MNEs, through connectivity preferences among the nodes. The paper also details network-wide features and their recent evolution toward a more decentralized structure. While much has been said on the international trade network, this paper shows that studies of multinational firms would also benefit from network analysis, notably by investigating the sensitivity of the network construction to firm heterogeneity.

  19. An approach to measure parameter sensitivity in watershed ...

    EPA Pesticide Factsheets

    Hydrologic responses vary spatially and temporally according to watershed characteristics. In this study, the hydrologic models that we developed earlier for the Little Miami River (LMR) and Las Vegas Wash (LVW) watersheds were used for detailed sensitivity analyses. To compare the relative sensitivities of the hydrologic parameters of these two models, we used the Normalized Root Mean Square Error (NRMSE). By combining the NRMSE index with flow duration curve analysis, we derived an approach to measure parameter sensitivities under different flow regimes. Results show that the parameters related to groundwater are highly sensitive in the LMR watershed, whereas the LVW watershed is primarily sensitive to near-surface and impervious-area parameters. The high and medium flow regimes are more strongly impacted by most of the parameters. The low flow regime was highly sensitive to groundwater-related parameters. Moreover, our approach was found to be useful in facilitating model development and calibration. This journal article describes hydrological modeling of the effects of climate change and land use change on stream hydrology, and elucidates the importance of hydrological model construction in generating valid modeling results.
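
    The NRMSE comparison described above can be sketched with a toy rainfall-runoff model; the linear reservoir and its parameter below are illustrative stand-ins of ours for the LMR/LVW watershed models, which are not reproduced here:

```python
import numpy as np

# Toy linear-reservoir "watershed": storage drains at rate k per time step.
# This stands in for the actual hydrologic models, which are not public here.
def simulate(k, rain):
    s, q = 0.0, []
    for r in rain:
        s += r            # rainfall enters storage
        out = k * s       # outflow proportional to storage
        s -= out
        q.append(out)
    return np.array(q)

def nrmse(obs, sim):
    """Root mean square error normalized by the observed range."""
    return np.sqrt(np.mean((obs - sim) ** 2)) / (obs.max() - obs.min())

rng = np.random.default_rng(1)
rain = rng.exponential(2.0, 200)
base = simulate(0.3, rain)                # baseline parameter value
score = {dk: nrmse(base, simulate(0.3 + dk, rain)) for dk in (0.05, 0.10)}
# the larger perturbation yields the larger NRMSE: k is a sensitive parameter
```

    Repeating this perturbation separately over subsets of the record defined by the flow duration curve (high, medium and low flows) gives regime-specific sensitivities of the kind the abstract describes.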

  20. Analysis of DNA Cytosine Methylation Patterns Using Methylation-Sensitive Amplification Polymorphism (MSAP).

    PubMed

    Guevara, María Ángeles; de María, Nuria; Sáez-Laguna, Enrique; Vélez, María Dolores; Cervera, María Teresa; Cabezas, José Antonio

    2017-01-01

    Different molecular techniques have been developed to study either the global level of methylated cytosines or methylation at specific gene sequences. One of them is the methylation-sensitive amplified polymorphism technique (MSAP), which is a modification of amplified fragment length polymorphism (AFLP). It has been used to study methylation of anonymous CCGG sequences in different fungi, plants, and animal species. The main variation of this technique resides in the use of isoschizomers with different methylation sensitivity (such as HpaII and MspI) as a frequent-cutter restriction enzyme. For each sample, MSAP analysis is performed using both EcoRI/HpaII- and EcoRI/MspI-digested samples. A comparative analysis between EcoRI/HpaII and EcoRI/MspI fragment patterns allows the identification of two types of polymorphisms: (1) methylation-insensitive polymorphisms that show common EcoRI/HpaII and EcoRI/MspI patterns but are detected as polymorphic amplified fragments among samples and (2) methylation-sensitive polymorphisms which are associated with the amplified fragments that differ in their presence or absence or in their intensity between EcoRI/HpaII and EcoRI/MspI patterns. This chapter describes a detailed protocol of this technique and discusses the modifications that can be applied to adjust the technology to different species of interest.
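
    The comparative EcoRI/HpaII versus EcoRI/MspI scoring logic described above can be sketched as a small classifier. The function name and the 1/0 band encoding are illustrative, not part of the published protocol:

```python
def classify_msap(hpa, msp):
    """Classify one amplified fragment from its presence/absence pattern
    (1 = band present) across samples in the EcoRI/HpaII digest (hpa)
    and the EcoRI/MspI digest (msp)."""
    if hpa != msp:
        # Patterns differ between the isoschizomer digests.
        return "methylation-sensitive"
    if len(set(hpa)) > 1:
        # Identical patterns in both digests, but polymorphic among samples.
        return "methylation-insensitive"
    return "monomorphic"  # identical and invariant across samples

# One fragment scored in three samples for each digest:
print(classify_msap((1, 0, 1), (1, 1, 1)))  # differs between digests
print(classify_msap((1, 0, 1), (1, 0, 1)))  # same pattern, varies among samples
```

    Intensity differences, which the protocol also scores, would need a quantitative band measure rather than this binary encoding.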

  1. LSENS: A General Chemical Kinetics and Sensitivity Analysis Code for homogeneous gas-phase reactions. Part 1: Theory and numerical solution procedures

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 1 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 1 derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved. The accuracy and efficiency of LSENS are examined by means of various test problems, and comparisons with other methods and codes are presented. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static systems; steady, one-dimensional, inviscid flow; reaction behind an incident shock wave, including boundary layer correction; and the perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions.
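
    The sensitivity coefficients mentioned above are derivatives of the dependent variables with respect to rate parameters. As a toy stand-in for the coupled sensitivity equations LSENS integrates, one can check a finite-difference estimate against the analytic derivative for first-order decay (all values illustrative, not from the report):

```python
import math

def y(t, y0, k):
    """Concentration under first-order decay: y(t) = y0 * exp(-k*t)."""
    return y0 * math.exp(-k * t)

y0, k, t, h = 1.0, 2.0, 0.5, 1e-6
# Central finite-difference estimate of the sensitivity dy/dk ...
fd = (y(t, y0, k + h) - y(t, y0, k - h)) / (2 * h)
# ... versus the analytic sensitivity coefficient dy/dk = -t * y0 * exp(-k*t).
exact = -t * y0 * math.exp(-k * t)
print(abs(fd - exact) < 1e-8)
```

    For realistic mechanisms with many stiff, coupled species, codes like LSENS integrate the sensitivity equations alongside the kinetics rather than differencing repeated runs.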

  2. Sedentary Behaviour Profiling of Office Workers: A Sensitivity Analysis of Sedentary Cut-Points

    PubMed Central

    Boerema, Simone T.; Essink, Gerard B.; Tönis, Thijs M.; van Velsen, Lex; Hermens, Hermie J.

    2015-01-01

    Measuring sedentary behaviour and physical activity with wearable sensors provides detailed information on activity patterns and can inform health interventions. Activity analysis rests on the ability to distinguish sedentary from active time. As there is no consensus regarding the optimal cut-point for classifying sedentary behaviour, we studied the consequences of using different cut-points for this type of analysis. We conducted a battery of sitting and walking activities with 14 office workers, wearing the Promove 3D activity sensor, to determine the optimal cut-point (in counts per minute (m·s−2)) for classifying sedentary behaviour. Then, 27 office workers wore the sensor for five days. We evaluated the sensitivity of five sedentary pattern measures for various sedentary cut-points and found an optimal cut-point for sedentary behaviour of 1660 × 10−3 m·s−2. Total sedentary time was not sensitive to cut-point changes within ±10% of this optimal cut-point; other sedentary pattern measures were not sensitive to changes within the ±20% interval. The results from studies analyzing sedentary patterns, using different cut-points, can be compared within these boundaries. Furthermore, commercial, hip-worn activity trackers can implement feedback and interventions on sedentary behaviour patterns, using these cut-points. PMID:26712758
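
    The cut-point sensitivity check can be sketched as follows. Only the 1660 × 10−3 m·s−2 cut-point comes from the abstract; the per-minute activity values are synthetic:

```python
CUT = 1660e-3  # optimal sedentary cut-point from the abstract, in m/s^2

def sedentary_minutes(activity, cut):
    """Minutes classified as sedentary: one activity value per minute,
    counted as sedentary when below the cut-point."""
    return sum(a < cut for a in activity)

# Synthetic per-minute activity levels for illustration.
activity = [0.5, 1.2, 2.5, 0.9, 3.1, 1.7, 0.4, 2.0]
# Total sedentary time at the optimal cut-point and at -10% / +10% of it.
print([sedentary_minutes(activity, CUT * f) for f in (0.9, 1.0, 1.1)])
```

    Here total sedentary time is unchanged at −10% but shifts at +10%: the kind of boundary behaviour the paper quantifies across its five sedentary pattern measures.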

  3. Lipid decorated liquid crystal pressure sensors

    NASA Astrophysics Data System (ADS)

    Lopatkina, Tetiana; Popov, Piotr; Honaker, Lawrence; Jakli, Antal; Mann, Elizabeth; Mann's Group Collaboration; Jakli's Group Collaboration

    Surfactants usually promote the alignment of the liquid crystal (LC) director parallel to the surfactant chains, and thus on average normal to the substrate (homeotropic), whereas water promotes tangential (planar) alignment. A water-LC interface is therefore very sensitive to the presence of surfactants, such as lipids: this is the principle of LC-based chemical and biological sensing introduced by Abbott et al. Using a modified configuration, we found that at lipid concentrations above 10 micromolar, the uniformly dark texture seen for homeotropic alignment between left- and right-handed circular polarizers becomes unstable and slowly brightens again. This texture shows extreme sensitivity to external air pressure variations, suggesting its use in sensitive pressure sensors. Our analysis indicates an osmotic-pressure-induced bending of the suspended films, explaining both the birefringence and the pressure sensitivity. In the talk we will discuss the experimental details of these effects. This work was financially supported by NSF DMR No. DMR-0907055.

  4. Polyvalent type IV sensitizations to multiple fragrances and a skin protection cream in a metal worker.

    PubMed

    Tanko, Zita; Shab, Arna; Diepgen, Thomas Ludwig; Weisshaar, Elke

    2009-06-01

    Fragrances are very common in everyday products. A metalworker with chronic hand eczema and previously diagnosed type IV sensitizations to epoxy resin, balsam of Peru, fragrance mix and fragrance mix II was diagnosed with additional type IV sensitizations to geraniol, hydroxycitronellal, lilial, tree moss, oak moss absolute, citral, citronellol, farnesol, Lyral, fragrance mix II and fragrance mix (with sorbitan sesquioleate). In addition, a type IV sensitization to the skin protection cream containing geraniol and citronellol used at the workplace was detected, and deemed occupationally relevant in this case. The patient could have had contact with fragrances through private use of cosmetics and detergents. On the other hand, the fragrance-containing skin protection cream supports occupational exposure. This case report demonstrates that fragrance contact allergy has to be searched for and clarified individually, which requires a thorough history and a detailed analysis of the work place.

  5. Systems Analysis Of Advanced Coal-Based Power Plants

    NASA Technical Reports Server (NTRS)

    Ferrall, Joseph F.; Jennings, Charles N.; Pappano, Alfred W.

    1988-01-01

    Report presents appraisal of integrated coal-gasification/fuel-cell power plants. Based on a study comparing fuel-cell technologies with each other and with coal-based alternatives; recommends most promising ones for research and development. Evaluates capital cost, cost of electricity, fuel consumption, and conformance with environmental standards. Analyzes sensitivity of cost of electricity to changes in fuel cost, to economic assumptions, and to level of technology. Recommends further evaluation of integrated coal-gasification/fuel-cell, integrated coal-gasification/combined-cycle, and pulverized-coal-fired plants. Concludes with appendixes detailing plant-performance models, subsystem-performance parameters, performance goals, cost bases, plant-cost data sheets, and plant sensitivity to fuel-cell performance.

  6. Advances in biological dosimetry

    NASA Astrophysics Data System (ADS)

    Ivashkevich, A.; Ohnesorg, T.; Sparbier, C. E.; Elsaleh, H.

    2017-01-01

    Rapid retrospective biodosimetry methods are essential for the fast triage of persons occupationally or accidentally exposed to ionizing radiation. Identification and detection of a radiation specific molecular ‘footprint’ should provide a sensitive and reliable measurement of radiation exposure. Here we discuss conventional (cytogenetic) methods of detection and assessment of radiation exposure in comparison to emerging approaches such as gene expression signatures and DNA damage markers. Furthermore, we provide an overview of technical and logistic details such as type of sample required, time for sample preparation and analysis, ease of use and potential for a high throughput analysis.

  7. [Study on the automatic parameters identification of water pipe network model].

    PubMed

    Jia, Hai-Feng; Zhao, Qi-Feng

    2010-01-01

    Based on an analysis of the problems in developing and applying water pipe network models, automatic identification of model parameters is regarded as the kernel bottleneck for a model's application in a water supply enterprise. A methodology for automatic identification of water pipe network model parameters based on GIS and SCADA databases is proposed. The kernel algorithm is then studied: RSA (Regionalized Sensitivity Analysis) is used for automatic recognition of sensitive parameters, and MCS (Monte Carlo Sampling) is used for automatic identification of parameters; the detailed technical route based on RSA and MCS is presented. A module for automatic identification of water pipe network model parameters was developed. Finally, a typical water pipe network was selected as a case study, automatic parameter identification was conducted, and satisfactory results were achieved.
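
    The RSA step splits Monte Carlo runs into behavioral and non-behavioral sets and compares the parameter distributions of the two; a large gap between their empirical CDFs flags a sensitive parameter. A minimal sketch with a made-up one-parameter "network response" (the model, behavioral threshold, and sample size are all illustrative assumptions):

```python
import random

random.seed(2)

def model(k):
    # Toy stand-in for a pipe-network simulation with one parameter.
    return 3.0 * k + random.gauss(0, 0.2)

obs = 1.5  # "observed" response used to define behavioral runs
# Monte Carlo sampling (MCS): draw parameters, run the model once per draw.
runs = [(k, model(k)) for k in (random.uniform(0, 1) for _ in range(1000))]
behav = sorted(k for k, out in runs if abs(out - obs) < 0.3)
non = sorted(k for k, out in runs if abs(out - obs) >= 0.3)

def ecdf_gap(a, b):
    """Max vertical distance between two empirical CDFs (a KS-type
    statistic); RSA reads a large gap as parameter sensitivity."""
    def cdf(s, x):
        return sum(v <= x for v in s) / len(s)
    return max(abs(cdf(a, x) - cdf(b, x)) for x in sorted(a + b))

gap = ecdf_gap(behav, non)
print(gap > 0.15)
```

    An insensitive parameter would leave the behavioral and non-behavioral distributions nearly identical, giving a gap near zero.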

  8. International Space Station 2A Array Modal Analysis

    NASA Technical Reports Server (NTRS)

    Laible, Michael; Fitzpatrick, Kristin; Grygier, Michael

    2012-01-01

    On December 9, 2009, the International Space Station (ISS) 2A solar array mast experienced prolonged longeron shadowing during a Soyuz undocking. Analytical reconstruction of induced thermal and dynamic structural loads showed an exceedance of the mast buckling limit. Possible structural damage to the solar array mast could have occurred during this event. A low-fidelity video survey of the 2A mast showed no obvious damage to the mast longerons or battens. The decision was made to conduct an on-orbit dynamic test of the 2A array on December 18, 2009. The test included thruster pluming on the array while photogrammetry data were recorded. The test was similar to other Dedicated Thruster Firings (DTFs) that were performed to measure structural frequency and damping of a solar array. Results of the DTF indicated lower-frequency mast modes than model predictions, thus leading to speculation of mast damage. A detailed nonlinear analysis was performed on the 2A array model to assess possible solutions to the modal differences. The setup of the parametric nonlinear trade study included the use of a detailed array model and the reduced mass and stiffness matrices of the entire ISS applied to the array interface. The study revealed that the array attachment structure is nonlinear and thus was the source of error in the model prediction of mast modes. In addition, a detailed study was performed to determine mast mode sensitivity to mast longeron damage. This sensitivity study was performed to assess whether the ISS program has sufficient instrumentation for mast damage detection.

  9. The Measurement of the Flux and Spectrum of the Crab by HAWC

    NASA Astrophysics Data System (ADS)

    Smith, Andrew; HAWC Collaboration Collaboration

    2017-01-01

    The HAWC observatory was completed and began full operation in early 2015. Located at an elevation of 4100 m, HAWC has an energy threshold for gamma-ray detection well below 1 TeV and a sensitivity to TeV-scale gamma-ray sources an order of magnitude better than previous air-shower arrays. The detector operates 24 hours/day and observes the overhead sky (2 sr), making it an ideal survey instrument. We describe the details of the high-significance detection (>100 sigma) of the Crab PWN and explain in detail the measurement of the VHE spectrum of this important gamma-ray source. At the high end of the VHE range, above 10 TeV, HAWC's sensitivity is better than that of IACTs, due mainly to its large effective area and unprecedented exposure. Measuring the high-energy behavior of this source is critical to understanding the acceleration dynamics and the environment in the vicinity of the pulsar. Furthermore, as the Crab is bright, point-like, and steady as detected by VHE gamma-ray instruments, it serves as the best source for verification of detector performance and measurement of systematic errors. This presentation will also describe in detail the analysis methodology utilized by a number of presentations from the HAWC collaboration.

  10. Detailed low-energy electron diffraction analysis of the (4×4) surface structure of C60 on Cu(111): Seven-atom-vacancy reconstruction

    NASA Astrophysics Data System (ADS)

    Xu, Geng; Shi, Xing-Qiang; Zhang, R. Q.; Pai, Woei Wu; Jeng, H. T.; Van Hove, M. A.

    2012-08-01

    A detailed and exhaustive structural analysis by low-energy electron diffraction (LEED) is reported for the C60-induced reconstruction of Cu(111), in the system Cu(111) + (4 × 4)-C60. A wide LEED energy range allows enhanced sensitivity to the crucial C60-metal interface that is buried below the 7-Å-thick molecular layer. The analysis clearly favors a seven-Cu-atom vacancy model (with Pendry R-factor Rp = 0.376) over a one-Cu-atom vacancy model (Rp = 0.608) and over nonreconstructed models (Rp = 0.671 for atop site and Rp = 0.536 for hcp site). The seven-Cu-atom vacancy forms a (4 × 4) lattice of bowl-like holes. In each hole, a C60 molecule can nestle by forming strong bonds (shorter than 2.30 Å) between 15 C atoms of the molecule and 12 Cu atoms of the outermost and second Cu layers.

  11. Detailed chemical analysis of regional-scale air pollution in western Portugal using an adapted version of MCM v3.1.

    PubMed

    Pinho, P G; Lemos, L T; Pio, C A; Evtyugina, M G; Nunes, T V; Jenkin, M E

    2009-03-01

    A version of the Master Chemical Mechanism (MCM) v3.1, refined on the basis of recent chamber evaluations, has been incorporated into a Photochemical Trajectory Model (PTM) and applied to the simulation of boundary layer photochemistry in the Portuguese west coast region. Comparison of modelled concentrations of ozone and a number of other species (NO(x) and selected hydrocarbons and organic oxygenates) was carried out, using data from three connected sites on two case study days when well-defined sea breeze conditions were established. The ozone concentrations obtained through the application of the PTM are a good approximation to the measured values, the average difference being ca. 15%, indicating that the model was acceptable for evaluation of the details of the chemical processing. The detailed chemistry is examined, allowing conclusions to be drawn concerning chemical interferences in the measurements of NO(2), and in relation to the sensitivity of ozone formation to changes in ambient temperature. Three important, and comparable, contributions to the temperature sensitivity are identified and quantified, namely (i) an effect of increasing biogenic emissions with temperature; (ii) an effect of increasing ambient water vapour concentration with temperature, and its influence on radical production; and (iii) an increase in VOC oxidation chain lengths resulting from the temperature-dependence of the kinetic parameters, particularly in relation to the stability of PAN and its higher analogues. The sensitivity of the simulations to the refinements implemented in MCM v3.1 is also presented and discussed.

  12. Guide to the economic analysis of community energy systems

    NASA Astrophysics Data System (ADS)

    Pferdehirt, W. P.; Croke, K. G.; Hurter, A. P.; Kennedy, A. S.; Lee, C.

    1981-08-01

    This guidebook provides a framework for the economic analysis of community energy systems. The analysis facilitates a comparison of competing configurations in community energy systems, as well as a comparison with conventional energy systems. Various components of costs and revenues to be considered are discussed in detail. Computational procedures and accompanying worksheets are provided for calculating the net present value, straight and discounted payback periods, the rate of return, and the savings to investment ratio for the proposed energy system alternatives. These computations are based on a projection of the system's costs and revenues over its economic lifetimes. The guidebook also discusses the sensitivity of the results of this economic analysis to changes in various parameters and assumptions.
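
    The computational procedures listed above can be sketched as follows. The discount rate, investment, and savings stream are invented for illustration, not taken from the guidebook's worksheets:

```python
def npv(rate, cashflows):
    """Net present value of yearly cash flows, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def straight_payback(investment, annual_saving):
    """Straight (undiscounted) payback period in years."""
    return investment / annual_saving

def savings_to_investment(rate, investment, savings):
    """SIR: present value of the savings stream over the initial investment."""
    pv = sum(s / (1 + rate) ** t for t, s in enumerate(savings, start=1))
    return pv / investment

# A hypothetical community energy system: $1000 up front, then $300/year
# saved for 5 years, evaluated at a 5% discount rate.
flows = [-1000] + [300] * 5
print(round(npv(0.05, flows), 2))
print(straight_payback(1000, 300))
print(round(savings_to_investment(0.05, 1000, [300] * 5), 3))
```

    A positive NPV and an SIR above 1 both indicate that the alternative pays for itself over its economic lifetime, which is exactly the comparison the worksheets support.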

  13. Benzene-induced myelotoxicity: application of flow cytofluorometry for the evaluation of early proliferative change in bone marrow.

    PubMed Central

    Irons, R D

    1981-01-01

    A detailed description of flow cytofluorometric DNA cell cycle analysis is presented. A number of studies by the author and other investigators are reviewed in which a method is developed for the analysis of cell cycle phase in bone marrow of experimental animals. Bone marrow cell cycle analysis is a sensitive indicator of changes in bone marrow proliferative activity occurring early in chemically-induced myelotoxicity. Cell cycle analysis, used together with other hematologic methods, has revealed benzene-induced toxicity in proliferating bone marrow cells to be cycle specific, appearing to affect a population in late S phase which then accumulates in G2/M. PMID:7016521

  14. Economic Analysis of a Multi-Site Prevention Program: Assessment of Program Costs and Characterizing Site-level Variability

    PubMed Central

    Corso, Phaedra S.; Ingels, Justin B.; Kogan, Steven M.; Foster, E. Michael; Chen, Yi-Fu; Brody, Gene H.

    2013-01-01

    Programmatic cost analyses of preventive interventions commonly have a number of methodological difficulties. To determine the mean total costs and properly characterize variability, one often has to deal with small sample sizes, skewed distributions, and especially missing data. Standard approaches for dealing with missing data such as multiple imputation may suffer from a small sample size, a lack of appropriate covariates, or too few details around the method used to handle the missing data. In this study, we estimate total programmatic costs for a prevention trial evaluating the Strong African American Families-Teen program. This intervention focuses on the prevention of substance abuse and risky sexual behavior. To account for missing data in the assessment of programmatic costs we compare multiple imputation to probabilistic sensitivity analysis. The latter approach uses collected cost data to create a distribution around each input parameter. We found that with the multiple imputation approach, the mean (95% confidence interval) incremental difference was $2149 ($397, $3901). With the probabilistic sensitivity analysis approach, the incremental difference was $2583 ($778, $4346). Although the true cost of the program is unknown, probabilistic sensitivity analysis may be a more viable alternative for capturing variability in estimates of programmatic costs when dealing with missing data, particularly with small sample sizes and the lack of strong predictor variables. Further, the larger standard errors produced by the probabilistic sensitivity analysis method may signal its ability to capture more of the variability in the data, thus better informing policymakers on the potentially true cost of the intervention. PMID:23299559

  15. Economic analysis of a multi-site prevention program: assessment of program costs and characterizing site-level variability.

    PubMed

    Corso, Phaedra S; Ingels, Justin B; Kogan, Steven M; Foster, E Michael; Chen, Yi-Fu; Brody, Gene H

    2013-10-01

    Programmatic cost analyses of preventive interventions commonly have a number of methodological difficulties. To determine the mean total costs and properly characterize variability, one often has to deal with small sample sizes, skewed distributions, and especially missing data. Standard approaches for dealing with missing data such as multiple imputation may suffer from a small sample size, a lack of appropriate covariates, or too few details around the method used to handle the missing data. In this study, we estimate total programmatic costs for a prevention trial evaluating the Strong African American Families-Teen program. This intervention focuses on the prevention of substance abuse and risky sexual behavior. To account for missing data in the assessment of programmatic costs we compare multiple imputation to probabilistic sensitivity analysis. The latter approach uses collected cost data to create a distribution around each input parameter. We found that with the multiple imputation approach, the mean (95 % confidence interval) incremental difference was $2,149 ($397, $3,901). With the probabilistic sensitivity analysis approach, the incremental difference was $2,583 ($778, $4,346). Although the true cost of the program is unknown, probabilistic sensitivity analysis may be a more viable alternative for capturing variability in estimates of programmatic costs when dealing with missing data, particularly with small sample sizes and the lack of strong predictor variables. Further, the larger standard errors produced by the probabilistic sensitivity analysis method may signal its ability to capture more of the variability in the data, thus better informing policymakers on the potentially true cost of the intervention.
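
    The probabilistic sensitivity analysis described in this record (and in record 14 above) draws inputs from distributions built from the collected cost data; a percentile bootstrap of the mean cost is a minimal version of that idea. The cost values below are made up for illustration:

```python
import random

random.seed(1)

def bootstrap_mean_ci(data, n_boot=5000, alpha=0.05):
    """Percentile bootstrap confidence interval for the mean: resample the
    collected costs with replacement and read off the alpha/2 and
    1 - alpha/2 quantiles of the resampled means."""
    means = sorted(
        sum(random.choices(data, k=len(data))) / len(data)
        for _ in range(n_boot)
    )
    lo = means[int(n_boot * alpha / 2)]
    hi = means[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

# Hypothetical per-site program costs (dollars).
costs = [2100, 1850, 2600, 2300, 1990, 2750, 2050, 2480]
lo, hi = bootstrap_mean_ci(costs)
print(lo < sum(costs) / len(costs) < hi)
```

    Unlike fitting a theoretical distribution, the bootstrap uses only the observed values, which is why it remains workable with the small samples and weak predictors the abstract describes.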

  16. Chemical fingerprinting of Arabidopsis using Fourier transform infrared (FT-IR) spectroscopic approaches.

    PubMed

    Gorzsás, András; Sundberg, Björn

    2014-01-01

    Fourier transform infrared (FT-IR) spectroscopy is a fast, sensitive, inexpensive, and nondestructive technique for chemical profiling of plant materials. In this chapter we discuss the instrumental setup, the basic principles of analysis, and the possibilities for and limitations of obtaining qualitative and semiquantitative information by FT-IR spectroscopy. We provide detailed protocols for four fully customizable techniques: (1) Diffuse Reflectance Infrared Fourier Transform Spectroscopy (DRIFTS): a sensitive and high-throughput technique for powders; (2) attenuated total reflectance (ATR) spectroscopy: a technique that requires no sample preparation and can be used for solid samples as well as for cell cultures; (3) microspectroscopy using a single element (SE) detector: a technique used for analyzing sections at low spatial resolution; and (4) microspectroscopy using a focal plane array (FPA) detector: a technique for rapid chemical profiling of plant sections at cellular resolution. Sample preparation, measurement, and data analysis steps are listed for each of the techniques to help the user collect the best quality spectra and prepare them for subsequent multivariate analysis.

  17. Study of effects of fuel properties in turbine-powered business aircraft

    NASA Technical Reports Server (NTRS)

    Powell, F. D.; Biegen, R. J.; Weitz, P. G., Jr.; Duke, A. M.

    1984-01-01

    Increased interest in research and technology concerning aviation turbine fuels and their properties was prompted by recent changes in the supply and demand situation of these fuels. The most obvious change is the rapid increase in fuel price. For commercial airplanes, fuel costs now approach 50 percent of direct operating costs. In addition, there were occasional local supply disruptions and gradual shifts in delivered values of certain fuel properties. Dwindling petroleum reserves and the politically sensitive nature of the major world suppliers make the continuation of these trends likely. A summary of the principal findings and conclusions is presented. Much of the material, especially the tables and graphs, is considered in greater detail later. The economic analysis and examination of operational considerations are described. Because some of the assumptions on which the economic analysis is founded are not easily verified, the sensitivity of the analysis to alternatives for these assumptions is examined. The data base on which the analyses are founded is defined in a set of appendices.

  18. Increase in sensitization to oil of turpentine: recent data from a multicenter study on 45,005 patients from the German-Austrian Information Network of Departments of Dermatology (IVDK).

    PubMed

    Treudler, R; Richter, G; Geier, J; Schnuch, A; Orfanos, C E; Tebbe, B

    2000-02-01

    Contact allergy to oil of turpentine was reported to have become rare. However, the evaluation of standardized data on 45,005 patients tested 1992-1997 in 30 dermatological centers associated with the German-Austrian Information Network of Departments of Dermatology (IVDK) showed an increase in positive patch test reactions to turpentine from 0.5% during the years 1992-1995, up to 1.7% in 1996 and 3.1% in 1997. In particular, 17,347 patients tested in 1996-1997 were evaluated in detail by comparing 431 individuals with positive patch test reactions with the rest of the group, found negative to turpentine. Using the so-called MOAHLFA index, the following characteristics were shown: turpentine allergy (a) was significantly less frequent in men and in patients with occupational dermatitis, (b) showed no difference in its association with atopic dermatitis, (c) was associated with significantly fewer symptoms on the hands and more symptoms on the legs or face, and (d) was significantly more frequent in patients aged over 60 years. Also, patients sensitized to turpentine had increased rates of additional sensitizations. The definite reason for the increase in turpentine sensitization in the population tested here is not clear. Therefore, a detailed exposure analysis is necessary; the new increase in turpentine allergies may be due to popular topical remedies or household chemicals.

  19. Design and fabrication of a basic mass analyzer and vacuum system

    NASA Technical Reports Server (NTRS)

    Judson, C. M.; Josias, C.; Lawrence, J. L., Jr.

    1977-01-01

    A two-inch hyperbolic-rod quadrupole mass analyzer with a mass range of 400 to 200 amu and a sensitivity exceeding 100 parts per billion has been developed and tested. This analyzer is the basic hardware portion of a microprocessor-controlled quadrupole mass spectrometer for a Gas Analysis and Detection System (GADS). The development and testing of the hyperbolic-rod quadrupole mass spectrometer and associated hardware are described in detail.

  20. Topological Insulators and Superconductors for Innovative Devices

    DTIC Science & Technology

    2015-03-20

    bulk-sensitive experiment with hard x-ray or low-energy photons.) This demonstrates that the bulk band gap can be enhanced by taking advantage of the...crystallinity in X-ray Laue analysis, and their detailed transport properties are described in the Supplementary Information. ARPES measurements were...high quality of our films grown at high temperatures, including ultrathin ones, is evident from the X-ray diffraction patterns shown in Figure 2d

  1. The 2006 Cape Canaveral Air Force Station Range Reference Atmosphere Model Validation Study and Sensitivity Analysis to the National Aeronautics and Space Administration's Space Shuttle

    NASA Technical Reports Server (NTRS)

    Burns, Lee; Merry, Carl; Decker, Ryan; Harrington, Brian

    2008-01-01

    The 2006 Cape Canaveral Air Force Station (CCAFS) Range Reference Atmosphere (RRA) is a statistical model summarizing wind and thermodynamic atmospheric variability from the surface to 70 km. Launches of the National Aeronautics and Space Administration's (NASA) Space Shuttle from Kennedy Space Center utilize CCAFS RRA data to evaluate environmental constraints on various aspects of the vehicle during ascent. An update to the CCAFS RRA was recently completed. As part of the update, a validation study on the 2006 version was conducted, as well as a comparison analysis of the 2006 version against the existing 1983 version of the CCAFS RRA database. Assessments of the Space Shuttle vehicle ascent profile characteristics were performed to determine impacts of the updated model on vehicle performance. Details on the model updates and the vehicle sensitivity analyses with the updated model are presented.

  2. Refractive microlensarray made of silver-halide sensitized gelatin (SHSG) etched by enzyme with SLM-based lithography

    NASA Astrophysics Data System (ADS)

    Guo, Xiaowei; Chen, Mingyong; Zhu, Jianhua; Ma, Yanqin; Du, Jinglei; Guo, Yongkang; Du, Chunlei

    2006-01-01

    A novel method for the fabrication of continuous micro-optical components is presented in this paper. It employs a computer-controlled digital micromirror device (DMD™) as a switchable projection mask and silver-halide sensitized gelatin (SHSG) as the recording material. By etching SHSG with an enzyme solution, micro-optical components with relief modulation can be generated through special processing procedures. The principles of etching SHSG with enzyme and a theoretical analysis of deep etching are discussed in detail, and detailed quantitative experiments on the processing procedures are conducted to determine optimum technique parameters. A good linear relationship between exposure dose and relief depth was experimentally obtained within a depth range of 4 μm. Finally, a microlens array with 256.8 μm radius and 2.572 μm depth was achieved. This method is simple and inexpensive, and aberrations introduced in the processing procedures can be corrected at the mask-design step, so it is a practical method for fabricating good continuous profiles in low-volume production.

  3. Theoretical study of strain-dependent optical absorption in a doped self-assembled InAs/InGaAs/GaAs/AlGaAs quantum dot

    PubMed Central

    Tankasala, Archana; Hsueh, Yuling; Charles, James; Fonseca, Jim; Povolotskyi, Michael; Kim, Jun Oh; Krishna, Sanjay; Allen, Monica S; Allen, Jeffery W; Rahman, Rajib; Klimeck, Gerhard

    2018-01-01

    A detailed theoretical study of the optical absorption in doped self-assembled quantum dots is presented. A rigorous atomistic strain model as well as a sophisticated 20-band tight-binding model are used to ensure accurate prediction of the single-particle states in these devices. We also show that for doped quantum dots, many-particle configuration interaction is critical to accurately capture the optical transitions of the system. The sophisticated models presented in this work reproduce the experimental results for both undoped and doped quantum dot systems. The effects of the alloy mole fraction of the strain-controlling layer and of the quantum dot dimensions are discussed. Increasing the mole fraction of the strain-controlling layer leads to a lower energy gap and a larger absorption wavelength. Surprisingly, the absorption wavelength is highly sensitive to changes in the diameter, but almost insensitive to changes in dot height. This behavior is explained by a detailed sensitivity analysis of the different factors affecting the optical transition energy. PMID:29719758

  4. Individual differences affecting caffeine intake. Analysis of consumption behaviours for different times of day and caffeine sources.

    PubMed

    Penolazzi, Barbara; Natale, Vincenzo; Leone, Luigi; Russo, Paolo Maria

    2012-06-01

    The main purpose of the present study was to investigate the individual variables that contribute to the high variability in consumption of caffeine, a psychoactive substance that is still poorly investigated in comparison with other drugs. The effects of a large set of specific personality traits (i.e., Impulsivity, Sensation Seeking, Anxiety, Reward Sensitivity and Circadian Preference) were compared along with some relevant socio-demographic variables (i.e., gender and age) and cigarette smoking behaviour. Analyses revealed that daily caffeine intake was significantly higher for males, older people, and participants who smoked more cigarettes and showed higher scores on Impulsivity, Sensation Seeking and a facet of Reward Sensitivity. However, more detailed analyses showed that different patterns of individual variables predicted caffeine consumption when the times of day and the caffeine sources were considered. The present results suggest that such detailed analyses are required to detect critical predictive variables that could be obscured when only total caffeine intake during the entire day is considered. Copyright © 2012 Elsevier Ltd. All rights reserved.

  5. Distributed Evaluation of Local Sensitivity Analysis (DELSA), with application to hydrologic models

    USGS Publications Warehouse

    Rakovec, O.; Hill, Mary C.; Clark, M.P.; Weerts, A. H.; Teuling, A. J.; Uijlenhoet, R.

    2014-01-01

    This paper presents a hybrid local-global sensitivity analysis method termed the Distributed Evaluation of Local Sensitivity Analysis (DELSA), which is used here to identify important and unimportant parameters and evaluate how model parameter importance changes as parameter values change. DELSA uses derivative-based “local” methods to obtain the distribution of parameter sensitivity across the parameter space, which promotes consideration of sensitivity analysis results in the context of simulated dynamics. This work presents DELSA, discusses how it relates to existing methods, and uses two hydrologic test cases to compare its performance with the popular global, variance-based Sobol' method. The first test case is a simple nonlinear reservoir model with two parameters. The second test case involves five alternative “bucket-style” hydrologic models with up to 14 parameters applied to a medium-sized catchment (200 km²) in the Belgian Ardennes. Results show that in both examples, Sobol' and DELSA identify similar important and unimportant parameters, with DELSA enabling more detailed insight at much lower computational cost. For example, in the real-world problem the time delay in runoff is the most important parameter in all models, but DELSA shows that for about 20% of parameter sets it is not important at all and alternative mechanisms and parameters dominate. Moreover, the time delay was identified as important in regions producing poor model fits, whereas other parameters were identified as more important in regions of the parameter space producing better model fits. The ability to understand how parameter importance varies through parameter space is critical to inform decisions about, for example, additional data collection and model development. The ability to perform such analyses with modest computational requirements provides exciting opportunities to evaluate complicated models as well as many alternative models.
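
    The core DELSA idea, evaluating cheap derivative-based local sensitivity indices at many points spread across the parameter space, can be sketched roughly as follows. This is a minimal illustration with a hypothetical two-parameter toy model and a simplified index (squared gradient weighted by a uniform-prior variance), not the paper's exact formulation:

```python
import numpy as np

def local_sensitivity_indices(model, theta, ranges, h=1e-6):
    """First-order local sensitivity indices at one parameter set,
    from central finite differences and a uniform-prior variance per
    parameter (a simplified stand-in for the DELSA measure)."""
    theta = np.asarray(theta, dtype=float)
    grads = np.empty(len(theta))
    for j in range(len(theta)):
        step = np.zeros_like(theta)
        step[j] = h
        grads[j] = (model(theta + step) - model(theta - step)) / (2 * h)
    # variance of a uniform distribution over each parameter's range
    var = np.array([(b - a) ** 2 / 12 for a, b in ranges])
    contrib = grads ** 2 * var
    return contrib / contrib.sum()

# Hypothetical nonlinear two-parameter "reservoir" toy model
model = lambda p: p[0] ** 2 * np.exp(-p[1])
ranges = [(0.5, 2.0), (0.1, 1.0)]

# Evaluate the local indices at many points across the parameter space
rng = np.random.default_rng(0)
samples = rng.uniform([r[0] for r in ranges], [r[1] for r in ranges],
                      size=(500, 2))
indices = np.array([local_sensitivity_indices(model, s, ranges)
                    for s in samples])
print(indices.mean(axis=0))  # average importance of each parameter
```

    Each sample point costs only two model runs per parameter (central differences), which is why a distributed-local approach can be far cheaper than variance-based Sobol' estimation while still showing how importance varies across the parameter space.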

  6. LSENS, A General Chemical Kinetics and Sensitivity Analysis Code for Homogeneous Gas-Phase Reactions. Part 2; Code Description and Usage

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; Bittker, David A.

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is Part II of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part II describes the code, how to modify it, and its usage, including preparation of the problem data file required to execute LSENS. Code usage is illustrated by several example problems, which further explain preparation of the problem data file and show how to obtain desired accuracy in the computed results. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static systems; steady, one-dimensional, inviscid flow; reaction behind an incident shock wave, including boundary layer correction; and the perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions. Part I (NASA RP-1328) derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved by LSENS. Part III (NASA RP-1330) explains the kinetics and kinetics-plus-sensitivity-analysis problems supplied with LSENS and presents sample results.
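
    As a rough illustration of what such sensitivity coefficients mean, consider a single first-order reaction A → B, for which finite-difference coefficients can be checked against the analytic solution. This is a hypothetical toy problem, not the LSENS solver or any of its supplied mechanisms:

```python
import numpy as np

# First-order reaction A -> B: dy/dt = -k*y, y(0) = y0.
# The analytic solution y(t) = y0*exp(-k*t) lets us verify the
# sensitivity coefficients dy/dy0 and dy/dk.

def concentration(y0, k, t):
    return y0 * np.exp(-k * t)

def sensitivities(y0, k, t, h=1e-7):
    """Central finite-difference sensitivity coefficients of y(t)
    with respect to the initial value y0 and the rate coefficient k."""
    dy_dy0 = (concentration(y0 + h, k, t) - concentration(y0 - h, k, t)) / (2 * h)
    dy_dk = (concentration(y0, k + h, t) - concentration(y0, k - h, t)) / (2 * h)
    return dy_dy0, dy_dk

y0, k, t = 1.0, 2.0, 0.5
num = sensitivities(y0, k, t)
exact = (np.exp(-k * t), -y0 * t * np.exp(-k * t))  # analytic derivatives
print(num, exact)
```

    A production code like LSENS propagates such coefficients alongside the governing ODEs rather than by repeated perturbation runs, but the quantities computed have the same meaning.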

  7. Evaluation of Contamination Inspection and Analysis Methods through Modeling System Performance

    NASA Technical Reports Server (NTRS)

    Seasly, Elaine; Dever, Jason; Stuban, Steven M. F.

    2016-01-01

    Contamination is usually identified as a risk on the risk register for sensitive space systems hardware. Despite detailed, time-consuming, and costly contamination control efforts during assembly, integration, and test of space systems, contaminants are still found during visual inspections of hardware. Improved methods are needed to gather information during systems integration to catch potential contamination issues earlier and manage contamination risks better. This research explores evaluation of contamination inspection and analysis methods to determine optical system sensitivity to minimum detectable molecular contamination levels based on IEST-STD-CC1246E non-volatile residue (NVR) cleanliness levels. Potential future degradation of the system is modeled given chosen modules representative of optical elements in an optical system, minimum detectable molecular contamination levels for a chosen inspection and analysis method, and determining the effect of contamination on the system. By modeling system performance based on when molecular contamination is detected during systems integration and at what cleanliness level, the decision maker can perform trades amongst different inspection and analysis methods and determine if a planned method is adequate to meet system requirements and manage contamination risk.

  8. The Contribution of Particle Swarm Optimization to Three-Dimensional Slope Stability Analysis

    PubMed Central

    A Rashid, Ahmad Safuan; Ali, Nazri

    2014-01-01

    Over the last few years, particle swarm optimization (PSO) has been extensively applied to various geotechnical engineering problems, including slope stability analysis. However, these contributions were limited to two-dimensional (2D) slope stability analysis. This paper applied PSO to the three-dimensional (3D) slope stability problem to determine the critical slip surface (CSS) of soil slopes. A detailed description of the adopted PSO was presented to provide a good basis for further contributions of this technique to the field of 3D slope stability problems. A general rotating ellipsoid shape was introduced as the specific particle for 3D slope stability analysis. A detailed sensitivity analysis was designed and performed to find the optimum values of the PSO parameters. Example problems were used to evaluate the applicability of PSO in determining the CSS of 3D slopes. The first example presented a comparison between the results of PSO and the PLAXIS 3D finite element software, and the second example compared the ability of PSO to determine the CSS of 3D slopes with that of other optimization methods from the literature. The results demonstrated the efficiency and effectiveness of PSO in determining the CSS of 3D soil slopes. PMID:24991652
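
    For readers unfamiliar with the optimizer itself, a generic PSO loop can be sketched as below. This is a bare-bones sketch minimizing an arbitrary test function, not the rotating-ellipsoid slip-surface search or the limit-equilibrium evaluation of the paper; the inertia and acceleration constants are typical textbook values, not the tuned parameters from the sensitivity analysis:

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer: each particle is pulled toward
    its personal best and the swarm's global best position."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))  # positions
    v = np.zeros_like(x)                                  # velocities
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)  # keep particles inside the bounds
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Usage: minimize a simple bowl-shaped surrogate objective
best, best_f = pso(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2,
                   [(-5, 5), (-5, 5)])
print(best, best_f)
```

    In the slope stability setting, the objective would instead be the factor of safety of the slip surface encoded by each particle, with the ellipsoid parameters as the search dimensions.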

  9. The contribution of particle swarm optimization to three-dimensional slope stability analysis.

    PubMed

    Kalatehjari, Roohollah; Rashid, Ahmad Safuan A; Ali, Nazri; Hajihassani, Mohsen

    2014-01-01

    Over the last few years, particle swarm optimization (PSO) has been extensively applied to various geotechnical engineering problems, including slope stability analysis. However, these contributions were limited to two-dimensional (2D) slope stability analysis. This paper applied PSO to the three-dimensional (3D) slope stability problem to determine the critical slip surface (CSS) of soil slopes. A detailed description of the adopted PSO was presented to provide a good basis for further contributions of this technique to the field of 3D slope stability problems. A general rotating ellipsoid shape was introduced as the specific particle for 3D slope stability analysis. A detailed sensitivity analysis was designed and performed to find the optimum values of the PSO parameters. Example problems were used to evaluate the applicability of PSO in determining the CSS of 3D slopes. The first example presented a comparison between the results of PSO and the PLAXIS 3D finite element software, and the second example compared the ability of PSO to determine the CSS of 3D slopes with that of other optimization methods from the literature. The results demonstrated the efficiency and effectiveness of PSO in determining the CSS of 3D soil slopes.

  10. Practical applications of surface analytic tools in tribology

    NASA Technical Reports Server (NTRS)

    Ferrante, J.

    1980-01-01

    A brief description of many of the widely used tools is presented. Of this list, those with the highest applicability for giving elemental and/or compound analysis for problems of interest in tribology, while being truly surface sensitive (that is, probing fewer than 10 atomic layers), are presented. The latter group is critiqued in detail in terms of strengths and weaknesses. Emphasis is placed on post facto analysis of experiments performed under real conditions (e.g., in air with lubricants). It is further indicated that such equipment could be used for screening and quality control.

  11. Basin-scale geothermal model calibration: experience from the Perth Basin, Australia

    NASA Astrophysics Data System (ADS)

    Wellmann, Florian; Reid, Lynn

    2014-05-01

    The calibration of large-scale geothermal models for entire sedimentary basins is challenging as direct measurements of rock properties and subsurface temperatures are commonly scarce and the basal boundary conditions poorly constrained. Instead of the often applied "trial-and-error" manual model calibration, we examine here if we can gain additional insight into parameter sensitivities and model uncertainty with a model analysis and calibration study. Our geothermal model is based on a high-resolution full 3-D geological model, covering an area of more than 100,000 square kilometers and extending to a depth of 55 kilometers. The model contains all major faults (>80) and geological units (13) for the entire basin. This geological model is discretised into a rectilinear mesh with a lateral resolution of 500 × 500 m, and a variable resolution at depth. The highest resolution of 25 m is applied to the depth range of 1000-3000 m where most temperature measurements are available. The entire discretised model consists of approximately 50 million cells. The top thermal boundary condition is derived from surface temperature measurements on land and on the ocean floor. The base of the model extends below the Moho, and we apply the heat flux over the Moho as a basal heat flux boundary condition. Rock properties (thermal conductivity, porosity, and heat production) have been compiled from several existing data sets. The conductive geothermal forward simulation is performed with SHEMAT, and we then use the stand-alone capabilities of iTOUGH2 for sensitivity analysis and model calibration. Simulated temperatures are compared to 130 quality-weighted bottom-hole temperature measurements. The sensitivity analysis provided a clear insight into the most sensitive parameters and parameter correlations. 
This proved to be of value as strong correlations, for example between basal heat flux and heat production in deep geological units, can significantly influence the model calibration procedure. The calibration resulted in a better determination of subsurface temperatures, and, in addition, provided an insight into model quality. Furthermore, a detailed analysis of the measurements used for calibration highlighted potential outliers, and limitations with the model assumptions. Extending the previously existing large-scale geothermal simulation with iTOUGH2 provided us with a valuable insight into the sensitive parameters and data in the model, which would clearly not be possible with a simple trial-and-error calibration method. Using the gained knowledge, future work will include more detailed studies on the influence of advection and convection.

  12. Coping with confounds in multivoxel pattern analysis: what should we do about reaction time differences? A comment on Todd, Nystrom & Cohen 2013.

    PubMed

    Woolgar, Alexandra; Golland, Polina; Bode, Stefan

    2014-09-01

    Multivoxel pattern analysis (MVPA) is a sensitive and increasingly popular method for examining differences between neural activation patterns that cannot be detected using classical mass-univariate analysis. Recently, Todd et al. ("Confounds in multivariate pattern analysis: Theory and rule representation case study", 2013, NeuroImage 77: 157-165) highlighted a potential problem for these methods: high sensitivity to confounds at the level of individual participants due to the use of directionless summary statistics. Unlike traditional mass-univariate analyses where confounding activation differences in opposite directions tend to approximately average out at group level, group level MVPA results may be driven by any activation differences that can be discriminated in individual participants. In Todd et al.'s empirical data, factoring out differences in reaction time (RT) reduced a classifier's ability to distinguish patterns of activation pertaining to two task rules. This raises two significant questions for the field: to what extent have previous multivoxel discriminations in the literature been driven by RT differences, and by what methods should future studies take RT and other confounds into account? We build on the work of Todd et al. and compare two different approaches to remove the effect of RT in MVPA. We show that in our empirical data, in contrast to that of Todd et al., the effect of RT on rule decoding is negligible, and results were not affected by the specific details of RT modelling. We discuss the meaning of, and sensitivity to, confounds in traditional and multivoxel approaches to fMRI analysis. We observe that the increased sensitivity of MVPA comes at the price of reduced specificity, meaning that these methods in particular call for careful consideration of what differs between our conditions of interest. 
We conclude that the additional complexity of the experimental design, analysis and interpretation needed for MVPA is still not a reason to favour a less sensitive approach. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. Critical evaluation of Jet-A spray combustion using propane chemical kinetics in gas turbine combustion simulated by KIVA-2

    NASA Technical Reports Server (NTRS)

    Nguyen, H. L.; Ying, S.-J.

    1990-01-01

    Jet-A spray combustion has been evaluated in gas turbine combustion with the use of propane chemical kinetics as a first approximation for the chemical reactions. Here, the numerical solutions are obtained by using the KIVA-2 computer code. The KIVA-2 code is the most developed of the available multidimensional combustion computer programs for application to the in-cylinder combustion dynamics of internal combustion engines. The released version of KIVA-2 assumes that 12 chemical species are present; the code uses an Arrhenius kinetic-controlled combustion model governed by a four-step global chemical reaction and six equilibrium reactions. The researchers' efforts involve the addition of Jet-A thermophysical properties and the implementation of detailed reaction mechanisms for propane oxidation. Three different detailed reaction mechanism models are considered. The first model consists of 131 reactions and 45 species. This is considered the full mechanism, developed through the study of the chemical kinetics of propane combustion in an enclosed chamber. The full mechanism is evaluated by comparing calculated ignition delay times with available shock tube data. However, this detailed mechanism occupies too much computer memory and CPU time for the computation. Therefore, it serves only as a benchmark case against which to evaluate simplified models. Two possible simplified models were tested in the existing computer code KIVA-2 under the same conditions as the full mechanism. One model is obtained through a sensitivity analysis using LSENS, the general kinetics and sensitivity analysis program of D. A. Bittker and K. Radhakrishnan. This model consists of 45 chemical reactions and 27 species. The other model is based on the work published by C. K. Westbrook and F. L. Dryer.

  14. From Secure Memories to Smart Card Security

    NASA Astrophysics Data System (ADS)

    Handschuh, Helena; Trichina, Elena

    Non-volatile memory is essential in most embedded security applications. It stores the keys and other sensitive material for cryptographic and security applications. In this chapter, first an overview is given of current flash memory architectures. Next, the standard security features which form the basis of so-called secure memories are described in more detail. Smart cards are a typical embedded application that is very vulnerable to attacks and that at the same time has a high need for secure non-volatile memory. In the next part of this chapter, the secure memories of so-called flash-based high-density smart cards are described. This is followed by a detailed analysis of the new security challenges for such objects.

  15. Energy Return on Investment (EROI) for Forty Global Oilfields Using a Detailed Engineering-Based Model of Oil Production.

    PubMed

    Brandt, Adam R; Sun, Yuchi; Bharadwaj, Sharad; Livingston, David; Tan, Eugene; Gordon, Deborah

    2015-01-01

    Studies of the energy return on investment (EROI) for oil production generally rely on aggregated statistics for large regions or countries. In order to better understand the drivers of the energy productivity of oil production, we use a novel approach that applies a detailed field-level engineering model of oil and gas production to estimate energy requirements of drilling, producing, processing, and transporting crude oil. We examine 40 global oilfields, utilizing detailed data for each field from hundreds of technical and scientific data sources. Resulting net energy return (NER) ratios for studied oil fields range from ≈2 to ≈100 MJ crude oil produced per MJ of total fuels consumed. External energy return (EER) ratios, which compare energy produced to energy consumed from external sources, exceed 1000:1 for fields that are largely self-sufficient. The lowest energy returns are found to come from thermally-enhanced oil recovery technologies. Results are generally insensitive to reasonable ranges of assumptions explored in sensitivity analysis. Fields with very large associated gas production are sensitive to assumptions about surface fluids processing due to the shifts in energy consumed under different gas treatment configurations. This model does not currently include energy invested in building oilfield capital equipment (e.g., drilling rigs), nor does it include other indirect energy uses such as labor or services.
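
    The two ratios can be illustrated with a few lines of arithmetic. The numbers below are invented for illustration only and are not data from any of the 40 fields studied:

```python
# Net energy return (NER) compares energy produced to ALL fuels consumed
# (internal + external); external energy return (EER) compares it only to
# energy drawn from outside the field. Illustrative, made-up values:
produced_mj = 1.0e6           # crude oil energy produced (MJ)
consumed_total_mj = 2.5e4     # all fuels consumed, internal + external (MJ)
consumed_external_mj = 8.0e2  # energy purchased from external sources (MJ)

ner = produced_mj / consumed_total_mj
eer = produced_mj / consumed_external_mj
print(f"NER = {ner:.0f}:1, EER = {eer:.0f}:1")
```

    A largely self-sufficient field burns mostly its own produced gas, so its external consumption is tiny and its EER can exceed 1000:1 even when the NER is modest, exactly the pattern the abstract describes.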

  16. Energy Return on Investment (EROI) for Forty Global Oilfields Using a Detailed Engineering-Based Model of Oil Production

    PubMed Central

    Brandt, Adam R.; Sun, Yuchi; Bharadwaj, Sharad; Livingston, David; Tan, Eugene; Gordon, Deborah

    2015-01-01

    Studies of the energy return on investment (EROI) for oil production generally rely on aggregated statistics for large regions or countries. In order to better understand the drivers of the energy productivity of oil production, we use a novel approach that applies a detailed field-level engineering model of oil and gas production to estimate energy requirements of drilling, producing, processing, and transporting crude oil. We examine 40 global oilfields, utilizing detailed data for each field from hundreds of technical and scientific data sources. Resulting net energy return (NER) ratios for studied oil fields range from ≈2 to ≈100 MJ crude oil produced per MJ of total fuels consumed. External energy return (EER) ratios, which compare energy produced to energy consumed from external sources, exceed 1000:1 for fields that are largely self-sufficient. The lowest energy returns are found to come from thermally-enhanced oil recovery technologies. Results are generally insensitive to reasonable ranges of assumptions explored in sensitivity analysis. Fields with very large associated gas production are sensitive to assumptions about surface fluids processing due to the shifts in energy consumed under different gas treatment configurations. This model does not currently include energy invested in building oilfield capital equipment (e.g., drilling rigs), nor does it include other indirect energy uses such as labor or services. PMID:26695068

  17. Cost/Effort Drivers and Decision Analysis

    NASA Technical Reports Server (NTRS)

    Seidel, Jonathan

    2010-01-01

    Engineering trade study analyses demand consideration of performance, cost, and schedule impacts across the spectrum of alternative concepts and in direct reference to product requirements. Prior to detailed design, requirements are too often ill-defined (only goals) and prone to creep, extending well beyond the Systems Requirements Review. Though the lack of engineering design and definitive requirements inhibits the ability to perform detailed cost analyses, affordability trades still comprise the foundation of these future product decisions and must evolve in concert. This presentation excerpts results of the recent NASA subsonic Engine Concept Study for an Advanced Single Aisle Transport to demonstrate an affordability evaluation of performance characteristics and the subsequent impacts on engine architecture decisions. Applying the Process Based Economic Analysis Tool (PBEAT), development cost, production cost, and operation and support costs were considered in a traditional weighted ranking of the following system-level figures of merit: mission fuel burn, take-off noise, NOx emissions, and cruise speed. Weighting factors were varied to ascertain the sensitivity of the architecture rankings to these performance figures of merit with companion cost considerations. A more detailed examination of supersonic variable cycle engine cost is also briefly presented, with observations and recommendations for further refinements.

  18. Fundamentals and practice for ultrasensitive laser-induced fluorescence detection in microanalytical systems.

    PubMed

    Johnson, Mitchell E; Landers, James P

    2004-11-01

    Laser-induced fluorescence is an extremely sensitive method for detection in chemical separations. In addition, it is well-suited to detection in small volumes, and as such is widely used for capillary electrophoresis and microchip-based separations. This review explores the detailed instrumental conditions required for sub-zeptomole, sub-picomolar detection limits. The key to achieving the best sensitivity is to use an excitation and emission volume that is matched to the separation system and that, simultaneously, will keep scattering and luminescence background to a minimum. We discuss how this is accomplished with confocal detection, 90 degrees on-capillary detection, and sheath-flow detection. It is shown that each of these methods has its advantages and disadvantages, but that all can be used to produce extremely sensitive detectors for capillary- or microchip-based separations. Analysis of these capabilities allows prediction of the optimal means of achieving ultrasensitive detection on microchips.

  19. Polarization Sensitive Coherent Anti-Stokes Raman Spectroscopy of DCVJ in Doped Polymer

    NASA Astrophysics Data System (ADS)

    Ujj, Laszlo

    2014-05-01

    Coherent Raman microscopy is an emerging technique for imaging biological samples such as living cells by recording vibrational fingerprints of molecules with high spatial resolution. The race is on to record an entire image in the shortest time possible in order to increase the time resolution of the recorded cellular events. The electronically enhanced, polarization-sensitive version of coherent anti-Stokes Raman scattering is one method that can shorten the recording time and increase the sharpness of an image by enhancing the signal level of particular molecular vibrational modes. To show the effectiveness of the method, a model system, a highly fluorescent sample of DCVJ in a polymer matrix, is investigated. Polarization-sensitive resonance CARS spectra are recorded and analyzed. Vibrational signatures are extracted with model-independent methods. Details of the measurements and data analysis will be presented. The author gratefully acknowledges UWF for financial support.

  20. Near-infrared Spectroscopy to Reduce Prophylactic Fasciotomies for and Missed Cases of Acute Compartment Syndrome in Soldiers Injured in OEF/OIF

    DTIC Science & Technology

    2012-10-01

    studies demonstrated that NIRS measurement of hemoglobin oxygen saturation in the tibial compartment provided reliable and sensitive correlation to...pressure increases with muscle damage, there is not a complete loss of tissue oxygen saturation in the tissue over the 14 hours of the protocol. In...allow greater detail of information and flexibility in the analysis of tissue oxygenation levels. Although the 7610 oximeter has not been

  1. Targeting a Novel Androgen Receptor-Repressed Pathway in Prostate Cancer Therapy

    DTIC Science & Technology

    2017-09-01

    AURKA and CENPE. [Study is on-going] Major Task 3: Test the hypothesis that androgen deprivation-induced PKD1 expression. Major Task 4: Test the...Subtask 3: Conduct detail analysis of the pathway identified in Subtask 2 and assessing its impact on PKD1 expression. This study is on-going. We...enzalutamide, in androgen-sensitivity PrCa cells. We have completed this subtask. In this study, although PKD2 overexpression had little impact on LNCaP

  2. Effort and Accuracy in Choice.

    DTIC Science & Technology

    1984-01-01

    Eric J. Johnson; John W. Payne ...concerning human rationality in the absence of a detailed analysis of the sensitivity of the criterion and the cost involved in evaluating the alternatives (p...can be thought of as being part of long-term memory. Arguments for the value of production systems as a representation of human cognitive processes

  3. Estimating costs in the economic evaluation of medical technologies.

    PubMed

    Luce, B R; Elixhauser, A

    1990-01-01

    The complexities and nuances of evaluating the costs associated with providing medical technologies are often underestimated by analysts engaged in economic evaluations. This article describes the theoretical underpinnings of cost estimation, emphasizing the importance of accounting for opportunity costs and marginal costs. The various types of costs that should be considered in an analysis are described; a listing of specific cost elements may provide a helpful guide to analysis. The process of identifying and estimating costs is detailed, and practical recommendations for handling the challenges of cost estimation are provided. The roles of sensitivity analysis and discounting are characterized, as are determinants of the types of costs to include in an analysis. Finally, common problems facing the analyst are enumerated with suggestions for managing these problems.

  4. Emulation and Sensitivity Analysis of the Community Multiscale Air Quality Model for a UK Ozone Pollution Episode.

    PubMed

    Beddows, Andrew V; Kitwiroon, Nutthida; Williams, Martin L; Beevers, Sean D

    2017-06-06

    Gaussian process emulation techniques have been used with the Community Multiscale Air Quality model, simulating the effects of input uncertainties on ozone and NO2 output, to allow robust global sensitivity analysis (SA). A screening process ranked the effect of perturbations in 223 inputs, isolating the 30 most influential from emissions, boundary conditions (BCs), and reaction rates. Community Multiscale Air Quality (CMAQ) simulations of a July 2006 ozone pollution episode in the UK were made with input values for these variables plus ozone dry deposition velocity chosen according to a 576 point Latin hypercube design. Emulators trained on the output of these runs were used in variance-based SA of the model output to input uncertainties. Performing these analyses for every hour of a 21 day period spanning the episode and several days on either side allowed the results to be presented as a time series of sensitivity coefficients, showing how the influence of different input uncertainties changed during the episode. This is one of the most complex models to which these methods have been applied, and here, they reveal detailed spatiotemporal patterns of model sensitivities, with NO and isoprene emissions, NO2 photolysis, ozone BCs, and deposition velocity being among the most influential input uncertainties.
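
    The variance-based sensitivity coefficients referred to here can be illustrated with a simple pick-and-freeze Monte Carlo estimator on a toy model whose indices are known analytically. This is a generic sketch of first-order Sobol' indices, not the Gaussian-process-emulator pipeline used in the study:

```python
import numpy as np

def first_order_sobol(f, d, n=100_000, seed=0):
    """Pick-and-freeze Monte Carlo estimate of first-order Sobol' indices
    for a vectorized model with d independent Uniform(0,1) inputs."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, d)), rng.random((n, d))
    fA = f(A)
    var = fA.var()
    S = np.empty(d)
    for i in range(d):
        ABi = B.copy()
        ABi[:, i] = A[:, i]  # freeze input i at A's values
        S[i] = np.cov(fA, f(ABi))[0, 1] / var
    return S

# Toy model y = 2*x1 + x2: analytic first-order indices are 0.8 and 0.2
S = first_order_sobol(lambda X: 2 * X[:, 0] + X[:, 1], d=2)
print(S)
```

    For an expensive model like CMAQ, each Monte Carlo evaluation here would be replaced by a call to the trained emulator, which is what makes variance-based SA affordable in that setting.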

  5. Antibody Desensitization Therapy in Highly Sensitized Lung Transplant Candidates

    PubMed Central

    Snyder, L. D.; Gray, A. L.; Reynolds, J. M.; Arepally, G. M.; Bedoya, A.; Hartwig, M. G.; Davis, R. D.; Lopes, K. E.; Wegner, W. E.; Chen, D. F.; Palmer, S. M.

    2015-01-01

    As HLA antibody detection technology has evolved, detailed HLA antibody information is now available on prospective transplant recipients. Determining single-antigen antibody specificity allows for a calculated panel reactive antibody (cPRA) value, providing an estimate of the effective donor pool. For broadly sensitized lung transplant candidates (cPRA ≥ 80%), our center adopted a pretransplant multimodal desensitization protocol in an effort to decrease the cPRA and expand the donor pool. This desensitization protocol included plasmapheresis, solumedrol, bortezomib and rituximab given in combination over 19 days followed by intravenous immunoglobulin. Eight of 18 candidates completed therapy, with the primary reasons for early discontinuation being transplant (by avoiding unacceptable antigens) or thrombocytopenia. In a mixed-model analysis, there were no significant changes in PRA or cPRA over time with the protocol. A sub-analysis of the median fluorescence intensity (MFI) indicated a small decline that was significant in antibodies with MFI 5000–10 000. Nine of 18 candidates subsequently had a transplant. Posttransplant survival in these nine recipients was comparable to that of other pretransplant-sensitized recipients who did not receive therapy. In summary, an aggressive multimodal desensitization protocol does not significantly reduce pretransplant HLA antibodies in a broadly sensitized lung transplant candidate cohort. PMID:24666831

  6. Diagnostic Accuracy of Central Venous Catheter Confirmation by Bedside Ultrasound Versus Chest Radiography in Critically Ill Patients: A Systematic Review and Meta-Analysis.

    PubMed

    Ablordeppey, Enyo A; Drewry, Anne M; Beyer, Alexander B; Theodoro, Daniel L; Fowler, Susan A; Fuller, Brian M; Carpenter, Christopher R

    2017-04-01

    We performed a systematic review and meta-analysis to examine the accuracy of bedside ultrasound for confirmation of central venous catheter position and exclusion of pneumothorax compared with chest radiography. PubMed, Embase, the Cochrane Central Register of Controlled Trials, reference lists, conference proceedings, and ClinicalTrials.gov were searched. Articles and abstracts describing the diagnostic accuracy of bedside ultrasound compared with chest radiography for confirmation of central venous catheters in sufficient detail to reconstruct 2 × 2 contingency tables were reviewed. Primary outcomes included the accuracy of confirming catheter positioning and detecting a pneumothorax. Secondary outcomes included feasibility, interrater reliability, and efficiency to complete bedside ultrasound confirmation of central venous catheter position. Investigators abstracted study details including research design and sonographic imaging technique to detect catheter malposition and procedure-related pneumothorax. Diagnostic accuracy measures included pooled sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio. Fifteen studies with 1,553 central venous catheter placements were identified, with a pooled sensitivity and specificity of catheter malposition by ultrasound of 0.82 (0.77-0.86) and 0.98 (0.97-0.99), respectively. The pooled positive and negative likelihood ratios of catheter malposition by ultrasound were 31.12 (14.72-65.78) and 0.25 (0.13-0.47). The sensitivity and specificity of ultrasound for pneumothorax detection were nearly 100% in the included studies. Bedside ultrasound reduced mean central venous catheter confirmation time by 58.3 minutes. Risk of bias and clinical heterogeneity in the studies were high. Bedside ultrasound is faster than radiography at identifying pneumothorax after central venous catheter insertion. When a central venous catheter malposition exists, bedside ultrasound will identify four out of every five, and do so earlier than chest radiography.
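The pooled accuracy figures quoted above are all derived from per-study 2 × 2 tables. As a minimal sketch, with made-up counts and naive cell-summing rather than the bivariate random-effects pooling such meta-analyses actually use, the four headline metrics can be computed as:

```python
# Hypothetical per-study 2x2 counts: (true pos, false pos, false neg, true neg)
studies = [(40, 2, 8, 150), (25, 1, 6, 90), (60, 3, 12, 210)]

def metrics(tp, fp, fn, tn):
    sens = tp / (tp + fn)          # P(test positive | malposition present)
    spec = tn / (tn + fp)          # P(test negative | position correct)
    lr_pos = sens / (1 - spec)     # positive likelihood ratio
    lr_neg = (1 - sens) / spec     # negative likelihood ratio
    return sens, spec, lr_pos, lr_neg

# Naive pooling: sum cells across studies (this ignores between-study
# heterogeneity, which is why formal meta-analyses use random-effects models)
tp, fp, fn, tn = (sum(col) for col in zip(*studies))
sens, spec, lr_pos, lr_neg = metrics(tp, fp, fn, tn)
```

A sensitivity of about 0.8 is exactly the "four out of every five malpositions identified" reading of the pooled estimate.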

  7. LSENS, a general chemical kinetics and sensitivity analysis code for homogeneous gas-phase reactions. 2: Code description and usage

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; Bittker, David A.

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 2 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 2 describes the code, how to modify it, and its usage, including preparation of the problem data file required to execute LSENS. Code usage is illustrated by several example problems, which further explain preparation of the problem data file and show how to obtain desired accuracy in the computed results. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions. Part 1 (NASA RP-1328) derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved by LSENS. Part 3 (NASA RP-1330) explains the kinetics and kinetics-plus-sensitivity-analysis problems supplied with LSENS and presents sample results.

  8. Development and testing of a fast conceptual river water quality model.

    PubMed

    Keupers, Ingrid; Willems, Patrick

    2017-04-15

    Modern, model-based river quality management strongly relies on river water quality models to simulate the temporal and spatial evolution of pollutant concentrations in the water body. Such models are typically constructed by extending detailed hydrodynamic models with a component describing the advection-diffusion and water quality transformation processes in a detailed, physically based way. This approach is too computationally demanding, especially when simulating long time periods that are needed for statistical analysis of the results, or when model sensitivity analysis, calibration, and validation require a large number of model runs. To overcome this problem, a structure identification method to set up a conceptual river water quality model has been developed. Instead of calculating the water quality concentrations at each water level and discharge node, the river branch is divided into conceptual reservoirs based on user information such as locations of interest and boundary inputs. These reservoirs are modelled as Plug Flow Reactors (PFRs) and Continuously Stirred Tank Reactors (CSTRs) to describe advection and diffusion processes. The same water quality transformation processes as in the detailed models are considered, but with adjusted residence times based on the hydrodynamic simulation results and calibrated to the detailed water quality simulation results. The developed approach allows for a much faster calculation time (by a factor of 10^5) without significant loss of accuracy, making it feasible to perform time-demanding scenario runs.
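The conceptual-reservoir idea can be sketched as a chain of CSTRs with a first-order transformation term; a plug-flow element would add a pure transport delay on top. The parameter values below (residence time, rate constant, boundary concentration) are illustrative assumptions, not values from the paper:

```python
import numpy as np

n_res = 5            # conceptual reservoirs along the river branch
tau = 3600.0         # residence time per reservoir [s], from hydrodynamics
k = 1e-5             # first-order transformation rate [1/s] (assumed)
dt = 60.0            # time step [s]

c = np.zeros(n_res)  # pollutant concentration in each reservoir [mg/L]
c_in = 10.0          # upstream boundary concentration [mg/L]

for _ in range(int(24 * 3600 / dt)):        # simulate one day
    upstream = np.concatenate(([c_in], c[:-1]))
    # each CSTR: dC/dt = (C_upstream - C)/tau - k*C, explicit Euler step
    c += dt * ((upstream - c) / tau - k * c)
```

At steady state each reservoir attenuates its inflow by a factor 1/(1 + k·tau), so the concentration declines monotonically downstream, which is the behavior the lumped model must reproduce at a tiny fraction of the detailed model's cost.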

  9. Nano-LC/MALDI-MS using a column-integrated spotting probe for analysis of complex biomolecule samples.

    PubMed

    Hioki, Yusaku; Tanimura, Ritsuko; Iwamoto, Shinichi; Tanaka, Koichi

    2014-03-04

    Nanoflow liquid chromatography (nano-LC) is an essential technique for highly sensitive analysis of complex biological samples, and matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) is advantageous for rapid identification of proteins and in-depth analysis of post-translational modifications (PTMs). A combination of nano-LC and MALDI-MS (nano-LC/MALDI-MS) is useful for highly sensitive and detailed analysis in the life sciences. However, the existing system does not fully utilize the advantages of each technique, especially in the interface that transfers eluate from the nano-LC to a MALDI plate. To effectively combine nano-LC with MALDI-MS, we integrated a nano-LC column and a deposition probe for the first time (column probe) and incorporated it into a nano-LC/MALDI-MS system. Spotting nanoliter eluate droplets directly from the column onto the MALDI plate prevents postcolumn diffusion and preserves the chromatographic resolution. A DHB-prespotted plate, matched to the fabricated column probe, was prepared to concentrate the droplets of nano-LC eluate. The performance of the advanced nano-LC/MALDI-MS system was substantiated by analyzing protein digests. When the system was coupled with multidimensional liquid chromatography (MDLC), trace amounts of glycopeptides spiked into complex samples were successfully detected. Thus, a nano-LC/MALDI-MS direct-spotting system that eliminates postcolumn diffusion was constructed, and the efficacy of the system was demonstrated through highly sensitive analysis of protein digests and spiked glycopeptides.

  10. Shock wave viscosity measurements

    NASA Astrophysics Data System (ADS)

    Celliers, Peter

    2013-06-01

    Several decades ago a method was proposed and demonstrated to measure the viscosity of fluids at high pressure by observing the oscillatory damping of sinusoidal perturbations on a shock front. A detailed mathematical analysis of the technique carried out subsequently by Miller and Ahrens revealed its potential, as well as a deep level of complexity in the analysis. We revisit the ideas behind this technique in the context of a recent experimental development: two-dimensional imaging velocimetry. The new technique allows one to capture a broad spectrum of perturbations, down to few-micron scale lengths, imposed on a shock front from an initial perturbation. The detailed evolution of the perturbation spectrum is sensitive to the viscosity in the fluid behind the shock front. Initial experiments are aimed at examining the viscosity of shock-compressed SiO2 just above the shock melting transition. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  11. Environmental research program. 1995 Annual report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, N.J.

    1996-06-01

    The objective of the Environmental Research Program is to enhance the understanding of, and mitigate the effects of, pollutants on health, ecological systems, global and regional climate, and air quality. The program is multidisciplinary and includes fundamental research and development in efficient and environmentally benign combustion, pollutant abatement and destruction, and novel methods of detection and analysis of criteria and noncriteria pollutants. This diverse group conducts investigations in combustion, atmospheric and marine processes, flue-gas chemistry, and ecological systems. Combustion chemistry research emphasizes modeling at microscopic and macroscopic scales. At the microscopic scale, functional sensitivity analysis is used to explore the nature of the potential-to-dynamics relationships for reacting systems. Rate coefficients are estimated using quantum dynamics and path integral approaches. At the macroscopic level, combustion processes are modelled using chemical mechanisms at the appropriate level of detail dictated by the requirements of predicting particular aspects of combustion behavior. Parallel computing has facilitated the efforts to use detailed chemistry in models of turbulent reacting flow to predict minor species concentrations.

  12. Distillation tray structural parameter study: Phase 1

    NASA Technical Reports Server (NTRS)

    Winter, J. Ronald

    1991-01-01

    The purpose here is to identify the structural parameters (plate thickness, liquid level, beam size, number of beams, tray diameter, etc.) that affect the structural integrity of distillation trays in distillation columns. Once the sensitivity of the trays' dynamic response to these parameters has been established, the designer will be able to use this information to prepare more accurate specifications for the construction of new trays. Information is given on static and dynamic analysis, modal response, and tray failure details.

  13. Reticular reflex myoclonus: a physiological type of human post-hypoxic myoclonus.

    PubMed Central

    Hallett, M; Chadwick, D; Adam, J; Marsden, C D

    1977-01-01

    A patient with post-hypoxic myoclonus, sensitive to therapy with 5-hydroxytryptophan and clonazepam, was subjected to detailed electrophysiological investigation. Brief generalised jerks followed the critical stimulus of muscle stretch. The electroencephalogram showed generalised spikes that were associated with, but not time locked to, the myoclonus. The cranial nerve nuclei were activated upward. Analysis of the findings suggests that the mechanism of the myoclonus is hyperactivity of a reflex mediated in the reticular formation of the medulla oblongata. PMID:301926

  14. Space shuttle post-entry and landing analysis. Volume 2: Appendices

    NASA Technical Reports Server (NTRS)

    Crawford, B. S.; Duiven, E. M.

    1973-01-01

    Four candidate navigation systems for the space shuttle orbiter approach and landing phase are evaluated in detail. These include three conventional navaid systems and a single-station one-way Doppler system. In each case, a Kalman filter is assumed to be mechanized in the onboard computer, blending the navaid data with IMU and altimeter data. Filter state dimensions ranging from 6 to 24 are involved in the candidate systems. Comprehensive truth models with state dimensions ranging from 63 to 82 are formulated and used to generate detailed error budgets and sensitivity curves illustrating the effect of variations in the size of individual error sources on touchdown accuracy. The projected overall performance of each system is shown in the form of time histories of position and velocity error components.
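The blending described above, navaid measurements corrected against inertially propagated state, is the standard Kalman predict/update cycle. A deliberately tiny one-dimensional, constant-velocity sketch (all noise values are assumed for illustration, nothing like the 6-to-24-state filters or 63-to-82-state truth models of the study):

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (position, velocity)
H = np.array([[1.0, 0.0]])              # navaid measures position only
Q = np.diag([0.01, 0.01])               # process noise, stands in for IMU drift
R = np.array([[25.0]])                  # navaid measurement noise variance

x = np.array([0.0, 1.0])                # initial state estimate
P = np.eye(2) * 100.0                   # initial covariance (poorly known)

rng = np.random.default_rng(1)
true_pos = 0.0
for _ in range(50):
    true_pos += 1.0 * dt                     # truth: constant 1 m/s
    z = true_pos + rng.normal(0.0, 5.0)      # noisy navaid fix
    # predict: propagate state and covariance through the dynamics
    x = F @ x
    P = F @ P @ F.T + Q
    # update: blend the navaid fix in proportion to the Kalman gain
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
```

The sensitivity-curve methodology in the report amounts to perturbing individual truth-model error sources around such a filter and recording the effect on touchdown position and velocity error.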

  15. Analysis of Particulate Contamination During Launch of MMS Mission

    NASA Technical Reports Server (NTRS)

    Brieda, Lubos; Barrie, Alexander; Hughes, David; Errigo, Therese

    2010-01-01

    NASA's Magnetospheric MultiScale (MMS) is an unmanned constellation of four identical spacecraft designed to investigate magnetic reconnection by obtaining detailed measurements of plasma properties in Earth's magnetopause and magnetotail. Each of the four identical satellites carries a suite of instruments which characterize the ambient ion and electron energy spectrum and composition. Some of these instruments utilize microchannel plates and are sensitive to particulate contamination. In this paper, we analyze the transport of particulates during pre-launch, launch and ascent events, and use the analysis to obtain quantitative predictions of contamination impact on the instruments. Viewfactor calculation is performed by considering the gravitational and aerodynamic forces acting on the particles.

  16. Study on reaction mechanism by analysis of kinetic energy spectra of light particles and formation of final products

    NASA Astrophysics Data System (ADS)

    Giardina, G.; Mandaglio, G.; Nasirov, A. K.; Anastasi, A.; Curciarello, F.; Fazio, G.

    2018-05-01

    The sensitivity of the reaction mechanism in the formation of the compound nucleus (CN), probed through the analysis of the kinetic energy spectra of light particles and of the reaction products, is shown. The dependence of the P_CN fusion probability of the reactants and the W_sur survival probability of the CN against fission at its deexcitation on the mass and charge symmetries in the entrance channel of heavy-ion collisions, as well as on the neutron numbers, is discussed. The possibility of conducting a complex program of investigations of complete fusion by reliable means depends on detailed and refined methods of experimental and theoretical analysis.

  17. Analysis of Particle Image Velocimetry (PIV) Data for Acoustic Velocity Measurements

    NASA Technical Reports Server (NTRS)

    Blackshire, James L.

    1997-01-01

    Acoustic velocity measurements were taken using Particle Image Velocimetry (PIV) in a Normal Incidence Tube configuration at various frequency, phase, and amplitude levels. This report presents the results of the PIV analysis and data reduction portions of the test and details the processing that was done. Estimates of lower measurement sensitivity levels were determined based on the PIV image quality, correlation, and noise level parameters used in the test. Comparisons of the measurements with linear acoustic theory are presented. The onset of nonlinear, harmonic-frequency acoustic levels was also studied for various decibel and frequency levels ranging from 90 to 132 dB and 500 to 3000 Hz, respectively.

  18. Modeling and analysis of the TF30-P-3 compressor system with inlet pressure distortion

    NASA Technical Reports Server (NTRS)

    Mazzawy, R. S.; Banks, G. A.

    1976-01-01

    Circumferential inlet distortion testing of a TF30-P-3 afterburning turbofan engine was conducted at NASA-Lewis Research Center. Pratt and Whitney Aircraft analyzed the data using its multiple segment parallel compressor model and classical compressor theory. Distortion attenuation analysis resulted in a detailed flow field calculation with good agreement between multiple segment model predictions and the test data. Sensitivity of the engine stall line to circumferential inlet distortion was calculated on the basis of parallel compressor theory to be more severe than indicated by the data. However, the calculated stall site location was in agreement with high response instrumentation measurements.

  19. Frontal affinity chromatography: A unique research tool for biospecific interaction that promotes glycobiology

    PubMed Central

    KASAI, Kenichi

    2014-01-01

    Combination of bioaffinity and chromatography gave birth to affinity chromatography. A further combination with frontal analysis resulted in creation of frontal affinity chromatography (FAC). This new versatile research tool enabled detailed analysis of weak interactions that play essential roles in living systems, especially those between complex saccharides and saccharide-binding proteins. FAC now becomes the best method for the investigation of saccharide-binding proteins (lectins) from viewpoints of sensitivity, accuracy, and efficiency, and is contributing greatly to the development of glycobiology. It opened a door leading to deeper understanding of the significance of saccharide recognition in life. The theory is also concisely described. PMID:25169774

  20. Boeing CST-100 Starliner Seat Test

    NASA Image and Video Library

    2017-02-21

    Engineers working with Boeing's CST-100 Starliner test the spacecraft's seat design in Mesa, Arizona, focusing on how the spacecraft seats would protect an astronaut's head, neck, and spine during the 240-mile descent from the International Space Station. The company incorporated test dummies for a detailed analysis of impacts on a crew returning to Earth. The human-sized dummies were equipped with sensitive instrumentation and secured in the seats for 30 drop tests at varying heights, angles, velocities, and seat orientations in order to mimic actual landing conditions. High-speed cameras captured the footage for further analysis. The Starliner spacecraft is being developed in partnership with NASA's Commercial Crew Program.

  1. Fluorescence-Assisted Gamma Spectrometry for Surface Contamination Analysis

    NASA Astrophysics Data System (ADS)

    Ihantola, Sakari; Sand, Johan; Perajarvi, Kari; Toivonen, Juha; Toivonen, Harri

    2013-02-01

    A fluorescence-based alpha-gamma coincidence spectrometry approach has been developed for the analysis of alpha-emitting radionuclides. The thermalization of alpha particles in air produces UV light, which in turn can be detected over long distances. The simultaneous detection of UV and gamma photons allows detailed gamma analyses of a single spot of interest even in highly active surroundings. Alpha particles can also be detected indirectly from samples inside sealed plastic bags, which minimizes the risk of cross-contamination. The position-sensitive alpha-UV-gamma coincidence technique reveals the presence of alpha emitters and identifies the nuclides ten times faster than conventional gamma spectrometry.

  2. Analysis of all-optical temporal integrator employing phase-shifted DFB-SOA.

    PubMed

    Jia, Xin-Hong; Ji, Xiao-Ling; Xu, Cong; Wang, Zi-Nan; Zhang, Wei-Li

    2014-11-17

    An all-optical temporal integrator using a phase-shifted distributed-feedback semiconductor optical amplifier (DFB-SOA) is investigated. The influences of system parameters on its energy transmittance and integration error are explored in detail. The numerical analysis shows that enhanced energy transmittance and an extended integration time window can be simultaneously achieved by increasing the injected current in the vicinity of the lasing threshold. We find that the range of input pulse widths with lower integration error is highly sensitive to the injected optical power, due to gain saturation and the induced detuning-deviation mechanism. The initial frequency detuning should also be carefully chosen to suppress the integration deviation from the ideal waveform output.

  3. Differential die-away analysis system response modeling and detector design

    NASA Astrophysics Data System (ADS)

    Jordan, K. A.; Gozani, T.; Vujic, J.

    2008-05-01

    Differential die-away analysis (DDAA) is a sensitive technique for detecting the presence of fissile materials such as 235U and 239Pu. DDAA uses a high-energy (14 MeV) pulsed neutron generator to interrogate a shipping container. The signature is a fast neutron signal hundreds of microseconds after the cessation of the neutron pulse. This fast neutron signal has a decay time identical to the thermal neutron diffusion decay time of the inspected cargo. The theoretical aspects of a cargo inspection system based on the differential die-away technique are explored. A detailed mathematical model of the system is developed, and experimental results validating this model are presented.
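Since the DDAA signature is an exponential die-away, the measurable quantity is a decay time. A minimal sketch of extracting it from synthetic pulsed-neutron count data (the 400 microsecond decay time and the count levels are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

t = np.arange(100.0, 1000.0, 20.0)     # microseconds after the pulse
tau_true = 400.0                       # assumed die-away time [us]
expected = 1e4 * np.exp(-t / tau_true) # exponential die-away of count rate
counts = rng.poisson(expected)         # Poisson counting statistics

# Linear least squares on log(counts) recovers the decay constant:
# log N(t) = log N0 - t / tau, so tau = -1 / slope
mask = counts > 0
slope, intercept = np.polyfit(t[mask], np.log(counts[mask]), 1)
tau_fit = -1.0 / slope
```

In an inspection scenario the fitted die-away time characterizes the cargo's thermal-neutron diffusion, while an elevated delayed fast-neutron yield flags fissile material.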

  4. A Novel Shape Parameterization Approach

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.

    1999-01-01

    This paper presents a novel parameterization approach for complex shapes suitable for a multidisciplinary design optimization application. The approach consists of two basic concepts: (1) parameterizing the shape perturbations rather than the geometry itself and (2) performing the shape deformation by means of the soft objects animation algorithms used in computer graphics. Because the formulation presented in this paper is independent of grid topology, we can treat computational fluid dynamics and finite element grids in a similar manner. The proposed approach is simple, compact, and efficient. Also, the analytical sensitivity derivatives are easily computed for use in a gradient-based optimization. This algorithm is suitable for low-fidelity (e.g., linear aerodynamics and equivalent laminated plate structures) and high-fidelity analysis tools (e.g., nonlinear computational fluid dynamics and detailed finite element modeling). This paper contains the implementation details of parameterizing for planform, twist, dihedral, thickness, and camber. The results are presented for a multidisciplinary design optimization application consisting of nonlinear computational fluid dynamics, detailed computational structural mechanics, performance, and a simple propulsion module.

  5. Multidisciplinary Aerodynamic-Structural Shape Optimization Using Deformation (MASSOUD)

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.

    2000-01-01

    This paper presents a multidisciplinary shape parameterization approach. The approach consists of two basic concepts: (1) parameterizing the shape perturbations rather than the geometry itself and (2) performing the shape deformation by means of the soft object animation algorithms used in computer graphics. Because the formulation presented in this paper is independent of grid topology, we can treat computational fluid dynamics and finite element grids in the same manner. The proposed approach is simple, compact, and efficient. Also, the analytical sensitivity derivatives are easily computed for use in a gradient-based optimization. This algorithm is suitable for low-fidelity (e.g., linear aerodynamics and equivalent laminated plate structures) and high-fidelity (e.g., nonlinear computational fluid dynamics and detailed finite element modeling) analysis tools. This paper contains the implementation details of parameterizing for planform, twist, dihedral, thickness, camber, and free-form surface. Results are presented for a multidisciplinary application consisting of nonlinear computational fluid dynamics, detailed computational structural mechanics, and a simple performance module.

  6. Multidisciplinary Aerodynamic-Structural Shape Optimization Using Deformation (MASSOUD)

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.

    2000-01-01

    This paper presents a multidisciplinary shape parameterization approach. The approach consists of two basic concepts: (1) parameterizing the shape perturbations rather than the geometry itself and (2) performing the shape deformation by means of the soft object animation algorithms used in computer graphics. Because the formulation presented in this paper is independent of grid topology, we can treat computational fluid dynamics and finite element grids in a similar manner. The proposed approach is simple, compact, and efficient. Also, the analytical sensitivity derivatives are easily computed for use in a gradient-based optimization. This algorithm is suitable for low-fidelity (e.g., linear aerodynamics and equivalent laminated plate structures) and high-fidelity (e.g., nonlinear computational fluid dynamics and detailed finite element modeling) analysis tools. This paper contains the implementation details of parameterizing for planform, twist, dihedral, thickness, camber, and free-form surface. Results are presented for a multidisciplinary design optimization application consisting of nonlinear computational fluid dynamics, detailed computational structural mechanics, and a simple performance module.

  7. An experimental and numerical investigation of shock-wave induced turbulent boundary-layer separation at hypersonic speeds

    NASA Technical Reports Server (NTRS)

    Marvin, J. G.; Horstman, C. C.; Rubesin, M. W.; Coakley, T. J.; Kussoy, M. I.

    1975-01-01

    An experiment designed to test and guide computations of the interaction of an impinging shock wave with a turbulent boundary layer is described. Detailed mean flow-field and surface data are presented for two shock strengths which resulted in attached and separated flows, respectively. Numerical computations, employing the complete time-averaged Navier-Stokes equations along with algebraic eddy-viscosity and turbulent Prandtl number models to describe shear stress and heat flux, are used to illustrate the dependence of the computations on the particulars of the turbulence models. Models appropriate for zero-pressure-gradient flows predicted the overall features of the flow fields, but were deficient in predicting many of the details of the interaction regions. Improvements to the turbulence model parameters were sought through a combination of detailed data analysis and computer simulations which tested the sensitivity of the solutions to model parameter changes. Computer simulations using these improvements are presented and discussed.

  8. Modeling Effects of Annealing on Coal Char Reactivity to O2 and CO2, Based on Preparation Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holland, Troy; Bhat, Sham; Marcy, Peter

    Oxy-fired coal combustion is a promising potential carbon capture technology. Predictive computational fluid dynamics (CFD) simulations are valuable tools in evaluating and deploying oxyfuel and other carbon capture technologies, either as retrofit technologies or for new construction. However, accurate predictive combustor simulations require physically realistic submodels with low computational requirements. A recent sensitivity analysis of a detailed char conversion model (Char Conversion Kinetics (CCK)) found thermal annealing to be an extremely sensitive submodel. In the present work, further analysis of the previous annealing model revealed significant disagreement with numerous datasets from experiments performed after that annealing model was developed. The annealing model was accordingly extended to reflect experimentally observed reactivity loss due to the thermal annealing of a variety of coals under diverse char preparation conditions. The model extension was informed by a Bayesian calibration analysis. In addition, since oxyfuel conditions include extraordinarily high levels of CO2, the development of a first-ever CO2 reactivity loss model due to annealing is presented.

  9. Modeling Effects of Annealing on Coal Char Reactivity to O2 and CO2, Based on Preparation Conditions

    DOE PAGES

    Holland, Troy; Bhat, Sham; Marcy, Peter; ...

    2017-08-25

    Oxy-fired coal combustion is a promising potential carbon capture technology. Predictive computational fluid dynamics (CFD) simulations are valuable tools in evaluating and deploying oxyfuel and other carbon capture technologies, either as retrofit technologies or for new construction. However, accurate predictive combustor simulations require physically realistic submodels with low computational requirements. A recent sensitivity analysis of a detailed char conversion model (Char Conversion Kinetics (CCK)) found thermal annealing to be an extremely sensitive submodel. In the present work, further analysis of the previous annealing model revealed significant disagreement with numerous datasets from experiments performed after that annealing model was developed. The annealing model was accordingly extended to reflect experimentally observed reactivity loss due to the thermal annealing of a variety of coals under diverse char preparation conditions. The model extension was informed by a Bayesian calibration analysis. In addition, since oxyfuel conditions include extraordinarily high levels of CO2, the development of a first-ever CO2 reactivity loss model due to annealing is presented.

  10. Aerosol Retrievals over the Ocean using Channel 1 and 2 AVHRR Data: A Sensitivity Analysis and Preliminary Results

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Geogdzhayev, Igor V.; Cairns, Brian; Rossow, William B.; Lacis, Andrew A.

    1999-01-01

    This paper outlines the methodology of interpreting channel 1 and 2 AVHRR radiance data over the oceans and describes a detailed analysis of the sensitivity of monthly averages of retrieved aerosol parameters to the assumptions made in different retrieval algorithms. The analysis is based on using real AVHRR data and exploiting accurate numerical techniques for computing single and multiple scattering and spectral absorption of light in the vertically inhomogeneous atmosphere-ocean system. We show that two-channel algorithms can be expected to provide significantly more accurate and less biased retrievals of the aerosol optical thickness than one-channel algorithms and that imperfect cloud screening and calibration uncertainties are by far the largest sources of errors in the retrieved aerosol parameters. Both underestimating and overestimating aerosol absorption as well as the potentially strong variability of the real part of the aerosol refractive index may lead to regional and/or seasonal biases in optical thickness retrievals. The Angstrom exponent appears to be the most invariant aerosol size characteristic and should be retrieved along with optical thickness as the second aerosol parameter.
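The Angstrom exponent mentioned as the second retrieved parameter follows from the two-channel optical thicknesses through the power-law ansatz tau(lambda) ~ lambda^(-alpha). A one-line sketch with illustrative (not retrieved) values, using nominal AVHRR channel 1 and 2 wavelengths:

```python
import math

lam1, lam2 = 0.65, 0.85    # nominal channel wavelengths [micron]
tau1, tau2 = 0.20, 0.15    # aerosol optical thicknesses (assumed values)

# tau(lambda) ~ lambda**(-alpha)  =>  alpha = -ln(tau1/tau2) / ln(lam1/lam2)
alpha = -math.log(tau1 / tau2) / math.log(lam1 / lam2)
```

Because alpha depends only on the ratio of the two optical thicknesses, it is less sensitive than tau itself to the calibration and cloud-screening biases the paper identifies, which is consistent with its description as the most invariant aerosol size characteristic.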

  11. Dependence of intramyocardial pressure and coronary flow on ventricular loading and contractility: a model study.

    PubMed

    Bovendeerd, Peter H M; Borsje, Petra; Arts, Theo; van De Vosse, Frans N

    2006-12-01

    The phasic coronary arterial inflow during the normal cardiac cycle has been explained with simple (waterfall, intramyocardial pump) models, emphasizing the role of ventricular pressure. To explain changes in isovolumic and low afterload beats, these models were extended with the effect of three-dimensional wall stress, nonlinear characteristics of the coronary bed, and extravascular fluid exchange. With the associated increase in the number of model parameters, a detailed parameter sensitivity analysis has become difficult. Therefore we investigated the primary relations between ventricular pressure and volume, wall stress, intramyocardial pressure and coronary blood flow, with a mathematical model with a limited number of parameters. The model replicates several experimental observations: the phasic character of coronary inflow is virtually independent of maximum ventricular pressure, the amplitude of the coronary flow signal varies about proportionally with cardiac contractility, and intramyocardial pressure in the ventricular wall may exceed ventricular pressure. A parameter sensitivity analysis shows that the normalized amplitude of coronary inflow is mainly determined by contractility, reflected in ventricular pressure and, at low ventricular volumes, radial wall stress. Normalized flow amplitude is less sensitive to myocardial coronary compliance and resistance, and to the relation between active fiber stress, time, and sarcomere shortening velocity.

  12. Effectiveness of urine fibronectin as a non-invasive diagnostic biomarker in bladder cancer patients: a systematic review and meta-analysis.

    PubMed

    Dong, Fan; Shen, Yifan; Xu, Tianyuan; Wang, Xianjin; Gao, Fengbin; Zhong, Shan; Chen, Shanwen; Shen, Zhoujun

    2018-03-21

    Previous research has indicated that measurement of urine fibronectin (Fn) could be a potential diagnostic test for bladder cancer (BCa). We conducted this meta-analysis to fully assess the diagnostic value of urine Fn for BCa detection. A systematic literature search in PubMed, ISI Web of Science, EMBASE, the Cochrane Library, and CBM was carried out to identify eligible studies evaluating urine Fn in diagnosing BCa. Pooled sensitivity, specificity, and diagnostic odds ratio (DOR) with their 95% confidence intervals (CIs) were calculated, and summary receiver operating characteristic (SROC) curves were established. STATA 13.0, Meta-Disc 1.4, and RevMan 5.3 were used for the meta-analysis. Eight separate studies with 744 bladder cancer patients were included. The pooled sensitivity, specificity, and DOR were 0.80 (95%CI = 0.77-0.83), 0.79 (95%CI = 0.73-0.84), and 15.18 (95%CI = 10.07-22.87), respectively, and the area under the curve (AUC) of the SROC was 0.83 (95%CI = 0.79-0.86). The diagnostic power of a combined method (urine Fn combined with urine cytology) was also evaluated; its sensitivity and AUC were significantly higher (0.86 (95%CI = 0.82-0.90) and 0.89 (95%CI = 0.86-0.92), respectively). Meta-regression along with subgroup analysis based on various covariates revealed the potential sources of the heterogeneity and the detailed diagnostic value of each subgroup. Sensitivity analysis supported that the result was robust. No threshold effect or publication bias was found. Urine Fn may become a promising non-invasive biomarker for bladder cancer with relatively satisfactory diagnostic power, and the combination of urine Fn with cytology could be an alternative option for detecting BCa in clinical practice. The potential value of urine Fn still needs to be validated in large, multi-center, prospective studies.
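The pooled statistics reported above are derived from per-study 2x2 tables of test results against the reference standard. The per-study quantities can be sketched as follows; the counts are invented for illustration and are not taken from any of the included studies:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and diagnostic odds ratio from a 2x2 table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    # DOR = (TP/FN) / (FP/TN): odds of a positive test in the diseased
    # group divided by the odds of a positive test in the healthy group.
    dor = (tp * tn) / (fp * fn)
    return sensitivity, specificity, dor

# Hypothetical single-study counts (not from the meta-analysis)
sens, spec, dor = diagnostic_metrics(tp=80, fp=21, fn=20, tn=79)
```

A meta-analysis then pools these per-study estimates, weighting by study precision, and summarizes the sensitivity/specificity trade-off with an SROC curve.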

  13. External details revisited - A new taxonomy for coding 'non-episodic' content during autobiographical memory retrieval.

    PubMed

    Strikwerda-Brown, Cherie; Mothakunnel, Annu; Hodges, John R; Piguet, Olivier; Irish, Muireann

    2018-04-24

    Autobiographical memory (ABM) is typically held to comprise episodic and semantic elements, with the vast majority of studies to date focusing on profiles of episodic details in health and disease. In this context, 'non-episodic' elements are often considered to reflect semantic processing or are discounted from analyses entirely. Mounting evidence suggests that rather than reflecting one unitary entity, semantic autobiographical information may contain discrete subcomponents, which vary in their relative degree of semantic or episodic content. This study aimed to (1) review the existing literature to formally characterize the variability in the analysis of 'non-episodic' content (i.e., external details) on the Autobiographical Interview and (2) use these findings to create a theoretically grounded framework for coding external details. Our review exposed discrepancies in the reporting and interpretation of external details across studies, reinforcing the need for a new, consistent approach. We validated our new external details scoring protocol (the 'NExt' taxonomy) in patients with Alzheimer's disease (n = 18) and semantic dementia (n = 13) and in 20 healthy older control participants, and compared profiles of the NExt subcategories across groups and time periods. Our results revealed increased sensitivity of the NExt taxonomy in discriminating between the ABM profiles of patient groups, compared with traditionally used internal and external detail metrics. Further, remote and recent autobiographical memories displayed distinct compositions of the NExt detail types. This study is the first to provide a fine-grained and comprehensive taxonomy to parse external details into intuitive subcategories and to validate this protocol in neurodegenerative disorders. © 2018 The British Psychological Society.

  14. Detailed statistical contact angle analyses; "slow moving" drops on inclining silicon-oxide surfaces.

    PubMed

    Schmitt, M; Groß, K; Grub, J; Heib, F

    2015-06-01

    Contact angle determination by the sessile drop technique is essential for characterising surface properties in science and in industry. Different specific angles can be observed on every solid, correlated with the advancing or the receding of the triple line. Many procedures and definitions for determining specific angles exist, which are often neither comprehensible nor reproducible; establishing standard, reproducible, and valid methods for determining advancing/receding contact angles is therefore one of the most important tasks in this area. This contribution introduces novel techniques for the detailed analysis of dynamic contact angle measurements (sessile drop) that are applicable to axisymmetric and non-axisymmetric drops. Not only the recently presented fit solution by a sigmoid function and the independent analysis of the different parameters (inclination, contact angle, velocity of the triple point), but also the dependent analysis, is explained in detail here for the first time. These approaches yield contact angle data, and different routes to specific contact angles, that are independent of the skills and subjectivity of the operator. As an example, the motion behaviour of droplets on flat silicon-oxide surfaces after different surface treatments is measured dynamically by the sessile drop technique while inclining the sample plate. The triple points, the inclination angles, and the downhill (advancing motion) and uphill (receding motion) angles obtained by high-precision drop shape analysis are statistically analysed, both independently and dependently. Because of the small distance covered in the dependent analysis (<0.4 mm) and the dominance of counted events with small velocity, the measurements are little influenced by motion dynamics, and the procedure can be called a "slow moving" analysis. The presented procedures are especially sensitive to the range from static to "slow moving" dynamic contact angle determination and are characterised by small deviations of the computed values. In addition to the detailed introduction of these novel analytical approaches and the fit solution, specific motion relations for drops on inclined surfaces are presented, together with detailed observations on the reactivity of the freshly cleaned silicon wafer surface, which results in acceleration behaviour (reactive de-wetting). Copyright © 2014 Elsevier Inc. All rights reserved.
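A sigmoid fit of contact angle against inclination, of the kind mentioned above, can be sketched generically. The logistic form, the parameter names, and all numbers below are illustrative assumptions, not the authors' exact fit solution:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(alpha, theta_min, theta_max, alpha_0, width):
    """Logistic transition of contact angle theta [deg] with inclination alpha [deg]."""
    return theta_min + (theta_max - theta_min) / (1.0 + np.exp(-(alpha - alpha_0) / width))

# Synthetic "measured" downhill contact angles over inclinations 0..50 degrees,
# generated from an invented ground-truth parameter set.
alpha = np.linspace(0.0, 50.0, 101)
true_params = (20.0, 60.0, 25.0, 5.0)
theta = sigmoid(alpha, *true_params)

# Least-squares fit recovers the parameters from the (noise-free) data.
popt, _ = curve_fit(sigmoid, alpha, theta, p0=(15.0, 55.0, 20.0, 4.0))
```

Fitting a smooth parametric curve in this way is one route to specific contact angles that does not depend on an operator choosing individual frames by eye.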

  15. How Does Microanalysis of Mother-Infant Communication Inform Maternal Sensitivity and Infant Attachment?

    PubMed Central

    Beebe, Beatrice; Steele, Miriam

    2013-01-01

    Microanalysis research on 4-month mother-infant face-to-face communication operates like a “social microscope” and identifies aspects of maternal sensitivity and the origins of attachment with a more detailed lens. We hope to enhance a dialogue between these two paradigms, microanalysis of mother-infant communication and maternal sensitivity and emerging working models of attachment. The prediction of infant attachment from microanalytic approaches and their contribution to concepts of maternal sensitivity are described. We summarize aspects of one microanalytic study by Beebe and colleagues (2010) that documents new communication patterns between mothers and infants at 4 months that predict future disorganized (vs. secure) attachment. The microanalysis approach opens up a new window on the details of the micro-processes of face-to-face communication. It provides a new, rich set of behaviors with which to extend our understanding of the origins of infant attachment and of maternal sensitivity. PMID:24299136

  16. How does microanalysis of mother-infant communication inform maternal sensitivity and infant attachment?

    PubMed

    Beebe, Beatrice; Steele, Miriam

    2013-01-01

    Microanalysis research on 4-month infant-mother face-to-face communication operates like a "social microscope" and identifies aspects of maternal sensitivity and the origins of attachment with a more detailed lens. We hope to enhance a dialogue between these two paradigms, microanalysis of mother-infant communication and maternal sensitivity and emerging working models of attachment. The prediction of infant attachment from microanalytic approaches and their contribution to concepts of maternal sensitivity are described. We summarize aspects of one microanalytic study by Beebe and colleagues published in 2010 that documents new communication patterns between mothers and infants at 4 months that predict future disorganized (vs. secure) attachment. The microanalysis approach opens up a new window on the details of the micro-processes of face-to-face communication. It provides a new, rich set of behaviors with which to extend our understanding of the origins of infant attachment and of maternal sensitivity.

  17. Measuring voice outcomes: state of the science review.

    PubMed

    Carding, Pau N; Wilson, J A; MacKenzie, K; Deary, I J

    2009-08-01

    Researchers evaluating voice disorder interventions currently have a plethora of voice outcome measurement tools from which to choose. Faced with such a wide choice, it would be beneficial to establish a clear rationale to guide selection. This article reviews the published literature on the three main areas of voice outcome assessment: (1) perceptual rating of voice quality, (2) acoustic measurement of the speech signal and (3) patient self-reporting of voice problems. We analysed the published reliability, validity, sensitivity to change and utility of the common outcome measurement tools in each area. From the data, we suggest that routine voice outcome measurement should include (1) an expert rating of voice quality (using the Grade-Roughness-Breathiness-Asthenia-Strain rating scale) and (2) a short self-reporting tool (either the Vocal Performance Questionnaire or the Vocal Handicap Index 10). These measures have high validity, the best reported reliability to date, good sensitivity to change data and excellent utility ratings. However, their application and administration require attention to detail. Acoustic measurement has arguable validity and poor reliability data at the present time. Other areas of voice outcome measurement (e.g. stroboscopy and aerodynamic phonatory measurements) require similarly detailed research and analysis.

  18. Gene expression analysis upon lncRNA DDSR1 knockdown in human fibroblasts

    PubMed Central

    Jia, Li; Sun, Zhonghe; Wu, Xiaolin; Misteli, Tom; Sharma, Vivek

    2015-01-01

    Long non-coding RNAs (lncRNAs) play important roles in regulating diverse biological processes including DNA damage and repair. We have recently reported that the DNA damage inducible lncRNA DNA damage-sensitive RNA1 (DDSR1) regulates DNA repair by homologous recombination (HR). Since lncRNAs also modulate gene expression, we identified gene expression changes upon DDSR1 knockdown in human fibroblast cells. Gene expression analysis after RNAi treatment targeted against DDSR1 revealed 119 genes that show differential expression. Here we provide a detailed description of the microarray data (NCBI GEO accession number GSE67048) and the data analysis procedure associated with the publication by Sharma et al., 2015 in EMBO Reports [1]. PMID:26697398

  19. Disgust sensitivity and eating disorder symptoms in a non-clinical population.

    PubMed

    Mayer, Birgit; Muris, Peter; Bos, Arjan E R; Suijkerbuijk, Chantal

    2008-12-01

    In order to further explore the relationship between disgust sensitivity and eating disorder symptoms, 2 studies were carried out. In the first study, 352 higher education students (166 women, 186 men) completed a set of questionnaires measuring various aspects of disgust sensitivity and eating disorder symptoms. A correlational analysis revealed that there were few significant correlations between disgust scales and eating pathology scores. One exception was the relation between disgust sensitivity and external eating behavior, although this link only emerged in women. To investigate this relationship in more detail, Study 2 confronted women high (n=29) and low (n=30) on external eating behavior with a series of disgusting and neutral pictures. It was hypothesized that women who scored high on external eating would display shorter viewing times of disgusting pictures (i.e., show more avoidance behavior) than women scoring low on external eating. However, this hypothesis was not confirmed by the data. Altogether, the results of these studies suggest that there seems to be no convincing relationship between disgust sensitivity and eating disorder symptomatology, thereby casting doubts on the role of this individual difference factor in the development of eating pathology.

  20. Sensitivity of the Boundary Plasma to the Plasma-Material Interface

    DOE PAGES

    Canik, John M.; Tang, X. -Z.

    2017-01-01

    While the sensitivity of the scrape-off layer and divertor plasma to the highly uncertain cross-field transport assumptions is widely recognized, the plasma is also sensitive to the details of the plasma-material interface (PMI) models used as part of comprehensive predictive simulations. In this paper, these PMI sensitivities are studied by varying the relevant sub-models within the SOLPS plasma transport code. Two aspects are explored: the sheath model used as a boundary condition in SOLPS, and fast particle reflection rates for ions impinging on a material surface. Both of these have been the subject of recent high-fidelity simulation efforts aimed at improving the understanding and prediction of these phenomena. It is found that in both cases quantitative changes to the plasma solution result from modification of the PMI model, with a larger impact in the case of the reflection coefficient variation. This indicates the necessity to better quantify the uncertainties within the PMI models themselves and to perform thorough sensitivity analysis to propagate these throughout the boundary model; this is especially important for validation against experiment, where the error in the simulation is a critical and less-studied piece of the code-experiment comparison.

  1. Theoretical Study on Effects of Hydrogen-Bonding and Molecule-Cation Interactions on the Sensitivity of HMX.

    PubMed

    Li, Yunlu; Wu, Junpeng; Cao, Duanlin; Wang, Jianlong

    2016-10-04

    To assess the effects of weak interactions on the sensitivity of HMX, eleven complexes of HMX (six hydrogen-bonding complexes and five molecule-cation complexes) have been studied via quantum chemical treatment. The geometric and electronic structures were determined using the DFT-B3LYP and MP2(full) methods with the 6-311++G(2df, 2p) and aug-cc-pVTZ basis sets. The changes in the bond dissociation energy (BDE) of the trigger bond (N-NO2 in HMX) and in the nitro group charge were computed in detail to assess the sensitivity changes of HMX. The results indicate that upon complex formation the BDE increases and the charge of the nitro group becomes more negative, suggesting that the strength of the N-NO2 trigger bond is enhanced and thus the sensitivity of HMX is reduced. Atoms-in-molecules analysis has also been carried out to understand the nature of the intermolecular interactions and the strength of the trigger bonds.

  2. Wavenumber-frequency Spectra of Pressure Fluctuations Measured via Fast Response Pressure Sensitive Paint

    NASA Technical Reports Server (NTRS)

    Panda, J.; Roozeboom, N. H.; Ross, J. C.

    2016-01-01

    The recent advancement in fast-response Pressure-Sensitive Paint (PSP) allows time-resolved measurements of unsteady pressure fluctuations from a dense grid of spatial points on a wind tunnel model. This capability allows for direct calculation of the wavenumber-frequency (k-ω) spectrum of pressure fluctuations. Such data, useful for the vibro-acoustics analysis of aerospace vehicles, are difficult to obtain otherwise. For the present work, time histories of pressure fluctuations on a flat plate subjected to vortex shedding from a rectangular bluff body were measured using PSP. The light intensity levels in the photographic images were then converted to instantaneous pressure histories by applying calibration constants, which were calculated from a few dynamic pressure sensors placed at selected points on the plate. Fourier transform of the time histories from a large number of spatial points provided k-ω spectra of the pressure fluctuations. These data provide a first glimpse into the possibility of creating detailed forcing functions for the vibro-acoustics analysis of aerospace vehicles, albeit for a limited frequency range.
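Computing a wavenumber-frequency spectrum from a space-time array of pressure histories amounts to a two-dimensional Fourier transform over time and space. A minimal synthetic sketch; the grid, sampling rates, and the traveling-wave signal are invented for illustration:

```python
import numpy as np

# Synthetic space-time pressure field: a single wave traveling in +x
nx, nt = 64, 256
dx, dt = 0.0025, 1.0 / 25600.0      # spatial spacing [m], sample interval [s]
x = np.arange(nx) * dx
t = np.arange(nt) * dt
nu0, f0 = 50.0, 500.0               # spatial frequency [cycles/m], frequency [Hz]
p = np.cos(2 * np.pi * (nu0 * x[None, :] - f0 * t[:, None]))

# 2-D FFT over time (axis 0) and space (axis 1) gives the k-omega spectrum
P = np.fft.fftshift(np.fft.fft2(p)) / (nt * nx)
spectrum = np.abs(P) ** 2
freqs = np.fft.fftshift(np.fft.fftfreq(nt, dt))                    # [Hz]
wavenumbers = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(nx, dx))  # [rad/m]

# The spectral peak sits at the wave's frequency and wavenumber
i, j = np.unravel_index(np.argmax(spectrum), spectrum.shape)
```

The energy of a traveling wave concentrates along the line ω = c·k in the k-ω plane, which is why such spectra separate convecting disturbances from acoustic ones.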

  3. Evaluation and Improvement of Liquid Propellant Rocket Chugging Analysis Techniques. Part 1: A One-Dimensional Analysis of Low Frequency Combustion Instability in the Fuel Preburner of the Space Shuttle Main Engine. Final Report M.S. Thesis - Aug. 1986

    NASA Technical Reports Server (NTRS)

    Lim, Kair Chuan

    1986-01-01

    Low frequency combustion instability, known as chugging, is consistently experienced during shutdown in the fuel and oxidizer preburners of the Space Shuttle Main Engines. Such problems always occur during the helium purge of the residual oxidizer from the preburner manifolds during the shutdown sequence. Possible causes and triggering mechanisms are analyzed and details in modeling the fuel preburner chug are presented. A linearized chugging model, based on the foundation of previous models, capable of predicting the chug occurrence is discussed and the predicted results are presented and compared to experimental work performed by NASA. Sensitivity parameters such as chamber pressure, fuel and oxidizer temperatures, and the effective bulk modulus of the liquid oxidizer are considered in analyzing the fuel preburner chug. The computer program CHUGTEST is utilized to generate the stability boundary for each sensitivity study and the region for stable operation is identified.

  4. The 2006 Kennedy Space Center Range Reference Atmosphere Model Validation Study and Sensitivity Analysis to the Performance of the National Aeronautics and Space Administration's Space Shuttle Vehicle

    NASA Technical Reports Server (NTRS)

    Burns, Lee; Decker, Ryan; Harrington, Brian; Merry, Carl

    2008-01-01

    The Kennedy Space Center (KSC) Range Reference Atmosphere (RRA) is a statistical model that summarizes wind and thermodynamic atmospheric variability from surface to 70 km. The National Aeronautics and Space Administration's (NASA) Space Shuttle program, which launches from KSC, utilizes the KSC RRA data to evaluate environmental constraints on various aspects of the vehicle during ascent. An update to the KSC RRA was recently completed. As part of the update, the Natural Environments Branch at NASA's Marshall Space Flight Center (MSFC) conducted a validation study and a comparison analysis to the existing KSC RRA database version 1983. Assessments to the Space Shuttle vehicle ascent profile characteristics were performed by JSC/Ascent Flight Design Division to determine impacts of the updated model to the vehicle performance. Details on the model updates and the vehicle sensitivity analyses with the update model are presented.

  5. Sensitivity analysis for best-estimate thermal models of vertical dry cask storage systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeVoe, Remy R.; Robb, Kevin R.; Skutnik, Steven E.

    Loading requirements for dry cask storage of spent nuclear fuel are driven primarily by decay heat capacity limitations, which themselves are determined through recommended limits on peak cladding temperature within the cask. This study examines the relative sensitivity of peak material temperatures within the cask to parameters that influence both the stored fuel residual decay heat as well as heat removal mechanisms. Here, these parameters include the detailed reactor operating history parameters (e.g., soluble boron concentrations and the presence of burnable poisons) as well as factors that influence heat removal, including non-dominant processes (such as conduction from the fuel basket to the canister and radiation within the canister) and ambient environmental conditions. By examining the factors that drive heat removal from the cask alongside well-understood factors that drive decay heat, it is therefore possible to make a contextual analysis of the most important parameters to the evaluation of peak material temperatures within the cask.

  6. User Manual for Whisper-1.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.; Rising, Michael Evan; Alwin, Jennifer Louise

    2017-01-26

    Whisper is a statistical analysis package developed in 2014 to support nuclear criticality safety (NCS) validation [1-3]. It uses the sensitivity profile data for an application as computed by MCNP6 [4-6], along with covariance files [7,8] for the nuclear data, to determine a baseline upper subcritical limit (USL) for the application. Whisper version 1.0 was first developed and used at LANL in 2014 [3]. During 2015-2016, Whisper was updated to version 1.1 and is to be included with the upcoming release of MCNP6.2. This document describes the user input and options for running whisper-1.1, including two perl utility scripts that simplify ordinary NCS work, whisper_mcnp.pl and whisper_usl.pl. For many detailed references on the theory, applications, nuclear data and covariances, SQA, verification-validation, adjoint-based methods for sensitivity-uncertainty analysis, and more, see the Whisper - NCS Validation section of the MCNP Reference Collection at mcnp.lanl.gov. There are currently over 50 Whisper reference documents available.

  7. Sensitivity analysis for best-estimate thermal models of vertical dry cask storage systems

    DOE PAGES

    DeVoe, Remy R.; Robb, Kevin R.; Skutnik, Steven E.

    2017-07-08

    Loading requirements for dry cask storage of spent nuclear fuel are driven primarily by decay heat capacity limitations, which themselves are determined through recommended limits on peak cladding temperature within the cask. This study examines the relative sensitivity of peak material temperatures within the cask to parameters that influence both the stored fuel residual decay heat as well as heat removal mechanisms. Here, these parameters include the detailed reactor operating history parameters (e.g., soluble boron concentrations and the presence of burnable poisons) as well as factors that influence heat removal, including non-dominant processes (such as conduction from the fuel basket to the canister and radiation within the canister) and ambient environmental conditions. By examining the factors that drive heat removal from the cask alongside well-understood factors that drive decay heat, it is therefore possible to make a contextual analysis of the most important parameters to the evaluation of peak material temperatures within the cask.

  8. The Review of Nuclear Microscopy Techniques: An Approach for Nondestructive Trace Elemental Analysis and Mapping of Biological Materials.

    PubMed

    Mulware, Stephen Juma

    2015-01-01

    The properties of many biological materials often depend on the spatial distribution and concentration of the trace elements present in a matrix. Over the years, scientists have tried various techniques, including classical physical and chemical analysis methods, each with its relative level of accuracy. However, with the development of spatially sensitive submicron beams, nuclear microprobe techniques using focused proton beams for the elemental analysis of biological materials have yielded significant success. In this paper, the basic principles of the commonly used microprobe techniques of STIM, RBS, and PIXE for trace elemental analysis are discussed, along with the details of sample preparation, detection, and data collection and analysis. Finally, an application of the techniques to the analysis of corn roots for elemental distribution and concentration is presented.

  9. Simulation of the effects of aerosol on mixed-phase orographic clouds using the WRF model with a detailed bin microphysics scheme

    NASA Astrophysics Data System (ADS)

    Xiao, Hui; Yin, Yan; Jin, Lianji; Chen, Qian; Chen, Jinghua

    2015-08-01

    The Weather Research and Forecasting (WRF) mesoscale model, coupled with a detailed bin microphysics scheme, is used to investigate the impact of aerosol particles serving as cloud condensation nuclei and ice nuclei on orographic clouds and precipitation. A mixed-phase orographic cloud developing under two aerosol scenarios (a typical continental background and a relatively polluted urban condition) and ice nuclei over an idealized mountain is simulated. The results show that, when the initial aerosol condition is changed from the relatively clean case to the polluted scenario, more droplets are activated, leading to a delay in precipitation, but the precipitation amount over the terrain is increased by about 10%. A detailed analysis of the microphysical processes indicates that ice-phase particles play an important role in cloud development, and their contribution to precipitation becomes more important with increasing aerosol particle concentrations. The growth of ice-phase particles through riming and the Wegener-Bergeron-Findeisen process is more effective under more polluted conditions, mainly due to the increased number of droplets with diameters of 10-30 µm. Sensitivity tests also show that a tenfold increase in the concentration of ice crystals formed from ice nucleation leads to about a 7% increase in precipitation, and that the sensitivity of the precipitation to changes in the concentration and size distribution of aerosol particles becomes less pronounced when the concentration of ice crystals is also increased.

  10. Culturally Sensitive Parent Education: A Critical Review of Quantitative Research.

    ERIC Educational Resources Information Center

    Gorman, Jean Cheng; Balter, Lawrence

    1997-01-01

    Critically reviews the quantitative literature on culturally sensitive parent education programs, discussing issues of research methodology and program efficacy in producing change among ethnic minority parents and their children. Culturally sensitive programs for African American and Hispanic families are described in detail. Methodological flaws…

  11. Sensitivity of planetary cruise navigation to earth orientation calibration errors

    NASA Technical Reports Server (NTRS)

    Estefan, J. A.; Folkner, W. M.

    1995-01-01

    A detailed analysis was conducted to determine the sensitivity of spacecraft navigation errors to the accuracy and timeliness of Earth orientation calibrations. Analyses based on simulated X-band (8.4-GHz) Doppler and ranging measurements acquired during the interplanetary cruise segment of the Mars Pathfinder heliocentric trajectory were completed for the nominal trajectory design and for an alternative trajectory with a longer transit time. Several error models were developed to characterize the effect of Earth orientation on navigational accuracy based on current and anticipated Deep Space Network calibration strategies. The navigational sensitivity of Mars Pathfinder to calibration errors in Earth orientation was computed for each candidate calibration strategy with the Earth orientation parameters included as estimated parameters in the navigation solution. In these cases, the calibration errors contributed 23 to 58% of the total navigation error budget, depending on the calibration strategy being assessed. Navigation sensitivity calculations were also performed for cases in which Earth orientation calibration errors were not adjusted in the navigation solution. In these cases, Earth orientation calibration errors contributed from 26 to as much as 227% of the total navigation error budget. The final analysis suggests that, not only is the method used to calibrate Earth orientation vitally important for precision navigation of Mars Pathfinder, but perhaps equally important is the method for inclusion of the calibration errors in the navigation solutions.

  12. Detailed analysis of an optimized FPP-based 3D imaging system

    NASA Astrophysics Data System (ADS)

    Tran, Dat; Thai, Anh; Duong, Kiet; Nguyen, Thanh; Nehmetallah, Georges

    2016-05-01

    In this paper, we present a detailed analysis and a step-by-step implementation of an optimized fringe projection profilometry (FPP) based 3D shape measurement system. First, we propose a multi-frequency, multi-phase-shifting sinusoidal fringe pattern reconstruction approach to increase the accuracy and sensitivity of the system. Second, phase error compensation for the nonlinear transfer function of the projector and camera is performed through polynomial approximation. Third, phase unwrapping is performed using spatial and temporal techniques, and the tradeoff between processing speed and accuracy is discussed in detail. Fourth, generalized camera and system calibration is developed for phase to real-world coordinate transformation; the calibration coefficients are estimated accurately using a reference plane and several gauge blocks with precisely known heights, employing a nonlinear least-squares fitting method. Fifth, a texture is attached to the height profile by registering a 2D photograph to the 3D height map. The last step is to perform 3D image fusion and registration using an iterative closest point (ICP) algorithm for a full-field-of-view reconstruction. The system is experimentally constructed using compact, portable, and low-cost off-the-shelf components. A MATLAB® based GUI is developed to control and synchronize the whole system.

  13. Simulating observations with HARMONI: the integral field spectrograph for the European Extremely Large Telescope

    NASA Astrophysics Data System (ADS)

    Zieleniewski, Simon; Thatte, Niranjan; Kendrew, Sarah; Houghton, Ryan; Tecza, Matthias; Clarke, Fraser; Fusco, Thierry; Swinbank, Mark

    2014-07-01

    With the next generation of extremely large telescopes commencing construction, there is an urgent need for detailed quantitative predictions of the scientific observations that these new telescopes will enable. Most of these new telescopes will have adaptive optics fully integrated with the telescope itself, allowing unprecedented spatial resolution combined with enormous sensitivity. However, the adaptive optics point spread function will be strongly wavelength dependent, requiring detailed simulations that accurately model these variations. We have developed a simulation pipeline for the HARMONI integral field spectrograph, a first light instrument for the European Extremely Large Telescope. The simulator takes high-resolution input data-cubes of astrophysical objects and processes them with accurate atmospheric, telescope and instrumental effects, to produce mock observed cubes for chosen observing parameters. The output cubes represent the result of a perfect data reduction process, enabling a detailed analysis and comparison between input and output, showcasing HARMONI's capabilities. The simulations utilise a detailed knowledge of the telescope's wavelength dependent adaptive optics point spread function. We discuss the simulation pipeline and present an early example of the pipeline functionality for simulating observations of high redshift galaxies.

  14. Simplification and its consequences in biological modelling: conclusions from a study of calcium oscillations in hepatocytes.

    PubMed

    Hetherington, James P J; Warner, Anne; Seymour, Robert M

    2006-04-22

    Systems Biology requires that biological modelling be scaled up from small components to the system level. This can produce exceedingly complex models, which obscure understanding rather than facilitate it. The successful use of highly simplified models would resolve many of the current problems faced in Systems Biology. This paper questions whether the conclusions of simple mathematical models of biological systems are trustworthy. The simplification of a specific model of calcium oscillations in hepatocytes is examined in detail, and the conclusions drawn from this scrutiny generalized. We formalize our choice of simplification approach through the use of functional 'building blocks'. A collection of models is constructed, each a progressively more simplified version of a well-understood model. The limiting model is a piecewise linear model that can be solved analytically. We find that, as expected, in many cases the simpler models produce incorrect results. However, when we perform a sensitivity analysis, examining which aspects of the behaviour of the system are controlled by which parameters, the conclusions of the simple model often agree with those of the richer model. The hypothesis that the simplified model retains no information about the real sensitivities of the unsimplified model can be very strongly ruled out by treating the simplification process as a pseudo-random perturbation on the true sensitivity data. We conclude that sensitivity analysis is, therefore, of great importance to the analysis of simple mathematical models in biology. Our comparisons reveal which results of the sensitivity analysis regarding calcium oscillations in hepatocytes are robust to the simplifications necessarily involved in mathematical modelling. For example, we find that if a treatment is observed to strongly decrease the period of the oscillations while increasing the proportion of the cycle during which cellular calcium concentrations are rising, without affecting the inter-spike or maximum calcium concentrations, then it is likely that the treatment is acting on the plasma membrane calcium pump.
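
    The comparison the authors make depends on computing normalized parameter sensitivities of an observable (e.g. the oscillation period) in both the full and the simplified model. A generic sketch of that quantity via central finite differences (the function, its interface, and the relative step size are illustrative assumptions, not the paper's implementation):

```python
def normalized_sensitivity(quantity, params, name, rel_step=1e-4):
    """Normalized sensitivity S = (p/Q) * dQ/dp by central differences.

    quantity: function mapping a parameter dict to a scalar observable
              (e.g. an oscillation period); params: baseline parameters.
    """
    p0 = params[name]
    h = rel_step * p0
    up = dict(params); up[name] = p0 + h
    dn = dict(params); dn[name] = p0 - h
    dq_dp = (quantity(up) - quantity(dn)) / (2.0 * h)
    # Normalizing by p0/Q makes sensitivities comparable across parameters
    return p0 * dq_dp / quantity(params)
```

    Computing such coefficients for every parameter in both models, and comparing the two vectors, is the kind of test that distinguishes a genuine agreement from a pseudo-random perturbation of the true sensitivity data.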

  15. Ice phase in altocumulus clouds over Leipzig: remote sensing observations and detailed modeling

    NASA Astrophysics Data System (ADS)

    Simmel, M.; Bühl, J.; Ansmann, A.; Tegen, I.

    2015-09-01

    The present work combines remote sensing observations and detailed cloud modeling to investigate two altocumulus cloud cases observed over Leipzig, Germany. A suite of remote sensing instruments was able to detect primary ice at rather high temperatures of -6 °C. For comparison, a second mixed phase case at about -25 °C is introduced. To further look into the details of cloud microphysical processes, a simple dynamics model of the Asai-Kasahara (AK) type is combined with detailed spectral microphysics (SPECS) forming the model system AK-SPECS. Vertical velocities are prescribed to force the dynamics, as well as main cloud features, to be close to the observations. Subsequently, sensitivity studies with respect to ice microphysical parameters are carried out with the aim to quantify the most important sensitivities for the cases investigated. For the cases selected, the liquid phase is mainly determined by the model dynamics (location and strength of vertical velocity), whereas the ice phase is much more sensitive to the microphysical parameters (ice nucleating particle (INP) number, ice particle shape). The choice of ice particle shape may induce large uncertainties that are on the same order as those for the temperature-dependent INP number distribution.

  16. Ice phase in altocumulus clouds over Leipzig: remote sensing observations and detailed modelling

    NASA Astrophysics Data System (ADS)

    Simmel, M.; Bühl, J.; Ansmann, A.; Tegen, I.

    2015-01-01

    The present work combines remote sensing observations and detailed cloud modeling to investigate two altocumulus cloud cases observed over Leipzig, Germany. A suite of remote sensing instruments was able to detect primary ice at rather warm temperatures of -6 °C. For comparison, a second mixed phase case at about -25 °C is introduced. To further look into the details of cloud microphysical processes, a simple dynamics model of the Asai-Kasahara type is combined with detailed spectral microphysics forming the model system AK-SPECS. Vertical velocities are prescribed to force the dynamics, as well as main cloud features, to be close to the observations. Subsequently, sensitivity studies with respect to ice microphysical parameters are carried out with the aim to quantify the most important sensitivities for the cases investigated. For the cases selected, the liquid phase is mainly determined by the model dynamics (location and strength of vertical velocity) whereas the ice phase is much more sensitive to the microphysical parameters (ice nuclei (IN) number, ice particle shape). The choice of ice particle shape may induce large uncertainties which are on the same order as those for the temperature-dependent IN number distribution.

  17. External Validity of Electronic Sniffers for Automated Recognition of Acute Respiratory Distress Syndrome.

    PubMed

    McKown, Andrew C; Brown, Ryan M; Ware, Lorraine B; Wanderer, Jonathan P

    2017-01-01

    Automated electronic sniffers may be useful for early detection of acute respiratory distress syndrome (ARDS) for institution of treatment or clinical trial screening. In a prospective cohort of 2929 critically ill patients, we retrospectively applied published sniffer algorithms for automated detection of acute lung injury to assess their utility in diagnosis of ARDS in the first 4 ICU days. Radiographic full-text reports were searched for "edema" OR ("bilateral" AND "infiltrate"), and with a more detailed algorithm for descriptions consistent with ARDS. Patients were flagged as possible ARDS if a radiograph met the search criteria and the PaO2/FiO2 or SpO2/FiO2 was ≤300 or ≤315, respectively. Test characteristics of the electronic sniffers and of clinical suspicion of ARDS were compared to a gold standard of two-physician-adjudicated ARDS. Thirty percent of the 2841 patients included in the analysis had a gold-standard diagnosis of ARDS. The simpler algorithm had sensitivity for ARDS of 78.9%, specificity of 52%, positive predictive value (PPV) of 41%, and negative predictive value (NPV) of 85.3% over the 4-day study period. The more detailed algorithm had sensitivity of 88.2%, specificity of 55.4%, PPV of 45.6%, and NPV of 91.7%. Both algorithms were more sensitive but less specific than clinician suspicion, which had sensitivity of 40.7%, specificity of 94.8%, PPV of 78.2%, and NPV of 77.7%. Published electronic sniffer algorithms for ARDS may be useful automated screening tools for ARDS and improve on clinical recognition, but they are limited to screening rather than diagnosis because their specificity is poor.
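
    The simpler screen described in the abstract combines a keyword search of the radiograph report with an oxygenation threshold. A minimal sketch under those stated criteria (the published algorithms are more elaborate; function names, substring matching, and the handling of missing ratios are my simplifications):

```python
def radiograph_flags_ards(report: str) -> bool:
    """Text screen from the abstract: 'edema' OR ('bilateral' AND 'infiltrate')."""
    text = report.lower()
    # 'infiltrat' also matches plural/adjectival forms like 'infiltrates'
    return ("edema" in text) or ("bilateral" in text and "infiltrat" in text)

def sniffer_positive(report: str, pf_ratio=None, sf_ratio=None) -> bool:
    """Flag possible ARDS: radiograph match plus PaO2/FiO2 <= 300 or SpO2/FiO2 <= 315."""
    hypoxemic = (pf_ratio is not None and pf_ratio <= 300) or \
                (sf_ratio is not None and sf_ratio <= 315)
    return radiograph_flags_ards(report) and hypoxemic
```

    Such a screen errs deliberately toward sensitivity over specificity, which matches the reported test characteristics: good NPV for ruling out ARDS, but too many false positives for diagnosis.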

  18. Thermal analysis of a conceptual design for a 250 We GPHS/FPSE space power system

    NASA Technical Reports Server (NTRS)

    Mccomas, Thomas J.; Dugan, Edward T.

    1991-01-01

    A thermal analysis has been performed for a 250-We space nuclear power system which combines the US Department of Energy's general purpose heat source (GPHS) modules with a state-of-the-art free-piston Stirling engine (FPSE). The focus of the analysis is on the temperature of the iridium fuel clad within the GPHS modules. The thermal analysis results indicate fuel clad temperatures slightly higher than the design goal temperature of 1573 K. The results are considered favorable given the numerous conservative assumptions used. To demonstrate the effects of the conservatism, a brief sensitivity analysis is performed in which a few of the key system parameters are varied to determine their effect on the fuel clad temperatures. It is shown that analysis with a more detailed thermal model should yield fuel clad temperatures below 1573 K.

  19. Fabrication and characterization of 3C-silicon carbide micro sensor for wireless blood pressure measurements

    NASA Astrophysics Data System (ADS)

    Basak, Nupur

    A potentially implantable single crystal 3C-SiC pressure sensor for blood pressure measurement was designed, simulated, fabricated, characterized and optimized. This research uses single crystal 3C-SiC, for the first time, to demonstrate its application as a blood pressure measurement sensor. The sensor, which uses an epitaxially grown 3C-SiC membrane to measure changes in pressure, is designed to be wireless, biocompatible and linear. The SiC material was chosen for its superior physical, chemical and mechanical properties; the capacitive sensor uses a 3C-SiC membrane as one of the electrodes; and the sensor system is wireless for comfort and to allow convenient reading of real-time pressure data (wireless communication is enabled by connecting the sensor in parallel with a planar inductor). Together, the variable capacitive sensor and planar inductor create a pressure-sensitive resonant circuit. This sensor system allows for implantation into a human patient's body, after which the planar inductor can be coupled with an external inductor to receive data for real-time blood pressure measurement. Electroplating, thick photo-resist characterization, RIE etching, oxidation, CVD, chemical mechanical polishing and wafer bonding were optimized during fabrication of the sensor system; the optimized processes, along with the sensor system simulation and characterization, are detailed in the dissertation. This absolute pressure sensor is designed to function optimally within the human blood pressure range of 50-350 mmHg. The layout and modeling of the sensor use finite element analysis (FEA) software. Simulations of membrane deflection, stress, and electro-mechanical behavior are performed for 100 μm² and 400 μm² sensors. The membrane deflection-pressure, capacitance-pressure and resonant frequency-pressure graphs were obtained and detailed in the dissertation, along with planar inductor simulations for differently sized inductors. Ultimately, an optimized sensor with a size of 400 μm² was chosen because of its high sensitivity. The sensor and the planar inductor, which measures 3 mm², are comparable in size to implantable chips currently under research. The measured inductance of the gold electroplated inductor is 0.371 μH. The capacitance changes from 0.934 pF to 0.997 pF with a frequency shift of 248 MHz to 256 MHz. The sensitivity of the sensor is found to be 0.21 fF/mmHg or 27.462 kHz/mmHg with an average non-linearity of 0.23216%.
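
    The readout principle is the resonance of the sensor-inductor tank circuit, f = 1/(2π√(LC)). Plugging the reported L and C values into the ideal formula gives frequencies in the same range as the measured 248-256 MHz; the residual offset is plausibly due to parasitics not captured by the ideal model. A quick check (the function name is mine):

```python
import math

def lc_resonance_hz(inductance_h, capacitance_f):
    """Ideal resonant frequency f = 1 / (2*pi*sqrt(L*C)) of an LC tank."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Reported values from the record: L = 0.371 uH, C = 0.934-0.997 pF
f_low_C  = lc_resonance_hz(0.371e-6, 0.934e-12)   # ~270 MHz (ideal)
f_high_C = lc_resonance_hz(0.371e-6, 0.997e-12)   # ~262 MHz (ideal)
```

    As expected for an LC tank, the higher capacitance (higher pressure) end of the range corresponds to the lower resonant frequency.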

  20. Development of Sensitivity to Audiovisual Temporal Asynchrony during Midchildhood

    ERIC Educational Resources Information Center

    Kaganovich, Natalya

    2016-01-01

    Temporal proximity is one of the key factors determining whether events in different modalities are integrated into a unified percept. Sensitivity to audiovisual temporal asynchrony has been studied in adults in great detail. However, how such sensitivity matures during childhood is poorly understood. We examined perception of audiovisual temporal…

  1. First status report on regional ground-water flow modeling for the Paradox Basin, Utah

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrews, R.W.

    1984-05-01

    Regional ground-water flow within the principal hydrogeologic units of the Paradox Basin is evaluated by developing a conceptual model of the flow regime in the shallow aquifers and the deep-basin brine aquifers and testing these models using a three-dimensional, finite-difference flow code. Semiquantitative sensitivity analysis (a limited parametric study) is conducted to define the system response to changes in hydrologic properties or boundary conditions. A direct method for sensitivity analysis using an adjoint form of the flow equation is applied to the conceptualized flow regime in the Leadville limestone aquifer. All steps leading to the final results and conclusions are incorporated in this report. The available data utilized in this study are summarized. The specific conceptual models, defining the areal and vertical averaging of lithologic units, aquifer properties, fluid properties, and hydrologic boundary conditions, are described in detail. Two models were evaluated in this study: a regional model encompassing the hydrogeologic units above and below the Paradox Formation/Hermosa Group and a refined-scale model which incorporated only the post-Paradox strata. The results are delineated by the simulated potentiometric surfaces and tables summarizing areal and vertical boundary fluxes, Darcy velocities at specific points, and ground-water travel paths. Results from the adjoint sensitivity analysis include importance functions and sensitivity coefficients, using heads or the average Darcy velocities to represent system response. The reported work is the first stage of an ongoing evaluation of the Gibson Dome area within the Paradox Basin as a potential repository for high-level radioactive wastes.

  2. Application of sensitivity analysis for assessment of de-desertification alternatives in the central Iran by using Triantaphyllou method.

    PubMed

    Sadeghi Ravesh, Mohammad Hassan; Ahmadi, Hassan; Zehtabian, Gholamreza

    2011-08-01

    Desertification, land degradation in arid, semi-arid, and dry sub-humid regions, is a global environmental problem. Given the increasing importance of desertification and its complexity, attention to optimal de-desertification alternatives is essential. Therefore, this work presents an analytic hierarchy process (AHP) method to objectively select the optimal de-desertification alternatives based on the results of interviews with experts in the Khezr Abad region, central Iran, as the case study. This model was applied in the Yazd Khezr Abad region to evaluate its efficiency in identifying the alternatives best suited to the human and environmental conditions. The results indicate that the criterion "proportion and adaptation to the environment", with a weighted average of 33.6%, is the most important criterion from the experts' viewpoint, while prevention of unsuitable land-use change and conversion (mean weight 22.88%) and vegetation-cover development and reclamation (mean weight 21.9%) are recognized as the most important de-desertification alternatives in the region. Finally, sensitivity analysis is performed in detail by varying the objective-factor decision weight, the priority weights of the subjective factors, and the gain factors. After the sensitivity analysis and determination of the most sensitive criteria and alternatives, the original classification and ranking of alternatives changed little: the unsuitable-land-use-prevention alternative, with a preference degree of 22.7%, remained in first place. The final priorities of the livestock-grazing-control alternative and the groundwater-harvesting-modification alternative were exchanged.
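
    The ranking step that AHP performs, once criterion weights and per-alternative scores have been derived from the pairwise expert comparisons, is a weighted-sum aggregation. A minimal sketch of that aggregation (the function and the example weights/scores below are hypothetical, not the study's data):

```python
def rank_alternatives(weights, scores):
    """Rank alternatives by composite AHP priority.

    weights: {criterion: weight}, assumed normalized to sum to 1.
    scores:  {alternative: {criterion: local priority}}.
    Returns (alternative, composite priority) pairs, best first.
    """
    composite = {
        alt: sum(weights[c] * s.get(c, 0.0) for c in weights)
        for alt, s in scores.items()
    }
    return sorted(composite.items(), key=lambda kv: kv[1], reverse=True)
```

    The sensitivity analysis described in the abstract then amounts to perturbing the entries of `weights` and checking whether the order returned by this aggregation changes.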

  3. Modeling and simulation of deformation of hydrogels responding to electric stimulus.

    PubMed

    Li, Hua; Luo, Rongmo; Lam, K Y

    2007-01-01

    A model for simulation of pH-sensitive hydrogels is refined in this paper to extend its application to electric-sensitive hydrogels, termed the refined multi-effect-coupling electric-stimulus (rMECe) model. By reformulation of the fixed-charge density and consideration of finite deformation, the rMECe model is able to predict the responsive deformations of the hydrogels when they are immersed in a bath solution subject to externally applied electric field. The rMECe model consists of nonlinear partial differential governing equations with chemo-electro-mechanical coupling effects and the fixed-charge density with electric-field effect. By comparison between simulation and experiment extracted from literature, the model is verified to be accurate and stable. The rMECe model performs quantitatively for deformation analysis of the electric-sensitive hydrogels. The influences of several physical parameters, including the externally applied electric voltage, initial fixed-charge density, hydrogel strip thickness, ionic strength and valence of surrounding solution, are discussed in detail on the displacement and average curvature of the hydrogels.

  4. Non-radioactive detection of trinucleotide repeat size variability.

    PubMed

    Tomé, Stéphanie; Nicole, Annie; Gomes-Pereira, Mario; Gourdon, Genevieve

    2014-03-06

    Many human diseases are associated with the abnormal expansion of unstable trinucleotide repeat sequences. The mechanisms of trinucleotide repeat size mutation have not been fully dissected, and their understanding must be grounded on the detailed analysis of repeat size distributions in human tissues and animal models. Small-pool PCR (SP-PCR) is a robust, highly sensitive and efficient PCR-based approach to assess the levels of repeat size variation, providing both quantitative and qualitative data. The method relies on the amplification of a very low number of DNA molecules through successive dilution of a stock genomic DNA solution. Radioactive Southern blot hybridization is sensitive enough to detect SP-PCR products derived from single template molecules, separated by agarose gel electrophoresis and transferred onto DNA membranes. We describe a variation of the detection method that uses digoxigenin-labelled locked nucleic acid probes. This protocol keeps the sensitivity of the original method, while eliminating the health risks associated with the manipulation of radiolabelled probes, and the burden associated with their regulation, manipulation and waste disposal.
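
    SP-PCR works in the limiting-dilution regime, where the mean number of amplifiable template molecules per reaction is commonly estimated from the fraction of negative reactions using Poisson statistics. A small sketch of that standard calculation (general limiting-dilution math, not a step specific to this protocol; the function name is mine):

```python
import math

def molecules_per_reaction(n_reactions, n_negative):
    """Estimate mean template molecules per small-pool reaction.

    Assumes molecules are Poisson-distributed across reactions, so
    P(0 molecules) = exp(-lam)  =>  lam = -ln(fraction negative).
    """
    frac_neg = n_negative / n_reactions
    if frac_neg <= 0.0:
        raise ValueError("all reactions positive; dilute further")
    return -math.log(frac_neg)
```

    For example, if roughly a third of reactions are negative, the dilution carries about one amplifiable molecule per reaction, the regime in which individual product bands can be attributed to single template molecules.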

  5. Development of high-sensitive, reproducible colloidal surface-enhanced Raman spectroscopy active substrate using silver nanocubes for potential biosensing applications

    NASA Astrophysics Data System (ADS)

    Hasna, Kudilatt; Lakshmi, Kiran; Ezhuthachan Jayaraj, Madambi Kunjukuttan; Kumar, Kumaran Rajeev; Matham, Murukeshan Vadakke

    2016-04-01

    Surface-enhanced Raman spectroscopy (SERS) has emerged as one of the thrust research areas that could find potential applications in bio and chemical sensing. We developed a colloidal SERS-active substrate with excellent sensitivity and high reproducibility using silver nanocubes (AgNCs) synthesized via the solvothermal method. Finite-difference time-domain simulation was carried out in detail to visualize dipole generation in the nanocube during localized surface plasmon resonance and to locate the respective hot spots in the AgNC responsible for the huge Raman enhancement. The prediction is verified by SERS analysis of the synthesized nanocubes using the Rhodamine 6G molecule. An excellent sensitivity with a detection limit of 10⁻¹⁷ M and a very high enhancement factor of 1.2×10⁸ confirm the "hot spots" in the nanocube. SERS activity is also demonstrated for crystal violet and for the food adulterant Sudan I molecule. Finally, label-free DNA detection is performed to demonstrate the versatility of SERS as a potential biosensor.
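
    Enhancement factors like the 1.2×10⁸ quoted above are conventionally computed as the SERS signal per probed molecule divided by the normal Raman signal per probed molecule. A one-line sketch of that standard ratio (the function and the numbers in the test are illustrative, not this study's measurements):

```python
def enhancement_factor(i_sers, n_sers, i_ref, n_ref):
    """Analytical SERS enhancement factor.

    EF = (I_SERS / N_SERS) / (I_ref / N_ref), where I is the measured
    band intensity and N the number of molecules contributing to it.
    """
    return (i_sers / n_sers) / (i_ref / n_ref)
```

    The hard experimental part is estimating N_SERS and N_ref (molecules in the hot spots versus in the excitation volume); the arithmetic itself is trivial.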

  6. Can We Probe the Conductivity of the Lithosphere and Upper Mantle Using Satellite Tidal Magnetic Signals?

    NASA Technical Reports Server (NTRS)

    Schnepf, N. R.; Kuvshinov, A.; Sabaka, T.

    2015-01-01

    A few studies convincingly demonstrated that the magnetic fields induced by the lunar semidiurnal (M2) ocean flow can be identified in satellite observations. This result encourages using M2 satellite magnetic data to constrain subsurface electrical conductivity in oceanic regions. Traditional satellite-based induction studies using signals of magnetospheric origin are mostly sensitive to conducting structures because of the inductive coupling between primary and induced sources. In contrast, galvanic coupling from the oceanic tidal signal allows for studying less conductive, shallower structures. We perform global 3-D electromagnetic numerical simulations to investigate the sensitivity of M2 signals to conductivity distributions at different depths. The results of our sensitivity analysis suggest it will be promising to use M2 oceanic signals detected at satellite altitude for probing lithospheric and upper mantle conductivity. Our simulations also suggest that M2 seafloor electric and magnetic field data may provide complementary details to better constrain lithospheric conductivity.

  7. Ras-sensitive IMP modulation of the Raf/MEK/ERK cascade through KSR1.

    PubMed

    Matheny, Sharon A; White, Michael A

    2006-01-01

    The E3 ubiquitin ligase IMP (impedes mitogenic signal propagation) was isolated as a novel Ras effector that negatively regulates ERK1/2 activation. Current evidence suggests that IMP limits the functional assembly of Raf/MEK complexes by inactivation of the KSR1 adaptor/scaffold protein. Interaction with Ras-GTP stimulates IMP autoubiquitination to relieve limitations on KSR function. The elevated sensitivity of IMP-depleted cells to ERK1/2 pathway activation suggests IMP acts as a signal threshold regulator by imposing reversible restrictions on the assembly of functional Raf/MEK/ERK kinase modules. These observations challenge commonly held concepts of signal transmission by Ras to the MAPK pathway and provide evidence for the role of amplitude modulation in tuning cellular responses to ERK1/2 pathway engagement. Here we describe details of the methods, including RNA interference, ubiquitin ligase assays, and protein complex analysis, that can be used to display the Ras-sensitive contribution of IMP to KSR-dependent modulation of the Raf/MEK/ERK pathway.

  8. Solar Variability and the Near-Earth Environment: Mining Enhanced Low Dose Rate Sensitivity Data From the Microelectronics and Photonics Test Bed Space Experiment

    NASA Technical Reports Server (NTRS)

    Turflinger, T.; Schmeichel, W.; Krieg, J.; Titus, J.; Campbell, A.; Reeves, M.; Marshall (P.); Hardage, Donna (Technical Monitor)

    2004-01-01

    This effort is a detailed analysis of existing microelectronics and photonics test bed satellite data from one experiment, the bipolar test board, looking to improve our understanding of the enhanced low dose rate sensitivity (ELDRS) phenomenon. Over the past several years, extensive total dose irradiations of bipolar devices have demonstrated that many of these devices exhibited ELDRS. In sensitive bipolar transistors, ELDRS produced enhanced degradation of base current, resulting in enhanced gain degradation at dose rates <0.1 rd(Si)/s compared to similar transistors irradiated at dose rates >1 rd(Si)/s. This Technical Publication provides updated information about the test devices, the in-flight experiment, and both flight- and ground-based observations. Flight data are presented for the past 5 yr of the mission. These data are compared to ground-based data taken on devices from the same date code lots. Information about temperature fluctuations, power shutdowns, and other variables encountered during the space flight is documented.

  9. Supplement: “The Rate of Binary Black Hole Mergers Inferred from Advanced LIGO Observations Surrounding GW150914” (2016, ApJL, 833, L1)

    NASA Astrophysics Data System (ADS)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Behnke, B.; Bejger, M.; Bell, A. S.; Bell, C. J.; Berger, B. K.; Bergman, J.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Birney, R.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Bodiya, T. P.; Boer, M.; Bogaert, G.; Bogan, C.; Bohe, A.; Bojtos, P.; Bond, C.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. 
B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chakraborty, R.; Chalermsongsak, T.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Chen, H. Y.; Chen, Y.; Cheng, C.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M., Jr.; Conte, A.; Conti, L.; Cook, D.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Cripe, J.; Crowder, S. G.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D’Antonio, S.; Danzmann, K.; Darman, N. S.; Dattilo, V.; Dave, I.; Daveloza, H. P.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; De, S.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dereli, H.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Dojcinoski, G.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Engels, W.; Essick, R. C.; Etzel, T.; Evans, M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. 
P.; Flaminio, R.; Fletcher, M.; Fong, H.; Fournier, J.-D.; Franco, S.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fricke, T. T.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H. A. G.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gatto, A.; Gaur, G.; Gehrels, N.; Gemme, G.; Gendre, B.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gordon, N. A.; Gorodetsky, M. L.; Gossan, S. E.; Gosselin, M.; Gouaty, R.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hodge, K. A.; Hofman, D.; Hollitt, S. E.; Holt, K.; Holz, D. E.; Hopkins, P.; Hosken, D. J.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huang, S.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Idrisy, A.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J.-M.; Isi, M.; Islas, G.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jang, H.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; K, Haris; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. 
B.; Karki, S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kawazoe, F.; Kéfélian, F.; Kehl, M. S.; Keitel, D.; Kelley, D. B.; Kells, W.; Kennedy, R.; Key, J. S.; Khalaidovski, A.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, C.; Kim, J.; Kim, K.; Kim, Nam-Gyu; Kim, Namjun; Kim, Y.-M.; King, E. J.; King, P. J.; Kinzel, D. L.; Kissel, J. S.; Kleybolte, L.; Klimenko, S.; Koehlenbeck, S. M.; Kokeyama, K.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kringel, V.; Krishnan, B.; Królak, A.; Krueger, C.; Kuehn, G.; Kumar, P.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lange, J.; Lantz, B.; Lasky, P. D.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, K.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Levine, B. M.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Lockerbie, N. A.; Logue, J.; Lombardi, A. L.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lück, H.; Lundgren, A. P.; Luo, J.; Lynch, R.; Ma, Y.; MacDonald, T.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Magee, R. M.; Mageswaran, M.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandel, I.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martin, R. M.; Martynov, D. V.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mendell, G.; Mendoza-Gandara, D.; Mercer, R. 
A.; Merilh, E.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, C. L.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Murphy, D. J.; Murray, P. G.; Mytidis, A.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Necula, V.; Nedkova, K.; Nelemans, G.; Neri, M.; Neunzert, A.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; O’Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O’Reilly, B.; O’Shaughnessy, R.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Patrick, Z.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perreca, A.; Phelps, M.; Piccinni, O.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poggiani, R.; Popolizio, P.; Porter, E. K.; Post, A.; Powell, J.; Prasad, J.; Predoi, V.; Premachandra, S. S.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. 
S.; Radkins, H.; Raffai, P.; Raja, S.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Reed, C. M.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Ricci, F.; Riles, K.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, R.; Romanov, G.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sampson, L.; Sanchez, E. J.; Sandberg, V.; Sandeen, B.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Schilling, R.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Scott, J.; Scott, S. M.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Serna, G.; Setyawati, Y.; Sevigny, A.; Shaddock, D. A.; Shah, S.; Shahriar, M. S.; Shaltev, M.; Shao, Z.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sigg, D.; Silva, A. D.; Simakov, D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, J. R.; Smith, N. D.; Smith, R. J. E.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stevenson, S.; Stone, R.; Strain, K. A.; Straniero, N.; Stratta, G.; Strauss, N. A.; Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Tarabrin, S. P.; Taracchini, A.; Taylor, R.; Theeg, T.; Thirugnanasambandam, M. P.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Tomlinson, C.; Tonelli, M.; Torres, C. V.; Torrie, C. 
I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Tringali, M. C.; Trozzo, L.; Tse, M.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; Vallisneri, M.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Vass, S.; Vasúth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Viceré, A.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, M.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, X.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Welborn, T.; Wen, L.; Wesels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; White, D. J.; Whiting, B. F.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Worden, J.; Wright, J. L.; Wu, G.; Yablon, J.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. J.; Yu, H.; Yvert, M.; Zadrożny, A.; Zangrando, L.; Zanolin, M.; Zendri, J.-P.; Zevin, M.; Zhang, F.; Zhang, L.; Zhang, M.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, X. J.; Zucker, M. E.; Zuraw, S. E.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration

    2016-12-01

    This article provides supplemental information for a Letter reporting the rate of binary black hole (BBH) coalescences inferred from 16 days of coincident Advanced LIGO observations surrounding the transient gravitational-wave (GW) signal GW150914. In that work we reported various rate estimates whose 90% credible intervals fell in the range 2–600 Gpc^-3 yr^-1. Here we give details on our method and computations, including information about our search pipelines, a derivation of our likelihood function for the analysis, a description of the astrophysical search trigger distribution expected from merging BBHs, details on our computational methods, a description of the effects of calibration uncertainty and our model for it, and an analytic method for estimating our detector sensitivity, which is calibrated to our measurements.

  10. Mechanical Design Studies of the MQXF Long Model Quadrupole for the HiLumi LHC

    DOE PAGES

    Pan, Heng; Anderssen, Eric; Ambrosio, Giorgio; ...

    2016-12-20

    The Large Hadron Collider Luminosity upgrade (HiLumi) program requires new low-β triplet quadrupole magnets, called MQXF, in the Interaction Region (IR) to increase the LHC peak and integrated luminosity. The MQXF magnets, designed and fabricated in collaboration between CERN and the U.S. LARP, will all have the same cross section. The MQXF long model, referred to as MQXFA, is a quadrupole using Nb3Sn superconducting technology with a 150 mm aperture and a 4.2 m magnetic length, and is the first long prototype of the final MQXF design. The MQXFA magnet is based on the previous LARP HQ and MQXFS designs. In this paper we present the baseline design of the MQXFA structure with detailed 3D numerical analysis. A detailed tolerance analysis of the baseline case has been performed using a 3D finite element model, which allows fast computation of structures modelled with actual tolerances. The tolerance sensitivity of each component is discussed to verify the actual tolerances to be achieved by vendors. Finally, a tolerance stack-up analysis is presented at the end of the paper.
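
    A tolerance stack-up of the kind mentioned above is often cross-checked with a simple Monte Carlo draw over the tolerance bands. The sketch below illustrates that generic technique only; the dimensions and tolerances are invented placeholders, not the MQXFA drawings, and the paper itself uses a 3D finite element model rather than this simplification.

```python
import random

# Monte Carlo tolerance stack-up: draw each component dimension uniformly
# within its tolerance band and accumulate the assembly length, then
# compare the sampled spread against the worst-case (all-tolerances) sum.
# All numbers are hypothetical illustration values.

random.seed(1)

components = [
    # (nominal_mm, tolerance_mm), dimension varies uniformly within +/- tolerance
    (150.0, 0.05),
    (75.0, 0.02),
    (30.0, 0.03),
]

def one_assembly():
    """One sampled assembly length with random tolerances applied."""
    return sum(nom + random.uniform(-tol, tol) for nom, tol in components)

samples = [one_assembly() for _ in range(10_000)]

nominal = sum(nom for nom, _ in components)
worst_case = nominal + sum(tol for _, tol in components)
max_seen = max(samples)
```

    The Monte Carlo maximum is always bounded by the worst-case stack, which is why statistical stack-up usually predicts a tighter assembly spread than a pure worst-case analysis.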

  11. Measurement, modeling, and analysis of nonmethane hydrocarbons and ozone in the southeast United States national parks

    NASA Astrophysics Data System (ADS)

    Kang, Daiwen

    In this research, the sources, distributions, transport, ozone formation potential, and biogenic emissions of VOCs are investigated, focusing on three Southeast United States National Parks: Shenandoah National Park at the Big Meadows site (SHEN), Great Smoky Mountains National Park at Cove Mountain (GRSM) and Mammoth Cave National Park (MACA). A detailed modeling analysis is conducted using the Multiscale Air Quality SImulation Platform (MAQSIP), focusing on nonmethane hydrocarbons and ozone episodes characterized by high O3 surface concentrations. In the observation-based analysis, source classification techniques based on correlation coefficients, chemical reactivity, and certain ratios were developed and applied to the data set. Anthropogenic VOCs from automobile exhaust dominate at Mammoth Cave National Park and at Cove Mountain, Great Smoky Mountains National Park, while at Big Meadows, Shenandoah National Park, the source composition is complex and changed from 1995 to 1996. The dependence of isoprene concentrations on ambient temperature is investigated, and similar regression relationships are obtained for all three monitoring locations. Propylene-equivalent concentrations are calculated to account for differences in reaction rates between OH and individual hydrocarbons, and thereby to estimate their relative contributions to ozone formation. Isoprene fluxes were also estimated for all these rural areas. Model predictions (base scenario) tend to give daily maximum O3 concentrations 10 to 30% lower than observations. 
Model-predicted concentrations of lumped paraffin compounds are of the same order of magnitude as the observed values, while the observed concentrations of other species (isoprene, ethene, surrogate olefin, surrogate toluene, and surrogate xylene) are usually an order of magnitude higher than the predictions. Nine emissions perturbation scenarios, including the base scenario, are designed and utilized in the model simulations. Model predictions are compared with the observed values at the three locations for the same time period. Detailed sensitivity and process analyses in terms of ozone and VOC budgets, and the relative importance of various VOC species, are provided. (Abstract shortened by UMI.)
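
    The propylene-equivalent weighting mentioned above can be sketched as follows. The OH rate constants are rough literature magnitudes and the mixing ratios are invented placeholders, not the study's measurements.

```python
# Propylene-equivalent concentration: weight each hydrocarbon's carbon
# concentration (ppbC) by its OH rate constant relative to that of
# propene, so species are compared on ozone-formation-relevant terms.

K_OH_PROPENE = 2.63e-11  # cm^3 molecule^-1 s^-1, approximate literature value

species = {
    #  name       (ppbC,  k_OH in cm^3 molecule^-1 s^-1), illustrative values
    "isoprene":   (2.0,   1.0e-10),
    "ethane":     (5.0,   2.4e-13),
    "propene":    (0.5,   2.63e-11),
}

def propene_equivalent(conc_ppbc, k_oh):
    """Concentration scaled by OH reactivity relative to propene."""
    return conc_ppbc * k_oh / K_OH_PROPENE

prop_equiv = {name: propene_equivalent(c, k) for name, (c, k) in species.items()}
```

    On this scale a small amount of highly reactive isoprene outweighs a much larger amount of slow-reacting ethane, which is the point of the metric.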

  12. Estimating Mass Properties of Dinosaurs Using Laser Imaging and 3D Computer Modelling

    PubMed Central

    Bates, Karl T.; Manning, Phillip L.; Hodgetts, David; Sellers, William I.

    2009-01-01

    Body mass reconstructions of extinct vertebrates are most robust when complete to near-complete skeletons allow the reconstruction of either physical or digital models. Digital models are most efficient in terms of time and cost, and provide the facility to infinitely modify model properties non-destructively, such that sensitivity analyses can be conducted to quantify the effect of the many unknown parameters involved in reconstructions of extinct animals. In this study we use laser scanning (LiDAR) and computer modelling methods to create a range of 3D mass models of five specimens of non-avian dinosaur: two near-complete specimens of Tyrannosaurus rex, the most complete specimens of Acrocanthosaurus atokensis and Struthiomimus sedens, and a near-complete skeleton of a sub-adult Edmontosaurus annectens. LiDAR scanning allows a fully mounted skeleton to be imaged, resulting in a detailed 3D model in which each bone retains its spatial position and articulation. This provides a high-resolution skeletal framework around which the body cavity and internal organs such as lungs and air sacs can be reconstructed. This has allowed calculation of body segment masses, centres of mass and moments of inertia for each animal. However, any soft tissue reconstruction of an extinct taxon inevitably represents a best estimate model with an unknown level of accuracy. We have therefore conducted an extensive sensitivity analysis in which the volumes of body segments and respiratory organs were varied in an attempt to constrain the likely maximum plausible range of mass parameters for each animal. Our results provide wide ranges in actual mass and inertial values, emphasizing the high level of uncertainty inevitable in such reconstructions. However, our sensitivity analysis consistently places the centre of mass well below and in front of the hip joint in each animal, regardless of the chosen combination of body and respiratory structure volumes. 
These results emphasize that future biomechanical assessments of extinct taxa should be preceded by a detailed investigation of the plausible range of mass properties, in which sensitivity analyses are used to identify a suite of possible values to be tested as inputs in analytical models. PMID:19225569
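
    The segment-based bookkeeping behind such sensitivity analyses can be sketched as follows. The segment masses and positions are invented placeholders, not values from this study; the point is only how perturbing one segment propagates to the whole-body centre of mass.

```python
# Combine body-segment masses and centre-of-mass positions into a
# whole-body centre of mass, then perturb one segment's volume (hence
# mass) to probe sensitivity. All segment data are hypothetical.

segments = [
    # (mass_kg, com_x_m)  x measured along the body axis from the hip
    (3500.0, 1.2),   # torso
    (900.0, -2.0),   # tail (behind the hip, hence negative x)
    (600.0,  2.5),   # head/neck
]

def whole_body_com(segs):
    """Total mass and mass-weighted mean position of all segments."""
    total_mass = sum(m for m, _ in segs)
    com = sum(m * x for m, x in segs) / total_mass
    return total_mass, com

mass, com = whole_body_com(segments)

# Sensitivity step: inflate the tail volume (and mass) by 20%, recompute.
perturbed = [(m * (1.2 if i == 1 else 1.0), x)
             for i, (m, x) in enumerate(segments)]
mass_p, com_p = whole_body_com(perturbed)
```

    A heavier tail shifts the centre of mass rearward, which is the kind of systematic effect the paper's volume-variation analysis is designed to bound.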

  13. Estimating mass properties of dinosaurs using laser imaging and 3D computer modelling.

    PubMed

    Bates, Karl T; Manning, Phillip L; Hodgetts, David; Sellers, William I

    2009-01-01

    Body mass reconstructions of extinct vertebrates are most robust when complete to near-complete skeletons allow the reconstruction of either physical or digital models. Digital models are most efficient in terms of time and cost, and provide the facility to infinitely modify model properties non-destructively, such that sensitivity analyses can be conducted to quantify the effect of the many unknown parameters involved in reconstructions of extinct animals. In this study we use laser scanning (LiDAR) and computer modelling methods to create a range of 3D mass models of five specimens of non-avian dinosaur: two near-complete specimens of Tyrannosaurus rex, the most complete specimens of Acrocanthosaurus atokensis and Struthiomimus sedens, and a near-complete skeleton of a sub-adult Edmontosaurus annectens. LiDAR scanning allows a fully mounted skeleton to be imaged, resulting in a detailed 3D model in which each bone retains its spatial position and articulation. This provides a high-resolution skeletal framework around which the body cavity and internal organs such as lungs and air sacs can be reconstructed. This has allowed calculation of body segment masses, centres of mass and moments of inertia for each animal. However, any soft tissue reconstruction of an extinct taxon inevitably represents a best estimate model with an unknown level of accuracy. We have therefore conducted an extensive sensitivity analysis in which the volumes of body segments and respiratory organs were varied in an attempt to constrain the likely maximum plausible range of mass parameters for each animal. Our results provide wide ranges in actual mass and inertial values, emphasizing the high level of uncertainty inevitable in such reconstructions. However, our sensitivity analysis consistently places the centre of mass well below and in front of the hip joint in each animal, regardless of the chosen combination of body and respiratory structure volumes. 
These results emphasize that future biomechanical assessments of extinct taxa should be preceded by a detailed investigation of the plausible range of mass properties, in which sensitivity analyses are used to identify a suite of possible values to be tested as inputs in analytical models.

  14. A lower and more constrained estimate of climate sensitivity using updated observations and detailed radiative forcing time series

    NASA Astrophysics Data System (ADS)

    Skeie, R. B.; Berntsen, T.; Aldrin, M.; Holden, M.; Myhre, G.

    2012-04-01

    A key question in climate science is to quantify the sensitivity of the climate system to perturbations in the radiative forcing (RF). This sensitivity is often represented by the equilibrium climate sensitivity, but this quantity is poorly constrained, with significant probabilities for high values. In this work the equilibrium climate sensitivity (ECS) is estimated based on observed near-surface temperature change from the instrumental record, changes in ocean heat content and detailed RF time series. RF time series from pre-industrial times to 2010 for all main anthropogenic and natural forcing mechanisms are estimated, and the cloud lifetime effect and the semi-direct effect, which are not RF mechanisms in a strict sense, are included in the analysis. The RF time series are linked to the observations of ocean heat content and temperature change through an energy balance model and a stochastic model, using a Bayesian approach to estimate the ECS from the data. The posterior mean of the ECS is 1.9 °C, with a 90% credible interval (C.I.) ranging from 1.2 to 2.9 °C, which is tighter than previously published estimates. Observational data up to and including year 2010 are used in this study. This is at least ten additional years compared to the majority of previously published studies that have used the instrumental record in attempts to constrain the ECS. We show that the additional 10 years of data, and especially 10 years of additional ocean heat content data, have significantly narrowed the probability density function of the ECS. If only data up to and including year 2000 are used in the analysis, the 90% C.I. is 1.4 to 10.6 °C, with a pronounced heavy tail in line with previous estimates of ECS constrained by observations in the 20th century. The transient climate response (TCR) is also estimated in this study. Using observational data up to and including year 2010 gives a 90% C.I. of 1.0 to 2.1 °C, while the 90% C.I. is significantly broader, ranging from 1.1 to 3.4 °C, if only data up to and including year 2000 are used.
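
    The link between forcing, ocean heat uptake and ECS in such studies rests on the zero-dimensional energy balance N = F - lambda * dT, with ECS = F_2xCO2 / lambda. The sketch below is a deterministic single-state illustration of that relation only; all numbers are illustrative, and the paper itself fits a full Bayesian time-series model rather than this shortcut.

```python
# Zero-dimensional energy balance: N = F - lambda * dT, where N is the
# ocean heat uptake (W m^-2), F the radiative forcing and lambda the
# climate feedback parameter. ECS follows as F_2xCO2 / lambda.

F_2XCO2 = 3.7  # W m^-2, canonical forcing for a doubling of CO2

def feedback_parameter(forcing, heat_uptake, dT):
    """Infer lambda from one observed climate state (illustrative only)."""
    return (forcing - heat_uptake) / dT

def ecs(lam):
    """Equilibrium warming for doubled CO2 given feedback parameter lam."""
    return F_2XCO2 / lam

# Hypothetical present-day state: F = 2.0 W m^-2, N = 0.6 W m^-2, dT = 0.8 C.
lam = feedback_parameter(forcing=2.0, heat_uptake=0.6, dT=0.8)
sensitivity = ecs(lam)
```

    Because lambda appears in the denominator, small uncertainties in F - N map into large upper-tail uncertainty in ECS, which is why extra ocean heat content data tightens the estimate so effectively.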

  15. Biological particle analysis by mass spectrometry

    NASA Technical Reports Server (NTRS)

    Vilker, V. L.; Platz, R. M.

    1983-01-01

    An instrument that analyzes the chemical composition of biological particles in aerosol or hydrosol form was developed. Efforts were directed toward the acquisition of mass spectra from aerosols of biomolecules and bacteria. The filament ion source was installed on the particle analysis by mass spectrometry system. Modifications of the vacuum system improved the sensitivity of the mass spectrometer. After the modifications were incorporated, detailed mass spectra of simple compounds from the three major classes of biomolecules (proteins, nucleic acids, and carbohydrates) were obtained. A method of generating bacterial aerosols was developed. The aerosols generated were collected and examined in the scanning electron microscope to ensure that the bacteria delivered to the mass spectrometer were intact and free from debris.

  16. Stepwise sensitivity analysis from qualitative to quantitative: Application to the terrestrial hydrological modeling of a Conjunctive Surface-Subsurface Process (CSSP) land surface model

    NASA Astrophysics Data System (ADS)

    Gan, Yanjun; Liang, Xin-Zhong; Duan, Qingyun; Choi, Hyun Il; Dai, Yongjiu; Wu, Huan

    2015-06-01

    An uncertainty quantification framework was employed to examine the sensitivities of 24 model parameters from a newly developed Conjunctive Surface-Subsurface Process (CSSP) land surface model (LSM). The sensitivity analysis (SA) was performed over 18 representative watersheds in the contiguous United States to examine the influence of model parameters in the simulation of terrestrial hydrological processes. Two normalized metrics, relative bias (RB) and Nash-Sutcliffe efficiency (NSE), were adopted to assess the fit between simulated and observed streamflow discharge (SD) and evapotranspiration (ET) for a 14 year period. SA was conducted using a multiobjective two-stage approach, in which the first stage was a qualitative SA using the Latin Hypercube-based One-At-a-Time (LH-OAT) screening, and the second stage was a quantitative SA using the Multivariate Adaptive Regression Splines (MARS)-based Sobol' sensitivity indices. This approach combines the merits of qualitative and quantitative global SA methods, and is effective and efficient for understanding and simplifying large, complex system models. Ten of the 24 parameters were identified as important across different watersheds. The contribution of each parameter to the total response variance was then quantified by Sobol' sensitivity indices. Generally, parameter interactions contribute the most to the response variance of the CSSP, and only 5 out of 24 parameters dominate model behavior. Four photosynthetic and respiratory parameters are shown to be influential to ET, whereas reference depth for saturated hydraulic conductivity is the most influential parameter for SD in most watersheds. Parameter sensitivity patterns mainly depend on hydroclimatic regime, as well as vegetation type and soil texture. This article was corrected on 26 JUN 2015. See the end of the full text for details.
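
    The first-stage One-At-a-Time screening described above can be sketched generically as follows. The toy response function and step size are assumptions standing in for the CSSP model and its parameter ranges, not the study's actual setup.

```python
# One-At-a-Time (OAT) screening: perturb each parameter from a baseline
# in turn, evaluate the model, and rank parameters by the magnitude of
# the response change. A real application would run the land surface
# model here; this toy function just mimics a response surface.

def model(params):
    a, b, c = params["a"], params["b"], params["c"]
    return 2.0 * a + 0.1 * b + a * c  # hypothetical, includes an a-c interaction

baseline = {"a": 1.0, "b": 1.0, "c": 1.0}

def oat_effects(model, baseline, step=0.1):
    """Absolute response change from perturbing each parameter alone."""
    y0 = model(baseline)
    effects = {}
    for name in baseline:
        p = dict(baseline)
        p[name] += step
        effects[name] = abs(model(p) - y0)
    return effects

effects = oat_effects(model, baseline)
ranked = sorted(effects, key=effects.get, reverse=True)  # most influential first
```

    OAT screening is cheap but blind to interactions away from the baseline, which is exactly why the study follows it with variance-based Sobol' indices in the quantitative second stage.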

  17. [Progress in the application of laser ablation ICP-MS to surface microanalysis in material science].

    PubMed

    Zhang, Yong; Jia, Yun-hai; Chen, Ji-wen; Shen, Xue-jing; Liu, Ying; Zhao, Leiz; Li, Dong-ling; Hang, Peng-cheng; Zhao, Zhen; Fan, Wan-lun; Wang, Hai-zhou

    2014-08-01

    In the present paper, the apparatus and theory of surface analysis are introduced, and the progress in the application of laser ablation ICP-MS to microanalysis in the ferrous, nonferrous and semiconductor fields is reviewed in detail. Compared with traditional surface analytical tools, such as SEM/EDS (scanning electron microscopy/energy dispersive spectrometry), EPMA (electron probe microanalysis) and AES (Auger electron spectroscopy), its advantages are little or no sample preparation, spatial resolution adjustable to the analytical demand, multi-element analysis and high sensitivity. It is now a powerful complementary method to traditional surface analytical tools. As LA-ICP-MS technology matures, more and more analysts will use this powerful tool, and LA-ICP-MS will become a star of the elemental analysis field, just like LIBS (laser-induced breakdown spectroscopy).

  18. Publications - GMC 414 | Alaska Division of Geological & Geophysical

    Science.gov Websites

    DGGS GMC 414 Publication Details. Gottlieb, E., 2012, Sensitive High Resolution Ion Micro Probe (SHRIMP) data of outcrop samples from the

  19. X-ray crystal spectrometer upgrade for ITER-like wall experiments at JET

    NASA Astrophysics Data System (ADS)

    Shumack, A. E.; Rzadkiewicz, J.; Chernyshova, M.; Jakubowska, K.; Scholz, M.; Byszuk, A.; Cieszewski, R.; Czarski, T.; Dominik, W.; Karpinski, L.; Kasprowicz, G.; Pozniak, K.; Wojenski, A.; Zabolotny, W.; Conway, N. J.; Dalley, S.; Figueiredo, J.; Nakano, T.; Tyrrell, S.; Zastrow, K.-D.; Zoita, V.

    2014-11-01

    The high resolution X-ray crystal spectrometer at the JET tokamak has been upgraded with the main goal of measuring the tungsten impurity concentration. This is important for understanding impurity accumulation in the plasma after installation of the JET ITER-like wall (main chamber: Be, divertor: W). This contribution provides details of the upgraded spectrometer, with a focus on the aspects important for spectral analysis and plasma parameter calculation. In particular, we describe the determination of the spectrometer sensitivity, which is important for impurity concentration determination.

  20. X-ray crystal spectrometer upgrade for ITER-like wall experiments at JET.

    PubMed

    Shumack, A E; Rzadkiewicz, J; Chernyshova, M; Jakubowska, K; Scholz, M; Byszuk, A; Cieszewski, R; Czarski, T; Dominik, W; Karpinski, L; Kasprowicz, G; Pozniak, K; Wojenski, A; Zabolotny, W; Conway, N J; Dalley, S; Figueiredo, J; Nakano, T; Tyrrell, S; Zastrow, K-D; Zoita, V

    2014-11-01

    The high resolution X-ray crystal spectrometer at the JET tokamak has been upgraded with the main goal of measuring the tungsten impurity concentration. This is important for understanding impurity accumulation in the plasma after installation of the JET ITER-like wall (main chamber: Be, divertor: W). This contribution provides details of the upgraded spectrometer, with a focus on the aspects important for spectral analysis and plasma parameter calculation. In particular, we describe the determination of the spectrometer sensitivity, which is important for impurity concentration determination.

  1. Methods for analysis of selected metals in water by atomic absorption

    USGS Publications Warehouse

    Fishman, Marvin J.; Downs, Sanford C.

    1966-01-01

    This manual describes atomic-absorption-spectroscopy methods for determining calcium, copper, lithium, magnesium, manganese, potassium, sodium, strontium and zinc in atmospheric precipitation, fresh waters, and brines. The procedures are intended to be used by water quality laboratories of the Water Resources Division of the U.S. Geological Survey. Detailed procedures, calculations, and methods for the preparation of reagents are given for each element along with data on accuracy, precision, and sensitivity. Other topics discussed briefly are the principle of atomic absorption, instrumentation used, and special analytical techniques.
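
    The quantitation step behind such atomic-absorption procedures is a linear calibration curve (absorbance versus standard concentration in the Beer-Lambert regime). A minimal sketch, with invented standard values rather than any from the manual:

```python
# Fit a linear calibration curve by ordinary least squares, then read an
# unknown sample's concentration off the fitted line. Standards are
# hypothetical (concentration in mg/L, absorbance dimensionless).

standards = [(0.0, 0.00), (1.0, 0.12), (2.0, 0.24), (4.0, 0.48)]

n = len(standards)
sx = sum(c for c, _ in standards)
sy = sum(a for _, a in standards)
sxx = sum(c * c for c, _ in standards)
sxy = sum(c * a for c, a in standards)

slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # absorbance per mg/L
intercept = (sy - slope * sx) / n                   # blank absorbance

def concentration(absorbance):
    """Invert the calibration line for an unknown sample."""
    return (absorbance - intercept) / slope

unknown = concentration(0.30)  # mg/L
```

    In practice each element gets its own curve, wavelength and working range, which is what the manual's per-element procedures specify.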

  2. Domain Engineered Magnetoelectric Thin Films for High Sensitivity Resonant Magnetic Field Sensors

    DTIC Science & Technology

    2012-02-28

    (OCR-garbled abstract; only a fragment is recoverable.) …the detailed characterization was the development of prediction models for texturing of PZT sol-gel thin films, an understanding of the analytical

  3. Computer program to perform cost and weight analysis of transport aircraft. Volume 2: Technical volume

    NASA Technical Reports Server (NTRS)

    1973-01-01

    An improved method for estimating aircraft weight and cost using a unique and fundamental approach was developed. The results of this study were integrated into a comprehensive digital computer program, which is intended for use at the preliminary design stage of aircraft development. The program provides a means of computing absolute values for weight and cost, and enables the user to perform trade studies with a sensitivity to detail design and overall structural arrangement. Both batch and interactive graphics modes of program operation are available.

  4. Impervious surfaces and sewer pipe effects on stormwater runoff temperature

    NASA Astrophysics Data System (ADS)

    Sabouri, F.; Gharabaghi, B.; Mahboubi, A. A.; McBean, E. A.

    2013-10-01

    The warming effect of impervious surfaces in urban catchment areas and the cooling effect of underground storm sewer pipes on stormwater runoff temperature are assessed. Four urban residential catchment areas in the cities of Guelph and Kitchener, Ontario, Canada were evaluated using a combination of runoff monitoring and modelling. The stormwater level and water temperature were monitored at 10 min intervals at the inlets of the stormwater management ponds for three summers: 2009, 2010 and 2011. The warming effect of the ponds is also studied, but is discussed in detail in a separate paper. An artificial neural network (ANN) model for stormwater temperature was trained and validated using the monitoring data. Stormwater runoff temperature was most sensitive to the event mean temperature of the rainfall (EMTR), with a normalized sensitivity coefficient (Sn) of 1.257. Subsequent levels of sensitivity corresponded to the longest sewer pipe length (LPL), maximum rainfall intensity (MI), percent impervious cover (IMP), rainfall depth (R), initial asphalt temperature (AspT), pipe network density (PND), and rainfall duration (D), respectively. Percent impervious cover of the catchment area (IMP) was the key parameter representing the warming effect of the paved surfaces; sensitivity analysis showed that an IMP increase from 20% to 50% resulted in a runoff temperature increase of 3 °C. The longest storm sewer pipe length (LPL) and the storm sewer pipe network density (PND) are the two key parameters that control the cooling effect of the underground sewer system; sensitivity analysis showed that an LPL increase from 345 to 966 m resulted in a runoff temperature drop of 2.5 °C.
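
    The normalized sensitivity coefficient used above is Sn = (dY/Y) / (dX/X): the fractional output change per fractional input change, which makes inputs with different units comparable. A sketch with an invented linear stand-in for the trained ANN:

```python
# Normalized sensitivity coefficient via finite differences. The toy
# runoff-temperature model below is a hypothetical linear relation, not
# the paper's trained ANN; only the Sn computation itself is general.

def normalized_sensitivity(model, x, dx_frac=0.01):
    """Sn = (relative change in output) / (relative change in input)."""
    y0 = model(x)
    y1 = model(x * (1.0 + dx_frac))
    return ((y1 - y0) / y0) / dx_frac

def runoff_temp(emtr_c):
    # Invented coefficients: baseline offset plus linear EMTR dependence.
    return 5.0 + 1.2 * emtr_c

sn = normalized_sensitivity(runoff_temp, 20.0)
```

    An Sn above 1, as reported for EMTR in the study, means the output responds more than proportionally to that input.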

  5. Response to novelty as a predictor of cocaine sensitization and conditioning in rats: a correlational analysis.

    PubMed

    Carey, Robert J; DePalma, Gail; Damianopoulos, Ernest

    2003-07-01

    An animal's response to novelty has been suggested to be a predictor of its response to drugs of abuse. The possible relationship between an individual's behavioral response to novelty and its subsequent behavioral response to cocaine has not been subjected to a detailed correlational analysis. To use a repeated cocaine treatment protocol to induce cocaine sensitization and conditioned cocaine locomotor stimulant effects and to assess the relationship of these effects to pre-cocaine locomotor behavior in a novel environment. In two separate experiments, rats were given a 20-min test in a novel open-field environment. Subsequently, the rats were given a series of additional tests in conjunction with either saline or cocaine (10 mg/kg) treatments to induce cocaine sensitization and conditioned effects. The repeated cocaine treatments induced cocaine behavioral sensitization and conditioned effects. Correlational analyses showed that the initial 20-min novel environment test proved to be a strong predictor of an animal's subsequent saline activity level but did not predict the rats' behavioral acute and sensitized response to cocaine. When change in activity was used as the dependent variable, initial activity level was reliably negatively correlated with activity changes on cocaine tests as well as cocaine conditioning tests. The negative correlation between initial activity in a novel environment and the change in activity induced by cocaine indicates that low responders to environmental novelty tend to have the strongest response to cocaine. These results appear consistent with the classic initial value and response rate dependent analyses of stimulant drug effects.
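
    The correlational analysis described above reduces to computing Pearson r between baseline activity and later activity change. A minimal sketch with invented activity counts, not the rat data:

```python
# Pearson correlation between initial novelty-test activity and the
# change in activity on later drug tests. All values are hypothetical
# placeholders chosen so that high initial activity pairs with smaller
# subsequent change, mimicking a negative correlation.

from math import sqrt

novelty = [120.0, 80.0, 200.0, 150.0, 60.0]   # baseline open-field counts
change  = [-10.0, 25.0, -40.0, -20.0, 35.0]   # activity change on test days

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(novelty, change)
```

    A negative r of this kind is the pattern the study reports: low responders to novelty show the largest cocaine-induced change.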

  6. Water chemistry of tundra lakes in the periglacial zone of the Bellsund Fiord (Svalbard) in the summer of 2013.

    PubMed

    Szumińska, Danuta; Szopińska, Małgorzata; Lehmann-Konera, Sara; Franczak, Łukasz; Kociuba, Waldemar; Chmiel, Stanisław; Kalinowski, Paweł; Polkowska, Żaneta

    2018-05-15

    Climate changes observed in the Arctic (e.g. permafrost degradation, glacier retreat) may have a significant influence on sensitive polar wetlands. The main objective of this paper is to define the chemical features of the water within six small Arctic lakes located in Bellsund (Svalbard) in an area of continuous permafrost occurrence. The unique environmental conditions of the study area offer an opportunity to observe phenomena influencing water chemistry, such as chemical weathering, permafrost thawing, marine aerosols, atmospheric deposition and biological inputs. In the water samples collected during summer 2013, detailed tundra lake water chemistry characteristics, covering ions, trace elements, pH and specific electrolytic conductivity (SEC25), were determined. Moreover, the water chemistry of the studied lakes was compared to water samples from the Tyvjobekken Creek and to precipitation samples. As a final step of data analysis, Principal Component Analysis (PCA) was performed. The detailed chemical analysis allowed us to conclude the following: (1) Ca^2+, Mg^2+, SO4^2- and Sr are of geogenic origin; (2) the NO3^- present in tundra lake and Tyvjobekken Creek water samples (ranging from 0.31 to 1.69 mg L^-1 and from 0.25 to 1.58 mg L^-1, respectively) may be of mixed origin, i.e. from biological processes and permafrost thawing; (3) the high contribution of non-sea-salt SO4^2- (>80% in the majority of studied samples) indicates a considerable inflow of sulphate-rich air to the study area; (4) the high content of chlorides in the tundra lakes (range: 25.6-32.0% meq L^-1) indicates a marine aerosol influence; (5) the PCA result shows that atmospheric transport may constitute a source of Mn, Co, Ni, Cu, Ga, Ba and Cd. However, further detailed inter-season and multi-seasonal studies of tundra lakes in the Arctic are recommended, especially in terms of detailed differentiation of source influences (atmospheric transport vs. permafrost degradation). Copyright © 2017 Elsevier B.V. All rights reserved.
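
    The PCA step used above to group variables by common source can be sketched as an eigendecomposition of the correlation matrix of standardised data. The data matrix here is random illustrative noise, not the lake chemistry data.

```python
import numpy as np

# PCA via eigendecomposition: standardise each variable, form the
# covariance matrix of the standardised data (approximately the
# correlation matrix), and sort components by explained variance.

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))                 # 30 samples x 5 variables
Xs = (X - X.mean(axis=0)) / X.std(axis=0)    # z-score each column

R = np.cov(Xs, rowvar=False)                 # ~correlation matrix
eigvals, eigvecs = np.linalg.eigh(R)         # eigh returns ascending order
order = np.argsort(eigvals)[::-1]            # re-sort: largest variance first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()          # variance fraction per component
scores = Xs @ eigvecs                        # sample scores on the components
```

    In a source-apportionment setting, variables loading heavily on the same component (columns of `eigvecs`) are interpreted as sharing an origin, e.g. the atmospheric-transport metal group identified in the study.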

  7. Benefit-cost analysis of addiction treatment: methodological guidelines and empirical application using the DATCAP and ASI.

    PubMed

    French, Michael T; Salomé, Helena J; Sindelar, Jody L; McLellan, A Thomas

    2002-04-01

    To provide detailed methodological guidelines for using the Drug Abuse Treatment Cost Analysis Program (DATCAP) and Addiction Severity Index (ASI) in a benefit-cost analysis of addiction treatment. A representative benefit-cost analysis of three outpatient programs was conducted to demonstrate the feasibility and value of the methodological guidelines. Procedures are outlined for using resource use and cost data collected with the DATCAP. Techniques are described for converting outcome measures from the ASI to economic (dollar) benefits of treatment. Finally, principles are advanced for conducting a benefit-cost analysis and a sensitivity analysis of the estimates. The DATCAP was administered at three outpatient drug-free programs in Philadelphia, PA, for 2 consecutive fiscal years (1996 and 1997). The ASI was administered to a sample of 178 treatment clients at treatment entry and at 7 months postadmission. The DATCAP and ASI appear to have significant potential for contributing to an economic evaluation of addiction treatment. The benefit-cost analysis and subsequent sensitivity analysis both showed that total economic benefit was greater than total economic cost at the three outpatient programs, but this representative application is meant to stimulate future economic research rather than to justify treatment per se. This study used previously validated, research-proven instruments and methods to perform a practical benefit-cost analysis of real-world treatment programs. The study demonstrates one way to combine economic and clinical data and offers a methodological foundation for future economic evaluations of addiction treatment.
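
    The benefit-cost bookkeeping described above reduces to a net benefit and a benefit-cost ratio, plus a one-way sensitivity sweep on the benefit estimate. The dollar figures below are invented placeholders, not the study's DATCAP/ASI estimates.

```python
# Compare total economic benefit of a program to its total cost, then
# scale the (less certain) benefit estimate down to check whether the
# net-benefit conclusion is robust. All amounts are hypothetical.

def benefit_cost(total_benefit, total_cost):
    return {
        "net_benefit": total_benefit - total_cost,
        "bc_ratio": total_benefit / total_cost,
    }

base = benefit_cost(total_benefit=420_000.0, total_cost=250_000.0)

# One-way sensitivity: does net benefit stay positive if benefits were
# overestimated by up to 40%?
still_positive = all(
    benefit_cost(420_000.0 * f, 250_000.0)["net_benefit"] > 0
    for f in (1.0, 0.8, 0.6)
)
```

    A conclusion that survives this kind of downward scaling of benefits is the robustness property the study's sensitivity analysis is checking.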

  8. Profitability analysis of a femtosecond laser system for cataract surgery using a fuzzy logic approach.

    PubMed

    Trigueros, José Antonio; Piñero, David P; Ismail, Mahmoud M

    2016-01-01

    To define the financial and management conditions required to introduce a femtosecond laser system for cataract surgery into a clinic, using a fuzzy logic approach. The simulation performed in the current study considered the costs associated with the acquisition and use of a commercially available femtosecond laser platform for cataract surgery (VICTUS, TECHNOLAS Perfect Vision GmbH, Bausch & Lomb, Munich, Germany) over a period of 5 years. A sensitivity analysis was performed considering these costs and the accounting amortization of the system over this 5-year period. Furthermore, a fuzzy logic analysis was used to estimate the income associated with each femtosecond laser-assisted cataract surgery (G). According to the sensitivity analysis, the femtosecond laser system under evaluation can be profitable if 1400 cataract surgeries are performed per year and if each surgery can be invoiced at more than $500. The fuzzy logic analysis, however, indicated that the patient would have to pay more per surgery, between $661.80 and $667.40, without considering the cost of the intraocular lens (IOL). Profitability of femtosecond laser systems for cataract surgery can be achieved after a detailed financial analysis, especially in centers with large volumes of patients. The cost of the surgery for patients should be adapted to the real flow of patients able to pay within a reasonable cost range.
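    The break-even arithmetic behind a profitability threshold of this kind is simple to sketch. The cost structure below is entirely invented for illustration; it is not the VICTUS platform's pricing.

```python
# Hypothetical cost structure (illustrative only, not the study's figures):
acquisition_cost = 500_000.0   # laser platform, amortized over 5 years
annual_maintenance = 40_000.0
per_use_cost = 150.0           # consumables per procedure
years = 5
surgeries_per_year = 1_400

total_fixed = acquisition_cost + annual_maintenance * years
total_surgeries = surgeries_per_year * years

# Break-even fee per surgery: fixed costs spread over all procedures,
# plus the marginal per-use cost.
break_even_fee = total_fixed / total_surgeries + per_use_cost
print(round(break_even_fee, 2))  # -> 250.0 with these invented numbers
```

    Any invoiced fee above `break_even_fee` yields a profit under these assumptions; a fuzzy logic analysis, as in the study, would instead propagate ranges for the uncertain inputs rather than single point values.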

  9. Diagnostics of laser-produced plasmas based on the analysis of intensity ratios of He-like ions X-ray emission

    DOE PAGES

    Ryazantsev, S. N.; Skobelev, I. Yu.; Faenov, A. Ya.; ...

    2016-12-08

    Here we detail the diagnostic technique used to infer spatially resolved electron temperatures and densities in experiments dedicated to investigating the generation of magnetically collimated plasma jets. It is shown that the relative intensities of the resonance transitions in emitting He-like ions can be used to measure the temperature in such recombining plasmas. The intensities of these transitions are sensitive to the plasma density in the range of 10¹⁶-10²⁰ cm⁻³ and to plasma temperatures ranging from 10 to 100 eV for ions with a nuclear charge Zn ~ 10. We show how detailed calculations of the emissivity of F VIII ions allow us to determine the parameters of the plasma jets that were created using the ELFIE ns laser facility (Ecole Polytechnique, France). Lastly, the diagnostic and analysis technique detailed here can be applied in a broader context than that of this study, i.e., to diagnose any recombining plasma containing He-like fluorine ions.

  10. A sensitivity analysis on seismic tomography data with respect to CO2 saturation of a CO2 geological sequestration field

    NASA Astrophysics Data System (ADS)

    Park, Chanho; Nguyen, Phung K. T.; Nam, Myung Jin; Kim, Jongwook

    2013-04-01

    Monitoring CO2 migration and storage in geological formations is important not only for the stability of geological sequestration of CO2 but also for efficient management of CO2 injection. In particular, geophysical methods can make in situ observations of CO2 to assess potential leakage, to improve reservoir description, and to monitor the development of geologic discontinuities (i.e., faults, cracks, joints, etc.). Geophysical monitoring can be based on wireline logging for well-scale monitoring (high resolution and narrow area of investigation) or on surface surveys for basin-scale monitoring (low resolution and wide area of investigation). Crosswell tomography, in turn, can provide reservoir-scale monitoring to bridge the resolution gap between well logs and surface measurements. This study focuses on reservoir-scale monitoring based on crosswell seismic tomography, aiming to describe the details of reservoir structure and to monitor the migration of reservoir fluids (water and CO2). For the monitoring, we first perform a sensitivity analysis on crosswell seismic tomography data with respect to CO2 saturation. For the sensitivity analysis, Rock Physics Models (RPMs) are constructed by calculating the density and the P- and S-wave velocities of a virtual CO2 injection reservoir. Since the seismic velocity of the reservoir changes appreciably with CO2 saturation only when the saturation is less than about 20%, and is insensitive to further changes above that level, the sensitivity analysis is mainly made for CO2 saturations below 20%. For precise simulation of seismic tomography responses for the constructed RPMs, we developed a time-domain 2D elastic modeling code based on the finite difference method with a staggered grid, employing a convolutional perfectly matched layer boundary condition. 
    We further compare the sensitivities of seismic tomography and surface measurements for the RPMs to analyse the resolution difference between them. Moreover, assuming a reservoir situation similar to the CO2 storage site in Nagaoka, Japan, we generate time-lapse tomographic data sets for the corresponding CO2 injection process and make a preliminary interpretation of the data sets.

  11. Design and Analysis of the International X-Ray Observatory Mirror Modules

    NASA Technical Reports Server (NTRS)

    McClelland, Ryan S.; Carnahan, Timothy M.; Robinson, David W.; Saha, Timo T.

    2009-01-01

    The Soft X-Ray Telescope (SXT) modules are the fundamental focusing assemblies on NASA's next major X-ray telescope mission, the International X-Ray Observatory (IXO). The preliminary design and analysis of these assemblies has been completed, addressing the major engineering challenges and leading to an understanding of the factors affecting module performance. Each of the 60 modules in the Flight Mirror Assembly (FMA) supports 200-300 densely packed 0.4 mm thick glass mirror segments in order to meet the unprecedented effective area required to achieve the scientific objectives of the mission. Detailed Finite Element Analysis (FEA), materials testing, and environmental testing have been completed to ensure the modules can be successfully launched. Resulting stress margins are positive based on detailed FEA, a large factor of safety, and a design strength determined by robust characterization of the glass properties. FEA correlates well with the results of the successful modal, vibration, and acoustic environmental tests. Deformation of the module due to on-orbit thermal conditions is also a major design driver. A preliminary thermal control system has been designed and the sensitivity of module optical performance to various thermal loads has been determined using optomechanical analysis methods developed for this unique assembly. This design and analysis furthers the goal of building a module that demonstrates the ability to meet IXO requirements, which is the current focus of the IXO FMA technology development team.

  12. Dependence of Intramyocardial Pressure and Coronary Flow on Ventricular Loading and Contractility: A Model Study

    PubMed Central

    Borsje, Petra; Arts, Theo; van De Vosse, Frans N.

    2006-01-01

    The phasic coronary arterial inflow during the normal cardiac cycle has been explained with simple (waterfall, intramyocardial pump) models, emphasizing the role of ventricular pressure. To explain changes in isovolumic and low afterload beats, these models were extended with the effect of three-dimensional wall stress, nonlinear characteristics of the coronary bed, and extravascular fluid exchange. With the associated increase in the number of model parameters, a detailed parameter sensitivity analysis has become difficult. Therefore we investigated the primary relations between ventricular pressure and volume, wall stress, intramyocardial pressure and coronary blood flow, with a mathematical model with a limited number of parameters. The model replicates several experimental observations: the phasic character of coronary inflow is virtually independent of maximum ventricular pressure, the amplitude of the coronary flow signal varies about proportionally with cardiac contractility, and intramyocardial pressure in the ventricular wall may exceed ventricular pressure. A parameter sensitivity analysis shows that the normalized amplitude of coronary inflow is mainly determined by contractility, reflected in ventricular pressure and, at low ventricular volumes, radial wall stress. Normalized flow amplitude is less sensitive to myocardial coronary compliance and resistance, and to the relation between active fiber stress, time, and sarcomere shortening velocity. PMID:17048105

  13. Diagnostic accuracy of central venous catheter confirmation by bedside ultrasound versus chest radiography in critically ill patients: A systematic review and meta-analysis

    PubMed Central

    Ablordeppey, Enyo A.; Drewry, Anne M.; Beyer, Alexander B.; Theodoro, Daniel L.; Fowler, Susan A.; Fuller, Brian M.; Carpenter, Christopher R.

    2016-01-01

    Objective We performed a systematic review and meta-analysis to examine the accuracy of bedside ultrasound for confirmation of central venous catheter position and exclusion of pneumothorax compared to chest radiography. Data Sources PubMed, EMBASE, Cochrane Central Register of Controlled Trials, reference lists, conference proceedings and ClinicalTrials.gov Study Selection Articles and abstracts describing the diagnostic accuracy of bedside ultrasound compared with chest radiography for confirmation of central venous catheters in sufficient detail to reconstruct 2×2 contingency tables were reviewed. Primary outcomes included the accuracy of confirming catheter positioning and detecting a pneumothorax. Secondary outcomes included feasibility, inter-rater reliability, and efficiency to complete bedside ultrasound confirmation of central venous catheter position. Data Extraction Investigators abstracted study details including research design and sonographic imaging technique to detect catheter malposition and procedure-related pneumothorax. Diagnostic accuracy measures included pooled sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio. Data Synthesis 15 studies with 1553 central venous catheter placements were identified with a pooled sensitivity and specificity of catheter malposition by ultrasound of 0.82 [0.77, 0.86] and 0.98 [0.97, 0.99] respectively. The pooled positive and negative likelihood ratios of catheter malposition by ultrasound were 31.12 [14.72, 65.78] and 0.25 [0.13, 0.47]. The sensitivity and specificity of ultrasound for pneumothorax detection was nearly 100% in the participating studies. Bedside ultrasound reduced mean central venous catheter confirmation time by 58.3 minutes. Risk of bias and clinical heterogeneity in the studies were high. Conclusions Bedside ultrasound is faster than radiography at identifying pneumothorax after central venous catheter insertion. 
When a central venous catheter malposition exists, bedside ultrasound will identify four out of every five cases, and earlier than chest radiography. PMID:27922877
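    The pooled accuracy measures reported above all derive from 2×2 contingency tables. A minimal sketch of the computation, using illustrative counts chosen so that sensitivity and specificity match the pooled values (0.82 and 0.98), not the actual pooled study data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and likelihood ratios from a 2x2 table."""
    sens = tp / (tp + fn)            # true positives / all diseased
    spec = tn / (tn + fp)            # true negatives / all non-diseased
    lr_pos = sens / (1 - spec)       # positive likelihood ratio
    lr_neg = (1 - sens) / spec       # negative likelihood ratio
    return sens, spec, lr_pos, lr_neg

# Illustrative counts for ultrasound detection of catheter malposition
# (invented; pooled likelihood ratios in the study were computed differently,
# across studies, so they need not equal the single-table values here).
sens, spec, lr_pos, lr_neg = diagnostic_metrics(tp=82, fp=2, fn=18, tn=98)
print(round(sens, 2), round(spec, 2), round(lr_pos, 1), round(lr_neg, 2))
```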

  14. Implementation and reporting of causal mediation analysis in 2015: a systematic review in epidemiological studies.

    PubMed

    Liu, Shao-Hsien; Ulbricht, Christine M; Chrysanthopoulou, Stavroula A; Lapane, Kate L

    2016-07-20

    Causal mediation analysis is often used to understand the impact of variables along the causal pathway of an occurrence relation. How well studies apply and report the elements of causal mediation analysis remains unknown. We systematically reviewed epidemiological studies published in 2015 that employed causal mediation analysis to estimate direct and indirect effects of observed associations between an exposure and an outcome. We identified potential epidemiological studies by conducting a citation search within Web of Science and a keyword search within PubMed. Two reviewers independently screened studies for eligibility. For eligible studies, one reviewer performed data extraction, and a senior epidemiologist confirmed the extracted information. Empirical application and methodological details of the technique were extracted and summarized. Thirteen studies were eligible for data extraction. While the majority of studies reported and identified the effect measures, most studies lacked sufficient detail on the extent to which identifiability assumptions were satisfied. Although most studies addressed issues of unmeasured confounders either through empirical approaches or sensitivity analyses, the majority did not examine the potential bias arising from measurement error in the mediator. Some studies allowed for exposure-mediator interaction, and only a few presented results from models both with and without interactions. Power calculations were scarce. Reporting of causal mediation analysis is varied and suboptimal. Given that the application of causal mediation analysis will likely continue to increase, developing standards for reporting causal mediation analysis in epidemiological research would be prudent.

  15. Low energy analysis techniques for CUORE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alduino, C.; Alfonso, K.; Artusa, D. R.

    CUORE is a tonne-scale cryogenic detector operating at the Laboratori Nazionali del Gran Sasso (LNGS) that uses tellurium dioxide bolometers to search for neutrinoless double-beta decay of 130Te. CUORE is also suitable for searching for low energy rare events, such as solar axions or WIMP scattering, thanks to its ultra-low background and large target mass. However, conducting such sensitive searches requires improving the energy threshold to 10 keV. Here we describe the analysis techniques developed for the low energy analysis of CUORE-like detectors, using the data acquired from November 2013 to March 2015 by CUORE-0, a single-tower prototype designed to validate the assembly procedure and new cleaning techniques of CUORE. We explain in detail the energy threshold optimization, continuous monitoring of the trigger efficiency, data and event selection, and energy calibration at low energies. We also present the low energy background spectrum of CUORE-0 below 60 keV. Finally, we report the sensitivity of CUORE to WIMP annual modulation using the CUORE-0 energy threshold and background, as well as an estimate of the uncertainty on the nuclear quenching factor from nuclear recoils in CUORE-0.

  16. Low energy analysis techniques for CUORE

    DOE PAGES

    Alduino, C.; Alfonso, K.; Artusa, D. R.; ...

    2017-12-12

    CUORE is a tonne-scale cryogenic detector operating at the Laboratori Nazionali del Gran Sasso (LNGS) that uses tellurium dioxide bolometers to search for neutrinoless double-beta decay of 130Te. CUORE is also suitable for searching for low energy rare events, such as solar axions or WIMP scattering, thanks to its ultra-low background and large target mass. However, conducting such sensitive searches requires improving the energy threshold to 10 keV. Here we describe the analysis techniques developed for the low energy analysis of CUORE-like detectors, using the data acquired from November 2013 to March 2015 by CUORE-0, a single-tower prototype designed to validate the assembly procedure and new cleaning techniques of CUORE. We explain in detail the energy threshold optimization, continuous monitoring of the trigger efficiency, data and event selection, and energy calibration at low energies. We also present the low energy background spectrum of CUORE-0 below 60 keV. Finally, we report the sensitivity of CUORE to WIMP annual modulation using the CUORE-0 energy threshold and background, as well as an estimate of the uncertainty on the nuclear quenching factor from nuclear recoils in CUORE-0.

  17. PIXE analysis of caries related trace elements in tooth enamel

    NASA Astrophysics Data System (ADS)

    Annegarn, H. J.; Jodaikin, A.; Cleaton-Jones, P. E.; Sellschop, J. P. F.; Madiba, C. C. P.; Bibby, D.

    1981-03-01

    PIXE analysis has been applied to a set of twenty human teeth to determine trace element concentrations in enamel from areas susceptible to dental caries (mesial and distal contact points) and from areas less susceptible to the disease (buccal surfaces), with the aim of determining the possible roles of trace elements in the carious process. The samples were caries-free anterior incisors extracted for periodontal reasons from subjects 10-30 years of age. Prior to extraction of the sample teeth, a detailed dental history and examination was carried out for each individual. PIXE analysis, using a 3 MeV proton beam of 1 mm diameter, allowed the determination of Ca, Mn, Fe, Cu, Zn, Sr and Pb above detection limits. As demonstrated in this work, the enhanced sensitivity of PIXE analysis over electron microprobe analysis, and the capability of localised surface analysis compared with the pooled samples required for neutron activation analysis, make it a powerful and useful technique in dental analysis.

  18. Sensitivity of simulated maize crop yields to regional climate in the Southwestern United States

    NASA Astrophysics Data System (ADS)

    Kim, S.; Myoung, B.; Stack, D.; Kim, J.; Hatzopoulos, N.; Kafatos, M.

    2013-12-01

    The sensitivity of maize yield to the regional climate in the Southwestern United States (SW US) has been investigated using a crop-yield simulation model (APSIM) in conjunction with meteorological forcings (daily minimum and maximum temperature, precipitation, and radiation) from the North American Regional Reanalysis (NARR) dataset. The primary focus of this study is the effect of interannual variations of atmospheric components on crop productivity in the SW US over the 21-year period 1991-2011. First, the characteristics and performance of APSIM were examined by comparing simulated maize yields with observed yields from the United States Department of Agriculture (USDA) and with the leaf-area index (LAI) from MODIS satellite data. Comparisons of the simulated maize yield with the available observations show that the crop model can reasonably reproduce observed maize yields. Sensitivity tests were performed to assess the relative contribution of each climate driver to regional crop yield. The experiments show that potential crop production responds nonlinearly to climate drivers and that yield sensitivity varies among geographical locations depending on their mean climates. Lastly, a detailed analysis of both the spatial and temporal variations of each climate driver was performed for the regions where maize is actually grown in three states (CA, AZ, and NV) in the SW US.

  19. Sensitivity of a numerical wave model on wind re-analysis datasets

    NASA Astrophysics Data System (ADS)

    Lavidas, George; Venugopal, Vengatesan; Friedrich, Daniel

    2017-03-01

    Wind is the dominant process for wave generation. Detailed evaluation of metocean conditions strengthens our understanding of issues concerning potential offshore applications. However, the scarcity of buoys and the high cost of monitoring systems pose a barrier to properly defining offshore conditions. Through the use of numerical wave models, metocean conditions can be hindcasted and forecasted, providing reliable characterisations. This study reports the sensitivity of a numerical wave model for the Scottish region to its wind inputs. Two re-analysis wind datasets with different spatio-temporal characteristics are used: the ERA-Interim Re-Analysis and the CFSR-NCEP Re-Analysis. Different wind products alter the results, affecting the accuracy obtained. The scope of this study is to assess the available wind databases and provide information concerning the most appropriate wind dataset for the specific region, based on temporal, spatial and geographic terms, for wave modelling and offshore applications. Both wind input datasets delivered results from the numerical wave model with good correlation. Wave results from the 1-h dataset have higher peaks and lower biases, at the expense of a high scatter index. On the other hand, the 6-h dataset has lower scatter but higher biases. The study shows how the wind dataset affects numerical wave modelling performance and that, depending on location and study needs, different wind inputs should be considered.
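    The bias and scatter index used above to contrast the two wind-forced hindcasts are standard wave-model validation metrics. A small sketch with made-up significant-wave-height values follows; note that the exact scatter-index definition varies slightly across studies, and RMSE normalized by the observed mean (used here) is one common choice.

```python
import numpy as np

def bias_and_scatter_index(model, obs):
    """Bias and scatter index (SI) of model values against observations."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    bias = np.mean(model - obs)                    # mean over/under-prediction
    rmse = np.sqrt(np.mean((model - obs) ** 2))
    si = rmse / np.mean(obs)                       # RMSE / observed mean
    return bias, si

# Illustrative significant-wave-height series in metres (invented data).
obs  = [1.2, 1.5, 2.0, 2.4, 1.8]
hind = [1.3, 1.4, 2.2, 2.5, 1.7]
print(bias_and_scatter_index(hind, obs))
```

    A hindcast can thus have a small bias (errors cancel on average) while still showing a large scatter index (individual errors are big), which is the trade-off the abstract reports between the 1-h and 6-h datasets.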

  20. Description and Sensitivity Analysis of the SOLSE/LORE-2 and SAGE III Limb Scattering Ozone Retrieval Algorithms

    NASA Technical Reports Server (NTRS)

    Loughman, R.; Flittner, D.; Herman, B.; Bhartia, P.; Hilsenrath, E.; McPeters, R.; Rault, D.

    2002-01-01

    The SOLSE (Shuttle Ozone Limb Sounding Experiment) and LORE (Limb Ozone Retrieval Experiment) instruments are scheduled for reflight on Space Shuttle flight STS-107 in July 2002. In addition, the SAGE III (Stratospheric Aerosol and Gas Experiment) instrument will begin to make limb scattering measurements during Spring 2002. The optimal estimation technique is used to analyze visible and ultraviolet limb scattered radiances and produce a retrieved ozone profile. The algorithm used to analyze data from the initial flight of the SOLSE/LORE instruments (on Space Shuttle flight STS-87 in November 1997) forms the basis of the current algorithms, with expansion to take advantage of the increased multispectral information provided by SOLSE/LORE-2 and SAGE III. We also present detailed sensitivity analysis for these ozone retrieval algorithms. The primary source of ozone retrieval error is tangent height misregistration (i.e., instrument pointing error), which is relevant throughout the altitude range of interest, and can produce retrieval errors on the order of 10-20 percent due to a tangent height registration error of 0.5 km at the tangent point. Other significant sources of error are sensitivity to stratospheric aerosol and sensitivity to error in the a priori ozone estimate (given assumed instrument signal-to-noise = 200). These can produce errors up to 10 percent for the ozone retrieval at altitudes less than 20 km, but produce little error above that level.
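    In its linear, Gaussian form, the optimal estimation technique named above reduces to a single matrix expression for the maximum a posteriori state. The toy sketch below uses an invented 4×3 weighting-function matrix and noise levels; it is not the actual SOLSE/LORE or SAGE III forward model, only the shape of the calculation.

```python
import numpy as np

# Toy linear retrieval: state x = ozone at 3 levels, measurement y = 4 radiances.
rng = np.random.default_rng(0)
K = rng.normal(size=(4, 3))          # hypothetical weighting-function matrix
x_true = np.array([1.0, 2.0, 1.5])
S_e = 0.01 * np.eye(4)               # measurement-noise covariance
S_a = 1.0 * np.eye(3)                # a priori covariance
x_a = np.zeros(3)                    # a priori state

y = K @ x_true + rng.multivariate_normal(np.zeros(4), S_e)

# Maximum a posteriori solution (linear optimal estimation):
#   x_hat = x_a + (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 (y - K x_a)
Se_inv, Sa_inv = np.linalg.inv(S_e), np.linalg.inv(S_a)
gain = np.linalg.inv(K.T @ Se_inv @ K + Sa_inv) @ K.T @ Se_inv
x_hat = x_a + gain @ (y - K @ x_a)
print(x_hat.round(2))
```

    The abstract's sensitivity results map onto this framework directly: a tangent height misregistration is an error in `K`, while sensitivity to the a priori estimate is the dependence of `x_hat` on `x_a` and `S_a`.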

  1. Cost of diabetic eye, renal and foot complications: a methodological review.

    PubMed

    Schirr-Bonnans, Solène; Costa, Nadège; Derumeaux-Burel, Hélène; Bos, Jérémy; Lepage, Benoît; Garnault, Valérie; Martini, Jacques; Hanaire, Hélène; Turnin, Marie-Christine; Molinier, Laurent

    2017-04-01

    Diabetic retinopathy (DR), diabetic kidney disease (DKD) and diabetic foot ulcer (DFU) represent a public health and economic concern that may be assessed with cost-of-illness (COI) studies. Objectives: (1) to review COI studies published between 2000 and 2015 concerning DR, DKD and DFU; (2) to analyse the methods used. Disease definition, epidemiological approach, perspective, type of costs, activity data sources, cost valuation, sensitivity analysis, cost discounting and presentation of costs may be described in COI studies. Each reviewed study was assessed with a methodological grid including these nine items. The five following items were detailed in the reviewed studies: epidemiological approach (59% of studies described it), perspective (75%), type of costs (98%), activity data sources (91%) and cost valuation (59%). The disease definition and the presentation of results were detailed in fewer studies (50% and 46%, respectively). In contrast, sensitivity analysis was performed in only 14% of studies and cost discounting in 7%. Considering the studies reporting an average cost per patient per year from a societal perspective, DR cost estimates were US $2297 (range 5-67,486), DKD cost ranged from US $1095 to US $16,384, and DFU cost was US $10,604 (range 1444-85,718). This review reinforces the need to adequately describe the method used, to facilitate literature comparisons and projections. It also recalls that COI studies are complementary tools to cost-effectiveness studies, helping decision makers allocate economic resources for the management of DR, DKD and DFU.

  2. Performance analysis of a new positron camera geometry for high speed, fine particle tracking

    NASA Astrophysics Data System (ADS)

    Sovechles, J. M.; Boucher, D.; Pax, R.; Leadbeater, T.; Sasmito, A. P.; Waters, K. E.

    2017-09-01

    A new positron camera arrangement was assembled using 16 ECAT951 modular detector blocks. A closely packed, cross pattern arrangement was selected to produce a highly sensitive cylindrical region for tracking particles with low activities and high speeds. To determine the capabilities of this system, a comprehensive analysis of the tracking performance was conducted to determine the 3D location error and location frequency as a function of tracer activity and speed. The 3D error was found to range from 0.54 mm for a stationary particle, consistent across all tracer activities, up to 4.33 mm for a tracer with an activity of 3 MBq and a speed of 4 m · s-1. For lower activity tracers (<10-2 MBq), the error was more sensitive to increases in speed, increasing to 28 mm (at 4 m · s-1), indicating that under these conditions a reliable trajectory is not possible. These results expanded on, but correlated well with, previous literature that only contained location errors for tracer speeds up to 1.5 m · s-1. The camera was also used to track directly activated mineral particles inside a two-inch hydrocyclone and a 142 mm diameter flotation cell. A detailed trajectory, inside the hydrocyclone, of a  -212  +  106 µm (10-1 MBq) quartz particle displayed the expected spiralling motion towards the apex. This was the first time a mineral particle of this size had been successfully traced within a hydrocyclone; however, more work is required to develop detailed velocity fields.

  3. SUPPLEMENT: “THE RATE OF BINARY BLACK HOLE MERGERS INFERRED FROM ADVANCED LIGO OBSERVATIONS SURROUNDING GW150914” (2016, ApJL, 833, L1)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abbott, B. P.; Abbott, R.; Abernathy, M. R.

    This article provides supplemental information for a Letter reporting the rate of binary black hole (BBH) coalescences inferred from 16 days of coincident Advanced LIGO observations surrounding the transient gravitational-wave (GW) signal GW150914. In that work we reported various rate estimates whose 90% confidence intervals fell in the range 2-600 Gpc⁻³ yr⁻¹. Here we give details on our method and computations, including information about our search pipelines, a derivation of our likelihood function for the analysis, a description of the astrophysical search trigger distribution expected from merging BBHs, details on our computational methods, a description of the effects of calibration uncertainty and our model for it, and an analytic method for estimating our detector sensitivity, which is calibrated to our measurements.

  4. Model documentation Natural Gas Transmission and Distribution Model of the National Energy Modeling System. Volume 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1996-02-26

    The Natural Gas Transmission and Distribution Model (NGTDM) of the National Energy Modeling System is developed and maintained by the Energy Information Administration (EIA), Office of Integrated Analysis and Forecasting. This report documents the archived version of the NGTDM that was used to produce the natural gas forecasts presented in the Annual Energy Outlook 1996 (DOE/EIA-0383(96)). The purpose of this report is to provide a reference document for model analysts, users, and the public that defines the objectives of the model, describes its basic approach, and provides detail on the methodology employed. Previously this report represented Volume I of a two-volume set. Volume II reported on model performance, detailing convergence criteria and properties, results of sensitivity testing, comparison of model outputs with the literature and/or other model results, and major unresolved issues.

  5. Global sensitivity analysis of water age and temperature for informing salmonid disease management

    NASA Astrophysics Data System (ADS)

    Javaheri, Amir; Babbar-Sebens, Meghna; Alexander, Julie; Bartholomew, Jerri; Hallett, Sascha

    2018-06-01

    Many rivers in the Pacific Northwest region of North America are anthropogenically manipulated via dam operations, leading to system-wide impacts on hydrodynamic conditions and aquatic communities. Understanding how dam operations alter abiotic and biotic variables is important for designing management actions. For example, in the Klamath River, dam outflows could be manipulated to alter water age and temperature to reduce the risk of parasite infections in salmon by diluting or altering the viability of parasite spores. However, the sensitivity of water age and temperature to riverine conditions such as bathymetry can affect the outcomes of dam operations. To examine this issue in detail, we conducted a global sensitivity analysis of water age and temperature with respect to a comprehensive set of hydraulic and meteorological parameters in the Klamath River, California, where management of salmonid disease is a high priority. We applied an analysis technique that combines Latin-hypercube and one-at-a-time sampling methods, driving simulation runs of the hydrodynamic numerical model of the Lower Klamath. We found that flow rate and bottom roughness were the two most important parameters influencing water age. Water temperature was most sensitive to inflow temperature, followed by air temperature, solar radiation, wind speed, flow rate, and wet bulb temperature. Our results are relevant for managers because they provide a framework for predicting how water within 'high infection risk' sections of the river will respond to dam water (low infection risk) input. Moreover, these data will be useful for prioritizing the use of water age (dilution) versus temperature (spore viability) under certain contexts when considering flow manipulation as a method to reduce the risk of infection and disease in Klamath River salmon.
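    The combined sampling scheme described above can be sketched generically: Latin-hypercube sampling covers the parameter space globally, while one-at-a-time sweeps isolate each parameter's influence. The `model` below is an invented two-parameter surrogate, not the Lower Klamath hydrodynamic model.

```python
import numpy as np

def latin_hypercube(n_samples, bounds, rng):
    """One sample per equal-probability stratum of each parameter (LHS)."""
    d = len(bounds)
    strata = np.tile(np.arange(n_samples), (d, 1))
    # Shuffle the strata independently per parameter, jitter within strata.
    u = (rng.permuted(strata, axis=1).T + rng.random((n_samples, d))) / n_samples
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

def one_at_a_time(base, bounds, model, n_steps=5):
    """Sweep each parameter across its range, holding the others at base."""
    effects = []
    for j, (lo, hi) in enumerate(bounds):
        outputs = [model(np.r_[base[:j], v, base[j + 1:]])
                   for v in np.linspace(lo, hi, n_steps)]
        effects.append(max(outputs) - min(outputs))  # crude influence measure
    return effects

# Hypothetical surrogate standing in for the river model: parameter 0
# (say, flow rate) matters far more than parameter 1 for the output.
model = lambda x: x[0] ** 2 + 0.1 * x[1]
bounds = [(0.0, 1.0), (0.0, 1.0)]
rng = np.random.default_rng(42)

samples = latin_hypercube(8, bounds, rng)                   # global design
effects = one_at_a_time(np.array([0.5, 0.5]), bounds, model)
print(effects)  # larger first entry: parameter 0 dominates
```

    Ranking parameters by `effects` is the kind of result the study reports (flow rate and bottom roughness dominating water age); in practice each `model` call is a full hydrodynamic simulation.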

  6. A comparison of solute-transport solution techniques and their effect on sensitivity analysis and inverse modeling results

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2001-01-01

    Five common numerical techniques for solving the advection-dispersion equation (finite difference, predictor corrector, total variation diminishing, method of characteristics, and modified method of characteristics) were tested using simulations of a controlled conservative tracer-test experiment through a heterogeneous, two-dimensional sand tank. The experimental facility was constructed using discrete, randomly distributed, homogeneous blocks of five sand types. This experimental model provides an opportunity to compare the solution techniques: the heterogeneous hydraulic-conductivity distribution of known structure can be accurately represented by a numerical model, and detailed measurements can be compared with simulated concentrations and total flow through the tank. The present work uses this opportunity to investigate how three common types of results - simulated breakthrough curves, sensitivity analysis, and calibrated parameter values - change in this heterogeneous situation given the different methods of simulating solute transport. The breakthrough curves show that simulated peak concentrations, even at very fine grid spacings, varied between the techniques because of different amounts of numerical dispersion. Sensitivity-analysis results revealed: (1) a high correlation between hydraulic conductivity and porosity given the concentration and flow observations used, so that both could not be estimated; and (2) that the breakthrough curve data did not provide enough information to estimate individual values of dispersivity for the five sands. This study demonstrates that the choice of assigned dispersivity and the amount of numerical dispersion present in the solution technique influence estimated hydraulic conductivity values to a surprising degree.
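    The numerical dispersion that drives the scheme-to-scheme differences in simulated peak concentration is easy to reproduce. Below is a minimal 1D sketch using first-order upwind finite differencing (a basic variant of the finite difference approach compared in the study), with invented grid and tracer values: the advected square pulse smears out and its peak decays even though the exact solution is pure translation.

```python
import numpy as np

# 1D advection of a concentration pulse with first-order upwind differencing.
# Upwind is stable for Courant number <= 1 but introduces numerical
# dispersion, smearing sharp fronts and lowering simulated peaks.
nx, nt = 200, 100
dx, dt, v = 1.0, 0.5, 1.0          # grid spacing, time step, velocity
cr = v * dt / dx                   # Courant number (0.5: stable, diffusive)

c = np.zeros(nx)
c[20:40] = 1.0                     # initial square pulse of tracer
peak0 = c.max()

for _ in range(nt):
    c[1:] = c[1:] - cr * (c[1:] - c[:-1])   # explicit upwind update

print(peak0, round(c.max(), 3))    # peak decays below 1.0: numerical dispersion
```

    Because the amount of such artificial spreading differs between schemes (upwind, TVD, method of characteristics, ...), a calibrated dispersivity absorbs part of it, which is how the solution technique ends up influencing estimated parameter values.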

  7. MDS1, a dosage suppressor of an mck1 mutant, encodes a putative yeast homolog of glycogen synthase kinase 3.

    PubMed Central

    Puziss, J W; Hardy, T A; Johnson, R B; Roach, P J; Hieter, P

    1994-01-01

    The yeast gene MCK1 encodes a serine/threonine protein kinase that is thought to function in regulating kinetochore activity and entry into meiosis. Disruption of MCK1 confers a cold-sensitive phenotype, a temperature-sensitive phenotype, and sensitivity to the microtubule-destabilizing drug benomyl and leads to loss of chromosomes during growth on benomyl. A dosage suppression selection was used to identify genes that, when present at high copy number, could suppress the cold-sensitive phenotype of mck1::HIS3 mutant cells. Several unique classes of clones were identified, and one of these, designated MDS1, has been characterized in some detail. Nucleotide sequence data reveal that MDS1 encodes a serine/threonine protein kinase that is highly homologous to the shaggy/zw3 kinase in Drosophila melanogaster and its functional homolog, glycogen synthase kinase 3, in rats. The presence of MDS1 in high copy number rescues both the cold-sensitive and the temperature-sensitive phenotypes, but not the benomyl-sensitive phenotype, associated with the disruption of MCK1. Analysis of strains harboring an mds1 null mutation demonstrates that MDS1 is not essential during normal vegetative growth but appears to be required for meiosis. Finally, in vitro experiments indicate that the proteins encoded by both MCK1 and MDS1 possess protein kinase activity with substrate specificity similar to that of mammalian glycogen synthase kinase 3. PMID:8264650

  8. Optimization of the coplanar interdigital capacitive sensor

    NASA Astrophysics Data System (ADS)

    Huang, Yunzhi; Zhan, Zheng; Bowler, Nicola

    2017-02-01

    Interdigital capacitive sensors are applied in nondestructive testing and material property characterization of low-conductivity materials. The sensor performance is typically described based on the penetration depth of the electric field into the sample material, the sensor signal strength and its sensitivity. These factors all depend on the geometry and material properties of the sensor and sample. In this paper, a detailed analysis is provided, through finite element simulations, of the ways in which the sensor's geometrical parameters affect its performance. The geometrical parameters include the number of digits forming the interdigital electrodes and the ratio of digit width to their separation. In addition, the influence of the presence or absence of a metal backplane on the sample is analyzed. Further, the effects of sensor substrate thickness and material on signal strength are studied. The results of the analysis show that it is necessary to take into account a trade-off between the desired sensitivity and penetration depth when designing the sensor. Parametric equations are presented to assist the sensor designer or nondestructive evaluation specialist in optimizing the design of a capacitive sensor.

  9. Modelling irrigated maize with a combination of coupled-model simulation and uncertainty analysis, in the northwest of China

    NASA Astrophysics Data System (ADS)

    Li, Y.; Kinzelbach, W.; Zhou, J.; Cheng, G. D.; Li, X.

    2012-05-01

    The hydrologic model HYDRUS-1-D and the crop growth model WOFOST are coupled to efficiently manage water resources in agriculture and improve the prediction of crop production. The results of the coupled model are validated against field experiments on irrigated maize conducted in the middle reaches of northwest China's Heihe River, a semi-arid to arid region. Good agreement is achieved between the simulated evapotranspiration, soil moisture, and crop production and their respective field measurements made under current maize irrigation and fertilization. Based on the calibrated model, the scenario analysis reveals that the optimal amount of irrigation is 500-600 mm in this region. However, for regions without detailed observation, the results of the numerical simulation can be unreliable for irrigation decision making owing to the shortage of calibrated model boundary conditions and parameters. We therefore develop a method combining model-ensemble simulations and uncertainty/sensitivity analysis to estimate the probability distribution of crop production. In our studies, the uncertainty analysis is used to reveal the risk of crop-production loss as irrigation decreases. The global sensitivity analysis is used to test the coupled model and further quantitatively analyse the impact of the uncertainty of coupled model parameters and environmental scenarios on crop production. This method can be used for estimation in regions with no or reduced data availability.

  10. [MALDI-TOF mass spectrometry in the investigation of large high-molecular biological compounds].

    PubMed

    Porubl'ova, L V; Rebriiev, A V; Hromovyĭ, T Iu; Minia, I I; Obolens'ka, M Iu

    2009-01-01

    MALDI-TOF (Matrix-Assisted Laser Desorption/Ionization Time-of-Flight) mass spectrometry has in recent years become a tool of choice for the analysis of biological polymers. The wide mass range, high accuracy, informativeness, and sensitivity make it a superior method for the analysis of all kinds of high-molecular biological compounds, including proteins, nucleic acids, and lipids. MALDI-TOF-MS is particularly suitable for the identification of proteins by mass fingerprinting or microsequencing, and it has therefore become an important technique of proteomics. Furthermore, the method allows detailed analysis of post-translational protein modifications and of protein-protein and protein-nucleic acid interactions. Recently, the method has also been successfully applied to nucleic acid sequencing and to screening for mutations.

  11. Clinical Application of Esophageal High-resolution Manometry in the Diagnosis of Esophageal Motility Disorders.

    PubMed

    van Hoeij, Froukje B; Bredenoord, Albert J

    2016-01-31

    Esophageal high-resolution manometry (HRM) is replacing conventional manometry in the clinical evaluation of patients with esophageal symptoms, especially dysphagia. The introduction of HRM gave rise to new objective metrics and recognizable patterns of esophageal motor function, requiring a new classification scheme: the Chicago classification. HRM measurements are more detailed and more easily performed compared to conventional manometry. The visual presentation of acquired data improved the analysis and interpretation of esophageal motor function. This led to a more sensitive, accurate, and objective analysis of esophageal motility. In this review we discuss how HRM changed the way we define and categorize esophageal motility disorders. Moreover, we discuss the clinical applications of HRM for each esophageal motility disorder separately.

  12. Clinical Application of Esophageal High-resolution Manometry in the Diagnosis of Esophageal Motility Disorders

    PubMed Central

    van Hoeij, Froukje B; Bredenoord, Albert J

    2016-01-01

    Esophageal high-resolution manometry (HRM) is replacing conventional manometry in the clinical evaluation of patients with esophageal symptoms, especially dysphagia. The introduction of HRM gave rise to new objective metrics and recognizable patterns of esophageal motor function, requiring a new classification scheme: the Chicago classification. HRM measurements are more detailed and more easily performed compared to conventional manometry. The visual presentation of acquired data improved the analysis and interpretation of esophageal motor function. This led to a more sensitive, accurate, and objective analysis of esophageal motility. In this review we discuss how HRM changed the way we define and categorize esophageal motility disorders. Moreover, we discuss the clinical applications of HRM for each esophageal motility disorder separately. PMID:26631942

  13. Label-free and highly sensitive optical imaging of detailed microcirculation within meninges and cortex in mice with the cranium left intact

    NASA Astrophysics Data System (ADS)

    Jia, Yali; An, Lin; Wang, Ruikang K.

    2010-05-01

    We demonstrate for the first time that the detailed blood flow distribution within the intracranial dura mater and cortex can be visualized by ultrahigh-sensitive optical microangiography (UHS-OMAG). The study uses a UHS-OMAG system operating at 1310 nm with an imaging speed of 150 frames per second, which requires ~10 s to complete one 3-D scan of ~2.5×2.5 mm². The system is sensitive to blood flow with velocities ranging from ~4 μm/s to ~23 mm/s. We show the superior performance of UHS-OMAG in providing functional images of capillary-level microcirculation within the meninges in mice with the cranium left intact, the results of which correlate well with standard dural histopathology.

  14. Flat plate vs. concentrator solar photovoltaic cells - A manufacturing cost analysis

    NASA Technical Reports Server (NTRS)

    Granon, L. A.; Coleman, M. G.

    1980-01-01

    The choice of which photovoltaic system (flat plate or concentrator) to use for utilizing solar cells to generate electricity depends mainly on the cost. A detailed, comparative manufacturing cost analysis of the two types of systems is presented. Several common assumptions, i.e., cell thickness, interest rate, power rate, factory production life, polysilicon cost, and direct labor rate are utilized in this analysis. Process sequences, cost variables, and sensitivity analyses have been studied, and results of the latter show that the most important parameters which determine manufacturing costs are concentration ratio, manufacturing volume, and cell efficiency. The total cost per watt of the flat plate solar cell is $1.45, and that of the concentrator solar cell is $1.85, the higher cost being due to the increased process complexity and material costs.

  15. Analysis of the SL-1 Accident Using RELAP5-3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Francisco, A.D. and Tomlinson, E. T.

    2007-11-08

    On January 3, 1961, at the National Reactor Testing Station in Idaho Falls, Idaho, the Stationary Low Power Reactor No. 1 (SL-1) experienced a major nuclear excursion, killing three people and destroying the reactor core. The SL-1 reactor, a 3 MW(t) boiling water reactor, was shut down and undergoing routine maintenance work at the time. This paper presents an analysis of the SL-1 reactor excursion using the RELAP5-3D thermal-hydraulic and nuclear analysis code, with the intent of simulating the accident from the point of reactivity insertion to destruction and vaporization of the fuel. Results are presented, along with a discussion of sensitivity to some reactor and transient parameters (many of the details are only known with a high level of uncertainty).

  16. An approach to measure parameter sensitivity in watershed hydrological modelling

    EPA Science Inventory

    Hydrologic responses vary spatially and temporally according to watershed characteristics. In this study, the hydrologic models that we developed earlier for the Little Miami River (LMR) and Las Vegas Wash (LVW) watersheds were used for detailed sensitivity analyses. To compare the...

  17. Power Systems Life Cycle Analysis Tool (Power L-CAT).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andruski, Joel; Drennen, Thomas E.

    2011-01-01

    The Power Systems L-CAT is a high-level dynamic model that calculates levelized production costs and tracks environmental performance for a range of electricity generation technologies: natural gas combined cycle (using either imported (LNGCC) or domestic natural gas (NGCC)), integrated gasification combined cycle (IGCC), supercritical pulverized coal (SCPC), existing pulverized coal (EXPC), nuclear, and wind. All of the fossil fuel technologies also include an option for including carbon capture and sequestration technologies (CCS). The model allows for quick sensitivity analysis on key technical and financial assumptions, such as: capital, O&M, and fuel costs; interest rates; construction time; heat rates; taxes; depreciation; and capacity factors. The fossil fuel options are based on detailed life cycle analysis reports conducted by the National Energy Technology Laboratory (NETL). For each of these technologies, NETL's detailed LCAs include consideration of five stages associated with energy production: raw material acquisition (RMA), raw material transport (RMT), energy conversion facility (ECF), product transportation and distribution (PT&D), and end user electricity consumption. The goal of the NETL studies is to compare existing and future fossil fuel technology options using a cradle-to-grave analysis. The NETL reports consider constant dollar levelized cost of delivered electricity, total plant costs, greenhouse gas emissions, criteria air pollutants, mercury (Hg) and ammonia (NH3) emissions, water withdrawal and consumption, and land use (acreage).
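    The abstract does not spell out the levelized-cost calculation, but a generic annuity-based LCOE sketch shows the kind of sensitivity check such a tool enables. All figures below are illustrative placeholders, not NETL or Power L-CAT values, and the formula omits taxes, depreciation schedules, and construction-period interest.

```python
def crf(rate, years):
    """Capital recovery factor: converts an up-front cost to a level annuity."""
    if rate == 0:
        return 1.0 / years
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def lcoe(capital_per_kw, fixed_om_per_kw_yr, fuel_per_mmbtu,
         heat_rate_btu_per_kwh, capacity_factor, rate=0.08, years=30):
    """Simplified levelized cost of electricity in $/MWh."""
    mwh_per_kw_yr = 8.76 * capacity_factor  # annual output per kW of capacity
    annual_fixed = capital_per_kw * crf(rate, years) + fixed_om_per_kw_yr
    fuel = fuel_per_mmbtu * heat_rate_btu_per_kwh / 1000.0  # $/MWh of fuel
    return annual_fixed / mwh_per_kw_yr + fuel

# illustrative placeholder inputs for a gas combined-cycle-like plant
base = lcoe(1000.0, 30.0, 4.0, 6500.0, capacity_factor=0.85)
low_cf = lcoe(1000.0, 30.0, 4.0, 6500.0, capacity_factor=0.40)
```

    A one-line comparison (`low_cf > base`) confirms the expected direction of the capacity-factor sensitivity: the same plant run less often delivers costlier electricity.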

  18. Development of the Burst and Transient Source Experiment (BATSE)

    NASA Technical Reports Server (NTRS)

    Horack, J. M.

    1991-01-01

    The Burst and Transient Source Experiment (BATSE), one of four instruments on the Gamma Ray Observatory, consists of eight identical detector modules mounted on the corners of the spacecraft. Developed at MSFC, BATSE is the most sensitive gamma ray burst detector flown to date. Details of the assembly and test phase of the flight hardware development are presented. Results and descriptions of calibrations performed at MSFC, TRW, and KSC are documented extensively. With the presentation of each set of calibration results, the reader is provided with the means to access the raw calibration data for further review or analysis.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bozoki, G.E.; Fitzpatrick, R.G.; Bohn, M.P.

    This report details the review of the Diablo Canyon Probabilistic Risk Assessment (DCPRA). The study was performed under contract from the Probabilistic Risk Analysis Branch, Office of Nuclear Reactor Research, USNRC by Brookhaven National Laboratory. The DCPRA is a full scope Level I effort and although the review touched on all aspects of the PRA, the internal events and seismic events received the vast majority of the review effort. The report includes a number of independent systems analyses, sensitivity studies, and importance analyses, as well as conclusions on the adequacy of the DCPRA for use in the Diablo Canyon Long Term Seismic Program.

  20. The ratio method: A new tool to study one-neutron halo nuclei

    DOE PAGES

    Capel, Pierre; Johnson, R. C.; Nunes, F. M.

    2013-10-02

    Recently a new observable to study halo nuclei was introduced, based on the ratio between breakup and elastic angular cross sections. Analysis of specific reactions shows this observable to be independent of the reaction mechanism and to provide nuclear-structure information on the projectile. Here we explore the details of this ratio method, including its sensitivity to the binding energy and angular momentum of the projectile. We also study the reliability of the method as a function of breakup energy. Lastly, we provide guidelines and specific examples for experimentalists who wish to apply this method.

  1. Angular correlations in the prompt neutron emission in spontaneous fission of 252Cf

    NASA Astrophysics Data System (ADS)

    Kopatch, Yuri; Chietera, Andreina; Stuttgé, Louise; Gönnenwein, Friedrich; Mutterer, Manfred; Gagarski, Alexei; Guseva, Irina; Dorvaux, Olivier; Hanappe, Francis; Hambsch, Franz-Josef

    2017-09-01

    An experiment aiming at the detailed investigation of angular correlations in the neutron emission from spontaneous fission of 252Cf has been performed at IPHC Strasbourg using the angle-sensitive double ionization chamber CODIS for measuring fission fragments and a set of 60 DEMON scintillator counters for neutron detection. The main aim of the experiment is to search for an anisotropy of neutron emission in the center-of-mass system of the fragments. The present status of the data analysis and the full Monte-Carlo simulation of the experiment are reported in the present paper.

  2. Estimation of Plutonium-240 Mass in Waste Tanks Using Ultra-Sensitive Detection of Radioactive Xenon Isotopes from Spontaneous Fission

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowyer, Theodore W.; Gesh, Christopher J.; Haas, Daniel A.

    This report details efforts to develop a technique that is able to detect and quantify the mass of 240Pu in waste storage tanks and other enclosed spaces. If the isotopic ratios of the plutonium contained in the enclosed space are also known, then this technique is capable of estimating the total plutonium mass without physical sample retrieval and radiochemical analysis of hazardous material. Results utilizing this technique are reported for a Hanford Site waste tank (TX-118) and a well-characterized plutonium sample in a laboratory environment.

  3. Evaluation of standard radiation atmosphere aerosol models for a coastal environment

    NASA Technical Reports Server (NTRS)

    Whitlock, C. H.; Suttles, J. T.; Sebacher, D. I.; Fuller, W. H.; Lecroy, S. R.

    1986-01-01

    Calculations are compared with data from an experiment to evaluate the utility of standard radiation atmosphere (SRA) models for defining aerosol properties in atmospheric radiation computations. Initial calculations with only SRA aerosols in a four-layer atmospheric column simulation allowed a sensitivity study and the detection of spectral trends in optical depth, which differed from measurements. Subsequently, a more detailed analysis provided a revision in the stratospheric layer, which brought calculations in line with both optical depth and skylight radiance data. The simulation procedure allows determination of which atmospheric layers influence both downwelling and upwelling radiation spectra.

  4. Scientific guidelines for preservation of samples collected from Mars

    NASA Technical Reports Server (NTRS)

    Gooding, James L. (Editor)

    1990-01-01

    The maximum scientific value of Martian geologic and atmospheric samples is retained when the samples are preserved in the conditions that applied prior to their collection. Any sample degradation equates to loss of information. Based on a detailed review of the pertinent scientific literature, and advice from experts in planetary sample analysis, numerical values are recommended for key parameters in the environmental control of collected samples with respect to material contamination, temperature, head-space gas pressure, ionizing radiation, magnetic fields, and acceleration/shock. Parametric values recommended for the most sensitive geologic samples should also be adequate to preserve any biogenic compounds or exobiological relics.

  5. Markov models of genome segmentation

    NASA Astrophysics Data System (ADS)

    Thakur, Vivek; Azad, Rajeev K.; Ramaswamy, Ram

    2007-01-01

    We introduce Markov models for segmentation of symbolic sequences, extending a segmentation procedure based on the Jensen-Shannon divergence that has been introduced earlier. Higher-order Markov models are more sensitive to the details of local patterns, and in application to genome analysis this makes it possible to segment a sequence at positions that are biologically meaningful. We show the advantage of higher-order Markov-model-based segmentation procedures in detecting compositional inhomogeneity in chimeric DNA sequences constructed from genomes of diverse species, and in application to the E. coli K12 genome, boundaries of genomic islands, cryptic prophages, and horizontally acquired regions are accurately identified.
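    The underlying Jensen-Shannon segmentation step can be sketched in a few lines for the order-0 (symbol-frequency) case; the higher-order Markov extension the abstract describes would replace the symbol counts with k-mer transition counts. This is an illustrative reconstruction on a toy chimeric sequence, not the authors' code.

```python
from collections import Counter
from math import log2

def entropy(counts):
    """Shannon entropy (bits) of a Counter of symbol frequencies."""
    n = sum(counts.values())
    return -sum(c / n * log2(c / n) for c in counts.values() if c)

def js_divergence(seq, i):
    """Jensen-Shannon divergence between the two halves of seq split at i."""
    left, right = Counter(seq[:i]), Counter(seq[i:])
    w_left = i / len(seq)
    return (entropy(left + right)
            - w_left * entropy(left)
            - (1 - w_left) * entropy(right))

def best_split(seq):
    """Position maximizing the JS divergence: the candidate segment boundary."""
    return max(range(1, len(seq)), key=lambda i: js_divergence(seq, i))

# toy chimeric sequence: an AT-rich block fused to a GC-rich block
chimera = "ATATTAATAT" * 5 + "GCGGCGCCGC" * 5
boundary = best_split(chimera)
```

    On this toy chimera the divergence is maximized exactly at the fusion point; recursive application of `best_split` to each half, with a significance stopping rule, yields the full segmentation.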

  6. Balloon-borne three-meter telescope for far-infrared and submillimeter astronomy

    NASA Technical Reports Server (NTRS)

    Fazio, G. G.

    1985-01-01

    Presented are scientific objectives, engineering analysis and design, and results of technology development for a Three-Meter Balloon-Borne Far-Infrared and Submillimeter Telescope. The scientific rationale is based on two crucial instrumental capabilities: high angular resolution which approaches eight arcseconds at one hundred micron wavelength, and high resolving power spectroscopy with good sensitivity throughout the telescope's 30-micron to 1-mm wavelength range. The high angular resolution will allow us to resolve and study in detail such objects as collapsing protostellar condensations in our own galaxy, clusters of protostars in the Magellanic clouds, giant molecular clouds in nearby galaxies, and spiral arms in distant galaxies. The large aperture of the telescope will permit sensitive spectral line measurements of molecules, atoms, and ions, which can be used to probe the physical, chemical, and dynamical conditions in a wide variety of objects.

  7. Characteristic analysis of surface waves in a sensitive plasma absorption probe

    NASA Astrophysics Data System (ADS)

    You, Wei; Li, Hong; Tan, Mingsheng; Liu, Wandong

    2018-01-01

    With its simple construction and symmetric configuration, the sensitive plasma absorption probe (SPAP) is a dependable probe for industrial plasma diagnostics. The minimum in the characteristic curve of the reflection coefficient stems from surface-wave resonance in the plasma. We use numerical simulation to analyze the details of the excitation and propagation of these surface waves, resolving the electromagnetic field structure together with the resonance and propagation characteristics of the surface wave. For this SPAP structure, there are three different propagation paths for the plasma surface wave. The propagation characteristics along each path are presented, and the corresponding dispersion relations are calculated. The objective is to complete the theory of the SPAP and of the propagation of the plasma surface wave.

  8. Magnetic-field gradiometer based on ultracold collisions

    NASA Astrophysics Data System (ADS)

    Wasak, Tomasz; Jachymski, Krzysztof; Calarco, Tommaso; Negretti, Antonio

    2018-05-01

    We present a detailed analysis of the usefulness of ultracold atomic collisions for sensing the strength of an external magnetic field as well as its spatial gradient. The core idea of the sensor, which we recently proposed in Jachymski et al. [Phys. Rev. Lett. 120, 013401 (2018), 10.1103/PhysRevLett.120.013401], is to probe the transmission of the atoms through a set of quasi-one-dimensional waveguides that contain an impurity. Magnetic-field-dependent interactions between the incoming atoms and the impurity naturally lead to narrow resonances that can act as sensitive field probes since they strongly affect the transmission. We illustrate our findings with concrete examples of experimental relevance, demonstrating that for large atom fluences N, a sensitivity of the order of 1 nT/√N for the field strength and 100 nT/(mm·√N) for the gradient can be reached with our scheme.

  9. Factors influencing atmospheric composition over subarctic North America during summer

    NASA Technical Reports Server (NTRS)

    Wofsy, Steven C.; Fan, S. -M.; Blake, D. R.; Bradshaw, J. D.; Sandholm, S. T.; Singh, H. B.; Sachse, G. W.; Harriss, R. C.

    1994-01-01

    Elevated concentrations of hydrocarbons, CO, and nitrogen oxides were observed in extensive haze layers over northeastern Canada in the summer of 1990, during ABLE 3B. Halocarbon concentrations remained near background in most layers, indicating a source from biomass wildfires. Elevated concentrations of C2Cl4 provided a sensitive indicator for pollution from urban/industrial sources. Detailed analysis of regional budgets for CO and hydrocarbons indicates that biomass fires accounted for approximately 70% of the input to the subarctic for most hydrocarbons and for acetone, and more than 50% for CO. Regional sources for many species (including CO) exceeded chemical sinks during summer, and the boreal region provided a net source to midlatitudes. Interannual variations and long-term trends in atmospheric composition are sensitive to climatic change; a shift to warmer, drier conditions could increase the areas burned and thus the sources of many trace gases.

  10. Application of characteristic time concepts for hydraulic fracture configuration design, control, and optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Advani, S.H.; Lee, T.S.; Moon, H.

    1992-10-01

    The analysis of pertinent energy components or affiliated characteristic times for hydraulic stimulation processes serves as an effective tool for fracture configuration design, optimization, and control. This evaluation, in conjunction with parametric sensitivity studies, provides a rational basis for quantifying dominant process mechanisms and the roles of specified reservoir properties relative to controllable hydraulic fracture variables for a wide spectrum of treatment scenarios. Results are detailed for the following multi-task effort: (a) Application of characteristic time concepts and parametric sensitivity studies for specialized fracture geometries (rectangular, penny-shaped, elliptical) and three-layered elliptic crack models (in situ stress, elastic moduli, and fracture toughness contrasts). (b) Incorporation of leak-off effects for models investigated in (a). (c) Simulation of generalized hydraulic fracture models and investigation of the role of controllable variables and uncontrollable system properties. (d) Development of guidelines for hydraulic fracture design and optimization.

  11. Application of characteristic time concepts for hydraulic fracture configuration design, control, and optimization. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Advani, S.H.; Lee, T.S.; Moon, H.

    1992-10-01

    The analysis of pertinent energy components or affiliated characteristic times for hydraulic stimulation processes serves as an effective tool for fracture configuration design, optimization, and control. This evaluation, in conjunction with parametric sensitivity studies, provides a rational basis for quantifying dominant process mechanisms and the roles of specified reservoir properties relative to controllable hydraulic fracture variables for a wide spectrum of treatment scenarios. Results are detailed for the following multi-task effort: (a) Application of characteristic time concepts and parametric sensitivity studies for specialized fracture geometries (rectangular, penny-shaped, elliptical) and three-layered elliptic crack models (in situ stress, elastic moduli, and fracture toughness contrasts). (b) Incorporation of leak-off effects for models investigated in (a). (c) Simulation of generalized hydraulic fracture models and investigation of the role of controllable variables and uncontrollable system properties. (d) Development of guidelines for hydraulic fracture design and optimization.

  12. Effect of microwave exposure on the photo anode of DSSC sensitized with natural dye

    NASA Astrophysics Data System (ADS)

    Swathi, K. E.; Jinchu, I.; Sreelatha, K. S.; Sreekala, C. O.; Menon, Sreedevi K.

    2018-02-01

    Dye-sensitized solar cells (DSSCs), also referred to as dye-sensitized cells (DSCs) or Graetzel cells, are devices that convert solar energy into electricity by the photovoltaic effect. They belong to a class of advanced cells that mimics photosynthesis. DSSC fabrication is simple and can be done using readily available low-cost materials that are nontoxic and environmentally friendly, and the cells work even under low flux of sunlight. DSSCs exhibit good efficiencies of ~10-14%. This paper emphasizes the study of enhancing the efficiency of DSSCs by exposing the photoanode to microwave radiation. The effect of the duration of microwave exposure at 2.6 GHz on the energy-conversion efficiency of the solar cell is studied in detail. SEM analysis and dye-desorption studies of the photoanode confirm an increased solar-energy-conversion efficiency of the DSSC.

  13. Online immunoaffinity LC/MS/MS. A general method to increase sensitivity and specificity: How do you do it and what do you need?

    PubMed

    Dufield, Dawn R; Radabaugh, Melissa R

    2012-02-01

    There is an increased emphasis on hyphenated techniques such as immunoaffinity LC/MS/MS (IA-LC/MS/MS) or IA-LC/MRM. These techniques offer competitive advantages in sensitivity and selectivity over traditional LC/MS and are complementary to ligand-binding assays (LBAs) or ELISAs. However, they are not entirely straightforward, and there are several tips and tricks to routine sample analysis. We describe here our methods and procedures for performing online IA-LC/MS/MS, including a detailed protocol for the preparation of antibody (Ab) enrichment columns. We have included sample-trapping and Ab methods. Furthermore, we highlight tips, tricks, and minimal and optimal approaches. This technology has been shown to be viable for a range of applications, species, and fluids, from small molecules to proteins and from biomarkers to PK assays.

  14. Analysis of antigen-induced changes in pulmonary mechanics in sensitized inbred rats.

    PubMed

    Holroyde, M C; Smith, S Y; Holme, G

    1982-05-01

    An inbred line of rats was derived that develops marked and consistent dyspnea following sensitization and subsequent exposure to aerosolized antigen. This pulmonary response was investigated in detail by determining forced pulmonary mechanics to derive respiratory rate, peak expiratory flow rate (PEFR), forced vital capacity (FVC), forced expiratory volume in 0.1 s (FEV0.1), and maximal midexpiratory flow rate (MMFR). Challenging anesthetized rats for 5 min with an aerosol of 3% egg albumin produced minimal change in respiratory rate, a 20% fall in PEFR, a 50% fall in FVC, and a 30% decrease in FEV0.1 and MMFR. The response could be inhibited or reversed by salbutamol (0.5 mg/kg, i.v.) and aminophylline (25 mg/kg, i.v.) administered either before or after challenge. The pulmonary changes are consistent with antigen-induced asthma in the rats. The response shows similarities to human asthma and may provide a relevant experimental model.

  15. A rapid, naked-eye detection of hypochlorite and bisulfite using a robust and highly-photostable indicator dye Quinaldine Red in aqueous medium

    NASA Astrophysics Data System (ADS)

    Dutta, Tanoy; Chandra, Falguni; Koner, Apurba L.

    2018-02-01

    A "naked-eye" detection of the health-hazardous anions bisulfite (HSO3-) and hypochlorite (ClO-) using an indicator dye (Quinaldine Red, QR) over a wide range of pH is demonstrated. The molecule contains a quinoline moiety linked to an N,N-dimethylaniline moiety through a conjugated double bond. Treatment of QR with HSO3- and ClO- in aqueous solution at near-neutral pH resulted in a colorless product with high selectivity and sensitivity. The detection limit was 47.8 μM for HSO3- and 0.2 μM for ClO-. Detection of ClO- was 50 times more sensitive, with a roughly twofold faster response, compared to HSO3-. The detailed characterization and related analyses demonstrate the potential of QR as a rapid, robust, and highly efficient colorimetric sensor for the practical detection of hypochlorite in water samples.

  16. Towards shot-noise limited diffraction experiments with table-top femtosecond hard x-ray sources.

    PubMed

    Holtz, Marcel; Hauf, Christoph; Weisshaupt, Jannick; Salvador, Antonio-Andres Hernandez; Woerner, Michael; Elsaesser, Thomas

    2017-09-01

    Table-top laser-driven hard x-ray sources with kilohertz repetition rates are an attractive alternative to large-scale accelerator-based systems and have found widespread applications in x-ray studies of ultrafast structural dynamics. Hard x-ray pulses of 100 fs duration have been generated at the Cu Kα wavelength with a photon flux of up to 10^9 photons per pulse into the full solid angle, perfectly synchronized to the sub-100-fs optical pulses from the driving laser system. Based on spontaneous x-ray emission, such sources display a particular noise behavior which impacts the sensitivity of x-ray diffraction experiments. We present a detailed analysis of the photon statistics and temporal fluctuations of the x-ray flux, together with experimental strategies to optimize the sensitivity of optical pump/x-ray probe experiments. We demonstrate measurements close to the shot-noise limit of the x-ray source.
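    The shot-noise limit referred to here follows from Poisson counting statistics: for N detected photons per pulse the relative noise floor is 1/sqrt(N), and averaging M pulses lowers it by a further 1/sqrt(M). A Monte Carlo sketch with a hypothetical detected count (the paper's actual detection geometry is not reproduced here):

```python
import numpy as np

# Per-pulse photon statistics of a spontaneous-emission x-ray source:
# the detected count is Poisson distributed, so relative noise ~ 1/sqrt(N).
rng = np.random.default_rng(0)
n_pulses = 20_000
detected = 1.0e4                      # assumed mean photons per pulse at the detector

counts = rng.poisson(detected, size=n_pulses)
rel_noise = counts.std() / counts.mean()
shot_limit = 1.0 / np.sqrt(detected)  # ideal shot-noise floor (1% here)

# averaging blocks of 100 pulses reduces the relative noise roughly 10-fold
avg100 = counts.reshape(-1, 100).mean(axis=1)
rel_noise_avg = avg100.std() / avg100.mean()
```

    Excess source fluctuations would sit on top of this floor, which is why the temporal-fluctuation analysis in the paper matters for pump/probe sensitivity.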

  17. Tip-enhanced Raman scattering microscopy: Recent advance in tip production

    NASA Astrophysics Data System (ADS)

    Fujita, Yasuhiko; Walke, Peter; De Feyter, Steven; Uji-i, Hiroshi

    2016-08-01

    Tip-enhanced Raman scattering (TERS) microscopy is a technique that combines the chemical sensitivity of Raman spectroscopy with the resolving power of scanning probe microscopy. The key component of any TERS setup is a plasmonically-active noble metal tip, which serves to couple far-field incident radiation with the near-field. Thus, the design and implementation of reproducible probes are crucial for the continued development of TERS as a tool for nanoscopic analysis. Here we discuss conventional methods for the fabrication of TERS-ready tips, highlighting the problems therein, as well as detailing more recent developments to improve reproducibility. In addition, the idea of remote-excitation TERS is discussed, whereby TERS sensitivity is further improved by using propagating surface plasmons to separate the incident radiation from the tip apex, and we describe how this can be incorporated into the fabrication process.

  18. Modeling of Slot Waveguide Sensors Based on Polymeric Materials

    PubMed Central

    Bettotti, Paolo; Pitanti, Alessandro; Rigo, Eveline; De Leonardis, Francesco; Passaro, Vittorio M. N.; Pavesi, Lorenzo

    2011-01-01

    Slot waveguides are very promising for optical sensing applications because of their peculiar spatial mode profile. In this paper we have carried out a detailed analysis of mode confinement properties in slot waveguides realized in very low refractive index materials. We show that the sensitivity of a slot waveguide is not directly related to the refractive index contrast of the high and low index materials forming the waveguide. Thus, a careful design of the structures allows high-sensitivity devices to be realized even in very low refractive index materials (e.g., polymers). Advantages of low index dielectrics in terms of cost, functionalization and ease of fabrication are discussed while keeping both CMOS compatibility and integrable design schemes. Finally, applications of low index slot waveguides as a substitute for bulky fiber capillary sensors or in ring resonator architectures are addressed. Theoretical results of this work are relevant to well established polymer technologies. PMID:22164020

  19. Modeling nearshore morphological evolution at seasonal scale

    USGS Publications Warehouse

    Walstra, D.-J.R.; Ruggiero, P.; Lesser, G.; Gelfenbaum, G.

    2006-01-01

    A process-based model is compared with field measurements to test and improve our ability to predict nearshore morphological change at seasonal time scales. The field experiment, along the dissipative beaches adjacent to Grays Harbor, Washington USA, successfully captured the transition between the high-energy erosive conditions of winter and the low-energy beach-building conditions typical of summer. The experiment documented shoreline progradation on the order of 20 m and as much as 175 m of onshore bar migration. Significant alongshore variability was observed in the morphological response of the sandbars over a 4 km reach of coast. A detailed sensitivity analysis suggests that the model results are more sensitive to adjusting the sediment transport associated with asymmetric oscillatory wave motions than to adjusting the transport due to mean currents. Initial results suggest that alongshore variations in the initial bathymetry are partially responsible for the observed alongshore variable morphological response during the experiment. Copyright ASCE 2006.

  20. Towards shot-noise limited diffraction experiments with table-top femtosecond hard x-ray sources

    PubMed Central

    Holtz, Marcel; Hauf, Christoph; Weisshaupt, Jannick; Salvador, Antonio-Andres Hernandez; Woerner, Michael; Elsaesser, Thomas

    2017-01-01

    Table-top laser-driven hard x-ray sources with kilohertz repetition rates are an attractive alternative to large-scale accelerator-based systems and have found widespread applications in x-ray studies of ultrafast structural dynamics. Hard x-ray pulses of 100 fs duration have been generated at the Cu Kα wavelength with a photon flux of up to 10^9 photons per pulse into the full solid angle, perfectly synchronized to the sub-100-fs optical pulses from the driving laser system. Based on spontaneous x-ray emission, such sources display a particular noise behavior which impacts the sensitivity of x-ray diffraction experiments. We present a detailed analysis of the photon statistics and temporal fluctuations of the x-ray flux, together with experimental strategies to optimize the sensitivity of optical pump/x-ray probe experiments. We demonstrate measurements close to the shot-noise limit of the x-ray source. PMID:28795079

  1. Controllability of Free-piston Stirling Engine/linear Alternator Driving a Dynamic Load

    NASA Technical Reports Server (NTRS)

    Kankam, M. David; Rauch, Jeffrey S.

    1994-01-01

    This paper presents the dynamic behavior of a free-piston Stirling engine/linear alternator (FPSE/LA) driving a single-phase fractional-horsepower induction motor. The controllability and dynamic stability of the system are discussed in terms of the sensitivity to variations in system parameters, the engine controller, operating conditions, and mechanical loading on the induction motor. The approach used expands on a combined mechanical and thermodynamic formulation employed in a previous paper. The application of state-space techniques and frequency domain analysis enhances understanding of the dynamic interactions. Engine-alternator parametric sensitivity studies, similar to those of the previous paper, are summarized. Detailed discussions are provided for parametric variations which relate to the engine controller and system operating conditions. The results suggest that the controllability of a FPSE-based power system is enhanced by proper operating conditions and built-in controls.

  2. Growth and substrate consumption of Nitrobacter agilis cells immobilized in carrageenan: part 1. Dynamic modeling.

    PubMed

    de Gooijer, C D; Wijffels, R H; Tramper, J

    1991-07-01

    The modeling of the growth of Nitrobacter agilis cells immobilized in kappa-carrageenan is presented. A detailed description is given of the modeling of internal diffusion and growth of cells in the support matrix, in addition to external mass transfer resistance. The model predicts the substrate and biomass profiles in the support, as well as the macroscopic oxygen consumption rate of the immobilized biocatalyst over time. The model is tested by experiments with continuously operated airlift loop reactors containing cells immobilized in kappa-carrageenan, and it describes the experimental data very well. It is clearly shown that external mass transfer may not be neglected. Furthermore, a sensitivity analysis of the parameters at their values during the experiments revealed that, apart from the radius of the spheres and the substrate bulk concentration, the external mass transfer resistance coefficient is the most sensitive parameter for our case.
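    The coupling of internal diffusion and external film resistance described above can be sketched numerically. The following is a deliberately simplified steady-state, first-order-kinetics stand-in for the authors' full dynamic Monod-type model, with invented parameter values; it solves diffusion-reaction in a spherical bead with a film boundary condition and probes the sensitivity of the uptake rate to the external mass-transfer coefficient:

```python
import numpy as np

# Hypothetical parameters chosen only to illustrate the regime.
D = 2.0e-9    # effective O2 diffusivity in the gel, m^2/s
k = 0.05      # first-order consumption rate constant, 1/s
R = 1.0e-3    # bead radius, m
Cb = 0.25     # bulk O2 concentration, mol/m^3

def consumption_rate(kc, n=600):
    """Steady O2 uptake of one bead (mol/s) given film coefficient kc (m/s)."""
    r = np.linspace(0.0, R, n)
    h = r[1] - r[0]
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0], A[0, 1] = 1.0, -1.0            # symmetry at the centre: C'(0) = 0
    for i in range(1, n - 1):               # D*(C'' + 2C'/r) = k*C
        A[i, i - 1] = D * (1.0 / h**2 - 1.0 / (r[i] * h))
        A[i, i] = -2.0 * D / h**2 - k
        A[i, i + 1] = D * (1.0 / h**2 + 1.0 / (r[i] * h))
    A[-1, -1] = D / h + kc                  # film condition: D*C'(R) = kc*(Cb - C(R))
    A[-1, -2] = -D / h
    b[-1] = kc * Cb
    C = np.linalg.solve(A, b)
    return kc * (Cb - C[-1]) * 4.0 * np.pi * R**2

# elasticity of the uptake rate w.r.t. the external mass-transfer coefficient:
# a value well above zero means the film resistance cannot be neglected
kc = 1.0e-5
elasticity = (np.log(consumption_rate(1.01 * kc))
              - np.log(consumption_rate(kc))) / np.log(1.01)
```

    At these illustrative values the elasticity is appreciable, consistent with the paper's conclusion that external mass transfer may not be neglected.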

  3. Ultra-sensitive flow measurement in individual nanopores through pressure--driven particle translocation.

    PubMed

    Gadaleta, Alessandro; Biance, Anne-Laure; Siria, Alessandro; Bocquet, Lyderic

    2015-05-07

    A challenge for the development of nanofluidics is to develop new instrumentation tools able to probe the extremely small mass transport across individual nanochannels. Such tools are a prerequisite for the fundamental exploration of the breakdown of continuum transport in nanometric confinement. In this letter, we propose a novel method for measuring the hydrodynamic permeability of nanometric pores by diverting the classical technique of Coulter counting to characterize a pressure-driven flow across an individual nanopore. Both the analysis of the translocation rate and the detailed statistics of the dwell time of nanoparticles flowing across a single nanopore allow us to evaluate the permeability of the system. We reach a sensitivity for the water flow down to a few femtoliters per second, which is more than two orders of magnitude better than state-of-the-art alternative methods.
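    The translocation-rate part of the method reduces to simple arithmetic: if particles at number concentration c are carried through the pore by the flow, the event rate is r = c·Q, so Q = r/c. A toy estimate with invented numbers (not the paper's data), just to show the femtoliter-per-second scale:

```python
# Flow estimate from Coulter-style event counting: Q = event rate / concentration.
# All values below are illustrative placeholders.
events = 120                   # translocation events counted
duration_s = 600.0             # counting time, s
c_particles = 2.0e16           # particle number concentration, per m^3

rate = events / duration_s     # events per second
Q = rate / c_particles         # volumetric flow, m^3/s
Q_fL_per_s = Q * 1e18          # same flow in femtoliters per second
```

    The dwell-time statistics mentioned in the abstract provide an independent check on the same permeability.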

  4. Rydberg Dipole Antennas

    NASA Astrophysics Data System (ADS)

    Stack, Daniel; Rodenburg, Bradon; Pappas, Stephen; Su, Wangshen; St. John, Marc; Kunz, Paul; Simon, Matt; Gordon, Joshua; Holloway, Christopher

    2017-04-01

    Measurements of microwave frequency electric fields by traditional methods (i.e. engineered antennas) have limited sensitivity and can be difficult to calibrate properly. A useful tool to address this problem are highly-excited (Rydberg) neutral atoms which have very large electric-dipole moments and many dipole-allowed transitions in the range of 1-500 GHz. Using Rydberg states, it is possible to sensitively probe the electric field in this frequency range using the combination of two quantum interference phenomena: electromagnetically induced transparency and the Autler-Townes effect. This atom-light interaction can be modeled by the classical description of a harmonically bound electron. The classical damped, driven, coupled-oscillators model yields significant insights into the deep connections between classical and quantum physics. We will present a detailed experimental analysis of the noise processes in making such measurements in the laboratory and discuss the prospects for building a practical atomic microwave receiver.

  5. A highly sensitive and accurate gene expression analysis by sequencing ("bead-seq") for a single cell.

    PubMed

    Matsunaga, Hiroko; Goto, Mari; Arikawa, Koji; Shirai, Masataka; Tsunoda, Hiroyuki; Huang, Huan; Kambara, Hideki

    2015-02-15

    Analyses of gene expressions in single cells are important for understanding detailed biological phenomena. Here, a highly sensitive and accurate method by sequencing (called "bead-seq") to obtain a whole gene expression profile for a single cell is proposed. A key feature of the method is to use a complementary DNA (cDNA) library on magnetic beads, which enables adding washing steps to remove residual reagents in a sample preparation process. By adding the washing steps, the next steps can be carried out under the optimal conditions without losing cDNAs. Error sources were carefully evaluated to conclude that the first several steps were the key steps. It is demonstrated that bead-seq is superior to the conventional methods for single-cell gene expression analyses in terms of reproducibility, quantitative accuracy, and biases caused during sample preparation and sequencing processes. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. A framework for optimization and quantification of uncertainty and sensitivity for developing carbon capture systems

    DOE PAGES

    Eslick, John C.; Ng, Brenda; Gao, Qianwen; ...

    2014-12-31

    Under the auspices of the U.S. Department of Energy’s Carbon Capture Simulation Initiative (CCSI), a Framework for Optimization and Quantification of Uncertainty and Sensitivity (FOQUS) has been developed. This tool enables carbon capture systems to be rapidly synthesized and rigorously optimized, in an environment that accounts for and propagates uncertainties in parameters and models. FOQUS currently enables (1) the development of surrogate algebraic models utilizing the ALAMO algorithm, which can be used for superstructure optimization to identify optimal process configurations, (2) simulation-based optimization utilizing derivative free optimization (DFO) algorithms with detailed black-box process models, and (3) rigorous uncertainty quantification through PSUADE. FOQUS utilizes another CCSI technology, the Turbine Science Gateway, to manage the thousands of simulated runs necessary for optimization and UQ. Thus, this computational framework has been demonstrated for the design and analysis of a solid sorbent based carbon capture system.

  7. A methodology for optimal MSW management, with an application in the waste transportation of Attica Region, Greece

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Economopoulou, M.A.; Economopoulou, A.A.; Economopoulos, A.P., E-mail: eco@otenet.gr

    2013-11-15

    Highlights: • A two-step (strategic and detailed optimal planning) methodology is used for solving complex MSW management problems. • A software package is outlined, which can be used for generating detailed optimal plans. • Sensitivity analysis compares alternative scenarios that address objections and/or wishes of local communities. • A case study shows the application of the above procedure in practice and demonstrates the results and benefits obtained. - Abstract: The paper describes a software system capable of formulating alternative optimal Municipal Solid Wastes (MSWs) management plans, each of which meets a set of constraints that may reflect selected objections and/or wishes of local communities. The objective function to be minimized in each plan is the sum of the annualized capital investment and annual operating cost of all transportation, treatment and final disposal operations involved, taking into consideration the possible income from the sale of products and any other financial incentives or disincentives that may exist. For each plan formulated, the system generates several reports that define the plan, analyze its cost elements and yield an indicative profile of selected types of installations, as well as data files that facilitate the geographic representation of the optimal solution in maps through the use of GIS. A number of these reports compare the technical and economic data from all scenarios considered at the study area, municipality and installation level, constituting, in effect, a sensitivity analysis. The generation of alternative plans offers local authorities the opportunity of choice, and the results of the sensitivity analysis allow them to choose wisely and with consensus. The paper also presents an application of this software system in the capital Region of Attica in Greece, for the purpose of developing an optimal waste transportation system in line with its approved waste management plan.
    The formulated plan was able to: (a) serve 113 Municipalities and Communities that generate nearly 2 million t/y of commingled MSW with distinctly different waste collection patterns, (b) take into consideration several existing waste transfer stations (WTS) and optimize their use within the overall plan, (c) select the most appropriate sites among the potentially suitable (new and in use) ones, (d) generate the optimal profile of each WTS proposed, and (e) perform sensitivity analysis so as to define the impact of selected sets of constraints (limitations in the availability of sites and in the capacity of their installations) on the design and cost of the ensuing optimal waste transfer system. The results show that optimal planning offers significant economic savings to municipalities, while reducing at the same time the present levels of traffic, fuel consumption and air emissions in the congested Athens basin.
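    The core optimization the system performs, minimizing total haulage cost subject to generation and capacity constraints, can be illustrated as a miniature transportation linear program. The instance below (3 municipalities, 2 transfer stations, invented costs and capacities) is a toy sketch, not the Attica model, and the capacity-tightening rerun mimics the sensitivity-analysis idea:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 3-municipality / 2-transfer-station instance.
supply = np.array([400.0, 250.0, 350.0])     # t/day generated per municipality
capacity = np.array([600.0, 500.0])          # t/day capacity per transfer station
cost = np.array([[12.0, 20.0],               # haulage cost, EUR/t, municipality x WTS
                 [18.0, 9.0],
                 [25.0, 14.0]])

c = cost.ravel()                             # decision vars x[i][j], row-major
A_eq = np.zeros((3, 6))
for i in range(3):
    A_eq[i, 2 * i:2 * i + 2] = 1.0           # all of municipality i's waste is shipped
A_ub = np.zeros((2, 6))
for j in range(2):
    A_ub[j, j::2] = 1.0                      # station j's capacity is respected

res = linprog(c, A_ub=A_ub, b_ub=capacity, A_eq=A_eq, b_eq=supply, bounds=(0, None))

# sensitivity scenario: tighten station 2's capacity and re-optimize,
# exposing the cost impact of a site/capacity constraint
res_tight = linprog(c, A_ub=A_ub, b_ub=np.array([600.0, 400.0]),
                    A_eq=A_eq, b_eq=supply, bounds=(0, None))
```

    Comparing `res.fun` with `res_tight.fun` quantifies the cost of the constraint, the same question the paper's scenario comparisons answer at full scale.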

  8. A comparison of solute-transport solution techniques based on inverse modelling results

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2000-01-01

    Five common numerical techniques (finite difference, predictor-corrector, total-variation-diminishing, method-of-characteristics, and modified-method-of-characteristics) were tested using simulations of a controlled conservative tracer-test experiment through a heterogeneous, two-dimensional sand tank. The experimental facility was constructed using randomly distributed homogeneous blocks of five sand types. This experimental model provides an outstanding opportunity to compare the solution techniques because of the heterogeneous hydraulic conductivity distribution of known structure, and the availability of detailed measurements with which to compare simulated concentrations. The present work uses this opportunity to investigate how three common types of results (simulated breakthrough curves, sensitivity analysis, and calibrated parameter values) change in this heterogeneous situation, given the different methods of simulating solute transport. The results show that simulated peak concentrations, even at very fine grid spacings, varied because of different amounts of numerical dispersion. Sensitivity analysis results were robust in that they were independent of the solution technique. They revealed extreme correlation between hydraulic conductivity and porosity, and that the breakthrough curve data did not provide enough information about the dispersivities to estimate individual values for the five sands. However, estimated hydraulic conductivity values are significantly influenced by both the large possible variations in model dispersion and the amount of numerical dispersion present in the solution technique.
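    The numerical dispersion that differentiates these schemes is easy to demonstrate on a minimal case: advecting a tracer pulse with a first-order upwind finite-difference scheme smears the peak even though the exact solution is a pure translation. A self-contained sketch (parameters arbitrary, not from the study):

```python
import numpy as np

# 1-D advection of a Gaussian tracer pulse with the first-order upwind scheme.
# The exact solution only translates the pulse, so any loss of peak height
# is numerical dispersion.
nx, L, v, T = 200, 100.0, 1.0, 40.0
dx = L / nx
dt = 0.4 * dx / v                            # Courant number 0.4
x = (np.arange(nx) + 0.5) * dx
c = np.exp(-0.5 * ((x - 20.0) / 2.0) ** 2)   # initial pulse centered at x = 20
peak0 = c.max()

for _ in range(int(T / dt)):
    c[1:] -= v * dt / dx * (c[1:] - c[:-1])  # upwind update
    c[0] = 0.0                               # clean inflow boundary

# after T = 40 the exact peak would still equal peak0, centered at x = 60;
# the upwind result arrives at the right place but visibly flattened
```

    Higher-order or characteristics-based schemes reduce this smearing, which is exactly why the simulated peak concentrations in the study differed across techniques even at fine grids.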

  9. Quantifying the sensitivity of G. oxydans ATCC 621H and DSM 3504 to osmotic stress triggered by soluble buffers.

    PubMed

    Luchterhand, B; Fischöder, T; Grimm, A R; Wewetzer, S; Wunderlich, M; Schlepütz, T; Büchs, J

    2015-04-01

    In Gluconobacter oxydans cultivations on glucose, CaCO3 is typically used as a pH-buffer. This buffer, however, has disadvantages: suspended CaCO3 particles make the medium turbid, thereby obstructing analysis of microbial growth via optical density and scattered light. Upon searching for alternative soluble pH-buffers, bacterial growth and productivity were inhibited, most probably due to osmotic stress. Thus, this study investigates in detail the osmotic sensitivity of G. oxydans ATCC 621H and DSM 3504 using the Respiratory Activity MOnitoring System. The tested soluble pH-buffers and other salts attained osmolalities of 0.32-1.19 osmol kg⁻¹. This study shows that G. oxydans ATCC 621H and DSM 3504 respond quite sensitively to increased osmolality in comparison to other microbial strains of industrial interest. Osmolality values of >0.5 osmol kg⁻¹ should not be exceeded to avoid inhibition of growth and product formation. This osmolality threshold needs to be considered when working with soluble pH-buffers.

  10. Bone Composition Diagnostics: Photoacoustics Versus Ultrasound

    NASA Astrophysics Data System (ADS)

    Yang, Lifeng; Lashkari, Bahman; Mandelis, Andreas; Tan, Joel W. Y.

    2015-06-01

    Ultrasound (US) backscatter from bones depends on the mechanical properties and the microstructure of the interrogated bone. On the other hand, photoacoustics (PA) is sensitive to the optical properties of tissue and can detect composition variation. Therefore, PA can provide complementary information about bone health and integrity. In this work, a comparative study of US backscattering and PA back-propagating signals from animal trabecular bones was performed. Both methods were applied using a linear frequency modulation chirp and matched filtering. A 2.2 MHz ultrasonic transducer was employed to detect both signals. The use of the frequency domain facilitates spectral analysis. The variation of the signals shows that, in addition to sensitivity to mineral changes, PA exhibits sensitivity to changes in the organic part of the bone. It is, therefore, concluded that the combination of both modalities can provide more detailed, complementary information on bone health than either method separately. In addition, comparison of PA and US depthwise images shows the higher penetration of US. Surface scan images exhibit very weak correlation between US and PA, which could be caused by the different signal generation origins in mechanical versus optical properties, respectively.
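    The chirp-plus-matched-filtering step used for both modalities amounts to cross-correlating the received trace with the transmitted linear-frequency-modulation (LFM) waveform, compressing the long chirp into a sharp peak at the echo delay. A sketch with invented sample rate, sweep band, and delay (only the transducer's 2.2 MHz center frequency comes from the abstract):

```python
import numpy as np

# Pulse compression of an LFM chirp by matched filtering.
fs = 5.0e6                                   # sample rate, Hz (illustrative)
T = 4.0e-4                                   # chirp duration, s
t = np.arange(0.0, T, 1.0 / fs)
f0, f1 = 0.5e6, 2.0e6                        # sweep band, Hz
tx = np.sin(2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / T * t**2))

delay_s = 1.0e-4                             # simulated echo delay
rx = np.zeros(2 * t.size)
i0 = int(delay_s * fs)
rx[i0:i0 + t.size] += 0.1 * tx               # weak echo buried in noise
rx += np.random.default_rng(0).normal(0.0, 0.05, rx.size)

out = np.correlate(rx, tx, mode="valid")     # matched filter output
est_delay = out.argmax() / fs                # compressed peak marks the echo
```

    The time-bandwidth product of the chirp sets the processing gain, which is what makes the weak PA signal recoverable alongside the US backscatter.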

  11. What We Know About: Culturally Sensitive Instruction and Student Learning.

    ERIC Educational Resources Information Center

    Educational Research Service, Arlington, VA.

    Current research suggests that culture strongly influences students' learning patterns, communication styles, perceptions, and behavior. This research summary details ways that teachers can improve student learning by becoming aware of cultural differences and employing culturally sensitive instructional methods. Concentrating on how students…

  12. Profitability analysis of a femtosecond laser system for cataract surgery using a fuzzy logic approach

    PubMed Central

    Trigueros, José Antonio; Piñero, David P; Ismail, Mahmoud M

    2016-01-01

    AIM To define the financial and management conditions required to introduce a femtosecond laser system for cataract surgery in a clinic using a fuzzy logic approach. METHODS In the simulation performed in the current study, the costs associated with the acquisition and use of a commercially available femtosecond laser platform for cataract surgery (VICTUS, TECHNOLAS Perfect Vision GmbH, Bausch & Lomb, Munich, Germany) during a period of 5y were considered. A sensitivity analysis was performed considering such costs and the countable amortization of the system during this 5y period. Furthermore, a fuzzy logic analysis was used to obtain an estimation of the money income associated with each femtosecond laser-assisted cataract surgery (G). RESULTS According to the sensitivity analysis, the femtosecond laser system under evaluation can be profitable if 1400 cataract surgeries are performed per year and if each surgery can be invoiced at more than $500. In contrast, the fuzzy logic analysis indicated that the patient had to pay more per surgery, between $661.8 and $667.4, without considering the cost of the intraocular lens (IOL). CONCLUSION Profitability of femtosecond laser systems for cataract surgery can be achieved after a detailed financial analysis, especially in centers with large volumes of patients. The cost of surgery to patients should be adapted to the realistic flow of patients able to pay within a reasonable cost range. PMID:27500115
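    The deterministic half of such an analysis is break-even arithmetic: amortized capital plus annual running costs, spread over the annual caseload, plus per-case consumables. The figures below are invented placeholders (the paper's actual inputs are not in the abstract); the structure, not the numbers, is the point:

```python
# Back-of-the-envelope break-even fee per case. All figures are hypothetical.
capital = 500_000.0            # laser acquisition cost, USD
years = 5                      # straight-line amortization period
annual_maintenance = 40_000.0  # USD/year
per_case_consumables = 150.0   # USD per surgery

def breakeven_fee(cases_per_year):
    """Minimum fee per case that covers fixed and variable costs."""
    fixed = capital / years + annual_maintenance
    return fixed / cases_per_year + per_case_consumables

fee_1400 = breakeven_fee(1400)   # fee needed at 1400 surgeries/year
```

    The fuzzy-logic layer in the paper then replaces such point estimates with membership functions to express the uncertainty in caseload and fee.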

  13. INTEGRATING DETAILED SOIL SURVEY AND LANDTYPE MAPPING FOR WATERSHED SCALE ASSESSMENTS IN THE WESTERN OREGON CASCADE MOUNTAINS

    EPA Science Inventory

    Although the Western Oregon Cascades is one of the most intensely managed and economically important forest regions in North America, a lack of detailed soil information has hindered watershed-scale assessments of forest productivity, water supply, sensitive wildlife species, and...

  14. Sensitivity of subject-specific models to Hill muscle-tendon model parameters in simulations of gait.

    PubMed

    Carbone, V; van der Krogt, M M; Koopman, H F J M; Verdonschot, N

    2016-06-14

    Subject-specific musculoskeletal (MS) models of the lower extremity are essential for applications such as predicting the effects of orthopedic surgery. We performed an extensive sensitivity analysis to assess the effects of potential errors in Hill muscle-tendon (MT) model parameters for each of the 56 MT parts contained in a state-of-the-art MS model. We used two metrics, namely a Local Sensitivity Index (LSI) and an Overall Sensitivity Index (OSI), to distinguish the effect of the perturbation on the predicted force produced by the perturbed MT parts and by all the remaining MT parts, respectively, during a simulated gait cycle. Results indicated that sensitivity of the model depended on the specific role of each MT part during gait, and not merely on its size and length. Tendon slack length was the most sensitive parameter, followed by maximal isometric muscle force and optimal muscle fiber length, while nominal pennation angle showed very low sensitivity. The highest sensitivity values were found for the MT parts that act as prime movers of gait (Soleus: average OSI=5.27%, Rectus Femoris: average OSI=4.47%, Gastrocnemius: average OSI=3.77%, Vastus Lateralis: average OSI=1.36%, Biceps Femoris Caput Longum: average OSI=1.06%) and hip stabilizers (Gluteus Medius: average OSI=3.10%, Obturator Internus: average OSI=1.96%, Gluteus Minimus: average OSI=1.40%, Piriformis: average OSI=0.98%), followed by the Peroneal muscles (average OSI=2.20%) and Tibialis Anterior (average OSI=1.78%) some of which were not included in previous sensitivity studies. Finally, the proposed priority list provides quantitative information to indicate which MT parts and which MT parameters should be estimated most accurately to create detailed and reliable subject-specific MS models. Copyright © 2016 Elsevier Ltd. All rights reserved.
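    The abstract does not give the LSI/OSI formulae, but the idea of separating a perturbation's effect on the perturbed muscle-tendon part from its effect on all the others can be illustrated with a simple pair of indices: mean absolute force change as a percentage of nominal force, locally and overall. This is an illustrative construction, not necessarily the paper's definition:

```python
import numpy as np

# Illustrative local/overall sensitivity indices over a simulated gait cycle.
def local_sensitivity(F_nom, F_pert, idx):
    """Percent force change in the perturbed MT part itself."""
    return 100.0 * np.abs(F_pert[idx] - F_nom[idx]).mean() / np.abs(F_nom[idx]).mean()

def overall_sensitivity(F_nom, F_pert, idx):
    """Percent force change averaged over all remaining MT parts."""
    others = np.ones(F_nom.shape[0], dtype=bool)
    others[idx] = False
    return 100.0 * np.abs(F_pert[others] - F_nom[others]).mean() / np.abs(F_nom[others]).mean()

rng = np.random.default_rng(1)
F_nom = rng.uniform(50.0, 500.0, size=(5, 100))           # 5 MT parts x 100 gait samples
F_pert = F_nom.copy()
F_pert[0] *= 1.10                                          # 10% change in the perturbed part
F_pert[1:] *= 1.0 + rng.normal(0.0, 0.02, size=(4, 100))   # small ripple elsewhere

lsi = local_sensitivity(F_nom, F_pert, 0)    # 10% by construction
osi = overall_sensitivity(F_nom, F_pert, 0)  # a few percent, from the ripple
```

    In the actual study, the perturbation is applied to an MT parameter (e.g. tendon slack length) and the forces come from re-running the gait simulation; the indices then rank which parameters must be personalized most carefully.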

  15. LC-MS-based characterization of the peptide reactivity of chemicals to improve the in vitro prediction of the skin sensitization potential.

    PubMed

    Natsch, Andreas; Gfeller, Hans

    2008-12-01

    A key step in the skin sensitization process is the formation of a covalent adduct between skin sensitizers and endogenous proteins and/or peptides in the skin. Based on this mechanistic understanding, there is a renewed interest in in vitro assays to determine the reactivity of chemicals toward peptides in order to predict their sensitization potential. A standardized peptide reactivity assay yielded a promising predictivity. This published assay is based on high-performance liquid chromatography with ultraviolet detection to quantify peptide depletion after incubation with test chemicals. We had observed that peptide depletion may be due to either adduct formation or peptide oxidation. Here we report a modified assay based on both liquid chromatography-mass spectrometry (LC-MS) analysis and detection of free thiol groups. This approach allows simultaneous determination of (1) peptide depletion, (2) peptide oxidation (dimerization), (3) adduct formation, and (4) thiol reactivity and thus generates a more detailed characterization of the reactivity of a molecule. Highly reactive molecules are further discriminated with a kinetic measure. The assay was validated on 80 chemicals. Peptide depletion could accurately be quantified both with LC-MS detection and depletion of thiol groups. The majority of the moderate/strong/extreme sensitizers formed detectable peptide adducts, but many sensitizers were also able to catalyze peptide oxidation. Whereas adduct formation was only observed for sensitizers, this oxidation reaction was also observed for two nonsensitizing fragrance aldehydes, indicating that peptide depletion might not always be regarded as sufficient evidence for rating a chemical as a sensitizer. Thus, this modified assay gives a more informed view of the peptide reactivity of chemicals to better predict their sensitization potential.
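    The assay's primary readout reduces to simple arithmetic on chromatographic peak areas: percent peptide depletion relative to a control, and, for the kinetic measure mentioned above, a pseudo-first-order rate constant from the remaining fraction. A sketch with invented numbers:

```python
import math

# Peptide-depletion arithmetic; areas and incubation time are illustrative.
def percent_depletion(area_control, area_sample):
    return 100.0 * (area_control - area_sample) / area_control

def pseudo_first_order_k(area_control, area_sample, hours):
    # assumes the test chemical is in large excess over the peptide
    return -math.log(area_sample / area_control) / hours

dep = percent_depletion(1.00e6, 0.35e6)          # 65% of the peptide consumed
k = pseudo_first_order_k(1.00e6, 0.35e6, 24.0)   # rate constant per hour
```

    The paper's contribution is that the same depletion number can hide two mechanisms, adduct formation versus oxidation, which the LC-MS and thiol readouts then disentangle.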

  16. Bidirectional Retroviral Integration Site PCR Methodology and Quantitative Data Analysis Workflow.

    PubMed

    Suryawanshi, Gajendra W; Xu, Song; Xie, Yiming; Chou, Tom; Kim, Namshin; Chen, Irvin S Y; Kim, Sanggu

    2017-06-14

    Integration Site (IS) assays are a critical component of the study of retroviral integration sites and their biological significance. In recent retroviral gene therapy studies, IS assays, in combination with next-generation sequencing, have been used as a cell-tracking tool to characterize clonal stem cell populations sharing the same IS. For the accurate comparison of repopulating stem cell clones within and across different samples, the detection sensitivity, data reproducibility, and high-throughput capacity of the assay are among the most important assay qualities. This work provides a detailed protocol and data analysis workflow for bidirectional IS analysis. The bidirectional assay can simultaneously sequence both upstream and downstream vector-host junctions. Compared to conventional unidirectional IS sequencing approaches, the bidirectional approach significantly improves IS detection rates and the characterization of integration events at both ends of the target DNA. The data analysis pipeline described here accurately identifies and enumerates identical IS sequences through multiple steps of comparison that map IS sequences onto the reference genome and determine sequencing errors. Using an optimized assay procedure, we have recently published the detailed repopulation patterns of thousands of Hematopoietic Stem Cell (HSC) clones following transplant in rhesus macaques, demonstrating for the first time the precise time point of HSC repopulation and the functional heterogeneity of HSCs in the primate system. The following protocol describes the step-by-step experimental procedure and data analysis workflow that accurately identifies and quantifies identical IS sequences.
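    The counting step at the heart of clone quantification can be illustrated in miniature: collapse mapped integration sites whose genomic positions differ by only a few base pairs (a crude stand-in for the pipeline's multi-step error correction) and count reads per site. The data and window size below are invented:

```python
from collections import Counter

# Toy clone quantification from mapped IS reads: (chromosome, position, strand).
reads = [("chr3", 1000, "+"), ("chr3", 1001, "+"), ("chr3", 1000, "+"),
         ("chr7", 500, "-"), ("chr7", 500, "-")]

def collapse(reads, window=2):
    """Merge sites within `window` bp on the same chromosome/strand; count reads."""
    sites = []                 # representative (chrom, pos, strand) per clone
    counts = Counter()
    for chrom, pos, strand in sorted(reads):
        for rep in sites:
            if rep[0] == chrom and rep[2] == strand and abs(rep[1] - pos) <= window:
                counts[rep] += 1
                break
        else:
            sites.append((chrom, pos, strand))
            counts[(chrom, pos, strand)] += 1
    return counts

clone_counts = collapse(reads)   # two clones remain: 3 reads and 2 reads
```

    The real pipeline additionally maps junction sequences to the reference genome and reconciles the upstream and downstream reads of the bidirectional assay before counting.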

  17. Label-free and highly sensitive optical imaging of detailed microcirculation within meninges and cortex in mice with the cranium left intact.

    PubMed

    Jia, Yali; An, Lin; Wang, Ruikang K

    2010-01-01

    We demonstrate for the first time that the detailed blood flow distribution within intracranial dura mater and cortex can be visualized by an ultrahigh sensitive optical microangiography (UHS-OMAG). The study uses an UHS-OMAG system operating at 1310 nm with an imaging speed of 150 frames per second that requires approximately 10 s to complete one 3-D scan of approximately 2.5 × 2.5 mm². The system is sensitive to blood flow with a velocity ranging from approximately 4 μm/s to approximately 23 mm/s. We show superior performance of UHS-OMAG in providing functional images of capillary-level microcirculation within meninges in mice with the cranium left intact, the results of which correlate well with the standard dural histopathology.

  18. Label-free and highly sensitive optical imaging of detailed microcirculation within meninges and cortex in mice with the cranium left intact

    PubMed Central

    Jia, Yali; An, Lin; Wang, Ruikang K.

    2010-01-01

    We demonstrate for the first time that the detailed blood flow distribution within intracranial dura mater and cortex can be visualized by an ultrahigh sensitive optical microangiography (UHS-OMAG). The study uses an UHS-OMAG system operating at 1310 nm with an imaging speed at 150 frames per second that requires ∼10 s to complete one 3-D scan of ∼2.5×2.5 mm². The system is sensitive to blood flow with a velocity ranging from ∼4 μm∕s to ∼23 mm∕s. We show superior performance of UHS-OMAG in providing functional images of capillary level microcirculation within meninges in mice with the cranium left intact, the results of which correlate well with the standard dural histopathology. PMID:20614993

  19. Validation and Sensitivity Analysis of a New Atmosphere-Soil-Vegetation Model.

    NASA Astrophysics Data System (ADS)

    Nagai, Haruyasu

    2002-02-01

    This paper describes the details, validation, and sensitivity analysis of a new atmosphere-soil-vegetation model. The model consists of one-dimensional multilayer submodels for atmosphere, soil, and vegetation, and radiation schemes for the transmission of solar and longwave radiation in the canopy. The atmosphere submodel solves prognostic equations for horizontal wind components, potential temperature, specific humidity, fog water, and turbulence statistics by using a second-order closure model. The soil submodel calculates the transport of heat, liquid water, and water vapor. The vegetation submodel evaluates the heat and water budget on the leaf surface and the downward liquid water flux. The model performance was tested using measured data from the Cooperative Atmosphere-Surface Exchange Study (CASES). Calculated ground surface fluxes were mainly compared with observations at a winter wheat field, with respect to both the diurnal variation and the change over the 32 days of the first CASES field program in 1997 (CASES-97). The measured surface fluxes did not satisfy the energy balance, so sensible and latent heat fluxes obtained by the eddy correlation method were corrected. By using options of the solar radiation scheme that address the effect of the direct solar radiation component, the calculated albedo agreed well with the observations. Sensitivity analyses were also performed for the model settings. Overall, model calculations of surface fluxes and surface temperature were in good agreement with measurements.
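    The abstract notes that the eddy-correlation fluxes did not close the energy balance and were therefore corrected. A common correction (whether this exact method was used in CASES-97 is an assumption of this sketch) preserves the measured Bowen ratio while forcing the turbulent fluxes to sum to the available energy:

```python
def close_energy_balance(H, LE, Rn, G):
    """Bowen-ratio-preserving energy-balance closure correction (a widely
    used approach; the study's exact correction is not specified here).
    Scales sensible (H) and latent (LE) heat flux so that H + LE = Rn - G
    while keeping the Bowen ratio B = H / LE fixed. Fluxes in W m^-2:
    Rn = net radiation, G = soil heat flux."""
    available = Rn - G
    bowen = H / LE
    LE_corr = available / (1.0 + bowen)
    H_corr = available - LE_corr
    return H_corr, LE_corr
```

    For example, measured H = 100 and LE = 200 W m⁻² against Rn − G = 420 W m⁻² would be scaled to 140 and 280 W m⁻², closing the balance without changing their ratio.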

  20. Picogram determination of N-nitrosodimethylamine in water.

    PubMed

    Hu, Ruikang; Zhang, Lifeng; Yang, Zhaoguang

    2008-01-01

    N-nitrosodimethylamine (NDMA) persistence within surface waters is a major concern for downstream communities exploiting these waters as drinking water supplies. The objective of this study is to develop a novel and efficient analytical method for NDMA via different technologies: pulsed splitless gas chromatography-nitrogen phosphorus detector (GC-NPD), large volume injection (LVI) gas chromatography-mass spectrometry (GC/MS) via programmable temperature vaporizer (PTV) inlet or PTV-gas chromatography-triple quadruple mass spectrometry (GC-MS/MS) and continuous liquid-liquid extraction. It was found that the sensitivity required for NDMA analysis by GC-NPD can only be achieved when the NPD bead is extremely clean. LVI via PTV can greatly improve GC-MS system sensitivity for analyzing NDMA. With the help of DB-624 (25 m x 200 microm x 1.12 microm) connected with DB-5MS (30 m x 250 microm x 0.25 microm) in series, PTV-GC/MS could overcome the matrix interference for the trace analysis of NDMA. Variable instrument conditions were studied in detail, with the optimized process being validated via precision and accuracy studies. PTV- triple quadruple GC-MS/MS system could efficiently remove the interference on a single DB-5MS (30 m x 250 microm x 0.25 microm) column with good sensitivity and selectivity. The developed methods have been successfully applied to test NDMA in different types of water samples with satisfactory results. (c) IWA Publishing 2008.

  1. TASI Lectures on Cosmological Observables and String Theory

    NASA Astrophysics Data System (ADS)

    Silverstein, Eva

    These lectures provide an updated pedagogical treatment of the theoretical structure and phenomenology of some basic mechanisms for inflation, along with an overview of the structure of cosmological uplifts of holographic duality. A full treatment of the problem requires `ultraviolet completion' because of the sensitivity of inflation to quantum gravity effects, including back reaction and non-adiabatic production of heavy degrees of freedom. Cosmological observations imply accelerated expansion of the late universe, and provide increasingly precise constraints on, and discovery potential for, the amplitude and shape of primordial tensor and scalar perturbations and some of their correlation functions. Most backgrounds of string theory have positive potential energy, with a rich but still highly constrained landscape of solutions. The theory contains novel mechanisms for inflation, some subject to significant observational tests, with highly UV-sensitive tensor mode measurements being a prime example along with certain shapes of primordial correlation functions. Although the detailed ultraviolet completion is not accessible experimentally, some of these mechanisms directly stimulate a more systematic analysis of the space of low energy theories and signatures relevant for the analysis of data, which is sensitive to physics orders of magnitude above the energy scale of inflation as a result of long time evolution (dangerous irrelevance) and the substantial amount of data (allowing constraints on quantities with appreciable signal-to-noise). Portions of these lectures appeared previously in Les Houches 2013, "Post-Planck Cosmology".

  2. Interfacial charge separation and photovoltaic efficiency in Fe(ii)-carbene sensitized solar cells.

    PubMed

    Pastore, Mariachiara; Duchanois, Thibaut; Liu, Li; Monari, Antonio; Assfeld, Xavier; Haacke, Stefan; Gros, Philippe C

    2016-10-12

    The first combined theoretical and photovoltaic characterization of both homoleptic and heteroleptic Fe(ii)-carbene sensitized photoanodes in working dye sensitized solar cells (DSSCs) has been performed. Three new heteroleptic Fe(ii)-NHC dye sensitizers have been synthesized, characterized and tested. Despite an improved interfacial charge separation in comparison to the homoleptic compounds, the heteroleptic complexes did not show boosted photovoltaic performances. The ab initio quantitative analysis of the interfacial electron and hole transfers and the measured photovoltaic data clearly evidenced fast recombination reactions for heteroleptics, even associated with an unfavorable directional electron flow, and hence slower injection rates, in the case of homoleptics. Notably, quantum mechanics calculations revealed that deprotonation of the non-anchored carboxylic function in the homoleptic complex can effectively accelerate the electron injection rate and completely suppress the electron recombination to the oxidized dye. This result suggests that introducing strong electron-donating substituents on the non-anchored carbene ligand in heteroleptic complexes, so as to mimic the electronic effects of the carboxylate functionality, should yield markedly improved interfacial charge generation properties. The present results, providing for the first time a detailed understanding of the interfacial electron transfers and a photovoltaic characterization of Fe(ii)-carbene sensitized solar cells, open the way to rational molecular engineering of efficient iron-based dyes for photoelectrochemical applications.

  3. Sensitivity analysis of the Aquacrop and SAFYE crop models for the assessment of water limited winter wheat yield in regional scale applications.

    PubMed

    Silvestro, Paolo Cosmo; Pignatti, Stefano; Yang, Hao; Yang, Guijun; Pascucci, Simone; Castaldi, Fabio; Casa, Raffaele

    2017-01-01

    Process-based models can be usefully employed for the assessment of the field- and regional-scale impact of drought on crop yields. However, in many instances, especially when they are used at the regional scale, it is necessary to identify the parameters and input variables that most influence the outputs and to assess how their influence varies when climatic and environmental conditions change. In this work, two different crop models able to represent yield response to water, Aquacrop and SAFYE, were compared, with the aim of quantifying their complexity and plasticity through Global Sensitivity Analysis (GSA), using the Morris and EFAST (Extended Fourier Amplitude Sensitivity Test) techniques, for moderately to strongly water-limited climate scenarios. Although the rankings of the sensitivity indices were influenced by the scenarios used, the correlation among the rankings, assessed by the top-down correlation coefficient (TDCC) and higher for SAFYE than for Aquacrop, revealed clear patterns. Parameters and input variables related to phenology and to water-stress physiological processes were found to be the most influential for Aquacrop. For SAFYE, it was found that water stress could be inferred indirectly from the processes regulating leaf growth described in the original SAFY model. SAFYE has a lower complexity and plasticity than Aquacrop, making it more suitable for less data-demanding regional-scale applications when the only objective is the assessment of crop yield and no detailed information is sought on the mechanisms of the stress factors limiting it.
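    The top-down correlation coefficient used above to compare sensitivity rankings across scenarios is the Pearson correlation of Savage scores, which weights agreement among the top-ranked (most influential) parameters most heavily. A minimal sketch (parameter names are illustrative):

```python
def savage_scores(n):
    # Savage score for rank i (1 = most influential): S_i = sum_{j=i}^{n} 1/j
    return [sum(1.0 / j for j in range(i, n + 1)) for i in range(1, n + 1)]

def tdcc(ranking_a, ranking_b):
    """Top-down correlation coefficient between two rankings of the same
    parameter set. Each ranking lists parameters from most to least
    influential. Disagreements near the top of the rankings are penalized
    more than disagreements near the bottom."""
    n = len(ranking_a)
    s = savage_scores(n)
    score_a = {p: s[i] for i, p in enumerate(ranking_a)}
    score_b = {p: s[i] for i, p in enumerate(ranking_b)}
    num = sum(score_a[p] * score_b[p] for p in ranking_a) - n
    den = sum(v * v for v in s) - n
    return num / den
```

    Identical rankings give a TDCC of 1, and swapping two low-ranked parameters costs much less than swapping the two most influential ones.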

  4. Sensitization to sunflower pollen and lung functions in sunflower processing workers.

    PubMed

    Atis, S; Tutluoglu, B; Sahin, K; Yaman, M; Küçükusta, A R; Oktay, I

    2002-01-01

    This study aimed to investigate whether exposure to sunflower pollen (Helianthus annuus) increases both sensitization and respiratory symptoms, and whether or not it affects lung functions in sunflower processing workers. The largest sunflower processing factories in the Thrace region of Turkey participated in this study. Workers from the units directly exposed to sunflower seed enrolled as the study group (n = 102) and workers who were not directly exposed to Helianthus annuus pollen (n = 102) were the control group. Detailed questionnaires covering respiratory and allergic symptoms were completed, and skin prick tests and lung function tests were performed. We found a very high rate (23.5%) of sensitization to Helianthus annuus in the study group compared to the controls (P<0.001). Logistic regression analysis showed that the risk of sensitization to H. annuus was increased 4.7-fold (odds ratio = 4.17, 95% confidence interval = 1.3-16.7) if subjects were exposed to sunflower pollen in the workplace. While asthmatic symptoms and allergic skin diseases were not different between the two groups, workers in the study group had a higher rate of allergic rhinitis and conjunctivitis (P<0.05). We found that pulmonary function was significantly impaired in the study group (P<0.01). Using a multivariate analysis model, inclusion in the study group was found to be a predictive factor for impairment of lung function (P=0.002). We conclude that sunflower pollen has high allergenic potential, especially when there is close contact, and exposure to sunflower pollen in the workplace can result in impairment in lung function.

  5. New pediatric vision screener, part II: electronics, software, signal processing and validation.

    PubMed

    Gramatikov, Boris I; Irsch, Kristina; Wu, Yi-Kai; Guyton, David L

    2016-02-04

    We have developed an improved pediatric vision screener (PVS) that can reliably detect central fixation, eye alignment and focus. The instrument identifies risk factors for amblyopia, namely eye misalignment and defocus. The device uses the birefringence of the human fovea (the most sensitive part of the retina). The optics have been reported in more detail previously. The present article focuses on the electronics and the analysis algorithms used. The objective of this study was to optimize the analog design, data acquisition, noise suppression techniques, classification algorithms and decision-making thresholds, as well as to validate the performance of the research instrument on an initial group of young test subjects: 18 patients with known vision abnormalities (eight male and 10 female), ages 4-25 (only one above 18), and 19 controls with a proven lack of vision issues. Four statistical methods were used to derive decision-making thresholds that would best separate patients with abnormalities from controls. Sensitivity and specificity were calculated for each method, and the most suitable one was selected. Both the central fixation and the focus detection criteria worked robustly and allowed reliable separation between normal test subjects and symptomatic subjects. The sensitivity of the instrument was 100% for both central fixation and focus detection. The specificity was 100% for central fixation and 89.5% for focus detection. The overall sensitivity was 100% and the overall specificity was 94.7%. Despite the relatively small initial sample size, we believe that the PVS instrument design, the analysis methods employed, and the device as a whole will prove valuable for mass screening of children.
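    The reported figures follow directly from the confusion counts of the pass/refer decision. For instance, a focus-detection specificity of 89.5% with 19 controls is consistent with 17 of 19 controls correctly passed — an inference from the reported percentages, not counts stated in the abstract:

```python
def screening_metrics(true_positives, false_negatives, true_negatives, false_positives):
    """Sensitivity and specificity of a binary screening criterion.
    Sensitivity = TP / (TP + FN): fraction of affected subjects flagged.
    Specificity = TN / (TN + FP): fraction of normal subjects passed."""
    sensitivity = true_positives / (true_positives + false_negatives)
    specificity = true_negatives / (true_negatives + false_positives)
    return sensitivity, specificity

# Hypothetical counts consistent with the reported focus-detection results:
# all 18 patients flagged, 17 of 19 controls passed.
sens, spec = screening_metrics(18, 0, 17, 2)
```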

  6. Sensitivity analysis of the Aquacrop and SAFYE crop models for the assessment of water limited winter wheat yield in regional scale applications

    PubMed Central

    Pignatti, Stefano; Yang, Hao; Yang, Guijun; Pascucci, Simone; Castaldi, Fabio

    2017-01-01

    Process-based models can be usefully employed for the assessment of the field- and regional-scale impact of drought on crop yields. However, in many instances, especially when they are used at the regional scale, it is necessary to identify the parameters and input variables that most influence the outputs and to assess how their influence varies when climatic and environmental conditions change. In this work, two different crop models able to represent yield response to water, Aquacrop and SAFYE, were compared, with the aim of quantifying their complexity and plasticity through Global Sensitivity Analysis (GSA), using the Morris and EFAST (Extended Fourier Amplitude Sensitivity Test) techniques, for moderately to strongly water-limited climate scenarios. Although the rankings of the sensitivity indices were influenced by the scenarios used, the correlation among the rankings, assessed by the top-down correlation coefficient (TDCC) and higher for SAFYE than for Aquacrop, revealed clear patterns. Parameters and input variables related to phenology and to water-stress physiological processes were found to be the most influential for Aquacrop. For SAFYE, it was found that water stress could be inferred indirectly from the processes regulating leaf growth described in the original SAFY model. SAFYE has a lower complexity and plasticity than Aquacrop, making it more suitable for less data-demanding regional-scale applications when the only objective is the assessment of crop yield and no detailed information is sought on the mechanisms of the stress factors limiting it. PMID:29107963

  7. Metastability and instability of organic crystalline substances.

    PubMed

    Randzio, Stanislaw L; Kutner, Andrzej

    2008-02-07

    We describe the discovery of an unexpected and thermodynamically paradoxical transition from a crystalline state to a dense amorphous glassy state, induced in pure organic substances by direct absorption of a quantity of heat under atmospheric pressure, and its detailed analysis performed with a sensitive scanning transitiometer. The results provide the first precise experimental evidence for understanding the mechanism of this structural instability of crystalline substances, in the form of a crystalline-to-amorphous (c-a) transition. The observed c-a transition is a purely physical phenomenon occurring between two nonequilibrium states, a metastable crystalline phase and a dense glass, via a local transient phenomenon of virtual melting. The metastable state of a crystalline substance can be caused by a number of crystalline imperfections created either during crystallization or by external actions. By sensitively measuring extremely small energetic effects, we found the method helpful for quantitative determination of the critical number of imperfections that make a crystalline solid metastable, and for indicating under which conditions such a metastable crystalline form becomes unstable. By performing the transitiometric analysis of c-a transitions on two polymorphs of rosiglitazone maleate, we demonstrated the extent to which this analysis is important in investigating the stability of crystalline components of drugs.

  8. Nuclear Thermal Propulsion Mars Mission Systems Analysis and Requirements Definition

    NASA Technical Reports Server (NTRS)

    Mulqueen, Jack; Chiroux, Robert C.; Thomas, Dan; Crane, Tracie

    2007-01-01

    This paper describes the Mars transportation vehicle design concepts developed by the Marshall Space Flight Center (MSFC) Advanced Concepts Office. These vehicle design concepts provide an indication of the most demanding and least demanding potential requirements for nuclear thermal propulsion (NTP) systems for human Mars exploration missions from years 2025 to 2035. Vehicle concept options vary from large "all-up" vehicle configurations that would transport all of the elements for a Mars mission on one vehicle, to "split" mission vehicle configurations that would consist of separate smaller vehicles transporting cargo elements and human crew elements to Mars separately. Parametric trades and sensitivity studies show NTP stage and engine design options that provide the best balanced set of metrics based on safety, reliability, performance, cost and mission objectives. Trade studies include the sensitivity of vehicle performance to nuclear engine characteristics such as thrust, specific impulse and nuclear reactor type. The associated system requirements are aligned with the NASA Exploration Systems Mission Directorate (ESMD) reference Mars mission as described in the Exploration Systems Architecture Study (ESAS) report. The focused trade studies include a detailed analysis of nuclear engine radiation shield requirements for human missions and analysis of nuclear thermal engine design options for the ESAS reference mission.

  9. Measurement of the Electroweak Single Top Quark Production Cross Section and the CKM Matrix Element $$|V_{tb}|$$ at CDF Run II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larana, Bruno Casal

    2010-01-01

    The establishment of electroweak single top quark production at CDF is experimentally challenging. The small single top signal hidden under large, uncertain background processes necessitates an excellent understanding of the detector and a detailed study of the processes involved. Moreover, simple counting experiments are not sufficient to extract enough information from the candidate event sample, and multivariate analysis techniques are crucial to distinguish signal from background. This thesis presents the world's most sensitive individual search, together with CDF's Neural Network analysis, for combined s- and t-channel single top production. This analysis uses a dataset that corresponds to an integrated luminosity of 3.2 fb⁻¹ and is based on a Boosted Decision Tree method that combines information from several input variables to construct a final powerful discriminant, reaching a sensitivity to combined single top quark production equivalent to 5.2σ. The measured combined single top quark production cross section is 2.1 +0.7/-0.6 pb, assuming a top quark mass of 175 GeV/c². The probability that this result comes from a background-only fluctuation (p-value) is 0.0002, which corresponds to 3.5σ.

  10. Sensitivity analysis of heliostat aiming strategies and receiver size on annual thermal production of a molten salt external receiver

    NASA Astrophysics Data System (ADS)

    Servert, Jorge; González, Ana; Gil, Javier; López, Diego; Funes, Jose Felix; Jurado, Alfonso

    2017-06-01

    Even though receiver size and aiming strategy must be jointly analyzed to optimize the thermal energy that can be extracted from a solar tower receiver, they have customarily been studied as separate problems. The main reason is the high level of detail required to define aiming strategies, which are often simplified in annual simulation models. Aiming strategies are usually focused on obtaining a homogeneous heat flux on the central receiver, with the goal of minimizing the maximum heat flux value that could damage it. Some recent studies have addressed the effect of different aiming strategies on different receiver types, but they have focused only on the optical efficiency. The receiver size is an additional parameter that has to be considered: larger receivers provide a larger aiming surface and a reduction in spillage losses, but require higher investment while penalizing the thermal performance of the receiver because of greater external convection losses. This paper presents a sensitivity analysis of both factors for a predefined solar field at a fixed location, using a central receiver and molten salts as the heat transfer fluid (HTF). The analysis includes design-point values and annual energy outputs, comparing the effect on optical performance (measured using a spillage factor) and thermal energy production.

  11. A case study by life cycle assessment

    NASA Astrophysics Data System (ADS)

    Li, Shuyun

    2017-05-01

    This article aims to assess the potential environmental impact of an electrical grinder during its life cycle. The Life Cycle Inventory (LCI) analysis was conducted based on the Simplified Life Cycle Assessment (SLCA) drivers calculated from the Valuation of Social Cost and Simplified Life Cycle Assessment Model (VSSM). The detailed LCI results can be found in Appendix II. The Life Cycle Impact Assessment was performed based on the Eco-indicator 99 method. The analysis results indicated a single major contributor to the environmental impact, accounting for over 60% of the overall SLCA output; within this, 60% of the emissions resulted from the logistics required for maintenance activities. This was determined by conducting a hotspot analysis. Sensitivity analysis showed that changing the fuel type results in a significant decrease in environmental footprint. An environmental benefit can also be seen from the negative output values of the recycling activities. By conducting the Life Cycle Assessment, the potential environmental impact of the electrical grinder was investigated.

  12. Spectro-spatial analysis of wave packet propagation in nonlinear acoustic metamaterials

    NASA Astrophysics Data System (ADS)

    Zhou, W. J.; Li, X. P.; Wang, Y. S.; Chen, W. Q.; Huang, G. L.

    2018-01-01

    The objective of this work is to analyze wave packet propagation in weakly nonlinear acoustic metamaterials and reveal the interior nonlinear wave mechanism through spectro-spatial analysis. The spectro-spatial analysis is based on a full-scale transient analysis of the finite system, by which dispersion curves are generated from the transmitted waves and also verified by the perturbation method (the L-P method). We found that the spectro-spatial analysis can provide detailed information about the solitary wave in the short-wavelength region which cannot be captured by the L-P method. It is also found that the optical wave modes in the nonlinear metamaterial are sensitive to the parameters of the nonlinear constitutive relation. Specifically, a significant frequency shift phenomenon is found in the middle-wavelength region of the optical wave branch, which makes this frequency region behave like a band gap for transient waves. This special frequency shift is then used to design a direction-biased waveguide device, and its efficiency is shown by numerical simulations.
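    The core of a spectro-spatial analysis of this kind is a 2-D Fourier transform of the transient field u(x, t): ridges of the resulting frequency-wavenumber amplitude map trace out the dispersion curves of the transmitted waves. A minimal sketch on a synthetic single-mode traveling wave (the grid sizes and toy field are illustrative, not the metamaterial model of the paper):

```python
import numpy as np

# Synthetic transient field with a single traveling harmonic:
# u(t, x) = sin(2*pi*(k0*x/Nx - f0*t/Nt)). In a real analysis u would come
# from a full-scale transient simulation of the finite system.
Nt, Nx = 64, 32
f0, k0 = 5, 3
t = np.arange(Nt)[:, None]
x = np.arange(Nx)[None, :]
u = np.sin(2 * np.pi * (k0 * x / Nx - f0 * t / Nt))

# 2-D DFT maps the space-time field into the frequency-wavenumber plane;
# peaks/ridges of |U| locate the dispersion branches.
U = np.abs(np.fft.fft2(u))
U[0, :] = 0  # drop the static (zero-frequency) row before peak picking
fi, ki = np.unravel_index(np.argmax(U[:Nt // 2]), (Nt // 2, Nx))
# Under NumPy's FFT sign convention, the positive-frequency component of this
# rightward-traveling sine appears at temporal index f0 and wavenumber index Nx - k0.
```

    Sweeping excitation frequency and stacking such maps reveals the frequency shifts and band-gap-like regions described above.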

  13. Adaptation of Laser Microdissection Technique for the Study of a Spontaneous Metastatic Mammary Carcinoma Mouse Model by NanoString Technologies

    PubMed Central

    Saylor, Karen L.; Anver, Miriam R.; Salomon, David S.; Golubeva, Yelena G.

    2016-01-01

    Laser capture microdissection (LCM) of tissue is an established tool in medical research for the collection of distinct cell populations under direct microscopic visualization for molecular analysis. LCM samples have been successfully analyzed in a number of genomic and proteomic downstream molecular applications. However, the LCM sample collection and preparation procedure has to be adapted to each downstream analysis platform. In this manuscript we describe in detail the adaptation of LCM methodology for the collection and preparation of fresh frozen samples for NanoString analysis, based on a study of a model of mouse mammary gland carcinoma and its lung metastasis. Our adaptation of LCM sample preparation and workflow to the requirements of the NanoString platform allowed us to acquire samples with high RNA quality. The NanoString analysis of such samples provided sensitive detection of genes of interest and their associated molecular pathways. NanoString is a reliable gene expression analysis platform that can be effectively coupled with LCM. PMID:27077656

  14. Space Shuttle Main Engine structural analysis and data reduction/evaluation. Volume 1: Aft Skirt analysis

    NASA Technical Reports Server (NTRS)

    Berry, David M.; Stansberry, Mark

    1989-01-01

    Using the ANSYS finite element program, a global model of the aft skirt and a detailed nonlinear model of the failure region were made. The analysis confirmed the area of failure in both the STA-2B and STA-3 tests as the forging heat-affected zone (HAZ) at the aft ring centerline. The highest hoop strain in the HAZ occurs in this area. However, the analysis does not predict failure as defined by ultimate elongation of the material equal to 3.5 percent total strain. The analysis correlates well with the strain gage data from both the Wyle influence test of the original-design aft skirt and the STA-3 test of the redesigned aft skirt. It is suggested that the sensitivity of the failure-area material strength and stress/strain state to material properties, and therefore to small manufacturing or processing variables, is the most likely cause of failure below the expected material ultimate properties.

  15. A water stable europium coordination polymer as fluorescent sensor for detecting Fe3+, CrO42-, and Cr2O72- ions

    NASA Astrophysics Data System (ADS)

    Chen, Chen; Zhang, Xiaolei; Gao, Peng; Hu, Ming

    2018-02-01

    A europium coordination polymer constructed from the 4′-(4-carboxyphenyl)-2,2′:6′,2″-terpyridine ligand (HL), namely [EuL(CH3COO)Cl]n (1), has been prepared by the solvothermal method. Compound 1 was structurally characterized by elemental analysis, FT-IR, powder X-ray diffraction (PXRD), thermogravimetric (TG) analysis, and single-crystal X-ray diffraction. Complex 1 displays a novel linear chain structure, which further extends to a 3D supramolecular structure via π···π and hydrogen-bonding interactions. The luminescent properties of 1 were investigated in detail; it exhibits fluorescent sensing for detecting Fe3+, CrO42-, and Cr2O72- ions in aqueous solution. In addition, 1 shows highly sensitive and selective sensing for CrO42- and Cr2O72- anions with great quenching efficiency. Furthermore, the luminescent sensing mechanisms for differentiating the analytes are explored in detail. It is worth noting that, as shown by XPS characterization, a weak interaction exists between Fe3+ ions and the carboxylate oxygen atoms of the CH3COO- groups, resulting in the high quenching effect of 1.

  16. DEM Modeling of a Flexible Barrier Impacted by a Dry Granular Flow

    NASA Astrophysics Data System (ADS)

    Albaba, Adel; Lambert, Stéphane; Kneib, François; Chareyre, Bruno; Nicot, François

    2017-11-01

    Flexible barriers are widely used as protection structures against natural hazards in mountainous regions, in particular for containing granular materials such as debris flows, snow avalanches and rock slides. This article presents a discrete element method-based model developed with the aim of investigating the response of flexible barriers in such contexts. It accounts for the particular mechanical and geometrical characteristics of both the granular flow and the barrier in a single framework, with limited assumptions. The model, developed with the YADE software, is described in detail, as well as its calibration. In particular, cables are modeled as continuous bodies, and the model naturally considers the sliding of rings along supporting cables. The model is then applied to a generic flexible barrier to demonstrate its capacity to account for the behavior of the different components. A detailed analysis of the forces in the different components showed that energy dissipators (ED) had limited influence on the total force applied to the barrier and on its retaining capacity, but greatly influenced the load transmission within the barrier and the force in the anchors. A sensitivity analysis showed that the barrier's response changes significantly according to the choice of ED activation force and incoming flow conditions.

  17. Lunar Exploration Architecture Level Key Drivers and Sensitivities

    NASA Technical Reports Server (NTRS)

    Goodliff, Kandyce; Cirillo, William; Earle, Kevin; Reeves, J. D.; Shyface, Hilary; Andraschko, Mark; Merrill, R. Gabe; Stromgren, Chel; Cirillo, Christopher

    2009-01-01

    Strategic level analysis of the integrated behavior of lunar transportation and lunar surface systems architecture options is performed to assess the benefit, viability, affordability, and robustness of system design choices. This analysis employs both deterministic and probabilistic modeling techniques so that the extent of potential future uncertainties associated with each option is properly characterized. The results of these analyses are summarized in a predefined set of high-level Figures of Merit (FOMs) so as to provide senior NASA Constellation Program (CxP) and Exploration Systems Mission Directorate (ESMD) management with pertinent information to better inform strategic level decision making. The strategic level exploration architecture model is designed to perform analysis at as high a level as possible but still capture those details that have major impacts on system performance. The strategic analysis methodology focuses on integrated performance, affordability, and risk analysis, and captures the linkages and feedbacks between these three areas. Each of these results leads into the determination of the high-level FOMs. This strategic level analysis methodology has been previously applied to Space Shuttle and International Space Station assessments and is now being applied to the development of the Constellation Program point-of-departure lunar architecture. This paper provides an overview of the strategic analysis methodology and the lunar exploration architecture analyses to date. In studying these analysis results, the strategic analysis team has identified and characterized key drivers affecting the integrated architecture behavior. These key drivers include inclusion of a cargo lander, mission rate, mission location, fixed-versus-variable costs/return on investment, and the requirement for probabilistic analysis. Results of sensitivity analysis performed on lunar exploration architecture scenarios are also presented.

  18. Probabilistic Evaluation of Advanced Ceramic Matrix Composite Structures

    NASA Technical Reports Server (NTRS)

    Abumeri, Galib H.; Chamis, Christos C.

    2003-01-01

    The objective of this report is to summarize the deterministic and probabilistic structural evaluation results for two structures made with advanced ceramic matrix composites (CMC): an internally pressurized tube and a uniformly loaded flange. The deterministic structural evaluation includes stress, displacement, and buckling analyses. It is carried out using the finite element code MHOST, developed for the 3-D inelastic analysis of structures made with advanced materials. The probabilistic evaluation is performed using the integrated probabilistic assessment of composite structures computer code IPACS. The effects of uncertainties in primitive variables related to the material, fabrication process, and loadings on material properties and structural response behavior are quantified. The primitive variables considered are: thermo-mechanical properties of fiber and matrix, fiber and void volume ratios, use temperature, and pressure. The probabilistic structural analysis and probabilistic strength results are used by IPACS to perform reliability and risk evaluation of the two structures. The results show that the sensitivity information obtained for the two composite structures from the computational simulation can be used to alter the design process to meet desired service requirements. In addition to the detailed probabilistic analysis of the two structures, the following were performed specifically on the CMC tube: (1) prediction of the failure load and the buckling load, (2) coupled non-deterministic multi-disciplinary structural analysis, and (3) demonstration that probabilistic sensitivities can be used to select a reduced set of design variables for optimization.
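The flavor of such a probabilistic evaluation can be sketched (outside MHOST/IPACS, which are not reproducible here) with a plain Monte Carlo loop: assumed distributions for a few primitive variables are propagated through a thin-wall hoop-stress model of an internally pressurized tube, a reliability estimate is formed, and the variables are ranked by correlation with the stress response. Every distribution and the strength value below is an illustrative assumption, not an input from the report.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20_000  # Monte Carlo samples

# Hypothetical primitive-variable distributions (illustrative values only):
# internal pressure p [MPa], mean tube radius r [mm], wall thickness t [mm],
# and an allowable hoop stress standing in for probabilistic strength.
p = rng.normal(10.0, 1.0, n)           # pressure
r = rng.normal(50.0, 0.5, n)           # radius
t = rng.normal(2.0, 0.1, n)            # wall thickness
strength = rng.normal(400.0, 30.0, n)  # allowable hoop stress [MPa]

# Thin-wall hoop stress for an internally pressurized tube: sigma = p*r/t
sigma = p * r / t

# Reliability estimate: probability that stress stays below strength
reliability = np.mean(sigma < strength)

# Crude sensitivity ranking: |correlation| of each primitive variable with
# the response (IPACS computes formal probabilistic sensitivities; this is
# only the Monte Carlo analogue of that idea).
sens = {name: abs(np.corrcoef(x, sigma)[0, 1])
        for name, x in [("p", p), ("r", r), ("t", t)]}
print(reliability, sorted(sens, key=sens.get, reverse=True))
```

With these assumed spreads, pressure dominates the response scatter, which is the kind of ranking used to prune design variables for optimization.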

  19. Extended principal component analysis - a useful tool to understand processes governing water quality at catchment scales

    NASA Astrophysics Data System (ADS)

    Selle, B.; Schwientek, M.

    2012-04-01

    Water quality of ground and surface waters in catchments is typically driven by many complex and interacting processes. While small-scale processes are often studied in great detail, their relevance and interplay at catchment scales often remain poorly understood. For many catchments, extensive monitoring data on water quality have been collected for different purposes. These heterogeneous data sets contain valuable information on catchment-scale processes but are rarely analysed using integrated methods. Principal component analysis (PCA) has previously been applied to such data sets. However, a detailed analysis of scores, which are an important result of a PCA, is often missing. Mathematically, PCA expresses measured water-quality variables, e.g. nitrate concentrations, as linear combinations of independent, not directly observable key processes. These computed key processes are represented by principal components. Their scores are interpretable as process intensities which vary in space and time. Subsequently, scores can be correlated with other key variables and catchment characteristics, such as water travel times and land use, that were not considered in the PCA. This detailed analysis of scores represents an extension of the commonly applied PCA which could considerably improve the understanding of processes governing water quality at catchment scales. In this study, we investigated the 170 km2 Ammer catchment in SW Germany, which is characterised by an above-average proportion of agricultural (71%) and urban (17%) areas. The Ammer River is mainly fed by karstic springs. For PCA, we separately analysed concentrations from (a) surface waters of the Ammer River and its tributaries, (b) spring waters from the main aquifers and (c) deep groundwater from production wells. This analysis was extended by a detailed analysis of scores. We analysed measured concentrations of major ions and selected organic micropollutants. Additionally, redox-sensitive variables and environmental tracers indicating groundwater age were analysed for deep groundwater from production wells. For deep groundwater, we found that microbial turnover was more strongly influenced by the local availability of energy sources than by travel times of groundwater to the wells. Groundwater quality primarily reflected the input of pollutants determined by land use, e.g. agrochemicals. We concluded that, for water quality in the Ammer catchment, conservative mixing of waters of different origins is more important than reactive transport processes along the flow path.
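The extended-PCA idea described above — compute principal components, then correlate their scores with variables left out of the PCA — can be sketched on synthetic data with plain NumPy. The two "key processes" and the loading vectors below are invented for illustration and have no relation to the Ammer data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 200

# Two hypothetical, independent "key processes" (illustrative stand-ins
# for, e.g., agricultural input and groundwater mixing).
proc_a = rng.normal(0.0, 3.0, n_samples)  # dominant process
proc_b = rng.normal(0.0, 1.0, n_samples)  # weaker process

# Six measured water-quality variables, each a linear combination of the
# processes plus noise (this mimics the PCA model described above).
loadings_a = np.array([2.0, 1.5, 1.0, 0.2, 0.1, 0.0])
loadings_b = np.array([0.0, 0.1, 0.2, 1.0, 1.5, 2.0])
X = (np.outer(proc_a, loadings_a) + np.outer(proc_b, loadings_b)
     + rng.normal(0.0, 0.2, (n_samples, 6)))

# PCA via SVD of the centered data matrix
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * S  # sample scores on each principal component

# "Extended" step: correlate PC1 scores with an external variable that was
# not part of the PCA (here, the hidden dominant process itself, as a proxy
# for catchment characteristics such as land use or travel time).
r = np.corrcoef(scores[:, 0], proc_a)[0, 1]
print(abs(r))
```

Because the first process dominates the variance, its intensity is recovered almost exactly by the PC1 scores; with field data the same correlation step is what links scores to catchment characteristics.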

  20. Study on experimental characterization of carbon fiber reinforced polymer panel using digital image correlation: A sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Kashfuddoja, Mohammad; Prasath, R. G. R.; Ramji, M.

    2014-11-01

    In this work, the experimental characterization of a polymer matrix and a polymer-based carbon fiber reinforced composite laminate by employing the whole-field, non-contact digital image correlation (DIC) technique is presented. The properties are evaluated from the full-field data obtained from DIC measurements by performing a series of tests as per ASTM standards. The evaluated properties are compared with the results obtained from conventional testing and analytical models and are found to closely match. Further, the sensitivity of DIC parameters on material properties is investigated and their optimum values are identified. It is found that the subset size has more influence on material properties than the step size, and the predicted optimum values for both the matrix and the composite material are found to be consistent with each other. The aspect ratio of the region of interest (ROI) chosen for correlation should be the same as the aspect ratio of the camera resolution for better correlation. Also, an open-cutout panel made of the same composite laminate is considered to demonstrate the sensitivity of DIC parameters in predicting the complex strain field surrounding the hole. It is observed that the strain field surrounding the hole is much more sensitive to step size than to subset size. A lower step size produces a highly pixelated strain field, capturing local strain variations at the expense of computational time, along with a randomly scattered noisy pattern, whereas a higher step size mitigates the noisy pattern at the expense of losing detail in the data and can even alter the natural trend of the strain field, leading to erroneous maximum strain locations. Varying the subset size mainly produces a smoothing effect, eliminating noise from the strain field while preserving the detail in the data without altering its natural trend. However, increasing the subset size significantly reduces the strain data at the hole edge due to discontinuity in the correlation. Also, the DIC results are compared with FEA predictions to ascertain suitable values of the DIC parameters for better accuracy.
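The subset-matching machinery underlying DIC can be illustrated with a minimal integer-pixel sketch: a synthetic speckle pattern is shifted by a known displacement, and one subset is matched by maximizing the zero-normalized cross-correlation (ZNCC) over a search window. Real DIC codes add sub-pixel interpolation and subset shape functions; the subset size, corner location, and search range below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic speckle image and a copy shifted by a known displacement
# (np.roll gives an exact integer-pixel shift, so correlation is clean).
ref = rng.random((64, 64))
true_shift = (3, 5)  # (rows, cols)
cur = np.roll(ref, true_shift, axis=(0, 1))

def zncc(a, b):
    """Zero-normalized cross-correlation of two equal-size subsets."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

# One DIC subset in the reference image; search a window of integer
# displacements in the current image and keep the best ZNCC match.
subset = 15      # subset size in pixels; larger smooths, smaller localizes
y0, x0 = 20, 20  # subset corner in the reference image
ref_sub = ref[y0:y0 + subset, x0:x0 + subset]

best, best_d = -2.0, None
for dy in range(-8, 9):
    for dx in range(-8, 9):
        cur_sub = cur[y0 + dy:y0 + dy + subset, x0 + dx:x0 + dx + subset]
        c = zncc(ref_sub, cur_sub)
        if c > best:
            best, best_d = c, (dy, dx)
print(best_d, best)
```

The recovered displacement equals the imposed shift; repeating this match on a grid of subset centers, spaced by the step size, is what produces the full strain field discussed above.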

  1. Benefits of explicit urban parameterization in regional climate modeling to study climate and city interactions

    NASA Astrophysics Data System (ADS)

    Daniel, M.; Lemonsu, Aude; Déqué, M.; Somot, S.; Alias, A.; Masson, V.

    2018-06-01

    Most climate models do not explicitly model urban areas and at best describe them as rock covers. Nonetheless, the very high resolutions now reached by regional climate models may justify and require a more realistic parameterization of surface exchanges between the urban canopy and the atmosphere. To quantify the potential impact of urbanization on the regional climate, and to evaluate the benefits of a detailed urban canopy model compared with a simpler approach, a sensitivity study was carried out over France at a 12-km horizontal resolution with the ALADIN-Climate regional model for the 1980-2009 time period. Different descriptions of land use and urban modeling were compared, corresponding to an explicit modeling of cities with the urban canopy model TEB, a conventional and simpler approach representing urban areas as rocks, and a vegetated experiment in which cities are replaced by natural covers. A general evaluation of ALADIN-Climate was first performed; it showed an overestimation of the incoming solar radiation but satisfactory results in terms of precipitation and near-surface temperatures. The sensitivity analysis then highlighted that urban areas had a significant impact on modeled near-surface temperature. A further analysis of a few large French cities indicated that over the 30 years of simulation they all induced a warming effect both at daytime and nighttime, with values up to +1.5 °C for the city of Paris. The urban model also led to a regional warming extending beyond the boundaries of urban areas. Finally, the comparison with temperature observations available for the Paris area highlighted that the detailed urban canopy model improved the modeling of the urban heat island compared with a simpler approach.

  2. [Comparison of simple pooling and bivariate model used in meta-analyses of diagnostic test accuracy published in Chinese journals].

    PubMed

    Huang, Yuan-sheng; Yang, Zhi-rong; Zhan, Si-yan

    2015-06-18

    To investigate the use of simple pooling and the bivariate model in meta-analyses of diagnostic test accuracy (DTA) published in Chinese journals (January to November, 2014), compare the differences in results from these two models, and explore the impact of between-study variability of sensitivity and specificity on the differences. DTA meta-analyses were searched through the Chinese Biomedical Literature Database (January to November, 2014). Details of the models and fourfold-table data were extracted. Descriptive analysis was conducted to investigate the prevalence of the simple pooling method and the bivariate model in the included literature. Data were re-analyzed with the two models respectively. Differences in the results were examined by the Wilcoxon signed rank test. How the differences in results were affected by between-study variability of sensitivity and specificity, expressed by I2, was explored. In total, 55 systematic reviews, containing 58 DTA meta-analyses, were included, and 25 DTA meta-analyses were eligible for re-analysis. Simple pooling was used in 50 (90.9%) systematic reviews and the bivariate model in 1 (1.8%). The remaining 4 (7.3%) articles used other models for pooling sensitivity and specificity or pooled neither of them. Of the reviews simply pooling sensitivity and specificity, 41 (82.0%) were at risk of wrongly using the Meta-DiSc software. The differences in medians of sensitivity and specificity between the two models were both 0.011 (P<0.001 and P=0.031, respectively). Greater differences were found as the I2 of sensitivity or specificity became larger, especially when I2>75%. Most DTA meta-analyses published in Chinese journals (January to November, 2014) combine sensitivity and specificity by simple pooling. The Meta-DiSc software can pool sensitivity and specificity only through a fixed-effect model, but a high proportion of authors think it can implement a random-effects model. Simple pooling tends to underestimate the results compared with the bivariate model. The greater the between-study variance is, the more simple pooling is likely to deviate. It is necessary to improve knowledge of statistical methods and software for meta-analyses of DTA data.
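Simple pooling, the practice criticized above, amounts to summing the fourfold tables across studies and computing sensitivity and specificity from the totals; a proper bivariate analysis instead fits a random-effects logistic model to the paired logit-sensitivity/logit-specificity, which is not reproduced here. A minimal sketch of the simple-pooling side, with made-up study counts:

```python
from typing import List, Tuple

def simple_pool(tables: List[Tuple[int, int, int, int]]) -> Tuple[float, float]:
    """Simple pooling of diagnostic 2x2 tables given as (TP, FP, FN, TN):
    sensitivity and specificity are computed from the summed counts,
    ignoring between-study variability entirely."""
    tp = sum(t[0] for t in tables)
    fp = sum(t[1] for t in tables)
    fn = sum(t[2] for t in tables)
    tn = sum(t[3] for t in tables)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical studies (invented counts, for illustration only)
studies = [(90, 10, 10, 90), (45, 5, 15, 35)]
sens, spec = simple_pool(studies)
print(sens, spec)
```

The fixed-effect character of this calculation is exactly why it diverges from the bivariate model as the between-study I2 grows.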

  3. Comparison of patterns of allergen sensitization among inner-city Hispanic and African American children with asthma.

    PubMed

    Rastogi, Deepa; Reddy, Mamta; Neugebauer, Richard

    2006-11-01

    Among Hispanics, the largest minority ethnic group in the United States, asthma prevalence is increasing, particularly in inner-city neighborhoods. Although allergen sensitization among asthmatic African Americans has been extensively studied, similar details are not available for Hispanic children. To examine patterns of allergen sensitization, including the association with illness severity, in asthmatic children overall and in Hispanic and African American children living in a socioeconomically disadvantaged area of New York City. A retrospective medical record review of asthmatic children attending a community hospital in the South Bronx area of New York City was performed. Information abstracted included demographics, asthma severity classification, reported exposures to indoor allergens, and results of allergy testing. Among 384 children in the analysis, 270 (70.3%) were Hispanic and 114 (29.7%) were African American. Sensitization to indoor and outdoor allergens, respectively, did not differ between Hispanic (58.5% and 27.0%) and African American (58.8% and 32.6%) children. Sensitization to indoor allergens exhibited a direct, significant association with asthma severity for the 2 ethnic groups combined and for Hispanics separately (P < .01); no such association was found for outdoor allergens. No correlation was found between self-reported allergen exposure and sensitization. Patterns of allergen sensitization among inner-city Hispanic asthmatic children resemble those among African American children, a finding that is likely explained by the similarity in levels of environmental exposures. With the increasing prevalence of asthma among inner-city Hispanic children, skin testing should be used frequently for objective evaluation of asthma in this ethnic group.

  4. Early growth response-1 negative feedback regulates skeletal muscle postprandial insulin sensitivity via activating Ptp1b transcription.

    PubMed

    Wu, Jing; Tao, Wei-Wei; Chong, Dan-Yang; Lai, Shan-Shan; Wang, Chuang; Liu, Qi; Zhang, Tong-Yu; Xue, Bin; Li, Chao-Jun

    2018-03-15

    Postprandial insulin desensitization plays a critical role in maintaining whole-body glucose homeostasis by avoiding the excessive absorption of blood glucose; however, the detailed mechanisms by which the major player, skeletal muscle, desensitizes insulin action remain to be elucidated. Herein, we report that early growth response gene-1 (Egr-1) is activated by insulin in skeletal muscle and provides feedback inhibition that regulates insulin sensitivity after a meal. The inhibition of the transcriptional activity of Egr-1 enhanced the phosphorylation of the insulin receptor (InsR) and Akt, thus increasing glucose uptake in L6 myotubes after insulin stimulation, whereas overexpression of Egr-1 decreased insulin sensitivity. Furthermore, deletion of Egr-1 in skeletal muscle improved systemic insulin sensitivity and glucose tolerance, which resulted in lower blood glucose levels after refeeding. Mechanistic analysis demonstrated that EGR-1 inhibited InsR phosphorylation and glucose uptake in skeletal muscle by binding to the proximal promoter region of protein tyrosine phosphatase-1B (PTP1B) and directly activating its transcription. PTP1B knockdown largely restored insulin sensitivity and enhanced glucose uptake, even under conditions of EGR-1 overexpression. Our results indicate that EGR-1/PTP1B signaling negatively regulates postprandial insulin sensitivity and suggest a potential therapeutic target for the prevention and treatment of excessive glucose absorption.-Wu, J., Tao, W.-W., Chong, D.-Y., Lai, S.-S., Wang, C., Liu, Q., Zhang, T.-Y., Xue, B., Li, C.-J. Early growth response-1 negative feedback regulates skeletal muscle postprandial insulin sensitivity via activating Ptp1b transcription.

  5. Effects of CO addition on the characteristics of laminar premixed CH{sub 4}/air opposed-jet flames

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, C.-Y.; Chao, Y.-C.; Chen, C.-P.

    2009-02-15

    The effects of CO addition on the characteristics of premixed CH{sub 4}/air opposed-jet flames are investigated experimentally and numerically. Experimental measurements and numerical simulations of the flame front position, temperature, and velocity are performed in stoichiometric CH{sub 4}/CO/air opposed-jet flames with various CO contents in the fuel. A thermocouple is used for the determination of flame temperature, velocity measurements are made using particle image velocimetry (PIV), and the flame front position is measured by direct photography as well as with laser-induced predissociative fluorescence (LIPF) OH imaging. The laminar burning velocity is calculated using the PREMIX code of the Chemkin collection 3.5. The flame structures of the premixed stoichiometric CH{sub 4}/CO/air opposed-jet flames are simulated using the OPPDIF package with the GRI-Mech 3.0 chemical kinetic mechanism and detailed transport properties. The measured flame front position, temperature, and velocity of the stoichiometric CH{sub 4}/CO/air flames are closely predicted by the numerical calculations. Detailed analysis of the calculated chemical kinetic structures reveals that as the CO content in the fuel is increased from 0% to 80%, CO oxidation (R99) increases significantly and contributes a significant level of heat-release rate. It is also shown that the laminar burning velocity reaches a maximum value (57.5 cm/s) at 80% CO in the fuel. Based on the results of sensitivity analysis, the chemistry of CO consumption shifts to the dry oxidation kinetics when the CO content is increased beyond 80%. Comparison between the results of computed laminar burning velocity, flame temperature, CO consumption rate, and sensitivity analysis reveals that the effect of CO addition on the laminar burning velocity of the stoichiometric CH{sub 4}/CO/air flames is due mostly to the transition of the dominant chemical kinetic steps.

  6. Structure, porosity and stress regime of the upper oceanic crust: Sonic and ultrasonic logging of DSDP Hole 504B

    USGS Publications Warehouse

    Newmark, R.L.; Anderson, R.N.; Moos, D.; Zoback, M.D.

    1985-01-01

    The layered structure of the oceanic crust is characterized by changes in geophysical gradients rather than by abrupt layer boundaries. Correlation of geophysical logs and cores recovered from DSDP Hole 504B provides some insight into the physical properties which control these gradient changes. Borehole televiewer logging in Hole 504B provides a continuous image of wellbore reflectivity into the oceanic crust, revealing detailed structures not otherwise apparent, due to the low percentage of core recovery. Physical characteristics of the crustal layers 2A, 2B and 2C, such as the detailed sonic velocity and lithostratigraphic structure, are obtained through analysis of the sonic, borehole televiewer and electrical resistivity logs. A prediction of bulk hydrated mineral content, consistent with comparison to the recovered material, suggests a change in the nature of the alteration with depth. Data from the sonic, borehole televiewer, electrical resistivity and other porosity-sensitive logs are used to calculate the variation of porosity in the crustal layers 2A, 2B and 2C. Several of the well logs which are sensitive to the presence of fractures and open porosity in the formation indicate many zones of intense fracturing. Interpretation of these observations suggests that there may be a fundamental pattern of cooling-induced structure in the oceanic crust. © 1985.

  7. Photoacoustic tomography of the human finger: towards the assessment of inflammatory joint diseases

    NASA Astrophysics Data System (ADS)

    van Es, P.; Biswas, S. K.; Bernelot Moens, H. J.; Steenbergen, W.; Manohar, S.

    2015-03-01

    Inflammatory arthritis often manifests in the finger joints. The growth of new, or withdrawal of old, blood vessels can be a sensitive marker for these diseases. Photoacoustic (PA) imaging has great potential in this respect since it allows the sensitive and highly resolved visualization of blood. We systematically investigated PA imaging of finger vasculature in healthy volunteers using a newly developed PA tomographic system. We present PA results which show excellent detail of the vasculature. Vessels with diameters ranging between 100 μm and 1.5 mm are visible, along with details of the skin, including the epidermis and the subpapillary plexus. All studies focus on the proximal and distal interphalangeal joints, with the ultimate aim of visualizing the inflamed synovial membrane in patients. This work is important in laying the foundation for detailed research into PA imaging of the phalangeal vasculature in patients suffering from rheumatoid arthritis.

  8. Phonetic Detail in the Developing Lexicon

    ERIC Educational Resources Information Center

    Swingley, Daniel

    2003-01-01

    Although infants show remarkable sensitivity to linguistically relevant phonetic variation in speech, young children sometimes appear not to make use of this sensitivity. Here, children's knowledge of the sound-forms of familiar words was assessed using a visual fixation task. Dutch 19-month-olds were shown pairs of pictures and heard correct…

  9. Hands on or hands off? Disgust sensitivity and preference for environmental education activities

    Treesearch

    Robert D. Bixler; Myron F. Floyd

    1999-01-01

    Detailed descriptions of barriers to environmental education (EE) can provide opportunities for educators to foresee potential problems in programs. High disgust sensitivity is an intrapersonal barrier that constrains preference for learning opportunities involving manipulation of some organic materials. Middle school students in Texas (N = 450)...

  10. Distributional Cost-Effectiveness Analysis

    PubMed Central

    Asaria, Miqdad; Griffin, Susan; Cookson, Richard

    2015-01-01

    Distributional cost-effectiveness analysis (DCEA) is a framework for incorporating health inequality concerns into the economic evaluation of health sector interventions. In this tutorial, we describe the technical details of how to conduct DCEA, using an illustrative example comparing alternative ways of implementing the National Health Service (NHS) Bowel Cancer Screening Programme (BCSP). The 2 key stages in DCEA are 1) modeling social distributions of health associated with different interventions, and 2) evaluating social distributions of health with respect to the dual objectives of improving total population health and reducing unfair health inequality. As well as describing the technical methods used, we also identify the data requirements and the social value judgments that have to be made. Finally, we demonstrate the use of sensitivity analyses to explore the impacts of alternative modeling assumptions and social value judgments. PMID:25908564

  11. PDCO: Polarizational-directional correlation from oriented nuclei

    NASA Astrophysics Data System (ADS)

    Droste, Ch.; Rohoziński, S. G.; Starosta, K.; Morek, T.; Srebrny, J.; Magierski, P.

    1996-02-01

    A general formula is given for the correlation between two polarized gamma rays (γ1 and γ2) emitted in a cascade from an oriented nucleus (oriented, for example, by a heavy ion reaction). It allows one to calculate the angular correlation between: (a) the linear polarizations of γ1 and γ2; (b) the polarization of γ1 and the direction of γ2, or vice versa; (c) the directions of γ1 and γ2 (DCO). The formula, discussed in detail for case (b), can be used in the analysis of data from modern multidetector gamma-ray spectrometers that contain new-generation detectors (e.g. CLOVER) sensitive to polarization. The analysis of polarization together with the DCO ratio can lead to a unique spin/parity assignment and a mixing ratio determination.

  12. Digital Droplet PCR: CNV Analysis and Other Applications.

    PubMed

    Mazaika, Erica; Homsy, Jason

    2014-07-14

    Digital droplet PCR (ddPCR) is an assay that combines state-of-the-art microfluidics technology with TaqMan-based PCR to achieve precise target DNA quantification at high levels of sensitivity and specificity. Because quantification is achieved without the need for standard assays in an easy to interpret, unambiguous digital readout, ddPCR is far simpler, faster, and less error prone than real-time qPCR. The basic protocol can be modified with minor adjustments to suit a wide range of applications, such as CNV analysis, rare variant detection, SNP genotyping, and transcript quantification. This unit describes the ddPCR workflow in detail for the Bio-Rad QX100 system, but the theory and data interpretation are generalizable to any ddPCR system. Copyright © 2014 John Wiley & Sons, Inc.
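The quantification step ddPCR relies on is Poisson statistics over droplets: template molecules partition randomly, so the mean copies per droplet is recovered from the fraction of negative droplets, and concentration follows from the droplet volume. A minimal sketch (the ~0.91 nL droplet volume is the commonly quoted figure for the Bio-Rad QX100, used here as an assumption):

```python
import math

def ddpcr_concentration(n_positive: int, n_total: int,
                        droplet_volume_ul: float = 0.00091) -> float:
    """Estimate target copies per microliter from droplet counts.

    Because template molecules distribute into droplets following a
    Poisson distribution, the mean copies per droplet is recovered from
    the fraction of NEGATIVE droplets: lambda = -ln(n_neg / n_total).
    The default droplet volume (~0.91 nL) is an assumed QX100 figure.
    """
    frac_neg = (n_total - n_positive) / n_total
    lam = -math.log(frac_neg)        # mean copies per droplet
    return lam / droplet_volume_ul   # copies per microliter of reaction

# Example: 4,000 positive droplets out of 15,000 accepted droplets
print(ddpcr_concentration(4000, 15000))
```

This is why no standard curve is needed: the digital positive/negative readout plus the Poisson correction yields an absolute concentration directly.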

  13. Study and evaluation of impulse mass spectrometers for ion analysis in the D and E regions of the ionosphere

    NASA Technical Reports Server (NTRS)

    Kendall, B. R.

    1979-01-01

    Theoretical and numerical analyses were made of planar, cylindrical and spherical electrode time-of-flight mass spectrometers in order to optimize their operating conditions. A numerical analysis of potential barrier gating in time-of-flight spectrometers was also made. The results were used in the design of several small mass spectrometers. These were constructed and tested in a laboratory space simulator. Detailed experimental studies of a miniature cylindrical electrode time of flight mass spectrometer and of a miniature hemispherical electrode time of flight mass spectrometer were made. The extremely high sensitivity of these instruments and their ability to operate at D region pressures with an open source make them ideal instruments for D region ion composition measurements.

  14. Pleural ultrasonography versus chest radiography for the diagnosis of pneumothorax: review of the literature and meta-analysis.

    PubMed

    Alrajab, Saadah; Youssef, Asser M; Akkus, Nuri I; Caldito, Gloria

    2013-09-23

    Ultrasonography is being increasingly utilized in acute care settings with expanding applications. Pneumothorax evaluation by ultrasonography is a fast, safe, easy and inexpensive alternative to chest radiographs. In this review, we provide a comprehensive analysis of the current literature comparing ultrasonography and chest radiography for the diagnosis of pneumothorax. We searched English-language articles in MEDLINE, EMBASE and Cochrane Library dealing with both ultrasonography and chest radiography for diagnosis of pneumothorax. In eligible studies that met strict inclusion criteria, we conducted a meta-analysis to evaluate the diagnostic accuracy of pleural ultrasonography in comparison with chest radiography for the diagnosis of pneumothorax. We reviewed 601 articles and selected 25 original research articles for detailed review. Only 13 articles met all of our inclusion criteria and were included in the final analysis. One study used lung sliding sign alone, 12 studies used lung sliding and comet tail signs, and 6 studies searched for lung point in addition to the other two signs. Ultrasonography had a pooled sensitivity of 78.6% (95% CI, 68.1 to 98.1) and a specificity of 98.4% (95% CI, 97.3 to 99.5). Chest radiography had a pooled sensitivity of 39.8% (95% CI, 29.4 to 50.3) and a specificity of 99.3% (95% CI, 98.4 to 100). Our meta-regression and subgroup analyses indicate that consecutive sampling of patients compared to convenience sampling provided higher sensitivity results for both ultrasonography and chest radiography. Consecutive versus nonconsecutive sampling and trauma versus nontrauma settings were significant sources of heterogeneity. In addition, subgroup analysis showed significant variations related to operator and type of probe used. Our study indicates that ultrasonography is more accurate than chest radiography for detection of pneumothorax. 
The results support the previous investigations in this field, add new valuable information obtained from subgroup analysis, and provide accurate estimates for the performance parameters of both bedside ultrasonography and chest radiography for pneumothorax evaluation.

  15. Fukushima Daiichi Unit 1 Uncertainty Analysis-Exploration of Core Melt Progression Uncertain Parameters-Volume II.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Denman, Matthew R.; Brooks, Dusty Marie

    Sandia National Laboratories (SNL) has conducted an uncertainty analysis (UA) on the Fukushima Daiichi unit 1 (1F1) accident progression with the MELCOR code. Volume I of the 1F1 UA discusses the physical modeling details and time history results of the UA. Volume II of the 1F1 UA discusses the statistical viewpoint. The model used was developed for a previous accident reconstruction investigation jointly sponsored by the US Department of Energy (DOE) and the Nuclear Regulatory Commission (NRC). The goal of this work was to perform a focused evaluation of uncertainty in core damage progression behavior and its effect on key figures of merit (e.g., hydrogen production, fraction of intact fuel, vessel lower head failure), and in doing so assess the applicability of traditional sensitivity analysis techniques.

  16. Grazing incidence beam expander

    NASA Astrophysics Data System (ADS)

    Akkapeddi, P. R.; Glenn, P.; Fuschetto, A.; Appert, Q.; Viswanathan, V. K.

    1985-01-01

    A Grazing Incidence Beam Expander (GIBE) telescope is being designed and fabricated to be used as an equivalent end mirror in a long laser resonator cavity. The design requirements for this GIBE flow down from a generic Free Electron Laser (FEL) resonator. The nature of the FEL gain volume (a thin, pencil-like, on-axis region) dictates that the output beam be very small. Such a thin beam with the high power levels characteristic of FELs would have to travel perhaps hundreds of meters or more before expanding enough to allow reflection from cooled mirrors. A GIBE, on the other hand, would allow placing these optics closer to the gain region and thus reduces the cavity lengths substantially. Results are presented relating to optical and mechanical design, alignment sensitivity analysis, radius of curvature analysis, laser cavity stability analysis of a linear stable concentric laser cavity with a GIBE. Fabrication details of the GIBE are also given.

  17. Grazing incidence beam expander

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akkapeddi, P.R.; Glenn, P.; Fuschetto, A.

    1985-01-01

    A Grazing Incidence Beam Expander (GIBE) telescope is being designed and fabricated to be used as an equivalent end mirror in a long laser resonator cavity. The design requirements for this GIBE flow down from a generic Free Electron Laser (FEL) resonator. The nature of the FEL gain volume (a thin, pencil-like, on-axis region) dictates that the output beam be very small. Such a thin beam with the high power levels characteristic of FELs would have to travel perhaps hundreds of meters or more before expanding enough to allow reflection from cooled mirrors. A GIBE, on the other hand, would allow placing these optics closer to the gain region and thus reduces the cavity lengths substantially. Results are presented relating to optical and mechanical design, alignment sensitivity analysis, radius of curvature analysis, laser cavity stability analysis of a linear stable concentric laser cavity with a GIBE. Fabrication details of the GIBE are also given.

  18. Global analysis of the yeast lipidome by quantitative shotgun mass spectrometry.

    PubMed

    Ejsing, Christer S; Sampaio, Julio L; Surendranath, Vineeth; Duchoslav, Eva; Ekroos, Kim; Klemm, Robin W; Simons, Kai; Shevchenko, Andrej

    2009-02-17

    Although the transcriptome, proteome, and interactome of several eukaryotic model organisms have been described in detail, lipidomes remain relatively uncharacterized. Using Saccharomyces cerevisiae as an example, we demonstrate that automated shotgun lipidomics analysis enabled lipidome-wide absolute quantification of individual molecular lipid species by streamlined processing of a single sample of only 2 million yeast cells. By comparative lipidomics, we achieved the absolute quantification of 250 molecular lipid species covering 21 major lipid classes. This analysis provided approximately 95% coverage of the yeast lipidome, achieved with a 125-fold improvement in sensitivity compared with previous approaches. Comparative lipidomics demonstrated that growth temperature and defects in lipid biosynthesis induce ripple effects throughout the molecular composition of the yeast lipidome. This work serves as a resource for molecular characterization of eukaryotic lipidomes, and establishes shotgun lipidomics as a powerful platform for complementing biochemical studies and other systems-level approaches.

  19. Single-Molecule Analysis for RISC Assembly and Target Cleavage.

    PubMed

    Sasaki, Hiroshi M; Tadakuma, Hisashi; Tomari, Yukihide

    2018-01-01

    RNA-induced silencing complex (RISC) is a small RNA-protein complex that mediates silencing of complementary target RNAs. Biochemistry has been successfully used to characterize the molecular mechanism of RISC assembly and function for nearly two decades. However, further dissection of the intermediate states of these reactions is needed to fill the gaps in our understanding of RNA silencing mechanisms. Single-molecule analysis with total internal reflection fluorescence (TIRF) microscopy is a powerful imaging-based approach to interrogate complex formation and dynamics at the individual molecule level with high sensitivity. Combining this technique with our recently established in vitro reconstitution system of fly Ago2-RISC, we have developed a single-molecule observation system for RISC assembly. In this chapter, we summarize the detailed protocol for single-molecule analysis of chaperone-assisted assembly of fly Ago2-RISC as well as its target cleavage reaction.

  20. Polycyclic Aromatic Hydrocarbon Far-infrared Spectroscopy

    NASA Astrophysics Data System (ADS)

    Boersma, C.; Bauschlicher, C. W., Jr.; Ricca, A.; Mattioda, A. L.; Peeters, E.; Tielens, A. G. G. M.; Allamandola, L. J.

    2011-03-01

    The far-IR characteristics of astrophysically relevant polycyclic aromatic hydrocarbons (PAHs) averaging in size around 100 carbon atoms have been studied using the theoretical spectra in the NASA Ames PAH IR Spectroscopic Database. These spectra were calculated using density functional theory. Selections of PAH species are made, grouped together by common characteristics or trends, such as size, shape, charge, and composition, and their far-IR spectra are compared. The out-of-plane modes involving the entire molecule are explored in detail, astronomical relevance is assessed, and an observing strategy is discussed. It is shown that PAHs produce richer far-IR spectra with increasing size. PAHs also produce richer far-IR spectra with increasing number of irregularities. However, series of irregular-shaped PAHs with the same compact core have common "Jumping-Jack" modes that "pile up" at specific frequencies in their average spectrum. For the PAHs studied here, around 100 carbon atoms in size, this band falls near 50 μm. PAH charge and nitrogen inclusion affect band intensities but have little effect on far-IR band positions. Detailed analysis of the two-dimensional, out-of-plane bending "drumhead" modes in the coronene and pyrene "families" and the one-dimensional, out-of-plane bending "bar" modes in the acene "family" shows that these molecular vibrations can be treated as classical vibrating sheets and bars of graphene, respectively. The analysis also shows that the peak position of these modes is very sensitive to the area of the emitting PAH and does not depend on the particular geometry. Thus, these longest wavelength PAH bands could provide a unique handle on the size of the largest species in the interstellar PAH family. However, these bands are weak. Observing highly excited regions showing the mid-IR bands, in which the emission from classical dust peaks at short wavelengths, offers the best chance of detecting PAH emission in the far-IR. For these regions, sensitivity is not an issue, spectral contrast is maximized, and the PAH population comprises only highly stable, compact symmetric PAHs, such as the members of the pyrene and coronene "families" discussed in detail here.
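
The "drumhead" picture above has a simple classical analog. A minimal sketch (the membrane formula is textbook physics; the parameter values are placeholders, not spectroscopic data):

```python
import math

# Hedged sketch of the classical drumhead analogy: the fundamental mode of a
# circular membrane of area A, tension T, and areal density sigma is
#   f_01 = (x_01 / (2*pi*a)) * sqrt(T / sigma),  with a = sqrt(A/pi),
# where x_01 is the first zero of the Bessel function J0. Since f scales as
# 1/a ~ 1/sqrt(A), the mode frequency tracks the emitting area, not the
# detailed shape -- the qualitative point made in the abstract.

X01 = 2.404826  # first zero of J0

def drumhead_fundamental_hz(area_m2, tension_n_per_m, density_kg_per_m2):
    radius = math.sqrt(area_m2 / math.pi)
    return X01 / (2.0 * math.pi * radius) * math.sqrt(tension_n_per_m / density_kg_per_m2)

# quadrupling the area halves the fundamental frequency
f1 = drumhead_fundamental_hz(1.0, 1.0, 1.0)
f4 = drumhead_fundamental_hz(4.0, 1.0, 1.0)
```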

  1. Systems analysis of apoptotic priming in ovarian cancer identifies vulnerabilities and predictors of drug response.

    PubMed

    Zervantonakis, Ioannis K; Iavarone, Claudia; Chen, Hsing-Yu; Selfors, Laura M; Palakurthi, Sangeetha; Liu, Joyce F; Drapkin, Ronny; Matulonis, Ursula; Leverson, Joel D; Sampath, Deepak; Mills, Gordon B; Brugge, Joan S

    2017-08-28

    The lack of effective chemotherapies for high-grade serous ovarian cancers (HGS-OvCa) has motivated a search for alternative treatment strategies. Here, we present an unbiased systems-approach to interrogate a panel of 14 well-annotated HGS-OvCa patient-derived xenografts for sensitivity to PI3K and PI3K/mTOR inhibitors and uncover cell death vulnerabilities. Proteomic analysis reveals that PI3K/mTOR inhibition in HGS-OvCa patient-derived xenografts induces both pro-apoptotic and anti-apoptotic signaling responses that limit cell killing, but also primes cells for inhibitors of anti-apoptotic proteins. In-depth quantitative analysis of BCL-2 family proteins and other apoptotic regulators, together with computational modeling and selective anti-apoptotic protein inhibitors, uncovers new mechanistic details about apoptotic regulators that are predictive of drug sensitivity (BIM, caspase-3, BCL-XL) and resistance (MCL-1, XIAP). Our systems-approach presents a strategy for systematic analysis of the mechanisms that limit effective tumor cell killing and the identification of apoptotic vulnerabilities to overcome drug resistance in ovarian and other cancers. High-grade serous ovarian cancers (HGS-OvCa) frequently develop chemotherapy resistance. Here, through a systematic analysis of proteomic and drug response data from 14 HGS-OvCa PDXs, the authors demonstrate that targeting apoptosis regulators can improve the response of these tumors to inhibitors of the PI3K/mTOR pathway.

  2. Wintertime ozone and nitrogen oxide photochemistry and nighttime chemistry in a Western oil and gas basin

    NASA Astrophysics Data System (ADS)

    Brown, S. S.; Edwards, P. M.; Patel, S.; Dube, W. P.; Williams, E. J.; Roberts, J. M.; McLaren, R.; Kercher, J. P.; Gilman, J. B.; Lerner, B. M.; Warneke, C.; Geiger, F.; De Gouw, J. A.; Tsai, C.; Stutz, J.; Young, C. J.; Washenfelder, R. A.; Parrish, D. D.

    2012-12-01

    Oil and gas development in mountain basins of the Western United States has led to frequent exceedances of National Ambient Air Quality Standards for ozone during the winter season. The Uintah Basin Winter Ozone Study took place during February and March 2012 in northeast Utah with the goal of providing detailed chemical and meteorological data to understand this phenomenon. Although snow and cold pool stagnation conditions that lead to winter ozone buildup were not encountered during the study period, the detailed measurements did provide a unique data set to understand the chemistry of key air pollutants in a desert environment during winter. This presentation will examine both the photochemistry and the nighttime chemistry of nitrogen oxides, ozone, and VOCs, with the goal of understanding the observed photochemistry and its relationship to nighttime chemistry through a set of box models. The photochemical box model is based on the master chemical mechanism (MCM), a detailed model for VOC degradation and ozone production. The presentation will examine the sensitivity of ozone photochemistry to different parameters, including pollutant concentrations likely to be characteristic of cold pool conditions, and the strength of radical sources derived from heterogeneous chemical reactions. The goal of the analysis will be to identify the factors most likely to be responsible for the higher ozone events that have been observed during colder years with less detailed chemical measurements.

  3. Dynamics of the Bingham Canyon rock avalanches (Utah, USA) resolved from topographic, seismic, and infrasound data

    NASA Astrophysics Data System (ADS)

    Moore, Jeffrey R.; Pankow, Kristine L.; Ford, Sean R.; Koper, Keith D.; Hale, J. Mark; Aaron, Jordan; Larsen, Chris F.

    2017-03-01

    The 2013 Bingham Canyon Mine rock avalanches represent one of the largest cumulative landslide events in recorded U.S. history and provide a unique opportunity to test remote analysis techniques for landslide characterization. Here we combine aerial photogrammetry surveying, topographic reconstruction, numerical runout modeling, and analysis of broadband seismic and infrasound data to extract salient details of the dynamics and evolution of the multiphase landslide event. Our results reveal a cumulative intact rock source volume of 52 Mm3, which mobilized in two main rock avalanche phases separated by 1.5 h. We estimate that the first rock avalanche had 1.5-2 times greater volume than the second. Each failure initiated by sliding along a gently dipping (21°), highly persistent basal fault before transitioning to a rock avalanche and spilling into the inner pit. The trajectory and duration of the two rock avalanches were reconstructed using runout modeling and independent force history inversion of intermediate-period (10-50 s) seismic data. Intermediate- and shorter-period (1-50 s) seismic data were sensitive to intervals of mass redirection and constrained finer details of the individual slide dynamics. Back projecting short-period (0.2-1 s) seismic energy, we located the two rock avalanches within 2 and 4 km of the mine. Further analysis of infrasound and seismic data revealed that the cumulative event included an additional 11 smaller landslides (volumes 104-105 m3) and that a trailing signal following the second rock avalanche may result from an air-coupled Rayleigh wave. Our results demonstrate new and refined techniques for detailed remote characterization of the dynamics and evolution of large landslides.

  4. HydroApps: An R package for statistical simulation to use in regional analysis

    NASA Astrophysics Data System (ADS)

    Ganora, D.

    2013-12-01

    The HydroApps package is a newly developed R extension, initially created to support a recent model for flood frequency estimation in Northwestern Italy; it also contains general tools for regional analyses and can easily be extended to include other statistical models. The package is currently at an experimental stage of development. HydroApps is a corollary of the SSEM project for regional flood frequency analysis, although it was developed independently to support various kinds of regional analyses. Its aim is to provide a bridge between statistical simulation and practical operational use. In particular, the main module of the package builds the confidence bands of flood frequency curves expressed by means of their L-moments. Other functions include pre-processing and visualization of hydrologic time series and analysis of the optimal design flood under uncertainty, as well as tools useful in water resources management for estimating flow duration curves and their sensitivity to water withdrawals. Particular attention is devoted to code granularity, i.e. the level of detail and aggregation of the code: greater detail means more low-level functions, which gives more flexibility but reduces ease of use in practice. A balance between detail and simplicity is necessary and can be achieved with appropriate wrapper functions and specific help pages for each working block. From a more general viewpoint, the package does not have a truly user-friendly interface, but it runs on multiple operating systems and is easy to update, like many other open-source projects. The HydroApps functions and their features are reported in order to share ideas and materials that improve the 'technological' and information transfer between the scientific community and final users such as policy makers.
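
The confidence-band idea mentioned above can be sketched generically: resample the annual-maximum series, refit a frequency curve each time, and take percentile envelopes of the design quantiles. The sketch below uses a moment-based Gumbel fit and invented discharges; it is my illustration of the concept, not the package's actual L-moment procedure:

```python
import math
import random

# Hedged sketch: bootstrap confidence band for a design flood quantile.
# Gumbel distribution fitted by the method of moments (illustrative choice).

def gumbel_fit(sample):
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi      # scale parameter
    mu = mean - 0.5772156649 * beta            # location (Euler-Mascheroni constant)
    return mu, beta

def gumbel_quantile(mu, beta, return_period):
    p = 1.0 - 1.0 / return_period              # non-exceedance probability
    return mu - beta * math.log(-math.log(p))

def bootstrap_band(sample, return_period, n_boot=2000, lo=0.05, hi=0.95, seed=1):
    rng = random.Random(seed)
    qs = []
    for _ in range(n_boot):
        resample = [rng.choice(sample) for _ in sample]
        qs.append(gumbel_quantile(*gumbel_fit(resample), return_period))
    qs.sort()
    return qs[int(lo * n_boot)], qs[int(hi * n_boot) - 1]

floods = [412, 530, 287, 610, 345, 498, 720, 390, 455, 560, 330, 605]  # m3/s, invented
low, high = bootstrap_band(floods, return_period=100)
best = gumbel_quantile(*gumbel_fit(floods), 100)   # point estimate of the 100-yr flood
```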

  5. Dynamics of the Bingham Canyon rock avalanches (Utah, USA) resolved from topographic, seismic, and infrasound data: Bingham Canyon Rock Avalanches

    DOE PAGES

    Moore, Jeffrey R.; Pankow, Kristine L.; Ford, Sean R.; ...

    2017-03-01

    The 2013 Bingham Canyon Mine rock avalanches represent one of the largest cumulative landslide events in recorded U.S. history and provide a unique opportunity to test remote analysis techniques for landslide characterization. We combine aerial photogrammetry surveying, topographic reconstruction, numerical runout modeling, and analysis of broadband seismic and infrasound data to extract salient details of the dynamics and evolution of the multiphase landslide event. Our results reveal a cumulative intact rock source volume of 52 Mm3, which mobilized in two main rock avalanche phases separated by 1.5 h. We estimate that the first rock avalanche had 1.5–2 times greater volume than the second. Each failure initiated by sliding along a gently dipping (21°), highly persistent basal fault before transitioning to a rock avalanche and spilling into the inner pit. The trajectory and duration of the two rock avalanches were reconstructed using runout modeling and independent force history inversion of intermediate-period (10–50 s) seismic data. Intermediate- and shorter-period (1–50 s) seismic data were sensitive to intervals of mass redirection and constrained finer details of the individual slide dynamics. Back projecting short-period (0.2–1 s) seismic energy, we located the two rock avalanches within 2 and 4 km of the mine. Further analysis of infrasound and seismic data revealed that the cumulative event included an additional 11 smaller landslides (volumes ~104–105 m3) and that a trailing signal following the second rock avalanche may result from an air-coupled Rayleigh wave. These results demonstrate new and refined techniques for detailed remote characterization of the dynamics and evolution of large landslides.

  6. Dynamics of the Bingham Canyon rock avalanches (Utah, USA) resolved from topographic, seismic, and infrasound data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, Jeffrey R.; Pankow, Kristine L.; Ford, Sean R.

    The 2013 Bingham Canyon Mine rock avalanches represent one of the largest cumulative landslide events in recorded U.S. history and provide a unique opportunity to test remote analysis techniques for landslide characterization. We combine aerial photogrammetry surveying, topographic reconstruction, numerical runout modeling, and analysis of broadband seismic and infrasound data to extract salient details of the dynamics and evolution of the multiphase landslide event. Our results reveal a cumulative intact rock source volume of 52 Mm3, which mobilized in two main rock avalanche phases separated by 1.5 h. We estimate that the first rock avalanche had 1.5–2 times greater volume than the second. Each failure initiated by sliding along a gently dipping (21°), highly persistent basal fault before transitioning to a rock avalanche and spilling into the inner pit. The trajectory and duration of the two rock avalanches were reconstructed using runout modeling and independent force history inversion of intermediate-period (10–50 s) seismic data. Intermediate- and shorter-period (1–50 s) seismic data were sensitive to intervals of mass redirection and constrained finer details of the individual slide dynamics. Back projecting short-period (0.2–1 s) seismic energy, we located the two rock avalanches within 2 and 4 km of the mine. Further analysis of infrasound and seismic data revealed that the cumulative event included an additional 11 smaller landslides (volumes ~104–105 m3) and that a trailing signal following the second rock avalanche may result from an air-coupled Rayleigh wave. These results demonstrate new and refined techniques for detailed remote characterization of the dynamics and evolution of large landslides.

  7. 78 FR 66929 - Intent To Conduct a Detailed Economic Impact Analysis

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-07

    ... EXPORT-IMPORT BANK Intent To Conduct a Detailed Economic Impact Analysis AGENCY: Policy and... Federal Register notice informing the public of its intent to conduct a detailed economic impact analysis... subject to a detailed economic impact analysis. DATES: The Federal Register notice published on August 5...

  8. Alcohol biosensing by polyamidoamine (PAMAM)/cysteamine/alcohol oxidase-modified gold electrode.

    PubMed

    Akin, Mehriban; Yuksel, Merve; Geyik, Caner; Odaci, Dilek; Bluma, Arne; Höpfner, Tim; Beutel, Sascha; Scheper, Thomas; Timur, Suna

    2010-01-01

    A highly stable and sensitive amperometric alcohol biosensor was developed by immobilizing alcohol oxidase (AOX) through polyamidoamine (PAMAM) dendrimers on a cysteamine-modified gold electrode surface. Ethanol determination is based on the consumption of dissolved oxygen due to the enzymatic reaction. The decrease in oxygen level was monitored at -0.7 V vs. Ag/AgCl and correlated with ethanol concentration. Optimization of variables affecting the system was performed. The optimized ethanol biosensor showed wide linearity from 0.025 to 1.0 mM, with a 100 s response time and a detection limit (LOD) of 0.016 mM. In the characterization studies, besides linearity, parameters such as operational and storage stability, reproducibility, repeatability, and substrate specificity were studied in detail. Stability studies showed good preservation of the bioanalytical properties of the sensor; 67% of its initial sensitivity was retained after 1 month of storage at 4 degrees C. The analytical characteristics of the system were also evaluated for alcohol determination in flow injection analysis (FIA) mode. Finally, the proposed biosensor was applied to ethanol analysis in various alcoholic beverages, as well as to offline monitoring of alcohol production during yeast cultivation. Copyright 2010 American Institute of Chemical Engineers
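
The linearity and detection-limit figures above come from a calibration curve. A minimal sketch of that workflow (the response values and blank noise are invented, and the 3σ/slope LOD convention is one common choice, not necessarily the paper's):

```python
# Hedged sketch: linear calibration of sensor response vs. ethanol concentration,
# with the detection limit estimated as LOD = 3 * s_blank / slope.

def linear_fit(xs, ys):
    # ordinary least squares for a straight line, returns (slope, intercept)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

conc = [0.025, 0.1, 0.25, 0.5, 0.75, 1.0]         # mM, within the reported linear range
signal = [0.50, 2.01, 5.02, 9.98, 15.01, 20.03]   # uA, invented responses

slope, intercept = linear_fit(conc, signal)
s_blank = 0.10                                    # assumed blank standard deviation, uA
lod = 3.0 * s_blank / slope                       # mM
```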

  9. In-depth analysis of accidental oil spills from tankers in the context of global spill trends from all sources.

    PubMed

    Burgherr, Peter

    2007-02-09

    This study gives a global overview of accidental oil spills from all sources (> or =700t) for the period 1970-2004, followed by a detailed examination of trends in accidental tanker spills. The present analysis of the number and volume of tanker spills includes temporal and spatial spill trends, aspects of spill size distribution as well as trends of key factors (i.e., flag state, hull type, tanker age, accident cause and sensitivity of location). Results show that the total number and volume of tanker spills have significantly decreased since the 1970s, which is in contrast to increases in maritime transport of oil and to popular perceptions following recent catastrophic events. However, many spills still occur in ecologically sensitive locations because the major maritime transport routes often cross the boundaries of the Large Marine Ecosystems, but the substantially lower total spill volume is an important contribution to potentially reduce overall ecosystem impacts. In summary, the improvements achieved in the past decades have been the result of a set of initiatives and regulations implemented by governments, international organizations and the shipping industry.

  10. Plant pressure sensitive adhesives: similar chemical properties in distantly related plant lineages.

    PubMed

    Frenzke, Lena; Lederer, Albena; Malanin, Mikhail; Eichhorn, Klaus-Jochen; Neinhuis, Christoph; Voigt, Dagmar

    2016-07-01

    A mixture of resins based on aliphatic esters and carboxylic acids occurs in the distantly related genera Peperomia and Roridula, serving different adhesive functions in seed dispersal and prey capture. Based on their mechanical characteristics, the adhesive secretions on the leaves of the carnivorous flypaper Roridula gorgonias and on the epizoochorous fruits of Peperomia polystachya were expected to be similar. The chemical analysis of these adhesives proved challenging because of the limited mass available for analysis. Size exclusion chromatography and Fourier transform infrared spectroscopy were suitable methods for identifying a mixture of compounds, most plausibly containing natural resins based on aliphatic esters and carboxylic acids. The IR spectra of the Peperomia and Roridula adhesives resemble each other; they correspond to that of a synthetic ethylene-vinyl acetate copolymer, but differ slightly from those of natural tree resins. Thus, the pressure sensitive adhesive properties of the plant adhesives were chemically confirmed. Such adhesives seem to have appeared independently in distantly related plant lineages, habitats, life forms, and plant organs, and serve different functions such as prey capture in Roridula and fruit dispersal in Peperomia. However, more detailed chemical analyses remain challenging because of the small available volume of plant adhesive.

  11. Postmenopausal bleeding: value of imaging.

    PubMed

    Reinhold, Caroline; Khalili, Ida

    2002-05-01

    Endovaginal sonography in combination with HSG is an effective screening tool in evaluating patients with postmenopausal bleeding. Endovaginal sonography is highly sensitive for detecting endometrial carcinoma and can identify patients at low risk for endometrial disease obviating the need for endometrial sampling in this subgroup of patients. In patients with abnormal findings at sonography, a detailed morphologic analysis can be used to determine which patients can undergo blind endometrial sampling successfully versus those who would benefit from hysteroscopic guidance. In patients in whom endovaginal sonography and HSG are inadequate, MRI may provide additional information on the appearance of the endometrium, particularly in patients in whom endometrial sampling is difficult (eg, patients with cervical stenosis).

  12. Rapid Characterization of Microorganisms by Mass Spectrometry—What Can Be Learned and How?

    NASA Astrophysics Data System (ADS)

    Fenselau, Catherine C.

    2013-08-01

    Strategies for the rapid and reliable analysis of microorganisms have been sought to meet national needs in defense, homeland security, space exploration, food and water safety, and clinical diagnosis. Mass spectrometry has long been a candidate technique because it is extremely rapid and can provide highly specific information. It has excellent sensitivity. Molecular and fragment ion masses provide detailed fingerprints, which can also be interpreted. Mass spectrometry is also a broad band method—everything has a mass—and it is automatable. Mass spectrometry is a physiochemical method that is orthogonal and complementary to biochemical and morphological methods used to characterize microorganisms.

  13. Rapid characterization of microorganisms by mass spectrometry--what can be learned and how?

    PubMed

    Fenselau, Catherine C

    2013-08-01

    Strategies for the rapid and reliable analysis of microorganisms have been sought to meet national needs in defense, homeland security, space exploration, food and water safety, and clinical diagnosis. Mass spectrometry has long been a candidate technique because it is extremely rapid and can provide highly specific information. It has excellent sensitivity. Molecular and fragment ion masses provide detailed fingerprints, which can also be interpreted. Mass spectrometry is also a broad band method--everything has a mass--and it is automatable. Mass spectrometry is a physiochemical method that is orthogonal and complementary to biochemical and morphological methods used to characterize microorganisms.

  14. How to distinguish various components of the SHG signal recorded from the solid/liquid interface?

    NASA Astrophysics Data System (ADS)

    Gassin, Pierre-Marie; Martin-Gassin, Gaelle; Prelot, Benedicte; Zajac, Jerzy

    2016-11-01

    Second harmonic generation (SHG) may be an important tool to probe buried solid/liquid interfaces because of its inherent surface sensitivity. A detailed interpretation of dye adsorption onto a Si-SiO2 wafer is not straightforward because both adsorbent and adsorbate contribute to the overall SHG signal. Polarization-resolved SHG analysis shows that the adsorbent and adsorbate contributions are out of phase by π/2 in the present system. The surface nonlinear susceptibility χ(2) is thus a complex tensor whose real part is related to the adsorbent contribution and whose imaginary part to that of the adsorbate.

  15. An improved car-following model accounting for the preceding car's taillight

    NASA Astrophysics Data System (ADS)

    Zhang, Jian; Tang, Tie-Qiao; Yu, Shao-Wei

    2018-02-01

    During the deceleration process, the preceding car's taillight may influence the following car's driving behavior. In this paper, we propose an extended car-following model that takes the preceding car's taillight into account. Two typical situations are used to simulate each car's movement and study the effects of the preceding car's taillight on driving behavior. Meanwhile, a sensitivity analysis of the model parameters is discussed in detail. The numerical results show that the proposed model can improve the stability of traffic flow, and that traffic safety can be enhanced without a loss of efficiency, especially when cars pass through a signalized intersection.
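
The style of extension described can be sketched with a toy optimal-velocity car-following law in which a lit brake light adds an extra deceleration response. All parameters and the specific taillight term are my invented illustration, not the authors' calibrated model:

```python
import math

# Hedged sketch: optimal-velocity model with an extra gain that activates
# only while the leader's brake light is on and the gap is closing.

V_MAX, H_C = 30.0, 25.0   # m/s desired speed; m transition headway (invented)

def optimal_velocity(headway):
    # standard tanh-shaped optimal velocity function
    return 0.5 * V_MAX * (math.tanh((headway - H_C) / 10.0) + 1.0)

def acceleration(v, headway, dv, taillight_on, a=0.5, lam=0.3, k=0.4):
    # a: relaxation gain, lam: velocity-difference gain,
    # k: additional taillight-triggered gain (all invented values)
    acc = a * (optimal_velocity(headway) - v) + lam * dv
    if taillight_on and dv < 0.0:
        acc += k * dv   # closing in on a braking leader: brake harder
    return acc

# same kinematic situation with and without the taillight cue
acc_off = acceleration(v=20.0, headway=20.0, dv=-5.0, taillight_on=False)
acc_on = acceleration(v=20.0, headway=20.0, dv=-5.0, taillight_on=True)
```

The point of such a term is that the follower reacts earlier and more strongly to a braking leader, which is the mechanism by which stability and safety can improve.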

  16. S-matrix analysis of the baryon electric charge correlation

    NASA Astrophysics Data System (ADS)

    Lo, Pok Man; Friman, Bengt; Redlich, Krzysztof; Sasaki, Chihiro

    2018-03-01

    We compute the correlation of the net baryon number with the electric charge (χBQ) for an interacting hadron gas using the S-matrix formulation of statistical mechanics. The observable χBQ is particularly sensitive to the details of the pion-nucleon interaction, which are consistently incorporated in the current scheme via the empirical scattering phase shifts. Comparing to the recent lattice QCD studies in the (2 + 1)-flavor system, we find that the natural implementation of interactions and the proper treatment of resonances in the S-matrix approach lead to an improved description of the lattice data over that obtained in the hadron resonance gas model.

  17. Fluorescence Spectroscopy for the Monitoring of Food Processes.

    PubMed

    Ahmad, Muhammad Haseeb; Sahar, Amna; Hitzmann, Bernd

    Different analytical techniques have been used to examine the complexity of food samples. Among them, fluorescence spectroscopy cannot be ignored in developing rapid and non-invasive analytical methodologies. It is one of the most sensitive spectroscopic approaches, employed in identification, classification, authentication, quantification, and optimization of different parameters during food handling, processing, and storage, in combination with different chemometric tools. Chemometrics helps to retrieve useful information from spectral data used in the characterization of food samples. This contribution discusses in detail the potential of fluorescence spectroscopy for different foods, such as dairy, meat, fish, eggs, edible oil, cereals, fruit, vegetables, etc., for qualitative and quantitative analysis with different chemometric approaches.

  18. NEMOTAM: tangent and adjoint models for the ocean modelling platform NEMO

    NASA Astrophysics Data System (ADS)

    Vidard, A.; Bouttier, P.-A.; Vigilant, F.

    2015-04-01

    Tangent linear and adjoint models (TAMs) are efficient tools to analyse and to control dynamical systems such as NEMO. They can be involved in a large range of applications such as sensitivity analysis, parameter estimation or the computation of characteristic vectors. A TAM is also required by the 4D-Var algorithm, which is one of the major methods in data assimilation. This paper describes the development and the validation of the tangent linear and adjoint model for the NEMO ocean modelling platform (NEMOTAM). The diagnostic tools that are available alongside NEMOTAM are detailed and discussed, and several applications are also presented.
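
A standard diagnostic for validating a tangent/adjoint pair like NEMOTAM is the dot-product (adjoint consistency) test: for the tangent linear operator M and its adjoint M*, ⟨M dx, y⟩ must equal ⟨dx, M* y⟩ to round-off. A minimal sketch on a toy two-variable model step (the model and numbers are invented; only the test itself is the standard technique):

```python
# Hedged sketch: dot-product test for a hand-coded tangent/adjoint pair.
# The "model" is a toy linear two-variable Euler step, not NEMO.

DT, SIGMA, RHO = 0.01, 10.0, 28.0   # toy parameters

def tangent(dstate):
    # tangent linear model: the Jacobian of the step applied to a perturbation
    dx, dy = dstate
    return ((1 - DT * SIGMA) * dx + DT * SIGMA * dy,
            DT * RHO * dx + (1 - DT) * dy)

def adjoint(astate):
    # adjoint model: the transpose of the same Jacobian
    ax, ay = astate
    return ((1 - DT * SIGMA) * ax + DT * RHO * ay,
            DT * SIGMA * ax + (1 - DT) * ay)

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

dstate, astate = (0.3, -0.7), (0.5, 0.2)
lhs = dot(tangent(dstate), astate)    # <M dx, y>
rhs = dot(dstate, adjoint(astate))    # <dx, M* y>
# lhs and rhs must agree to machine precision if the adjoint is correct
```

For a nonlinear model the Jacobian depends on the linearization state, and the same test is run at many states along a trajectory.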

  19. NEMOTAM: tangent and adjoint models for the ocean modelling platform NEMO

    NASA Astrophysics Data System (ADS)

    Vidard, A.; Bouttier, P.-A.; Vigilant, F.

    2014-10-01

    Tangent linear and adjoint models (TAMs) are efficient tools to analyse and to control dynamical systems such as NEMO. They can be involved in a large range of applications such as sensitivity analysis, parameter estimation or the computation of characteristic vectors. A TAM is also required by the 4D-Var algorithm, which is one of the major methods in data assimilation. This paper describes the development and the validation of the tangent linear and adjoint model for the NEMO ocean modelling platform (NEMOTAM). The diagnostic tools that are available alongside NEMOTAM are detailed and discussed, and several applications are also presented.

  20. Development of the surface-sensitive soft x-ray absorption fine structure measurement technique for the bulk insulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yonemura, Takumi, E-mail: yonemura-takumi@sei.co.jp; Iihara, Junji; Uemura, Shigeaki

    We have succeeded in measuring total electron yield X-ray absorption fine structure (TEY-XAFS) spectra of insulating plate samples. The biggest problem is suppressing charge-up. By depositing a gold stripe electrode on the surface, we obtained a TEY-XAFS spectrum. This indicates that the metal stripe electrode is very useful in TEY-XAFS measurements of insulating plate samples. In a detailed analysis, we found that the effective area for suppressing charge-up extended approximately 120 μm from the edge of the electrode.

  1. Single atom catalysts on amorphous supports: A quenched disorder perspective

    NASA Astrophysics Data System (ADS)

    Peters, Baron; Scott, Susannah L.

    2015-03-01

    Phenomenological models that invoke catalyst sites with different adsorption constants and rate constants are well-established, but computational and experimental methods are just beginning to provide atomically resolved details about amorphous surfaces and their active sites. This letter develops a statistical transformation from the quenched disorder distribution of site structures to the distribution of activation energies for sites on amorphous supports. We show that the overall kinetics are highly sensitive to the precise nature of the low energy tail in the activation energy distribution. Our analysis motivates further development of systematic methods to identify and understand the most reactive members of the active site distribution.
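
The sensitivity to the low-energy tail claimed above follows directly from Arrhenius weighting: sites are averaged with weight exp(-Ea/kBT), so rare low-barrier sites dominate. A minimal numerical illustration (the distributions, energies, and 1% tail fraction are invented, not the letter's data):

```python
import math
import random

# Hedged sketch: ensemble-averaged Arrhenius rate over a quenched distribution
# of activation energies. A small low-energy tail dominates the average.

KB_T = 0.05  # eV, roughly 600 K

def ensemble_rate(energies_ev):
    # mean per-site rate with the prefactor set to 1
    return sum(math.exp(-e / KB_T) for e in energies_ev) / len(energies_ev)

rng = random.Random(0)
bulk = [rng.gauss(1.0, 0.05) for _ in range(10000)]  # typical sites near 1.0 eV
tail = [rng.gauss(0.5, 0.05) for _ in range(100)]    # rare low-barrier sites (1%)

k_bulk_only = ensemble_rate(bulk)
k_with_tail = ensemble_rate(bulk + tail)
enhancement = k_with_tail / k_bulk_only
# the 1% low-barrier tail raises the ensemble rate far above the bulk-only value
```

This is why the letter argues that characterizing the most reactive members of the site distribution, rather than the typical site, is what matters for the overall kinetics.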

  2. Surface emitting ring quantum cascade lasers for chemical sensing

    NASA Astrophysics Data System (ADS)

    Szedlak, Rolf; Hayden, Jakob; Martín-Mateos, Pedro; Holzbauer, Martin; Harrer, Andreas; Schwarz, Benedikt; Hinkov, Borislav; MacFarland, Donald; Zederbauer, Tobias; Detz, Hermann; Andrews, Aaron Maxwell; Schrenk, Werner; Acedo, Pablo; Lendl, Bernhard; Strasser, Gottfried

    2018-01-01

    We review recent advances in chemical sensing applications based on surface emitting ring quantum cascade lasers (QCLs). Such lasers can be implemented in monolithically integrated on-chip laser/detector devices forming compact gas sensors, which are based on direct absorption spectroscopy according to the Beer-Lambert law. Furthermore, we present experimental results on radio frequency modulation up to 150 MHz of surface emitting ring QCLs. This technique provides detailed insight into the modulation characteristics of such lasers. The gained knowledge facilitates the utilization of ring QCLs in combination with spectroscopic techniques, such as heterodyne phase-sensitive dispersion spectroscopy for gas detection and analysis.

  3. An experimental investigation of Iosipescu specimen for composite materials

    NASA Technical Reports Server (NTRS)

    Ho, H.; Tsai, M. Y.; Morton, J.; Farley, G. L.

    1991-01-01

    A detailed experimental evaluation of the Iosipescu specimen tested in the modified Wyoming fixture is presented. Moire interferometry is employed to determine the deformation of unidirectional and cross-ply graphite-epoxy specimens. The results of the moire experiments are compared to those from the traditional strain-gage method. Strain-gage readings from one surface of a specimen, together with corresponding moire interferometry data from the opposite face, documented an extreme sensitivity of some fiber orientations to twisting. A localized hybrid analysis is introduced to perform efficient reduction of the moire data, producing whole-field strain distributions in the specimen test sections.

  4. NMR and MS Methods for Metabolomics.

    PubMed

    Amberg, Alexander; Riefke, Björn; Schlotterbeck, Götz; Ross, Alfred; Senn, Hans; Dieterle, Frank; Keck, Matthias

    2017-01-01

    Metabolomics, also often referred to as "metabolic profiling," is the systematic profiling of metabolites in biofluids or tissues of organisms and their temporal changes. In the last decade, metabolomics has become more and more popular in drug development, molecular medicine, and other biotechnology fields, since it profiles directly the phenotype and changes thereof in contrast to other "-omics" technologies. The increasing popularity of metabolomics has been possible only due to the enormous development in the technology and bioinformatics fields. In particular, the analytical technologies supporting metabolomics, i.e., NMR, UPLC-MS, and GC-MS, have evolved into sensitive and highly reproducible platforms allowing the determination of hundreds of metabolites in parallel. This chapter describes the best practices of metabolomics as seen today. All important steps of metabolic profiling in drug development and molecular medicine are described in great detail, starting from sample preparation to determining the measurement details of all analytical platforms, and finally to discussing the corresponding specific steps of data analysis.

  5. NMR and MS methods for metabonomics.

    PubMed

    Dieterle, Frank; Riefke, Björn; Schlotterbeck, Götz; Ross, Alfred; Senn, Hans; Amberg, Alexander

    2011-01-01

    Metabonomics, also often referred to as "metabolomics" or "metabolic profiling," is the systematic profiling of metabolites in bio-fluids or tissues of organisms and their temporal changes. In the last decade, metabonomics has become increasingly popular in drug development, molecular medicine, and other biotechnology fields, since it profiles directly the phenotype and changes thereof in contrast to other "-omics" technologies. The increasing popularity of metabonomics has been possible only due to the enormous development in the technology and bioinformatics fields. In particular, the analytical technologies supporting metabonomics, i.e., NMR, LC-MS, UPLC-MS, and GC-MS have evolved into sensitive and highly reproducible platforms allowing the determination of hundreds of metabolites in parallel. This chapter describes the best practices of metabonomics as seen today. All important steps of metabolic profiling in drug development and molecular medicine are described in great detail, starting from sample preparation, to determining the measurement details of all analytical platforms, and finally, to discussing the corresponding specific steps of data analysis.

  6. The multi-disciplinary design study: A life cycle cost algorithm

    NASA Technical Reports Server (NTRS)

    Harding, R. R.; Pichi, F. J.

    1988-01-01

    The approach and results of a Life Cycle Cost (LCC) analysis of the Space Station Solar Dynamic Power Subsystem (SDPS) including gimbal pointing and power output performance are documented. The Multi-Discipline Design Tool (MDDT) computer program developed during the 1986 study has been modified to include the design, performance, and cost algorithms for the SDPS as described. As with the Space Station structural and control subsystems, the LCC of the SDPS can be computed within the MDDT program as a function of the engineering design variables. Two simple examples of MDDT's capability to evaluate cost sensitivity and design based on LCC are included. MDDT was designed to accept NASA's IMAT computer program data as input so that IMAT's detailed structural and controls design capability can be assessed with expected system LCC as computed by MDDT. No changes to IMAT were required. Detailed knowledge of IMAT is not required to perform the LCC analyses as the interface with IMAT is noninteractive.

  7. Comprehensive two-dimensional gas chromatography for the analysis of Fischer-Tropsch oil products.

    PubMed

    van der Westhuizen, Rina; Crous, Renier; de Villiers, André; Sandra, Pat

    2010-12-24

    The Fischer-Tropsch (FT) process involves a series of catalysed reactions of carbon monoxide and hydrogen, originating from coal, natural gas or biomass, leading to a variety of synthetic chemicals and fuels. The benefits of comprehensive two-dimensional gas chromatography (GC×GC) compared to one-dimensional GC (1D-GC) for the detailed investigation of the oil products of low and high temperature FT processes are presented. GC×GC provides more accurate quantitative data to construct Anderson-Schulz-Flory (ASF) selectivity models that correlate the FT product distribution with reaction variables. At the same time, the high peak capacity and sensitivity of GC×GC allow the detailed study of components present at trace level. Analyses of the aromatic and oxygenated fractions of a high temperature FT (HT-FT) process are presented. GC×GC data have been used to optimise or tune the HT-FT process by using a lab-scale micro-FT-reactor. Copyright © 2010 Elsevier B.V. All rights reserved.
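    The ASF selectivity model mentioned above has a simple closed form; this sketch assumes the standard weight-fraction expression W_n = n(1-α)²α^(n-1), with an illustrative chain-growth probability α not taken from the paper.

```python
def asf_weight_fraction(n, alpha):
    """Anderson-Schulz-Flory weight fraction of hydrocarbon chains with
    n carbon atoms, for chain-growth probability alpha."""
    return n * (1.0 - alpha) ** 2 * alpha ** (n - 1)

alpha = 0.85  # illustrative chain-growth probability
dist = {n: asf_weight_fraction(n, alpha) for n in range(1, 200)}
# Weight fractions over all chain lengths sum to 1 (up to a tiny truncated tail).
print(round(sum(dist.values()), 6))
```

    Fitting alpha to GC×GC-derived weight fractions is how such data feed back into the selectivity model: deviations of measured fractions from this curve flag non-ASF behaviour.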

  8. Realized volatility and absolute return volatility: a comparison indicating market risk.

    PubMed

    Zheng, Zeyu; Qiao, Zhi; Takaishi, Tetsuya; Stanley, H Eugene; Li, Baowen

    2014-01-01

    Measuring volatility in financial markets is a primary challenge in the theory and practice of risk management and is essential when developing investment strategies. Although the vast literature on the topic describes many different models, two nonparametric measurements have emerged and received wide use over the past decade: realized volatility and absolute return volatility. The former is strongly favored in the financial sector and the latter by econophysicists. We examine the memory and clustering features of these two methods and find that both enable strong predictions. We compare the two in detail and find that although realized volatility has a better short-term effect that allows predictions of near-future market behavior, absolute return volatility is easier to calculate and, as a risk indicator, has approximately the same sensitivity as realized volatility. Our detailed empirical analysis yields valuable guidelines for both researchers and market participants because it provides a significantly clearer comparison of the strengths and weaknesses of the two methods.

  9. Realized Volatility and Absolute Return Volatility: A Comparison Indicating Market Risk

    PubMed Central

    Takaishi, Tetsuya; Stanley, H. Eugene; Li, Baowen

    2014-01-01

    Measuring volatility in financial markets is a primary challenge in the theory and practice of risk management and is essential when developing investment strategies. Although the vast literature on the topic describes many different models, two nonparametric measurements have emerged and received wide use over the past decade: realized volatility and absolute return volatility. The former is strongly favored in the financial sector and the latter by econophysicists. We examine the memory and clustering features of these two methods and find that both enable strong predictions. We compare the two in detail and find that although realized volatility has a better short-term effect that allows predictions of near-future market behavior, absolute return volatility is easier to calculate and, as a risk indicator, has approximately the same sensitivity as realized volatility. Our detailed empirical analysis yields valuable guidelines for both researchers and market participants because it provides a significantly clearer comparison of the strengths and weaknesses of the two methods. PMID:25054439
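    The two nonparametric measures compared in the abstracts above can be written down directly; the return series here is invented for illustration.

```python
import math

def realized_volatility(intraday_returns):
    """Realized volatility: square root of the sum of squared
    high-frequency (e.g. intraday) returns over the period."""
    return math.sqrt(sum(r * r for r in intraday_returns))

def absolute_return_volatility(period_return):
    """Absolute return volatility: absolute value of the period return."""
    return abs(period_return)

intraday = [0.003, -0.002, 0.001, -0.004]  # illustrative intraday returns
daily = sum(intraday)                      # the day's total return
print(realized_volatility(intraday), absolute_return_volatility(daily))
```

    The contrast in data requirements is visible here: the realized measure needs the full intraday series, while the absolute-return measure needs only the closing return, which is why the latter is described as easier to calculate.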

  10. Detailed Physiologic Characterization Reveals Diverse Mechanisms for Novel Genetic Loci Regulating Glucose and Insulin Metabolism in Humans

    PubMed Central

    Ingelsson, Erik; Langenberg, Claudia; Hivert, Marie-France; Prokopenko, Inga; Lyssenko, Valeriya; Dupuis, Josée; Mägi, Reedik; Sharp, Stephen; Jackson, Anne U.; Assimes, Themistocles L.; Shrader, Peter; Knowles, Joshua W.; Zethelius, Björn; Abbasi, Fahim A.; Bergman, Richard N.; Bergmann, Antje; Berne, Christian; Boehnke, Michael; Bonnycastle, Lori L.; Bornstein, Stefan R.; Buchanan, Thomas A.; Bumpstead, Suzannah J.; Böttcher, Yvonne; Chines, Peter; Collins, Francis S.; Cooper, Cyrus C.; Dennison, Elaine M.; Erdos, Michael R.; Ferrannini, Ele; Fox, Caroline S.; Graessler, Jürgen; Hao, Ke; Isomaa, Bo; Jameson, Karen A.; Kovacs, Peter; Kuusisto, Johanna; Laakso, Markku; Ladenvall, Claes; Mohlke, Karen L.; Morken, Mario A.; Narisu, Narisu; Nathan, David M.; Pascoe, Laura; Payne, Felicity; Petrie, John R.; Sayer, Avan A.; Schwarz, Peter E. H.; Scott, Laura J.; Stringham, Heather M.; Stumvoll, Michael; Swift, Amy J.; Syvänen, Ann-Christine; Tuomi, Tiinamaija; Tuomilehto, Jaakko; Tönjes, Anke; Valle, Timo T.; Williams, Gordon H.; Lind, Lars; Barroso, Inês; Quertermous, Thomas; Walker, Mark; Wareham, Nicholas J.; Meigs, James B.; McCarthy, Mark I.; Groop, Leif; Watanabe, Richard M.; Florez, Jose C.

    2010-01-01

    OBJECTIVE Recent genome-wide association studies have revealed loci associated with glucose and insulin-related traits. We aimed to characterize 19 such loci using detailed measures of insulin processing, secretion, and sensitivity to help elucidate their role in regulation of glucose control, insulin secretion and/or action. RESEARCH DESIGN AND METHODS We investigated associations of loci identified by the Meta-Analyses of Glucose and Insulin-related traits Consortium (MAGIC) with circulating proinsulin, measures of insulin secretion and sensitivity from oral glucose tolerance tests (OGTTs), euglycemic clamps, insulin suppression tests, or frequently sampled intravenous glucose tolerance tests in nondiabetic humans (n = 29,084). RESULTS The glucose-raising allele in MADD was associated with abnormal insulin processing (dramatically higher proinsulin levels, but no association with insulinogenic index) at extremely persuasive levels of statistical significance (P = 2.1 × 10−71). Defects in insulin processing and insulin secretion were seen in glucose-raising allele carriers at TCF7L2, SLC30A8, GIPR, and C2CD4B. Abnormalities in early insulin secretion were suggested in glucose-raising allele carriers at MTNR1B, GCK, FADS1, DGKB, and PROX1 (lower insulinogenic index; no association with proinsulin or insulin sensitivity). Two loci previously associated with fasting insulin (GCKR and IGF1) were associated with OGTT-derived insulin sensitivity indices in a consistent direction. CONCLUSIONS Genetic loci identified through their effect on hyperglycemia and/or hyperinsulinemia demonstrate considerable heterogeneity in associations with measures of insulin processing, secretion, and sensitivity. Our findings emphasize the importance of detailed physiological characterization of such loci for improved understanding of pathways associated with alterations in glucose homeostasis and eventually type 2 diabetes. PMID:20185807

  11. Risk-based maintenance of ethylene oxide production facilities.

    PubMed

    Khan, Faisal I; Haddara, Mahmoud R

    2004-05-20

    This paper discusses a methodology for the design of an optimum inspection and maintenance program. The methodology, called risk-based maintenance (RBM), is based on integrating a reliability approach and a risk assessment strategy to obtain an optimum maintenance schedule. First, the likely equipment failure scenarios are formulated. Out of many likely failure scenarios, those which are most probable are subjected to a detailed study. Detailed consequence analysis is done for the selected scenarios. Subsequently, these failure scenarios are subjected to a fault tree analysis to determine their probabilities. Finally, risk is computed by combining the results of the consequence and the probability analyses. The calculated risk is compared against known acceptable criteria. The frequencies of the maintenance tasks are obtained by minimizing the estimated risk. A case study involving an ethylene oxide production facility is presented. Out of the five most hazardous units considered, the pipeline used for the transportation of ethylene is found to have the highest risk. Using available failure data and a lognormal reliability distribution function, human health risk factors are calculated. Both societal risk factors and individual risk factors exceeded the acceptable risk criteria. To determine an optimal maintenance interval, a reverse fault tree analysis was used. The maintenance interval was determined such that the original high risk is brought down to an acceptable level. A sensitivity analysis is also undertaken to study the impact of changing the distribution of the reliability model as well as the error in the distribution parameters on the maintenance interval.
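    The core of the RBM calculation, risk as the product of a failure probability and a consequence measure, ranked and compared against an acceptance criterion, can be sketched as follows; the unit names, figures, and threshold are hypothetical, not the paper's data.

```python
def risk(failure_probability, consequence):
    """Risk as the product of a scenario's probability and its consequence."""
    return failure_probability * consequence

# Hypothetical units: (name, annual failure probability, consequence score)
units = [
    ("ethylene pipeline", 1e-3, 500.0),
    ("reactor", 1e-5, 800.0),
    ("storage tank", 1e-4, 300.0),
]
criterion = 0.1  # hypothetical acceptable-risk threshold

ranked = sorted(units, key=lambda u: risk(u[1], u[2]), reverse=True)
exceeding = [name for name, p, c in units if risk(p, c) > criterion]
print(ranked[0][0], exceeding)
```

    In the full methodology, the failure probabilities come from fault tree analysis and the consequence scores from consequence modelling; maintenance intervals are then chosen so that every unit's risk drops below the criterion.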

  12. Ground-based experiments complement microgravity flight opportunities in the investigation of the effects of space flight on the immune response: is protein kinase C gravity sensitive?

    NASA Technical Reports Server (NTRS)

    Chapes, S. K.; Woods, K. M.; Armstrong, J. W.; Spooner, B. S. (Principal Investigator)

    1993-01-01

    This manuscript briefly reviews ground-based and flight experiments, discusses how those experiments complement each other, and details how those experiments lead us to speculate about the gravity-sensitive nature of protein kinase C.

  13. Recollection and Familiarity in Recognition Memory: Evidence from ROC Curves

    ERIC Educational Resources Information Center

    Heathcote, Andrew; Raymond, Frances; Dunn, John

    2006-01-01

    Does recognition memory rely on discrete recollection, continuous evidence, or both? Is continuous evidence sensitive to only the recency and duration of study (familiarity), or is it also sensitive to details of the study episode? Dual process theories assume recognition is based on recollection and familiarity, with only recollection providing…

  14. Processing Nonnative Consonant Clusters in the Classroom: Perception and Production of Phonetic Detail

    ERIC Educational Resources Information Center

    Davidson, Lisa; Wilson, Colin

    2016-01-01

    Recent research has shown that speakers are sensitive to non-contrastive phonetic detail present in nonnative speech (e.g. Escudero et al. 2012; Wilson et al. 2014). Difficulties in interpreting and implementing unfamiliar phonetic variation can lead nonnative speakers to modify second language forms by vowel epenthesis and other changes. These…

  15. Language-General Biases and Language-Specific Experience Contribute to Phonological Detail in Toddlers' Word Representations

    ERIC Educational Resources Information Center

    Tsuji, Sho; Fikkert, Paula; Yamane, Naoto; Mazuka, Reiko

    2016-01-01

    Although toddlers in their 2nd year of life generally have phonologically detailed representations of words, a consistent lack of sensitivity to certain kinds of phonological changes has been reported. The origin of these insensitivities is poorly understood, and uncovering their cause is crucial for obtaining a complete picture of early…

  16. Studies on the synthesis, spectroscopic analysis, molecular docking and DFT calculations on 1-hydroxy-2-(4-hydroxyphenyl)-4,5-dimethyl-imidazol 3-oxide

    NASA Astrophysics Data System (ADS)

    Benzon, K. B.; Sheena, Mary Y.; Panicker, C. Yohannan; Armaković, Stevan; Armaković, Sanja J.; Pradhan, Kiran; Nanda, Ashis Kumar; Van Alsenoy, C.

    2017-02-01

    In this work we have investigated in detail the spectroscopic and reactive properties of a newly synthesized imidazole derivative, 1-hydroxy-2-(4-hydroxyphenyl)-4,5-dimethyl-imidazole 3-oxide (HHPDI). FT-IR and NMR spectra were measured and compared with theoretically obtained data provided by calculations of potential energy distribution and chemical shifts, respectively. Insight into the global reactivity properties has been obtained by analysis of frontier molecular orbitals, while local reactivity properties have been investigated by analysis of charge distribution, ionization energies and Fukui functions. NBO analysis was also employed to understand the stability of the molecule, while hyperpolarizability has been calculated in order to assess the nonlinear optical properties of the title molecule. Sensitivity towards autoxidation and hydrolysis mechanisms has been investigated by calculations of bond dissociation energies and radial distribution functions, respectively. Molecular docking study was also performed, in order to determine the pharmaceutical potential of the investigated molecule.

  17. A Comprehensive Availability Modeling and Analysis of a Virtualized Servers System Using Stochastic Reward Nets

    PubMed Central

    Kim, Dong Seong; Park, Jong Sou

    2014-01-01

    It is important to assess availability of virtualized systems in IT business infrastructures. Previous work on availability modeling and analysis of the virtualized systems used a simplified configuration and assumption in which only one virtual machine (VM) runs on a virtual machine monitor (VMM) hosted on a physical server. In this paper, we show a comprehensive availability model using stochastic reward nets (SRN). The model takes into account (i) the detailed failures and recovery behaviors of multiple VMs, (ii) various other failure modes and corresponding recovery behaviors (e.g., hardware faults, failure and recovery due to Mandelbugs and aging-related bugs), and (iii) dependency between different subcomponents (e.g., between physical host failure and VMM, etc.) in a virtualized servers system. We also show numerical analysis on steady state availability, downtime in hours per year, transaction loss, and sensitivity analysis. This model provides a new finding on how to increase system availability by combining both software rejuvenations at VM and VMM in a wise manner. PMID:25165732
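    Steady-state availability and downtime in hours per year, two of the metrics the paper reports, follow directly from mean time to failure (MTTF) and mean time to repair (MTTR); the hours used below are illustrative, not the paper's numbers.

```python
HOURS_PER_YEAR = 8760.0

def steady_state_availability(mttf_hours, mttr_hours):
    """Fraction of time the system is up: MTTF / (MTTF + MTTR)."""
    return mttf_hours / (mttf_hours + mttr_hours)

def downtime_hours_per_year(availability):
    """Expected annual downtime implied by a steady-state availability."""
    return (1.0 - availability) * HOURS_PER_YEAR

a = steady_state_availability(2000.0, 2.0)  # illustrative MTTF and MTTR
print(round(a, 6), round(downtime_hours_per_year(a), 2))
```

    A stochastic reward net generalizes this two-state picture to many interacting failure and recovery modes, but the reported availability number still reduces to the up-time fraction computed this way.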

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graf, Norman A.; /SLAC

    Maximizing the physics performance of detectors being designed for the International Linear Collider, while remaining sensitive to cost constraints, requires a powerful, efficient, and flexible simulation, reconstruction and analysis environment to study the capabilities of a large number of different detector designs. The preparation of Letters Of Intent for the International Linear Collider involved the detailed study of dozens of detector options, layouts and readout technologies; the final physics benchmarking studies required the reconstruction and analysis of hundreds of millions of events. We describe the Java-based software toolkit (org.lcsim) which was used for full event reconstruction and analysis. The components are fully modular and are available for tasks from digitization of tracking detector signals through to cluster finding, pattern recognition, track-fitting, calorimeter clustering, individual particle reconstruction, jet-finding, and analysis. The detector is defined by the same xml input files used for the detector response simulation, ensuring the simulation and reconstruction geometries are always commensurate by construction. We discuss the architecture as well as the performance.

  19. Estimating the cost of mental loading in a bimodal divided-attention task: Combining reaction time, heart-rate variability and signal-detection theory

    NASA Technical Reports Server (NTRS)

    Casper, Patricia A.; Kantowitz, Barry H.

    1988-01-01

    Multiple approaches are necessary for understanding and measuring workload. In particular, physiological systems identifiable by employing cardiac measures are related to cognitive systems. One issue of debate in measuring cardiac output is the grain of analysis used in recording and summarizing data. Various experiments are reviewed, the majority of which were directed at supporting or contradicting Lacey's intake-rejection hypothesis. Two of the experiments observed heart rate in operational environments and found virtually no changes associated with mental load. The major problems facing researchers using heart rate variability, or sinus arrhythmia, as a dependent measure have been associated with valid and sensitive scoring and preventing contamination of observed results by influences unrelated to cognition. Spectral analysis of heart rate variability offers two useful procedures: analysis from the time domain and analysis from the frequency domain. Most recently, data have been collected in a divided attention experiment, the performance measures and cardiac measures of which are detailed.
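    The time-domain versus frequency-domain distinction drawn above can be sketched with a synthetic RR-interval series; the signal, sampling rate, and band limits below are illustrative assumptions (the conventional low-frequency band is roughly 0.04-0.15 Hz, high-frequency roughly 0.15-0.4 Hz), not data from the experiments reviewed.

```python
import math

def sdnn(rr_intervals):
    """Time-domain HRV: standard deviation of RR intervals (SDNN)."""
    mean = sum(rr_intervals) / len(rr_intervals)
    return math.sqrt(sum((x - mean) ** 2 for x in rr_intervals) / len(rr_intervals))

def band_power(signal, fs, f_lo, f_hi):
    """Frequency-domain HRV sketch: naive DFT power within [f_lo, f_hi] Hz."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [x - mean for x in signal]
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            re = sum(centered[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(centered[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / (n * n)
    return power

# Synthetic RR series carrying a 0.1 Hz oscillation, resampled at 4 Hz
fs = 4.0
rr = [0.8 + 0.05 * math.sin(2 * math.pi * 0.1 * t / fs) for t in range(256)]
print(sdnn(rr) > 0, band_power(rr, fs, 0.04, 0.15) > band_power(rr, fs, 0.15, 0.4))
```

    The same variability that SDNN lumps into one number is resolved by the band-power view into slow and fast components, which is what makes the frequency-domain procedure more selective against non-cognitive influences.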

  20. The Advanced Gamma-ray Imaging System (AGIS)-Science Highlights

    NASA Astrophysics Data System (ADS)

    Buckley, J.; Coppi, P.; Digel, S.; Funk, S.; Krawczynski, H.; Krennrich, F.; Pohl, M.; Romani, R.; Vassiliev, V.

    2008-12-01

    The Advanced Gamma-ray Imaging System (AGIS), a future gamma-ray telescope consisting of an array of ~50 atmospheric Cherenkov telescopes distributed over an area of ~1 km2, will provide a powerful new tool for exploring the high-energy universe. The order-of-magnitude increase in sensitivity and improved angular resolution could provide the first detailed images of γ-ray emission from other nearby galaxies or galaxy clusters. The large effective area will provide unprecedented sensitivity to short transients (such as flares from AGNs and GRBs) probing both intrinsic spectral variability (revealing the details of the acceleration mechanism and geometry) as well as constraining the high-energy dispersion in the velocity of light (probing the structure of spacetime and Lorentz invariance). A wide field of view (~4 times that of current instruments) and excellent angular resolution (several times better than current instruments) will allow for an unprecedented survey of the Galactic plane, providing a deep unobscured survey of SNRs, X-ray binaries, pulsar-wind nebulae, molecular cloud complexes and other sources. The differential flux sensitivity of ~10-13 erg cm-2 sec-1 will rival the most sensitive X-ray instruments for these extended Galactic sources. The excellent capabilities of AGIS at energies below 100 GeV will provide sensitivity to AGN and GRBs out to cosmological redshifts, increasing the number of AGNs detected at high energies from about 20 to more than 100, permitting population studies that will provide valuable insights into both a unified model for AGN and a detailed measurement of the effects of intergalactic absorption from the diffuse extragalactic background light. A new instrument with fast-slewing wide-field telescopes could provide detections of a number of long-duration GRBs providing important physical constraints from this new spectral component. 
The new array will also have excellent background rejection and very large effective area, providing the very high sensitivity needed to detect emission from dark matter annihilation in Galactic substructure or nearby dwarf spheroidal galaxies.

  1. Analysis and design of planar waveguide elements for use in filters and sensors

    NASA Astrophysics Data System (ADS)

    Chen, Guangzhou

    In this dissertation we present both theoretical analysis and practical design considerations for planar optical waveguide devices. The analysis takes into account both transverse dimensions of the waveguides and is based on supermode theory combined with the resonance method for the determination of the propagation constants and field profiles of the supermodes. An improved accuracy has been achieved by including corrections due to the fields in the corner regions of the waveguides using perturbation theory. We analyze in detail two particular devices, an optical filter/combiner and an optical sensor. An optical wavelength filter/combiner is a common element in an integrated optical circuit. A new "bend free" filter/combiner is proposed and analyzed. The new wavelength filter consists of only straight parallel channels, which considerably simplify both the analysis and fabrication of the device. We show in detail how the operation of the device depends upon each of the design parameters. The intrinsic power loss in the proposed filter/combiner is minimized. The optical sensor is another important device and the sensitivity of measurement is an important issue in its design. Two operating mechanisms used in prior optical sensors are evanescent wave sensing or surface plasmon excitation. In this dissertation, we present a sensor with a directional coupler structure in which a measurand to be detected is interfaced with one side of the cladding. The analysis shows that it is possible to make a high resolution device by adjusting the design parameters. The dimensions and materials used in an optimized design are presented.

  2. Polyploidization mechanisms: temperature environment can induce diploid gamete formation in Rosa sp.

    PubMed

    Pécrix, Yann; Rallo, Géraldine; Folzer, Hélène; Cigna, Mireille; Gudin, Serge; Le Bris, Manuel

    2011-06-01

    Polyploidy is an important evolutionary phenomenon, but the mechanisms by which polyploidy arises still remain underexplored. There may be an environmental component to polyploidization. This study aimed to clarify how temperature may promote diploid gamete formation, considered an essential element for sexual polyploidization. First, a detailed cytological analysis of microsporogenesis and microgametogenesis was performed to target precisely the key developmental stages which are the most sensitive to temperature. Then, heat-induced modifications in sporad and pollen characteristics were analysed through exposure to a gradient of high temperatures. Rosa plants are sensitive to high temperatures, with a developmental sensitivity window limited to meiosis. Moreover, the range of efficient temperatures is actually narrow. 36 °C at early meiosis led to decreased pollen viability and pollen ectexine defects, but especially to the appearance of numerous diploid pollen grains. They resulted from dyads or triads mainly formed following heat-induced spindle misorientations in telophase II. A high temperature environment has the potential to increase gamete ploidy level. The high frequencies of diplogametes obtained at some extreme temperatures support the hypothesis that polyploidization events could have occurred in adverse conditions and suggest that polyploidization could be facilitated in a global change context.

  3. Dissection of the components for PIP2 activation and thermosensation in TRP channels

    PubMed Central

    Brauchi, Sebastian; Orta, Gerardo; Mascayano, Carolina; Salazar, Marcelo; Raddatz, Natalia; Urbina, Hector; Rosenmann, Eduardo; Gonzalez-Nilo, Fernando; Latorre, Ramon

    2007-01-01

    Phosphatidylinositol 4,5-bisphosphate (PIP2) plays a central role in the activation of several transient receptor potential (TRP) channels. The role of PIP2 on temperature gating of thermoTRP channels has not been explored in detail, and the process of temperature activation is largely unexplained. In this work, we have exchanged different segments of the C-terminal region between cold-sensitive (TRPM8) and heat-sensitive (TRPV1) channels, trying to understand the role of the segment in PIP2 and temperature activation. A chimera in which the proximal part of the C-terminal of TRPV1 replaces an equivalent section of TRPM8 C-terminal is activated by PIP2 and confers the phenotype of heat activation. PIP2, but not temperature sensitivity, disappears when positively charged residues contained in the exchanged region are neutralized. Shortening the exchanged segment to a length of 11 aa produces voltage-dependent and temperature-insensitive channels. Our findings suggest the existence of different activation domains for temperature, PIP2, and voltage. We provide an interpretation for channel–PIP2 interaction using a full-atom molecular model of TRPV1 and PIP2 docking analysis. PMID:17548815

  4. Nonimmediate hypersensitivity reactions to iodinated contrast media.

    PubMed

    Gómez, Enrique; Ariza, Adriana; Blanca-López, Natalia; Torres, Maria J

    2013-08-01

    To provide a detailed analysis of the latest findings on the mechanisms underlying the nonimmediate reactions to iodinated contrast media and comment on the recent advances in diagnosis, focusing on the roles of the skin test, drug provocation test (DPT), and lymphocyte transformation test (LTT). Several studies have reported new findings supporting an important role for T-lymphocytes in the nonimmediate reactions to iodinated contrast media. The LTT has been used as an in-vitro tool for diagnosis, but with variable results. However, the inclusion of autologous monocyte-derived dendritic cells as professional antigen-presenting cells has improved the sensitivity of this test. Regarding in-vivo diagnosis, although skin testing has been routine, it has now been shown that its sensitivity and negative predictive value are low. Recent studies have demonstrated that the DPT is a well tolerated and useful procedure that is necessary to confirm the diagnosis of nonimmediate hypersensitivity reactions to iodinated contrast media. Nonimmediate reactions to contrast media are usually T-cell mediated. Diagnosis is based on skin testing, although its sensitivity and negative predictive value are not optimal. Consequently, drug provocation testing is often needed to confirm the diagnosis and also to seek alternative contrast media that can be tolerated.

  5. Field-effect sensors - from pH sensing to biosensing: sensitivity enhancement using streptavidin-biotin as a model system.

    PubMed

    Lowe, Benjamin M; Sun, Kai; Zeimpekis, Ioannis; Skylaris, Chris-Kriton; Green, Nicolas G

    2017-11-06

    Field-Effect Transistor sensors (FET-sensors) have been receiving increasing attention for biomolecular sensing over the last two decades due to their potential for ultra-high sensitivity sensing, label-free operation, cost reduction and miniaturisation. Whilst the commercial application of FET-sensors in pH sensing has been realised, their commercial application in biomolecular sensing (termed BioFETs) is hindered by poor understanding of how to optimise device design for highly reproducible operation and high sensitivity. In part, these problems stem from the highly interdisciplinary nature of the problems encountered in this field, in which knowledge of biomolecular-binding kinetics, surface chemistry, electrical double layer physics and electrical engineering is required. In this work, a quantitative analysis and critical review has been performed comparing literature FET-sensor data for pH-sensing with data for sensing of biomolecular streptavidin binding to surface-bound biotin systems. The aim is to provide the first systematic, quantitative comparison of BioFET results for a single biomolecular analyte, specifically streptavidin, which is the most commonly used model protein in biosensing experiments, and often used as an initial proof-of-concept for new biosensor designs. This novel quantitative and comparative analysis of the surface potential behaviour of a range of devices demonstrated a strong contrast between the trends observed in pH-sensing and those in biomolecule-sensing. Potential explanations are discussed in detail and surface-chemistry optimisation is shown to be a vital component in sensitivity-enhancement. Factors which can influence the response, yet which have not always been fully appreciated, are explored and practical suggestions are provided on how to improve experimental design.
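    As context for the pH-sensing comparison above, the ideal (Nernstian) limit on a field-effect pH sensor's surface-potential response is the standard reference point, about 59 mV per pH unit at room temperature; this back-of-envelope sketch is ours, not a calculation from the paper.

```python
R = 8.314462618   # gas constant, J mol^-1 K^-1
F = 96485.33212   # Faraday constant, C mol^-1

def nernstian_slope_mv_per_ph(temp_kelvin):
    """Ideal Nernstian surface-potential change per pH unit, in mV:
    2.303 * R * T / F."""
    return 2.303 * R * temp_kelvin / F * 1000.0

print(round(nernstian_slope_mv_per_ph(298.15), 1))  # ~59.2 mV/pH at 25 C
```

    Measured pH sensitivities are compared against this ceiling, whereas biomolecule-sensing responses have no such universal limit, which is one reason the pH and BioFET trends in the literature diverge.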

  6. Parameter Sensitivity and Laboratory Benchmarking of a Biogeochemical Process Model for Enhanced Anaerobic Dechlorination

    NASA Astrophysics Data System (ADS)

    Kouznetsova, I.; Gerhard, J. I.; Mao, X.; Barry, D. A.; Robinson, C.; Brovelli, A.; Harkness, M.; Fisher, A.; Mack, E. E.; Payne, J. A.; Dworatzek, S.; Roberts, J.

    2008-12-01

    A detailed model to simulate trichloroethene (TCE) dechlorination in anaerobic groundwater systems has been developed and implemented through PHAST, a robust and flexible geochemical modeling platform. The approach is comprehensive but retains flexibility such that models of varying complexity can be used to simulate TCE biodegradation in the vicinity of nonaqueous phase liquid (NAPL) source zones. The complete model considers a full suite of biological (e.g., dechlorination, fermentation, sulfate and iron reduction, electron donor competition, toxic inhibition, pH inhibition), physical (e.g., flow and mass transfer) and geochemical processes (e.g., pH modulation, gas formation, mineral interactions). Example simulations with the model demonstrated that the feedback between biological, physical, and geochemical processes is critical. Successful simulation of a thirty-two-month column experiment with site soil and complex groundwater chemistry, exhibiting both anaerobic dechlorination and endogenous respiration, provided confidence in the modeling approach. A comprehensive suite of batch simulations was then conducted to estimate the sensitivity of predicted TCE degradation to the 36 model input parameters. A local sensitivity analysis was first employed to rank the importance of parameters, revealing that 5 parameters consistently dominated model predictions across a range of performance metrics. A global sensitivity analysis was then performed to evaluate the influence of a variety of full parameter data sets available in the literature. The modeling study was performed as part of the SABRE (Source Area BioREmediation) project, a public/private consortium whose charter is to determine if enhanced anaerobic bioremediation can result in effective and quantifiable treatment of chlorinated solvent DNAPL source areas. The modelling conducted has provided valuable insight into the complex interactions between processes in the evolving biogeochemical systems, particularly at the laboratory scale.
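    The local sensitivity ranking described above can be sketched as a one-at-a-time (OAT) perturbation of each parameter around a base point. The model function and parameter names below are hypothetical placeholders, not the SABRE PHAST model.

```python
import numpy as np

# One-at-a-time local sensitivity sketch: perturb each parameter by a
# small relative amount and rank normalized (elasticity-like)
# sensitivities of a scalar model output.
def model(p):
    # hypothetical stand-in for a TCE-degradation performance metric
    return p[0] * np.exp(-p[1]) + p[2] ** 2

base = np.array([2.0, 0.5, 1.5])
names = ["k_max", "K_s", "X0"]   # invented parameter names

def local_sensitivities(model, base, rel=0.01):
    y0 = model(base)
    sens = []
    for i in range(len(base)):
        p = base.copy()
        p[i] += rel * base[i]
        # (dy / y0) / (dp / p_i): dimensionless, comparable across parameters
        sens.append((model(p) - y0) / y0 / rel)
    return np.array(sens)

s = local_sensitivities(model, base)
ranking = sorted(zip(names, np.abs(s)), key=lambda t: -t[1])
```

Ranking by the absolute normalized sensitivity is what identifies the handful of parameters that dominate the predictions, as in the study's local analysis.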

  7. [INVITED] Highly sensitive LSPR based photonic crystal fiber sensor with embodiment of nanospheres in different material domain

    NASA Astrophysics Data System (ADS)

    Paul, D.; Biswas, R.

    2018-05-01

    We report a highly sensitive localized surface plasmon resonance (LSPR) based photonic crystal fiber (PCF) sensor formed by embedding an array of gold nanospheres into the first layer of air-holes of the PCF. We present a comprehensive analysis based on progressive variation of the refractive indices of the analytes as well as the sizes of the nanospheres. In the proposed sensing scheme, the refractive indices of the analytes are varied from 1 to 1.41 RIU, accompanied by alteration of the nanosphere sizes ranging from 40 to 70 nm. The entire study has been carried out for PCFs based on different materials (viz. phosphate and crown), and the corresponding results have been analyzed and compared. We observe a declining trend in modal loss in each set of PCFs with increasing RI of the analyte, with lower loss observed for the crown-based PCF. The sensor shows a highest sensitivity of ∼27,000 nm/RIU for the crown-based PCF with 70 nm nanospheres, with an average wavelength-interrogation sensitivity of ∼5333.53 nm/RIU. For the phosphate-based PCF, the highest sensitivity is found to be ∼18,000 nm/RIU with an average interrogation sensitivity of ∼4555.56 nm/RIU for 40 nm Au nanospheres. Moreover, additional sensing parameters have been evaluated to support the design of the modelled LSPR-based photonic crystal fiber sensor: the resolution (R), limit of detection (LOD) and sensitivity (S) of the proposed sensor in each case (viz. phosphate and crown PCF) are discussed using the wavelength interrogation technique. The proposed study provides a basis for detailed investigation of the LSPR phenomenon in PCFs utilizing noble metal nanospheres (AuNPs).
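    The wavelength-interrogation figures of merit quoted above follow from simple definitions: sensitivity S = Δλ_peak/Δn, and resolution R = Δλ_min/S. A minimal sketch with illustrative numbers (the refractive-index step and readout resolution below are assumptions, not the paper's raw data):

```python
# Wavelength-interrogation figures of merit for an LSPR/PCF sensor.
def sensitivity_nm_per_RIU(dlam_peak_nm, dn):
    # S = resonance-wavelength shift per unit refractive-index change
    return dlam_peak_nm / dn

def resolution_RIU(dlam_min_nm, S):
    # smallest detectable index change for a given spectral readout resolution
    return dlam_min_nm / S

S = sensitivity_nm_per_RIU(27.0, 0.001)  # 27 nm shift for an assumed dn = 0.001
R = resolution_RIU(0.1, S)               # assuming 0.1 nm wavelength resolution
```

With these assumed inputs S is 27,000 nm/RIU (matching the order of the reported peak sensitivity) and R is a few times 10^-6 RIU, illustrating how the paper's R and LOD figures derive from S.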

  8. Blood DNA methylation biomarkers predict clinical reactivity in food-sensitized infants.

    PubMed

    Martino, David; Dang, Thanh; Sexton-Oates, Alexandra; Prescott, Susan; Tang, Mimi L K; Dharmage, Shyamali; Gurrin, Lyle; Koplin, Jennifer; Ponsonby, Anne-Louise; Allen, Katrina J; Saffery, Richard

    2015-05-01

    The diagnosis of food allergy (FA) can be challenging because approximately half of food-sensitized patients are asymptomatic. Current diagnostic tests are excellent markers of sensitization but poor predictors of clinical reactivity. Thus oral food challenges (OFCs) are required to determine a patient's risk of reactivity. We sought to discover genomic biomarkers of clinical FA with utility for predicting food challenge outcomes. Genome-wide DNA methylation (DNAm) profiling was performed on blood mononuclear cells from volunteers who had undergone objective OFCs, concurrent skin prick tests, and specific IgE tests. Fifty-eight food-sensitized patients (aged 11-15 months) were assessed, half of whom were clinically reactive. Thirteen nonallergic control subjects were also assessed. Reproducibility was assessed in an additional 48 samples by using methylation data from an independent population of patients with clinical FA. Using a supervised learning approach, we discovered a DNAm signature of 96 CpG sites that predicts clinical outcomes. Diagnostic scores were derived from these 96 methylation sites, and cutoffs were determined in a sensitivity analysis. Methylation biomarkers outperformed allergen-specific IgE and skin prick tests for predicting OFC outcomes. FA status was correctly predicted in the replication cohort with an accuracy of 79.2%. DNAm biomarkers with clinical utility for predicting food challenge outcomes are readily detectable in blood. The development of this technology in detailed follow-up studies will yield highly innovative diagnostic assays. Copyright © 2015 American Academy of Allergy, Asthma & Immunology. Published by Elsevier Inc. All rights reserved.
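    Deriving a diagnostic cutoff from per-sample scores, as in the sensitivity analysis described above, can be sketched on synthetic data. The real signature uses 96 CpG sites and a supervised learner; the score distributions below are invented for illustration only.

```python
import numpy as np

# Sketch: choose a diagnostic cutoff on continuous scores by maximizing
# Youden's J = sensitivity + specificity - 1 over candidate thresholds.
# Scores are synthetic, not the paper's methylation-derived scores.
rng = np.random.default_rng(0)
scores_reactive = rng.normal(0.7, 0.1, 30)  # hypothetical clinically reactive group
scores_tolerant = rng.normal(0.5, 0.1, 30)  # hypothetical sensitized-only group

def best_cutoff(pos, neg):
    # every observed score is a candidate threshold ("score >= c" = positive)
    candidates = np.sort(np.concatenate([pos, neg]))
    js = [((pos >= c).mean() + (neg < c).mean() - 1.0, c) for c in candidates]
    return max(js)[1]

cut = best_cutoff(scores_reactive, scores_tolerant)
```

On real data this threshold search would be done within cross-validation to avoid an optimistically biased cutoff.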

  9. Approaching attometer laser vibrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rembe, Christian; Kadner, Lisa; Giesen, Moritz

    2014-05-27

    The heterodyne two-beam interferometer has proven to be the optimal solution for laser-Doppler vibrometry regarding accuracy and signal robustness. The theoretical resolution limit for a two-beam interferometer of laser class 3R (up to 5 mW visible measurement light) is in the regime of a few femtometers per square-root hertz, well suited to studying vibrations in microstructures. However, some new applications in RF-MEMS resonators, nanostructures, and surface-nano-defect detection require resolutions beyond that limit. The resolution depends only on the noise and the sensor's sensitivity to specimen displacements. In today's systems with a properly designed optical sensor, the noise is already defined by the quantum nature of light, and more light would lead to unacceptable effects such as heating of a very tiny structure. Thus, noise can only be improved by squeezed-light techniques, which require negligible loss of measurement light and are therefore impractical for almost all technical measurement tasks. Improving the sensitivity is thus the only path that could make attometer laser vibrometry possible. Decreasing the measurement wavelength would increase the sensitivity but would also increase the photon shot noise. In this paper, we discuss an approach to increase the sensitivity by placing an additional mirror between the interferometer and the specimen to form an optical cavity. A detailed theoretical analysis of this setup is presented; we derive the resolution limit, discuss the main contributions to the uncertainty budget, and show a first experiment proving the sensitivity amplification of our approach.
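    The "few femtometers per square-root hertz" figure quoted above can be estimated from the textbook shot-noise limit of a two-beam interferometer. The wavelength, detected power, and quantum efficiency below are assumed values for illustration, not the authors' exact uncertainty budget.

```python
import math

# Shot-noise-limited displacement noise density of a two-beam
# interferometer: phase noise sqrt(2*h*nu / (eta*P)) mapped to
# displacement via lambda/(4*pi). Assumed: 633 nm light, 5 mW detected,
# quantum efficiency 0.9.
h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light, m/s

def displacement_noise_m_per_sqrtHz(lam, P, eta):
    nu = c / lam  # optical frequency
    return lam / (4 * math.pi) * math.sqrt(2 * h * nu / (eta * P))

x = displacement_noise_m_per_sqrtHz(633e-9, 5e-3, 0.9)
```

The result is of order 10^-16 m/sqrt(Hz), i.e. sub-femtometer to few-femtometer depending on the assumed efficiency and power, consistent with the limit the abstract cites; reaching attometers therefore requires the cavity-based sensitivity amplification the paper proposes.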

  10. Sensitivity to Structure in the Speech Signal by Children with Speech Sound Disorder and Reading Disability

    PubMed Central

    Johnson, Erin Phinney; Pennington, Bruce F.; Lowenstein, Joanna H.; Nittrouer, Susan

    2011-01-01

    Purpose Children with speech sound disorder (SSD) and reading disability (RD) have poor phonological awareness, a problem believed to arise largely from deficits in processing the sensory information in speech, specifically individual acoustic cues. However, such cues are details of acoustic structure. Recent theories suggest that listeners also need to be able to integrate those details to perceive linguistically relevant form. This study examined abilities of children with SSD, RD, and SSD+RD not only to process acoustic cues but also to recover linguistically relevant form from the speech signal. Method Ten- to 11-year-olds with SSD (n = 17), RD (n = 16), SSD+RD (n = 17), and Controls (n = 16) were tested to examine their sensitivity to (1) voice onset times (VOT); (2) spectral structure in fricative-vowel syllables; and (3) vocoded sentences. Results Children in all groups performed similarly with VOT stimuli, but children with disorders showed delays on other tasks, although the specifics of their performance varied. Conclusion Children with poor phonemic awareness not only lack sensitivity to acoustic details, but are also less able to recover linguistically relevant forms. This is contrary to one of the main current theories of the relation between spoken and written language development. PMID:21329941

  11. Mixed-phase altocumulus clouds over Leipzig: Remote sensing measurements and spectral cloud microphysics simulations

    NASA Astrophysics Data System (ADS)

    Simmel, Martin; Bühl, Johannes; Ansmann, Albert; Tegen, Ina

    2015-04-01

    The present work combines remote sensing observations and detailed microphysical cloud modeling to investigate two altocumulus cloud cases observed over Leipzig, Germany. A suite of remote sensing instruments was able to detect primary ice at rather warm temperatures of -6°C; for comparison, a second mixed-phase case at about -25°C is introduced. To look further into the details of cloud microphysical processes, a simple dynamics model of the Asai-Kasahara type is combined with detailed spectral microphysics, forming the model system AK-SPECS. Temperature and humidity profiles are taken either from observations (radiosonde) or GDAS reanalysis. Vertical velocities are prescribed to force the dynamics so that the main cloud features are close to the observations. Subsequently, sensitivity studies with respect to dynamical as well as ice-microphysical parameters are carried out with the aim of quantifying the most important sensitivities for the cases investigated. For the cases selected, the liquid phase is mainly determined by the model dynamics (location and strength of vertical velocity), whereas the ice phase is much more sensitive to the microphysical parameters (ice nuclei (IN) number, ice particle shape). The choice of ice particle shape may induce large uncertainties, of the same order as those for the temperature-dependent IN number distribution.

  12. Development of MRM-based assays for the absolute quantitation of plasma proteins.

    PubMed

    Kuzyk, Michael A; Parker, Carol E; Domanski, Dominik; Borchers, Christoph H

    2013-01-01

    Multiple reaction monitoring (MRM), sometimes called selected reaction monitoring (SRM), is a directed tandem mass spectrometric technique performed on triple quadrupole mass spectrometers. MRM assays can be used to sensitively and specifically quantify proteins based on peptides that are specific to the target protein. Stable-isotope-labeled standard (SIS) analogues of the target peptides are added to enzymatic digests of samples and quantified along with the native peptides during MRM analysis. Monitoring the intact peptide and a collision-induced fragment of this peptide (an ion pair) provides information on the absolute concentration of the peptide in the sample and, by inference, the concentration of the intact protein. The technique achieves high specificity by selecting for biophysical parameters that are unique to the target peptides: (1) the molecular weight of the peptide, (2) the generation of a specific fragment from the peptide, and (3) the HPLC retention time during LC/MRM-MS analysis. MRM is a highly sensitive technique that has been shown to be capable of detecting attomole levels of target peptides in complex samples such as tryptic digests of human plasma. This chapter provides a detailed description of how to develop and use an MRM protein assay, including the critical first step of selecting the target peptides, optimization of MRM acquisition parameters for maximum sensitivity of the ion pairs used in the final method, and characterization of the final MRM assay.
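    The absolute quantitation step described above reduces to a peak-area ratio against the spiked internal standard. A minimal sketch with illustrative peak areas and SIS concentration (the numbers are invented, not from the chapter):

```python
# Absolute quantitation from an MRM ion-pair measurement: the native
# peptide concentration follows from the peak-area ratio of native to
# stable-isotope-labeled standard (SIS) peptide, scaled by the known
# amount of SIS spiked into the digest.
def native_conc(area_native, area_sis, conc_sis_fmol_per_uL):
    return (area_native / area_sis) * conc_sis_fmol_per_uL

conc = native_conc(area_native=8.4e5, area_sis=2.1e5,
                   conc_sis_fmol_per_uL=50.0)  # 200 fmol/uL of target peptide
```

Because the SIS peptide co-elutes and fragments identically to the native peptide, the ratio cancels run-to-run variation in ionization and transmission, which is what makes the inference to protein concentration defensible.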

  13. High-resolution spectral analysis of ammonia near 6.2 μm using a cw EC-QCL coupled with cavity ring-down spectroscopy.

    PubMed

    Maithani, Sanchi; Mandal, Santanu; Maity, Abhijit; Pal, Mithun; Pradhan, Manik

    2018-04-30

    We report on the development of a mid-infrared cavity ring-down spectrometer (CRDS) coupled with a continuous-wave (cw) external-cavity quantum cascade laser (EC-QCL), operating between 6.0 μm and 6.3 μm, for high-resolution spectroscopic studies of ammonia (NH3), which served as a benchmark molecule in this spectral region. We characterized the EC-QCL-based CRDS system in detail and achieved a noise-equivalent absorption (NEA) coefficient of 2.11 × 10⁻⁹ cm⁻¹ Hz⁻¹/² at a 100 Hz data acquisition rate. We then exploited the system for high-resolution spectroscopic analysis of 10 interference-free transition lines of the ν4 fundamental vibrational band of NH3 centred at ∼6.2 μm. We probed the strongest interference-free absorption line, RQ(4,3) of ν4, centred at 1613.370 cm⁻¹, for highly sensitive trace detection of NH3 and achieved a minimum detection sensitivity (1σ) of 2.78 × 10⁹ molecules per cm³, which translates into a detection limit of 740 parts-per-trillion by volume (pptv, 10⁻¹²) at a pressure of 115 Torr for an integration time of ∼167 seconds. To demonstrate the efficacy of the present system in real-life applications, we finally measured the mixing ratios of NH3 present in ambient air and in human exhaled breath with high sensitivity and molecular specificity.
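    The conversion from the number-density detection limit to a mixing ratio can be cross-checked with the ideal-gas law; the cell temperature below (295 K) is an assumption, since the record does not state it.

```python
# Convert the reported detection limit (2.78e9 molecules/cm^3 at 115 Torr)
# into a mixing ratio, assuming an ideal gas near room temperature.
k_B = 1.380649e-23          # Boltzmann constant, J/K
P = 115 * 133.322           # 115 Torr in Pa
T = 295.0                   # assumed cell temperature, K

n_total = P / (k_B * T) * 1e-6   # total gas number density, molecules/cm^3
mixing_ratio = 2.78e9 / n_total  # dimensionless fraction
pptv = mixing_ratio * 1e12       # parts-per-trillion by volume
```

This reproduces the reported ~740 pptv figure to within the rounding implied by the assumed temperature.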

  14. Analysis of Alpha Backgrounds in DarkSide-50

    NASA Astrophysics Data System (ADS)

    Monte, Alissa; DarkSide Collaboration

    2017-01-01

    DarkSide-50 is the current phase of the DarkSide direct dark matter search program, operating underground at the Laboratori Nazionali del Gran Sasso in Italy. The detector is a dual-phase argon Time Projection Chamber (TPC), designed for direct detection of Weakly Interacting Massive Particles, and housed within an active veto system of liquid scintillator and water Cherenkov detectors. Since switching to a target of low radioactivity argon extracted from underground sources in April, 2016, the background is no longer dominated by naturally occurring 39Ar. However, alpha backgrounds from radon and its daughters remain, both from the liquid argon bulk and internal detector surfaces. I will present details of the analysis used to understand and quantify alpha backgrounds, as well as to understand other types of radon contamination that may be present, and our sensitivity to them.

  15. Pyrotechnic hazards classification and evaluation program. Phase 2, segment 1: Records and experience analysis

    NASA Technical Reports Server (NTRS)

    1970-01-01

    A comprehensive search, review, and analysis was made of various technical documents relating to both pyrotechnics and high-explosives testing, handling, storage, and manufacturing; physical and chemical characteristics; and accidents and incidents. Of approximately 5000 technical abstracts reviewed, 300 applicable documents were analyzed in detail. These 300 documents were then converted to a subject matrix so that they may be readily referenced for application to current programs. It was generally concluded that information in several important categories was lacking; two of the more important were pyrotechnics sensitivity testing and TNT equivalency testing. A general recommendation resulting from this study was that this activity continue and that a comprehensive data bank be generated to allow immediate access to a large volume of pertinent information in a relatively short period of time.

  16. The Hebrew CHILDES corpus: transcription and morphological analysis

    PubMed Central

    Albert, Aviad; MacWhinney, Brian; Nir, Bracha

    2014-01-01

    We present a corpus of transcribed spoken Hebrew that reflects spoken interactions between children and adults. The corpus is an integral part of the CHILDES database, which distributes similar corpora for over 25 languages. We introduce a dedicated transcription scheme for the spoken Hebrew data that is sensitive to both the phonology and the standard orthography of the language. We also introduce a morphological analyzer that was specifically developed for this corpus. The analyzer adequately covers the entire corpus, producing detailed correct analyses for all tokens. Evaluation on a new corpus reveals high coverage as well. Finally, we describe a morphological disambiguation module that selects the correct analysis of each token in context. The result is a high-quality morphologically-annotated CHILDES corpus of Hebrew, along with a set of tools that can be applied to new corpora. PMID:25419199

  17. One way Doppler extractor. Volume 1: Vernier technique

    NASA Technical Reports Server (NTRS)

    Blasco, R. W.; Klein, S.; Nossen, E. J.; Starner, E. R.; Yanosov, J. A.

    1974-01-01

    A feasibility analysis, trade-offs, and implementation for a One Way Doppler Extraction system are discussed. A Doppler error analysis shows that quantization error is a primary source of Doppler measurement error. Several competing extraction techniques are compared and a Vernier technique is developed which obtains high Doppler resolution with low speed logic. Parameter trade-offs and sensitivities for the Vernier technique are analyzed, leading to a hardware design configuration. A detailed design, operation, and performance evaluation of the resulting breadboard model is presented which verifies the theoretical performance predictions. Performance tests have verified that the breadboard is capable of extracting Doppler, on an S-band signal, to an accuracy of less than 0.02 Hertz for a one second averaging period. This corresponds to a range rate error of no more than 3 millimeters per second.
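    The closing accuracy claim can be sanity-checked with the one-way Doppler relation v = f_d · c / f_carrier. The carrier frequency below (2.1 GHz) is an assumed representative S-band value, since the record does not state it.

```python
# One-way Doppler: range-rate resolution implied by a 0.02 Hz Doppler
# resolution on an S-band carrier. The carrier frequency is an assumption.
c = 2.99792458e8    # speed of light, m/s
f_carrier = 2.1e9   # Hz, assumed representative S-band frequency
delta_f = 0.02      # Hz, reported Doppler accuracy (1 s averaging)

delta_v = delta_f * c / f_carrier   # range-rate error, m/s
```

The result, just under 3 mm/s, is consistent with the record's stated "no more than 3 millimeters per second."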

  18. Texture analysis of pulmonary parenchymateous changes related to pulmonary thromboembolism in dogs - a novel approach using quantitative methods.

    PubMed

    Marschner, C B; Kokla, M; Amigo, J M; Rozanski, E A; Wiinberg, B; McEvoy, F J

    2017-07-11

    Diagnosis of pulmonary thromboembolism (PTE) in dogs relies on computed tomography pulmonary angiography (CTPA), but detailed interpretation of CTPA images is demanding for the radiologist and only large vessels can be evaluated. New approaches for better detection of smaller thrombi include dual-energy computed tomography (DECT) as well as computer-assisted diagnosis (CAD) techniques. The purpose of this study was to investigate the performance of quantitative texture analysis for detecting dogs with PTE using grey-level co-occurrence matrices (GLCM) and multivariate statistical classification analyses. CT images from healthy (n = 6) and diseased (n = 29) dogs with and without PTE confirmed on CTPA were segmented so that only tissue with CT numbers between -1024 and -250 Hounsfield units (HU) was preserved. GLCM analysis and subsequent multivariate classification analyses were performed on texture parameters extracted from these images. Leave-one-dog-out cross-validation and receiver operating characteristic (ROC) analysis showed that the models generated from the texture analysis were able to predict healthy dogs with optimal levels of performance. Partial least squares discriminant analysis (PLS-DA) obtained a sensitivity of 94% and a specificity of 96%, while support vector machines (SVM) yielded a sensitivity of 99% and a specificity of 100%. The models, however, performed worse in classifying the type of disease within the diseased group: in diseased dogs with PTE, sensitivities were 30% (PLS-DA) and 38% (SVM), and specificities were 80% (PLS-DA) and 89% (SVM); in diseased dogs without PTE, sensitivities were 59% (PLS-DA) and 79% (SVM), and specificities were 79% (PLS-DA) and 82% (SVM). The results indicate that texture analysis of CTPA images using GLCM is an effective tool for distinguishing healthy from abnormal lung. Furthermore, the texture of pulmonary parenchyma in dogs with PTE is altered compared with that of healthy dogs. The models' poorer performance in classifying dogs within the diseased group may be related to the low number of dogs relative to the number of texture variables, an unbalanced number of dogs within each group, or a genuine lack of difference in texture features among the diseased dogs.
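    The GLCM feature extraction at the heart of the study can be sketched in pure NumPy. This is a minimal illustration of the technique (a single pixel offset, a toy image, and two Haralick-style features), not the paper's full pipeline, which quantized CT numbers in the -1024 to -250 HU window.

```python
import numpy as np

# Minimal grey-level co-occurrence matrix (GLCM) for one displacement
# vector (dx, dy), normalized to joint probabilities, plus two common
# texture features computed from it.
def glcm(img, levels, dx=1, dy=0):
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1  # count co-occurring grey levels
    return m / m.sum()

def contrast(p):
    i, j = np.indices(p.shape)
    return ((i - j) ** 2 * p).sum()      # weights distant grey-level pairs

def homogeneity(p):
    i, j = np.indices(p.shape)
    return (p / (1.0 + np.abs(i - j))).sum()  # weights near-diagonal pairs

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])   # toy 4-level image
p = glcm(img, levels=4)
```

In the study, vectors of such features per image were the inputs to the PLS-DA and SVM classifiers; production code would typically use an optimized library implementation rather than the explicit loops shown here.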

  19. DEVELOPMENT OF MESOSCALE AIR QUALITY SIMULATION MODELS. VOLUME 1. COMPARATIVE SENSITIVITY STUDIES OF PUFF, PLUME, AND GRID MODELS FOR LONG DISTANCE DISPERSION

    EPA Science Inventory

    This report provides detailed comparisons and sensitivity analyses of three candidate models, MESOPLUME, MESOPUFF, and MESOGRID. This was not a validation study; there was no suitable regional air quality data base for the Four Corners area. Rather, the models have been evaluated...

  20. Measuring the Knowledge and Attitudes of Health Care Staff toward Older People: Sensitivity of Measurement Instruments

    ERIC Educational Resources Information Center

    Cowan, David T.; Fitzpatrick, Joanne M.; Roberts, Julia D.; While, Alison E.

    2004-01-01

    This paper discusses the sensitivity of instruments used to measure knowledge and attitudes toward older people. Existing standardized measurement instruments are reviewed, including a detailed examination of Palmore's Facts on Ageing Quiz (FAQ). A recent study conducted by the research team into the knowledge and attitudes of support workers (n =…

  2. The Visualization of Infrared Radiation Using Thermal Sensitive Foils

    ERIC Educational Resources Information Center

    Bochnícek, Zdenek

    2013-01-01

    This paper describes a set of demonstration school experiments in which infrared radiation is detected using thermal sensitive foils. The possibility of using standard glass lenses for infrared imaging is discussed in detail. It is shown that, with optical components made of glass, infrared radiation with wavelengths up to 2.5 µm can be detected. The…

  3. Analysis of spatiotemporal metabolomic dynamics for sensitively monitoring biological alterations in cisplatin-induced acute kidney injury.

    PubMed

    Irie, Miho; Hayakawa, Eisuke; Fujimura, Yoshinori; Honda, Youhei; Setoyama, Daiki; Wariishi, Hiroyuki; Hyodo, Fuminori; Miura, Daisuke

    2018-01-29

    Clinical application of the major anticancer drug cisplatin is limited by severe side effects, especially acute kidney injury (AKI) caused by nephrotoxicity. The detailed metabolic mechanism is still largely unknown. Here, we used an integrated technique combining mass spectrometry imaging (MSI) and liquid chromatography-mass spectrometry (LC-MS) to visualize the diverse spatiotemporal metabolic dynamics in the mouse kidney after cisplatin dosing. Biological responses to cisplatin were detected as metabolic alterations within 24 h, much earlier than is possible with the conventional clinical chemistry method of blood urea nitrogen (BUN) measurement. Region-specific changes (e.g., medulla and cortex) in metabolites related to DNA damage and energy generation were observed over the 72-h exposure period. Therefore, this metabolomics approach may become a novel strategy for elucidating early renal responses to cisplatin, prior to the detection of kidney damage by conventional methods. Copyright © 2018. Published by Elsevier Inc.

  4. The meteorological monitoring system for the Kennedy Space Center/Cape Canaveral Air Station

    NASA Technical Reports Server (NTRS)

    Dianic, Allan V.

    1994-01-01

    The Kennedy Space Center (KSC) and Cape Canaveral Air Station (CCAS) are involved in many weather-sensitive operations. Manned and unmanned vehicle launches, which occur several times each year, are obvious examples of operations whose success and safety are dependent upon favorable meteorological conditions. Other operations involving NASA, Air Force, and contractor personnel, including daily operations to maintain facilities, refurbish launch structures, prepare vehicles for launch, and handle hazardous materials, are less publicized but are no less weather-sensitive. The Meteorological Monitoring System (MMS) is a computer network which acquires, processes, disseminates, and monitors near real-time and forecast meteorological information to assist operational personnel and weather forecasters with the task of minimizing the risk to personnel, materials, and the surrounding population. CLIPS has been integrated into the MMS to provide quality control analysis and data monitoring. This paper describes aspects of the MMS relevant to CLIPS, including requirements, actual implementation details, and results of performance testing.

  5. Numerical analysis of hypersonic turbulent film cooling flows

    NASA Technical Reports Server (NTRS)

    Chen, Y. S.; Chen, C. P.; Wei, H.

    1992-01-01

    As a building block, numerical capabilities for predicting heat flux and turbulent flowfields of hypersonic vehicles require extensive model validations. Computational procedures for calculating turbulent flows and heat fluxes for supersonic film cooling with parallel slot injections are described in this study. Two injectant mass flow rates with matched and unmatched pressure conditions, using the database of Holden et al. (1990), are considered. To avoid uncertainties associated with the boundary conditions in testing turbulence models, detailed three-dimensional flowfields of the injection nozzle were calculated. Two computational fluid dynamics codes, GASP and FDNS, with the algebraic Baldwin-Lomax and k-epsilon models with compressibility corrections, were used. It was found that the B-L model, which resolves the near-wall viscous sublayer, is very sensitive to the inlet boundary conditions at the nozzle exit face. The k-epsilon models with improved wall functions are less sensitive to the inlet boundary conditions. These tests show that compressibility corrections are necessary for the k-epsilon model to realistically predict the heat fluxes of hypersonic film cooling problems.

  6. Ion channel profile of TRPM8 cold receptors reveals a novel role of TASK-3 potassium channels in thermosensation

    PubMed Central

    Morenilla-Palao, Cruz; Luis, Enoch; Fernández-Peña, Carlos; Quintero, Eva; Weaver, Janelle L.; Bayliss, Douglas A.; Viana, Félix

    2017-01-01

    Summary Animals sense cold ambient temperatures through the activation of peripheral thermoreceptors that express TRPM8, a cold- and menthol-activated ion channel. These receptors can discriminate a very wide range of temperatures from innocuous to noxious. The molecular mechanism responsible for the variable sensitivity of individual cold receptors to temperature is unclear. To address this question, we performed a detailed ion channel expression analysis of cold sensitive neurons, combining BAC transgenesis with a molecular profiling approach in FACS purified TRPM8 neurons. We found that TASK-3 leak potassium channels are highly enriched in a subpopulation of these sensory neurons. The thermal threshold of TRPM8 cold neurons is decreased during TASK-3 blockade and in mice lacking TASK-3 and, most importantly, these mice display hypersensitivity to cold. Our results demonstrate a novel role of TASK-3 channels in thermosensation, showing that a channel-based combinatorial strategy in TRPM8 cold thermoreceptors leads to molecular specialization and functional diversity. PMID:25199828

  7. Influence of Cobalt on the Properties of Load-Sensitive Magnesium Alloys

    PubMed Central

    Klose, Christian; Demminger, Christian; Mroz, Gregor; Reimche, Wilfried; Bach, Friedrich-Wilhelm; Maier, Hans Jürgen; Kerber, Kai

    2013-01-01

    In this study, magnesium is alloyed with varying amounts of the ferromagnetic alloying element cobalt in order to obtain lightweight load-sensitive materials with sensory properties which allow an online-monitoring of mechanical forces applied to components made from Mg-Co alloys. An optimized casting process with the use of extruded Mg-Co powder rods is utilized which enables the production of magnetic magnesium alloys with a reproducible Co concentration. The efficiency of the casting process is confirmed by SEM analyses. Microstructures and Co-rich precipitations of various Mg-Co alloys are investigated by means of EDS and XRD analyses. The Mg-Co alloys' mechanical strengths are determined by tensile tests. Magnetic properties of the Mg-Co sensor alloys depending on the cobalt content and the acting mechanical load are measured utilizing the harmonic analysis of eddy-current signals. Within the scope of this work, the influence of the element cobalt on magnesium is investigated in detail and an optimal cobalt concentration is defined based on the performed examinations. PMID:23344376

  8. Surface determination through atomically resolved secondary-electron imaging

    PubMed Central

    Ciston, J.; Brown, H. G.; D'Alfonso, A. J.; Koirala, P.; Ophus, C.; Lin, Y.; Suzuki, Y.; Inada, H.; Zhu, Y.; Allen, L. J.; Marks, L. D.

    2015-01-01

    Unique determination of the atomic structure of technologically relevant surfaces is often limited by both a need for homogeneous crystals and ambiguity of registration between the surface and bulk. Atomically resolved secondary-electron imaging is extremely sensitive to this registration and is compatible with faceted nanomaterials, but has not been previously utilized for surface structure determination. Here we report a detailed experimental atomic-resolution secondary-electron microscopy analysis of the c(6 × 2) reconstruction on strontium titanate (001) coupled with careful simulation of secondary-electron images, density functional theory calculations and surface monolayer-sensitive aberration-corrected plan-view high-resolution transmission electron microscopy. Our work reveals several unexpected findings, including an amended registry of the surface on the bulk and strontium atoms with unusual seven-fold coordination within a typically high surface coverage of square pyramidal TiO5 units. Dielectric screening is found to play a critical role in attenuating secondary-electron generation processes from valence orbitals. PMID:26082275

  9. SuperADAM: Upgraded polarized neutron reflectometer at the Institut Laue-Langevin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Devishvili, A.; Zhernenkov, K.; Institut Laue-Langevin, BP 156, 38042 Grenoble

    2013-02-15

    A new neutron reflectometer SuperADAM has recently been built and commissioned at the Institut Laue-Langevin, Grenoble, France. It replaces the previous neutron reflectometer ADAM. The new instrument uses a solid state polarizer/wavelength filter providing a highly polarized (up to 98.6%) monochromatic neutron flux of 8 × 10⁴ n cm⁻² s⁻¹ with monochromatization Δλ/λ = 0.7% and angular divergence Δα = 0.2 mrad. The instrument includes both single and position sensitive detectors. The position sensitive detector allows simultaneous measurement of specular reflection and off-specular scattering. Polarization analysis for both specular reflection and off-specular scattering is achieved using either mirror analyzers or a ³He spin filter cell. High efficiency detectors, low background, and high flux provide a dynamic range of up to seven decades in reflectivity. Detailed specifications and the instrument capabilities are illustrated with examples of recently collected data in the fields of thin film magnetism and thin polymer films.

  10. SuperADAM: Upgraded polarized neutron reflectometer at the Institut Laue-Langevin

    NASA Astrophysics Data System (ADS)

    Devishvili, A.; Zhernenkov, K.; Dennison, A. J. C.; Toperverg, B. P.; Wolff, M.; Hjörvarsson, B.; Zabel, H.

    2013-02-01

    A new neutron reflectometer SuperADAM has recently been built and commissioned at the Institut Laue-Langevin, Grenoble, France. It replaces the previous neutron reflectometer ADAM. The new instrument uses a solid state polarizer/wavelength filter providing a highly polarized (up to 98.6%) monochromatic neutron flux of 8 × 10⁴ n cm⁻² s⁻¹ with monochromatization Δλ/λ = 0.7% and angular divergence Δα = 0.2 mrad. The instrument includes both single and position sensitive detectors. The position sensitive detector allows simultaneous measurement of specular reflection and off-specular scattering. Polarization analysis for both specular reflection and off-specular scattering is achieved using either mirror analyzers or a ³He spin filter cell. High efficiency detectors, low background, and high flux provide a dynamic range of up to seven decades in reflectivity. Detailed specifications and the instrument capabilities are illustrated with examples of recently collected data in the fields of thin film magnetism and thin polymer films.

  11. Surface determination through atomically resolved secondary-electron imaging

    DOE PAGES

    Ciston, J.; Brown, H. G.; D’Alfonso, A. J.; ...

    2015-06-17

    We report that unique determination of the atomic structure of technologically relevant surfaces is often limited by both a need for homogeneous crystals and ambiguity of registration between the surface and bulk. Atomically resolved secondary-electron imaging is extremely sensitive to this registration and is compatible with faceted nanomaterials, but has not been previously utilized for surface structure determination. Here we show a detailed experimental atomic-resolution secondary-electron microscopy analysis of the c(6 × 2) reconstruction on strontium titanate (001) coupled with careful simulation of secondary-electron images, density functional theory calculations and surface monolayer-sensitive aberration-corrected plan-view high-resolution transmission electron microscopy. Our work reveals several unexpected findings, including an amended registry of the surface on the bulk and strontium atoms with unusual seven-fold coordination within a typically high surface coverage of square pyramidal TiO5 units. Lastly, dielectric screening is found to play a critical role in attenuating secondary-electron generation processes from valence orbitals.

  12. SuperADAM: upgraded polarized neutron reflectometer at the Institut Laue-Langevin.

    PubMed

    Devishvili, A; Zhernenkov, K; Dennison, A J C; Toperverg, B P; Wolff, M; Hjörvarsson, B; Zabel, H

    2013-02-01

    A new neutron reflectometer SuperADAM has recently been built and commissioned at the Institut Laue-Langevin, Grenoble, France. It replaces the previous neutron reflectometer ADAM. The new instrument uses a solid state polarizer/wavelength filter providing a highly polarized (up to 98.6%) monochromatic neutron flux of 8 × 10⁴ n cm⁻² s⁻¹ with monochromatization Δλ/λ = 0.7% and angular divergence Δα = 0.2 mrad. The instrument includes both single and position sensitive detectors. The position sensitive detector allows simultaneous measurement of specular reflection and off-specular scattering. Polarization analysis for both specular reflection and off-specular scattering is achieved using either mirror analyzers or a ³He spin filter cell. High efficiency detectors, low background, and high flux provide a dynamic range of up to seven decades in reflectivity. Detailed specifications and the instrument capabilities are illustrated with examples of recently collected data in the fields of thin film magnetism and thin polymer films.

  13. Nanowire Chemical/Biological Sensors: Status and a Roadmap for the Future.

    PubMed

    Fennell, John F; Liu, Sophie F; Azzarelli, Joseph M; Weis, Jonathan G; Rochat, Sébastien; Mirica, Katherine A; Ravnsbæk, Jens B; Swager, Timothy M

    2016-01-22

    Chemiresistive sensors are becoming increasingly important because they offer an inexpensive alternative to conventional analytical instrumentation, can be readily integrated into electronic devices, and have low power requirements. Nanowires (NWs) are a major theme in chemosensor development. High surface area, interwire junctions, and restricted conduction pathways give intrinsically high sensitivity and new mechanisms to transduce the binding or action of analytes. This Review details the status of NW chemosensors with selected examples from the literature. We begin by proposing a principle for understanding electrical transport and transduction mechanisms in NW sensors. Next, we offer the reader a review of device performance parameters. Then, we consider the different NW types followed by a summary of NW assembly and different device platform architectures. Subsequently, we discuss NW functionalization strategies. Finally, we propose future developments in NW sensing to address selectivity, sensor drift, sensitivity, response analysis, and emerging applications. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. A multifaceted FISH approach to study endogenous RNAs and DNAs in native nuclear and cell structures.

    PubMed

    Byron, Meg; Hall, Lisa L; Lawrence, Jeanne B

    2013-01-01

    Fluorescence in situ hybridization (FISH) is not a singular technique, but a battery of powerful and versatile tools for examining the distribution of endogenous genes and RNAs in precise context with each other and in relation to specific proteins or cell structures. This unit offers the details of highly sensitive and successful protocols that were initially developed largely in our lab and honed over a number of years. Our emphasis is on analysis of nuclear RNAs and DNA to address specific biological questions about nuclear structure, pre-mRNA metabolism, or the role of noncoding RNAs; however, cytoplasmic RNA detection is also discussed. Multifaceted molecular cytological approaches bring precise resolution and sensitive multicolor detection to illuminate the organization and functional roles of endogenous genes and their RNAs within the native structure of fixed cells. Solutions to several common technical pitfalls are discussed, as are cautions regarding the judicious use of digital imaging and the rigors of analyzing and interpreting complex molecular cytological results.

  15. A parametric sensitivity study for single-stage-to-orbit hypersonic vehicles using trajectory optimization

    NASA Astrophysics Data System (ADS)

    Lovell, T. Alan; Schmidt, D. K.

    1994-03-01

    The class of hypersonic vehicle configurations with single-stage-to-orbit (SSTO) capability reflects highly integrated airframe and propulsion systems. These designs are also known to exhibit a large degree of interaction between the airframe and engine dynamics. Consequently, even simplified hypersonic models are characterized by tightly coupled nonlinear equations of motion. In addition, hypersonic SSTO vehicles present a major system design challenge: the vehicle's overall mission performance is a function of its subsystem efficiencies, including structural, aerodynamic, propulsive, and operational. Further, all subsystem efficiencies are interrelated; hence, independent optimization of the subsystems is not likely to lead to an optimum design. Thus, it is desired to know the effect of various subsystem efficiencies on overall mission performance. For the purposes of this analysis, mission performance will be measured in terms of the payload weight inserted into orbit. In this report, a trajectory optimization problem is formulated for a generic hypersonic lifting body for a specified orbit-injection mission. A solution method is outlined, and results are detailed for the generic vehicle, referred to as the baseline model. After evaluating the performance of the baseline model, a sensitivity study is presented to determine the effect of various subsystem efficiencies on mission performance. This consists of performing a parametric analysis of the basic design parameters, generating a matrix of configurations, and determining the mission performance of each configuration. Also, the performance loss due to constraining the total head load experienced by the vehicle is evaluated. The key results from this analysis include the formulation of the sizing problem for this vehicle class using trajectory optimization, characteristics of the optimal trajectories, and the subsystem design sensitivities.

  16. Cylindrical optical resonators: fundamental properties and bio-sensing characteristics

    NASA Astrophysics Data System (ADS)

    Khozeymeh, Foroogh; Razaghi, Mohammad

    2018-04-01

    In this paper, a detailed theoretical analysis of cylindrical resonators is presented. As illustrated, these resonators can be used as optical bio-sensing devices. The proposed structure is analyzed using an analytical method based on Lam's approximation. This method is systematic and simplifies the tedious process of whispering-gallery mode (WGM) wavelength analysis in optical cylindrical biosensors, making it possible to analyze higher radial orders of high angular momentum WGMs. Using closed-form analytical equations, resonance wavelengths of higher radial and angular order WGMs are calculated for both TE and TM polarization waves. It is shown that high angular momentum WGMs are more appropriate for bio-sensing applications. Some of the calculations are performed with a numerical non-linear Newton method; a match of 99.84% between the analytical and numerical methods is achieved. To verify the validity of the calculations, Meep simulations based on the finite difference time domain (FDTD) method are performed, yielding a match of 96.70% between the analytical and FDTD results. The analytical predictions are also in good agreement with experimental work reported elsewhere (99.99% match). These results validate the proposed analytical modelling for the fast design of optical cylindrical biosensors. It is shown that by extending the proposed two-layer analysis scheme, a three-layer cylindrical resonator structure can be studied as well; moreover, the method enables fast sensitivity optimization in cylindrical resonator-based biosensors. Sensitivity of the WGM resonances is analyzed as a function of the structural parameters of the resonators. Based on the results, fourth radial order WGMs with a resonator radius of 50 μm display the highest bulk refractive index sensitivity, 41.50 nm/RIU.
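
    The non-linear Newton solve mentioned in the abstract can be sketched generically. The resonance condition below (an integer number m of effective wavelengths fitting the resonator circumference) is a deliberately simplified stand-in for the paper's actual characteristic equations, and all parameter values are illustrative:

```python
import math

def newton(f, x0, tol=1e-12, max_iter=60, h=1e-8):
    """Newton's method with a central-difference derivative estimate."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        dfdx = (f(x + h) - f(x - h)) / (2.0 * h)
        x -= fx / dfdx
    return x

# Toy resonance condition: m effective wavelengths around the circumference.
n_eff, radius_um, m = 1.8, 50.0, 400          # illustrative values
f = lambda lam: 2.0 * math.pi * n_eff * radius_um / lam - m
lam_res = newton(f, x0=1.5)                   # resonance wavelength in micrometres
```

    In practice the root would be tracked separately for each radial and angular order, and a bulk refractive-index sensitivity estimated by re-solving with a perturbed ambient index.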

  17. Sensitivity analysis of a data assimilation technique for hindcasting and forecasting hydrodynamics of a complex coastal water body

    NASA Astrophysics Data System (ADS)

    Ren, Lei; Hartnett, Michael

    2017-02-01

    Accurate forecasting of coastal surface currents has become economically important over the past twenty years because of marine activities such as marine renewable energy and fish farming in coastal regions. Advanced oceanographic observation systems such as satellites and radars can provide many parameters of interest, such as surface currents and waves, with fine spatial resolution in near real time. To enhance modelling capability, data assimilation (DA) techniques, which combine available measurements with hydrodynamic models, have been used in oceanography since the 1990s. Assimilating measurements into a hydrodynamic model draws the model background states toward the observation trajectory and thereby provides more accurate forecasts. Galway Bay is an open, wind-dominated water body on which two coastal radars are deployed. An efficient and easy-to-implement sequential DA algorithm, Optimal Interpolation (OI), was used to blend radar surface current data into a three-dimensional Environmental Fluid Dynamics Code (EFDC) model. Two empirical parameters, the horizontal correlation length and the DA cycle length (CL), are inherent within OI, and no guidance has previously been published on selecting appropriate values for them or on how sensitive OI DA is to variations in their values. A detailed sensitivity analysis was therefore performed on both parameters and the results are presented. The appropriate DA CL value was determined by minimizing the root-mean-square error (RMSE) between radar data and model background states. The assimilation index (AI) of the OI DA algorithm was also evaluated; the AI of the half-day forecast mean vector directions exceeded 50% in the best assimilation model. The ability of OI to improve model forecasts was likewise assessed and is reported upon.
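
    The OI analysis step described above can be sketched in one dimension. The Gaussian background-error covariance below is built from the horizontal correlation length L; all values (error variances, grid, observations) are illustrative rather than taken from the Galway Bay model:

```python
import numpy as np

def oi_analysis(xb, grid, y, obs_idx, L, sigma_b=0.1, sigma_o=0.05):
    """Optimal Interpolation: xa = xb + K (y - H xb), with
    K = B H^T (H B H^T + R)^(-1) and a Gaussian B of length scale L."""
    d2 = (grid[:, None] - grid[None, :]) ** 2
    B = sigma_b**2 * np.exp(-d2 / (2.0 * L**2))   # background-error covariance
    H = np.zeros((len(obs_idx), len(grid)))
    H[np.arange(len(obs_idx)), obs_idx] = 1.0     # observe grid points directly
    R = sigma_o**2 * np.eye(len(obs_idx))         # observation-error covariance
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)  # gain matrix
    return xb + K @ (y - H @ xb)

grid = np.linspace(0.0, 10.0, 101)
xb = np.zeros_like(grid)                 # background surface-current field
y = np.array([0.4, 0.3])                 # two "radar" observations
xa = oi_analysis(xb, grid, y, obs_idx=np.array([30, 60]), L=1.0)
```

    Sweeping L and the assimilation cycle length while tracking the RMSE between observations and model background states is then a direct way to reproduce the kind of sensitivity analysis the authors perform.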

  18. A parametric sensitivity study for single-stage-to-orbit hypersonic vehicles using trajectory optimization

    NASA Technical Reports Server (NTRS)

    Lovell, T. Alan; Schmidt, D. K.

    1994-01-01

    The class of hypersonic vehicle configurations with single-stage-to-orbit (SSTO) capability reflects highly integrated airframe and propulsion systems. These designs are also known to exhibit a large degree of interaction between the airframe and engine dynamics. Consequently, even simplified hypersonic models are characterized by tightly coupled nonlinear equations of motion. In addition, hypersonic SSTO vehicles present a major system design challenge: the vehicle's overall mission performance is a function of its subsystem efficiencies, including structural, aerodynamic, propulsive, and operational. Further, all subsystem efficiencies are interrelated; hence, independent optimization of the subsystems is not likely to lead to an optimum design. Thus, it is desired to know the effect of various subsystem efficiencies on overall mission performance. For the purposes of this analysis, mission performance will be measured in terms of the payload weight inserted into orbit. In this report, a trajectory optimization problem is formulated for a generic hypersonic lifting body for a specified orbit-injection mission. A solution method is outlined, and results are detailed for the generic vehicle, referred to as the baseline model. After evaluating the performance of the baseline model, a sensitivity study is presented to determine the effect of various subsystem efficiencies on mission performance. This consists of performing a parametric analysis of the basic design parameters, generating a matrix of configurations, and determining the mission performance of each configuration. Also, the performance loss due to constraining the total head load experienced by the vehicle is evaluated. The key results from this analysis include the formulation of the sizing problem for this vehicle class using trajectory optimization, characteristics of the optimal trajectories, and the subsystem design sensitivities.

  19. Extracting additional risk managers information from a risk assessment of Listeria monocytogenes in deli meats.

    PubMed

    Pérez-Rodríguez, F; van Asselt, E D; Garcia-Gimeno, R M; Zurera, G; Zwietering, M H

    2007-05-01

    The risk assessment study of Listeria monocytogenes in ready-to-eat foods conducted by the U.S. Food and Drug Administration is an example of an extensive quantitative microbiological risk assessment that could be used by risk analysts and other scientists to obtain information and by managers and stakeholders to make decisions on food safety management. The present study was conducted to investigate how detailed sensitivity analysis can be used by assessors to extract more information on risk factors and how results can be communicated to managers and stakeholders in an understandable way. The extended sensitivity analysis revealed that the extremes at the right side of the dose distribution (at consumption, 9 to 11.5 log CFU per serving) were responsible for most of the simulated cases of listeriosis. For concentration at retail, values below the detection limit of 0.04 CFU/g and below the often-used limit for L. monocytogenes of 100 CFU/g (also at retail) were associated with a high number of annual cases of listeriosis (about 29 and 82%, respectively). This association can be explained by growth of L. monocytogenes at both average and extreme values of temperature and time, indicating that a wide distribution can lead to high risk levels. Another finding is the importance of the maximal population density (i.e., the maximum concentration of L. monocytogenes assumed at a certain temperature) for accurately estimating the risk of infection by opportunistic pathogens such as L. monocytogenes: in the simulation, most of the risk arose from concentrations corresponding to the highest maximal population densities. However, sensitivity analysis applied to the uncertainty parameters revealed that prevalence at retail was the most important source of uncertainty in the model.
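
    The tail-dominance finding can be reproduced qualitatively with a toy calculation: weight an exponential dose-response curve by a wide normal distribution of log doses and ask what share of expected cases comes from servings above 9 log CFU. The distribution parameters and the dose-response constant r below are illustrative, not the values of the FDA model:

```python
import numpy as np

x = np.linspace(-12.0, 14.0, 2601)             # log10 CFU/serving grid
w = np.exp(-(x**2) / (2.0 * 3.0**2))           # toy Normal(0, 3) dose weights
p_ill = 1.0 - np.exp(-1e-9 * 10.0**x)          # toy exponential dose-response
cases = w * p_ill                              # expected-case density
tail = x >= 9.0                                # extreme right tail of the dose
tail_share_of_servings = w[tail].sum() / w.sum()
tail_share_of_cases = cases[tail].sum() / cases.sum()
```

    Even though the tail holds well under 1% of servings, it contributes a disproportionately large share of the expected cases, which is the qualitative pattern the extended sensitivity analysis uncovered.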

  20. A portable high-speed camera system for vocal fold examinations.

    PubMed

    Hertegård, Stellan; Larsson, Hans

    2014-11-01

    In this article, we present a new portable low-cost system for high-speed examinations of the vocal folds. Analysis of glottal vibratory parameters from the high-speed recordings is compared with videostroboscopic recordings. The high-speed system is built around a Fastec 1 monochrome camera, which is used with newly developed software, High-Speed Studio (HSS). The HSS has options for video/image recording, contains a database, and has a set of analysis options. The Fastec/HSS system has been used clinically since 2011 in more than 2000 patient examinations and recordings. The Fastec 1 camera has sufficient time resolution (≥4000 frames/s) and light sensitivity (ISO 3200) to produce images for detailed analyses of parameters pertinent to vocal fold function. The camera can be used with both rigid and flexible endoscopes. The HSS software includes options for analyses of glottal vibrations, such as kymogram, phase asymmetry, glottal area variation, open and closed phase, and angle of vocal fold abduction. It can also be used for separate analysis of the left and right vocal fold movements, including maximum speed during opening and closing, a parameter possibly related to vocal fold elasticity. A blinded analysis of 32 patients with various voice disorders examined with both the Fastec/HSS system and videostroboscopy showed that the high-speed recordings were significantly better for the analysis of glottal parameters (e.g., mucosal wave and vibration asymmetry). The monochrome high-speed system can be used in daily clinical work within normal clinical time limits for patient examinations. A detailed analysis can be made of voice disorders and laryngeal pathology at a relatively low cost. Copyright © 2014 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  1. Electron microprobe analysis and histochemical examination of the calcium distribution in human bone trabeculae: a methodological study using biopsy specimens from post-traumatic osteopenia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Obrant, K.J.; Odselius, R.

    1984-01-01

    Energy dispersive X-ray microanalysis (EDX) (or electron microprobe analysis) of the relative intensity for calcium in different bone trabeculae from the tibia epiphysis, and in different parts of one and the same trabecula, was performed on 3 patients who had earlier had a fracture of the ipsilateral tibia-diaphysis. The variation in intensity was compared with the histochemical patterns obtained with both the Goldner and the von Kossa staining techniques for detecting calcium in tissues. Previously reported calcium distribution features, found to be typical for posttraumatic osteopenia, such as striated mineralization patterns in individual trabeculae and large differences in mineralization level between different trabeculae, could be verified both by means of the two histochemical procedures and from the electron microprobe analysis. A pronounced difference was observed, however, between the two histochemical staining techniques as regards their sensitivity to detect calcium. To judge from the values obtained from the EDX measurements, the sensitivity of the Goldner technique should be more than ten times higher than that of von Kossa. The EDX measurements gave more detailed information than either of the two histochemical techniques: great variations in the intensity of the calcium peak were found in trabeculae stained as unmineralized as well as mineralized.

  2. Detection of Nonvolatile Inorganic Oxidizer-Based Explosives from Wipe Collections by Infrared Thermal Desorption-Direct Analysis in Real Time Mass Spectrometry.

    PubMed

    Forbes, Thomas P; Sisco, Edward; Staymates, Matthew

    2018-05-07

    Infrared thermal desorption (IRTD) was coupled with direct analysis in real time mass spectrometry (DART-MS) for the detection of both inorganic and organic explosives from wipe collected samples. This platform generated discrete and rapid heating rates that allowed volatile and semivolatile organic explosives to thermally desorb at relatively lower temperatures, while still achieving elevated temperatures required to desorb nonvolatile inorganic oxidizer-based explosives. IRTD-DART-MS demonstrated the thermal desorption and detection of refractory potassium chlorate and potassium perchlorate oxidizers, compounds difficult to desorb with traditional moderate-temperature resistance-based thermal desorbers. Nanogram to sub-nanogram sensitivities were established for analysis of a range of organic and inorganic oxidizer-based explosive compounds, with further enhancement limited by the thermal properties of the most common commercial wipe materials. Detailed investigations and high-speed visualization revealed conduction from the heated glass-mica base plate as the dominant process for heating of the wipe and analyte materials, resulting in thermal desorption through boiling, aerosolization, and vaporization of samples. The thermal desorption and ionization characteristics of the IRTD-DART technique resulted in optimal sensitivity for the formation of nitrate adducts with both organic and inorganic species. The IRTD-DART-MS coupling and IRTD in general offer promising explosive detection capabilities to the defense, security, and law enforcement arenas.

  3. Quantitative x-ray photoelectron spectroscopy: Quadrupole effects, shake-up, Shirley background, and relative sensitivity factors from a database of true x-ray photoelectron spectra

    NASA Astrophysics Data System (ADS)

    Seah, M. P.; Gilmore, I. S.

    2006-05-01

    An analysis is provided of the x-ray photoelectron spectroscopy (XPS) intensities measured in the National Physical Laboratory (NPL) XPS database for 46 solid elements. This present analysis does not change our previous conclusions concerning the excellent correlation between experimental intensities, after deconvolving the spectra with angle-averaged reflection electron energy loss data, and the theoretical intensities involving the dipole approximation using Scofield's cross sections. Here, more recent calculations for cross sections by Trzhaskovskaya involving quadrupole terms are evaluated and it is shown that their cross sections diverge from the experimental database results by up to a factor of 5. The quadrupole angular terms lead to small corrections that are close to our measurement limit but do appear to be supported in the present analysis. Measurements of the extent of shake-up for the 46 elements broadly agree with the calculations of Yarzhemsky but not in detail. The predicted constancy in the shake-up contribution by Yarzhemsky implies that the use of the Shirley background will lead to a peak area that is a constant fraction of the true peak area including the shake-up intensities. However, the measured variability of the shake-up contribution makes the Shirley background invalid for quantification except for situations where the sensitivity factors are from reference samples similar to those being analyzed.
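
    The Shirley background referred to above is constructed iteratively: at each binding energy, the background is proportional to the integrated peak intensity accumulated on the lower-binding-energy side, iterated to self-consistency. A minimal sketch on a synthetic spectrum (all peak shapes and parameter values are illustrative):

```python
import numpy as np

def shirley_background(E, I, n_iter=30):
    """Iterative Shirley background; E ascends in binding energy and the
    endpoints of I are taken as the background endpoints."""
    B = np.full_like(I, I[0])
    for _ in range(n_iter):
        peak = I - B
        # cumulative trapezoidal area of the background-subtracted peak
        cum = np.concatenate(
            ([0.0], np.cumsum(0.5 * (peak[1:] + peak[:-1]) * np.diff(E)))
        )
        B = I[0] + (I[-1] - I[0]) * cum / cum[-1]
    return B

E = np.linspace(280.0, 300.0, 401)                           # binding energy, eV
signal = 100.0 * np.exp(-((E - 290.0) ** 2) / (2 * 0.8**2))  # synthetic peak
I = signal + 5.0 + 10.0 / (1.0 + np.exp(-(E - 290.0)))       # step-like background
B = shirley_background(E, I)
d = I - B
area = np.sum(0.5 * (d[1:] + d[:-1]) * np.diff(E))           # corrected peak area
```

    The paper's point is that this construction is only valid when the shake-up fraction folded into the step is actually constant; if it varies between sample and reference, the Shirley-derived area is biased.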

  4. The impact of prior knowledge from participant instructions in a mock crime P300 Concealed Information Test.

    PubMed

    Winograd, Michael R; Rosenfeld, J Peter

    2014-12-01

    In P300-Concealed Information Tests used with mock crime scenarios, the amount of detail revealed to a participant prior to the commission of the mock crime can have a serious impact on a study's validity. We predicted that exposure to crime details through instructions would bias detection rates toward enhanced sensitivity. In a 2 × 2 factorial design, participants were either informed (through mock crime instructions) or naïve as to the identity of a to-be-stolen item, and then either committed (guilty) or did not commit (innocent) the crime. Results showed that prior knowledge of the stolen item was sufficient to cause 69% of innocent-informed participants to be incorrectly classified as guilty. Further, we found a trend toward enhanced detection rate for guilty-informed participants over guilty-naïve participants. Results suggest that revealing details to participants through instructions biases detection rates in the P300-CIT toward enhanced sensitivity. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Sensitivity analysis for aeroacoustic and aeroelastic design of turbomachinery blades

    NASA Technical Reports Server (NTRS)

    Lorence, Christopher B.; Hall, Kenneth C.

    1995-01-01

    A new method for computing the effect that small changes in the airfoil shape and cascade geometry have on the aeroacoustic and aeroelastic behavior of turbomachinery cascades is presented. The nonlinear unsteady flow is assumed to be composed of a nonlinear steady flow plus a small perturbation unsteady flow that is harmonic in time. First, the full potential equation is used to describe the behavior of the nonlinear mean (steady) flow through a two-dimensional cascade. The small disturbance unsteady flow through the cascade is described by the linearized Euler equations. Using rapid distortion theory, the unsteady velocity is split into a rotational part that contains the vorticity and an irrotational part described by a scalar potential. The unsteady vorticity transport is described analytically in terms of the drift and stream functions computed from the steady flow. Hence, the solution of the linearized Euler equations may be reduced to a single inhomogeneous equation for the unsteady potential. The steady flow and small disturbance unsteady flow equations are discretized using bilinear quadrilateral isoparametric finite elements. The nonlinear mean flow solution and streamline computational grid are computed simultaneously using Newton iteration. At each step of the Newton iteration, LU decomposition is used to solve the resulting set of linear equations. The unsteady flow problem is linear, and is also solved using LU decomposition. Next, a sensitivity analysis is performed to determine the effect small changes in cascade and airfoil geometry have on the mean and unsteady flow fields. The sensitivity analysis makes use of the nominal steady and unsteady flow LU decompositions so that no additional matrices need to be factored. Hence, the present method is computationally very efficient. To demonstrate how the sensitivity analysis may be used to redesign cascades, a compressor is redesigned for improved aeroelastic stability and two different fan exit guide vanes are redesigned for reduced downstream radiated noise. In addition, a framework detailing how the two-dimensional version of the method may be used to redesign three-dimensional geometries is presented.
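
    The efficiency argument above, reusing the nominal LU factors to solve the sensitivity equations, can be sketched with a toy linear system: if A(p) x = b, then A (dx/dp) = -(dA/dp) x, so the same factorization of A serves both solves. The 3×3 system below is illustrative; the paper's matrices come from the finite-element discretization:

```python
import numpy as np

def lu_factor(A):
    """Doolittle LU without pivoting (assumes a well-conditioned,
    diagonally dominant matrix, for brevity)."""
    n = len(A)
    L, U = np.eye(n), A.astype(float)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]
    return L, U

def lu_solve(L, U, rhs):
    """Forward then back substitution with the stored factors."""
    n = len(rhs)
    y = np.zeros(n)
    for i in range(n):
        y[i] = rhs[i] - L[i, :i] @ y[:i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

A = np.array([[4.0, 1.0, 0.0], [1.0, 5.0, 2.0], [0.0, 2.0, 6.0]])
dA = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])  # dA/dp
b = np.array([1.0, 2.0, 3.0])

L, U = lu_factor(A)            # factor once
x = lu_solve(L, U, b)          # nominal solution
dx = lu_solve(L, U, -dA @ x)   # sensitivity dx/dp reuses the same factors
```

    No additional matrices are factored for the sensitivity solve, which is exactly why the approach scales well when many geometric design parameters are perturbed.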

  6. Impact of production strategies and animal performance on economic values of dairy sheep traits.

    PubMed

    Krupová, Z; Wolfová, M; Krupa, E; Oravcová, M; Daňo, J; Huba, J; Polák, P

    2012-03-01

    The objective of this study was to carry out a sensitivity analysis on the impact of various production strategies and performance levels on the relative economic values (REVs) of traits in dairy sheep. A bio-economic model implemented in the program package ECOWEIGHT was used to simulate the profit function for a semi-extensive production system with the Slovak multi-purpose breed Improved Valachian and to calculate the REV of 14 production and functional traits. The following production strategies were analysed: differing proportions of milk processed to cheese, customary weaning and early weaning of lambs with immediate sale or sale after artificial rearing, seasonal lambing in winter and aseasonal lambing in autumn. Results of the sensitivity analysis are presented in detail for the four economically most important traits: 150 days milk yield, conception rate of ewes, litter size and ewe productive lifetime. Impacts of the differences in the mean value of each of these four traits on REVs of all other traits were also examined. Simulated changes in the production circumstances had a higher impact on the REV for milk yield than on REVs of the other traits investigated. The proportion of milk processed to cheese, weaning management strategy for lambs and level of milk yield were the main factors influencing the REV of milk yield. The REVs for conception rate of ewes were highly sensitive to the current mean level of the trait. The REV of ewe productive lifetime was most sensitive to variation in ewe conception rate, and the REV of litter size was most affected by weaning strategy for lambs. On the basis of the results of sensitivity analyses, it is recommended that economic values of traits for the overall breeding objective for dairy sheep be calculated as the weighted average of the economic values obtained for the most common production strategies of Slovak dairy sheep farms and that economic values be adjusted after substantial changes in performance levels of the traits.

  7. High-Pressure Oxygen Test Evaluations

    NASA Technical Reports Server (NTRS)

    Schwinghamer, R. J.; Key, C. F.

    1974-01-01

    The relevance of impact sensitivity testing to the development of the space shuttle main engine is discussed in the light of the special requirements for the engine. The background and history of the evolution of liquid and gaseous oxygen testing techniques and philosophy are also discussed. The parameters critical to reliable testing are treated in considerable detail, and test apparatus and procedures are described and discussed. Materials threshold sensitivity determination procedures are considered, and a decision logic diagram for sensitivity threshold determination is presented. Finally, high-pressure materials sensitivity test data are given for selected metallic and nonmetallic materials.

  8. When is exposure to a natural disaster traumatic? Comparison of a trauma questionnaire and disaster exposure inventory.

    PubMed

    Harville, Emily W; Jacobs, Marni; Boynton-Jarrett, Renée

    2015-01-01

    Few studies have compared the sensitivity of trauma questionnaires to disaster inventories for assessing the prevalence of exposure to natural disaster or associated risk for post-disaster psychopathology. The objective of this analysis was to compare reporting of disaster exposure on a trauma questionnaire (Brief Trauma Questionnaire [BTQ]) to an inventory of disaster experience. Between 2011 and 2014, a sample of 841 reproductive-aged southern Louisiana women were interviewed using the BTQ and completed a detailed inventory about exposure to hurricanes and flooding. Post-traumatic stress disorder (PTSD) symptomology was measured with the Post-Traumatic Stress Checklist, and depression with the Edinburgh Depression Scale. The single question addressing disaster exposure on the BTQ had a sensitivity of between 65% and 70% relative to the more detailed questions. Reporting disaster exposure on the BTQ was more likely for those who reported illness/injury due to a hurricane or flood (74%-77%) or danger (77%-79%) than for those who reported damage (69%-71%) or evacuation (64%-68%). Reporting disaster exposure on the BTQ was associated with depression (odds ratio [OR] 2.29, 95% confidence interval [CI] 1.43-3.68). A single question is unlikely to be useful for assessing the degree of exposure to disaster across a broad population, and varies in utility depending on the mental health outcome of interest; the single trauma question is, however, useful for assessing depression risk.

  9. Antigen sensitivity is a major determinant of CD8+ T-cell polyfunctionality and HIV-suppressive activity

    PubMed Central

    Almeida, Jorge R.; Sauce, Delphine; Price, David A.; Papagno, Laura; Shin, So Youn; Moris, Arnaud; Larsen, Martin; Pancino, Gianfranco; Douek, Daniel C.; Autran, Brigitte; Sáez-Cirión, Asier

    2009-01-01

    CD8+ T cells are major players in the immune response against HIV. However, recent failures in the development of T cell–based vaccines against HIV-1 have emphasized the need to reassess our basic knowledge of T cell–mediated efficacy. CD8+ T cells from HIV-1–infected patients with slow disease progression exhibit potent polyfunctionality and HIV-suppressive activity, yet the factors that unify these properties are incompletely understood. We performed a detailed study of the interplay between T-cell functional attributes using a bank of HIV-specific CD8+ T-cell clones isolated in vitro; this approach enabled us to overcome inherent difficulties related to the in vivo heterogeneity of T-cell populations and address the underlying determinants that synthesize the qualities required for antiviral efficacy. Conclusions were supported by ex vivo analysis of HIV-specific CD8+ T cells from infected donors. We report that attributes of CD8+ T-cell efficacy against HIV are linked at the level of antigen sensitivity. Highly sensitive CD8+ T cells display polyfunctional profiles and potent HIV-suppressive activity. These data provide new insights into the mechanisms underlying CD8+ T-cell efficacy against HIV, and indicate that vaccine strategies should focus on the induction of HIV-specific T cells with high levels of antigen sensitivity to elicit potent antiviral efficacy. PMID:19389882

  10. Antigen sensitivity is a major determinant of CD8+ T-cell polyfunctionality and HIV-suppressive activity.

    PubMed

    Almeida, Jorge R; Sauce, Delphine; Price, David A; Papagno, Laura; Shin, So Youn; Moris, Arnaud; Larsen, Martin; Pancino, Gianfranco; Douek, Daniel C; Autran, Brigitte; Sáez-Cirión, Asier; Appay, Victor

    2009-06-18

    CD8(+) T cells are major players in the immune response against HIV. However, recent failures in the development of T cell-based vaccines against HIV-1 have emphasized the need to reassess our basic knowledge of T cell-mediated efficacy. CD8(+) T cells from HIV-1-infected patients with slow disease progression exhibit potent polyfunctionality and HIV-suppressive activity, yet the factors that unify these properties are incompletely understood. We performed a detailed study of the interplay between T-cell functional attributes using a bank of HIV-specific CD8(+) T-cell clones isolated in vitro; this approach enabled us to overcome inherent difficulties related to the in vivo heterogeneity of T-cell populations and address the underlying determinants that synthesize the qualities required for antiviral efficacy. Conclusions were supported by ex vivo analysis of HIV-specific CD8(+) T cells from infected donors. We report that attributes of CD8(+) T-cell efficacy against HIV are linked at the level of antigen sensitivity. Highly sensitive CD8(+) T cells display polyfunctional profiles and potent HIV-suppressive activity. These data provide new insights into the mechanisms underlying CD8(+) T-cell efficacy against HIV, and indicate that vaccine strategies should focus on the induction of HIV-specific T cells with high levels of antigen sensitivity to elicit potent antiviral efficacy.

  11. Temporal Informative Analysis in Smart-ICU Monitoring: M-HealthCare Perspective.

    PubMed

    Bhatia, Munish; Sood, Sandeep K

    2016-08-01

    The rapid introduction of Internet of Things (IoT) technology has boosted service delivery in the health sector in terms of m-health and remote patient monitoring. IoT technology is not only capable of sensing the acute details of sensitive events from wider perspectives, but it also provides a means to deliver services in a time-sensitive and efficient manner. Hence, IoT technology has been widely adopted across the healthcare domain. In this paper, a framework for IoT-based patient monitoring in the Intensive Care Unit (ICU) is presented to enhance the delivery of curative services. Although ICUs have long been a focus of high-quality-care research, a number of studies have documented threats to patients' safety during ICU stays. The work presented in this study addresses such concerns through efficient monitoring of various events (and anomalies) with temporal associations, followed by a time-sensitive alert generation procedure. To validate the system, it was deployed in 3 ICU room facilities for 30 days, during which 81 patients were monitored during their ICU stay. The results obtained after implementation show that IoT-equipped ICUs are more efficient in monitoring sensitive events than manual monitoring and traditional Tele-ICU monitoring. Moreover, the adopted methodology for alert generation with information presentation further enhances the utility of the system.

  12. A new predictive multi-zone model for HCCI engine combustion

    DOE PAGES

    Bissoli, Mattia; Frassoldati, Alessio; Cuoci, Alberto; ...

    2016-06-30

    Here, this work introduces a new predictive multi-zone model for the description of combustion in Homogeneous Charge Compression Ignition (HCCI) engines. The model exploits the existing OpenSMOKE++ computational suite to handle detailed kinetic mechanisms, providing reliable predictions of the in-cylinder auto-ignition processes. All the elements with a significant impact on combustion performance and emissions, such as turbulence, heat and mass exchanges, crevices, residual burned gases, and thermal and feed stratification, are taken into account. Compared to other computational approaches, this model improves the description of mixture stratification phenomena by coupling a wall heat transfer model derived from CFD applications with a proper turbulence model. Furthermore, the calibration of this multi-zone model requires only three parameters, which can be derived from a non-reactive CFD simulation: these adaptive variables depend only on the engine geometry and remain fixed across a wide range of operating conditions, allowing the prediction of auto-ignition, pressure traces and pollutants. This computational framework enables the use of detailed kinetic mechanisms, as well as Rate of Production Analysis (RoPA) and Sensitivity Analysis (SA), to investigate the complex chemistry involved in auto-ignition and pollutant formation. In the final sections of the paper, these capabilities are demonstrated through comparison with experimental data.

  13. Investigation of the multiplet features of SrTiO3 in X-ray absorption spectra based on configuration interaction calculations

    DOE PAGES

    Wu, M.; Xin, Houlin L.; Wang, J. O.; ...

    2018-04-24

    Synchrotron-based L2,3-edge absorption spectra show strong sensitivity to the local electronic structure and chemical environment. However, detailed physical information cannot be extracted easily without computational aids. Here, using the experimental Ti L2,3-edge absorption spectrum of SrTiO3 as a fingerprint and considering full multiplet effects, calculations yield different energy parameters characterizing local ground-state properties. The peak splitting and intensity ratios of the L3 and L2 sets of peaks are carefully analyzed quantitatively, giving rise to a small hybridization energy of around 1.2 eV, and the different hybridization energy values reported in the literature are further addressed. Absorption spectra with different linearly polarized photons under various tetragonal crystal fields are then investigated, revealing a non-linear orbital-lattice interaction and offering theoretical guidance for material engineering of SrTiO3-based thin films and heterostructures. Finally, detailed analysis of spectrum shifts with different tetragonal crystal fields suggests that the eg crystal-field splitting is a necessary parameter for a thorough analysis of the spectra, even though it is not relevant for the ground-state properties.

  14. Investigation of the multiplet features of SrTiO3 in X-ray absorption spectra based on configuration interaction calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, M.; Xin, Houlin L.; Wang, J. O.

    Synchrotron-based L2,3-edge absorption spectra show strong sensitivity to the local electronic structure and chemical environment. However, detailed physical information cannot be extracted easily without computational aids. Here, using the experimental Ti L2,3-edge absorption spectrum of SrTiO3 as a fingerprint and considering full multiplet effects, calculations yield different energy parameters characterizing local ground-state properties. The peak splitting and intensity ratios of the L3 and L2 sets of peaks are carefully analyzed quantitatively, giving rise to a small hybridization energy of around 1.2 eV, and the different hybridization energy values reported in the literature are further addressed. Absorption spectra with different linearly polarized photons under various tetragonal crystal fields are then investigated, revealing a non-linear orbital-lattice interaction and offering theoretical guidance for material engineering of SrTiO3-based thin films and heterostructures. Finally, detailed analysis of spectrum shifts with different tetragonal crystal fields suggests that the eg crystal-field splitting is a necessary parameter for a thorough analysis of the spectra, even though it is not relevant for the ground-state properties.

  15. Chaotic dynamics and diffusion in a piecewise linear equation

    NASA Astrophysics Data System (ADS)

    Shahrear, Pabel; Glass, Leon; Edwards, Rod

    2015-03-01

    Genetic interactions are often modeled by logical networks in which time is discrete and all gene activity states update simultaneously. However, there is no synchronizing clock in organisms. An alternative model assumes that the logical network is preserved and plays a key role in driving the dynamics in piecewise-linear differential equations. We examine dynamics in a particular 4-dimensional equation of this class. In the equation, two of the variables form a negative feedback loop that drives a second negative feedback loop. By modifying the original equations to eliminate exponential decay, we generate a modified system that is amenable to detailed analysis. In the modified system, we can determine in detail the Poincaré (return) map on a cross section to the flow. By analyzing the eigenvalues of the map for the different trajectories, we are able to show that, except for a set of measure 0, the flow must necessarily have an eigenvalue greater than 1 and hence there is sensitive dependence on initial conditions. Further, there is an irregular oscillation whose amplitude is described by a diffusive process that is well modeled by the Irwin-Hall distribution. There is a large class of other piecewise-linear networks that might be analyzed using similar methods. The analysis gives insight into possible origins of chaotic dynamics in periodically forced dynamical systems.
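
    The Irwin-Hall distribution referred to above is simply the distribution of the sum of n independent Uniform(0,1) variables, with mean n/2 and variance n/12. A minimal stdlib simulation check of those moments, independent of the paper's model:

```python
import random

def irwin_hall_sample(n, rng):
    """One draw from the Irwin-Hall distribution: sum of n iid Uniform(0,1)."""
    return sum(rng.random() for _ in range(n))

def irwin_hall_moments(n, draws=50_000, seed=7):
    """Monte Carlo estimate of the mean and variance of Irwin-Hall(n)."""
    rng = random.Random(seed)
    xs = [irwin_hall_sample(n, rng) for _ in range(draws)]
    mean = sum(xs) / draws
    var = sum((x - mean) ** 2 for x in xs) / draws
    return mean, var  # theory: mean = n/2, variance = n/12
```

For n = 12 the theoretical mean is 6 and the variance is 1, which the simulation reproduces to within sampling error.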

  16. Sensitivity Analysis in Engineering

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M. (Compiler); Haftka, Raphael T. (Compiler)

    1987-01-01

    The symposium proceedings presented focused primarily on sensitivity analysis of structural response. However, the first session, entitled, General and Multidisciplinary Sensitivity, focused on areas such as physics, chemistry, controls, and aerodynamics. The other four sessions were concerned with the sensitivity of structural systems modeled by finite elements. Session 2 dealt with Static Sensitivity Analysis and Applications; Session 3 with Eigenproblem Sensitivity Methods; Session 4 with Transient Sensitivity Analysis; and Session 5 with Shape Sensitivity Analysis.

  17. The impact of standard and hard-coded parameters on the hydrologic fluxes in the Noah-MP land surface model

    NASA Astrophysics Data System (ADS)

    Cuntz, Matthias; Mai, Juliane; Samaniego, Luis; Clark, Martyn; Wulfmeyer, Volker; Branch, Oliver; Attinger, Sabine; Thober, Stephan

    2016-09-01

    Land surface models incorporate a large number of process descriptions, containing a multitude of parameters. These parameters are typically read from tabulated input files. Some of these parameters might be fixed numbers in the computer code, though, which hinders model agility during calibration. Here we identified 139 hard-coded parameters in the model code of the Noah land surface model with multiple process options (Noah-MP). We performed a Sobol' global sensitivity analysis of Noah-MP for a specific set of process options, which includes 42 out of the 71 standard parameters and 75 out of the 139 hard-coded parameters. The sensitivities of the hydrologic output fluxes latent heat and total runoff, as well as their component fluxes, were evaluated at 12 catchments within the United States with very different hydrometeorological regimes. Noah-MP's hydrologic output fluxes are sensitive to two thirds of its applicable standard parameters (i.e., Sobol' indexes above 1%). The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for direct evaporation, which proved to be oversensitive in other land surface models as well. Surface runoff is sensitive to almost all hard-coded parameters of the snow processes and the meteorological inputs. These parameter sensitivities diminish in total runoff. Assessing these parameters in model calibration would require detailed snow observations or the calculation of hydrologic signatures of the runoff data. Latent heat and total runoff exhibit very similar sensitivities because of their tight coupling via the water balance. A calibration of Noah-MP against either of these fluxes should therefore give comparable results. Moreover, these fluxes are sensitive to both plant and soil parameters. Calibrating, for example, only soil parameters hence limits the ability to derive realistic model parameters. It is thus recommended to include the most sensitive hard-coded model parameters exposed in this study when calibrating Noah-MP.
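
    Sobol' indices of the kind computed for Noah-MP attribute output variance to individual parameters: the first-order index of parameter i is Var(E[Y|Xi])/Var(Y). A minimal stdlib sketch of the classic pick-freeze Monte Carlo estimator, demonstrated on a toy linear model rather than on Noah-MP itself:

```python
import random

def sobol_first_order(model, dim, n=100_000, seed=0):
    """Pick-freeze Monte Carlo estimate of first-order Sobol' indices
    for a model with independent Uniform(0,1) inputs."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    yA = [model(x) for x in A]
    yB = [model(x) for x in B]
    mu2 = (sum(yA) / n) * (sum(yB) / n)                 # estimate of mean^2
    var = sum(y * y for y in yA) / n - (sum(yA) / n) ** 2
    indices = []
    for i in range(dim):
        # "Freeze" coordinate i: evaluate B-points with column i copied from A.
        yABi = [model(b[:i] + [a[i]] + b[i + 1:]) for a, b in zip(A, B)]
        cov = sum(ya * yab for ya, yab in zip(yA, yABi)) / n - mu2
        indices.append(cov / var)                       # Var(E[Y|X_i]) / Var(Y)
    return indices
```

For Y = 4*X1 + 2*X2 with uniform inputs, the exact first-order indices are 16/20 = 0.8 and 4/20 = 0.2, which the estimator recovers to within Monte Carlo error.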

  18. 49 CFR 1520.5 - Sensitive security information.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ..., including threats against cyber infrastructure. (8) Security measures. Specific details of aviation...) Critical aviation, maritime, or rail infrastructure asset information. Any list identifying systems or...

  19. 49 CFR 1520.5 - Sensitive security information.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ..., including threats against cyber infrastructure. (8) Security measures. Specific details of aviation...) Critical aviation, maritime, or rail infrastructure asset information. Any list identifying systems or...

  20. 49 CFR 1520.5 - Sensitive security information.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ..., including threats against cyber infrastructure. (8) Security measures. Specific details of aviation...) Critical aviation, maritime, or rail infrastructure asset information. Any list identifying systems or...

  1. 49 CFR 1520.5 - Sensitive security information.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., including threats against cyber infrastructure. (8) Security measures. Specific details of aviation...) Critical aviation, maritime, or rail infrastructure asset information. Any list identifying systems or...

  2. Modeling the atmospheric chemistry of TICs

    NASA Astrophysics Data System (ADS)

    Henley, Michael V.; Burns, Douglas S.; Chynwat, Veeradej; Moore, William; Plitz, Angela; Rottmann, Shawn; Hearn, John

    2009-05-01

    An atmospheric chemistry model that describes the behavior and disposition of environmentally hazardous compounds discharged into the atmosphere was coupled with the transport and diffusion model, SCIPUFF. The atmospheric chemistry model was developed by reducing a detailed atmospheric chemistry mechanism to a simple empirical effective degradation rate term (keff) that is a function of important meteorological parameters such as solar flux, temperature, and cloud cover. Empirically derived keff functions that describe the degradation of target toxic industrial chemicals (TICs) were derived by statistically analyzing data generated from the detailed chemistry mechanism run over a wide range of (typical) atmospheric conditions. To assess and identify areas to improve the developed atmospheric chemistry model, sensitivity and uncertainty analyses were performed to (1) quantify the sensitivity of the model output (TIC concentrations) with respect to changes in the input parameters and (2) improve, where necessary, the quality of the input data based on sensitivity results. The model predictions were evaluated against experimental data. Chamber data were used to remove the complexities of dispersion in the atmosphere.
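
    The reduction described above replaces the detailed mechanism with first-order decay, C(t) = C0 * exp(-keff * t), where keff depends on meteorological inputs. A hedged sketch of how such a term might be applied; the functional form and coefficients below are invented placeholders, not the paper's fitted functions:

```python
import math

def keff(temp_k, solar_flux, cloud_cover):
    """Hypothetical empirical effective degradation rate (1/s) as a function
    of temperature, normalized solar flux and cloud-cover fraction.
    Illustrative form only; real keff functions are fit to the detailed mechanism."""
    return 1e-5 * math.exp(-2000.0 / temp_k) * (1.0 + 0.5 * solar_flux * (1.0 - cloud_cover))

def concentration(c0, t_s, temp_k, solar_flux, cloud_cover):
    """First-order decay of a TIC concentration over t_s seconds."""
    return c0 * math.exp(-keff(temp_k, solar_flux, cloud_cover) * t_s)
```

In the coupled system, a transport model such as SCIPUFF would apply this decay to each puff alongside dispersion.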

  3. Innovative Ge Quantum Dot Functional Sensing/Metrology Devices

    DTIC Science & Technology

    2015-05-20

    sensitive to charge number and local temperature with unprecedented precision. Accordingly we have made progresses in the innovative functionalities...sensors feature excellent sensitivity on charge number, local temperature, and photoresponsivity in the visible to near IR wavelength.  “Designer” Ge...Detailed knowledge and understanding of how the QDs are created, and especially their interactions with their local environments are therefore crucial to

  4. Neutrinos from Hell: the Dawn of Neutrino Geophysics

    ScienceCinema

    Gratta, Giorgio

    2018-02-26

    Seismic waves have long been the only messengers reporting on the conditions deep inside the Earth. While global seismology provides amazing detail about the structure of our planet, it is only sensitive to the mechanical properties of rocks and not to their chemical composition. In the last 5 years, KamLAND and Borexino have started measuring anti-neutrinos produced by uranium and thorium inside the Earth. Such "geoneutrinos" double the number of tools available to study the Earth's interior, enabling a sort of global chemical analysis of the planet, albeit for two elements only. I will discuss the results of these new measurements and put them in the context of the Earth sciences.

  5. Organizing and addressing magnetic molecules.

    PubMed

    Gatteschi, Dante; Cornia, Andrea; Mannini, Matteo; Sessoli, Roberta

    2009-04-20

    Magnetic molecules ranging from simple organic radicals to single-molecule magnets (SMMs) are intensively investigated for their potential applications in molecule-based information storage and processing. The goal of this Article is to review recent achievements in the organization of magnetic molecules on surfaces and in their individual probing and manipulation. We stress that the inherent fragility and redox sensitivity of most SMM complexes, combined with the noninnocent role played by the substrate, ask for a careful evaluation of the structural and electronic properties of deposited molecules going beyond routine methods for surface analysis. Detailed magnetic information can be directly obtained using X-ray magnetic circular dichroism or newly emerging scanning probe techniques with magnetic detection capabilities.

  6. Statistical theory and methodology for remote sensing data analysis with special emphasis on LACIE

    NASA Technical Reports Server (NTRS)

    Odell, P. L.

    1975-01-01

    Crop proportion estimators for determining crop acreage through the use of remote sensing were evaluated. Several studies of these estimators were conducted, including an empirical comparison of the different estimators (using actual data) and an empirical study of the sensitivity (robustness) of the class of mixture estimators. The effect of missing data upon crop classification procedures is discussed in detail including a simulation of the missing data effect. The final problem addressed is that of taking yield data (bushels per acre) gathered at several yield stations and extrapolating these values over some specified large region. Computer programs developed in support of some of these activities are described.

  7. Disposable Screen Printed Electrochemical Sensors: Tools for Environmental Monitoring

    PubMed Central

    Hayat, Akhtar; Marty, Jean Louis

    2014-01-01

    Screen printing technology is a widely used technique for the fabrication of electrochemical sensors. This methodology is likely to underpin the progressive drive towards miniaturized, sensitive and portable devices, and has already established its route from “lab-to-market” for a plethora of sensors. The application of these sensors for analysis of environmental samples has been the major focus of research in this field. As a consequence, this work will focus on recent important advances in the design and fabrication of disposable screen printed sensors for the electrochemical detection of environmental contaminants. Special emphasis is given on sensor fabrication methodology, operating details and performance characteristics for environmental applications. PMID:24932865

  8. Structure of turbulent non-premixed flames modeled with two-step chemistry

    NASA Technical Reports Server (NTRS)

    Chen, J. H.; Mahalingam, S.; Puri, I. K.; Vervisch, L.

    1992-01-01

    Direct numerical simulations of turbulent diffusion flames modeled with finite-rate, two-step chemistry, A + B yields I, A + I yields P, were carried out. A detailed analysis of the turbulent flame structure reveals the complex nature of the penetration of various reactive species across two reaction zones in mixture fraction space. Due to this two zone structure, these flames were found to be robust, resisting extinction over the parameter ranges investigated. As in single-step computations, mixture fraction dissipation rate and the mixture fraction were found to be statistically correlated. Simulations involving unequal molecular diffusivities suggest that the small scale mixing process and, hence, the turbulent flame structure is sensitive to the Schmidt number.

  9. Protoplanetary disk observations in the ALMA era

    NASA Astrophysics Data System (ADS)

    Salyk, Colette

    2018-06-01

    In this talk, I’ll discuss how ALMA is advancing our understanding of protoplanetary disks with its unprecedented sensitivity and spatial resolution. In particular, I’ll focus on how ALMA is providing our first detailed view of gas-phase chemistry in giant planet forming regions, allowing us to test our ideas about how planets develop their diverse characteristics. Interpretation of these spectroscopic datasets requires sophisticated modeling tools and accurate laboratory data, as protoplanetary disks are ever-evolving environments that span a large range in density, temperature, and radiation field. I’ll discuss some recent results that highlight the important interplay between modeling and data analysis/interpretation, and suggest research directions that ALMA is likely to pursue going forward.

  10. Conformations and charge distributions of diazocyclopropanes

    NASA Astrophysics Data System (ADS)

    Borges, Itamar, Jr.

    Three diazo-substituted cyclopropane compounds, which have been suggested as new potential high-energy compounds, were studied employing the B3LYP-DFT/6-31G(d,p) method. Geometries were optimized. Distributed multipole analysis, computed from the B3LYP-DFT/6-31G(d,p) density matrix, was used to describe the details of the molecular charge distribution of the three molecules. It was verified that electron withdrawal from the ring C atoms and charge build-up on the N atoms bonded to the ring increased with the number of diazo groups. These effects were related to increased sensitivity to impact and the ease of C-N bond breaking in the three compounds.

  11. The sensitivity and specificity of the Middlesex Elderly Assessment of Mental State (MEAMS) for detecting cognitive impairment after stroke.

    PubMed

    Cartoni, A; Lincoln, N B

    2005-03-01

    The aim of the study was to assess the sensitivity and specificity of the MEAMS (Golding, 1989) for detecting cognitive impairment after stroke. Stroke patients admitted to hospital received a cognitive screening assessment, the MEAMS, and a detailed cognitive assessment. The information obtained from the detailed assessment was summarised in a structured written report. From the conclusions in these reports, patients were classified as "impaired" or "not impaired" in perception, memory, executive function and language. The sensitivity and specificity of the MEAMS subtests and of the overall number of tests passed were determined in relation to the presence of impairment, as given in the overall conclusion of the written reports. There were 30 stroke patients, aged 58 to 92 (mean 75.80, SD 7.94) years; 17 were men and 13 were women. The sensitivity of the MEAMS subtests ranged from 11% to 100% and the specificity ranged from 69% to 100%. The sensitivity of the overall MEAMS score was 52% and the specificity was 100%, using a cut-off score of 3 or more fails to indicate impairment. Three subtests, Orientation, Naming and Unusual Views, had 81% sensitivity and 50% specificity for detecting problems in language, perception or memory. The MEAMS was not a sensitive screen for overall cognitive impairment or for memory, perceptual, language, or executive function problems after stroke, but it was specific. Although screening for cognitive impairment is important, the MEAMS is not recommended as the sole method, as it produces an unacceptably high false-negative rate.
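
    Sensitivity and specificity of a screening test against a detailed reference assessment, as reported for the MEAMS above, reduce to simple counts over the confusion matrix. A minimal sketch with toy data (not the study's patients):

```python
def sensitivity_specificity(screen_positive, truly_impaired):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP), given parallel
    boolean lists of screening results and reference-standard classifications."""
    tp = sum(s and t for s, t in zip(screen_positive, truly_impaired))
    fn = sum((not s) and t for s, t in zip(screen_positive, truly_impaired))
    tn = sum((not s) and (not t) for s, t in zip(screen_positive, truly_impaired))
    fp = sum(s and (not t) for s, t in zip(screen_positive, truly_impaired))
    return tp / (tp + fn), tn / (tn + fp)
```

A low sensitivity with high specificity, as found for the MEAMS, means many impaired patients screen negative (false negatives) while unimpaired patients rarely screen positive.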

  12. Extended Testability Analysis Tool

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin; Maul, William A.; Fulton, Christopher

    2012-01-01

    The Extended Testability Analysis (ETA) Tool is a software application that supports fault management (FM) by performing testability analyses on the fault propagation model of a given system. Fault management includes the prevention of faults through robust design margins and quality assurance methods, or the mitigation of system failures. Fault management requires an understanding of the system design and operation, potential failure mechanisms within the system, and the propagation of those potential failures through the system. The purpose of the ETA Tool software is to process the testability analysis results from a commercial software program called TEAMS Designer in order to provide a detailed set of diagnostic assessment reports. The ETA Tool is a command-line process with several user-selectable report output options. The ETA Tool also extends the COTS testability analysis and enables variation studies of sensor-sensitivity impacts on system diagnostics and component isolation using a single testability output. The ETA Tool can also provide extended analyses from a single set of testability output files. The following analysis reports are available to the user: (1) the Detectability Report provides a breakdown of how each tested failure mode was detected; (2) the Test Utilization Report identifies all the failure modes that each test detects; (3) the Failure Mode Isolation Report demonstrates the system's ability to discriminate between failure modes; (4) the Component Isolation Report demonstrates the system's ability to discriminate between failure modes relative to the components containing the failure modes; (5) the Sensor Sensitivity Analysis Report shows the diagnostic impact due to loss of sensor information; and (6) the Effect Mapping Report identifies failure modes that result in specified system-level effects.

  13. Computational analysis of a multistage axial compressor

    NASA Astrophysics Data System (ADS)

    Mamidoju, Chaithanya

    Turbomachines are used extensively in the aerospace, power generation, and oil & gas industries. Efficiency of these machines is often an important factor and has led to a continuous effort to improve designs to achieve better efficiency. The axial-flow compressor is a major component of a gas turbine, with the turbine's overall performance depending strongly on compressor performance. Traditional analysis of axial compressors involves throughflow calculations, isolated blade-passage analysis, quasi-3D blade-to-blade analysis, single-stage (rotor-stator) analysis, and multi-stage analysis involving larger design cycles. In the current study, the detailed flow through a 15-stage axial compressor is analyzed using a 3-D Navier-Stokes CFD solver in a parallel computing environment. A methodology is described for steady-state (frozen rotor-stator) analysis of one blade passage per component. Various effects such as mesh type and density, boundary conditions, and tip clearance, as well as numerical issues such as turbulence-model choice, advection-model choice, and parallel-processing performance, are analyzed. A high sensitivity of the predictions to the above was found. Physical explanations for the flow features observed in the computational study are given. The total pressure rise versus mass flow rate was computed.

  14. Developing a methodology for the inverse estimation of root architectural parameters from field based sampling schemes

    NASA Astrophysics Data System (ADS)

    Morandage, Shehan; Schnepf, Andrea; Vanderborght, Jan; Javaux, Mathieu; Leitner, Daniel; Laloy, Eric; Vereecken, Harry

    2017-04-01

    Root traits are increasingly important in the breeding of new crop varieties; longer and fewer lateral roots, for example, are suggested to improve the drought resistance of wheat. Detailed root architectural parameters are therefore important. However, classical field sampling of roots only provides aggregated information such as root length density (coring), root counts per area (trenches), or root arrival curves at certain depths (rhizotubes). We investigate the possibility of obtaining information about the root system architecture of plants from classical field-based root sampling schemes, using sensitivity analysis and inverse parameter estimation. This methodology was developed in a virtual experiment in which a root architectural model, parameterized for winter wheat, was used to simulate root system development in a field. This provided the ground truth, which is normally unknown in a real field experiment. The three sampling schemes (coring, trenching, and rhizotubes) were applied virtually and the aggregated information was computed. The Morris one-at-a-time (OAT) global sensitivity analysis method was then performed to determine the most sensitive parameters of the root architecture model for the three different sampling methods. The estimated means and standard deviations of the elementary effects of a total of 37 parameters were evaluated. Upper and lower bounds of the parameters were obtained from the literature and published data on winter wheat root architectural parameters. Root length density profiles from coring, arrival curve characteristics observed in rhizotubes, and root counts in grids of the trench profile method were evaluated statistically to investigate the influence of each parameter using five different error functions. The number of branches, insertion angle, inter-nodal distance, and elongation rates are the most sensitive parameters, and parameter sensitivity varies slightly with depth. Most parameters, and their interactions with other parameters, have a highly nonlinear effect on the model output. The most sensitive parameters will be subject to inverse estimation from the virtual field sampling data using the DREAMzs algorithm. The estimated parameters can then be compared with the ground truth in order to determine the suitability of the sampling schemes for identifying specific traits or parameters of the root growth model.
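    The Morris OAT screening described above can be illustrated with a short generic sketch. This is a hypothetical implementation of elementary effects on a toy model, not the authors' root-architecture model or code: each parameter is perturbed one at a time from random base points, and the mean absolute elementary effect (mu*) ranks sensitivity while the standard deviation flags nonlinearity and interactions.

```python
import random

def morris_elementary_effects(model, bounds, r=20, delta=0.5):
    """Morris OAT screening: returns (mu_star, sigma) per parameter.

    model  -- callable taking a list of parameter values
    bounds -- list of (low, high) per parameter
    r      -- number of one-at-a-time trajectories
    delta  -- perturbation size as a fraction of each parameter range
    """
    k = len(bounds)
    effects = [[] for _ in range(k)]
    for _ in range(r):
        # random base point, kept low enough that the +delta step stays in bounds
        x = [lo + random.random() * (hi - lo) * (1 - delta) for lo, hi in bounds]
        y = model(x)
        for i in range(k):  # perturb one parameter at a time
            lo, hi = bounds[i]
            xi = list(x)
            xi[i] += delta * (hi - lo)
            # elementary effect, scaled to the unit-hypercube step delta
            effects[i].append((model(xi) - y) / delta)
    stats = []
    for ee in effects:
        mu = sum(ee) / len(ee)
        mu_star = sum(abs(e) for e in ee) / len(ee)
        sigma = (sum((e - mu) ** 2 for e in ee) / (len(ee) - 1)) ** 0.5
        stats.append((mu_star, sigma))
    return stats
```

    For a purely linear model the sigma column is zero; large sigma relative to mu* is the signature of the nonlinear and interaction effects the study reports.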

  15. A Bayesian network meta-analysis for binary outcome: how to do it.

    PubMed

    Greco, Teresa; Landoni, Giovanni; Biondi-Zoccai, Giuseppe; D'Ascenzo, Fabrizio; Zangrillo, Alberto

    2016-10-01

    This study presents an overview of conceptual and practical issues of a network meta-analysis (NMA), particularly focusing on its application to randomised controlled trials with a binary outcome of interest. We start from general considerations on NMA to specifically appraise how to collect study data, structure the analytical network and specify the requirements for different models and parameter interpretations, with the ultimate goal of providing physicians and clinician-investigators with a practical tool to understand the pros and cons of NMA. Specifically, we outline the key steps, from the literature search to sensitivity analysis, necessary to perform a valid NMA of binomial data, exploiting Markov Chain Monte Carlo approaches. We also apply this analytical approach to a case study on the beneficial effects of volatile agents compared to total intravenous anaesthetics for surgery to further clarify the statistical details of the models, diagnostics and computations. Finally, datasets and models for the freeware WinBUGS package are presented for the anaesthetic agent example. © The Author(s) 2013.
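    As a toy illustration of the Markov Chain Monte Carlo machinery underlying such binomial models, the sketch below samples the posterior log-odds ratio for a single two-arm comparison using a random-walk Metropolis step. It is a deliberately minimal, hypothetical analogue (flat priors, two arms, no heterogeneity, no network structure) of the full NMA models the paper supplies for WinBUGS.

```python
import math
import random

def metropolis_log_or(events, totals, n_iter=20000, step=0.2, seed=1):
    """Posterior draws of the log-odds ratio d for a two-arm binomial
    comparison: logit(p0) = mu, logit(p1) = mu + d, flat priors."""
    rng = random.Random(seed)

    def log_lik(mu, d):
        ll = 0.0
        for arm, (r, n) in enumerate(zip(events, totals)):
            eta = mu + (d if arm == 1 else 0.0)
            p = 1.0 / (1.0 + math.exp(-eta))
            ll += r * math.log(p) + (n - r) * math.log(1.0 - p)
        return ll

    mu, d = 0.0, 0.0
    ll = log_lik(mu, d)
    draws = []
    for it in range(n_iter):
        # random-walk proposal on both parameters
        mu_p = mu + rng.gauss(0.0, step)
        d_p = d + rng.gauss(0.0, step)
        ll_p = log_lik(mu_p, d_p)
        # Metropolis accept/reject
        if rng.random() < math.exp(min(0.0, ll_p - ll)):
            mu, d, ll = mu_p, d_p, ll_p
        if it >= n_iter // 2:  # discard first half as burn-in
            draws.append(d)
    return draws
```

    With 30/100 events in the control arm and 15/100 in the treatment arm, the posterior for d concentrates near the sample log-odds ratio of about -0.89; real NMA software adds the multi-arm, multi-treatment consistency equations on top of this same likelihood.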

  16. Reusable Rocket Engine Operability Modeling and Analysis

    NASA Technical Reports Server (NTRS)

    Christenson, R. L.; Komar, D. R.

    1998-01-01

    This paper describes the methodology, model, input data, and analysis results of a reusable launch vehicle engine operability study conducted with the goal of supporting design from an operations perspective. Paralleling performance analyses in schedule and method, this requires the use of metrics in a validated operations model useful for design, sensitivity, and trade studies. Operations analysis in this view is one of several design functions. An operations concept was developed given an engine concept, and the predicted operations and maintenance processes were incorporated into simulation models. Historical operations data at a level of detail suitable to the model objectives were collected, analyzed, and formatted for use with the models; the simulations were run; and the results were collected and presented. The input data used included scheduled and unscheduled timeline and resource information collected into a Space Transportation System (STS) Space Shuttle Main Engine (SSME) historical launch operations database. The results underscore the importance not only of reliable hardware but also of improvements to operations and corrective maintenance processes.

  17. AVR Microcontroller-based automated technique for analysis of DC motors

    NASA Astrophysics Data System (ADS)

    Kaur, P.; Chatterji, S.

    2014-01-01

    This paper provides essential information on the development of a 'dc motor test and analysis control card' using an AVR series ATMega32 microcontroller. This card can be interfaced to a PC, calculates parameters such as motor losses and efficiency, and plots characteristics for dc motors. Presently, different tests and methods are available to evaluate motor parameters, but a single, universal, user-friendly automated set-up is discussed in this paper. It was accomplished by designing data acquisition and SCR bridge firing hardware based on the AVR ATMega32 microcontroller. This hardware has the capability to drive phase-controlled rectifiers and acquire real-time values of the current, voltage, temperature and speed of the motor. The various analyses feasible with the designed hardware are of immense importance for dc motor manufacturers and quality-sensitive users. Through this paper, the authors aim to provide details of this AVR-based hardware, which can be used for dc motor parameter analysis and also for motor control applications.

  18. Transmission Spectra of HgTe-Based Quantum Wells and Films in the Far-Infrared Range

    NASA Astrophysics Data System (ADS)

    Savchenko, M. L.; Vasil'ev, N. N.; Yaroshevich, A. S.; Kozlov, D. A.; Kvon, Z. D.; Mikhailov, N. N.; Dvoretskii, S. A.

    2018-04-01

    Strained 80-nm-thick HgTe films belong to a new class of materials referred to as three-dimensional topological insulators (i.e., they have a bulk band gap and spin-nondegenerate surface states). Although a number of studies have been devoted to the analysis of the properties of surface states using both transport and magnetooptical techniques in the THz range, direct optical transitions between bulk and surface bands in these systems have not been reported. This study is devoted to the analysis of transmission and reflection spectra of HgTe films of different thicknesses in the far-infrared range, recorded over a wide temperature range in order to detect the above interband transitions. A feature at 15 meV, sensitive to changes in temperature, is observed in both types of spectra. Detailed analysis of the data revealed that this feature is related to absorption by HgTe optical phonons, while the interband optical transitions are suppressed.

  19. Google matrix of the world network of economic activities

    NASA Astrophysics Data System (ADS)

    Kandiah, Vivek; Escaith, Hubert; Shepelyansky, Dima L.

    2015-07-01

    Using new data from the OECD-WTO world network of economic activities, we construct the Google matrix G of this directed network and perform its detailed analysis. The network contains 58 countries and 37 activity sectors for the years 1995 and 2008. The construction of G, based on Markov chain transitions, treats all countries on equal democratic grounds, while the contribution of activity sectors is proportional to their exchange monetary volume. The Google matrix analysis yields a reliable ranking of countries and activity sectors and determines the sensitivity of the CheiRank-PageRank commercial balance of countries with respect to price variations and labor costs in various countries. We demonstrate that the developed approach takes into account the multiplicity of network links describing economic interactions between countries and activity sectors, and is thus more efficient than the usual export-import analysis. The spectrum and eigenstates of G are also analyzed and related to specific activity communities of countries.
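    The construction of G and the ranking computation can be sketched generically. The following hypothetical example builds the Google matrix from a directed adjacency matrix, treating all nodes uniformly (the paper additionally weights sectors by monetary volume), and finds the PageRank vector by power iteration.

```python
def google_matrix_pagerank(adj, alpha=0.85, tol=1e-10):
    """PageRank via the Google matrix G = alpha*S + (1-alpha)/N,
    where S is the column-stochastic link matrix and columns of
    dangling nodes are replaced by uniform columns.

    adj[j][i] is nonzero when node j links to node i.
    """
    n = len(adj)
    out = [sum(row) for row in adj]  # out-degree of each node
    p = [1.0 / n] * n                # start from the uniform vector
    while True:
        new = []
        for i in range(n):
            s = 0.0
            for j in range(n):
                if out[j]:
                    s += (adj[j][i] / out[j]) * p[j]
                else:
                    s += p[j] / n    # dangling node spreads uniformly
            new.append(alpha * s + (1 - alpha) / n)
        if sum(abs(a - b) for a, b in zip(new, p)) < tol:
            return new
        p = new
```

    The damping term (1-alpha)/N guarantees a unique stationary vector; CheiRank is obtained the same way on the network with all links reversed.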

  20. Use of valence band Auger electron spectroscopy to study thin film growth: oxide and diamond-like carbon films

    NASA Astrophysics Data System (ADS)

    Steffen, H. J.

    1994-12-01

    It is demonstrated how Auger line shape analysis with factor analysis (FA), least-squares fitting and even simple peak height measurements may provide detailed information about the composition, different chemical states and also defect concentration or crystal order. Advantage is taken of the capability of Auger electron spectroscopy to give valence band structure information with high surface sensitivity and the special aspect of FA to identify and discriminate quantitatively unknown chemical species. Valence band spectra obtained from Ni, Fe, Cr and NiFe40Cr20 during oxygen exposure at room temperature reveal the oxidation process in the initial stage of the thin layer formation. Furthermore, the carbon chemical states that were formed during low energy C(+) and Ne(+) ion irradiation of graphite are delineated and the evolution of an amorphous network with sp3 bonds is disclosed. The analysis represents a unique method to quantify the fraction of sp3-hybridized carbon in diamond-like materials.

  1. Enhancing Security of Double Random Phase Encoding Based on Random S-Box

    NASA Astrophysics Data System (ADS)

    Girija, R.; Singh, Hukum

    2018-06-01

    In this paper, we propose a novel asymmetric cryptosystem for double random phase encoding (DRPE) using a random S-Box. Because an S-Box used alone is not reliable and DRPE by itself lacks non-linearity, our system unites the effectiveness of the S-Box with an asymmetric DRPE system (through the Fourier transform). The uniqueness of the proposed cryptosystem lies in employing a highly sensitive dynamic S-Box in the DRPE system. The randomness and scalability achieved by the applied technique are additional features of the proposed solution. The strength of the random S-Box is investigated in terms of performance parameters such as non-linearity, the strict avalanche criterion, the bit independence criterion, and linear and differential approximation probabilities. S-Boxes convey non-linearity to cryptosystems, a significant parameter that is essential for DRPE. The strength of the proposed cryptosystem has been analysed using various measures such as MSE, PSNR, correlation coefficient analysis, noise analysis, and SVD analysis. Experimental results are presented in detail to show that the proposed cryptosystem is highly secure.

  2. Progress on automated data analysis algorithms for ultrasonic inspection of composites

    NASA Astrophysics Data System (ADS)

    Aldrin, John C.; Forsyth, David S.; Welter, John T.

    2015-03-01

    Progress is presented on the development and demonstration of automated data analysis (ADA) software to address the burden in interpreting ultrasonic inspection data for large composite structures. The automated data analysis algorithm is presented in detail, which follows standard procedures for analyzing signals for time-of-flight indications and backwall amplitude dropout. New algorithms have been implemented to reliably identify indications in time-of-flight images near the front and back walls of composite panels. Adaptive call criteria have also been applied to address sensitivity to variation in backwall signal level, panel thickness variation, and internal signal noise. ADA processing results are presented for a variety of test specimens that include inserted materials and discontinuities produced under poor manufacturing conditions. Software tools have been developed to support both ADA algorithm design and certification, producing a statistical evaluation of indication results and false calls using a matching process with predefined truth tables. Parametric studies were performed to evaluate detection and false call results with respect to varying algorithm settings.
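    The certification step, matching indications against predefined truth tables, can be sketched as follows. This is a hypothetical, simplified scorer (greedy positional matching within a tolerance), not the authors' software: matched indications count as hits, unmatched indications as false calls, and unmatched truth entries as misses.

```python
def score_indications(indications, truth, tol=1.0):
    """Match ADA indications (x, y) to truth-table entries (x, y)
    and report hits, misses, false calls, and probability of detection."""
    remaining = list(truth)
    hits, false_calls = 0, 0
    for x, y in indications:
        match = None
        for t in remaining:
            if abs(x - t[0]) <= tol and abs(y - t[1]) <= tol:
                match = t
                break
        if match is not None:
            hits += 1
            remaining.remove(match)  # each truth entry matched at most once
        else:
            false_calls += 1
    misses = len(remaining)
    pod = hits / len(truth) if truth else 0.0  # probability of detection
    return {"hits": hits, "misses": misses,
            "false_calls": false_calls, "pod": pod}
```

    Running such a scorer over many specimens while sweeping algorithm settings gives exactly the kind of detection-versus-false-call parametric study the abstract describes.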

  3. Analysis and implementation of a space resolving spherical crystal spectrometer for x-ray Thomson scattering experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harding, E. C.; Ao, T.; Bailey, J. E.

    2015-04-15

    The application of a space-resolving spectrometer to X-ray Thomson Scattering (XRTS) experiments has the potential to advance the study of warm dense matter. This has motivated the design of a spherical crystal spectrometer, which is a doubly focusing geometry with an overall high sensitivity and the capability of providing high-resolution, space-resolved spectra. A detailed analysis of the image fluence and crystal throughput in this geometry is carried out and analytical estimates of these quantities are presented. This analysis informed the design of a new spectrometer intended for future XRTS experiments on the Z-machine. The new spectrometer collects 6 keV x-rays with a spherically bent Ge (422) crystal and focuses the collected x-rays onto the Rowland circle. The spectrometer was built and then tested with a foam target. The resulting high-quality spectra prove that a spherical spectrometer is a viable diagnostic for XRTS experiments.

  4. Status of GeoTASO Trace Gas Data Analysis for the KORUS-AQ Campaign

    NASA Astrophysics Data System (ADS)

    Janz, S. J.; Nowlan, C. R.; Lamsal, L. N.; Kowalewski, M. G.; Judd, L. M.; Wang, J.

    2017-12-01

    The Geostationary Trace gas and Aerosol Sensor Optimization (GeoTASO) instrument measures spectrally resolved backscattered solar radiation at high spatial resolution. The instrument completed 30 sorties on board the NASA LaRC UC-12 aircraft during the KORUS-AQ deployment in May-June of 2016. GeoTASO collects spatially resolved spectra with sufficient sensitivity to retrieve column amounts of the trace gas molecules NO2, SO2, H2CO, O3, and C2H2O2 as well as aerosol products. Typical product retrievals are done in 250 m2 bins with multiple overpasses of key ground sites, allowing for detailed spatio-temporal analysis. Flight patterns consisted of both contiguous overlapping grid patterns to simulate satellite observational strategies in support of future geostationary satellite algorithm development, and "race-track" sampling to perform calibration and validation with the in-situ DC-8 platform as well as ground based assets. We will summarize the status of the radiance data set as well as ongoing analysis from our co-Investigators.

  5. Analysis and implementation of a space resolving spherical crystal spectrometer for x-ray Thomson scattering experiments.

    PubMed

    Harding, E C; Ao, T; Bailey, J E; Loisel, G; Sinars, D B; Geissel, M; Rochau, G A; Smith, I C

    2015-04-01

    The application of a space-resolving spectrometer to X-ray Thomson Scattering (XRTS) experiments has the potential to advance the study of warm dense matter. This has motivated the design of a spherical crystal spectrometer, which is a doubly focusing geometry with an overall high sensitivity and the capability of providing high-resolution, space-resolved spectra. A detailed analysis of the image fluence and crystal throughput in this geometry is carried out and analytical estimates of these quantities are presented. This analysis informed the design of a new spectrometer intended for future XRTS experiments on the Z-machine. The new spectrometer collects 6 keV x-rays with a spherically bent Ge (422) crystal and focuses the collected x-rays onto the Rowland circle. The spectrometer was built and then tested with a foam target. The resulting high-quality spectra prove that a spherical spectrometer is a viable diagnostic for XRTS experiments.

  6. Spatial delineation, fluid-lithology characterization, and petrophysical modeling of deepwater Gulf of Mexico reservoirs through joint AVA deterministic and stochastic inversion of three-dimensional partially-stacked seismic amplitude data and well logs

    NASA Astrophysics Data System (ADS)

    Contreras, Arturo Javier

    This dissertation describes a novel Amplitude-versus-Angle (AVA) inversion methodology to quantitatively integrate pre-stack seismic data, well logs, geologic data, and geostatistical information. Deterministic and stochastic inversion algorithms are used to characterize flow units of deepwater reservoirs located in the central Gulf of Mexico. A detailed fluid/lithology sensitivity analysis was conducted to assess the nature of AVA effects in the study area. Standard AVA analysis indicates that the shale/sand interface represented by the top of the hydrocarbon-bearing turbidite deposits generate typical Class III AVA responses. Layer-dependent Biot-Gassmann analysis shows significant sensitivity of the P-wave velocity and density to fluid substitution, indicating that presence of light saturating fluids clearly affects the elastic response of sands. Accordingly, AVA deterministic and stochastic inversions, which combine the advantages of AVA analysis with those of inversion, have provided quantitative information about the lateral continuity of the turbidite reservoirs based on the interpretation of inverted acoustic properties and fluid-sensitive modulus attributes (P-Impedance, S-Impedance, density, and LambdaRho, in the case of deterministic inversion; and P-velocity, S-velocity, density, and lithotype (sand-shale) distributions, in the case of stochastic inversion). The quantitative use of rock/fluid information through AVA seismic data, coupled with the implementation of co-simulation via lithotype-dependent multidimensional joint probability distributions of acoustic/petrophysical properties, provides accurate 3D models of petrophysical properties such as porosity, permeability, and water saturation. Pre-stack stochastic inversion provides more realistic and higher-resolution results than those obtained from analogous deterministic techniques. Furthermore, 3D petrophysical models can be more accurately co-simulated from AVA stochastic inversion results. 
By combining AVA sensitivity analysis techniques with pre-stack stochastic inversion, geologic data, and awareness of inversion pitfalls, it is possible to substantially reduce the risk in exploration and development of conventional and non-conventional reservoirs. From the final integration of deterministic and stochastic inversion results with depositional models and analogous examples, the M-series reservoirs have been interpreted as stacked terminal turbidite lobes within an overall fan complex (the Miocene MCAVLU Submarine Fan System); this interpretation is consistent with previous core data interpretations and regional stratigraphic/depositional studies.

  7. Long-term safety assessment of trench-type surface repository at Chernobyl, Ukraine - computer model and comparison with results from simplified models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haverkamp, B.; Krone, J.; Shybetskyi, I.

    2013-07-01

    The Radioactive Waste Disposal Facility (RWDF) Buryakovka was constructed in 1986 as part of the intervention measures after the accident at Chernobyl NPP (ChNPP). Today, the surface repository for solid low and intermediate level waste (LILW) is still being operated but its maximum capacity is nearly reached. Long-existing plans for increasing the capacity of the facility shall be implemented in the framework of the European Commission INSC Programme (Instrument for Nuclear Safety Co-operation). Within the first phase of this project, DBE Technology GmbH prepared a safety analysis report of the facility in its current state (SAR) and a preliminary safety analysis report (PSAR) for a future extended facility based on the planned enlargement. In addition to a detailed mathematical model, also simplified models have been developed to verify results of the former one and enhance confidence in the results. Comparison of the results show that - depending on the boundary conditions - simplifications like modeling the multi trench repository as one generic trench might have very limited influence on the overall results compared to the general uncertainties associated with respective long-term calculations. In addition to their value in regard to verification of more complex models which is important to increase confidence in the overall results, such simplified models can also offer the possibility to carry out time consuming calculations like probabilistic calculations or detailed sensitivity analysis in an economic manner. (authors)

  8. Investigation on improved infrared image detail enhancement algorithm based on adaptive histogram statistical stretching and gradient filtering

    NASA Astrophysics Data System (ADS)

    Zeng, Bangze; Zhu, Youpan; Li, Zemin; Hu, Dechao; Luo, Lin; Zhao, Deli; Huang, Juan

    2014-11-01

    Due to the low contrast, heavy noise, and unclear visual effect of infrared images, targets are very difficult to observe and identify. This paper presents an improved infrared image detail enhancement algorithm based on adaptive histogram statistical stretching and gradient filtering (AHSS-GF). Because the human eye is very sensitive to edges and lines, the details and textures are extracted using gradient filtering. A new histogram is acquired by summing the original histogram over a fixed window. Histogram statistical stretching is then carried out, with the minimum value as the cut-off point. After proper weights are assigned to the details and the background, the detail-enhanced result is obtained. The results indicate that image contrast can be improved and that details and textures can be enhanced effectively.
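    The statistical-stretching step can be sketched in simplified form. The code below is a loose, hypothetical illustration: it smooths the grey-level histogram over a fixed window, uses a minimum-count cutoff to choose clipping points, and linearly stretches the remaining grey levels to the 8-bit range. It omits the gradient-filtering and detail/background weighting stages of the full AHSS-GF pipeline.

```python
def stretch_histogram(pixels, window=3, cutoff=1):
    """Contrast stretch guided by a windowed histogram.

    pixels -- flat list of 8-bit grey values
    window -- width of the summing window applied to the histogram
    cutoff -- minimum windowed count for a grey level to survive clipping
    """
    hist = [0] * 256
    for v in pixels:
        hist[v] += 1
    # new histogram: sum of the original histogram over a fixed window
    half = window // 2
    smooth = [sum(hist[max(0, i - half):i + half + 1]) for i in range(256)]
    # lowest/highest grey levels whose windowed count reaches the cutoff
    lo = next(i for i in range(256) if smooth[i] >= cutoff)
    hi = next(i for i in range(255, -1, -1) if smooth[i] >= cutoff)
    if hi == lo:
        return list(pixels)  # flat image: nothing to stretch
    scale = 255.0 / (hi - lo)
    return [min(255, max(0, round((v - lo) * scale))) for v in pixels]
```

    A narrow input range such as grey levels 100-120 is mapped onto nearly the full 0-255 range, which is the contrast improvement the abstract reports before the extracted details are blended back in.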

  9. Thermal analysis of conceptual designs for GPHS/FPSE power systems of 250 We and 500 We

    NASA Technical Reports Server (NTRS)

    Mccomas, Thomas J.; Dugan, Edward T.

    1991-01-01

    Thermal analyses were performed for two distinct configurations of a proposed space nuclear power system which combines General Purpose Heat Source (GPHS) modules with state-of-the-art free-piston Stirling engines (FPSEs). The two configurations correspond to systems with power levels of 250 and 500 W(sub e). The 250 W(sub e) GPHS/FPSE power system utilizes four GPHS modules and one FPSE, and the 500 W(sub e) system contains eight GPHS modules and two FPSEs. The configurations of the systems and the bases for selecting them are described. Brief introductory sections describe the GPHS modules and free-piston Stirling engines. The primary focus of the thermal analyses is the temperature of the iridium fuel clad within the GPHS modules. A design goal temperature of 1573 K was selected as the upper limit for the fuel clad during normal operating conditions; the basis for selecting this temperature limit is discussed in detail. Results obtained from thermal analysis of the 250 W(sub e) GPHS/FPSE power system indicate fuel clad temperatures that slightly exceed the design goal temperature of 1573 K. The results are considered favorable given the numerous conservative assumptions used in developing the thermal model and performing the thermal analysis. To demonstrate the effects of this conservatism, a brief sensitivity analysis is performed in which a few key system parameters are varied to determine their effect on the fuel clad temperatures. It is concluded that analysis of a more detailed thermal model would be expected to yield fuel clad temperatures below the design goal temperature limit of 1573 K.

  10. Differential hydrogen/deuterium exchange mass spectrometry analysis of protein–ligand interactions

    PubMed Central

    Chalmers, Michael J; Busby, Scott A; Pascal, Bruce D; West, Graham M; Griffin, Patrick R

    2011-01-01

    Functional regulation of ligand-activated receptors is driven by alterations in the conformational dynamics of the protein upon ligand binding. Differential hydrogen/deuterium exchange (HDX) coupled with mass spectrometry has emerged as a rapid and sensitive approach for characterization of perturbations in conformational dynamics of proteins following ligand binding. While this technique is sensitive to detecting ligand interactions and alterations in receptor dynamics, it also can provide important mechanistic insights into ligand regulation. For example, HDX has been used to determine a novel mechanism of ligand activation of the nuclear receptor peroxisome proliferator activated receptor-γ; to perform detailed analyses of the binding modes of ligands within the ligand-binding pockets of two estrogen receptor isoforms, providing insight into selectivity; and to classify different types of estrogen receptor-α ligands by correlating their pharmacology with the way they interact with the receptor, based solely on hierarchical clustering of receptor HDX signatures. Beyond small-molecule–receptor interactions, this technique has also been applied to study protein–protein complexes, such as mapping antibody–antigen interactions. In this article, we summarize the current state of differential HDX approaches and the future outlook, and we describe how HDX analysis of protein–ligand interactions has had an impact on biology and drug discovery. PMID:21329427

  11. Bubbles, Gating, and Anesthetics in Ion Channels

    PubMed Central

    Roth, Roland; Gillespie, Dirk; Nonner, Wolfgang; Eisenberg, Robert E.

    2008-01-01

    We suggest that bubbles are the bistable hydrophobic gates responsible for the on-off transitions of single channel currents. In this view, many types of channels gate by the same physical mechanism—dewetting by capillary evaporation—but different types of channels use different sensors to modulate hydrophobic properties of the channel wall and thereby trigger and control bubbles and gating. Spontaneous emptying of channels has been seen in many simulations. Because of the physics involved, such phase transitions are inherently sensitive, unstable threshold phenomena that are difficult to simulate reproducibly and thus convincingly. We present a thermodynamic analysis of a bubble gate using morphometric density functional theory of classical (not quantum) mechanics. Thermodynamic analysis of phase transitions is generally more reproducible and less sensitive to details than simulations. Anesthetic actions of inert gases—and their interactions with hydrostatic pressure (e.g., nitrogen narcosis)—can be easily understood by actions on bubbles. A general theory of gas anesthesia may involve bubbles in channels. Only experiments can show whether, or when, or which channels actually use bubbles as hydrophobic gates: direct observation of bubbles in channels is needed. Existing experiments show thin gas layers on hydrophobic surfaces in water and suggest that bubbles nearly exist in bulk water. PMID:18234836

  12. Dynamical Analysis of bantam-Regulated Drosophila Circadian Rhythm Model

    NASA Astrophysics Data System (ADS)

    Li, Ying; Liu, Zengrong

    MicroRNAs (miRNAs) interact with elements in the 3′ untranslated regions (UTRs) of target genes to regulate mRNA stability or translation, and play a crucial role in regulating many different biological processes. bantam, a conserved miRNA, is involved in several functions, such as regulating Drosophila growth and circadian rhythm. Recently, it has been discovered that bantam plays a crucial role in the core circadian pacemaker. In this paper, based on experimental observations, a detailed dynamical model of the bantam-regulated circadian clock system is developed to show the post-transcriptional behaviors in the modulation of Drosophila circadian rhythm, in which the regulation by bantam is incorporated into a classical model. The dynamical behaviors of the model are consistent with the experimental observations, which shows that bantam is an important regulator of Drosophila circadian rhythm. Sensitivity analysis of the parameters demonstrates that with bantam regulation the system is more sensitive to perturbations, indicating that bantam regulation makes it easier for the organism to modulate its period against environmental perturbations. The effectiveness in rescuing the locomotor activity rhythms of mutated flies shows that bantam is necessary for strong and sustained rhythms. In addition, the biological mechanisms of bantam regulation are analyzed, which may help us understand more clearly how Drosophila circadian rhythm is regulated by other miRNAs.

  13. Isoschizomers and amplified fragment length polymorphism for the detection of specific cytosine methylation changes.

    PubMed

    Ruiz-García, Leonor; Cabezas, Jose Antonio; de María, Nuria; Cervera, María-Teresa

    2010-01-01

    Different molecular techniques have been developed to study either the global level of methylated cytosines or methylation at specific gene sequences. One of them is a modification of the Amplified Fragment Length Polymorphism (AFLP) technique that has been used to study methylation of anonymous CCGG sequences in different fungal, plant and animal species. The main variation of this technique is based on the use of isoschizomers with different methylation sensitivity (such as HpaII and MspI) as the frequent-cutter restriction enzyme. For each sample, AFLP analysis is performed using both EcoRI/HpaII and EcoRI/MspI digested samples. Comparative analysis between EcoRI/HpaII and EcoRI/MspI fragment patterns allows the identification of two types of polymorphisms: (1) "methylation-insensitive polymorphisms" that show common EcoRI/HpaII and EcoRI/MspI patterns but are detected as polymorphic amplified fragments among samples; and (2) "methylation-sensitive polymorphisms" that are associated with amplified fragments differing in their presence or absence or in their intensity between EcoRI/HpaII and EcoRI/MspI patterns. This chapter describes a detailed protocol of this technique and discusses modifications that can be applied to adjust the technology to different species of interest.
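    The comparative scoring of EcoRI/HpaII versus EcoRI/MspI patterns can be sketched as a simple presence/absence comparison within one sample. The function below is a hypothetical illustration (band intensities are reduced to booleans, and fragment names are invented): fragments scored identically in both digests are candidate methylation-insensitive markers, while fragments differing between the two digests flag methylation-sensitive sites.

```python
def classify_polymorphisms(hpa_pattern, msp_pattern):
    """Compare paired AFLP digests for one sample.

    hpa_pattern -- dict fragment -> present (bool) for EcoRI/HpaII
    msp_pattern -- dict fragment -> present (bool) for EcoRI/MspI
    """
    insensitive, sensitive = [], []
    for frag in sorted(set(hpa_pattern) | set(msp_pattern)):
        h = hpa_pattern.get(frag, False)
        m = msp_pattern.get(frag, False)
        # same score in both digests -> methylation-insensitive candidate;
        # differing score -> methylation-sensitive polymorphism
        (insensitive if h == m else sensitive).append(frag)
    return {"methylation_insensitive": insensitive,
            "methylation_sensitive": sensitive}
```

    In the protocol itself, the methylation-insensitive candidates are then compared across samples to find ordinary sequence polymorphisms, which this per-sample sketch does not attempt.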

  14. Derivatization reaction-based surface-enhanced Raman scattering (SERS) for detection of trace acetone.

    PubMed

    Zheng, Ying; Chen, Zhuo; Zheng, Chengbin; Lee, Yong-Ill; Hou, Xiandeng; Wu, Li; Tian, Yunfei

    2016-08-01

    A facile method was developed for the determination of trace volatile acetone by coupling a derivatization reaction to surface-enhanced Raman scattering (SERS). With iodide-modified Ag nanoparticles (Ag IMNPs) as the SERS substrate, acetone, which has no obvious Raman signal, could be converted to a SERS-sensitive species via a chemical derivatization reaction with 2,4-dinitrophenylhydrazine (2,4-DNPH). In addition, acetone can be effectively separated from the liquid phase with a purge-sampling device, significantly reducing serious interference from sample matrices. The optimal conditions for the derivatization reaction and the SERS analysis were investigated in detail, and the selectivity and reproducibility of the method were also evaluated. Under the optimal conditions, the limit of detection (LOD) for acetone was 5 mg L(-1) or 0.09 mM (3σ). The relative standard deviation (RSD) for 80 mg L(-1) acetone (n=9) was 1.7%. The method was successfully used for the determination of acetone in artificial urine and human urine samples with spiked recoveries ranging from 92% to 110%. The present method is convenient, sensitive, selective, reliable and suitable for the analysis of trace acetone, and it could have a promising clinical application in early diabetes diagnosis. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Polymer monolithic capillary microextraction combined on-line with inductively coupled plasma MS for the determination of trace rare earth elements in biological samples.

    PubMed

    Zhang, Lin; Chen, Beibei; He, Man; Hu, Bin

    2013-07-01

    A rapid and sensitive method based on polymer monolithic capillary microextraction combined on-line with microconcentric nebulization inductively coupled plasma MS has been developed for the determination of trace/ultratrace rare earth elements in biological samples. For this purpose, the iminodiacetic acid modified poly(glycidyl methacrylate-trimethylolpropane trimethacrylate) monolithic capillary was prepared and characterized by SEM and FTIR spectroscopy. Factors affecting the extraction efficiency, such as sample pH, sample flow rate, sample/eluent volume, and coexisting ions were investigated in detail. Under the optimal conditions, the LODs for rare earth elements were in the range of 0.08 (Er) to 0.97 ng/L (Nd) with a sampling frequency of 8.5 h(-1), and the RSDs were between 1.5% (Sm) and 7.4% (Nd) (c = 20 ng/L, n = 7). The proposed method was successfully applied to the analysis of trace/ultratrace rare earth elements in human urine and serum samples, and the recoveries for the spiked samples were in the range of 82-105%. The developed method was simple, rapid, sensitive, and favorable for the analysis of trace/ultratrace rare earth elements in biological samples with limited sample volume. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Polarization-polarization correlation measurement --- Experimental test of the PPCO methods

    NASA Astrophysics Data System (ADS)

    Droste, Ch.; Starosta, K.; Wierzchucka, A.; Morek, T.; Rohoziński, S. G.; Srebrny, J.; Wesolowski, E.; Bergstrem, M.; Herskind, B.

    1998-04-01

    A significant fraction of modern multidetector arrays used for "in-beam" gamma-ray spectroscopy consist of detectors that are sensitive to the linear polarization of gamma quanta. This offers the opportunity to carry out correlation measurements between gamma rays registered in polarimeters to obtain information on the spins and parities of excited nuclear states. The aim of the present work was to study the capability of the polarization-polarization correlation method (the PPCO method). The correlation between the linear polarization of one gamma quantum and that of a second quantum emitted in a cascade from an oriented nucleus (oriented by a heavy-ion reaction) was studied in detail. The appropriate formulae and methods of analysis are presented. The experimental test of the method was performed using the EUROGAM II array, whose CLOVER detectors served as polarimeters. The ^164Yb nucleus was produced via the ^138Ba(^30Si, 4n) reaction. It was found that the PPCO method, together with the standard DCO analysis and the polarization-direction correlation (PDCO) method, can be helpful for spin, parity and multipolarity assignments. The results suggest that the PPCO method can be applied to modern spectrometers in which a large number of detectors (e.g. CLOVER) are sensitive to the polarization of gamma rays.

  17. Development and validation of a whole-exome sequencing test for simultaneous detection of point mutations, indels and copy-number alterations for precision cancer care

    PubMed Central

    Rennert, Hanna; Eng, Kenneth; Zhang, Tuo; Tan, Adrian; Xiang, Jenny; Romanel, Alessandro; Kim, Robert; Tam, Wayne; Liu, Yen-Chun; Bhinder, Bhavneet; Cyrta, Joanna; Beltran, Himisha; Robinson, Brian; Mosquera, Juan Miguel; Fernandes, Helen; Demichelis, Francesca; Sboner, Andrea; Kluk, Michael; Rubin, Mark A; Elemento, Olivier

    2016-01-01

    We describe Exome Cancer Test v1.0 (EXaCT-1), the first New York State Department of Health-approved whole-exome sequencing (WES)-based test for precision cancer care. EXaCT-1 uses HaloPlex (Agilent) target enrichment followed by next-generation sequencing (Illumina) of tumour and matched constitutional control DNA. We present a detailed clinical development and validation pipeline suitable for simultaneous detection of somatic point/indel mutations and copy-number alterations (CNAs). A computational framework for data analysis, reporting and sign-out is also presented. For the validation, we tested EXaCT-1 on 57 tumours covering five distinct clinically relevant mutations. Results demonstrated elevated and uniform coverage compatible with clinical testing, as well as complete concordance in variant quality metrics between formalin-fixed paraffin-embedded and fresh-frozen tumours. Extensive sensitivity studies identified limit-of-detection thresholds for point/indel mutations and CNAs. Prospective analysis of 337 cancer cases revealed mutations in clinically relevant genes in 82% of tumours, demonstrating that EXaCT-1 is an accurate and sensitive method for identifying actionable mutations with reasonable cost and time, greatly expanding its utility for advanced cancer care. PMID:28781886

  18. Computational studies of transthoracic and transvenous defibrillation in a detailed 3-D human thorax model.

    PubMed

    Jorgenson, D B; Haynor, D R; Bardy, G H; Kim, Y

    1995-02-01

    A method for constructing and solving detailed patient-specific 3-D finite element models of the human thorax is presented for use in defibrillation studies. The method utilizes the patient's own X-ray CT scan and a simplified meshing scheme to quickly and efficiently generate a model typically composed of approximately 400,000 elements. A parameter sensitivity study on one human thorax model to examine the effects of variation in assigned tissue resistivity values, level of anatomical detail included in the model, and number of CT slices used to produce the model is presented. Of the seven tissue types examined, the average left ventricular (LV) myocardial voltage gradient was most sensitive to the values of myocardial and blood resistivity. Incorrectly simplifying the model, for example modeling the heart as a homogeneous structure by ignoring the blood in the chambers, caused the average LV myocardial voltage gradient to increase by 12%. The sensitivity of the model to variations in electrode size and position was also examined. Small changes (< 2.0 cm) in electrode position caused average LV myocardial voltage gradient values to increase by up to 12%. We conclude that patient-specific 3-D finite element modeling of human thoracic electric fields is feasible and may reduce the empiric approach to insertion of implantable defibrillators and improve transthoracic defibrillation techniques.

  19. Foil Strain Gauges Using Piezoresistive Carbon Nanotube Yarn: Fabrication and Calibration

    PubMed Central

    Góngora-Rubio, Mário R.; Kiyono, César Y.; Mello, Luis A. M.; Cardoso, Valtemar F.; Rosa, Reinaldo L. S.; Kuebler, Derek A.; Brodeur, Grace E.; Alotaibi, Amani H.; Coene, Marisa P.; Coene, Lauren M.; Jean, Elizabeth; Santiago, Rafael C.; Oliveira, Francisco H. A.; Rangel, Ricardo; Thomas, Gilles P.; Belay, Kalayu; da Silva, Luciana W.; Moura, Rafael T.; Seabra, Antonio C.; Silva, Emílio C. N.

    2018-01-01

    Carbon nanotube yarns are micron-scale fibers comprising tens of thousands of carbon nanotubes in their cross section and exhibiting piezoresistive characteristics that can be tapped to sense strain. This paper presents the details of novel foil strain gauge sensor configurations comprising carbon nanotube yarn as the piezoresistive sensing element. The foil strain gauge sensors are designed using the results of parametric studies that maximize the sensitivity of the sensors to mechanical loading. The fabrication details of the strain gauge sensors that exhibit the highest sensitivity, based on the modeling results, are described, including the materials and procedures used in the first prototypes. Details of the calibration of the foil strain gauge sensors are also provided and discussed in the context of their electromechanical characterization when bonded to metallic specimens. This characterization included studying their response under monotonic and cyclic mechanical loading. It was shown that these foil strain gauge sensors comprising carbon nanotube yarn are sensitive enough to capture strain and can replicate the loading and unloading cycles. It was also observed that the loading rate affects their piezoresistive response and that the gauge factors were all more than an order of magnitude higher than those of typical metallic foil strain gauges. Based on these calibration results on the initial sensor configurations, new foil strain gauge configurations will be designed and fabricated to increase the strain gauge factors further. PMID:29401745
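    The gauge factor that such a calibration measures is the fractional resistance change per unit strain. A minimal sketch with hypothetical values (illustrative only, not the paper's results):

```python
def gauge_factor(delta_resistance, base_resistance, strain):
    """Gauge factor GF = (dR/R0) / strain: the fractional resistance
    change divided by the applied mechanical strain."""
    return (delta_resistance / base_resistance) / strain

# Hypothetical: a 1-ohm change on a 500-ohm yarn at 100 microstrain.
gf = gauge_factor(1.0, 500.0, 100e-6)
print(round(gf, 3))
```

    Since typical metallic foil gauges have GF near 2, a yarn-based sensor whose GF computes to tens or more is consistent with the "order of magnitude higher" observation above.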

  20. Comprehensive Design Reliability Activities for Aerospace Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Christenson, R. L.; Whitley, M. R.; Knight, K. C.

    2000-01-01

    This technical publication describes the methodology, model, software tool, input data, and analysis results that support aerospace design reliability studies. The focus of these activities is on propulsion systems mechanical design reliability. The goal of these activities is to support design from a reliability perspective. Paralleling performance analyses in schedule and method, this requires the proper use of metrics in a validated reliability model useful for design, sensitivity, and trade studies. Design reliability analysis in this view is one of several critical design functions. A design reliability method is detailed and two example analyses are provided: one qualitative and the other quantitative. The use of aerospace and commercial data sources for quantification is discussed and sources are listed. A tool that was developed to support both types of analyses is presented. Finally, special topics discussed include the development of design criteria, issues of reliability quantification, quality control, and reliability verification.

  1. Analysis of edge stability for models of heat flux width

    DOE PAGES

    Makowski, Michael A.; Lasnier, Charles J.; Leonard, Anthony W.; ...

    2017-05-12

    Detailed measurements of the n_e, T_e and T_i profiles in the vicinity of the separatrix of ELMing H-mode discharges have been used to examine plasma stability at the extreme edge of the plasma and assess stability-dependent models of the heat flux width. The results are strongly contrary to the critical gradient model, which posits that a ballooning instability determines a gradient scale length related to the heat flux width. The results of this analysis are not sensitive to the choice of location at which stability is evaluated. Significantly, it is also found that the results are completely consistent with the heuristic drift model for the heat flux width. Here the edge pressure gradient scales with plasma density and is proportional to the pressure gradient inferred from the equilibrium, in accordance with the predictions of that theory.

  2. Structure of the Upper Troposphere-Lower Stratosphere (UTLS) in GEOS-5

    NASA Technical Reports Server (NTRS)

    Pawson, Steven

    2011-01-01

    This study examines the structure of the upper troposphere and lower stratosphere in the GEOS-5 data assimilation system. Near-real-time analyses, with a horizontal resolution of one-half or one-quarter degree and a vertical resolution of about 1 km in the tropopause region, are examined with an emphasis on spatial structures at and around the tropopause. The contributions of in-situ observations of temperature and of microwave and infrared radiances to the analyses are discussed, with some focus on the interplay between these types of observations. For a historical analysis (MERRA) performed with GEOS-5, the impacts of changing observations on the assimilation system are examined in some detail; this documents some aspects of the time dependence of the analyses that must be taken into account when isolating true geophysical trends. Finally, some sensitivities of the ozone analyses to input data and to correlated errors between temperature and ozone are discussed.

  3. Dynamic properties of ceramic materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grady, D.E.

    1995-02-01

    The present study offers new data and analysis on the transient shock strength and equation-of-state properties of ceramics. Various dynamic data on nine high-strength ceramics are provided, with wave profile measurements, through velocity interferometry techniques, the principal observable. Compressive failure in the shock wave front, with emphasis on brittle versus ductile mechanisms of deformation, is examined in some detail. Extensive spall strength data are provided and related to the theoretical spall strength and to energy-based theories of the spall process. Failure waves, as a mechanism of deformation in the transient shock process, are examined. Strength and equation-of-state analysis of shock data on silicon carbide, boron carbide, tungsten carbide, silicon dioxide and aluminum nitride is presented, with particular emphasis on phase transition properties for the latter two. Wave profile measurements on selected ceramics are investigated for evidence of rate-sensitive elastic precursor decay in the shock front failure process.

  4. First results from a microwave cavity axion search at 25 μeV : Analysis

    NASA Astrophysics Data System (ADS)

    Zhong, Ling; ADMX-HF Collaboration

    2017-01-01

    ADMX-HF searches for dark matter axions via Primakoff conversion into microwave photons in the gigahertz domain. Since 2012, tremendous effort has been made to build an axion detector working in this frequency range. By operating the system in a cryogen-free dilution refrigerator (T = 127 mK) and integrating a Josephson Parametric Amplifier (JPA), we obtain a sufficiently low system noise temperature to exclude axion models with g_aγγ > 2 × 10^-14 GeV^-1 over the mass range 23.55 μeV

  5. Development of a GC/Quadrupole-Orbitrap Mass Spectrometer, Part I: Design and Characterization

    PubMed Central

    2015-01-01

    Identification of unknown compounds is of critical importance in GC/MS applications (metabolomics, environmental toxin identification, sports doping, petroleomics, and biofuel analysis, among many others) and remains a technological challenge. Derivation of elemental composition is the first step to determining the identity of an unknown compound by MS, for which high accuracy mass and isotopomer distribution measurements are critical. Here, we report on the development of a dedicated, applications-grade GC/MS employing an Orbitrap mass analyzer, the GC/Quadrupole-Orbitrap. Built from the basis of the benchtop Orbitrap LC/MS, the GC/Quadrupole-Orbitrap maintains the performance characteristics of the Orbitrap, enables quadrupole-based isolation for sensitive analyte detection, and includes numerous analysis modalities to facilitate structural elucidation. We detail the design and construction of the instrument, discuss its key figures-of-merit, and demonstrate its performance for the characterization of unknown compounds and environmental toxins. PMID:25208235

  6. Engine Damage to a NASA DC-8-72 Airplane From a High-Altitude Encounter With a Diffuse Volcanic Ash Cloud

    NASA Technical Reports Server (NTRS)

    Grindle, Thomas J.; Burcham, Frank W., Jr.

    2003-01-01

    The National Aeronautics and Space Administration (NASA) DC-8 airborne sciences research airplane inadvertently flew through a diffuse volcanic ash cloud of the Mt. Hekla volcano in February 2000 during a flight from Edwards Air Force Base (Edwards, California) to Kiruna, Sweden. Although the ash plume was not visible to the flight crew, sensitive research experiments and instruments detected it. In-flight performance checks and postflight visual inspections revealed no damage to the airplane or engine first-stage fan blades; subsequent detailed examination of the engines revealed clogged turbine cooling air passages. The engines were removed and overhauled. This paper presents volcanic ash plume analysis, trajectory from satellites, analysis of ash particles collected in cabin air heat exchanger filters and removed from the engines, and data from onboard instruments and engine conditions.

  7. IRIS: a novel spectral imaging system for the analysis of cultural heritage objects

    NASA Astrophysics Data System (ADS)

    Papadakis, V. M.; Orphanos, Y.; Kogou, S.; Melessanaki, K.; Pouli, P.; Fotakis, C.

    2011-06-01

    A new portable spectral imaging system is presented, capable of acquiring high-resolution (2 Mpixel) images over the range from 380 nm up to 950 nm. The system consists of a digital color CCD camera, 15 interference filters covering the full sensitivity range of the detector, and a robust filter-changing system. The acquisition software has been developed in the "LabView" programming language, allowing easy handling and modification by end users. The system has been tested and evaluated on a series of objects of Cultural Heritage (CH) value, including paintings, encrusted stonework, ceramics, etc. This paper aims to present the system, as well as its application and advantages in the analysis of artworks, with emphasis on the detailed compositional and structural information of layered surfaces based on reflection and fluorescence spectroscopy. Specific examples will be presented and discussed on the basis of system improvements.

  8. Determination of bromine, chlorine and iodine in environmental aqueous samples by epithermal neutron activation analysis and Compton suppression

    USGS Publications Warehouse

    Landsberger, S.; O'Kelly, D. J.; Braisted, J.; Panno, S.

    2006-01-01

    Halides, particularly Br- and Cl-, have been used as indicators of potential sources of Na+ and Cl- in surface water and groundwater with limited success. Contamination of groundwater and surface water by Na+ and Cl- is a common occurrence in growing urban areas and adversely affects municipal and private water supplies in Illinois and other states, as well as vegetation in environmentally sensitive areas. Neutron activation analysis (NAA) can be effectively used to determine these halogens, but often the elevated concentrations of sodium and chlorine in water samples can give rise to very high detection limits for bromine and iodine due to elevated backgrounds from the activation process. We present a detailed analytical scheme to determine Cl, Br and I in aqueous samples with widely varying Na and Cl concentrations using epithermal NAA in conjunction with Compton suppression.

  9. Analysis of information systems for hydropower operations: Executive summary

    NASA Technical Reports Server (NTRS)

    Sohn, R. L.; Becker, L.; Estes, J.; Simonett, D.; Yeh, W.

    1976-01-01

    An analysis was performed of the operations of hydropower systems, with emphasis on water resource management, to determine how aerospace derived information system technologies can effectively increase energy output. Better utilization of water resources was sought through improved reservoir inflow forecasting based on use of hydrometeorologic information systems with new or improved sensors, satellite data relay systems, and use of advanced scheduling techniques for water release. Specific mechanisms for increased energy output were determined, principally the use of more timely and accurate short term (0-7 days) inflow information to reduce spillage caused by unanticipated dynamic high inflow events. The hydrometeorologic models used in predicting inflows were examined in detail to determine the sensitivity of inflow prediction accuracy to the many variables employed in the models, and the results were used to establish information system requirements. Sensor and data handling system capabilities were reviewed and compared to the requirements, and an improved information system concept was outlined.

  10. Translation and cross-cultural adaptation of a family booklet on comfort care in dementia: sensitive topics revised before implementation.

    PubMed

    van der Steen, Jenny T; Hertogh, Cees M P M; de Graas, Tjomme; Nakanishi, Miharu; Toscani, Franco; Arcand, Marcel

    2013-02-01

    Families of patients with dementia may need support in difficult end-of-life decision making. Such guidance may be culturally sensitive. To support families in Canada, a booklet was developed to aid decision making on palliative care issues. For reasons of cost effectiveness and promising effects, we prepared for its implementation in Italy, the Netherlands and Japan. Local teams translated and adapted the booklet to local ethical, legal and medical standards where needed, retaining guidance on palliative care. Using qualitative content analyses, we grouped and compared adaptations to understand culturally sensitive aspects. Three themes emerged: (1) relationships among patient, physician and other professionals: the authority of the physician was more explicit in adapted versions; (2) patient rights and family position: adding detail about local regulations; and (3) typology of treatments and decisions. Considerations underlying palliative care decisions were detailed (Dutch and Italian versions), the Japanese version frequently referred to professional and legal standards, and life-prolongation was a competing goal. Text on artificial feeding or fluids and euthanasia was revised extensively. Providing artificial feeding and fluids and discussing euthanasia may be particularly sensitive topics, and guidance on these subjects needs careful consideration of ethical aspects and possible adaptations to local standards and practice. The findings may promote cross-national debate on sensitive, core issues regarding end-of-life care in dementia.

  11. Optimization of geometric characteristics to improve sensing performance of MEMS piezoresistive strain sensors

    NASA Astrophysics Data System (ADS)

    Mohammed, Ahmed A. S.; Moussa, Walied A.; Lou, Edmond

    2010-01-01

    In this paper, the design of a MEMS piezoresistive strain sensor is described. ANSYS®, a finite element analysis (FEA) software package, was used as a tool to model the performance of the silicon-based sensor. The incorporation of stress concentration regions (SCRs) to localize stresses was explored in detail. This methodology employs the structural design of the sensor's silicon carrier: the strain induced in the sensing chip yields stress concentration in the vicinity of the SCRs, which was shown to enhance the sensor sensitivity. Another advantage of the SCRs is that they reduce the sensor's transverse gauge factor, offering an opportunity to develop a MEMS sensor with minimal cross sensitivity. Two basic SCR designs were studied, and the depth of the SCRs was also investigated. Moreover, FEA simulation was used to investigate the effect of the sensing element depth on the sensor sensitivity; simulation results showed that the sensitivity is independent of the piezoresistors' depth. The microfabrication process flow used to prototype the different sensor designs is introduced. The experiments covered an operating temperature range from -50 °C to +50 °C. Finally, the packaging scheme and bonding adhesive selection are discussed. The experimental results showed good agreement with the FEA simulation results. The findings of this study confirm the feasibility of introducing SCRs in the sensor silicon carrier to improve sensitivity while using relatively high doping levels (5 × 10^19 atoms cm^-3). The fabricated sensors have a gauge factor about three to four times higher than that of conventional thin-foil strain gauges.

  12. Influenza A Virus Encoding Secreted Gaussia Luciferase as Useful Tool to Analyze Viral Replication and Its Inhibition by Antiviral Compounds and Cellular Proteins

    PubMed Central

    Palanisamy, Navaneethan; Goedecke, Ulrike; Jäger, Nils; Pöhlmann, Stefan; Winkler, Michael

    2014-01-01

    Reporter genes inserted into viral genomes enable the easy and rapid quantification of virus replication, which is instrumental to efficient in vitro screening of antiviral compounds or in vivo analysis of viral spread and pathogenesis. Based on a published design, we have generated several replication competent influenza A viruses carrying either fluorescent proteins or Gaussia luciferase. Reporter activity could be readily quantified in infected cultures, but the virus encoding Gaussia luciferase was more stable than viruses bearing fluorescent proteins and was therefore analyzed in detail. Quantification of Gaussia luciferase activity in the supernatants of infected culture allowed the convenient and highly sensitive detection of viral spread, and enzymatic activity correlated with the number of infectious particles released from infected cells. Furthermore, the Gaussia luciferase encoding virus allowed the sensitive quantification of the antiviral activity of the neuraminidase inhibitor (NAI) zanamivir and the host cell interferon-inducible transmembrane (IFITM) proteins 1–3, which are known to inhibit influenza virus entry. Finally, the virus was used to demonstrate that influenza A virus infection is sensitive to a modulator of endosomal cholesterol, in keeping with the concept that IFITMs inhibit viral entry by altering cholesterol levels in the endosomal membrane. In sum, we report the characterization of a novel influenza A reporter virus, which allows fast and sensitive detection of viral spread and its inhibition, and we show that influenza A virus entry is sensitive to alterations of endosomal cholesterol levels. PMID:24842154

  13. PMT waveform modeling at the Daya Bay experiment

    NASA Astrophysics Data System (ADS)

    Sören, Jetter; Dan, Dwyer; Jiang, Wen-Qi; Liu, Da-Wei; Wang, Yi-Fang; Wang, Zhi-Min; Wen, Liang-Jian

    2012-08-01

    Detailed measurements of Hamamatsu R5912 photomultiplier signals are presented, including the single photoelectron charge response, waveform shape, nonlinearity, saturation, overshoot, oscillation, prepulsing, and afterpulsing. The results were used to build a detailed model of the PMT signal characteristics over a wide range of light intensities. Including the PMT model in simulated Daya Bay particle interactions shows no significant systematic effects that are detrimental to the experimental sensitivity.

  14. Phase-sensitive X-ray imager

    DOEpatents

    Baker, Kevin Louis

    2013-01-08

    X-ray phase sensitive wave-front sensor techniques are detailed that are capable of measuring the entire two-dimensional x-ray electric field, both the amplitude and phase, with a single measurement. These Hartmann sensing and 2-D Shear interferometry wave-front sensors do not require a temporally coherent source and are therefore compatible with x-ray tubes and also with laser-produced or x-pinch x-ray sources.

  15. Tests for Serum Transglutaminase and Endomysial Antibodies Do Not Detect Most Patients With Celiac Disease and Persistent Villous Atrophy on Gluten-free Diets: a Meta-analysis.

    PubMed

    Silvester, Jocelyn A; Kurada, Satya; Szwajcer, Andrea; Kelly, Ciarán P; Leffler, Daniel A; Duerksen, Donald R

    2017-09-01

    Tests to measure serum endomysial antibodies (EMA) and antibodies to tissue transglutaminase (tTG) were developed to screen for celiac disease in patients consuming gluten. However, they are commonly used to monitor patients on a gluten-free diet (GFD). We conducted a meta-analysis to assess the sensitivity and specificity of tTG IgA and EMA IgA assays in identifying patients with celiac disease who have persistent villous atrophy despite a GFD. We searched PUBMED, EMBASE, BIOSIS, SCOPUS, clinicaltrials.gov, Science Citation Index, and Cochrane Library databases through November 2016. Inclusion criteria were studies of subjects with biopsy-confirmed celiac disease, follow-up biopsies, and measurement of serum antibodies on a GFD, biopsy performed on subjects regardless of symptoms, or antibody test results. Our analysis excluded subjects with refractory celiac disease, undergoing gluten challenge, or consuming a prescribed oats-containing GFD. Tests were considered to have positive or negative findings based on manufacturer cut-off values. Villous atrophy was defined as a Marsh 3 lesion or villous height:crypt depth ratio below 3.0. We constructed forest plots to determine the sensitivity and specificity of detection for individual studies. For the meta-analysis, a bivariate random effects model was used to jointly model sensitivity and specificity. Our search identified 5408 unique citations. Following review of abstracts, 442 articles were reviewed in detail. Only 26 studies (6 of tTG assays, 15 of EMA assays, and 5 of tTG and EMA assays) met our inclusion criteria. The most common reason studies were excluded from our analysis was inability to cross-tabulate histologic and serologic findings. The serum assays identified patients with persistent villous atrophy with high levels of specificity: 0.83 for the tTG IgA assay (95% CI, 0.79-0.87) and 0.91 for the EMA IgA assay (95% CI, 0.87-0.94). 
However, they detected villous atrophy with low levels of sensitivity: 0.50 for the tTG IgA assay (95% CI, 0.41-0.60) and 0.45 for the EMA IgA assay (95% CI, 0.34-0.57). The tests had similar levels of performance in pediatric and adult patients. In a meta-analysis of patients with biopsy-confirmed celiac disease undergoing follow-up biopsy on a GFD, we found that tests for serum tTG IgA and EMA IgA levels had low sensitivity (below 50%) in detection of persistent villous atrophy. We need more-accurate non-invasive markers of mucosal damage in children and adults with celiac disease who are following a GFD.
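    The pooled estimates above come from a bivariate random-effects model, but for any single study the sensitivity and specificity reduce to simple 2×2-table ratios. A minimal sketch with hypothetical counts, not data from the meta-analysis:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN): fraction of patients with persistent
    villous atrophy who test seropositive.
    Specificity = TN/(TN+FP): fraction of patients with mucosal
    recovery who test seronegative."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical single-study counts: 50 patients with villous atrophy,
# 25 of them seropositive; 100 with recovery, 83 of them seronegative.
sens, spec = sensitivity_specificity(tp=25, fn=25, tn=83, fp=17)
print(sens, spec)  # 0.5 0.83
```

    In these illustrative numbers the assay misses half of the patients with persistent atrophy, matching the pattern of low sensitivity and high specificity reported in the meta-analysis.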

  16. Photoluminescence varied by selective excitation in BiGdWO6:Eu3+ phosphor

    NASA Astrophysics Data System (ADS)

    Pavani, K.; Graça, M. P. F.; Kumar, J. Suresh; Neves, A. J.

    2017-12-01

    Eu3+ doped bismuth gadolinium tungstate (BGW), the simplest member of the Aurivillius family of layered perovskites, was synthesized by the solid-state reaction method. Structural characterisation was performed by X-ray diffraction (XRD), Raman spectroscopy, Fourier transform infrared spectroscopy (FTIR) and scanning electron microscopy (SEM). The band gap of the host matrix was calculated using reflectance and absorption spectra. Three different mechanisms were found to explain the excitation of Eu3+ ions and are described in detail. Photoluminescence (PL) spectra of the BGW phosphor doped with Eu3+ ions consist of major emission lines associated with the 5D0 → 7FJ (J = 0, 1, 2, 3 and 4) transitions of the Eu3+ ion. Site-selective PL excitation and emission indicate that Eu3+ ions doped in BiGdWO6 are sensitive to the excitation wavelength without change in the structure: changes in the emission spectra were observed when the excitation wavelength was changed. Judd-Ofelt (J-O) parameters were determined by the indirect method to interpret the interactions between the host and dopant ions, along with a detailed analysis of lifetime measurements.

  17. DE 1 RIMS operational characteristics

    NASA Technical Reports Server (NTRS)

    Olsen, R. C.; Comfort, R. H.; Chandler, M. O.; Moore, T. E.; Waite, J. H., Jr.; Reasoner, D. L.; Biddle, A. P.

    1985-01-01

    The Retarding Ion Mass Spectrometer (RIMS) on the Dynamics Explorer 1 spacecraft observes both the thermal and superthermal (50 eV) ions of the ionosphere and inner magnetosphere. It is capable of measuring the detailed species distribution function of these ions in many cases. It was equipped with an integral electrometer to permit in-flight calibration of the detector sensitivities and variations thereof. A guide to understanding the RIMS data set is given. The reduction process from count rates to physical quantities is discussed in some detail. The procedure used to establish in-flight calibration is described, and results of a comparison with densities from plasma wave measurements are provided. Finally, a discussion is provided of various anomalies in the data set, including changes of channeltron efficiency with time, spin modulation of the axial sensor heads, apparent potential differences between the sensor heads, and failures of the radial head retarding potential sweep and of the -Z axial head aperture plane bias. Studies of the RIMS data set should be conducted only with a thorough awareness of the material presented here, or in collaboration with one of the scientists actively involved with RIMS data analysis.

  18. Oral cancer detection based on fluorescence polarization of blood plasma at excitation wavelength 405 nm

    NASA Astrophysics Data System (ADS)

    Pachaiappan, Rekha; Prakasarao, Aruna; Manoharan, Yuvaraj; Dornadula, Koteeswaran; Singaravelu, Ganesan

    2017-02-01

    During metabolism, cells release metabolites such as hormones, proteins and enzymes into the bloodstream. These metabolites reflect any change that occurs due to disturbances in the normal metabolic function of the human system, and such changes are well observed as altered spectral signatures in fluorescence spectroscopy. Many previous reports have demonstrated the value of native fluorescence spectroscopy in the diagnosis of cancer. Fluorescence spectroscopy is sensitive and simple, and offers complementary modalities such as excitation-emission matrix, synchronous and polarization measurements. Fluorescence polarization measurement provides details about association or binding reactions and denaturing effects that occur due to changes in the microenvironment of cells and tissues. In this study, we have attempted the diagnosis of oral cancer at 405 nm excitation using fluorescence polarization measurements. The fluorescence anisotropy values calculated from polarized fluorescence spectral data of normal and oral cancer subjects yielded a good accuracy when analyzed with a linear discriminant analysis based artificial neural network. The results will be discussed in detail.

  19. Preliminary design report for OTEC stationkeeping subsystems (SKSS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1979-12-12

    Lockheed Ocean Systems with IMODCO prepared these preliminary designs for OTEC Stationkeeping Subsystems (SKSS) under contract to NOAA in support of the Department of Energy OTEC program. The results of Tasks III, V, and VI are presented in this design report. The report consists of five sections: introduction, preliminary designs for the multiple anchor leg (MAL) and tension anchor leg (TAL), costs and schedule, and conclusions. Extensive appendixes provide detailed descriptions of design methodology and include backup calculations and data to support the results presented. The objective of this effort is to complete the preliminary designs for the barge-MAL and Spar-TAL SKSS. A set of drawings is provided for each, showing arrangements, configuration, component details, engineering description, and deployment plan. Loads analysis, performance assessment, and sensitivity to requirements are presented, together with the methodology employed to analyze the systems and to derive the results. Life-cycle costs and schedules are prepared and compared on a common basis. Finally, recommendations for the Commercial Plant SKSS are presented for both platform types.

  20. The Vip3Ag4 Insecticidal Protoxin from Bacillus thuringiensis Adopts A Tetrameric Configuration That Is Maintained on Proteolysis

    PubMed Central

    Palma, Leopoldo; Scott, David J.; Harris, Gemma; Din, Salah-Ud; Williams, Thomas L.; Roberts, Oliver J.; Young, Mark T.; Caballero, Primitivo; Berry, Colin

    2017-01-01

    The Vip3 proteins produced during vegetative growth by strains of the bacterium Bacillus thuringiensis show insecticidal activity against lepidopteran insects with a mechanism of action that may involve pore formation and apoptosis. These proteins are promising supplements to our arsenal of insecticidal proteins, but the molecular details of their activity are not understood. As a first step in the structural characterisation of these proteins, we have analysed their secondary structure and resolved the surface topology of a tetrameric complex of the Vip3Ag4 protein by transmission electron microscopy. Sites sensitive to proteolysis by trypsin are identified, and the trypsin-cleaved protein appears to retain a similar structure as an octameric complex comprising four copies each of the ~65 kDa and ~21 kDa products of proteolysis. This processed form of the toxin may represent the active toxin. The quality and monodispersity of the protein produced in this study make Vip3Ag4 a candidate for more detailed structural analysis using cryo-electron microscopy. PMID:28505109

Top