Science.gov

Sample records for process sensitivity analyses

  1. Uncertainty and Sensitivity Analyses Plan

    SciTech Connect

    Simpson, J.C.; Ramsdell, J.V. Jr.

    1993-04-01

    Hanford Environmental Dose Reconstruction (HEDR) Project staff are developing mathematical models to be used to estimate the radiation dose that individuals may have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. An uncertainty and sensitivity analyses plan is essential to understand and interpret the predictions from these mathematical models. This is especially true in the case of the HEDR models where the values of many parameters are unknown. This plan gives a thorough documentation of the uncertainty and hierarchical sensitivity analysis methods recommended for use on all HEDR mathematical models. The documentation includes both technical definitions and examples. In addition, an extensive demonstration of the uncertainty and sensitivity analysis process is provided using actual results from the Hanford Environmental Dose Reconstruction Integrated Codes (HEDRIC). This demonstration shows how the approaches used in the recommended plan can be adapted for all dose predictions in the HEDR Project.

  2. Sensitivity in risk analyses with uncertain numbers.

    SciTech Connect

    Tucker, W. Troy; Ferson, Scott

    2006-06-01

    Sensitivity analysis is a study of how changes in the inputs to a model influence the results of the model. Many techniques have recently been proposed for use when the model is probabilistic. This report considers the related problem of sensitivity analysis when the model includes uncertain numbers that can involve both aleatory and epistemic uncertainty and the method of calculation is Dempster-Shafer evidence theory or probability bounds analysis. Some traditional methods for sensitivity analysis generalize directly for use with uncertain numbers, but, in some respects, sensitivity analysis for these analyses differs from traditional deterministic or probabilistic sensitivity analyses. A case study of a dike reliability assessment illustrates several methods of sensitivity analysis, including traditional probabilistic assessment, local derivatives, and a "pinching" strategy that hypothetically reduces the epistemic uncertainty or aleatory uncertainty, or both, in an input variable to estimate the reduction of uncertainty in the outputs. The prospects for applying the methods to black box models are also considered.
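    The "pinching" idea in this abstract can be sketched with plain Monte Carlo in place of probability bounds analysis: fix one input at a nominal value and measure how much the spread of the output shrinks. The toy model and all distributions below are invented for illustration, not taken from the dike case study.

```python
import random
import statistics

def model(a, b, c):
    # Toy response standing in for the dike reliability model (hypothetical).
    return a * b + c

def output_sd(xs, ys, zs):
    return statistics.pstdev([model(x, y, z) for x, y, z in zip(xs, ys, zs)])

random.seed(1)
n = 20000
a = [random.gauss(1.0, 0.5) for _ in range(n)]
b = [random.gauss(2.0, 0.3) for _ in range(n)]
c = [random.uniform(-1.0, 1.0) for _ in range(n)]

base = output_sd(a, b, c)
reductions = {}
# "Pinch" each input to a nominal value and record how much the output spread shrinks.
for name, pinched in [("a", ([1.0] * n, b, c)),
                      ("b", (a, [2.0] * n, c)),
                      ("c", (a, b, [0.0] * n))]:
    reductions[name] = 1.0 - output_sd(*pinched) / base
    print(f"pinching {name}: output uncertainty reduced by {100 * reductions[name]:.1f}%")
```

    Inputs whose pinching yields the largest reduction are the best candidates for further study; the paper extends the same idea to uncertain numbers that mix aleatory and epistemic uncertainty.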

  3. Calibration and Sensitivity Analyses of LEACHM Simulation Model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Calibration and sensitivity analyses are essential processes in evaluation and application of computer simulation models. Calibration is a process of adjusting model inputs within expected values to minimize the differences between simulated and measured data. The objective of this study was to cali...

  4. Workload analysis of assembling process

    NASA Astrophysics Data System (ADS)

    Ghenghea, L. D.

    2015-11-01

    Workload is the most important indicator for managers responsible for industrial technological processes; whether these are automated, mechanized, or simply manual, the machines or workers involved are the focus of workload measurements. The paper presents workload analyses of a largely manual assembly technology for a roller-bearing assembly process, carried out in a large company with integrated bearing manufacturing processes. In these analyses, the delay-sample (work-sampling) technique was used to identify and categorize all the bearing assemblers' activities and to determine how much of the 480-minute working day workers devote to each activity. The study shows ways to increase process productivity without additional investment and also indicates that process automation could be the solution for achieving maximum productivity.

  5. Making sense of global sensitivity analyses

    NASA Astrophysics Data System (ADS)

    Wainwright, Haruko M.; Finsterle, Stefan; Jung, Yoojin; Zhou, Quanlin; Birkholzer, Jens T.

    2014-04-01

    This study presents improved understanding of sensitivity analysis methods through a comparison of the local sensitivity method and two global sensitivity analysis methods: the Morris and Sobol′/Saltelli methods. We re-interpret the variance-based sensitivity indices from the Sobol′/Saltelli method as difference-based measures. This suggests that the difference-based local and Morris methods capture the effect of each parameter, including its interaction with others, similar to the total sensitivity index from the Sobol′/Saltelli method. We also develop an alternative approximation method to efficiently compute the Sobol′ index, using one-dimensional fitting of system responses from a Monte-Carlo simulation. For illustration, we conduct a sensitivity analysis of pressure propagation induced by fluid injection and leakage in a reservoir-aquitard-aquifer system. The results show that the three methods provide consistent parameter importance rankings in this system. Our study also reveals that the three methods can provide additional information to improve system understanding.
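    A minimal sketch of the variance-based first-order Sobol index, estimated with the standard pick-freeze (Saltelli-type) design rather than the authors' one-dimensional fitting approximation. The additive toy model is hypothetical, chosen so the analytic indices (S1 = 0.2, S2 = 0.8) are known in advance.

```python
import random

def f(x):
    # Additive toy model (hypothetical): analytically S1 = 1/5, S2 = 4/5.
    return x[0] + 2.0 * x[1]

random.seed(0)
n, d = 100_000, 2
A = [[random.random() for _ in range(d)] for _ in range(n)]
B = [[random.random() for _ in range(d)] for _ in range(n)]
fA = [f(x) for x in A]
fB = [f(x) for x in B]
mu = sum(fA) / n
var = sum((y - mu) ** 2 for y in fA) / n

S = []
for i in range(d):
    # AB_i is A with column i swapped in from B, so f(B) and f(AB_i) share
    # only input i; their covariance estimates V_i = Var(E[Y | X_i]).
    fABi = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
    cov = sum(yb * yab for yb, yab in zip(fB, fABi)) / n \
        - (sum(fB) / n) * (sum(fABi) / n)
    S.append(cov / var)
    print(f"S{i + 1} ≈ {S[-1]:.2f}")
```

    For a purely additive model the first-order and total indices coincide; interactions between inputs show up as a gap between the two, which is the distinction the abstract's comparison of methods turns on.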

  6. Photovoltaic System Modeling. Uncertainty and Sensitivity Analyses

    SciTech Connect

    Hansen, Clifford W.; Martin, Curtis E.

    2015-08-01

    We report an uncertainty and sensitivity analysis for modeling AC energy from photovoltaic systems. Output from a PV system is predicted by a sequence of models. We quantify uncertainty in the output of each model using empirical distributions of each model's residuals. We propagate uncertainty through the sequence of models by sampling these distributions to obtain an empirical distribution of a PV system's output. We consider models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance; (3) predict cell temperature; (4) estimate DC voltage, current and power; (5) reduce DC power for losses due to inefficient maximum power point tracking or mismatch among modules; and (6) convert DC to AC power. Our analysis considers a notional PV system comprising an array of FirstSolar FS-387 modules and a 250 kW AC inverter; we use measured irradiance and weather at Albuquerque, NM. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. We found the uncertainty in the models for POA irradiance and effective irradiance to be the dominant contributors to uncertainty in predicted daily energy. Our analysis indicates that efforts to reduce the uncertainty in PV system output predictions may yield the greatest improvements by focusing on the POA and effective irradiance models.
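    The residual-sampling propagation scheme can be sketched as follows. The two-stage model chain, residual values, and coefficients below are invented stand-ins for the report's six-model sequence; each model's uncertainty is represented by resampling its empirical residuals with replacement.

```python
import random

# Two-stage toy chain standing in for the report's model sequence:
# irradiance -> DC power -> AC power (coefficients are made up).
def dc_power(irradiance):
    return 0.2 * irradiance

def ac_power(dc):
    return 0.96 * dc

# Empirical residuals of each model vs. measurements (invented values).
dc_residuals = [-3.0, -1.0, 0.5, 1.2, 2.5]
ac_residuals = [-0.8, -0.2, 0.1, 0.4, 0.9]

random.seed(7)
irradiance = 800.0
samples = []
for _ in range(10_000):
    # Draw one residual per model and carry the perturbed value forward.
    dc = dc_power(irradiance) + random.choice(dc_residuals)
    samples.append(ac_power(dc) + random.choice(ac_residuals))

samples.sort()
lo, hi = samples[250], samples[-250]  # ~2.5th and ~97.5th percentiles
mean = sum(samples) / len(samples)
print(f"AC power ≈ {mean:.1f} (95% band [{lo:.1f}, {hi:.1f}])")
```

    The result is an empirical distribution of system output, from which confidence bands like the report's ~1% daily-energy uncertainty can be read off directly.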

  7. The Importance of Uncertainty and Sensitivity Analyses in Process-Based Models of Carbon and Nitrogen Cycling in Terrestrial Ecosystems with Particular Emphasis on Forest Ecosystems

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Many process-based models of carbon (C) and nitrogen (N) cycles have been developed for terrestrial ecosystems, including forest ecosystems. Existing models are sufficiently well advanced to help decision makers develop sustainable management policies and planning of terrestrial ecosystems, as they ...

  8. SENSITIVITY ANALYSIS FOR SALTSTONE DISPOSAL UNIT COLUMN DEGRADATION ANALYSES

    SciTech Connect

    Flach, G.

    2014-10-28

    PORFLOW-related analyses supporting a sensitivity analysis for Saltstone Disposal Unit (SDU) column degradation were performed. Previous analyses (Flach and Taylor 2014) used a model in which the SDU columns degraded in a piecewise manner from the top and bottom simultaneously. The current analyses employ a model in which all pieces of the column degrade at the same time. Information was extracted from the analyses that may be useful in determining the distribution of Tc-99 in the various SDUs over time and in determining flow balances for the SDUs.

  9. Sensitivity of Assimilated Tropical Tropospheric Ozone to the Meteorological Analyses

    NASA Technical Reports Server (NTRS)

    Hayashi, Hiroo; Stajner, Ivanka; Pawson, Steven; Thompson, Anne M.

    2002-01-01

    Tropical tropospheric ozone fields from two different experiments performed with an off-line ozone assimilation system developed in NASA's Data Assimilation Office (DAO) are examined. Assimilated ozone fields from the two experiments are compared with the collocated ozone profiles from the Southern Hemispheric Additional Ozonesondes (SHADOZ) network. Results are presented for 1998. The ozone assimilation system includes a chemistry-transport model, which uses analyzed winds from the Goddard Earth Observing System (GEOS) Data Assimilation System (DAS). The two experiments use wind fields from different versions of GEOS DAS: an operational version of the GEOS-2 system and a prototype of the GEOS-4 system. While both versions of the DAS utilize the Physical-space Statistical Analysis System and use comparable observations, they use entirely different general circulation models and data insertion techniques. The shape of the annual-mean vertical profile of the assimilated ozone fields is sensitive to the meteorological analyses, with the GEOS-4-based ozone being closest to the observations. This indicates that the resolved transport in GEOS-4 is more realistic than in GEOS-2. Remaining uncertainties include quantification of the representation of sub-grid-scale processes in the transport calculations, which plays an important role in the locations and seasons where convection dominates the transport.

  10. Balancing data sharing requirements for analyses with data sensitivity

    USGS Publications Warehouse

    Jarnevich, C.S.; Graham, J.J.; Newman, G.J.; Crall, A.W.; Stohlgren, T.J.

    2007-01-01

    Data sensitivity can pose a formidable barrier to data sharing. Knowledge of species' current distributions derived from data sharing is critical for creating watch lists, for an early warning/rapid response system, and for generating models of the spread of invasive species. We have created an on-line system to synthesize disparate datasets of non-native species locations that includes a mechanism to account for data sensitivity. Data contributors are able to mark their data as sensitive. These data are then 'fuzzed' to quarter-quadrangle grid cells in mapping applications and downloaded files, but the actual locations remain available for analyses. We propose that this system overcomes the hurdles to data sharing posed by sensitive data. © 2006 Springer Science+Business Media B.V.
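    The 'fuzzing' step can be sketched directly: a quarter quadrangle is half of a 7.5-minute USGS quad on each side, i.e. 3.75 arc-minutes (0.0625°), so a sensitive location is snapped to the centre of its 0.0625° cell. This is a minimal sketch of the idea, not the system's actual implementation.

```python
# A quarter quadrangle spans 3.75 arc-minutes = 0.0625 degrees
# (an assumption of this sketch; 0.0625 is exactly representable in binary).
CELL = 3.75 / 60.0

def fuzz(lat, lon, cell=CELL):
    """Snap a sensitive location to the centre of its grid cell."""
    def snap(x):
        return (x // cell) * cell + cell / 2.0
    return snap(lat), snap(lon)

print(fuzz(40.01234, -105.25678))  # -> (40.03125, -105.28125)
```

    Any two points in the same quarter-quad cell fuzz to the same coordinates, so published maps reveal only the cell, never the precise location.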

  11. Sensitivity analyses for parametric causal mediation effect estimation

    PubMed Central

    Albert, Jeffrey M.; Wang, Wei

    2015-01-01

    Causal mediation analysis uses a potential outcomes framework to estimate the direct effect of an exposure on an outcome and its indirect effect through an intermediate variable (or mediator). Causal interpretations of these effects typically rely on sequential ignorability. Because this assumption is not empirically testable, it is important to conduct sensitivity analyses. Sensitivity analyses so far offered for this situation have either focused on the case where the outcome follows a linear model or involve nonparametric or semiparametric models. We propose alternative approaches that are suitable for responses following generalized linear models. The first approach uses a Gaussian copula model involving latent versions of the mediator and the final outcome. The second approach uses a so-called hybrid causal-observational model that extends the association model for the final outcome, providing a novel sensitivity parameter. These models, while still assuming a randomized exposure, allow for unobserved (as well as observed) mediator-outcome confounders that are not affected by exposure. The methods are applied to data from a study of the effect of mother education on dental caries in adolescence. PMID:25395683

  13. Sensitivity analyses for partially observed recurrent event data.

    PubMed

    Akacha, Mouna; Ogundimu, Emmanuel O

    2016-01-01

    Recurrent events involve occurrences of the same type of event repeatedly over time and are commonly encountered in longitudinal studies. Examples include seizures in epilepsy studies or the occurrence of cancer tumors. In such studies, interest lies in the number of events that occur over a fixed period of time. One considerable challenge in analyzing such data arises when a large proportion of patients discontinue before the end of the study, for example because of adverse events, leading to partially observed data. In this situation, data are often modeled using a negative binomial distribution with time-in-study as an offset. Such an analysis assumes that data are missing at random (MAR). As we cannot test the adequacy of MAR, sensitivity analyses that assess the robustness of conclusions across a range of different assumptions need to be performed. Sophisticated sensitivity analyses are frequently performed for continuous data, but less often for recurrent event or count data. We present a flexible approach to performing clinically interpretable sensitivity analyses for recurrent event data. Our approach fits into the framework of reference-based imputation, where information from reference arms can be borrowed to impute post-discontinuation data. Different assumptions about the future behavior of dropouts can be made, depending on the reasons for dropout and the treatment received. The imputation model is based on a flexible model that allows for time-varying baseline intensities. We assess the performance in a simulation study and provide an illustration with a clinical trial in patients who suffer from bladder cancer. PMID:26540016
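    A heavily simplified sketch of reference-based imputation for count data, assuming a "jump to reference" rule and a plain Poisson event process instead of the paper's flexible time-varying-intensity model; all patient counts below are made up.

```python
import math
import random

def poisson(lam, rng):
    # Knuth's algorithm; adequate for the small rates in this sketch.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

# Each patient: (events observed, years followed) in a 1-year study (made-up data).
treatment = [(2, 1.0), (0, 1.0), (1, 0.4), (3, 0.5)]  # last two dropped out
reference = [(4, 1.0), (3, 1.0), (5, 1.0), (2, 1.0)]

# "Jump to reference": dropouts are assumed to accrue post-discontinuation
# events at the reference-arm rate.
ref_rate = sum(e for e, _ in reference) / sum(t for _, t in reference)

rng = random.Random(3)
completed = []
for events, years in treatment:
    if years < 1.0:
        events += poisson(ref_rate * (1.0 - years), rng)
    completed.append(events)
print("treatment counts after imputation:", completed)
```

    Repeating the imputation under different assumed post-dropout rates (e.g. own-arm vs. reference-arm) and re-running the analysis on each completed dataset is the essence of the sensitivity analysis the abstract describes.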

  14. New method for analysing sensitivity distributions of electroencephalography measurements.

    PubMed

    Väisänen, Juho; Väisänen, Outi; Malmivuo, Jaakko; Hyttinen, Jari

    2008-02-01

    In this paper, we introduce a new modelling-related parameter called the region of interest sensitivity ratio (ROISR), which describes how well the sensitivity of an electroencephalography (EEG) measurement is concentrated within the region of interest (ROI), i.e. how specific the measurement is to the sources in the ROI. We demonstrate the use of the concept by analysing the sensitivity distributions of bipolar EEG measurements. We studied the effects of the interelectrode distance of a bipolar EEG lead on the ROISR with cortical and non-cortical ROIs. The sensitivity distributions of EEG leads were calculated analytically by applying a three-layer spherical head model. We suggest that the developed parameter correlates with the signal-to-noise ratio (SNR) of a measurement, and thus we studied the correlation between ROISR and SNR with 254-channel visual evoked potential (VEP) measurements of two test subjects. Theoretical simulations indicate that source orientation and location have a major impact on the specificity, and therefore they should be taken into account when the optimal bipolar electrode configuration is selected. The results also imply that the new ROISR method correlates strongly with the measurement SNR and can thus be applied in future studies to efficiently evaluate and optimize EEG measurement setups. PMID:18189153
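    One plausible formulation of ROISR, assuming it is taken as the fraction of a lead's total sensitivity magnitude that falls inside the ROI (the paper's exact definition may differ); the per-region sensitivity values below are invented.

```python
# Per-region sensitivity magnitudes of one EEG lead (invented values).
sensitivity = {"roi_1": 0.8, "roi_2": 0.6, "outside_1": 0.3, "outside_2": 0.1}
roi = {"roi_1", "roi_2"}

# ROISR as the share of total sensitivity magnitude concentrated in the ROI:
# 1.0 means the lead sees only ROI sources, 0.0 means none.
roisr = (sum(abs(v) for k, v in sensitivity.items() if k in roi)
         / sum(abs(v) for v in sensitivity.values()))
print(f"ROISR = {roisr:.2f}")
```

    Comparing this ratio across candidate electrode pairs gives the kind of specificity ranking the paper uses to select bipolar configurations.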

  15. Seeking harmony: estimands and sensitivity analyses for confirmatory clinical trials.

    PubMed

    Mehrotra, Devan V; Hemmings, Robert J; Russek-Cohen, Estelle

    2016-08-01

    In October 2014, the Steering Committee of the International Conference on Harmonization endorsed the formation of an expert working group to develop an addendum to the International Conference on Harmonization E9 guideline ("Statistical Principles for Clinical Trials"). The addendum will focus on two topics involving randomized confirmatory clinical trials: estimands and sensitivity analyses. Both topics are motivated, in part, by the need to improve the precision with which scientific questions of interest are formulated and addressed by clinical trialists and regulators, specifically in the context of post-randomization events such as use of rescue medication or missing data resulting from dropouts. Given the importance of these topics for the statistical and medical community, we articulate the reasons for the planned addendum. The resulting "ICH E9/R1" guideline will include a framework for improved trial planning, conduct, analysis, and interpretation; a draft is expected to be ready for public comment in the second half of 2016. PMID:26908545

  16. Uncertainty and Sensitivity Analyses of Model Predictions of Solute Transport

    NASA Astrophysics Data System (ADS)

    Skaggs, T. H.; Suarez, D. L.; Goldberg, S. R.

    2012-12-01

    Soil salinity reduces crop production on about 50% of irrigated lands worldwide. One roadblock to increased use of advanced computer simulation tools for better managing irrigation water and soil salinity is that the models usually do not provide an estimate of the uncertainty in model predictions, which can be substantial. In this work, we investigate methods for putting confidence bounds on HYDRUS-1D simulations of solute leaching in soils. Uncertainties in model parameters estimated with pedotransfer functions are propagated through simulation model predictions using Monte Carlo simulation. Generalized sensitivity analyses indicate which parameters are most significant for quantifying uncertainty. The simulation results are compared with experimentally observed transport variability in a number of large, replicated lysimeters.
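    The generalized sensitivity analysis mentioned above can be sketched in the regional (behavioral/non-behavioral) style: Monte Carlo runs are classified by an output criterion, and each parameter's two subsamples are compared with a Kolmogorov-Smirnov distance. The toy leaching response, parameter ranges, and threshold are all hypothetical stand-ins for the HYDRUS-1D setup.

```python
import random

# Toy leaching response standing in for the HYDRUS-1D output (hypothetical).
def leached(theta_s, k_sat):
    return 50.0 * k_sat / theta_s

random.seed(5)
behav, nonbehav = [], []
for _ in range(800):
    theta_s = random.uniform(0.3, 0.5)  # saturated water content
    k_sat = random.uniform(0.1, 1.0)    # saturated conductivity (arbitrary units)
    target = behav if leached(theta_s, k_sat) > 60.0 else nonbehav
    target.append((theta_s, k_sat))

def ks_distance(xs, ys):
    # Two-sample Kolmogorov-Smirnov statistic (brute force).
    def cdf(s, v):
        return sum(1 for x in s if x <= v) / len(s)
    return max(abs(cdf(xs, v) - cdf(ys, v)) for v in xs + ys)

ks = {}
for i, name in enumerate(["theta_s", "k_sat"]):
    ks[name] = ks_distance([p[i] for p in behav], [p[i] for p in nonbehav])
    print(f"{name}: KS distance {ks[name]:.2f} (larger = more influential)")
```

    Parameters whose behavioral and non-behavioral distributions differ most are the ones that matter most for quantifying uncertainty, which is what the generalized sensitivity analysis in the abstract identifies.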

  17. Uncertainty and Sensitivity Analyses of Duct Propagation Models

    NASA Technical Reports Server (NTRS)

    Nark, Douglas M.; Watson, Willie R.; Jones, Michael G.

    2008-01-01

    This paper presents results of uncertainty and sensitivity analyses conducted to assess the relative merits of three duct propagation codes. Results from this study are intended to support identification of a "working envelope" within which to use the various approaches underlying these propagation codes. This investigation considers a segmented liner configuration that models the NASA Langley Grazing Incidence Tube, for which a large set of measured data was available. For the uncertainty analysis, the selected input parameters (source sound pressure level, average Mach number, liner impedance, exit impedance, static pressure and static temperature) are randomly varied over a range of values. Uncertainty limits (95% confidence levels) are computed for the predicted values from each code, and are compared with the corresponding 95% confidence intervals in the measured data. Generally, the mean values of the predicted attenuation are observed to track the mean values of the measured attenuation quite well and predicted confidence intervals tend to be larger in the presence of mean flow. A two-level, six factor sensitivity study is also conducted in which the six inputs are varied one at a time to assess their effect on the predicted attenuation. As expected, the results demonstrate the liner resistance and reactance to be the most important input parameters. They also indicate the exit impedance is a significant contributor to uncertainty in the predicted attenuation.
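    The two-level, one-factor-at-a-time screening described above can be sketched as follows; the linear toy response and factor levels are invented, arranged so that liner resistance and reactance dominate, mirroring the study's finding.

```python
# Linear toy response standing in for the codes' predicted attenuation (dB);
# coefficients and factor levels are invented for illustration.
def attenuation(resistance, reactance, mach, source_spl):
    return 20.0 * resistance - 8.0 * reactance + 2.0 * mach + 0.1 * source_spl

levels = {  # (low, high) per factor
    "resistance": (0.5, 1.5),
    "reactance": (-1.0, 1.0),
    "mach": (0.0, 0.3),
    "source_spl": (120.0, 140.0),
}
baseline = {k: (lo + hi) / 2.0 for k, (lo, hi) in levels.items()}

effects = {}
for k, (lo, hi) in levels.items():
    # Move one factor from low to high while holding the others at baseline.
    effects[k] = (attenuation(**dict(baseline, **{k: hi}))
                  - attenuation(**dict(baseline, **{k: lo})))

for k, e in sorted(effects.items(), key=lambda kv: -abs(kv[1])):
    print(f"{k:10s} effect = {e:+.1f} dB")
```

    One-at-a-time screening is cheap (two runs per factor) but, unlike the Sobol-type designs in other records here, it cannot detect interactions between factors.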

  18. Rock penetration : finite element sensitivity and probabilistic modeling analyses.

    SciTech Connect

    Fossum, Arlo Frederick

    2004-08-01

    This report summarizes numerical analyses conducted to assess the relative importance on penetration depth calculations of rock constitutive model physics features representing the presence of microscale flaws such as porosity and networks of microcracks and rock mass structural features. Three-dimensional, nonlinear, transient dynamic finite element penetration simulations are made with a realistic geomaterial constitutive model to determine which features have the most influence on penetration depth calculations. A baseline penetration calculation is made with a representative set of material parameters evaluated from measurements made from laboratory experiments conducted on a familiar sedimentary rock. Then, a sequence of perturbations of various material parameters allows an assessment to be made of the main penetration effects. A cumulative probability distribution function is calculated with the use of an advanced reliability method that makes use of this sensitivity database, probability density functions, and coefficients of variation of the key controlling parameters for penetration depth predictions. Thus the variability of the calculated penetration depth is known as a function of the variability of the input parameters. This simulation modeling capability should impact significantly the tools that are needed to design enhanced penetrator systems, support weapons effects studies, and directly address proposed HDBT defeat scenarios.

  19. Synthesis of Trigeneration Systems: Sensitivity Analyses and Resilience

    PubMed Central

    Carvalho, Monica; Lozano, Miguel A.; Ramos, José; Serra, Luis M.

    2013-01-01

    This paper presents sensitivity and resilience analyses for a trigeneration system designed for a hospital. The following information is utilized to formulate an integer linear programming model: (1) energy service demands of the hospital, (2) technical and economical characteristics of the potential technologies for installation, (3) prices of the available utilities interchanged, and (4) financial parameters of the project. The solution of the model, minimizing the annual total cost, provides the optimal configuration of the system (technologies installed and number of pieces of equipment) and the optimal operation mode (operational load of equipment, interchange of utilities with the environment, convenience of wasting cogenerated heat, etc.) at each temporal interval defining the demand. The broad range of technical, economic, and institutional uncertainties throughout the life cycle of energy supply systems for buildings makes it necessary to delve more deeply into the fundamental properties of resilient systems: feasibility, flexibility and robustness. The resilience of the obtained solution is tested by varying, within reasonable limits, selected parameters: energy demand, amortization and maintenance factor, natural gas price, self-consumption of electricity, and time-of-delivery feed-in tariffs. PMID:24453881

  20. Accelerated safety analyses - structural analyses Phase I - structural sensitivity evaluation of single- and double-shell waste storage tanks

    SciTech Connect

    Becker, D.L.

    1994-11-01

    Accelerated Safety Analyses - Phase I (ASA-Phase I) have been conducted to assess the appropriateness of existing tank farm operational controls and/or limits as now stipulated in the Operational Safety Requirements (OSRs) and Operating Specification Documents, and to establish a technical basis for the waste tank operating safety envelope. Structural sensitivity analyses were performed to assess the response of the different waste tank configurations to variations in loading conditions, uncertainties in loading parameters, and uncertainties in material characteristics. Extensive documentation of the sensitivity analyses conducted and results obtained are provided in the detailed ASA-Phase I report, Structural Sensitivity Evaluation of Single- and Double-Shell Waste Tanks for Accelerated Safety Analysis - Phase I. This document provides a summary of the accelerated safety analyses sensitivity evaluations and the resulting findings.

  1. Entropy Analyses of Four Familiar Processes.

    ERIC Educational Resources Information Center

    Craig, Norman C.

    1988-01-01

    Presents entropy analysis of four processes: a chemical reaction, a heat engine, the dissolution of a solid, and osmosis. Discusses entropy, the second law of thermodynamics, and the Gibbs free energy function. (MVL)

  2. Uncertainty and Sensitivity Analyses Plan. Draft for Peer Review: Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Simpson, J.C.; Ramsdell, J.V. Jr.

    1993-04-01

    Hanford Environmental Dose Reconstruction (HEDR) Project staff are developing mathematical models to be used to estimate the radiation dose that individuals may have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. An uncertainty and sensitivity analyses plan is essential to understand and interpret the predictions from these mathematical models. This is especially true in the case of the HEDR models where the values of many parameters are unknown. This plan gives a thorough documentation of the uncertainty and hierarchical sensitivity analysis methods recommended for use on all HEDR mathematical models. The documentation includes both technical definitions and examples. In addition, an extensive demonstration of the uncertainty and sensitivity analysis process is provided using actual results from the Hanford Environmental Dose Reconstruction Integrated Codes (HEDRIC). This demonstration shows how the approaches used in the recommended plan can be adapted for all dose predictions in the HEDR Project.

  3. Marginal Utility of Conditional Sensitivity Analyses for Dynamic Models

    EPA Science Inventory

    Background/Question/Methods: Dynamic ecological processes may be influenced by many factors. Simulation models that mimic these processes often have complex implementations with many parameters. Sensitivity analyses are subsequently used to identify critical parameters whose uncertai...

  4. Structural Glycomic Analyses at High Sensitivity: A Decade of Progress

    PubMed Central

    Alley, William R.; Novotny, Milos V.

    2014-01-01

    The field of glycomics has recently advanced in response to the urgent need for structural characterization and quantification of complex carbohydrates in biologically and medically important applications. The recent success of analytical glycobiology at high sensitivity reflects numerous advances in biomolecular mass spectrometry and its instrumentation, capillary and microchip separation techniques, and microchemical manipulations of carbohydrate reactivity. The multimethodological approach appears to be necessary to gain an in-depth understanding of very complex glycomes in different biological systems. PMID:23560930

  5. Grid and aerodynamic sensitivity analyses of airplane components

    NASA Technical Reports Server (NTRS)

    Sadrehaghighi, Ideen; Smith, Robert E.; Tiwari, Surendra N.

    1993-01-01

    An algorithm is developed to obtain the grid sensitivity with respect to design parameters for aerodynamic optimization. The procedure advocates a novel geometrical parameterization using spline functions such as NURBS (Non-Uniform Rational B-Splines) for defining the wing-section geometry. An interactive algebraic grid generation technique, known as Two-Boundary Grid Generation (TBGG), is employed to generate C-type grids around wing-sections. The grid sensitivity of the domain with respect to geometric design parameters is obtained by direct differentiation of the grid equations. A hybrid approach is proposed for more geometrically complex configurations such as a wing or fuselage. The aerodynamic sensitivity coefficients are obtained by direct differentiation of the compressible two-dimensional thin-layer Navier-Stokes equations. An optimization package is introduced into the algorithm in order to optimize the wing-section surface. Results demonstrate a substantially improved design due to a maximized lift/drag ratio of the wing-section.

  6. Sobol method application in dimensional sensitivity analyses of different AFM cantilevers for biological particles

    NASA Astrophysics Data System (ADS)

    Korayem, M. H.; Taheri, M.; Ghahnaviyeh, S. D.

    2015-08-01

    Due to the delicate nature of biological micro/nanoparticles, it is necessary to compute the critical manipulation force. Modeling and simulating the reactions and nanomanipulator dynamics in a precise manipulation process require exact modeling of cantilever stiffness, especially for dagger cantilevers, because the previous model is not applicable to this investigation. The stiffness values for V-shaped cantilevers can be obtained through several methods, one of which is the PBA method. In another approach, the cantilever is divided into two sections, a triangular head section and two slanted rectangular beams; deformations along different directions are then computed and used to obtain the stiffness values in those directions. The stiffness formulations for the dagger cantilever are needed for these sensitivity analyses, so the formulations are derived first and the sensitivity analyses are then carried out. In examining the stiffness of the dagger-shaped cantilever, the micro-beam is divided into triangular and rectangular sections, and by computing the displacements along different directions and using the existing relations, the stiffness values for the dagger cantilever are obtained. In this paper, after investigating the stiffness of common types of cantilevers, Sobol sensitivity analyses of the effects of various geometric parameters on the stiffness of these cantilevers are carried out. The effects of different cantilevers on the dynamic behavior of nanoparticles are also studied, and the dagger-shaped cantilever is deemed more suitable for the manipulation of biological particles.

  7. Genome-Facilitated Analyses of Geomicrobial Processes

    SciTech Connect

    Kenneth H. Nealson

    2012-05-02

    that makes up chitin, virtually all of the strains were in fact capable. This led to the discovery of a great many new genes involved with chitin and NAG metabolism (7). In a similar vein, a detailed study of the sugar utilization pathway revealed a major new insight into the regulation of sugar metabolism in this genus (19). Systems Biology and Comparative Genomics of the shewanellae: Several publications were put together describing the use of comparative genomics for analyses of the group Shewanella, and these were a logical culmination of our genomic-driven research (10,15,18). Eight graduate students received their Ph.D. degrees doing part of the work described here, and four postdoctoral fellows were supported. In addition, approximately 20 undergraduates took part in projects during the grant period.

  8. Peer review of HEDR uncertainty and sensitivity analyses plan

    SciTech Connect

    Hoffman, F.O.

    1993-06-01

    This report consists of a detailed documentation of the writings and deliberations of the peer review panel that met on May 24-25, 1993 in Richland, Washington to evaluate your draft report "Uncertainty/Sensitivity Analysis Plan" (PNWD-2124 HEDR). The fact that uncertainties are being considered in temporally and spatially varying parameters through the use of alternative time histories and spatial patterns deserves special commendation. It is important to identify early those model components and parameters that will have the most influence on the magnitude and uncertainty of the dose estimates. These are the items that should be investigated most intensively prior to committing to a final set of results.

  9. Phase sensitive Raman process with correlated seeds

    SciTech Connect

    Chen, Bing; Qiu, Cheng; Chen, L. Q.; Zhang, Kai; Guo, Jinxian; Yuan, Chun-Hua; Zhang, Weiping; Ou, Z. Y.

    2015-03-16

    Phase-sensitive Raman scattering was experimentally demonstrated by injecting a Stokes light seed into an atomic ensemble whose internal state is prepared to be coherent with the input Stokes seed. This phase-sensitive characteristic results from an interference effect arising from the phase correlation between the injected Stokes light field and the internal state of the atomic ensemble in the Raman process. Furthermore, the constructive interference leads to a Raman efficiency larger than that of other Raman processes, such as the stimulated Raman process with Stokes seed injection alone or with uncorrelated light-atom seeding. The effect may find applications in precision spectroscopy, quantum optics, and precise measurement.

  10. Photogrammetry-Derived National Shoreline: Uncertainty and Sensitivity Analyses

    NASA Astrophysics Data System (ADS)

    Yao, F.; Parrish, C. E.; Calder, B. R.; Peeri, S.; Rzhanov, Y.

    2013-12-01

    Tidally-referenced shoreline data serve a multitude of purposes, ranging from nautical charting, to coastal change analysis, wetland migration studies, coastal planning, resource management and emergency management. To assess the suitability of the shoreline for a particular application, end users need not only the best available shoreline, but also reliable estimates of the uncertainty in the shoreline position. NOAA's National Geodetic Survey (NGS) is responsible for mapping the national shoreline depicted on NOAA nautical charts. Previous studies have focused on modeling the uncertainty in NGS shoreline derived from airborne lidar data, but, to date, these methods have not been extended to aerial imagery and photogrammetric shoreline extraction methods, which remain the primary shoreline mapping methods used by NGS. The aim of this study is to develop a rigorous total propagated uncertainty (TPU) model for shoreline compiled from both tide-coordinated and non-tide-coordinated aerial imagery using photogrammetric methods. The project site encompasses the strait linking Dennys Bay, Whiting Bay and Cobscook Bay in the 'Downeast' Maine coastal region. This area is of interest due to the ecosystem services it provides, as well as its complex geomorphology. The region is characterized by a large tide range, strong tidal currents, numerous embayments, and coarse-sediment pocket beaches. Statistical methods were used to assess the uncertainty of shoreline in this site mapped using NGS's photogrammetric workflow, as well as to analyze the sensitivity of the mapped shoreline position to a variety of parameters, including elevation gradient in the intertidal zone. The TPU model developed in this work can easily be extended to other areas and may facilitate estimation of uncertainty in inundation models and marsh migration models.

  11. Aleatoric and epistemic uncertainties in sampling based nuclear data uncertainty and sensitivity analyses

    SciTech Connect

    Zwermann, W.; Krzykacz-Hausmann, B.; Gallner, L.; Klein, M.; Pautz, A.; Velkov, K.

    2012-07-01

    Sampling-based uncertainty and sensitivity analyses due to epistemic input uncertainties, i.e. to an incomplete knowledge of uncertain input parameters, can be performed with arbitrary application programs to solve the physical problem under consideration. For the description of steady-state particle transport, direct simulations of the microscopic processes with Monte Carlo codes are often used. This introduces an additional source of uncertainty, the aleatoric sampling uncertainty, which is due to the randomness of the simulation process performed by sampling, and which adds to the total combined output sampling uncertainty. So far, this aleatoric part of uncertainty has been minimized by running a sufficiently large number of Monte Carlo histories for each sample calculation, thus making its impact negligible compared with the impact from sampling the epistemic uncertainties. Obviously, this process may incur high computational costs. The present paper shows that in many applications reliable epistemic uncertainty results can also be obtained with substantially lower computational effort by performing and analyzing two appropriately generated series of samples, each with a much smaller number of Monte Carlo histories. The method is applied along with the nuclear data uncertainty and sensitivity code package XSUSA in combination with the Monte Carlo transport code KENO-Va to various critical assemblies and a full-scale reactor calculation. It is shown that the proposed method yields output uncertainties and sensitivities equivalent to the traditional approach, while reducing computing time by factors on the order of 100.

  12. Do lipids influence the allergic sensitization process?

    PubMed Central

    Bublin, Merima; Eiwegger, Thomas; Breiteneder, Heimo

    2014-01-01

    Allergic sensitization is a multifactorial process that is not only influenced by the allergen and its biological function per se but also by other small molecular compounds, such as lipids, that are directly bound as ligands by the allergen or are present in the allergen source. Several members of major allergen families bind lipid ligands through hydrophobic cavities or electrostatic or hydrophobic interactions. These allergens include certain seed storage proteins, Bet v 1–like and nonspecific lipid transfer proteins from pollens and fruits, certain inhalant allergens from house dust mites and cockroaches, and lipocalins. Lipids from the pollen coat and furry animals and the so-called pollen-associated lipid mediators are codelivered with the allergens and can modulate the immune responses of predisposed subjects by interacting with the innate immune system and invariant natural killer T cells. In addition, lipids originating from bacterial members of the pollen microbiome contribute to the outcome of the sensitization process. Dietary lipids act as adjuvants and might skew the immune response toward a TH2-dominated phenotype. In addition, the association with lipids protects food allergens from gastrointestinal degradation and facilitates their uptake by intestinal cells. These findings will have a major influence on how allergic sensitization will be viewed and studied in the future. PMID:24880633

  13. Further investigation of EUV process sensitivities for wafer track processing

    NASA Astrophysics Data System (ADS)

    Bradon, Neil; Nafus, K.; Shite, H.; Kitano, J.; Kosugi, H.; Goethals, M.; Cheng, S.; Hermans, J.; Hendrickx, E.; Baudemprez, B.; Van Den Heuvel, D.

    2010-04-01

    As extreme ultraviolet (EUV) lithography technology shows promising results below 40 nm feature sizes, TOKYO ELECTRON LTD. (TEL) is committed to understanding the fundamentals needed to improve our technology, thereby enabling customers to meet roadmap expectations. TEL continues collaboration with imec for evaluation of Coater/Developer processing sensitivities using the ASML Alpha Demo Tool for EUV exposures. The results from the collaboration help develop the necessary hardware for EUV Coater/Developer processing. In previous work, processing sensitivities of the resist materials were investigated to determine the impact on critical dimension (CD) uniformity and defectivity. In this work, new promising resist materials have been studied and more information pertaining to EUV exposures was obtained. Specifically, the post exposure bake (PEB) impact on CD is studied in addition to dissolution characteristics and resist material hydrophobicity. Additionally, initial results show the current status of CDU and defectivity with the ADT/CLEAN TRACK ACT™ 12 lithocluster. Analysis of a five-wafer batch of CDU wafers shows the within-wafer and wafer-to-wafer contribution from track processing. A Pareto chart of a patterned-wafer defectivity test gives initial insight into the process defects under the current processing conditions. From analysis of these data, it is shown that while improvements in processing are certainly possible, the initial results indicate a manufacturable process for EUV.

  14. Sensitivity analyses of a colloid-facilitated contaminant transport model for unsaturated heterogeneous soil conditions.

    NASA Astrophysics Data System (ADS)

    Périard, Yann; José Gumiere, Silvio; Rousseau, Alain N.; Caron, Jean

    2013-04-01

    Certain contaminants may travel faster through soils when they are sorbed to subsurface colloidal particles. Indeed, subsurface colloids may act as carriers of some contaminants, accelerating their translocation through the soil into the water table. This phenomenon is known as colloid-facilitated contaminant transport. It plays a significant role in contaminant transport in soils and has been recognized as a source of groundwater contamination. From a mechanistic point of view, the attachment/detachment of the colloidal particles from the soil matrix or from the air-water interface and the straining process may modify the hydraulic properties of the porous media. Šimůnek et al. (2006) developed a model that can simulate colloid-facilitated contaminant transport in variably saturated porous media. The model is based on the solution of a modified advection-dispersion equation that accounts for several processes, namely: straining, exclusion and attachment/detachment kinetics of colloids through the soil matrix. The solutions of these governing partial differential equations are obtained using a standard Galerkin-type, linear finite element scheme, implemented in the HYDRUS-2D/3D software (Šimůnek et al., 2012). Modeling colloid transport through the soil and the interaction of colloids with the soil matrix and other contaminants is complex and requires the characterization of many model parameters. In practice, it is very difficult to assess actual transport parameter values, so they are often calibrated. However, before calibration, one needs to know which parameters have the greatest impact on output variables. This kind of information can be obtained through a sensitivity analysis of the model. The main objective of this work is to perform local and global sensitivity analyses of the colloid-facilitated contaminant transport module of HYDRUS. Sensitivity analysis was performed in two steps: (i) we applied a screening method based on Morris' elementary
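A Morris elementary-effects screening of the kind mentioned above can be sketched as follows. The three-parameter toy model standing in for the HYDRUS transport module is an assumption for illustration only:

```python
import numpy as np

def morris_mu_star(model, n_params, n_points=50, delta=0.1, seed=0):
    """Mean absolute elementary effect (mu*) for each parameter.

    Elementary effect: (f(x + delta * e_i) - f(x)) / delta at random base
    points x in the unit hypercube; mu* ranks parameter importance.
    """
    rng = np.random.default_rng(seed)
    effects = np.zeros((n_points, n_params))
    for t in range(n_points):
        x = rng.random(n_params) * (1.0 - delta)   # keep x + delta inside [0, 1]
        y0 = model(x)
        for i in range(n_params):
            xp = x.copy()
            xp[i] += delta
            effects[t, i] = (model(xp) - y0) / delta
    return np.abs(effects).mean(axis=0)

# Hypothetical stand-in model: parameter 0 dominates, parameter 2 is inert
toy_model = lambda x: 5.0 * x[0] + 0.5 * x[1]
mu_star = morris_mu_star(toy_model, n_params=3)   # roughly [5.0, 0.5, 0.0]
```

Parameters with small mu* can then be fixed before the more expensive global (e.g. variance-based) analysis.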

  15. Systematic Processing of Clementine Data for Scientific Analyses

    NASA Technical Reports Server (NTRS)

    Mcewen, A. S.

    1993-01-01

    If fully successful, the Clementine mission will return about 3,000,000 lunar images and more than 5000 images of Geographos. Effective scientific analyses of such large datasets require systematic processing efforts. Concepts for two such efforts are described: global multispectral imaging of the Moon, and videos of Geographos.

  16. Ground water flow modeling with sensitivity analyses to guide field data collection in a mountain watershed

    USGS Publications Warehouse

    Johnson, Raymond H.

    2007-01-01

    In mountain watersheds, the increased demand for clean water resources has led to an increased need for an understanding of ground water flow in alpine settings. In Prospect Gulch, located in southwestern Colorado, understanding the ground water flow system is an important first step in addressing metal loads from acid-mine drainage and acid-rock drainage in an area with historical mining. Ground water flow modeling with sensitivity analyses is presented as a general tool to guide future field data collection, which is applicable to any ground water study, including mountain watersheds. For a series of conceptual models, the observation and sensitivity capabilities of MODFLOW-2000 are used to determine composite scaled sensitivities, dimensionless scaled sensitivities, and 1% scaled sensitivity maps of hydraulic head. These sensitivities identify the most important input parameter(s), along with the locations of observation data that are most useful for future model calibration. The results are generally independent of the conceptual model and indicate recharge in a high-elevation recharge zone as the most important parameter, followed by the hydraulic conductivities in all layers and recharge in the next lower-elevation zone. The most important observation data for determining these parameters are hydraulic heads at high elevations, with a depth of less than 100 m being adequate. Evaluation of a possible geologic structure with a different hydraulic conductivity than the surrounding bedrock indicates that ground water discharge to individual stream reaches has the potential to identify some of these structures. Results of these sensitivity analyses can be used to prioritize data collection in an effort to reduce the time and money spent by collecting the most relevant model calibration data.

  17. Convection sensitivity and thermal analyses for indium and indium-lead mixing experiment (74-18)

    NASA Technical Reports Server (NTRS)

    Bourgeois, S. V.; Doty, J. P.

    1976-01-01

    Sounding rocket Experiment 74-18 was designed to demonstrate the effects of the Black Brandt rocket acceleration levels (during the low-g coast phase of its flight) on the motion of a liquid metal system to assist in preflight design. Some post flight analyses were also conducted. Preflight studies consisted of heat transfer analysis and convection sensitivity and convection modeling analyses which aided in the: (1) final selection of fluid materials (indium-lead melts rather than paraffins); (2) design and timing of heater and quench system; and (3) preflight predictions of the degree of lead penetration into the pure indium segment of the fluid. Postflight studies involved: (1) updating the convection sensitivity calculations by utilizing actual flight gravity levels; and (2) modeling the mixing in the flight samples.

  18. Performance Assessment Modeling and Sensitivity Analyses of Generic Disposal System Concepts.

    SciTech Connect

    Sevougian, S. David; Freeze, Geoffrey A.; Gardner, William Payton; Hammond, Glenn Edward; Mariner, Paul

    2014-09-01

    directly, rather than through simplified abstractions. It also allows for complex representations of the source term, e.g., the explicit representation of many individual waste packages (i.e., meter-scale detail of an entire waste emplacement drift). This report fulfills the Generic Disposal System Analysis Work Package Level 3 Milestone - Performance Assessment Modeling and Sensitivity Analyses of Generic Disposal System Concepts (M3FT-14SN0808032).

  19. Visualization tools for uncertainty and sensitivity analyses on thermal-hydraulic transients

    NASA Astrophysics Data System (ADS)

    Popelin, Anne-Laure; Iooss, Bertrand

    2014-06-01

    In nuclear engineering studies, uncertainty and sensitivity analyses of simulation computer codes must contend with the complexity of the input and/or output variables. If these variables represent a transient or a spatial phenomenon, the difficulty is to provide tools adapted to their functional nature. In this paper, we describe useful visualization tools in the context of uncertainty analysis of model transient outputs. Our application involves thermal-hydraulic computations for safety studies of nuclear pressurized water reactors.

  20. Review of Approximate Analyses of Sheet Forming Processes

    NASA Astrophysics Data System (ADS)

    Weiss, Matthias; Rolfe, Bernard; Yang, Chunhui; de Souza, Tim; Hodgson, Peter

    2011-08-01

    Approximate models are often used for the following purposes: • in on-line control systems of metal forming processes where calculation speed is critical; • to obtain quick, quantitative information on the magnitude of the main variables in the early stages of process design; • to illustrate the role of the major variables in the process; • as an initial check on numerical modelling; and • as a basis for quick calculations on processes in teaching and training packages. The models often share many similarities; for example, an arbitrary geometric assumption of deformation giving a simplified strain distribution, simple material property descriptions—such as an elastic, perfectly plastic law—and mathematical short cuts such as a linear approximation of a polynomial expression. In many cases, the output differs significantly from experiment, and performance or efficiency factors are developed by experience to tune the models. In recent years, analytical models have been widely used at Deakin University in the design of experiments and equipment and as a precursor to more detailed numerical analyses. Examples that are reviewed in this paper include deformation of sandwich material having a weak, elastic core, load prediction in deep drawing, bending of strip (particularly of ageing steel where kinking may occur), process analysis of low-pressure hydroforming of tubing, analysis of the rejection rates in stamping, and the determination of constitutive models by an inverse method applied to bending tests.

  1. Sensitivity Analysis of a process based erosion model using FAST

    NASA Astrophysics Data System (ADS)

    Gabelmann, Petra; Wienhöfer, Jan; Zehe, Erwin

    2015-04-01

    Erosion, sediment redistribution and related particulate transport are severe problems in agro-ecosystems with highly erodible loess soils. They are controlled by various factors, for example rainfall intensity, topography, initial wetness conditions, spatial patterns of soil hydraulic parameters, land use and tillage practice. The interplay between those factors is not well understood. A number of models have been developed to represent these complex interactions and to estimate the amount of sediment that will be removed, transported and accumulated. In order to use physically based models to provide insight into the physical system under study, it is necessary to understand the interactions of parameters and processes in the model domain. Sensitivity analyses give insight into the relative importance of model parameters, which in addition is useful for judging where the greatest efforts should be spent in acquiring or calibrating input parameters. The objective of this study was to determine the sensitivity of the erosion-related parameters in the CATFLOW model. We analysed simulations from the Weiherbach catchment, where good matches of observed hydrological response and erosion dynamics had been obtained in earlier studies. The Weiherbach catchment is located in an intensively cultivated loess region in southwest Germany, and due to the hilly landscape and the highly erodible loess soils, erosion is a severe environmental problem there. CATFLOW is a process-based hydrology and erosion model that can operate on catchment and hillslope scales. Soil water dynamics are described by the Richards equation, including effective approaches for preferential flow. Evapotranspiration is simulated using an approach based on the Penman-Monteith equation. The model simulates overland flow using the diffusion wave equation. Soil detachment is related to the attacking forces of rainfall and overland flow, and the erosion resistance of the soil. Sediment transport capacity and sediment

  2. Parameterization and sensitivity analyses of a radiative transfer model for remote sensing plant canopies

    NASA Astrophysics Data System (ADS)

    Hall, Carlton Raden

    A major objective of remote sensing is determination of biochemical and biophysical characteristics of plant canopies utilizing high spectral resolution sensors. Canopy reflectance signatures are dependent on absorption and scattering processes of the leaf, canopy properties, and the ground beneath the canopy. This research investigates, through field and laboratory data collection, and computer model parameterization and simulations, the relationships between leaf optical properties, canopy biophysical features, and the nadir viewed above-canopy reflectance signature. Emphasis is placed on parameterization and application of an existing irradiance radiative transfer model developed for aquatic systems. Data and model analyses provide knowledge on the relative importance of leaves and canopy biophysical features in estimating the diffuse absorption a(lambda,m-1), diffuse backscatter b(lambda,m-1), beam attenuation alpha(lambda,m-1), and beam to diffuse conversion c(lambda,m-1 ) coefficients of the two-flow irradiance model. Data sets include field and laboratory measurements from three plant species, live oak (Quercus virginiana), Brazilian pepper (Schinus terebinthifolius) and grapefruit (Citrus paradisi) sampled on Cape Canaveral Air Force Station and Kennedy Space Center Florida in March and April of 1997. Features measured were depth h (m), projected foliage coverage PFC, leaf area index LAI, and zenith leaf angle. Optical measurements, collected with a Spectron SE 590 high sensitivity narrow bandwidth spectrograph, included above canopy reflectance, internal canopy transmittance and reflectance and bottom reflectance. Leaf samples were returned to laboratory where optical and physical and chemical measurements of leaf thickness, leaf area, leaf moisture and pigment content were made. A new term, the leaf volume correction index LVCI was developed and demonstrated in support of model coefficient parameterization. The LVCI is based on angle adjusted leaf

  3. Quasi-Static Probabilistic Structural Analyses Process and Criteria

    NASA Technical Reports Server (NTRS)

    Goldberg, B.; Verderaime, V.

    1999-01-01

    Current deterministic structural methods are easily applied to substructures and components, and analysts have built great design insights and confidence in them over the years. However, deterministic methods cannot support systems risk analyses, and it was recently reported that deterministic treatment of statistical data is inconsistent with error propagation laws, which can result in unevenly conservative structural predictions. Assuming normal distributions and using statistical data formats throughout prevailing deterministic stress processes leads to a safety factor in statistical format, which, integrated into the safety index, provides a safety factor and first-order reliability relationship. The embedded safety factor in the safety index expression allows a historically based risk to be determined and verified over a variety of quasi-static metallic substructures, consistent with the traditional safety factor methods and NASA Std. 5001 criteria.
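The safety factor / safety index relationship referred to above can be illustrated for the simplest first-order case of independent, normally distributed resistance R and stress S; the numerical values below are hypothetical, not from the NASA study:

```python
import math

def safety_index(mu_r, sigma_r, mu_s, sigma_s):
    """First-order safety (reliability) index for normal resistance R and stress S."""
    return (mu_r - mu_s) / math.sqrt(sigma_r**2 + sigma_s**2)

def failure_probability(beta):
    """P(R - S < 0) = Phi(-beta) for the normal case, via the error function."""
    return 0.5 * (1.0 + math.erf(-beta / math.sqrt(2.0)))

# Central safety factor mu_R / mu_S = 1.5 with 10% coefficients of variation
beta = safety_index(mu_r=1.5, sigma_r=0.15, mu_s=1.0, sigma_s=0.10)
pf = failure_probability(beta)   # beta ~ 2.77, pf ~ 0.003
```

This makes explicit how a deterministic safety factor (here 1.5) maps to a reliability level once the scatter in resistance and stress is stated.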

  4. CHILDREN AS A SENSITIVE SUBPOPULATION FOR THE RISK ASSESSMENT PROCESS

    EPA Science Inventory

    Children as a sensitive subpopulation for the risk assessment process
    Abstract
    For cancer risk assessment purposes, it is necessary to consider how to incorporate sensitive subpopulations into the process to ensure that they are appropriately protected. Children represent o...

  5. Cross-frequency Doppler sensitive signal processing

    NASA Astrophysics Data System (ADS)

    Wagstaff, Ronald A.

    2005-04-01

    When there is relative motion between an acoustic source and a receiver, a signal can be Doppler shifted in frequency and enter or leave the processing bins of a conventional signal processor. The amount of the shift is determined by the frequency and the rate of change of the distance between the source and the receiver. This Doppler frequency shifting can cause severe reductions in the processor's performance. Special cross-frequency signal processing algorithms have recently been developed to mitigate the effects of Doppler. They do this by using calculation paths that cut across frequency bins in order to follow signals during frequency shifting. Cross-frequency spectral grams of a fast-flying sound source were compared to conventional grams to evaluate the performance of this new signal processing method. The Doppler shifts in the data ranged up to 70 contiguous frequency bins. The resulting cross-frequency grams showed that three paths provided small to no improvement. Four paths showed improvements for either up-frequency or down-frequency shifting, but not for both. Two paths showed substantial improvement for both up-frequency and down-frequency shifting. The cross-frequency paths will be defined, and comparisons between conventional and cross-frequency grams will be presented. [Work supported by Miltec Corporation.]

  6. Hospital Standardized Mortality Ratios: Sensitivity Analyses on the Impact of Coding

    PubMed Central

    Bottle, Alex; Jarman, Brian; Aylin, Paul

    2011-01-01

    Introduction Hospital standardized mortality ratios (HSMRs) are derived from administrative databases and cover 80 percent of in-hospital deaths with adjustment for available case mix variables. They have been criticized for being sensitive to issues such as clinical coding but on the basis of limited quantitative evidence. Methods In a set of sensitivity analyses, we compared regular HSMRs with HSMRs resulting from a variety of changes, such as a patient-based measure, not adjusting for comorbidity, not adjusting for palliative care, excluding unplanned zero-day stays ending in live discharge, and using more or fewer diagnoses. Results Overall, regular and variant HSMRs were highly correlated (ρ > 0.8), but differences of up to 10 points were common. Two hospitals were particularly affected when palliative care was excluded from the risk models. Excluding unplanned stays ending in same-day live discharge had the least impact despite their high frequency. The largest impacts were seen when capturing postdischarge deaths and using just five high-mortality diagnosis groups. Conclusions HSMRs in most hospitals changed by only small amounts from the various adjustment methods tried here, though small-to-medium changes were not uncommon. However, the position relative to funnel plot control limits could move in a significant minority even with modest changes in the HSMR. PMID:21790587
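The HSMR itself is a simple ratio of observed to expected deaths, where the expected count comes from summing per-admission death probabilities from the case-mix model. A minimal sketch, with hypothetical per-admission risks:

```python
def hsmr(observed_deaths, predicted_risks):
    """Hospital standardized mortality ratio: 100 x observed / expected deaths.

    `predicted_risks` are per-admission death probabilities from the
    case-mix adjustment model; their sum is the expected death count.
    """
    expected = sum(predicted_risks)
    return 100.0 * observed_deaths / expected

# 12 observed deaths against a case mix predicting 10 expected deaths
risks = [0.1] * 100            # hypothetical per-admission risks
ratio = hsmr(12, risks)        # 120: more deaths than the case mix predicts
```

The sensitivity analyses in the paper amount to recomputing `predicted_risks` under different modelling choices (comorbidity, palliative care, diagnosis groups) and checking how much `ratio` moves.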

  7. Sensitivity and first-step uncertainty analyses for the preferential flow model MACRO.

    PubMed

    Dubus, Igor G; Brown, Colin D

    2002-01-01

    Sensitivity analyses for the preferential flow model MACRO were carried out using one-at-a-time and Monte Carlo sampling approaches. Four different scenarios were generated by simulating leaching to depth of two hypothetical pesticides in a sandy loam and a more structured clay loam soil. Sensitivity of the model was assessed using the predictions for accumulated water percolated at a 1-m depth and accumulated pesticide losses in percolation. Results for simulated percolation were similar for the two soils. Predictions of water volumes percolated were found to be only marginally affected by changes in input parameters and the most influential parameter was the water content defining the boundary between micropores and macropores in this dual-porosity model. In contrast, predictions of pesticide losses were found to be dependent on the scenarios considered and to be significantly affected by variations in input parameters. In most scenarios, predictions for pesticide losses by MACRO were most influenced by parameters related to sorption and degradation. Under specific circumstances, pesticide losses can be largely affected by changes in hydrological properties of the soil. Since parameters were varied within ranges that approximated their uncertainty, a first-step assessment of uncertainty for the predictions of pesticide losses was possible. Large uncertainties in the predictions were reported, although these are likely to have been overestimated by considering a large number of input parameters in the exercise. It appears desirable that a probabilistic framework accounting for uncertainty is integrated into the estimation of pesticide exposure for regulatory purposes. PMID:11837426

  8. Accounting for management costs in sensitivity analyses of matrix population models.

    PubMed

    Baxter, Peter W J; McCarthy, Michael A; Possingham, Hugh P; Menkhorst, Peter W; McLean, Natasha

    2006-06-01

    Traditional sensitivity and elasticity analyses of matrix population models have been used to inform management decisions, but they ignore the economic costs of manipulating vital rates. For example, the growth rate of a population is often most sensitive to changes in adult survival rate, but this does not mean that increasing that rate is the best option for managing the population because it may be much more expensive than other options. To explore how managers should optimize their manipulation of vital rates, we incorporated the cost of changing those rates into matrix population models. We derived analytic expressions for locations in parameter space where managers should shift between management of fecundity and survival, for the balance between fecundity and survival management at those boundaries, and for the allocation of management resources to sustain that optimal balance. For simple matrices, the optimal budget allocation can often be expressed as simple functions of vital rates and the relative costs of changing them. We applied our method to management of the Helmeted Honeyeater (Lichenostomus melanops cassidix; an endangered Australian bird) and the koala (Phascolarctos cinereus) as examples. Our method showed that cost-efficient management of the Helmeted Honeyeater should focus on increasing fecundity via nest protection, whereas optimal koala management should focus on manipulating both fecundity and survival simultaneously. These findings are contrary to the cost-negligent recommendations of elasticity analysis, which would suggest focusing on managing survival in both cases. A further investigation of Helmeted Honeyeater management options, based on an individual-based model incorporating density dependence, spatial structure, and environmental stochasticity, confirmed that fecundity management was the most cost-effective strategy. Our results demonstrate that decisions that ignore economic factors will reduce management efficiency. PMID
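The traditional sensitivity and elasticity analysis that the authors extend can be sketched for a generic projection matrix; the 2x2 matrix below is a made-up two-stage example, not the Helmeted Honeyeater or koala model:

```python
import numpy as np

def sensitivity_elasticity(A):
    """Sensitivity s_ij = v_i w_j / <v, w> and elasticity e_ij = (a_ij / lam) s_ij
    of the dominant eigenvalue lam of a projection matrix A, where w and v are
    the right and left eigenvectors for lam."""
    vals, W = np.linalg.eig(A)
    k = np.argmax(vals.real)
    lam = vals[k].real
    w = W[:, k].real                         # stable stage distribution
    vals_t, V = np.linalg.eig(A.T)
    v = V[:, np.argmax(vals_t.real)].real    # reproductive values
    S = np.outer(v, w) / (v @ w)
    E = (A / lam) * S
    return lam, S, E

# Hypothetical 2-stage matrix: fecundity 1.5, juvenile survival 0.5, adult survival 0.9
A = np.array([[0.0, 1.5],
              [0.5, 0.9]])
lam, S, E = sensitivity_elasticity(A)   # elasticities sum to 1
```

The paper's point is that ranking vital rates by these elasticities alone ignores what each manipulation costs; the cost-aware extension weights each e_ij by the price of changing a_ij.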

  9. Incorporating uncertainty of management costs in sensitivity analyses of matrix population models.

    PubMed

    Salomon, Yacov; McCarthy, Michael A; Taylor, Peter; Wintle, Brendan A

    2013-02-01

    The importance of accounting for economic costs when making environmental-management decisions subject to resource constraints has been increasingly recognized in recent years. In contrast, uncertainty associated with such costs has often been ignored. We developed a method, on the basis of economic theory, that accounts for the uncertainty in population-management decisions. We considered the case where, rather than taking fixed values, model parameters are random variables that represent the situation when parameters are not precisely known. Hence, the outcome is not precisely known either. Instead of maximizing the expected outcome, we maximized the probability of obtaining an outcome above a threshold of acceptability. We derived explicit analytical expressions for the optimal allocation and its associated probability, as a function of the threshold of acceptability, where the model parameters were distributed according to normal and uniform distributions. To illustrate our approach we revisited a previous study that incorporated cost-efficiency analyses in management decisions that were based on perturbation analyses of matrix population models. Incorporating derivations from this study into our framework, we extended the model to address potential uncertainties. We then applied these results to 2 case studies: management of a Koala (Phascolarctos cinereus) population and conservation of an olive ridley sea turtle (Lepidochelys olivacea) population. For low aspirations, that is, when the threshold of acceptability is relatively low, the optimal strategy was obtained by diversifying the allocation of funds. Conversely, for high aspirations, the budget was directed toward management actions with the highest potential effect on the population. The exact optimal allocation was sensitive to the choice of uncertainty model. Our results highlight the importance of accounting for uncertainty when making decisions and suggest that more effort should be placed on

  10. Sensitivity study for s process nucleosynthesis in AGB stars

    NASA Astrophysics Data System (ADS)

    Koloczek, A.; Thomas, B.; Glorius, J.; Plag, R.; Pignatari, M.; Reifarth, R.; Ritter, C.; Schmidt, S.; Sonnabend, K.

    2016-03-01

In this paper we present a large-scale sensitivity study of reaction rates in the main component of the s process. The aim of this study is to identify all rates that have a global effect on the s-process abundance distribution, as well as the three most important rates for the production of each isotope. We performed a sensitivity study on the radiative 13C pocket and on the convective thermal pulse, the two sites of the s process in AGB stars, and identified 22 rates that have the highest impact on the s-process abundances.
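The one-at-a-time variation behind such a study can be sketched with the classical steady-state approximation, in which sigma*N is roughly constant along the chain so equilibrium abundances scale as 1/sigma. Each rate is doubled in turn and ranked by a global impact measure; the cross sections below are illustrative numbers, not evaluated data:

```python
# Toy rate-sensitivity study: perturb each capture cross section by a
# factor of 2 and rank rates by a global impact measure
# F = 100 * sum_i |X_baseline,i - X_perturbed,i| over normalized abundances.

sigmas = {"A=120": 50.0, "A=121": 400.0, "A=122": 30.0, "A=123": 600.0}

def abundances(sig):
    n = {k: 1.0 / v for k, v in sig.items()}   # steady-state N ~ 1/sigma
    total = sum(n.values())
    return {k: v / total for k, v in n.items()}  # normalize

baseline = abundances(sigmas)

impact = {}
for iso in sigmas:
    perturbed = dict(sigmas)
    perturbed[iso] *= 2.0          # double one capture cross section
    pert = abundances(perturbed)
    impact[iso] = 100.0 * sum(abs(baseline[k] - pert[k]) for k in baseline)

ranking = sorted(impact, key=impact.get, reverse=True)
print("most influential rate:", ranking[0])
```

As expected, the rate on the most abundant isotope (smallest cross section) has the largest global effect on the distribution.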

  11. LCF on turbogenerator rotors and coil retaining rings: material characterization and sensitivity analyses

    NASA Astrophysics Data System (ADS)

    Olmi, G.; Freddi, A.

    2010-06-01

    Turbogenerator rotors and coil retaining rings (CRRs) are highly loaded components typically subjected to LCF at any machine switch-on and switch-off. The present study aims at LCF characterization of two widely applied steels, 26 NiCrMoV 14 5 (for rotor manufacturing) and 18Mn18Cr (for CRR). Material anisotropy is also considered by performing an extended experimental campaign on specimens machined along different (tangential and radial) directions from trial components. The experimental tests, carried out with the use of a novel testing-constraining device for misalignment auto-compensation and with an original methodology for strain controlling, led to the determination of static, cyclic and fatigue curves for all the investigated cases. The research was completed by sensitivity analyses on the adopted models, thus determining fatigue curve tolerance bands, and by a statistical Analysis of Variance to compare the LCF performance of the different materials along the two considered machining directions. Results showed a significantly better performance of 18Mn18Cr and a weak anisotropy effect, remarkable just at the highest strain values, on a reduced portion of the LCF life range.

  12. An optimisation model for regional integrated solid waste management II. Model application and sensitivity analyses.

    PubMed

    Najm, M Abou; El-Fadel, M; Ayoub, G; El-Taha, M; Al-Awar, F

    2002-02-01

Increased environmental concerns and the emphasis on material and energy recovery are gradually changing the orientation of MSW management and planning. In this context, the application of optimisation techniques has been introduced to design least-cost solid waste management systems, considering the variety of management processes (recycling, composting, anaerobic digestion, incineration, and landfilling) and the existence of uncertainties associated with the number of system components and their interrelations. This study presents a model that was developed and applied to serve as a decision support system for MSW management, taking into account both socio-economic and environmental considerations. The model accounts for solid waste generation rates, composition, collection, treatment and disposal, as well as potential environmental impacts of various MSW management techniques. The model follows a linear programming formulation within the framework of dynamic optimisation. It can serve as a tool to evaluate various MSW management alternatives and obtain the optimal combination of technologies for the handling, treatment and disposal of MSW in an economic and environmentally sustainable way. The sensitivity of various waste management policies is also addressed. The work is presented in a series of two papers: (I) model formulation, and (II) model application and sensitivity analysis. PMID:12020095
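A minimal sketch of the least-cost allocation idea behind such models: route a fixed tonnage of MSW across treatment options, each with a unit cost and a capacity cap. With a single mass-balance constraint, filling the cheapest remaining option first is the LP-optimal solution. The costs and capacities are illustrative, not values from the paper:

```python
options = [  # (name, unit cost $/t, capacity t/day) -- invented numbers
    ("recycling", 20.0, 150.0),
    ("composting", 35.0, 200.0),
    ("incineration", 60.0, 400.0),
    ("landfilling", 25.0, 300.0),
]

def allocate(total):
    """Fill cheapest options first until the tonnage is placed."""
    plan, cost = {}, 0.0
    for name, c, cap in sorted(options, key=lambda o: o[1]):
        take = min(cap, total)
        plan[name] = take
        cost += c * take
        total -= take
        if total <= 0:
            break
    return plan, cost

plan, cost = allocate(500.0)
print(plan, cost)
```

A full model of the kind described adds per-process environmental constraints and multi-period (dynamic) terms, which turn the greedy fill into a general linear program.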

  13. Uncertainty and Sensitivity Analyses of a Two-Parameter Impedance Prediction Model

    NASA Technical Reports Server (NTRS)

    Jones, M. G.; Parrott, T. L.; Watson, W. R.

    2008-01-01

    This paper presents comparisons of predicted impedance uncertainty limits derived from Monte-Carlo-type simulations with a Two-Parameter (TP) impedance prediction model and measured impedance uncertainty limits based on multiple tests acquired in NASA Langley test rigs. These predicted and measured impedance uncertainty limits are used to evaluate the effects of simultaneous randomization of each input parameter for the impedance prediction and measurement processes. A sensitivity analysis is then used to further evaluate the TP prediction model by varying its input parameters on an individual basis. The variation imposed on the input parameters is based on measurements conducted with multiple tests in the NASA Langley normal incidence and grazing incidence impedance tubes; thus, the input parameters are assigned uncertainties commensurate with those of the measured data. These same measured data are used with the NASA Langley impedance measurement (eduction) processes to determine the corresponding measured impedance uncertainty limits, such that the predicted and measured impedance uncertainty limits (95% confidence intervals) can be compared. The measured reactance 95% confidence intervals encompass the corresponding predicted reactance confidence intervals over the frequency range of interest. The same is true for the confidence intervals of the measured and predicted resistance at near-resonance frequencies, but the predicted resistance confidence intervals are lower than the measured resistance confidence intervals (no overlap) at frequencies away from resonance. A sensitivity analysis indicates the discharge coefficient uncertainty is the major contributor to uncertainty in the predicted impedances for the perforate-over-honeycomb liner used in this study. This insight regarding the relative importance of each input parameter will be used to guide the design of experiments with test rigs currently being brought on-line at NASA Langley.
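The Monte-Carlo-type procedure above (simultaneously randomize every input, then read off 95% confidence limits on the prediction) can be sketched with a toy two-parameter model; the model form and the input uncertainties are placeholders, not the NASA TP model:

```python
import random

def model(porosity, discharge_coeff):
    # toy resistance-like output, stand-in for the impedance prediction
    return 1.0 / (porosity * discharge_coeff)

random.seed(0)
samples = []
for _ in range(20000):
    porosity = random.gauss(0.08, 0.004)        # assumed 5% scatter
    discharge_coeff = random.gauss(0.76, 0.06)  # assumed dominant uncertainty
    samples.append(model(porosity, discharge_coeff))

samples.sort()
lo = samples[int(0.025 * len(samples))]  # 2.5th percentile
hi = samples[int(0.975 * len(samples))]  # 97.5th percentile
print(f"95% CI: [{lo:.2f}, {hi:.2f}]")
```

Repeating the run with one input randomized at a time (others held at their means) gives the individual-parameter sensitivity comparison described in the abstract.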

  14. Advances in global sensitivity analyses of demographic-based species distribution models to address uncertainties in dynamic landscapes

    PubMed Central

    Curtis, Janelle M.R.

    2016-01-01

Developing a rigorous understanding of multiple global threats to species persistence requires the use of integrated modeling methods that capture processes which influence species distributions. Species distribution models (SDMs) coupled with population dynamics models can incorporate relationships between changing environments and demographics and are increasingly used to quantify relative extinction risks associated with climate and land-use changes. Despite their appeal, uncertainties associated with complex models can undermine their usefulness for advancing predictive ecology and informing conservation management decisions. We developed a computationally-efficient and freely available tool (GRIP 2.0) that implements and automates a global sensitivity analysis of coupled SDM-population dynamics models for comparing the relative influence of demographic parameters and habitat attributes on predicted extinction risk. Advances over previous global sensitivity analyses include the ability to vary habitat suitability across gradients, as well as habitat amount and configuration of spatially-explicit suitability maps of real and simulated landscapes. Using GRIP 2.0, we carried out a multi-model global sensitivity analysis of a coupled SDM-population dynamics model of whitebark pine (Pinus albicaulis) in Mount Rainier National Park as a case study and quantified the relative influence of input parameters and their interactions on model predictions. Our results differed from the one-at-a-time analyses used in the original study, and we found that the most influential parameters included the total amount of suitable habitat within the landscape, survival rates, and effects of a prevalent disease, white pine blister rust. Strong interactions between habitat amount and survival rates of older trees suggest the importance of habitat in mediating the negative influences of white pine blister rust.
Our results underscore the importance of considering habitat attributes along

  15. Advances in global sensitivity analyses of demographic-based species distribution models to address uncertainties in dynamic landscapes.

    PubMed

    Naujokaitis-Lewis, Ilona; Curtis, Janelle M R

    2016-01-01

Developing a rigorous understanding of multiple global threats to species persistence requires the use of integrated modeling methods that capture processes which influence species distributions. Species distribution models (SDMs) coupled with population dynamics models can incorporate relationships between changing environments and demographics and are increasingly used to quantify relative extinction risks associated with climate and land-use changes. Despite their appeal, uncertainties associated with complex models can undermine their usefulness for advancing predictive ecology and informing conservation management decisions. We developed a computationally-efficient and freely available tool (GRIP 2.0) that implements and automates a global sensitivity analysis of coupled SDM-population dynamics models for comparing the relative influence of demographic parameters and habitat attributes on predicted extinction risk. Advances over previous global sensitivity analyses include the ability to vary habitat suitability across gradients, as well as habitat amount and configuration of spatially-explicit suitability maps of real and simulated landscapes. Using GRIP 2.0, we carried out a multi-model global sensitivity analysis of a coupled SDM-population dynamics model of whitebark pine (Pinus albicaulis) in Mount Rainier National Park as a case study and quantified the relative influence of input parameters and their interactions on model predictions. Our results differed from the one-at-a-time analyses used in the original study, and we found that the most influential parameters included the total amount of suitable habitat within the landscape, survival rates, and effects of a prevalent disease, white pine blister rust. Strong interactions between habitat amount and survival rates of older trees suggest the importance of habitat in mediating the negative influences of white pine blister rust.
Our results underscore the importance of considering habitat attributes along
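The global-sensitivity idea behind GRIP-style analyses (vary all inputs simultaneously, then rank each input by its influence on the predicted risk) can be illustrated with a toy extinction-risk function; the model and parameter ranges are invented for illustration, not drawn from GRIP 2.0:

```python
import random

random.seed(1)
n = 5000
habitat = [random.uniform(0.2, 1.0) for _ in range(n)]
survival = [random.uniform(0.7, 0.99) for _ in range(n)]
disease = [random.uniform(0.0, 0.5) for _ in range(n)]
# Toy risk: less habitat, lower survival, more disease -> higher risk,
# with a habitat*disease interaction term.
risk = [1.0 - h * s + d * (1.0 - h) for h, s, d in zip(habitat, survival, disease)]

def corr(x, y):
    """Pearson correlation coefficient."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

influence = {name: abs(corr(vals, risk))
             for name, vals in [("habitat", habitat),
                                ("survival", survival),
                                ("disease", disease)]}
print(sorted(influence, key=influence.get, reverse=True))
```

Because every input varies at once, interaction effects contribute to the ranking, which is precisely what one-at-a-time analyses miss.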

  16. The Effect of Nonadopted Analyses on Sentence Processing

    ERIC Educational Resources Information Center

    Cai, Zhenguang G.; Sturt, Patrick; Pickering, Martin J.

    2012-01-01

    Are comprehenders affected by an alternative analysis that they do not adopt (a nonadopted analysis) in case of syntactic ambiguity? If the processor only considers and maintains the preferred analysis at a given time, an alternative analysis is then not considered and will hence not affect processing. In two experiments, we examined the…

  17. Microfiltration of thin stillage: Process simulation and economic analyses

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In plant scale operations, multistage membrane systems have been adopted for cost minimization. We considered design optimization and operation of a continuous microfiltration (MF) system for the corn dry grind process. The objectives were to develop a model to simulate a multistage MF system, optim...

  18. Thermodynamics of Gases: Combustion Processes, Analysed in Slow Motion

    ERIC Educational Resources Information Center

    Vollmer, Michael; Mollmann, Klaus-Peter

    2013-01-01

    We present a number of simple demonstration experiments recorded with high-speed cameras in the fields of gas dynamics and thermal physics. The experiments feature relatively slow combustion processes of pure hydrogen as well as fast reactions involving oxy-hydrogen in a stoichiometric mixture. (Contains 4 figures.)

  19. Criticality Safety and Sensitivity Analyses of PWR Spent Nuclear Fuel Repository Facilities

    SciTech Connect

    Maucec, Marko; Glumac, Bogdan

    2005-01-15

Monte Carlo criticality safety and sensitivity calculations of pressurized water reactor (PWR) spent nuclear fuel repository facilities for the Slovenian nuclear power plant Krsko are presented. The MCNP4C code was deployed to model and assess the neutron multiplication parameters of pool-based storage and dry transport containers under various loading patterns and moderating conditions. To comply with standard safety requirements, fresh 4.25% enriched nuclear fuel was assumed. The impact of potential optimum moderation due to water steam or foam formation, as well as of different interpretations of neutron multiplication through varying the system boundary conditions, was elaborated. The simulations indicate that in the case of compact (all rack locations filled with fresh fuel) single or 'double tiering' loading, supercriticality can occur under conditions of enhanced neutron moderation due to accidentally reduced density of cooling water. Under standard operational conditions the effective multiplication factor (k{sub eff}) of the pool-based storage facility remains below the specified safety limit of 0.95. The nuclear safety requirements are fulfilled even when the fuel elements are arranged at a minimal distance, which can be initiated, for example, by an earthquake. The dry container in its recommended loading scheme with 26 fuel elements represents a safe alternative for the storage of fresh fuel. Even in the case of complete water flooding, the k{sub eff} remains below the specified safety level of 0.98. The criticality safety limit may, however, be exceeded with larger amounts of loaded fuel assemblies (i.e., 32). Additional Monte Carlo criticality safety analyses are scheduled to consider the 'burnup credit' of PWR spent nuclear fuel, based on the ongoing calculation of typical burnup activities.

  20. Pre-study feasibility and identifying sensitivity analyses for protocol pre-specification in comparative effectiveness research.

    PubMed

    Girman, Cynthia J; Faries, Douglas; Ryan, Patrick; Rotelli, Matt; Belger, Mark; Binkowitz, Bruce; O'Neill, Robert

    2014-05-01

The use of healthcare databases for comparative effectiveness research (CER) is increasing exponentially despite its challenges. Researchers must understand their data source and whether outcomes, exposures and confounding factors are captured sufficiently to address the research question. They must also assess whether bias and confounding can be adequately minimized. Many study design characteristics may impact the results; however, few if any sensitivity analyses are typically conducted, and those performed are post hoc. We propose pre-study steps for CER feasibility assessment and for identifying sensitivity analyses that might be most important to pre-specify, to help ensure that CER produces valid, interpretable results. PMID:24969153

  1. MASSIVELY PARALLEL LATENT SEMANTIC ANALYSES USING A GRAPHICS PROCESSING UNIT

    SciTech Connect

    Cavanagh, J.; Cui, S.

    2009-01-01

Latent Semantic Analysis (LSA) aims to reduce the dimensions of large term-document datasets using Singular Value Decomposition. However, with the ever-expanding size of datasets, current implementations are not fast enough to quickly and easily compute the results on a standard PC. A graphics processing unit (GPU) can solve some highly parallel problems much faster than a traditional sequential processor or central processing unit (CPU). Thus, a deployable system using a GPU to speed up large-scale LSA processes would be a much more effective choice (in terms of cost/performance ratio) than using a PC cluster. Due to the GPU's application-specific architecture, harnessing the GPU's computational prowess for LSA is a great challenge. We presented a parallel LSA implementation on the GPU, using NVIDIA® Compute Unified Device Architecture and Compute Unified Basic Linear Algebra Subprograms software. The performance of this implementation is compared to a traditional LSA implementation on a CPU using an optimized Basic Linear Algebra Subprograms library. After implementation, we discovered that the GPU version of the algorithm was twice as fast for large matrices (1000x1000 and above) that had dimensions not divisible by 16. For large matrices that did have dimensions divisible by 16, the GPU algorithm ran five to six times faster than the CPU version. The large variation is due to architectural benefits of the GPU for matrices divisible by 16. It should be noted that the overall speeds for the CPU version did not vary appreciably when the matrix dimensions were divisible by 16. Further research is needed in order to produce a fully implementable version of LSA. With that in mind, the research we presented shows that the GPU is a viable option for increasing the speed of LSA, in terms of cost/performance ratio.
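The LSA core that the GPU implementation accelerates can be sketched on a CPU with a truncated SVD; the tiny term-document matrix below is illustrative, whereas real workloads are the 1000x1000-and-larger matrices discussed above:

```python
import numpy as np

A = np.array([  # rows = terms, columns = documents (toy counts)
    [2, 0, 1, 0],
    [1, 0, 0, 0],
    [0, 1, 0, 2],
    [0, 2, 0, 1],
], dtype=float)

# Truncated SVD: keep the k largest singular values/vectors.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T   # documents in k-dim latent space

def cos_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Documents 0 and 2 share terms; documents 0 and 1 do not.
print(cos_sim(doc_vecs[0], doc_vecs[2]), cos_sim(doc_vecs[0], doc_vecs[1]))
```

The SVD and the matrix products are exactly the dense linear-algebra kernels (BLAS on the CPU, CUBLAS on the GPU) whose throughput the comparison above measures.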

  2. Analyses of Initial Geomorphic Processes by Microdrone-based photogrammetry

    NASA Astrophysics Data System (ADS)

    Gerwin, W.; Raab, T. A.; Seiffert, T.; Nenov, R.; Maurer, T.; Veste, M.; Hüttl, R. F.

    2009-12-01

In the last ten years digital photogrammetric measurement has become very important for monitoring ecosystem development. For the monitoring of structures and processes of the artificial water catchment “Chicken Creek”, the automatic generation of Digital Elevation Models (DEMs) became a significant tool to measure and visualize landscape change, complex-shaped soil surfaces and water interaction processes. Within these key studies, a subcatchment of the whole research area was surveyed by photogrammetric measurements. Instead of airplane-based aerial photographs, microdrone-based aerial photography was applied using a commercial digital camera. This affords a maximal terrestrial resolution of less than 1 square centimetre per pixel, depending on the operating altitude. Due to the given camera settings it was necessary to choose a maximum operational altitude of 80 m to get DEMs with a high spatial and temporal resolution. The aerial photographs were made with an overlap of at least 60%. For camera calibration and identification of homologous points from the overlapping images by image matching, we used the photogrammetry software “LISA FOTO”. The resulting DEMs visualize the soil surface of the artificial water catchment with a grid resolution of 10 cm. Due to the restricted geometric stability and calibration of the camera and other notable influences, a precision of 1-10 cm in height was obtained for each DEM. Thus, this method allows a more detailed analysis of the soil surface and is important for ecohydrological modelling.

  3. Process analyses of ITER Toroidal Field Structure cooling scheme

    NASA Astrophysics Data System (ADS)

    Maekawa, R.; Takami, S.; Iwamoto, A.; Chang, H. S.; Forgeas, A.; Chalifour, M.; Serio, L.

    2014-09-01

Process studies for the Toroidal Field Structure (TF ST) system with a dedicated Auxiliary Cold Box (ACB-ST) have been conducted under the 15 MA baseline, including plasma disruptions. The ACB-ST consists of two heat exchangers immersed in the Liquid Helium (LHe) subcooler, which are placed at the inlet/outlet of a Supercritical Helium (SHe) cold circulator (centrifugal pump). Robustness of the ACB-ST is key to achieving stable TF coil operation, since it provides the thermal barrier at the interface of the TF Winding Pack (WP) with the ST. The paper discusses the control logic for the nominal plasma operating scenario and for mitigation to regulate the dynamic heat loads on the ST. In addition, the operating field of the cold circulator is described for the case of plasma disruptions. The required performance of the heat exchangers in the ACB-ST is assessed based on the expected operating conditions.

  4. Space shuttle orbiter digital data processing system timing sensitivity analysis OFT ascent phase

    NASA Technical Reports Server (NTRS)

    Lagas, J. J.; Peterka, J. J.; Becker, D. A.

    1977-01-01

    Dynamic loads were investigated to provide simulation and analysis of the space shuttle orbiter digital data processing system (DDPS). Segments of the ascent test (OFT) configuration were modeled utilizing the information management system interpretive model (IMSIM) in a computerized simulation modeling of the OFT hardware and software workload. System requirements for simulation of the OFT configuration were defined, and sensitivity analyses determined areas of potential data flow problems in DDPS operation. Based on the defined system requirements and these sensitivity analyses, a test design was developed for adapting, parameterizing, and executing IMSIM, using varying load and stress conditions for model execution. Analyses of the computer simulation runs are documented, including results, conclusions, and recommendations for DDPS improvements.

  5. Sensitivities of bipolar junction transistor electrical parameters to processing variables

    NASA Astrophysics Data System (ADS)

    Abdulkarim, H. S.

    1980-03-01

    Variations and sensitivities of bipolar junction transistor (BJT) electrical parameters to processing variables were examined. The functional dependence of these sensitivities on the processing schedule employed was estimated. Some design criteria or guidelines that should be followed to reduce the sensitivities of electrical parameters and to minimize yield loss were determined. The BJT parameters considered were electrical parameters of the Ebers-Moll and hybrid-pi models, as well as some device parameters that were useful for the characterization of processing results. The processing variables considered were time and temperature for each of the processing steps of the double diffusion method, physical constants that influence the impurity distribution in silicon, and device dimensions. In evaluating the impurity atom distribution, the diffusion coefficient was assumed to be independent of impurity concentration and the superposition model was assumed for the interaction of the two oppositely charged impurities. In evaluating the electrical parameters, use of a one dimensional model and the modified Moll-Ross relations were assumed to be adequate in relating variations in electrical characteristics to variations in processing variables and physical properties.

  6. Addressing ten questions about conceptual rainfall-runoff models with global sensitivity analyses in R

    NASA Astrophysics Data System (ADS)

    Shin, Mun-Ju; Guillaume, Joseph H. A.; Croke, Barry F. W.; Jakeman, Anthony J.

    2013-10-01

Sensitivity analysis (SA) is generally recognized as a worthwhile step to diagnose and remedy difficulties in identifying model parameters, and indeed in discriminating between model structures. An analysis of papers in three journals indicates that SA is a standard omission in hydrological modeling exercises. We provide some answers to ten reasonably generic questions using the Morris and Sobol SA methods, including to what extent sensitivities are dependent on the parameter ranges selected, length of data period, catchment response type, model structures assumed and climatic forcing. Results presented demonstrate the sensitivity of four target functions to parameter variations of four rainfall-runoff models of varying complexity (4-13 parameters). Daily rainfall, streamflow and pan evaporation data are used from four 10-year data sets and from five catchments in the Australian Capital Territory (ACT) region. Similar results are obtained using the Morris and Sobol methods. It is shown how modelers can easily identify parameters that are insensitive, and how they might improve identifiability. Using a more complex objective function, however, may not result in all parameters becoming sensitive. Crucially, the results of the SA can be influenced by the parameter ranges selected. A minimum of five years of data is required to reliably characterize the sensitivities. The results confirm that only the simpler models have well-identified parameters, but parameter sensitivities vary between catchments. Answering these ten questions in other case studies is relatively easy using freely available software with the Hydromad and Sensitivity packages in R.
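A radial one-at-a-time variant of the elementary-effects (Morris) idea used above can be sketched as follows; the paper itself works in R, and the toy "model" here is an invented stand-in for a rainfall-runoff model, not one of the four actually analysed:

```python
import random

def toy_model(p):
    # output strongly driven by p[0], weakly by p[1], not at all by p[2]
    return 5.0 * p[0] + 0.5 * p[1] ** 2 + 0.0 * p[2]

random.seed(42)
n_params, delta, n_traj = 3, 0.1, 200
mu_star = [0.0] * n_params          # mean absolute elementary effect
for _ in range(n_traj):
    base = [random.random() for _ in range(n_params)]
    y0 = toy_model(base)
    for i in range(n_params):
        stepped = list(base)
        stepped[i] += delta         # perturb one parameter at a time
        mu_star[i] += abs(toy_model(stepped) - y0) / delta
mu_star = [m / n_traj for m in mu_star]
print(mu_star)  # mu* per parameter: large = sensitive, zero = insensitive
```

Parameters with mu* near zero are the insensitive ones the abstract says modelers can "easily identify"; whether a parameter counts as sensitive still depends on the sampling range chosen, which is exactly the caveat the study emphasises.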

  7. Deterministic vs. probabilistic analyses to identify sensitive parameters in dose assessment using RESRAD.

    PubMed

    Kamboj, Sunita; Cheng, Jing-Jy; Yu, Charley

    2005-05-01

    The dose assessments for sites containing residual radioactivity usually involve the use of computer models that employ input parameters describing the physical conditions of the contaminated and surrounding media and the living and consumption patterns of the receptors in analyzing potential doses to the receptors. The precision of the dose results depends on the precision of the input parameter values. The identification of sensitive parameters that have great influence on the dose results would help set priorities in research and information gathering for parameter values so that a more precise dose assessment can be conducted. Two methods of identifying site-specific sensitive parameters, deterministic and probabilistic, were compared by applying them to the RESRAD computer code for analyzing radiation exposure for a residential farmer scenario. The deterministic method has difficulty in evaluating the effect of simultaneous changes in a large number of input parameters on the model output results. The probabilistic method easily identified the most sensitive parameters, but the sensitivity measure of other parameters was obscured. The choice of sensitivity analysis method would depend on the availability of site-specific data. Generally speaking, the deterministic method would identify the same set of sensitive parameters as the probabilistic method when 1) the baseline values used in the deterministic method were selected near the mean or median value of each parameter and 2) the selected range of parameter values used in the deterministic method was wide enough to cover the 5th to 95th percentile values from the distribution of that parameter. PMID:15824576
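The contrast drawn above can be sketched on a toy dose model: the deterministic method varies one input across its range with the others fixed at baseline, while the probabilistic method samples all inputs at once and correlates each with the dose. The dose function and ranges are invented for illustration, not RESRAD's:

```python
import random

def dose(ingestion, concentration, shielding):
    return ingestion * concentration * (1.0 - shielding)

ranges = {"ingestion": (0.5, 2.0), "concentration": (1.0, 9.0),
          "shielding": (0.0, 0.3)}
baseline = {k: (lo + hi) / 2.0 for k, (lo, hi) in ranges.items()}

# Deterministic (one-at-a-time): output swing over each parameter's range.
det = {}
for k, (lo, hi) in ranges.items():
    args_lo = dict(baseline); args_lo[k] = lo
    args_hi = dict(baseline); args_hi[k] = hi
    det[k] = abs(dose(**args_hi) - dose(**args_lo))

# Probabilistic: simultaneous random sampling, correlation with output.
random.seed(3)
n = 4000
draws = {k: [random.uniform(lo, hi) for _ in range(n)]
         for k, (lo, hi) in ranges.items()}
doses = [dose(draws["ingestion"][i], draws["concentration"][i],
              draws["shielding"][i]) for i in range(n)]

def corr(x, y):
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

prob = {k: abs(corr(v, doses)) for k, v in draws.items()}
rank = lambda d: sorted(d, key=d.get, reverse=True)
print(rank(det), rank(prob))
```

Here the two rankings agree because the baselines sit at the range midpoints and the ranges span the full distributions, matching the two agreement conditions stated in the abstract.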

  8. Sensitivity studies for the weak r process: neutron capture rates

    SciTech Connect

    Surman, R.; Mumpower, M.; Sinclair, R.; Jones, K. L.; Hix, W. R.; McLaughlin, G. C.

    2014-04-15

    Rapid neutron capture nucleosynthesis involves thousands of nuclear species far from stability, whose nuclear properties need to be understood in order to accurately predict nucleosynthetic outcomes. Recently sensitivity studies have provided a deeper understanding of how the r process proceeds and have identified pieces of nuclear data of interest for further experimental or theoretical study. A key result of these studies has been to point out the importance of individual neutron capture rates in setting the final r-process abundance pattern for a ‘main’ (A ∼ 130 peak and above) r process. Here we examine neutron capture in the context of a ‘weak’ r process that forms primarily the A ∼ 80 r-process abundance peak. We identify the astrophysical conditions required to produce this peak region through weak r-processing and point out the neutron capture rates that most strongly influence the final abundance pattern.

  9. A novel approach for modelling vegetation distributions and analysing vegetation sensitivity through trait-climate relationships in China

    PubMed Central

    Yang, Yanzheng; Zhu, Qiuan; Peng, Changhui; Wang, Han; Xue, Wei; Lin, Guanghui; Wen, Zhongming; Chang, Jie; Wang, Meng; Liu, Guobin; Li, Shiqing

    2016-01-01

    Increasing evidence indicates that current dynamic global vegetation models (DGVMs) have suffered from insufficient realism and are difficult to improve, particularly because they are built on plant functional type (PFT) schemes. Therefore, new approaches, such as plant trait-based methods, are urgently needed to replace PFT schemes when predicting the distribution of vegetation and investigating vegetation sensitivity. As an important direction towards constructing next-generation DGVMs based on plant functional traits, we propose a novel approach for modelling vegetation distributions and analysing vegetation sensitivity through trait-climate relationships in China. The results demonstrated that a Gaussian mixture model (GMM) trained with a LMA-Nmass-LAI data combination yielded an accuracy of 72.82% in simulating vegetation distribution, providing more detailed parameter information regarding community structures and ecosystem functions. The new approach also performed well in analyses of vegetation sensitivity to different climatic scenarios. Although the trait-climate relationship is not the only candidate useful for predicting vegetation distributions and analysing climatic sensitivity, it sheds new light on the development of next-generation trait-based DGVMs. PMID:27052108

  10. A novel approach for modelling vegetation distributions and analysing vegetation sensitivity through trait-climate relationships in China

    NASA Astrophysics Data System (ADS)

    Yang, Yanzheng; Zhu, Qiuan; Peng, Changhui; Wang, Han; Xue, Wei; Lin, Guanghui; Wen, Zhongming; Chang, Jie; Wang, Meng; Liu, Guobin; Li, Shiqing

    2016-04-01

    Increasing evidence indicates that current dynamic global vegetation models (DGVMs) have suffered from insufficient realism and are difficult to improve, particularly because they are built on plant functional type (PFT) schemes. Therefore, new approaches, such as plant trait-based methods, are urgently needed to replace PFT schemes when predicting the distribution of vegetation and investigating vegetation sensitivity. As an important direction towards constructing next-generation DGVMs based on plant functional traits, we propose a novel approach for modelling vegetation distributions and analysing vegetation sensitivity through trait-climate relationships in China. The results demonstrated that a Gaussian mixture model (GMM) trained with a LMA-Nmass-LAI data combination yielded an accuracy of 72.82% in simulating vegetation distribution, providing more detailed parameter information regarding community structures and ecosystem functions. The new approach also performed well in analyses of vegetation sensitivity to different climatic scenarios. Although the trait-climate relationship is not the only candidate useful for predicting vegetation distributions and analysing climatic sensitivity, it sheds new light on the development of next-generation trait-based DGVMs.

  11. A novel approach for modelling vegetation distributions and analysing vegetation sensitivity through trait-climate relationships in China.

    PubMed

    Yang, Yanzheng; Zhu, Qiuan; Peng, Changhui; Wang, Han; Xue, Wei; Lin, Guanghui; Wen, Zhongming; Chang, Jie; Wang, Meng; Liu, Guobin; Li, Shiqing

    2016-01-01

    Increasing evidence indicates that current dynamic global vegetation models (DGVMs) have suffered from insufficient realism and are difficult to improve, particularly because they are built on plant functional type (PFT) schemes. Therefore, new approaches, such as plant trait-based methods, are urgently needed to replace PFT schemes when predicting the distribution of vegetation and investigating vegetation sensitivity. As an important direction towards constructing next-generation DGVMs based on plant functional traits, we propose a novel approach for modelling vegetation distributions and analysing vegetation sensitivity through trait-climate relationships in China. The results demonstrated that a Gaussian mixture model (GMM) trained with a LMA-Nmass-LAI data combination yielded an accuracy of 72.82% in simulating vegetation distribution, providing more detailed parameter information regarding community structures and ecosystem functions. The new approach also performed well in analyses of vegetation sensitivity to different climatic scenarios. Although the trait-climate relationship is not the only candidate useful for predicting vegetation distributions and analysing climatic sensitivity, it sheds new light on the development of next-generation trait-based DGVMs. PMID:27052108
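The classification step behind such a trait-based GMM can be sketched as follows: each vegetation type is a Gaussian component over a trait vector (LMA, Nmass, LAI), and a site is assigned to the component with the highest log-density. The component means and covariances below are invented values, not fitted parameters from the paper:

```python
import numpy as np

components = {  # name: (mean trait vector, diagonal covariance) -- invented
    "forest":    (np.array([80.0, 2.0, 5.0]), np.array([100.0, 0.2, 1.0])),
    "grassland": (np.array([50.0, 3.0, 2.0]), np.array([60.0, 0.3, 0.5])),
    "desert":    (np.array([150.0, 1.0, 0.5]), np.array([400.0, 0.1, 0.1])),
}

def log_density(x, mean, var):
    # log of a diagonal-covariance multivariate normal density
    return -0.5 * float(np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var))

def classify(traits):
    scores = {name: log_density(traits, m, v)
              for name, (m, v) in components.items()}
    return max(scores, key=scores.get)

print(classify(np.array([78.0, 2.1, 4.5])))   # forest-like traits
print(classify(np.array([145.0, 0.9, 0.6])))  # desert-like traits
```

In the full approach the component parameters are fitted to observed trait-climate data rather than fixed by hand, and climatic sensitivity is explored by shifting the trait inputs under different climate scenarios.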

  12. Grid and design variables sensitivity analyses for NACA four-digit wing-sections

    NASA Technical Reports Server (NTRS)

    Sadrehaghighi, Ideen; Smith, Robert E.; Tiwari, Surendra N.

    1993-01-01

    Two distinct parameterization procedures are developed for investigating the grid sensitivity with respect to design parameters of a wing-section example. The first procedure is based on traditional (physical) relations defining NACA four-digit wing-sections. The second advocates a novel (geometrical) parameterization using spline functions such as NURBS (Non-Uniform Rational B-Splines) for defining the wing-section geometry. An interactive algebraic grid generation technique, known as Hermite Cubic Interpolation, is employed to generate C-type grids around wing-sections. The grid sensitivity of the domain with respect to design and grid parameters has been obtained by direct differentiation of the grid equations. A hybrid approach is proposed for more geometrically complex configurations. A comparison of the sensitivity coefficients with those obtained using a finite-difference approach has been made to verify the feasibility of the approach. The aerodynamic sensitivity coefficients are obtained using the compressible two-dimensional thin-layer Navier-Stokes equations.
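
The verification step described above, comparing directly differentiated sensitivity coefficients against finite differences, can be sketched as follows; a quadratic Bézier curve stands in for the NURBS wing-section parameterization, and all control-point values are illustrative.

```python
def bezier_point(t, ctrl):
    """Quadratic Bezier curve point (a stand-in for a spline wing-section
    parameterization; the full NURBS machinery is omitted for brevity)."""
    b0, b1, b2 = ctrl
    return ((1 - t) ** 2 * b0[0] + 2 * (1 - t) * t * b1[0] + t ** 2 * b2[0],
            (1 - t) ** 2 * b0[1] + 2 * (1 - t) * t * b1[1] + t ** 2 * b2[1])

def dpoint_dctrl1y(t, ctrl):
    """Direct differentiation: derivative of the curve point's y-coordinate
    with respect to the y-coordinate of the middle control point."""
    return 2 * (1 - t) * t

def fd_check(t, ctrl, h=1e-6):
    """Central finite-difference approximation of the same sensitivity."""
    up = [list(p) for p in ctrl]; dn = [list(p) for p in ctrl]
    up[1][1] += h; dn[1][1] -= h
    return (bezier_point(t, up)[1] - bezier_point(t, dn)[1]) / (2 * h)

ctrl = [(0.0, 0.0), (0.5, 0.1), (1.0, 0.0)]
print(dpoint_dctrl1y(0.3, ctrl), fd_check(0.3, ctrl))  # both ~ 0.42
```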

  13. Genome-wide functional genomic and transcriptomic analyses for genes regulating sensitivity to vorinostat

    PubMed Central

    Falkenberg, Katrina J; Gould, Cathryn M; Johnstone, Ricky W; Simpson, Kaylene J

    2014-01-01

    Identification of mechanisms of resistance to histone deacetylase inhibitors, such as vorinostat, is important in order to utilise these anticancer compounds more efficiently in the clinic. Here, we present a dataset containing multiple tiers of stringent siRNA screening for genes that when knocked down conferred sensitivity to vorinostat-induced cell death. We also present data from a miRNA overexpression screen for miRNAs contributing to vorinostat sensitivity. Furthermore, we provide transcriptomic analysis using massively parallel sequencing upon knockdown of 14 validated vorinostat-resistance genes. These datasets are suitable for analysis of genes and miRNAs involved in cell death in the presence and absence of vorinostat as well as computational biology approaches to identify gene regulatory networks. PMID:25977774

  14. Dye-sensitized solar cells using laser processing techniques

    NASA Astrophysics Data System (ADS)

    Kim, Heungsoo; Pique, Alberto; Kushto, Gary P.; Auyeung, Raymond C. Y.; Lee, S. H.; Arnold, Craig B.; Kafafi, Zakia H.

    2004-07-01

    Laser processing techniques, such as laser direct-write (LDW) and laser sintering, have been used to deposit mesoporous nanocrystalline TiO2 (nc-TiO2) films for use in dye-sensitized solar cells. LDW enables the fabrication of conformal structures containing metals, ceramics, polymers and composites on rigid and flexible substrates without the use of masks or additional patterning techniques. The transferred material maintains a porous, high surface area structure that is ideally suited for dye-sensitized solar cells. In this experiment, a pulsed UV laser (355nm) is used to forward transfer a paste of commercial TiO2 nanopowder (P25) onto transparent conducting electrodes on flexible polyethyleneterephthalate (PET) and rigid glass substrates. For the cells based on flexible PET substrates, the transferred TiO2 layers were sintered using an in-situ laser to improve electron paths without damaging PET substrates. In this paper, we demonstrate the use of laser processing techniques to produce nc-TiO2 films (~10 μm thickness) on glass for use in dye-sensitized solar cells (Voc = 690 mV, Jsc = 8.7 mA/cm2, ff = 0.67, η = 4.0 % at 100 mW/cm2). This work was supported by the Office of Naval Research.

  15. Control of a mechanical aeration process via topological sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Abdelwahed, M.; Hassine, M.; Masmoudi, M.

    2009-06-01

    The topological sensitivity analysis method gives the variation of a criterion with respect to the creation of a small hole in the domain. In this paper, we use this method to control the mechanical aeration process in eutrophic lakes. A simplified model based on incompressible Navier-Stokes equations is used, only considering the liquid phase, which is the dominant one. The injected air is taken into account through local boundary conditions for the velocity, on the injector holes. A 3D numerical simulation of the aeration effects is proposed using a mixed finite element method. In order to generate the best motion in the fluid for aeration purposes, the optimization of the injector location is considered. The main idea is to carry out topological sensitivity analysis with respect to the insertion of an injector. Finally, a topological optimization algorithm is proposed and some numerical results, showing the efficiency of our approach, are presented.

  16. Uncertainty and Sensitivity Analyses of a Pebble Bed HTGR Loss of Cooling Event

    DOE PAGES Beta

    Strydom, Gerhard

    2013-01-01

    The Very High Temperature Reactor Methods Development group at the Idaho National Laboratory identified the need for a defensible and systematic uncertainty and sensitivity approach in 2009. This paper summarizes the results of an uncertainty and sensitivity quantification investigation performed with the SUSA code, utilizing the International Atomic Energy Agency CRP 5 Pebble Bed Modular Reactor benchmark and the INL code suite PEBBED-THERMIX. Eight model input parameters were selected for inclusion in this study, and after the input parameter variations and probability density functions were specified, a total of 800 steady state and depressurized loss of forced cooling (DLOFC) transient PEBBED-THERMIX calculations were performed. The six data sets were statistically analyzed to determine the 5% and 95% DLOFC peak fuel temperature tolerance intervals with 95% confidence levels. It was found that the uncertainties in the decay heat and graphite thermal conductivities were the most significant contributors to the propagated DLOFC peak fuel temperature uncertainty. No significant differences were observed between the results of Simple Random Sampling (SRS) or Latin Hypercube Sampling (LHS) data sets, and use of uniform or normal input parameter distributions also did not lead to any significant differences between these data sets.
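
Two of the ingredients above can be sketched compactly: Wilks' order-statistic formula, the usual basis for tolerance limits at 95% confidence, and Latin Hypercube Sampling as the stratified alternative to SRS. This is a generic sketch, not the SUSA implementation.

```python
import random

def wilks_n_one_sided(coverage=0.95, confidence=0.95):
    """Smallest sample size n such that the sample maximum bounds the
    `coverage` quantile with the given confidence (first-order Wilks):
    find the smallest n with 1 - coverage**n >= confidence."""
    n = 1
    while 1.0 - coverage ** n < confidence:
        n += 1
    return n

def latin_hypercube(n, dims, rng=None):
    """n samples in [0,1)^dims with exactly one sample per
    equal-probability stratum in each dimension."""
    rng = rng or random.Random(0)
    cols = []
    for _ in range(dims):
        perm = list(range(n))
        rng.shuffle(perm)
        cols.append([(p + rng.random()) / n for p in perm])
    return [tuple(col[i] for col in cols) for i in range(n)]

print(wilks_n_one_sided())  # 59: the classic one-sided 95/95 sample size
```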

  17. Boundary Sensitivities for Diffusion Processes in Time Dependent Domains

    SciTech Connect

    Costantini, C.; Gobet, E.; El Karoui, N.

    2006-09-15

    We study the sensitivity, with respect to a time dependent domain D_s, of expectations of functionals of a diffusion process stopped at the exit from D_s or normally reflected at the boundary of D_s. We establish a differentiability result and give an explicit expression for the gradient that allows it to be computed by Monte Carlo methods. Applications to optimal stopping problems, the pricing of American options, singular stochastic control, and other problems are discussed.
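
A toy illustration of the quantity being differentiated: the expected exit time of Brownian motion from an interval, estimated by Monte Carlo. With a fixed seed (common random numbers), a finite difference in the barrier would approximate the boundary sensitivity; this naive stand-in is not the paper's explicit gradient expression.

```python
import random

def exit_time_mc(barrier, n=4000, dt=0.01, t_max=5.0, seed=7):
    """Monte Carlo mean of a stopped-diffusion functional: expected exit
    time of standard Brownian motion from (-barrier, barrier), with the
    walk capped at t_max. Euler steps of size dt; the continuous-time
    answer is barrier**2, so discretization adds a small positive bias."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x, t = 0.0, 0.0
        while abs(x) < barrier and t < t_max:
            x += rng.gauss(0.0, dt ** 0.5)
            t += dt
        total += t
    return total / n

print(exit_time_mc(1.0))  # ~ 1.0-1.15 (exact continuous value: 1.0)
```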

  18. Sensitivity studies for the main r process: nuclear masses

    SciTech Connect

    Aprahamian, A.; Mumpower, M.; Bentley, I.; Surman, R.

    2014-04-15

    The site of the rapid neutron capture process (r process) is one of the open challenges in all of physics today. The r process is thought to be responsible for the creation of more than half of all elements beyond iron. The scientific challenges to understanding the origin of the heavy elements beyond iron lie in both the uncertainties associated with astrophysical conditions that are needed to allow an r process to occur and a vast lack of knowledge about the properties of nuclei far from stability. One way is to disentangle the nuclear and astrophysical components of the question. On the nuclear physics side, there is great global competition to access and measure the most exotic nuclei that existing facilities can reach, while simultaneously building new, more powerful accelerators to make even more exotic nuclei. On the astrophysics side, various astrophysical scenarios for the production of the heaviest elements have been proposed but open questions remain. This paper reports on a sensitivity study of the r process to determine the most crucial nuclear masses to measure using an r-process simulation code, several mass models (FRDM, Duflo-Zuker, and HFB-21), and three potential astrophysical scenarios.
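
Such studies typically rank each nuclear mass by a summary metric of the change it induces in the final abundance pattern, e.g. F = 100 × Σ_A |X(A) − X_baseline(A)|; the sketch below uses made-up mass fractions, and the metric form is an assumption rather than a quotation from this paper.

```python
def abundance_sensitivity(baseline, varied):
    """Summary sensitivity measure common in r-process mass studies:
    F = 100 * sum over mass number A of |X_varied(A) - X_baseline(A)|,
    where X(A) are final mass fractions."""
    return 100.0 * sum(abs(varied[a] - baseline[a]) for a in baseline)

# Illustrative mass fractions at three mass numbers (made-up values):
base   = {120: 0.40, 130: 0.35, 195: 0.25}
varied = {120: 0.38, 130: 0.36, 195: 0.26}
print(abundance_sensitivity(base, varied))  # ~ 4.0
```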

  19. Sensitivity and uncertainty analyses for thermo-hydraulic calculation of research reactor

    SciTech Connect

    Hartini, Entin; Andiwijayakusuma, Dinan; Isnaeni, Muh Darwis

    2013-09-09

    The sensitivity and uncertainty analysis of input parameters for the thermohydraulic calculation of a research reactor has been successfully performed in this work. The uncertainty analysis was carried out on the input parameters for the sub-channel thermohydraulic calculation using the COOLOD-N code. The input parameters include the radial peaking factor, the increase in bulk coolant temperature, the heat flux factor, and the temperature increases in the cladding and fuel meat for a research reactor utilizing plate fuel elements. Input uncertainties of 1% - 4% were used in the nominal power calculation. The bubble detachment parameters were computed for the S ratio (the safety margin against the onset of flow instability), which was used to determine the safety level in line with the design of 'Reactor Serba Guna-G. A. Siwabessy' (RSG-GA Siwabessy). It was concluded from the calculation results that input uncertainties greater than 3% fall beyond the safety margin of reactor operation.

  20. Sensitivity and uncertainty analyses for thermo-hydraulic calculation of research reactor

    NASA Astrophysics Data System (ADS)

    Hartini, Entin; Andiwijayakusuma, Dinan; Isnaeni, Muh Darwis

    2013-09-01

    The sensitivity and uncertainty analysis of input parameters for the thermohydraulic calculation of a research reactor has been successfully performed in this work. The uncertainty analysis was carried out on the input parameters for the sub-channel thermohydraulic calculation using the COOLOD-N code. The input parameters include the radial peaking factor, the increase in bulk coolant temperature, the heat flux factor, and the temperature increases in the cladding and fuel meat for a research reactor utilizing plate fuel elements. Input uncertainties of 1% - 4% were used in the nominal power calculation. The bubble detachment parameters were computed for the S ratio (the safety margin against the onset of flow instability), which was used to determine the safety level in line with the design of "Reactor Serba Guna-G. A. Siwabessy" (RSG-GA Siwabessy). It was concluded from the calculation results that input uncertainties greater than 3% fall beyond the safety margin of reactor operation.

  1. Chronic Beryllium Disease and Sensitization at a Beryllium Processing Facility

    PubMed Central

    Rosenman, Kenneth; Hertzberg, Vicki; Rice, Carol; Reilly, Mary Jo; Aronchick, Judith; Parker, John E.; Regovich, Jackie; Rossman, Milton

    2005-01-01

    We conducted a medical screening for beryllium disease of 577 former workers from a beryllium processing facility. The screening included a medical and work history questionnaire, a chest radiograph, and blood lymphocyte proliferation testing for beryllium. A task exposure and a job exposure matrix were constructed to examine the association between exposure to beryllium and the development of beryllium disease. More than 90% of the cohort completed the questionnaire, and 74% completed the blood and radiograph component of the screening. Forty-four (7.6%) individuals had definite or probable chronic beryllium disease (CBD), and another 40 (7.0%) were sensitized to beryllium. The prevalence of CBD and sensitization in our cohort was greater than the prevalence reported in studies of other beryllium-exposed cohorts. Various exposure measures evaluated included duration; first decade worked; last decade worked; cumulative, mean, and highest job; and highest task exposure to beryllium (to both soluble and nonsoluble forms). Soluble cumulative and mean exposure levels were lower in individuals with CBD. Sensitized individuals had shorter duration of exposure, began work later, last worked longer ago, and had lower cumulative and peak exposures and lower nonsoluble cumulative and mean exposures. A possible explanation for the exposure–response findings of our study may be an interaction between genetic predisposition and a decreased permanence of soluble beryllium in the body. Both CBD and sensitization occurred in former workers whose mean daily working lifetime average exposures were lower than the current allowable Occupational Safety and Health Administration workplace air level of 2 μg/m3 and the Department of Energy guideline of 0.2 μg/m3. PMID:16203248

  2. Demonstrations of safeguards process monitoring sensitivities. Consolidated Fuel Reprocessing Program

    SciTech Connect

    Ehinger, M.H.; Wachter, J.W.

    1986-09-01

    Can process-monitoring information be incorporated into safeguards tests? What level of sensitivity to removals of materials can be achieved with process-monitoring tests? These questions are being answered by a series of tests in US facilities. These tests involve full-scale facilities that simulate operating reprocessing plant conditions with natural or depleted uranium solutions as surrogate feed materials. Safeguards systems are in place to detect loss or unauthorized removals of solutions. As part of the tests, actual removals of material from the operating facilities are made. Removals have ranged from several kilograms down to a few hundred grams of uranium. For purposes of the tests, uranium is considered to be plutonium and is the focus of safeguards concerns.

  3. Using ambulatory care sensitive hospitalisations to analyse the effectiveness of primary care services in Mexico.

    PubMed

    Lugo-Palacios, David G; Cairns, John

    2015-11-01

    Ambulatory care sensitive hospitalisations (ACSH) have been widely used to study the quality and effectiveness of primary care. Using data from 248 general hospitals in Mexico during 2001-2011, we identify 926,769 ACSHs in 188 health jurisdictions before and during the health insurance expansion that took place in this period, and estimate a fixed effects model to explain the association of the jurisdiction ACSH rate with patient and community factors. The national ACSH rate increased by 50%, but trends and magnitude varied at the jurisdiction and state level. We find strong associations of the ACSH rate with socioeconomic conditions, health care supply and health insurance coverage even after controlling for potential endogeneity in the rolling out of the insurance programme. We argue that the traditional focus on the increase/decrease of the ACSH rate might not be a valid indicator to assess the effectiveness of primary care in a health insurance expansion setting, but that the ACSH rate is useful when compared between and within states once the variation in insurance coverage is taken into account, as it allows the identification of differences in the provision of primary care. The high heterogeneity found in the ACSH rates suggests important state and jurisdiction differences in the quality and effectiveness of primary care in Mexico. PMID:26387080

  4. Water sensitivity, antimicrobial, and physicochemical analyses of edible films based on HPMC and/or chitosan.

    PubMed

    Sebti, Issam; Chollet, Emilie; Degraeve, Pascal; Noel, Claude; Peyrol, Eric

    2007-02-01

    Several properties of chitosan films, with or without hydroxypropylmethylcellulose polymer (HPMC), and of HPMC films, with or without nisin and/or milk fat, were studied. Nisin addition at a level of 250 microg mL-1, and likewise chitosan at 1% (w/v) concentration, was efficient in totally inhibiting the food-deterioration microorganisms Aspergillus niger and Kocuria rhizophila. HPMC and chitosan films were transparent, whereas nisin and/or fat incorporation induced a 2-fold increase in the lightness parameter and consequently produced whiter films. Measurements of tensile strength, as well as ultimate elongation, showed that the initial chitosan and HPMC films were elastic and flexible. High thermal treatments and additive incorporation induced less elastic and more plastic films. Water vapor transmission as well as total water desorption rates suggested that chitosan films were slightly sensitive to water. Water transfer was decreased by <60% as compared with other biopolymer films. Regarding its hydrophobic property, the capacity of fat to improve the film water barrier was very limited. PMID:17263462

  5. A coding scheme for analysing problem-solving processes of first-year engineering students

    NASA Astrophysics Data System (ADS)

    Grigg, Sarah J.; Benson, Lisa C.

    2014-11-01

    This study describes the development and structure of a coding scheme for analysing solutions to well-structured problems in terms of cognitive processes and problem-solving deficiencies for first-year engineering students. A task analysis approach was used to assess students' problem solutions using the hierarchical structure from a theoretical framework from mathematics research. The coding scheme comprises 54 codes within the categories of knowledge access, knowledge generation, self-management, conceptual errors, mechanical errors, management errors, approach strategies and solution accuracy, and was demonstrated to be both dependable and credible for analysing problems typical of topics in first-year engineering courses. The problem-solving processes were evaluated in terms of time, process elements, errors committed and self-corrected errors. Therefore, problem-solving performance can be analysed in terms of both accuracy and efficiency of processing, pinpointing areas meriting further study from a cognitive perspective, and for documenting processes for research purposes.

  6. Uncertainty and sensitivity analyses of groundwater travel time in a two-dimensional variably-saturated fractured geologic medium

    SciTech Connect

    Gureghian, A.B.; Sagar, B.

    1993-12-31

    This paper presents a method for sensitivity and uncertainty analyses of a hypothetical nuclear waste repository located in a layered and fractured unconfined aquifer. Groundwater travel time (GWTT) has been selected as the performance measure. The repository is located in the unsaturated zone, and the source of aquifer recharge is due solely to steady infiltration impinging uniformly over the surface area to be modeled. The equivalent porous media concept is adopted to model the fractured zone in the flow field. The evaluation of pathlines and travel times of water particles in the flow domain is based on a Lagrangian concept. The Bubnov-Galerkin finite-element method is employed to solve the primary (non-linear) flow problem, the equation of motion, and the adjoint sensitivity equations. The matrix equations are solved with a Gaussian elimination technique using sparse matrix solvers. The sensitivity measure corresponds to the first derivative of the performance measure (GWTT) with respect to the parameters of the system. The uncertainty in the computed GWTT is quantified by using the first-order second-moment (FOSM) approach, a probabilistic method that relies on the mean and variance of the system parameters and the sensitivity of the performance measure with respect to these parameters. A test case corresponding to a layered and fractured, unconfined aquifer is then presented to illustrate the various features of the method.
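
For independent parameters, the FOSM step described above reduces to Var[T] ≈ Σ (∂T/∂p_i)² Var[p_i]; a minimal sketch with hypothetical sensitivities and standard deviations (not values from the study):

```python
import math

def fosm_variance(sensitivities, stdevs):
    """First-order second-moment (FOSM) propagation for independent
    parameters: Var[T] ~= sum over i of (dT/dp_i)^2 * Var[p_i]."""
    return sum((s * sd) ** 2 for s, sd in zip(sensitivities, stdevs))

# Hypothetical example: travel-time sensitivities to recharge rate and
# hydraulic conductivity, with their standard deviations (illustrative).
sens = [-3.0e3, 1.5e2]   # dT/dp_i for each parameter
sds  = [0.01, 0.2]       # parameter standard deviations
var_T = fosm_variance(sens, sds)
print(math.sqrt(var_T))  # standard deviation of travel time, ~ 42.4
```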

  7. Sensitivity analyses of the theoretical equations used in point velocity probe (PVP) data interpretation.

    PubMed

    Devlin, J F

    2016-09-01

    Point velocity probes (PVPs) are dedicated, relatively low-cost instruments for measuring groundwater speed and direction in non-cohesive, unconsolidated porous media aquifers. They have been used to evaluate groundwater velocity in groundwater treatment zones, glacial outwash aquifers, and within streambanks to assist with the assessment of groundwater-surfaced water exchanges. Empirical evidence of acceptable levels of uncertainty for these applications has come from both laboratory and field trials. This work extends previous assessments of the method by examining the inherent uncertainties arising from the equations used to interpret PVP datasets. PVPs operate by sensing tracer movement on the probe surface, producing apparent velocities from two detectors. Sensitivity equations were developed for the estimation of groundwater speed, v∞, and flow direction, α, as a function of the apparent velocities of water on the probe surface and the α angle itself. The resulting estimations of measurement uncertainty, which are inherent limitations of the method, apply to idealized, homogeneous porous media, which on the local scale of a PVP measurement may be approached. This work does not address experimental sources of error that may arise from the presence of cohesive sediments that prevent collapse around the probe, the effects of centimeter-scale aquifer heterogeneities, or other complications related to borehole integrity or operator error, which could greatly exceed the inherent sources of error. However, the findings reported here have been shown to be in agreement with the previous empirical work. On this basis, properly installed and functioning PVPs should be expected to produce estimates of groundwater speed with uncertainties less than ±15%, with the most accurate values of groundwater speed expected when horizontal flow is incident on the probe surface at about 50° from the active injection port. Directions can be measured with uncertainties less than

  8. Sensitivity analyses of the theoretical equations used in point velocity probe (PVP) data interpretation

    NASA Astrophysics Data System (ADS)

    Devlin, J. F.

    2016-09-01

    Point velocity probes (PVPs) are dedicated, relatively low-cost instruments for measuring groundwater speed and direction in non-cohesive, unconsolidated porous media aquifers. They have been used to evaluate groundwater velocity in groundwater treatment zones, glacial outwash aquifers, and within streambanks to assist with the assessment of groundwater-surfaced water exchanges. Empirical evidence of acceptable levels of uncertainty for these applications has come from both laboratory and field trials. This work extends previous assessments of the method by examining the inherent uncertainties arising from the equations used to interpret PVP datasets. PVPs operate by sensing tracer movement on the probe surface, producing apparent velocities from two detectors. Sensitivity equations were developed for the estimation of groundwater speed, v∞, and flow direction, α, as a function of the apparent velocities of water on the probe surface and the α angle itself. The resulting estimations of measurement uncertainty, which are inherent limitations of the method, apply to idealized, homogeneous porous media, which on the local scale of a PVP measurement may be approached. This work does not address experimental sources of error that may arise from the presence of cohesive sediments that prevent collapse around the probe, the effects of centimeter-scale aquifer heterogeneities, or other complications related to borehole integrity or operator error, which could greatly exceed the inherent sources of error. However, the findings reported here have been shown to be in agreement with the previous empirical work. On this basis, properly installed and functioning PVPs should be expected to produce estimates of groundwater speed with uncertainties less than ± 15%, with the most accurate values of groundwater speed expected when horizontal flow is incident on the probe surface at about 50° from the active injection port. Directions can be measured with uncertainties less than
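
The published PVP sensitivity equations are not reproduced here, but the generic first-order propagation they build on can be sketched: numeric gradients of any estimator combining the two apparent detector velocities. The geometric-mean "speed" estimator below is purely hypothetical, chosen only to make the sketch concrete.

```python
def numeric_grad(f, x, h=1e-6):
    """Central-difference gradient of f at point x."""
    g = []
    for i in range(len(x)):
        xp = list(x); xm = list(x)
        xp[i] += h; xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def propagated_sd(f, x, sds):
    """First-order uncertainty of f for independent inputs with the
    given standard deviations: sqrt(sum (df/dx_i * sd_i)^2)."""
    g = numeric_grad(f, x)
    return sum((gi * s) ** 2 for gi, s in zip(g, sds)) ** 0.5

# Hypothetical speed estimator from two apparent detector velocities
# (NOT the published PVP equations): geometric mean of v1 and v2.
speed = lambda v: (v[0] * v[1]) ** 0.5
print(propagated_sd(speed, [1.0, 1.0], [0.1, 0.1]))  # ~ 0.071
```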

  9. Sensitivity of rainfall-runoff processes in the Hydrological Open Air Laboratory

    NASA Astrophysics Data System (ADS)

    Széles, Borbála; Parajka, Juraj; Blöschl, Günter; Oismüller, Markus; Hajnal, Géza

    2016-04-01

    The objective of the present study was to simulate the rainfall response and analyse the sensitivity of rainfall-runoff processes of the Hydrological Open Air Laboratory (HOAL) in Petzenkirchen, a small experimental watershed (66 ha) located in the western part of Lower Austria and dominated by agricultural land use. Due to the extensive monitoring network in the HOAL, the spatial and temporal heterogeneity of hydro-meteorological elements are exceptionally well represented on the catchment scale. The study aimed to exploit the facilities of the available database collected by innovative sensing techniques to advance the understanding of various rainfall-runoff processes. The TUWmodel, a lumped, conceptual hydrological model, following the structure of the HBV model was implemented on the catchment. In addition to the surface runoff at the catchment outlet, several different runoff generation mechanisms (tile drainage flow, saturation excess runoff from wetlands and groundwater discharge from springs) were also simulated, which gave an opportunity to describe the spatial distribution of model parameters in the study area. This helped to proceed from the original lumped model concept towards a spatially distributed one. The other focus of this work was to distinguish the dominant model parameters from the less sensitive ones for each tributary with different runoff type by applying two different sensitivity analysis methods, the simple local perturbation and the global Latin-Hypercube-One-Factor-At-a-Time (LH-OAT) tools. Moreover, the impacts of modifying the initial parameters of the LH-OAT method and the applied objective functions were also taken into consideration. The results and findings of the model and sensitivity analyses were summarized and future development perspectives were outlined. Key words: spatial heterogeneity of rainfall-runoff mechanisms, sensitivity analysis, lumped conceptual hydrological model

  10. Sobol' sensitivity analysis of NAPL-contaminated aquifer remediation process based on multiple surrogates

    NASA Astrophysics Data System (ADS)

    Luo, Jiannan; Lu, Wenxi

    2014-06-01

    Sobol' sensitivity analyses based on different surrogates were performed on a trichloroethylene (TCE)-contaminated aquifer to assess the sensitivity of remediation efficiency to the design variables: remediation duration, surfactant concentration, and injection rates at four wells. First, surrogate models of a multi-phase flow simulation model were constructed by applying radial basis function artificial neural network (RBFANN) and Kriging methods, and the two models were then compared. Based on the developed surrogate models, the Sobol' method was used to calculate the sensitivity indices of the design variables that affect the remediation efficiency. The coefficient of determination (R2) and the mean square error (MSE) of these two surrogate models demonstrated that both models had acceptable approximation accuracy; furthermore, the approximation accuracy of the Kriging model was slightly better than that of the RBFANN model. Sobol' sensitivity analysis results demonstrated that the remediation duration was the most important variable influencing remediation efficiency, followed by the rates of injection at wells 1 and 3, while the rates of injection at wells 2 and 4 and the surfactant concentration had negligible influence on remediation efficiency. In addition, high-order sensitivity indices were all smaller than 0.01, which indicates that the interaction effects of these six factors were practically insignificant. The proposed surrogate-based Sobol' sensitivity analysis is an effective tool for calculating sensitivity indices, because it shows the relative contribution of the design variables (individually and in interaction) to the output performance variability with a limited number of runs of a computationally expensive simulation model. The sensitivity analysis results lay a foundation for optimization of the groundwater remediation process.
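
A minimal pick-and-freeze sketch of first-order Sobol' indices (a Saltelli-style Monte Carlo estimator on the unit hypercube); the study evaluated surrogate models in place of the cheap test function used here.

```python
import random

def sobol_first_order(f, dims, n=20000, seed=1):
    """First-order Sobol' indices of f on [0,1)^dims via the
    pick-and-freeze estimator S_i = E[y_B * (y_AB_i - y_A)] / Var(y),
    where AB_i is matrix A with column i replaced from B."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dims)] for _ in range(n)]
    B = [[rng.random() for _ in range(dims)] for _ in range(n)]
    yA = [f(r) for r in A]
    yB = [f(r) for r in B]
    mu = sum(yA) / n
    var = sum((y - mu) ** 2 for y in yA) / n
    S = []
    for i in range(dims):
        yABi = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        S.append(sum(yb * (yab - ya)
                     for yb, ya, yab in zip(yB, yA, yABi)) / n / var)
    return S

# Test model Y = X1 + 2*X2: the analytic indices are 0.2 and 0.8.
print(sobol_first_order(lambda x: x[0] + 2 * x[1], 2))  # ~ [0.2, 0.8]
```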

  11. Descriptive and sensitivity analyses of WATBALI: A dynamic soil water model

    NASA Technical Reports Server (NTRS)

    Hildreth, W. W. (Principal Investigator)

    1981-01-01

    A soil water computer model that uses the IBM Continuous System Modeling Program III to solve the dynamic equations representing the soil, plant, and atmospheric physical or physiological processes considered is presented and discussed. Using values describing the soil-plant-atmosphere characteristics, the model predicts evaporation, transpiration, drainage, and soil water profile changes from an initial soil water profile and daily meteorological data. The model characteristics are described, along with the simulations performed to determine the nature of the response to controlled variations in the input; the results of the simulations are included, and a change that makes the response of the model more closely represent the observed characteristics of evapotranspiration and profile changes for dry soil conditions is examined.
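
A toy single-bucket water balance conveys the flavor of such a model, though WATBALI's CSMP III formulation is far more detailed; the ET closure and all values below are illustrative, not taken from the report.

```python
def step_bucket(storage, rain, pet, capacity):
    """One daily step of a minimal bucket soil-water balance (a toy
    sketch, not the WATBALI formulation): actual evapotranspiration
    scales with relative storage, and water above capacity drains.
    Returns (new storage, actual ET, drainage), all in mm."""
    et = pet * min(1.0, storage / capacity)   # supply-limited ET
    storage = storage + rain - et             # add rain, remove ET
    drainage = max(0.0, storage - capacity)   # excess leaves as drainage
    return storage - drainage, et, drainage

print(step_bucket(50.0, 10.0, 5.0, 100.0))  # dry-ish day: ET is halved
```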

  12. Sensitivity and uncertainty analyses of unsaturated flow travel time in the CHnz unit of Yucca Mountain, Nevada

    SciTech Connect

    Nichols, W.E.; Freshley, M.D.

    1991-10-01

    This report documents the results of sensitivity and uncertainty analyses conducted to improve understanding of unsaturated zone ground-water travel time distribution at Yucca Mountain, Nevada. The US Department of Energy (DOE) is currently performing detailed studies at Yucca Mountain to determine its suitability as a host for a geologic repository for the containment of high-level nuclear wastes. As part of these studies, DOE is conducting a series of Performance Assessment Calculational Exercises, referred to as the PACE problems. The work documented in this report represents a part of the PACE-90 problems that addresses the effects of natural barriers of the site that will stop or impede the long-term movement of radionuclides from the potential repository to the accessible environment. In particular, analyses described in this report were designed to investigate the sensitivity of the ground-water travel time distribution to different input parameters and the impact of uncertainty associated with those input parameters. Five input parameters were investigated in this study: recharge rate, saturated hydraulic conductivity, matrix porosity, and two curve-fitting parameters used for the van Genuchten relations to quantify the unsaturated moisture-retention and hydraulic characteristics of the matrix. 23 refs., 20 figs., 10 tabs.
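
The van Genuchten retention relation whose two curve-fitting parameters (α and n) were varied can be written θ(h) = θr + (θs − θr)/[1 + (αh)^n]^m with m = 1 − 1/n; a sketch with illustrative parameter values (not Yucca Mountain data):

```python
def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """Volumetric water content at suction head h (h >= 0, units matching
    1/alpha) from the van Genuchten (1980) retention relation,
    with the usual restriction m = 1 - 1/n."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

# Illustrative parameters: residual/saturated content 0.05/0.40,
# alpha = 0.01 cm^-1, n = 2 (hypothetical, not site values).
print(van_genuchten_theta(100.0, 0.05, 0.40, 0.01, 2.0))  # ~ 0.297
```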

  13. Face-Sensitive Processes One Hundred Milliseconds after Picture Onset

    PubMed Central

    Dering, Benjamin; Martin, Clara D.; Moro, Sancho; Pegna, Alan J.; Thierry, Guillaume

    2011-01-01

    The human face is the most studied object category in visual neuroscience. In a quest for markers of face processing, event-related potential (ERP) studies have debated whether two peaks of activity – P1 and N170 – are category-selective. Whilst most studies have used photographs of unaltered images of faces, others have used cropped faces in an attempt to reduce the influence of features surrounding the “face–object” sensu stricto. However, results from studies comparing cropped faces with unaltered objects from other categories are inconsistent with results from studies comparing whole faces and objects. Here, we recorded ERPs elicited by full front views of faces and cars, either unaltered or cropped. We found that cropping artificially enhanced the N170 whereas it did not significantly modulate P1. In a second experiment, we compared faces and butterflies, either unaltered or cropped, matched for size and luminance across conditions, and within a narrow contrast bracket. Results of Experiment 2 replicated the main findings of Experiment 1. We then used face–car morphs in a third experiment to manipulate the perceived face-likeness of stimuli (100% face, 70% face and 30% car, 30% face and 70% car, or 100% car) and the N170 failed to differentiate between faces and cars. Critically, in all three experiments, P1 amplitude was modulated in a face-sensitive fashion independent of cropping or morphing. Therefore, P1 is a reliable event sensitive to face processing as early as 100 ms after picture onset. PMID:21954382

  14. Comparative analyses of fungicide sensitivity and SSR marker variations indicate a low risk of developing azoxystrobin resistance in Phytophthora infestans.

    PubMed

    Qin, Chun-Fang; He, Meng-Han; Chen, Feng-Ping; Zhu, Wen; Yang, Li-Na; Wu, E-Jiao; Guo, Zheng-Liang; Shang, Li-Ping; Zhan, Jiasui

    2016-01-01

    Knowledge of the evolution of fungicide resistance is important in securing sustainable disease management in agricultural systems. In this study, we analyzed and compared the spatial distribution of genetic variation in azoxystrobin sensitivity and SSR markers in 140 Phytophthora infestans isolates sampled from seven geographic locations in China. Sensitivity to azoxystrobin and its genetic variation in the pathogen populations was measured by the relative growth rate (RGR) at four fungicide concentrations and determination of the effective concentration for 50% inhibition (EC50). We found that all isolates in the current study were sensitive to azoxystrobin and their EC50 was similar to that detected from a European population about 20 years ago, suggesting the risk of developing azoxystrobin resistance in P. infestans populations is low. Further analyses indicate that reduced genetic variation and high fitness cost in resistant mutations are the likely causes for the low evolutionary likelihood of developing azoxystrobin resistance in the pathogen. We also found a negative correlation between azoxystrobin tolerance in P. infestans populations and the mean annual temperature of collection sites, suggesting that global warming may increase the efficiency of using the fungicide to control the late blight. PMID:26853908
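The EC50 determination described in this record can be sketched as a simple log-linear interpolation of the relative growth rate across fungicide concentrations. The doses and responses below are hypothetical; a real analysis would typically fit a full dose-response model instead.

```python
import numpy as np

def ec50_from_rgr(concentrations, rgr):
    """Estimate EC50 (50% inhibition) by log-linear interpolation of the
    relative growth rate (RGR, 1.0 = uninhibited growth) across doses."""
    logc = np.log10(concentrations)
    # find the interval where RGR crosses 0.5 and interpolate in log-dose
    for i in range(len(rgr) - 1):
        if rgr[i] >= 0.5 >= rgr[i + 1]:
            frac = (rgr[i] - 0.5) / (rgr[i] - rgr[i + 1])
            return 10 ** (logc[i] + frac * (logc[i + 1] - logc[i]))
    raise ValueError("RGR does not cross 50% inhibition in the tested range")

conc = np.array([0.01, 0.1, 1.0, 10.0])    # hypothetical concentrations
rgr = np.array([0.95, 0.80, 0.30, 0.05])   # hypothetical growth responses
ec50 = ec50_from_rgr(conc, rgr)
```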

  15. Sensitivity of Middle Atmospheric Analyses to the Representation of Gravity-Wave Drag in the DAO's Data Assimilation System

    NASA Technical Reports Server (NTRS)

    Li, Shuhua; Chern, Jiundar; Joiner, Joanna; Lin, Shian-Jiann; Pawson, Steven; daSilva, Arlindo; Atlas, Robert (Technical Monitor)

    2002-01-01

    The damping of mesoscale gravity waves has important effects on the global circulation, structure, and composition of the atmosphere. A number of assimilation and forecast experiments have been conducted to examine the sensitivity of meteorological analyses and forecasts to the representation of gravity wave impacts in a data assimilation system (DAS). The experiments were conducted with the Finite-Volume (FV) DAS developed at NASA's Data Assimilation Office (DAO). The main purpose of this research is to determine the optimal combination of wave number, phase speed, wavelength, etc. for representing gravity-wave drag (GWD) in FVDAS. The GWD scheme in FVDAS includes a spectrum of waves, as would be forced by topography and transient motions (e.g., convection) in the troposphere. The sensitivity experiments are performed by modifying several parameters, such as the number of waves allowed, their wavelength, the background stress amplitude, etc. The results show that the assimilated fields are very sensitive to the number of gravity waves represented in the system, especially at high latitudes of the middle and upper stratosphere and mesosphere in winter. The analyzed stratopause temperature varies by up to 10 K when the GWD scheme is modified from a multiple-wave scheme (using a stationary wave and waves with phase speeds of 10, 20, 30 and 40 m/s in each direction) to a single, stationary wave. Insight into the realism of the various versions of the GWD scheme can be obtained by examining the "Observation minus Forecast" residuals from the FVDAS.

  16. Cell-Line Selectivity Improves the Predictive Power of Pharmacogenomic Analyses and Helps Identify NADPH as Biomarker for Ferroptosis Sensitivity.

    PubMed

    Shimada, Kenichi; Hayano, Miki; Pagano, Nen C; Stockwell, Brent R

    2016-02-18

    Precision medicine in oncology requires not only identification of cancer-associated mutations but also effective drugs for each cancer genotype, which is still a largely unsolved problem. One approach for the latter challenge has been large-scale testing of small molecules in genetically characterized cell lines. We hypothesized that compounds with high cell-line-selective lethality exhibited consistent results across such pharmacogenomic studies. We analyzed the compound sensitivity data of 6,259 lethal compounds from the NCI-60 project. A total of 2,565 cell-line-selective lethal compounds were identified and grouped into 18 clusters based on their median growth inhibitory GI50 profiles across the 60 cell lines, which were shown to represent distinct mechanisms of action. Further transcriptome analysis revealed a biomarker, NADPH abundance, for predicting sensitivity to ferroptosis-inducing compounds, which we experimentally validated. In summary, incorporating cell-line-selectivity filters improves the predictive power of pharmacogenomic analyses and enables discovery of biomarkers that predict the sensitivity of cells to specific cell death inducers. PMID:26853626

  17. Comparative analyses of fungicide sensitivity and SSR marker variations indicate a low risk of developing azoxystrobin resistance in Phytophthora infestans

    PubMed Central

    Qin, Chun-Fang; He, Meng-Han; Chen, Feng-Ping; Zhu, Wen; Yang, Li-Na; Wu, E-Jiao; Guo, Zheng-Liang; Shang, Li-Ping; Zhan, Jiasui

    2016-01-01

    Knowledge of the evolution of fungicide resistance is important in securing sustainable disease management in agricultural systems. In this study, we analyzed and compared the spatial distribution of genetic variation in azoxystrobin sensitivity and SSR markers in 140 Phytophthora infestans isolates sampled from seven geographic locations in China. Sensitivity to azoxystrobin and its genetic variation in the pathogen populations was measured by the relative growth rate (RGR) at four fungicide concentrations and determination of the effective concentration for 50% inhibition (EC50). We found that all isolates in the current study were sensitive to azoxystrobin and their EC50 was similar to that detected from a European population about 20 years ago, suggesting the risk of developing azoxystrobin resistance in P. infestans populations is low. Further analyses indicate that reduced genetic variation and high fitness cost in resistant mutations are the likely causes for the low evolutionary likelihood of developing azoxystrobin resistance in the pathogen. We also found a negative correlation between azoxystrobin tolerance in P. infestans populations and the mean annual temperature of collection sites, suggesting that global warming may increase the efficiency of using the fungicide to control the late blight. PMID:26853908

  18. Early sensory processing deficits predict sensitivity to distraction in schizophrenia.

    PubMed

    Smucny, Jason; Olincy, Ann; Eichman, Lindsay C; Lyons, Emma; Tregellas, Jason R

    2013-06-01

    Patients with schizophrenia frequently report difficulties paying attention during important tasks, because they are distracted by noise in the environment. The neurobiological mechanism underlying this problem is, however, poorly understood. The goal of this study was to determine if early sensory processing deficits contribute to sensitivity to distracting noise in schizophrenia. To that end, we examined the effect of environmentally relevant distracting noise on performance of an attention task in 19 patients with schizophrenia and 22 age and gender-matched healthy comparison subjects. Using electroencephalography, P50 auditory gating ratios also were measured in the same subjects and were examined for their relationship to noise-induced changes in performance on the attention task. Positive symptoms also were evaluated in patients. Distracting noise caused a greater increase in reaction time in patients, relative to comparison subjects, on the attention task. Higher P50 auditory gating ratios also were observed in patients. P50 gating ratio significantly correlated with the magnitude of noise-induced increase in reaction time. Noise-induced increase in reaction time was associated with delusional thoughts in patients. P50 ratios were associated with delusional thoughts and hallucinations in patients. In conclusion, the observation of noise effects on attention in patients is consistent with subjective reports from patients. The observed relationship between noise effects on reaction time and P50 auditory gating supports the hypothesis that early inhibitory processing deficits may contribute to susceptibility to distraction in the illness. PMID:23590872

  19. Children's Writing Processes when Using Computers: Insights Based on Combining Analyses of Product and Process

    ERIC Educational Resources Information Center

    Gnach, Aleksandra; Wiesner, Esther; Bertschi-Kaufmann, Andrea; Perrin, Daniel

    2007-01-01

    Children and young people are increasingly performing a variety of writing tasks using computers, with word processing programs thus becoming their natural writing environment. The development of keystroke logging programs enables us to track the process of writing, without changing the writing environment for the writers. In the myMoment schools…

  20. Detection of dominant modelled nitrate processes with a high temporally resolved parameter sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Haas, Marcelo; Guse, Björn; Pfannerstill, Matthias; Fohrer, Nicola

    2015-04-01

    River systems are impacted by nutrient inputs from different sources in the landscape. The input of nitrate from agricultural areas into river systems is related to numerous processes which occur simultaneously and continuously influence each other. These complex nitrate processes are represented in eco-hydrological models. To obtain reliable future predictions of nitrate concentrations in rivers, the nitrogen cycle needs to be reproduced accurately in these models. For complex research questions dealing with nitrate impacts, it is thus essential to better understand the nitrate process dynamics in models and to reduce the uncertainties in water quality predictions. This study aims to improve the understanding of nitrate process dynamics by using a temporal parameter sensitivity analysis applied to an eco-hydrological model. With this method, the dominant model parameters are detected for each day. By deriving temporal variations in dominant model parameters, the nitrate process dynamics are investigated for phases with different conditions for nitrate transport and transformation. The results show that the sensitivity of different nitrate parameters varies temporally. These temporal dynamics in dominant parameters are explained by temporal variations in nitrate transport and plant uptake processes. An extended view of these dynamics is obtained by analysing different modelled runoff components and nitrate pathways. This assists the interpretation of seasonal variations in dominant nitrate pathways and yields a better understanding of the role of nitrate in the environment. We conclude that this method improves the reliability of modelled nitrate processes and thereby provides a better basis for current and future scenarios of nitrate load management.
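The day-by-day detection of dominant parameters can be sketched with standardized regression coefficients computed separately for each day. The "model" below is a synthetic stand-in for the eco-hydrological model, constructed so that a different parameter dominates on each of the first three days; all names and values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_runs, n_days, n_params = 200, 4, 3
params = rng.uniform(size=(n_runs, n_params))        # sampled parameter sets

# Synthetic stand-in model: the dominant parameter changes with the day
weights = np.array([[3.0, 0.1, 0.1],
                    [0.1, 3.0, 0.1],
                    [0.1, 0.1, 3.0],
                    [1.0, 1.0, 1.0]])
output = params @ weights.T + 0.05 * rng.normal(size=(n_runs, n_days))

def daily_dominance(params, output):
    """Absolute standardized regression coefficients per day; the largest
    value on each day marks the temporally dominant parameter."""
    X = (params - params.mean(0)) / params.std(0)
    sens = np.empty((output.shape[1], params.shape[1]))
    for d in range(output.shape[1]):
        y = (output[:, d] - output[:, d].mean()) / output[:, d].std()
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        sens[d] = np.abs(coef)
    return sens

sens = daily_dominance(params, output)
dominant = sens.argmax(axis=1)   # index of the dominant parameter per day
```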

  1. Reduction of Large Detailed Chemical Kinetic Mechanisms for Autoignition Using Joint Analyses of Reaction Rates and Sensitivities

    SciTech Connect

    Saylam, A; Ribaucour, M; Pitz, W J; Minetti, R

    2006-11-29

    A new technique of reduction of detailed mechanisms for autoignition, which is based on two analysis methods is described. An analysis of reaction rates is coupled to an analysis of reaction sensitivity for the detection of redundant reactions. Thresholds associated with the two analyses have a great influence on the size and efficiency of the reduced mechanism. Rules of selection of the thresholds are defined. The reduction technique has been successfully applied to detailed autoignition mechanisms of two reference hydrocarbons: n-heptane and iso-octane. The efficiency of the technique and the ability of the reduced mechanisms to reproduce well the results generated by the full mechanism are discussed. A speedup of calculations by a factor of 5.9 for n-heptane mechanism and by a factor of 16.7 for iso-octane mechanism is obtained without losing accuracy of the prediction of autoignition delay times and concentrations of intermediate species.

  2. Analysing Learning Processes and Quality of Knowledge Construction in Networked Learning

    ERIC Educational Resources Information Center

    Veldhuis-Diermanse, A. E.; Biemans, H. J. A.; Mulder, M.; Mahdizadeh, H.

    2006-01-01

    Networked learning aims to foster students' knowledge construction processes as well as the quality of knowledge construction. In this respect, it is crucial to be able to analyse both aspects of networked learning. Based on theories on networked learning and the empirical work of relevant authors in this domain, two coding schemes are presented…

  3. A Coding Scheme for Analysing Problem-Solving Processes of First-Year Engineering Students

    ERIC Educational Resources Information Center

    Grigg, Sarah J.; Benson, Lisa C.

    2014-01-01

    This study describes the development and structure of a coding scheme for analysing solutions to well-structured problems in terms of cognitive processes and problem-solving deficiencies for first-year engineering students. A task analysis approach was used to assess students' problem solutions using the hierarchical structure from a…

  4. Sensitive assay of glycogen phosphorylase activity by analysing the chain-lengthening action on a Fluorogenic [corrected] maltooligosaccharide derivative.

    PubMed

    Makino, Yasushi; Omichi, Kaoru

    2009-07-01

    The action of glycogen phosphorylase (GP) is essentially reversible, although GP is generally classified as a glycogen-degrading enzyme. In this study, we developed a highly sensitive and convenient assay for GP activity by analysing its chain-lengthening action on a fluorogenic maltooligosaccharide derivative in a glucose-1-phosphate-rich medium. Characterization of the substrate specificity of GP using pyridylaminated (PA-) maltooligosaccharides of various sizes revealed that a maltotetraosyl (Glc(4)) residue comprising the non-reducing-end of a PA-maltooligosaccharide is indispensable for the chain-lengthening action of GP, and PA-maltohexaose is the most suitable substrate for the purpose of this study. By using a high-performance liquid chromatograph equipped with a fluorescence spectrophotometer, PA-maltoheptaose produced by the chain elongation of PA-maltohexaose could be isolated and quantified at 10 fmol. This method was used to measure the GP activities of crude and purified GP preparations, and was demonstrated to have about 1,000 times greater sensitivity than the spectrophotometric orthophosphate assay. PMID:19279194

  5. Uncertainty and sensitivity analyses for gas and brine migration at the Waste Isolation Pilot Plant, May 1992

    SciTech Connect

    Helton, J.C.; Bean, J.E.; Butcher, B.M.; Garner, J.W.; Vaughn, P.; Schreiber, J.D.; Swift, P.N.

    1993-08-01

    Uncertainty and sensitivity analysis techniques based on Latin hypercube sampling, partial correlation analysis, stepwise regression analysis and examination of scatterplots are used in conjunction with the BRAGFLO model to examine two-phase flow (i.e., gas and brine) at the Waste Isolation Pilot Plant (WIPP), which is being developed by the US Department of Energy as a disposal facility for transuranic waste. The analyses consider either a single waste panel or the entire repository in conjunction with the following cases: (1) fully consolidated shaft, (2) system of shaft seals with panel seals, and (3) single shaft seal without panel seals. The purpose of this analysis is to develop insights on factors that are potentially important in showing compliance with applicable regulations of the US Environmental Protection Agency (i.e., 40 CFR 191, Subpart B; 40 CFR 268). The primary topics investigated are (1) gas production due to corrosion of steel, (2) gas production due to microbial degradation of cellulosics, (3) gas migration into anhydrite marker beds in the Salado Formation, (4) gas migration through a system of shaft seals to overlying strata, and (5) gas migration through a single shaft seal to overlying strata. Important variables identified in the analyses include initial brine saturation of the waste, stoichiometric terms for corrosion of steel and microbial degradation of cellulosics, gas barrier pressure in the anhydrite marker beds, shaft seal permeability, and panel seal permeability.
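The sampling-and-screening approach (Latin hypercube sampling followed by rank-correlation analysis) can be sketched on a stand-in model. The three inputs are placeholders, not the actual BRAGFLO variables, and the full analysis would layer stepwise regression and scatterplot examination on top of this.

```python
import numpy as np

rng = np.random.default_rng(1)

def latin_hypercube(n_samples, n_vars, rng):
    """One sample per equal-probability stratum for each variable,
    with strata independently permuted across variables."""
    strata = np.tile(np.arange(n_samples), (n_vars, 1))
    perm = rng.permuted(strata, axis=1).T            # (n_samples, n_vars)
    return (perm + rng.uniform(size=perm.shape)) / n_samples

# Placeholder inputs standing in for e.g. brine saturation, corrosion
# stoichiometry and seal permeability; placeholder linear response
X = latin_hypercube(100, 3, rng)
y = 5 * X[:, 0] + X[:, 1] + 0.1 * X[:, 2] + 0.1 * rng.normal(size=100)

def rank(a):
    r = np.empty(a.size)
    r[np.argsort(a)] = np.arange(a.size)
    return r

# Spearman-type rank correlation of each input with the output
rcc = np.array([np.corrcoef(rank(X[:, j]), rank(y))[0, 1] for j in range(3)])
```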

  6. The power of liking: Highly sensitive aesthetic processing for guiding us through the world

    PubMed Central

    Faerber, Stella J.; Carbon, Claus-Christian

    2012-01-01

    Assessing liking is one of the most intriguing and influencing types of processing we experience day by day. We can decide almost instantaneously what we like and are highly consistent in our assessments, even across cultures. Still, the underlying mechanism is not well understood and often neglected by vision scientists. Several potential predictors for liking are discussed in the literature, among them very prominently typicality. Here, we analysed the impact of subtle changes of two perceptual dimensions (shape and colour saturation) of three-dimensional models of chairs on typicality and liking. To increase the validity of testing, we utilized a test-adaptation–retest design for extracting sensitivity data of both variables from a static (test only) as well as from a dynamic perspective (test–retest). We showed that typicality was only influenced by shape properties, whereas liking combined processing of shape plus saturation properties, indicating more complex and integrative processing. Processing the aesthetic value of objects, persons, or scenes is an essential and sophisticated mechanism, which seems to be highly sensitive to the slightest variations of perceptual input. PMID:23145310

  7. Dual sensitivity mode system for monitoring processes and sensors

    DOEpatents

    Wilks, Alan D.; Wegerich, Stephan W.; Gross, Kenneth C.

    2000-01-01

    A method and system for analyzing a source of data. The system and method involve initially training the system using a selected data signal, calculating at least two levels of sensitivity using a pattern recognition methodology, activating a first mode of alarm sensitivity to monitor the data source, activating a second mode of alarm sensitivity to monitor the data source, and generating a first alarm signal upon the first mode of sensitivity detecting an alarm condition and a second alarm signal upon the second mode of sensitivity detecting an associated alarm condition. The first alarm condition and second alarm condition can be acted upon by an operator and/or analyzed by a specialist or computer program.

  8. To analyse a trace or not? Evaluating the decision-making process in the criminal investigation.

    PubMed

    Bitzer, Sonja; Ribaux, Olivier; Albertini, Nicola; Delémont, Olivier

    2016-05-01

    In order to broaden our knowledge and understanding of the decision steps in the criminal investigation process, we started by evaluating the decision to analyse a trace and the factors involved in this decision step. This decision step is embedded in the complete criminal investigation process, involving multiple decision and triaging steps. Considering robbery cases occurring in a geographic region during a 2-year period, we have studied the factors influencing the decision to submit biological traces, directly sampled on the scene of the robbery or on collected objects, for analysis. The factors were categorised into five knowledge dimensions: strategic, immediate, physical, criminal and utility, and decision tree analysis was carried out. Factors in each category played a role in the decision to analyse a biological trace. Interestingly, factors involving information available prior to the analysis are of importance, such as the fact that a positive result (a profile suitable for comparison) is already available in the case, or that a suspect has been identified through traditional police work before analysis. One factor that was taken into account, but was not significant, is the matrix of the trace. Hence, the decision to analyse a trace is not influenced by this variable. The decision whether to analyse a trace is very complex and many of the tested variables were taken into account. The decisions are often made on a case-by-case basis. PMID:26942272

  9. Using Uncertainty and Sensitivity Analyses in Socioecological Agent-Based Models to Improve Their Analytical Performance and Policy Relevance

    PubMed Central

    Ligmann-Zielinska, Arika; Kramer, Daniel B.; Spence Cheruvelil, Kendra; Soranno, Patricia A.

    2014-01-01

    Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system. PMID:25340764

  10. Using uncertainty and sensitivity analyses in socioecological agent-based models to improve their analytical performance and policy relevance.

    PubMed

    Ligmann-Zielinska, Arika; Kramer, Daniel B; Spence Cheruvelil, Kendra; Soranno, Patricia A

    2014-01-01

    Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system. PMID:25340764
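The output variance decomposition that drives this framework can be sketched with a binning estimate of the first-order sensitivity index S_i = Var(E[Y|X_i]) / Var(Y). The three-input model below is a synthetic stand-in, not the farmland-conservation ABM, and the binning estimator is one simple choice among several.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
X = rng.uniform(size=(n, 3))                     # sampled model inputs
# Stand-in model: input 0 dominates, input 1 matters, input 2 is inert
Y = 4 * X[:, 0] + 2 * X[:, 1] ** 2 + 0.2 * rng.normal(size=n)

def first_order_index(x, y, bins=50):
    """S_i = Var(E[Y|X_i]) / Var(Y), with E[Y|X_i] estimated by binning
    X_i into equal-count strata."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    counts = np.bincount(idx, minlength=bins)
    var_cond = np.average((cond_means - y.mean()) ** 2, weights=counts)
    return var_cond / y.var()

S = np.array([first_order_index(X[:, j], Y) for j in range(3)])
```

Inputs with near-zero S_i are candidates for fixing at nominal values, which is the reduction step the abstract describes.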

  11. Feedback processing in children and adolescents: Is there a sensitivity for processing rewarding feedback?

    PubMed

    Ferdinand, Nicola K; Becker, Aljoscha M W; Kray, Jutta; Gehring, William J

    2016-02-01

    Developmental studies indicate that children rely more on external feedback than adults. Some of these studies claim that they additionally show higher sensitivity toward positive feedback, while others find they preferably use negative feedback for learning. However, these studies typically did not disentangle feedback valence and expectancy, which might contribute to the controversial results. The present study aimed at examining the neurophysiological correlates of feedback processing in children (8-10 years) and adolescents (12-14 years) in a time estimation paradigm that allows separating the contribution of valence and expectancy. Our results show that in the feedback-related negativity (FRN), an event-related potential (ERP) reflecting the fast initial processing of feedback stimuli, children and adolescents did not differentiate between unexpected positive and negative feedback. Thus, they did not show higher sensitivity to positive feedback. The FRN did also not differentiate between expected and unexpected feedback, as found for adults. In contrast, in a later processing stage mirrored in the P300 component of the ERP, children and adolescents processed the feedback's unexpectedness. Interestingly, adolescents with better behavioral adaptation (high-performers) also had a more frontal P300 expectancy effect. Thus, the recruitment of additional frontal brain regions might lead to better learning from feedback in adolescents. PMID:26772145

  12. Low-temperature oxidation of magnetite - a humidity sensitive process?

    NASA Astrophysics Data System (ADS)

    Appel, Erwin; Fang, Xiaomin; Herb, Christian; Hu, Shouyun

    2015-04-01

    Extensive multi-parameter palaeoclimate records were obtained from two long-term lacustrine archives on the Tibetan Plateau: the Qaidam basin (2.69-0.08 Ma) and Heqing basin (0.90-0.03 Ma). At present, the region of the Qaidam site has an arid climate (<100 mm mean annual precipitation) while the Heqing site is located in the sub-tropical region with monsoonal rainfall. Magnetic properties play a prominent role for palaeoclimate interpretation in both records. Several parameters show a 100 kyr eccentricity cyclicity; in the Qaidam record the Mid-Pleistocene Transition is also seen. Both magnetic records are controlled by different absolute and relative contributions of magnetite and its altered (maghemitized) phases as well as hematite. Weathering conditions likely cause a systematic variation of magnetic mineralogy due to low-temperature oxidation (LTO). Maghemitization is well recognized as an alteration process in submarine basalts, but little is known about its relevance for climate-induced weathering in continental environments. Various factors, i.e., humidity, temperature, seasonality, duration of specific weathering conditions, and bacterial activity, could be responsible for maghemitization (LTO) and transformation to hematite (or goethite) when a critical degree of LTO is reached. These factors may lead to a complex interplay, but one has to note that water acts as an electrolyte for Fe(II) to Fe(III) oxidation at the crystal surface, and due to maghemitization-induced lattice shrinking a larger internal particle surface area becomes exposed to oxidation. We suggest that humidity is the most crucial driver for the two studied archives - for the following reasons: (1) The overall parameter variations and catchment conditions are well in agreement with an LTO scenario. (2) In the Qaidam record we observe a direct relationship of a humidity-sensitive pollen ratio with magnetic susceptibility (reflecting the degree of alteration by LTO). (3) In the Heqing record

  13. Numerical Analyses on Transient Thermal Process of Gas - Cooled Current Leads in BEPC II

    NASA Astrophysics Data System (ADS)

    Zhang, X. B.; Yao, Z. L.; Wang, L.; Jia, L. X.

    2004-06-01

    A pair of high current leads will be used for the superconducting detector solenoid magnet and six pairs of low current leads will be used for the superconducting interaction quadrupole magnets in the Beijing Electron-Positron Collider Upgrade (BEPC II). This paper reports the numerical analyses on the thermal processes in the current leads, including the power charging process and the overloaded current case, as well as the transient characteristics of the leads once the helium cooling is interrupted. The design parameters of the current leads are studied for the stable and unstable conditions.

  14. A 4D-Var CO2 inversion system with NICAM-TM: development and sensitivity analyses

    NASA Astrophysics Data System (ADS)

    Niwa, Y.; Fujii, Y.; Sawa, Y.; Ito, A.; Iida, Y.; Tomita, H.; Masaki, S.; Imasu, R.; Matsuda, H.; Machida, T.; Saigusa, N.

    2014-12-01

    Our understanding of the global carbon cycle and its feedback mechanisms to climate change is limited due to high uncertainties in estimates of regional CO2 fluxes at the earth surface. Recently, a large amount of CO2 concentration data is becoming available from high-frequency aircraft measurements (e.g., CONTRAIL) and satellite measurements (e.g., GOSAT and OCO-2), in addition to the expansion of surface measurement networks. To exploit those observational data, a new inversion system has been developed with the four-dimensional variational (4D-Var) method. The system is based on the Nonhydrostatic ICosahedral Atmospheric Model-based Transport Model (NICAM-TM), which consists of forward and adjoint transport modes. For the a priori fluxes of terrestrial biospheres and oceans, CO2 flux data from the Vegetation Integrative SImulator for Trace Gases (VISIT) and the diagnostic model of the Japan Meteorological Agency (JMA), respectively, are used. In the iterative calculation, the quasi-Newton method of the Preconditioned Optimizing Utility for Large-dimensional analyses (POpULar) is used. In this study, we present the structure of the newly developed system and its performance for CO2 flux estimates in ideal twin experiments. In the twin experiments, sensitivities to the prior error covariance, numerical algorithms, and observational networks are investigated. Acknowledgment: This study is supported by the Environment Research and Technology Development Fund (2-1401) of the Ministry of the Environment, Japan.

  15. Sensitivity analyses for clustered data: an illustration from a large-scale clustered randomized controlled trial in education.

    PubMed

    Abe, Yasuyo; Gee, Kevin A

    2014-12-01

    In this paper, we demonstrate the importance of conducting well-thought-out sensitivity analyses for handling clustered data (data in which individuals are grouped into higher-order units, such as students in schools) that arise from cluster randomized controlled trials (RCTs). This is particularly relevant given the rise in rigorous impact evaluations that use cluster randomized designs across various fields, including education, public health and social welfare. Using data from a recently completed cluster RCT of a school-based teacher professional development program, we demonstrate our use of four commonly applied methods for analyzing clustered data: (1) hierarchical linear modeling (HLM); (2) feasible generalized least squares (FGLS); (3) generalized estimating equations (GEE); and (4) ordinary least squares (OLS) regression with cluster-robust (Huber-White) standard errors. We compare our findings across each method, showing how inconsistent results (in terms of both effect sizes and statistical significance) emerged across the methods, and describe our analytic approach to resolving such inconsistencies. PMID:25090223
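
    As a concrete illustration of why the choice of method matters, the sketch below (hypothetical simulated data, pure Python) contrasts classical OLS standard errors with cluster-robust (Liang-Zeger sandwich) standard errors when treatment is assigned at the cluster level, in the spirit of method (4) above. It is a toy example, not the authors' analysis.

```python
import math
import random

random.seed(42)
G, m, tau = 30, 20, 0.5               # clusters, cluster size, true effect
groups, xs, ys = [], [], []
for g in range(G):
    x = 1.0 if g < G // 2 else 0.0    # treatment assigned at cluster level
    u = random.gauss(0, 1)            # shared cluster-level shock
    for _ in range(m):
        groups.append(g)
        xs.append(x)
        ys.append(tau * x + u + random.gauss(0, 1))

n = len(ys)
# OLS of y on [1, x] via the 2x2 normal equations
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
det = n * sxx - sx * sx
b0 = (sxx * sy - sx * sxy) / det
b1 = (n * sxy - sx * sy) / det
resid = [y - b0 - b1 * x for x, y in zip(xs, ys)]

# classical (iid) standard error of the slope
s2 = sum(r * r for r in resid) / (n - 2)
se_classical = math.sqrt(s2 * n / det)

# cluster-robust sandwich: meat = sum_g (X_g' u_g)(X_g' u_g)'
meat = [[0.0, 0.0], [0.0, 0.0]]
for g in range(G):
    s = [0.0, 0.0]
    for i in range(n):
        if groups[i] == g:
            s[0] += resid[i]
            s[1] += xs[i] * resid[i]
    for r in range(2):
        for c in range(2):
            meat[r][c] += s[r] * s[c]
# bread = (X'X)^-1; slope variance is the (1,1) entry of bread*meat*bread
inv = [[sxx / det, -sx / det], [-sx / det, n / det]]
row = [inv[1][0] * meat[0][0] + inv[1][1] * meat[1][0],
       inv[1][0] * meat[0][1] + inv[1][1] * meat[1][1]]
se_cluster = math.sqrt(row[0] * inv[0][1] + row[1] * inv[1][1])
# With cluster-level treatment and shared shocks, se_cluster >> se_classical.
```

    Ignoring the clustering here would understate the standard error by roughly the square root of the design effect, which is exactly the kind of inconsistency such sensitivity analyses are meant to surface.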

  16. Sensitivity Analyses in Small Break LOCA with HPI-Failure: Effect of Break-Size in Secondary-Side Depressurization

    NASA Astrophysics Data System (ADS)

    Kinoshita, Ikuo; Torige, Toshihide; Yamada, Minoru

    2014-06-01

    In the case of total failure of the high pressure injection (HPI) system following a small break loss of coolant accident (SBLOCA) in a pressurized water reactor (PWR), the break size is so small that the primary system does not depressurize to the accumulator (ACC) injection pressure before the core is uncovered extensively. Therefore, steam generator (SG) secondary-side depressurization is necessary as an accident management measure in order to permit accumulator system actuation and core reflood. A thermal-hydraulic analysis using RELAP5/MOD3 was performed for SBLOCA with HPI failure for Oi Units 3/4 operated by Kansai Electric Power Co., which are conventional 4-loop PWR plants. The effectiveness of the SG secondary-side depressurization procedure was investigated for the real plant design and operational characteristics. The sensitivity analyses using RELAP5/MOD3.2 showed that the accident management measure was effective for a wide range of break sizes and for various break orientations and positions. The critical break can be a 3-inch cold-leg bottom break.

  17. Assessing uncertainty in ecological systems using global sensitivity analyses: a case example of simulated wolf reintroduction effects on elk

    USGS Publications Warehouse

    Fieberg, J.; Jenkins, Kurt J.

    2005-01-01

    Often landmark conservation decisions are made despite an incomplete knowledge of system behavior and inexact predictions of how complex ecosystems will respond to management actions. For example, predicting the feasibility and likely effects of restoring top-level carnivores such as the gray wolf (Canis lupus) to North American wilderness areas is hampered by incomplete knowledge of the predator-prey system processes and properties. In such cases, global sensitivity measures, such as Sobol' indices, allow one to quantify the effect of these uncertainties on model predictions. Sobol' indices are calculated by decomposing the variance in model predictions (due to parameter uncertainty) into main effects of model parameters and their higher order interactions. Model parameters with large sensitivity indices can then be identified for further study in order to improve predictive capabilities. Here, we illustrate the use of Sobol' sensitivity indices to examine the effect of parameter uncertainty on the predicted decline of elk (Cervus elaphus) population sizes following a hypothetical reintroduction of wolves to Olympic National Park, Washington, USA. The strength of density dependence acting on survival of adult elk and the magnitude of predation were the most influential factors controlling elk population size following a simulated wolf reintroduction. In particular, the form of density dependence in natural survival rates and the per-capita predation rate together accounted for over 90% of variation in simulated elk population trends. Additional research on wolf predation rates on elk and natural compensations in prey populations is needed to reliably predict the outcome of predator-prey system behavior following wolf reintroductions.
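
    The variance decomposition behind Sobol' indices can be estimated by Monte Carlo sampling. The sketch below is a minimal, hypothetical illustration (not the elk-wolf model): a Saltelli-style estimator of the first-order indices for a toy linear model whose exact indices are known analytically.

```python
import random

def model(x):
    # Toy model with analytic first-order Sobol' indices:
    # Var(y) = 4/12 + 1/12, so S1 = 0.8 and S2 = 0.2.
    return 2.0 * x[0] + x[1]

def first_order_sobol(n=50000, seed=0):
    rng = random.Random(seed)
    A = [[rng.random(), rng.random()] for _ in range(n)]
    B = [[rng.random(), rng.random()] for _ in range(n)]
    yA = [model(a) for a in A]
    yB = [model(b) for b in B]
    mu = sum(yA) / n
    var = sum((y - mu) ** 2 for y in yA) / n
    S = []
    for i in range(2):
        # AB_i: matrix A with column i swapped in from B
        yABi = []
        for a, b in zip(A, B):
            x = list(a)
            x[i] = b[i]
            yABi.append(model(x))
        # Saltelli-style estimator of the first-order index
        S.append(sum(yb * (yab - ya)
                     for yb, yab, ya in zip(yB, yABi, yA)) / n / var)
    return S

S1, S2 = first_order_sobol()   # expect roughly 0.8 and 0.2
```

    A large first-order index flags a parameter (here x[0]) whose uncertainty dominates prediction variance, and hence a priority for further study.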

  18. Nonlinear optical signal processing on multiwavelength sensitive materials.

    PubMed

    Azimipour, Mehdi; Pashaie, Ramin

    2013-11-01

    Exploiting salient features in the photodynamics of specific types of light-sensitive materials, a new approach is presented for the realization of parallel nonlinear operations with optics. We briefly review the quantum structure and mathematical models offered for the photodynamics of two multiwavelength-sensitive materials, doped crystals of lithium niobate and thick layers of bacteriorhodopsin. Next, a special mode of these dynamics in each material is investigated and a graphical design procedure is offered to produce highly nonlinear optical responses that can be dynamically reshaped by applying minimal changes to the optical setup. PMID:24177084

  19. A Sensitive and Robust HPLC Assay with Fluorescence Detection for the Quantification of Pomalidomide in Human Plasma for Pharmacokinetic Analyses

    PubMed Central

    Shahbazi, Shandiz; Peer, Cody J.; Polizzotto, Mark N.; Uldrick, Thomas S.; Roth, Jeffrey; Wyvill, Kathleen M.; Aleman, Karen; Zeldis, Jerome B.; Yarchoan, Robert; Figg, William D.

    2014-01-01

    Pomalidomide is a second generation IMiD (immunomodulatory agent) that has recently been granted approval by the Food and Drug Administration for treatment of relapsed multiple myeloma after prior treatment with two antimyeloma agents, including lenalidomide and bortezomib. A simple and robust HPLC assay with fluorescence detection for pomalidomide over the range of 1–500 ng/mL has been developed for application to pharmacokinetic studies in ongoing clinical trials in various other malignancies. A liquid-liquid extraction from human plasma alone or pre-stabilized with 0.1% HCl was performed, using propyl paraben as the internal standard. From plasma either pre-stabilized with 0.1% HCl or not, the assay was shown to be selective, sensitive, accurate, precise, and have minimal matrix effects (<20%). Pomalidomide was stable in plasma through 4 freeze-thaw cycles (<12% change), in plasma at room temperature for up to 2 hr for samples not pre-stabilized with 0.1% HCl and up to 8 hr in samples pre-stabilized with 0.1% HCl, 24 hr post-preparation at 4 °C (<2% change), and showed excellent extraction recovery (~90%). This is the first reported description of the freeze/thaw and plasma stability of pomalidomide in plasma either pre-stabilized with 0.1% HCl or not. The information presented in this manuscript is important when performing pharmacokinetic analyses. The method was used to analyze clinical pharmacokinetics samples obtained after a 5 mg oral dose of pomalidomide. This relatively simple HPLC-FL assay allows a broader range of laboratories to measure pomalidomide for application to clinical pharmacokinetics. PMID:24486861

  1. Phenotypic and Genetic Analyses of the Varroa Sensitive Hygienic Trait in Russian Honey Bee (Hymenoptera: Apidae) Colonies

    PubMed Central

    Kirrane, Maria J.; de Guzman, Lilia I.; Holloway, Beth; Frake, Amanda M.; Rinderer, Thomas E.; Whelan, Pádraig M.

    2015-01-01

    Varroa destructor continues to threaten colonies of European honey bees. General hygiene and, more specifically, Varroa Sensitive Hygiene (VSH) provide resistance towards the Varroa mite in a number of stocks. In this study, 32 Russian (RHB) and 14 Italian honey bee colonies were assessed for the VSH trait using two different assays. Firstly, colonies were assessed using the standard VSH behavioural assay of the change in infestation of a highly infested donor comb after a one-week exposure. Secondly, the same colonies were assessed using an “actual brood removal assay” that measured the removal of brood in a section created within the donor combs as a potential alternative measure of hygiene towards Varroa-infested brood. All colonies were then analysed for the recently discovered VSH quantitative trait locus (QTL) to determine whether the genetic mechanisms were similar across different stocks. Based on the two assays, RHB colonies were consistently more hygienic toward Varroa-infested brood than Italian honey bee colonies. The actual number of brood cells removed in the defined section was negatively correlated with the Varroa infestations of the colonies (r2 = 0.25). Only two (percentages of brood removed and reproductive foundress Varroa) out of nine phenotypic parameters showed significant associations with genotype distributions. However, the allele associated with each parameter was the opposite of that determined by VSH mapping. In this study, RHB colonies showed high levels of hygienic behaviour towards Varroa-infested brood. The genetic mechanisms are similar to those of the VSH stock, though the opposite allele associates in RHB, indicating a stable recombination event before the selection of the VSH stock. The measurement of brood removal is a simple, reliable alternative method of measuring hygienic behaviour towards Varroa mites, at least in RHB stock. PMID:25909856

  3. The quest for the best: The impact of different EPI sequences on the sensitivity of random effect fMRI group analyses

    PubMed Central

    Kirilina, Evgeniya; Lutti, Antoine; Poser, Benedikt A.; Blankenburg, Felix; Weiskopf, Nikolaus

    2016-01-01

    We compared the sensitivity of standard single-shot 2D echo planar imaging (EPI) to three advanced EPI sequences, i.e., 2D multi-echo EPI, 3D high resolution EPI and 3D dual-echo fast EPI, in fixed effects and random effects group level fMRI analyses at 3 T. The study focused on how well the variance reduction in fixed effects analyses achieved by advanced EPI sequences translates into increased sensitivity in the random effects group level analysis. The sensitivity was estimated in functional MRI experiments using an emotional learning task and a reward-based learning task in a group of 24 volunteers. Each experiment was acquired with the four different sequences. The task-related response amplitude, contrast level and respective t-value were proxies for the functional sensitivity across the brain. All three advanced EPI methods increased the sensitivity in the fixed effects analyses, but standard single-shot 2D EPI provided comparable performance in the random effects group analysis when whole brain coverage and moderate resolution are required. In this experiment inter-subject variability determined the sensitivity of the random effects analysis for most brain regions, making the impact of EPI pulse sequence improvements less relevant or even negligible for random effects analyses. An exception concerns the optimization of EPI to reduce susceptibility-related signal loss, which translates into enhanced sensitivity, e.g. in the orbitofrontal cortex for multi-echo EPI. Thus, future optimization strategies may best aim at reducing inter-subject variability for higher sensitivity in standard fMRI group studies at moderate spatial resolution. PMID:26515905

  5. Sensitive, time-resolved, broadband spectroscopy of single transient processes

    NASA Astrophysics Data System (ADS)

    Fjodorow, Peter; Baev, Ivan; Hellmig, Ortwin; Sengstock, Klaus; Baev, Valery M.

    2015-09-01

    Intracavity absorption spectroscopy with a broadband Er3+-doped fiber laser is applied to time-resolved measurements of transient gain and absorption in electrically excited Xe and Kr plasmas. The achieved time resolution for broadband spectral recording of a single process is 25 µs. For pulsed-periodic processes, the time resolution is limited by the laser pulse duration, which is set here to 3 µs. This pulse duration also predefines the effective absorption path length, which amounts to 900 m. The presented technique can be applied to multicomponent analysis of single transient processes such as shock tube experiments, pulse detonation engines, or explosives.

  6. Sensitive Analysis of Observation Model for Human Tracking Using a Stochastic Process

    NASA Astrophysics Data System (ADS)

    Nakanishi, W.; Fuse, T.

    2014-06-01

    This paper aims to obtain basic knowledge about the characteristics of observation models for human tracking methods formulated as stochastic processes. Because human tracking in real settings is complicated, the same observation model cannot always be used in every situation; in most cases, observation models have so far been set empirically. To choose models and parameters efficiently, it is important to understand the advantages and disadvantages of such models with respect to observation conditions. In this paper we conduct a sensitivity analysis of several types of observation models. In particular, we collected both colour and range data at a railway station. We prepared six predictive distributions as well as six models and parameter settings for both the colour and range observation models, and calculated posterior distributions for each combination, namely 36 patterns for each of the colour and range models. As a sensitivity analysis, we compare the ground-truth value with the expected value of each posterior, and we also compare the variances of the predictive and posterior distributions. The experimental results confirm that our analysis method is efficient for characterizing observation models; indeed, all of the models analysed performed well overall. One suggestive result is that colour models can compensate for predictive errors in mean values, while range models can compensate for errors in variances; another is that range models perform well under occlusions.
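
    The comparison described above, between a ground-truth value, the posterior expectation, and the predictive versus posterior variances, can be illustrated with a one-dimensional conjugate-Gaussian update. The numbers below are hypothetical stand-ins, not the paper's colour/range models.

```python
# One-dimensional Gaussian sketch of the sensitivity comparison:
# a predictive distribution N(m0, v0) is updated by an observation z
# with noise variance vo. All values are illustrative assumptions.

truth = 5.0            # ground-truth position of the tracked person
m0, v0 = 3.0, 4.0      # predictive (prior) mean and variance
z, vo = 4.8, 0.25      # observation and observation-noise variance

post_var = 1.0 / (1.0 / v0 + 1.0 / vo)        # posterior variance
post_mean = post_var * (m0 / v0 + z / vo)     # posterior mean

# Comparisons in the spirit of the paper:
err_prior = abs(m0 - truth)         # predictive error vs ground truth
err_post = abs(post_mean - truth)   # posterior error vs ground truth
# An informative observation shrinks both the error and the variance.
```

    Repeating such comparisons across model/parameter combinations is one simple way to quantify how sensitive the posterior is to the choice of observation model.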

  7. Sensitivity to masses in the r-process

    NASA Astrophysics Data System (ADS)

    Brett, Sam; Aprahamian, Ani

    2009-10-01

    The rapid neutron capture process (r-process) is thought to produce over 50% of the elements beyond iron and still remains, in many ways, a mystery. Questions about its site, its conditions, and whether it is a single process remain open. The process is affected by the astrophysics of the scenario and by the nuclear physics of the nuclei involved. Simulations of the r-process require large sets of data such as cross sections, separation energies and decay rates. Clearly, it would be desirable for all of these quantities to be measured experimentally, but since the process involves extremely neutron-rich nuclei, perilously close to the drip line, many theoretical values must be used. Using an r-process simulation written by Bradley Meyer in 1993, we have been able to see the effects of changing the mass models (and therefore the separation energies) on the final abundances. The input includes the Finite Range Droplet Model, the ETFSI, Duflo-Zucker, and F0 models. By comparing these theoretical models against each other and against known masses, we hope to be able to suggest key regions for further mass measurements.

  8. Temperament trait of sensory processing sensitivity moderates cultural differences in neural response

    PubMed Central

    Ketay, Sarah; Hedden, Trey; Aron, Elaine N.; Rose Markus, Hazel; Gabrieli, John D. E.

    2010-01-01

    This study focused on a possible temperament-by-culture interaction. Specifically, it explored whether a basic temperament/personality trait (sensory processing sensitivity; SPS), perhaps having a genetic component, might moderate a previously established cultural difference in neural responses when making context-dependent vs context-independent judgments of simple visual stimuli. SPS has been hypothesized to underlie what has been called inhibitedness or reactivity in infants, introversion in adults, and reactivity or responsiveness in diverse animal species. Some biologists view the trait as one of two innate strategies: observing carefully before acting vs being first to act. Thus the central characteristic of SPS is hypothesized to be a deep processing of information. Here, 10 European-Americans and 10 East Asians underwent functional magnetic resonance imaging while performing simple visuospatial tasks emphasizing judgments that were either context independent (typically easier for Americans) or context dependent (typically easier for Asians). As reported elsewhere, each group exhibited greater activation for the culturally non-preferred task in frontal and parietal regions associated with greater effort in attention and working memory. However, further analyses, reported here for the first time, provided preliminary support for moderation by SPS. Consistent with the careful-processing theory, high-SPS individuals showed little cultural difference; low-SPS individuals showed strong cultural differences. PMID:20388694

  9. Genome-Wide Gene-Sodium Interaction Analyses on Blood Pressure: The Genetic Epidemiology Network of Salt-Sensitivity Study.

    PubMed

    Li, Changwei; He, Jiang; Chen, Jing; Zhao, Jinying; Gu, Dongfeng; Hixson, James E; Rao, Dabeeru C; Jaquish, Cashell E; Gu, Charles C; Chen, Jichun; Huang, Jianfeng; Chen, Shufeng; Kelly, Tanika N

    2016-08-01

    We performed genome-wide analyses to identify genomic loci that interact with sodium to influence blood pressure (BP) using single-marker-based (1 and 2 df joint tests) and gene-based tests among 1876 Chinese participants of the Genetic Epidemiology Network of Salt-Sensitivity (GenSalt) study. Among GenSalt participants, the average of 3 urine samples was used to estimate sodium excretion. Nine BP measurements were taken using a random zero sphygmomanometer. A total of 2.05 million single-nucleotide polymorphisms were imputed using Affymetrix 6.0 genotype data and the Chinese Han of Beijing and Japanese of Tokyo HapMap reference panel. Promising findings (P<1.00×10(-4)) from GenSalt were evaluated for replication among 775 Chinese participants of the Multi-Ethnic Study of Atherosclerosis (MESA). Single-nucleotide polymorphism and gene-based results were meta-analyzed across the GenSalt and MESA studies to determine genome-wide significance. The 1 df tests identified interactions for UST rs13211840 on diastolic BP (P=3.13×10(-9)). The 2 df tests additionally identified associations for CLGN rs2567241 (P=3.90×10(-12)) and LOC105369882 rs11104632 (P=4.51×10(-8)) with systolic BP. The CLGN variant rs2567241 was also associated with diastolic BP (P=3.11×10(-22)) and mean arterial pressure (P=2.86×10(-15)). Genome-wide gene-based analysis identified MKNK1 (P=6.70×10(-7)), C2orf80 (P<1.00×10(-12)), EPHA6 (P=2.88×10(-7)), SCOC-AS1 (P=4.35×10(-14)), SCOC (P=6.46×10(-11)), CLGN (P=3.68×10(-13)), MGAT4D (P=4.73×10(-11)), ARHGAP42 (P≤1.00×10(-12)), CASP4 (P=1.31×10(-8)), and LINC01478 (P=6.75×10(-10)) that were associated with at least 1 BP phenotype. In summary, we identified 8 novel and 1 previously reported BP loci through the examination of single-nucleotide polymorphism and gene-based interactions with sodium. PMID:27271309

  10. Microstructure Sensitive Design and Processing in Solid Oxide Electrolyzer Cell

    SciTech Connect

    Dr. Hamid Garmestani; Dr. Stephen Herring

    2009-06-12

    The aim of this study was to develop an inexpensive manufacturing process for the deposition of functionally graded thin films of LSM oxides with porosity-graded microstructures for use as IT-SOFC cathodes. The spray pyrolysis method was chosen as a low-temperature processing technique for deposition of porous LSM films onto dense YSZ substrates. The effort was directed toward the optimization of the processing conditions for deposition of high quality LSM films with a variety of morphologies ranging from dense to porous microstructures. Results of optimization studies of the spray parameters revealed that the substrate surface temperature is the most critical parameter influencing the roughness, morphology, porosity, cracking and crystallinity of the film.

  11. Sensitivity of Mycobacterium bovis to common beef processing interventions

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Introduction. Cattle infected with Mycobacterium bovis, the causative agent of bovine tuberculosis and a relevant zoonosis to humans, may be sent to slaughter before diagnosis of infection because of slow multiplication of the pathogen. Purpose. This study evaluates multiple processing interventi...

  12. Variable high pressure processing sensitivities for GII human noroviruses

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Human norovirus (HuNoV) is the leading cause of foodborne diseases worldwide. High pressure processing (HPP) is one of the most promising non-thermal technologies for decontamination of viral pathogens in foods. However, the survival of HuNoVs by HPP is poorly understood because these viruses cann...

  13. SENSITIVITY OF RADM TO POINT SOURCE EMISSIONS PROCESSING

    EPA Science Inventory

    The Regional Acid Deposition Model (RADM) and associated Engineering Model have been developed to study episodic source-receptor relationships on a regional scale. he RADM includes transport, chemical transformation, and deposition processes as well as input of emissions into the...

  14. Sensitivity of membranes to their environment. Role of stochastic processes.

    PubMed Central

    Offner, F F

    1984-01-01

    Ionic flow through biomembranes often exhibits a sensitivity to the environment that is difficult to explain by classical theory, which usually assumes that the free energy available to change the membrane permeability results from the environmental change acting directly on the permeability control mechanism. This implies, for example, that a change delta V in the trans-membrane potential can produce a maximum free energy change, delta V x q, on a gate (control mechanism) carrying a charge q. The analysis presented here shows that when stochastic fluctuations are considered, under suitable conditions (gate cycle times rapid compared with the field relaxation time within a channel), the change in free energy is limited not by the magnitude of the stimulus but by the electrochemical potential difference across the membrane, which may be very much greater. Conformational channel gates probably relax more slowly than the field within the channel; this would preclude appreciable direct amplification of the stimulus. It is shown, however, that the effect of impermeable cations such as Ca++ is to restore the amplification of the stimulus through its interaction with the electric field. The analysis predicts that the effect of Ca++ should be primarily to affect the number of channels that are open, while only slightly affecting the conductivity of an open channel. PMID:6093903

  15. Upconversion processes in Yb-sensitized Tm:ZBLAN

    SciTech Connect

    Carrig, T.J.; Cockroft, N.J.

    1996-10-01

    A spectroscopic study of 22 rare-earth-ion doped ZBLAN (fluorozirconate) glasses was performed to assess the feasibility of sensitizing Tm:ZBLAN with Yb to facilitate development of an efficient, conveniently pumped blue upconversion fiber laser. It was found that, under single-color pumping, 480 nm emission from Tm{sup 3+} was strongest when Yb,Tm:ZBLAN is excited at 975 nm; the strongest blue emission was obtained from a glass sample with 2.0 wt% Yb + 0.3 wt% Tm. Also, for weak 975 nm pump intensities, the strength of the blue upconversion emission can be greatly enhanced by simultaneously pumping at 785 nm. This increased upconversion efficiency is due to the reduced number of energy transfer steps needed to populate the Tm{sup 3+} {sup 1}G{sub 4} energy level. Measurements of fluorescence lifetimes vs dopant concentration were also made for Yb{sup 3+}, Tm{sup 3+}, and Pr{sup 3+} transitions in ZBLAN in order to better characterize concentration quenching effects. Energy transfer between Tm{sup 3+} and Pr{sup 3+} in ZBLAN is also described.

  16. In situ analyses on negative ions in the indium-gallium-zinc oxide sputtering process

    SciTech Connect

    Jia, Junjun; Torigoshi, Yoshifumi; Shigesato, Yuzo

    2013-07-01

    The origin of negative ions in the dc magnetron sputtering process using a ceramic indium-gallium-zinc oxide target has been investigated by in situ analyses. The observed negative ions are mainly O{sup -} with energies corresponding to the target voltage, which originates from the target and barely from the reactive gas (O{sub 2}). Dissociation of ZnO{sup -}, GaO{sup -}, ZnO{sub 2}{sup -}, and GaO{sub 2}{sup -} radicals also contributes to the total negative ion flux. Furthermore, we find that some sputtering parameters, such as the type of sputtering gas (Ar or Kr), sputtering power, total gas pressure, and magnetic field strength at the target surface, can be used to control the energy distribution of the O{sup -} ion flux.

  17. Thermal threshold and sensitivity of the only symbiotic Mediterranean gorgonian Eunicella singularis by morphometric and genotypic analyses.

    PubMed

    Pey, Alexis; Catanéo, Jérôme; Forcioli, Didier; Merle, Pierre-Laurent; Furla, Paola

    2013-07-01

    The only symbiotic Mediterranean gorgonian, Eunicella singularis, has faced several mortality events connected to abnormally high temperatures. Since thermotolerance data remain scarce, heat-induced necrosis was monitored in aquaria by morphometric analysis. Gorgonian tips were sampled at two sites, the Medes Islands (Spain) and Riou Island (France), and at two depths, -15 m and -35 m. Although the populations came from contrasting thermal regimes, seawater above 28 °C led to rapid and complete tissue necrosis in all four populations. However, at 27 °C, the time needed to reach 50% tissue necrosis allowed us to classify samples into three classes of thermal sensitivity. Irrespective of depth, Medes specimens were either very sensitive or resistant, while Riou fragments presented an intermediate sensitivity. Microsatellite analysis revealed that host and symbiont were genetically differentiated between sites, but not between depths. Finally, these genetic differentiations were not directly correlated with a specific thermal sensitivity, whose molecular bases remain to be discovered. PMID:23932253

  18. IMPROVING PARTICULATE MATTER SOURCE APPORTIONMENT FOR HEALTH STUDIES: A TRAINED RECEPTOR MODELING APPROACH WITH SENSITIVITY, UNCERTAINTY AND SPATIAL ANALYSES

    EPA Science Inventory

    An approach for conducting PM source apportionment will be developed, tested, and applied that directly addresses limitations in current SA methods, in particular variability, biases, and intensive resource requirements. Uncertainties in SA results and sensitivities to SA inpu...

  19. Assessing the risk of bluetongue to UK livestock: uncertainty and sensitivity analyses of a temperature-dependent model for the basic reproduction number.

    PubMed

    Gubbins, Simon; Carpenter, Simon; Baylis, Matthew; Wood, James L N; Mellor, Philip S

    2008-03-01

    Since 1998 bluetongue virus (BTV), which causes bluetongue, a non-contagious, insect-borne infectious disease of ruminants, has expanded northwards in Europe in an unprecedented series of incursions, suggesting that there is a risk to the large and valuable British livestock industry. The basic reproduction number, R(0), provides a powerful tool with which to assess the level of risk posed by a disease. In this paper, we compute R(0) for BTV in a population comprising two host species, cattle and sheep. Estimates for each parameter which influences R(0) were obtained from the published literature, using those applicable to the UK situation wherever possible. Moreover, explicit temperature dependence was included for those parameters for which it had been quantified. Uncertainty and sensitivity analyses based on Latin hypercube sampling and partial rank correlation coefficients identified temperature, the probability of transmission from host to vector and the vector to host ratio as being most important in determining the magnitude of R(0). The importance of temperature reflects the fact that it influences many processes involved in the transmission of BTV and, in particular, the biting rate, the extrinsic incubation period and the vector mortality rate. PMID:17638649
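
    The sensitivity method named in this abstract, Latin hypercube sampling (LHS) combined with partial rank correlation coefficients (PRCC), can be sketched generically as below. This is an illustration on a toy stand-in for R(0), not the authors' code; the three inputs and the surrogate output function are hypothetical.

```python
import numpy as np

def latin_hypercube(n, k, rng):
    """n stratified-uniform samples on [0, 1) for each of k parameters:
    one point per stratum, independently shuffled per column."""
    u = (np.arange(n)[:, None] + rng.random((n, k))) / n
    for j in range(k):
        u[:, j] = rng.permutation(u[:, j])
    return u

def prcc(X, y):
    """Partial rank correlation of each input column with the output:
    rank-transform everything, then correlate column j and y after
    removing the linear (rank-space) effect of all other columns."""
    rank = lambda a: np.argsort(np.argsort(a, axis=0), axis=0).astype(float)
    Xr, yr = rank(X), rank(y)
    out = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        Z = np.c_[np.ones(len(yr)), np.delete(Xr, j, axis=1)]
        rx = Xr[:, j] - Z @ np.linalg.lstsq(Z, Xr[:, j], rcond=None)[0]
        ry = yr - Z @ np.linalg.lstsq(Z, yr, rcond=None)[0]
        out[j] = np.corrcoef(rx, ry)[0, 1]
    return out

rng = np.random.default_rng(0)
X = latin_hypercube(500, 3, rng)   # three hypothetical inputs to R0
y = 5.0 * X[:, 0] + 0.5 * X[:, 1] + 0.05 * rng.random(500)  # toy R0 surrogate
scores = prcc(X, y)                # |PRCC| ranks input importance
```

    In a study like the one above, inputs with the largest |PRCC| (here the first column) would be flagged as the main drivers of R(0).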

  20. High precision laser processing of sensitive materials by Microjet

    NASA Astrophysics Data System (ADS)

    Sibailly, Ochelio D.; Wagner, Frank R.; Mayor, Laetitia; Richerzhagen, Bernold

    2003-11-01

    Laser cutting of materials is well known and widely used in industrial processes, including microfabrication. An increasing number of applications nevertheless require a machining quality superior to what this method can achieve. One possibility for increasing cut quality is to opt for water-jet guided laser technology. In this technique the laser beam is conducted to the workpiece by total internal reflection in a thin, stable water jet, comparable to the core of an optical fiber. The water-jet guided laser technique was originally developed to reduce the heat-damaged zone near the cut, but many other advantages were observed owing to the use of a water jet instead of the assist gas stream applied in conventional laser cutting. In brief, the advantages are three-fold: the absence of divergence due to light guiding, efficient melt expulsion, and optimum workpiece cooling. In this presentation we give an overview of several industrial applications of the water-jet guided laser technique, ranging from the cutting of CBN or ferrite cores to the dicing of thin wafers and the manufacturing of stencils, each of which illustrates the important impact of using a water jet.

  1. ANION ANALYSES BY ION CHROMATOGRAPHY FOR THE ALTERNATE REDUCTANT DEMONSTRATION FOR THE DEFENSE WASTE PROCESSING FACILITY

    SciTech Connect

    Best, D.

    2010-08-04

    The Process Science Analytical Laboratory (PSAL) at the Savannah River National Laboratory was requested by the Defense Waste Processing Facility (DWPF) to develop and demonstrate an Ion Chromatography (IC) method for the analysis of glycolate, in addition to eight other anions (fluoride, formate, chloride, nitrite, nitrate, sulfate, oxalate and phosphate) in Sludge Receipt and Adjustment Tank (SRAT) and Slurry Mix Evaporator (SME) samples. The method will be used to analyze anions for samples generated from the Alternate Reductant Demonstrations to be performed for the DWPF at the Aiken County Technology Laboratory (ACTL). The method is specific to the characterization of anions in the simulant flowsheet work. Additional work will be needed for the analyses of anions in radiological samples by Analytical Development (AD) and DWPF. The documentation of the development and demonstration of the method fulfills the third requirement in the TTQAP, SRNL-RP-2010-00105, 'Task Technical and Quality Assurance Plan for Glycolic-Formic Acid Flowsheet Development, Definition and Demonstrations Tasks 1-3'.

  2. Linking databases to plant drawings saves time and money in process hazard analyses

    SciTech Connect

    Lancaster, C. )

    1993-07-01

    Part of OSHA regulation 29 CFR 1910.119 requires process hazards analyses (PHAs) to be performed for certain chemical operations. A PHA -- also known as a hazardous operations analysis, or HAZOP -- is an organized, systematic effort to identify and analyze the significance of potential hazards associated with processing or handling highly hazardous chemicals. The problem is, most chemical and petrochemical plants have been designed using manual drafting methods. In many cases, these paper drawings are deteriorating with age, and their information is outdated. Thus, many companies updating their drawings to satisfy PHA requirements are converting to computer-aided plant engineering methods. The latest generation of PC-based, computer-aided plant engineering systems links information databases and adds them to drawings in minimal time. This method creates a self-documenting plant, and saves time when performing the PHA and generating other safety- or efficiency-related information. While the computer-aided capability has been available for years on mainframe computers, only recently has it migrated to the more cost-effective PC level.

  3. Assessing cognitive processes with diffusion model analyses: a tutorial based on fast-dm-30

    PubMed Central

    Voss, Andreas; Voss, Jochen; Lerche, Veronika

    2015-01-01

    Diffusion models can be used to infer cognitive processes involved in fast binary decision tasks. The model assumes that information is accumulated continuously until one of two thresholds is hit. In the analysis, response time distributions from numerous trials of the decision task are used to estimate a set of parameters mapping distinct cognitive processes. In recent years, diffusion model analyses have become more and more popular in different fields of psychology. This increased popularity is based on the recent development of several software solutions for the parameter estimation. Although these programs make the application of the model relatively easy, there is a shortage of knowledge about different steps of a state-of-the-art diffusion model study. In this paper, we give a concise tutorial on diffusion modeling, and we present fast-dm-30, a thoroughly revised and extended version of the fast-dm software (Voss and Voss, 2007) for diffusion model data analysis. The most important improvement of the fast-dm version is the possibility to choose between different optimization criteria (i.e., Maximum Likelihood, Chi-Square, and Kolmogorov-Smirnov), which differ in applicability for different data sets. PMID:25870575
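
    The accumulation process described in this abstract can be illustrated with a minimal Euler-Maruyama simulation of single trials. This is a generic sketch in common diffusion-model notation (drift v, threshold separation a, starting point z, non-decision time t0), not fast-dm code, and the parameter values below are arbitrary.

```python
import numpy as np

def diffusion_trial(v, a, z, t0=0.3, sigma=1.0, dt=1e-3, rng=None):
    """One trial of the basic diffusion model: evidence starts at z (0 < z < a),
    drifts with rate v plus Gaussian noise, and stops at threshold 0 or a.
    Returns (response, reaction_time); response 1 means the upper threshold."""
    rng = rng or np.random.default_rng()
    x, t = z, 0.0
    while 0.0 < x < a:
        x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= a else 0), t0 + t   # t0 = non-decision time

rng = np.random.default_rng(1)
trials = [diffusion_trial(v=1.5, a=1.0, z=0.5, rng=rng) for _ in range(400)]
p_upper = sum(r for r, _ in trials) / len(trials)  # fraction of upper responses
mean_rt = sum(t for _, t in trials) / len(trials)  # mean simulated RT (seconds)
```

    Programs such as fast-dm work in the opposite direction: given many observed (response, RT) pairs, they estimate v, a, z, and t0 by optimizing one of the criteria mentioned above.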

  4. A knowledge acquisition process to analyse operational problems in solid waste management facilities.

    PubMed

    Dokas, Ioannis M; Panagiotakopoulos, Demetrios C

    2006-08-01

    The available expertise on managing and operating solid waste management (SWM) facilities varies among countries and among types of facilities. Few experts are willing to record their experience, while few researchers systematically investigate the chains of events that could trigger operational failures in a facility; expertise acquisition and dissemination in SWM is neither popular nor easy, despite the great need for it. This paper presents a knowledge acquisition process aimed at capturing, codifying and expanding reliable expertise and propagating it to non-experts. The knowledge engineer (KE), the person performing the acquisition, must identify the events (or causes) that could trigger a failure, determine whether a specific event could trigger more than one failure, and establish how various events are related among themselves and how they are linked to specific operational problems. The proposed process, which utilizes logic diagrams (fault trees) widely used in system safety and reliability analyses, was used for the analysis of 24 common landfill operational problems. The acquired knowledge led to the development of a web-based expert system (Landfill Operation Management Advisor, http://loma.civil.duth.gr), which estimates the occurrence possibility of operational problems, provides advice and suggests solutions. PMID:16941992

  5. A miniaturised laser ablation/ionisation analyser for investigation of elemental/isotopic composition with the sub-ppm detection sensitivity

    NASA Astrophysics Data System (ADS)

    Tulej, M.; Riedo, A.; Meyer, S.; Iakovleva, M.; Neuland, M.; Wurz, P.

    2012-04-01

    Detailed knowledge of the elemental and isotopic composition of solar system objects imposes critical constraints on models describing the origin of our solar system and can provide insight into chemical and physical processes taking place during planetary evolution. So far, the investigation of the chemical composition of planetary surfaces has been conducted almost exclusively by remotely controlled spectroscopic instruments from orbiting spacecraft, landers or rovers. With some exceptions, however, the sensitivity of these techniques is limited, and often only abundant elements can be investigated. Nevertheless, the spectroscopic techniques have proved successful for global chemical mapping of entire planetary objects such as the Moon, Mars and asteroids. A combined effort of measurements from orbit, landers and rovers can also yield the determination of local mineralogy. New instruments, including Laser Induced Breakdown Spectroscopy (LIBS) and the Laser Ablation/Ionisation Mass Spectrometer (LIMS), have recently been included on several landed missions. LIBS is thought to improve the flexibility of the investigations and offers well-localised chemical probing from distances of up to 10-13 m. Since LIMS is a mass spectrometric technique, it allows very sensitive measurements of elements and isotopes. We will demonstrate the results of current performance tests obtained with a miniaturised laser ablation/ionisation mass spectrometer, a LIMS instrument developed in Bern for the chemical analysis of solids. So far, the only LIMS instrument flown on a spacecraft is the LAZMA instrument. This spectrometer was part of the payload for the PHOBOS-GRUNT mission and is also currently selected for the LUNA-RESURCE and LUNA-GLOB missions to the lunar south poles (Managadze et al., 2011). Our LIMS instrument has dimensions of 120 x Ø60 mm and, with a weight of about 1.5 kg (all electronics included), it is the lightest mass analyser designed for in situ chemical

  6. A damage identification technique based on embedded sensitivity analysis and optimization processes

    NASA Astrophysics Data System (ADS)

    Yang, Chulho; Adams, Douglas E.

    2014-07-01

    A vibration-based structural damage identification method, using embedded sensitivity functions and optimization algorithms, is discussed in this work. The embedded sensitivity technique requires only measured or calculated frequency response functions to obtain the sensitivity of system responses to each component parameter; this sensitivity analysis technique can therefore be used effectively in the damage identification process. Optimization techniques are used to minimize the difference between the measured frequency response functions of the damaged structure and those calculated from the baseline system using embedded sensitivity functions. The amount of damage can be quantified directly in engineering units as changes in stiffness, damping, or mass. Various factors in the optimization process and structural dynamics are studied to enhance the performance and robustness of the damage identification process. This study shows that the proposed technique can improve the accuracy of damage identification, with estimation errors below 2 percent.
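
    The optimization step described here (adjusting a physical parameter until model FRFs match measured FRFs, so damage is reported in engineering units) can be illustrated with a single-degree-of-freedom stand-in. This is a simplified sketch, not the embedded-sensitivity formulation itself; the oscillator parameters and the brute-force search are hypothetical choices for illustration.

```python
import numpy as np

def frf(omega, k, m=1.0, c=0.3):
    """Receptance FRF H(w) = 1 / (k - m*w^2 + i*c*w) of a 1-DOF oscillator."""
    return 1.0 / (k - m * omega**2 + 1j * c * omega)

omega = np.linspace(0.1, 3.0, 200)     # measurement frequency grid
H_damaged = frf(omega, k=0.8)          # "measured" FRF: stiffness fell from 1.0

# Identify the damaged stiffness by minimizing the FRF mismatch over candidate k.
k_grid = np.linspace(0.1, 2.0, 1901)
errors = [np.sum(np.abs(frf(omega, k) - H_damaged) ** 2) for k in k_grid]
k_hat = k_grid[np.argmin(errors)]      # recovered stiffness, engineering units
```

    A real implementation would replace the grid search with a gradient-based optimizer driven by the embedded sensitivity functions, but the objective (squared FRF mismatch) is the same.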

  7. Combination of optical and electrical loss analyses for a Si-phthalocyanine dye-sensitized solar cell.

    PubMed

    Lin, Keng-Chu; Wang, Lili; Doane, Tennyson; Kovalsky, Anton; Pejic, Sandra; Burda, Clemens

    2014-12-11

    In order to promote the development of solar cells with varying types of sensitizers including dyes and quantum dots, it is crucial to establish a general experimental analysis that accounts for all important optical and electrical losses resulting from interfacial phenomena. All of these varying types of solar cells share common features where a mesoporous scaffold is used as a sensitizer loading support as well as an electron transport material, which may result in light scattering. The loss of efficiency at interfaces of the sensitizer, the mesoporous TiO2 nanoparticle films, the FTO conductive layer, and the supportive glass substrate should be considered in addition to the photoinduced electron transport properties within a cell. On the basis of optical parameters, one can obtain the internal quantum efficiency (IQE) of a solar cell, an important parameter that cannot be directly measured but must be derived from several key experiments. By integrating an optical loss model with an electrical loss model, many solar cell parameters could be characterized from electro-optical observables including reflectance, transmittance, and absorptance of the dye sensitizer, the electron injection efficiency, and the charge collection efficiency. In this work, an integrated electro-optical approach has been applied to SiPc (Pc 61) dye-sensitized solar cells for evaluating the parameters affecting the overall power conversion efficiency. The absorptance results of the Pc 61 dye-sensitized solar cell provide evidence that the adsorbed Pc 61 forms noninjection layers on TiO2 surfaces when the dye immersion time exceeds 120 min, resulting in shading light from the active layer rather than an increase in photoelectric current efficiency. PMID:24922464

  8. Alternative Toxicity Testing: Analyses on Skin Sensitization, ToxCast Phases I and II, and Carcinogenicity Provide Indications on How to Model Mechanisms Linked to Adverse Outcome Pathways.

    PubMed

    Benigni, Romualdo; Battistelli, Chiara Laura; Bossa, Cecilia; Giuliani, Alessandro; Tcheremenskaia, Olga

    2015-01-01

    This article studies alternative toxicological approaches, with new (skin sensitization, ToxCast) and previous (carcinogenicity) analyses. Quantitative modeling of rate-limiting steps in skin sensitization and carcinogenicity predicts the majority of toxicants. Similarly, successful (Quantitative) Structure-Activity Relationships models exploit the quantification of only one, or few rate-limiting steps. High-throughput assays within ToxCast point to promising associations with endocrine disruption, whereas markers for pathways intermediate events have limited correlation with most endpoints. Since the pathways may be very different (often not simple linear chains of events), quantitative analysis is necessary to identify the type of mechanism and build the appropriate model. PMID:26398111

  9. Pre-waste-emplacement ground-water travel time sensitivity and uncertainty analyses for Yucca Mountain, Nevada; Yucca Mountain Site Characterization Project

    SciTech Connect

    Kaplan, P.G.

    1993-01-01

    Yucca Mountain, Nevada is a potential site for a high-level radioactive-waste repository. Uncertainty and sensitivity analyses were performed to estimate critical factors in the performance of the site with respect to a criterion expressed in terms of pre-waste-emplacement ground-water travel time. The degree to which the analytical model fails to meet the criterion is sensitive to the estimate of fracture porosity in the upper welded unit of the problem domain. Fracture porosity is derived from a number of more fundamental measurements, including fracture frequency, fracture orientation, and the moisture-retention characteristic inferred for the fracture domain.

  10. Developing Sensitivity to Subword Combinatorial Orthographic Regularity (SCORe): A Two-Process Framework

    ERIC Educational Resources Information Center

    Mano, Quintino R.

    2016-01-01

    Accumulating evidence suggests that literacy acquisition involves developing sensitivity to the statistical regularities of the textual environment. To organize accumulating evidence and help guide future inquiry, this article integrates data from disparate fields of study and formalizes a new two-process framework for developing sensitivity to…

  11. The highly sensitive brain: an fMRI study of sensory processing sensitivity and response to others' emotions

    PubMed Central

    Acevedo, Bianca P; Aron, Elaine N; Aron, Arthur; Sangster, Matthew-Donald; Collins, Nancy; Brown, Lucy L

    2014-01-01

    Background Theory and research suggest that sensory processing sensitivity (SPS), found in roughly 20% of humans and over 100 other species, is a trait associated with greater sensitivity and responsiveness to the environment and to social stimuli. Self-report studies have shown that high-SPS individuals are strongly affected by others' moods, but no previous study has examined neural systems engaged in response to others' emotions. Methods This study examined the neural correlates of SPS (measured by the standard short-form Highly Sensitive Person [HSP] scale) among 18 participants (10 females) while viewing photos of their romantic partners and of strangers displaying positive, negative, or neutral facial expressions. One year apart, 13 of the 18 participants were scanned twice. Results Across all conditions, HSP scores were associated with increased brain activation of regions involved in attention and action planning (in the cingulate and premotor area [PMA]). For happy and sad photo conditions, SPS was associated with activation of brain regions involved in awareness, integration of sensory information, empathy, and action planning (e.g., cingulate, insula, inferior frontal gyrus [IFG], middle temporal gyrus [MTG], and PMA). Conclusions As predicted, for partner images and for happy facial photos, HSP scores were associated with stronger activation of brain regions involved in awareness, empathy, and self-other processing. These results provide evidence that awareness and responsiveness are fundamental features of SPS, and show how the brain may mediate these traits. PMID:25161824

  12. Phenotypic and genetic analyses of the Varroa Sensitive Hygienic trait in Russian Honey Bee (Hymenoptera: Apidae) colonies

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Varroa destructor continues to threaten colonies of European honey bees. General hygiene and the more specific Varroa Sensitive Hygiene (VSH) provide resistance toward the Varroa mite in a number of stocks. In this study, Russian (RHB) and Italian honey bees were assessed for the VSH trait. Two...

  13. Compound-specific isotopic analyses: a novel tool for reconstruction of ancient biogeochemical processes

    NASA Technical Reports Server (NTRS)

    Hayes, J. M.; Freeman, K. H.; Popp, B. N.; Hoham, C. H.

    1990-01-01

    Patterns of isotopic fractionation in biogeochemical processes are reviewed and it is suggested that isotopic fractionations will be small when substrates are large. If so, isotopic compositions of biomarkers will reflect those of their biosynthetic precursors. This prediction is tested by consideration of results of analyses of geoporphyrins and geolipids from the Greenhorn Formation (Cretaceous, Western Interior Seaway of North America) and the Messel Shale (Eocene, lacustrine, southern Germany). It is shown (i) that isotopic compositions of porphyrins that are related to a common source, but which have been altered structurally, cluster tightly and (ii) that isotopic differences between geolipids and porphyrins related to a common source are equal to those observed in modern biosynthetic products. Both of these observations are consistent with preservation of biologically controlled isotopic compositions during diagenesis. Isotopic compositions of individual compounds can thus be interpreted in terms of biogeochemical processes in ancient depositional environments. In the Cretaceous samples, isotopic compositions of n-alkanes are covariant with those of total organic carbon, while delta values for pristane and phytane are covariant with those of porphyrins. In this unit representing an open marine environment, the preserved acyclic polyisoprenoids apparently derive mainly from primary material, while the extractable n-alkanes derive mainly from lower levels of the food chain. In the Messel Shale, isotopic compositions of individual biomarkers range from -20.9 to -73.4‰ vs PDB. Isotopic compositions of specific compounds can be interpreted in terms of origin from methylotrophic, chemoautotrophic, and chemolithotrophic microorganisms as well as from primary producers that lived in the water column and sediments of this ancient lake.

  14. Compound-specific isotopic analyses: a novel tool for reconstruction of ancient biogeochemical processes.

    PubMed

    Hayes, J M; Freeman, K H; Popp, B N; Hoham, C H

    1990-01-01

    Patterns of isotopic fractionation in biogeochemical processes are reviewed and it is suggested that isotopic fractionations will be small when substrates are large. If so, isotopic compositions of biomarkers will reflect those of their biosynthetic precursors. This prediction is tested by consideration of results of analyses of geoporphyrins and geolipids from the Greenhorn Formation (Cretaceous, Western Interior Seaway of North America) and the Messel Shale (Eocene, lacustrine, southern Germany). It is shown (i) that isotopic compositions of porphyrins that are related to a common source, but which have been altered structurally, cluster tightly and (ii) that isotopic differences between geolipids and porphyrins related to a common source are equal to those observed in modern biosynthetic products. Both of these observations are consistent with preservation of biologically controlled isotopic compositions during diagenesis. Isotopic compositions of individual compounds can thus be interpreted in terms of biogeochemical processes in ancient depositional environments. In the Cretaceous samples, isotopic compositions of n-alkanes are covariant with those of total organic carbon, while delta values for pristane and phytane are covariant with those of porphyrins. In this unit representing an open marine environment, the preserved acyclic polyisoprenoids apparently derive mainly from primary material, while the extractable n-alkanes derive mainly from lower levels of the food chain. In the Messel Shale, isotopic compositions of individual biomarkers range from -20.9 to -73.4‰ vs PDB. Isotopic compositions of specific compounds can be interpreted in terms of origin from methylotrophic, chemoautotrophic, and chemolithotrophic microorganisms as well as from primary producers that lived in the water column and sediments of this ancient lake. PMID:11540919
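
    The per-mil values quoted in this abstract use standard delta notation: the relative deviation of a sample's 13C/12C ratio from a standard, times 1000. A minimal reference computation follows; the PDB ratio shown is the commonly cited value and is included only for illustration.

```python
R_PDB = 0.0112372   # commonly cited 13C/12C ratio of the PDB standard

def delta13C(r_sample, r_standard=R_PDB):
    """Delta value in per mil: 1000 * (R_sample / R_standard - 1)."""
    return (r_sample / r_standard - 1.0) * 1000.0

# A sample depleted in 13C by 7% relative to PDB gives about -70 per mil,
# the depleted end of the range reported for the Messel Shale biomarkers.
d = delta13C(0.93 * R_PDB)
```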

  15. Microstructural analyses of Cr(VI) speciation in chromite ore processing Residue (COPR)

    SciTech Connect

    Chrysochoou, Maria; Fakra, Sirine C.; Marcus, Matthew A.; Moon, Deok Hyun; Dermatas, Dimitris

    2010-03-01

    The speciation and distribution of Cr(VI) in the solid phase was investigated for two types of chromite ore processing residue (COPR) found at two deposition sites in the United States: gray-black (GB) granular and hard brown (HB) cemented COPR. COPR chemistry and mineralogy were investigated using micro-X-ray absorption spectroscopy and micro-X-ray diffraction, complemented by laboratory analyses. GB COPR contained 30% of its total Cr(VI) (6000 mg/kg) as large crystals (>20 μm diameter) of a previously unreported Na-rich analog of calcium aluminum chromate hydrates. These Cr(VI)-rich phases are thought to be vulnerable to reductive and pH treatments. More than 50% of the Cr(VI) was located within nodules, not easily accessible to dissolved reductants, and bound to Fe-rich hydrogarnet, hydrotalcite, and possibly brucite. These phases are stable over a large pH range, thus harder to dissolve. Brownmillerite was also likely associated with physical entrapment of Cr(VI) in the interior of nodules. HB COPR contained no Cr(VI)-rich phases; all Cr(VI) was diffuse within the nodules and absent from the cementing matrix, with hydrogarnet and hydrotalcite being the main Cr(VI)-binding phases. Treatment of HB COPR is challenging in terms of dissolving the acidity-resistant, inaccessible Cr(VI) compounds; the same applies to ~50% of the Cr(VI) in GB COPR.

  16. Antecedents of maternal sensitivity during distressing tasks: integrating attachment, social information processing, and psychobiological perspectives.

    PubMed

    Leerkes, Esther M; Supple, Andrew J; O'Brien, Marion; Calkins, Susan D; Haltigan, John D; Wong, Maria S; Fortuna, Keren

    2015-01-01

    Predictors of maternal sensitivity to infant distress were examined among 259 primiparous mothers. The Adult Attachment Interview, self-reports of personality and emotional functioning, and measures of physiological, emotional, and cognitive responses to videotapes of crying infants were administered prenatally. Maternal sensitivity was observed during three distress-eliciting tasks when infants were 6 months old. Coherence of mind was directly associated with higher maternal sensitivity to distress. Mothers' heightened emotional risk was indirectly associated with lower sensitivity via mothers' self-focused and negative processing of infant cry cues. Likewise, high physiological arousal accompanied by poor physiological regulation in response to infant crying was indirectly associated with lower maternal sensitivity to distress through mothers' self-focused and negative processing of infant cry cues. PMID:25209221

  17. Evaluation of habitat suitability index models by global sensitivity and uncertainty analyses: a case study for submerged aquatic vegetation

    USGS Publications Warehouse

    Zajac, Zuzanna; Stith, Bradley M.; Bowling, Andrea C.; Langtimm, Catherine A.; Swain, Eric D.

    2015-01-01

    Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust

  18. Evaluation of habitat suitability index models by global sensitivity and uncertainty analyses: a case study for submerged aquatic vegetation

    PubMed Central

    Zajac, Zuzanna; Stith, Bradley; Bowling, Andrea C; Langtimm, Catherine A; Swain, Eric D

    2015-01-01

    Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust

  19. Evaluation of habitat suitability index models by global sensitivity and uncertainty analyses: a case study for submerged aquatic vegetation.

    PubMed

    Zajac, Zuzanna; Stith, Bradley; Bowling, Andrea C; Langtimm, Catherine A; Swain, Eric D

    2015-07-01

    Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust
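
    Variance-based global sensitivity analysis of the kind described in these records can be sketched with a Saltelli-style Monte Carlo estimator of first-order Sobol indices. This is a generic illustration on a toy habitat-quality function with three hypothetical environmental inputs, not the authors' HSI model.

```python
import numpy as np

def first_order_sobol(model, n, k, rng):
    """Saltelli-style estimator: S_i is the fraction of output variance
    explained by input i alone (its first-order effect)."""
    A, B = rng.random((n, k)), rng.random((n, k))
    yA, yB = model(A), model(B)
    var_y = np.var(np.concatenate([yA, yB]))
    S = np.empty(k)
    for i in range(k):
        AB = A.copy()
        AB[:, i] = B[:, i]                       # A with column i taken from B
        S[i] = np.mean(yB * (model(AB) - yA)) / var_y
    return S

# Toy HSI stand-in: suitability dominated by the first environmental input.
hsi = lambda X: 4.0 * X[:, 0] + 1.0 * X[:, 1] + 0.1 * X[:, 2]
S = first_order_sobol(hsi, 20000, 3, np.random.default_rng(2))
```

    Repeating such an estimate cell by cell over a study area would reproduce the spatial variation in parameter sensitivity that the authors report.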

  20. Sensitivity of LDEF foil analyses using ultra-low background germanium vs. large NaI(Tl) multidimensional spectrometers

    NASA Technical Reports Server (NTRS)

    Reeves, James H.; Arthur, Richard J.; Brodzinski, Ronald L.

    1993-01-01

    Cobalt foils and stainless steel samples were analyzed for induced Co-60 activity with both an ultra-low background germanium gamma-ray spectrometer and a large NaI(Tl) multidimensional spectrometer, both of which use electronic anticoincidence shielding to reduce background counts resulting from cosmic rays. Aluminum samples were analyzed for Na-22. The results are presented, along with the relative sensitivities and precisions afforded by the two methods.

  1. Compact variant-rich customized sequence database and a fast and sensitive database search for efficient proteogenomic analyses.

    PubMed

    Park, Heejin; Bae, Junwoo; Kim, Hyunwoo; Kim, Sangok; Kim, Hokeun; Mun, Dong-Gi; Joh, Yoonsung; Lee, Wonyeop; Chae, Sehyun; Lee, Sanghyuk; Kim, Hark Kyun; Hwang, Daehee; Lee, Sang-Won; Paek, Eunok

    2014-12-01

    In proteogenomic analysis, construction of a compact, customized database from mRNA-seq data and a sensitive search of both reference and customized databases are essential to accurately determine protein abundances and structural variations at the protein level. However, these tasks have not been systematically explored, but rather performed in an ad-hoc fashion. Here, we present an effective method for constructing a compact database containing comprehensive sequences of sample-specific variants--single nucleotide variants, insertions/deletions, and stop-codon mutations--derived from Exome-seq and RNA-seq data. The database nevertheless occupies less space because it stores variant peptides rather than full variant proteins. We also present an efficient search method for both customized and reference databases. Separate searches of the two databases increase the search time, while a unified search is less sensitive in identifying variant peptides because the customized database is much smaller than the reference database in the target-decoy setting. Our method searches the unified database once, but performs target-decoy validation separately for each database. Experimental results show that our approach is as fast as the unified search and as sensitive as the separate searches. Our customized database includes mutation information in the headers of variant peptides, thereby facilitating the inspection of peptide-spectrum matches. PMID:25316439
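
    The "search the unified database once, validate separately" idea can be sketched as follows. The tuple layout, score values, and partition names are hypothetical illustrations, not the authors' implementation:

    ```python
    def accept_at_fdr(psms, fdr_cutoff=0.01):
        # psms: list of (score, is_decoy). Walk down the score-sorted list and
        # keep the longest prefix whose decoy/target ratio stays within cutoff.
        pool = sorted(psms, key=lambda x: -x[0])
        decoys = targets = cut = 0
        for k, (score, is_decoy) in enumerate(pool, start=1):
            decoys += is_decoy
            targets += not is_decoy
            if targets and decoys / targets <= fdr_cutoff:
                cut = k
        return [score for score, is_decoy in pool[:cut] if not is_decoy]

    def separate_validation(psms, fdr_cutoff=0.01):
        # Unified search, separate validation: each PSM carries a third field
        # naming the database partition it matched ('reference' or 'variant'),
        # and the target-decoy FDR is controlled within each partition.
        return {part: accept_at_fdr([(s, d) for s, d, p in psms if p == part],
                                    fdr_cutoff)
                for part in ('reference', 'variant')}
    ```

    Partitioning before validation prevents the large reference partition from dominating the decoy statistics of the much smaller variant partition.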

  2. Proteomic analyses reveal differences in cold acclimation mechanisms in freezing-tolerant and freezing-sensitive cultivars of alfalfa

    PubMed Central

    Chen, Jing; Han, Guiqing; Shang, Chen; Li, Jikai; Zhang, Hailing; Liu, Fengqi; Wang, Jianli; Liu, Huiying; Zhang, Yuexue

    2015-01-01

    Cold acclimation in alfalfa (Medicago sativa L.) plays a crucial role in tolerance of harsh winters. To examine the cold acclimation mechanisms in freezing-tolerant alfalfa (ZD) and freezing-sensitive alfalfa (W5), holoproteins and low-abundance proteins (after the removal of RuBisCO) were extracted from leaves to analyze differences at the protein level. A total of 84 spots were selected, and 67 spots were identified. Of these, the abundances of 49 spots in ZD and 24 spots in W5 were altered during adaptation to chilling stress. Proteomic results revealed that proteins involved in photosynthesis, protein metabolism, energy metabolism, stress and redox responses, and other processes were mobilized in adaptation to chilling stress. In ZD, a greater number of protein changes were observed, and autologous metabolism and biosynthesis were slowed in response to chilling stress, thereby reducing consumption and allowing for homeostasis. In W5, the capacity for protein folding and protein biosynthesis was enhanced, which affords protection against chilling stress. Freezing-tolerant alfalfa perceived low temperatures more sensitively than freezing-sensitive alfalfa. This proteomics study provides new insights into the cold acclimation mechanism in alfalfa. PMID:25774161

  3. Punishment sensitivity modulates the processing of negative feedback but not error-induced learning

    PubMed Central

    Unger, Kerstin; Heintz, Sonja; Kray, Jutta

    2012-01-01

    Accumulating evidence suggests that individual differences in punishment and reward sensitivity are associated with functional alterations in neural systems underlying error and feedback processing. In particular, individuals highly sensitive to punishment have been found to be characterized by larger mediofrontal error signals as reflected in the error negativity/error-related negativity (Ne/ERN) and the feedback-related negativity (FRN). By contrast, reward sensitivity has been shown to relate to the error positivity (Pe). Given that Ne/ERN, FRN, and Pe have been functionally linked to flexible behavioral adaptation, the aim of the present research was to examine how these electrophysiological reflections of error and feedback processing vary as a function of punishment and reward sensitivity during reinforcement learning. We applied a probabilistic learning task that involved three different conditions of feedback validity (100%, 80%, and 50%). In contrast to prior studies using response competition tasks, we did not find reliable correlations between punishment sensitivity and the Ne/ERN. Instead, higher punishment sensitivity predicted larger FRN amplitudes, irrespective of feedback validity. Moreover, higher reward sensitivity was associated with a larger Pe. However, only reward sensitivity was related to better overall learning performance and higher post-error accuracy, whereas highly punishment sensitive participants showed impaired learning performance, suggesting that larger negative feedback-related error signals were not beneficial for learning or even reflected maladaptive information processing in these individuals. Thus, although our findings indicate that individual differences in reward and punishment sensitivity are related to electrophysiological correlates of error and feedback processing, we found less evidence for influences of these personality characteristics on the relation between performance monitoring and feedback-based learning. 

  4. Process Mining Techniques for Analysing Patterns and Strategies in Students' Self-Regulated Learning

    ERIC Educational Resources Information Center

    Bannert, Maria; Reimann, Peter; Sonnenberg, Christoph

    2014-01-01

    Referring to current research on self-regulated learning, we analyse individual regulation in terms of a set of specific sequences of regulatory activities. Successful students perform regulatory activities such as analysing, planning, monitoring and evaluating cognitive and motivational aspects during learning not only with a higher frequency…

  5. Sensitivity Analyses of Exposure Estimates from a Quantitative Job-exposure Matrix (SYN-JEM) for Use in Community-based Studies

    PubMed Central

    Peters, Susan

    2013-01-01

    Objectives: We describe the elaboration and sensitivity analyses of a quantitative job-exposure matrix (SYN-JEM) for respirable crystalline silica (RCS). The aim was to gain insight into the robustness of the SYN-JEM RCS estimates based on critical decisions taken in the elaboration process. Methods: SYN-JEM for RCS exposure consists of three axes (job, region, and year) based on estimates derived from a previously developed statistical model. To elaborate SYN-JEM, several decisions were taken: i.e. the application of (i) a single time trend; (ii) region-specific adjustments in RCS exposure; and (iii) a prior job-specific exposure level (by the semi-quantitative DOM-JEM), with an override of 0 mg/m3 for jobs a priori defined as non-exposed. Furthermore, we assumed that exposure levels reached a ceiling in 1960 and remained constant prior to this date. We applied SYN-JEM to the occupational histories of subjects from a large international pooled community-based case–control study. Cumulative exposure levels derived with SYN-JEM were compared with those from alternative models, described by Pearson correlation (Rp) and differences in unit of exposure (mg/m3-year). Alternative models concerned changes in application of job- and region-specific estimates and exposure ceiling, and omitting the a priori exposure ranking. Results: Cumulative exposure levels for the study subjects ranged from 0.01 to 60 mg/m3-years, with a median of 1.76 mg/m3-years. Exposure levels derived from SYN-JEM and alternative models were overall highly correlated (Rp > 0.90), although somewhat lower when omitting the region estimate (Rp = 0.80) or not taking into account the assigned semi-quantitative exposure level (Rp = 0.65). Modification of the time trend (i.e. exposure ceiling at 1950 or 1970, or assuming a decline before 1960) caused the largest changes in absolute exposure levels (26–33% difference), but without changing the relative ranking (Rp = 0.99). Conclusions: Exposure estimates
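
    Assuming a JEM is stored as a lookup from (job, region, year) to an exposure level, the cumulative exposure with the 1960 ceiling could be computed roughly as below; the job titles and levels are invented for illustration:

    ```python
    CEILING_YEAR = 1960  # exposure assumed constant before this date

    def cumulative_exposure(history, jem):
        # history: list of (job, region, year, duration_years) work spells;
        # jem: dict mapping (job, region, year) -> exposure level in mg/m3.
        # Returns cumulative exposure in mg/m3-years.
        total = 0.0
        for job, region, year, duration in history:
            level = jem[(job, region, max(year, CEILING_YEAR))]
            total += level * duration
        return total

    # Invented JEM entries and work history for illustration only.
    jem = {('miner', 'EU', 1960): 0.10, ('miner', 'EU', 1970): 0.05}
    history = [('miner', 'EU', 1955, 5),   # pre-1960 spell uses the 1960 ceiling
               ('miner', 'EU', 1970, 10)]
    exposure = cumulative_exposure(history, jem)   # 0.10*5 + 0.05*10 = 1.0

    # Agreement between the main model and an alternative (e.g. no region
    # adjustment) across subjects would then be summarised with a Pearson
    # correlation, e.g. numpy.corrcoef(exposures_main, exposures_alt)[0, 1].
    ```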

  6. A Case Study Analysing the Process of Analogy-Based Learning in a Teaching Unit about Simple Electric Circuits

    ERIC Educational Resources Information Center

    Paatz, Roland; Ryder, James; Schwedes, Hannelore; Scott, Philip

    2004-01-01

    The purpose of this case study is to analyse the learning processes of a 16-year-old student as she learns about simple electric circuits in response to an analogy-based teaching sequence. Analogical thinking processes are modelled by a sequence of four steps according to Gentner's structure mapping theory (activate base domain, postulate local…

  7. Preliminary performance assessment for the Waste Isolation Pilot Plant, December 1992. Volume 5, Uncertainty and sensitivity analyses of gas and brine migration for undisturbed performance

    SciTech Connect

    Not Available

    1993-08-01

    Before disposing of transuranic radioactive waste in the Waste Isolation Pilot Plant (WIPP), the United States Department of Energy (DOE) must evaluate compliance with applicable long-term regulations of the United States Environmental Protection Agency (EPA). Sandia National Laboratories is conducting iterative performance assessments (PAs) of the WIPP for the DOE to provide interim guidance while preparing for a final compliance evaluation. This volume of the 1992 PA contains results of uncertainty and sensitivity analyses with respect to migration of gas and brine from the undisturbed repository. Additional information about the 1992 PA is provided in other volumes. Volume 1 contains an overview of WIPP PA and results of a preliminary comparison with 40 CFR 191, Subpart B. Volume 2 describes the technical basis for the performance assessment, including descriptions of the linked computational models used in the Monte Carlo analyses. Volume 3 contains the reference data base and values for input parameters used in consequence and probability modeling. Volume 4 contains uncertainty and sensitivity analyses with respect to the EPA's Environmental Standards for the Management and Disposal of Spent Nuclear Fuel, High-Level and Transuranic Radioactive Wastes (40 CFR 191, Subpart B). Finally, guidance derived from the entire 1992 PA is presented in Volume 6. Results of the 1992 uncertainty and sensitivity analyses indicate that, conditional on the modeling assumptions and the assigned parameter-value distributions, the most important parameters for which uncertainty has the potential to affect gas and brine migration from the undisturbed repository are: initial liquid saturation in the waste, anhydrite permeability, biodegradation-reaction stoichiometry, gas-generation rates for both corrosion and biodegradation under inundated conditions, and the permeability of the long-term shaft seal.

  8. Three-Dimensional Simulation And Design Sensitivity Analysis Of The Injection Molding Process

    NASA Astrophysics Data System (ADS)

    Ilinca, Florin; Hétu, Jean-François

    2004-06-01

    Finding the proper combination of process parameters such as injection speed, melt temperature, and mold temperature is important for producing a part that minimizes warpage and has the desired mechanical properties. Very often a successful design in injection molding comes at the end of a long trial-and-error process. Design Sensitivity Analysis (DSA) can help molders improve the design and can produce substantial savings in both time and money. This paper investigates the ability of sensitivity analysis to drive an optimization tool in order to improve the quality of the injected part. The paper presents the solution of the filling stage of the injection molding process by a 3D finite element solution algorithm. The sensitivity of the solution with respect to different process parameters is computed using the continuous sensitivity equation method. Solutions are shown for the non-isothermal filling of a rectangular plate with a polymer melt behaving as a non-Newtonian fluid. The paper presents the equations for the sensitivity of the velocity, pressure, and temperature and their solution by finite elements. Sensitivities of the solution with respect to the injection speed and the melt and mold temperatures are shown.
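
    The continuous sensitivity equation method can be illustrated on a toy 0-D cooling model rather than the paper's 3D finite element equations: differentiating the governing equation with respect to a parameter (here the mold temperature) yields an auxiliary equation integrated alongside the state. A minimal sketch under invented parameter values, not the authors' solver:

    ```python
    def melt_temperature_and_sensitivity(T0=240.0, T_mold=60.0, k=0.8,
                                         dt=1e-3, t_end=2.0):
        # State: melt cools toward the mold, dT/dt = -k (T - T_mold).
        # Sensitivity s = dT/dT_mold obeys ds/dt = -k (s - 1), s(0) = 0,
        # obtained by differentiating the state equation w.r.t. T_mold.
        T, s = T0, 0.0
        for _ in range(int(round(t_end / dt))):
            T += dt * (-k * (T - T_mold))   # forward Euler for the state
            s += dt * (-k * (s - 1.0))      # same scheme for the sensitivity
        return T, s

    T, s = melt_temperature_and_sensitivity()
    # Analytic check: s(t) = 1 - exp(-k t), so s(2) is about 0.798.
    ```

    The same idea scales to PDEs: the sensitivity field satisfies a linearized equation solved with the same discretization as the state, avoiding repeated perturbed runs.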

  9. Quasi-laminar stability and sensitivity analyses for turbulent flows: Prediction of low-frequency unsteadiness and passive control

    NASA Astrophysics Data System (ADS)

    Mettot, Clément; Sipp, Denis; Bézard, Hervé

    2014-04-01

    This article presents a quasi-laminar stability approach to identify the dominant low frequencies in high-Reynolds-number flows and to design passive control means to shift these frequencies. The approach is based on a global linear stability analysis of mean-flows, which correspond to the time-average of the unsteady flows. Contrary to the previous work by Meliga et al. ["Sensitivity of 2-D turbulent flow past a D-shaped cylinder using global stability," Phys. Fluids 24, 061701 (2012)], we use the linearized Navier-Stokes equations based solely on the molecular viscosity (leaving aside any turbulence model and any eddy viscosity) to extract the least stable direct and adjoint global modes of the flow. Then, we compute the frequency sensitivity maps of these modes, so as to predict beforehand where a small control cylinder optimally shifts the frequency of the flow. In the case of the D-shaped cylinder studied by Parezanović and Cadot [J. Fluid Mech. 693, 115 (2012)], we show that the present approach captures the frequency of the flow well and accurately recovers the frequency control maps obtained experimentally. The results are close to those already obtained by Meliga et al., who used a more complex approach in which turbulence models played a central role. The present approach is simpler and may be applied to a broader range of flows since it is tractable as soon as mean-flows — which can be obtained either numerically from simulations (Direct Numerical Simulation (DNS), Large Eddy Simulation (LES), unsteady Reynolds-Averaged-Navier-Stokes (RANS), steady RANS) or from experimental measurements (Particle Image Velocimetry - PIV) — are available. We also discuss how the influence of the control cylinder on the mean-flow may be more accurately predicted by determining an eddy-viscosity from numerical simulations or experimental measurements. From a technical point of view, we finally show how an existing compressible numerical simulation code may be used in

  10. High-Resolution Linkage Analyses to Identify Genes That Influence Varroa Sensitive Hygiene Behavior in Honey Bees

    PubMed Central

    Tsuruda, Jennifer M.; Harris, Jeffrey W.; Bourgeois, Lanie; Danka, Robert G.; Hunt, Greg J.

    2012-01-01

    Varroa mites (V. destructor) are a major threat to honey bees (Apis mellifera) and beekeeping worldwide and are likely to lead to colony decline if colonies are not treated. Most treatments involve chemical control of the mites; however, Varroa has evolved resistance to many of these miticides, leaving beekeepers with a limited number of alternatives. A non-chemical control method is highly desirable for numerous reasons, including the lack of chemical residues and a decreased likelihood of resistance. Varroa sensitive hygiene (VSH) behavior is one of the two behaviors identified as most important for controlling the growth of Varroa populations in bee hives. To identify genes influencing this trait, a study was conducted to map quantitative trait loci (QTL). Individual workers of a backcross family were observed and evaluated for their VSH behavior in a mite-infested observation hive. Bees that uncapped or removed pupae were identified. The genotypes for 1,340 informative single nucleotide polymorphisms were used to construct a high-resolution genetic map, and interval mapping was used to analyze the association of the genotypes with the performance of Varroa sensitive hygiene. We identified one major QTL on chromosome 9 (LOD score = 3.21) and a suggestive QTL on chromosome 1 (LOD = 1.95). The QTL confidence interval on chromosome 9 contains the gene ‘no receptor potential A’ and a dopamine receptor. ‘No receptor potential A’ is involved in vision and olfaction in Drosophila, and dopamine signaling has previously been shown to be required for aversive olfactory learning in honey bees, which is probably necessary for identifying mites within brood cells. Further studies on these candidate genes may allow for breeding bees with this trait using marker-assisted selection. PMID:23133626
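
    A regression-based LOD score of the kind used in QTL mapping can be sketched as follows (a single-marker version for illustration; the study itself used interval mapping over 1,340 SNPs):

    ```python
    import numpy as np

    def lod_score(genotypes, phenotypes):
        # Regression (Haley-Knott style) LOD: compare a mean-only null model
        # with a model including the marker genotype;
        # LOD = (n/2) * log10(RSS0 / RSS1).
        g = np.asarray(genotypes, dtype=float)
        y = np.asarray(phenotypes, dtype=float)
        n = len(y)
        rss0 = float(((y - y.mean()) ** 2).sum())        # null: intercept only
        X = np.column_stack([np.ones(n), g])             # alt: intercept + marker
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss1 = float(((y - X @ beta) ** 2).sum())
        return (n / 2.0) * np.log10(rss0 / rss1)
    ```

    A marker uncorrelated with the phenotype gives a LOD of 0, while LOD values around 3, as for the chromosome 9 QTL above, are the conventional significance neighbourhood.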

  11. Inorganic analyses of volatilized and condensed species within prototypic Defense Waste Processing Facility (DWPF) canistered waste

    SciTech Connect

    Jantzen, C.M.

    1992-06-30

    The high-level radioactive waste currently stored in carbon steel tanks at the Savannah River Site (SRS) will be immobilized in a borosilicate glass in the Defense Waste Processing Facility (DWPF). The canistered waste will be sent to a geologic repository for final disposal. The Waste Acceptance Preliminary Specifications (WAPS) require the identification of any inorganic phases present in the canister that may lead to internal corrosion of the canister or that could potentially adversely affect normal canister handling. During vitrification, volatilized mixed (Na,K,Cs)Cl, (Na,K,Cs){sub 2}SO{sub 4}, (Na,K,Cs)BF{sub 4}, (Na,K){sub 2}B{sub 4}O{sub 7} and (Na,K)CrO{sub 4} species from the glass melt condensed in the melter off-gas and in the cyclone separator in the canister pour spout vacuum line. A full-scale DWPF prototypic canister filled during Campaign 10 of the SRS Scale Glass Melter was sectioned and examined. Mixed (Na,K)Cl, (Na,K){sub 2}SO{sub 4}, (Na,K) borates, and a (Na,K) fluoride phase (either NaF or Na{sub 2}BF{sub 4}) were identified on the interior canister walls, neck, and shoulder above the melt pour surface. Similar deposits were found on the glass melt surface and on glass fracture surfaces. Chromates were not found. Spinel crystals were found associated with the glass pour surface. Reference amounts of the halides and sulfates were retained in the glass, and the glass chemistry, including the distribution of the halides and sulfates, was homogeneous. In all cases where rust was observed, heavy metals (Zn, Ti, Sn) from the cutting blade/fluid were present, indicating that the rust was a reaction product of the cutting fluid with glass and the heat-sensitized canister or with carbon-steel contamination on the canister interior. Only minimal water vapor is present, so internal corrosion of the canister will not occur.

  12. Executive control over unconscious cognition: attentional sensitization of unconscious information processing

    PubMed Central

    Kiefer, Markus

    2012-01-01

    Unconscious priming is a prototypical example of an automatic process, which is initiated without deliberate intention. Classical theories of automaticity assume that such unconscious automatic processes occur in a purely bottom-up driven fashion independent of executive control mechanisms. In contrast to these classical theories, our attentional sensitization model of unconscious information processing proposes that unconscious processing is susceptible to executive control and is only elicited if the cognitive system is configured accordingly. It is assumed that unconscious processing depends on attentional amplification of task-congruent processing pathways as a function of task sets. This article provides an overview of the latest research on executive control influences on unconscious information processing. I introduce refined theories of automaticity with a particular focus on the attentional sensitization model of unconscious cognition which is specifically developed to account for various attentional influences on different types of unconscious information processing. In support of the attentional sensitization model, empirical evidence is reviewed demonstrating executive control influences on unconscious cognition in the domains of visuo-motor and semantic processing: subliminal priming depends on attentional resources, is susceptible to stimulus expectations and is influenced by action intentions and task sets. This suggests that even unconscious processing is flexible and context-dependent as a function of higher-level executive control settings. I discuss that the assumption of attentional sensitization of unconscious information processing can accommodate conflicting findings regarding the automaticity of processes in many areas of cognition and emotion. This theoretical view has the potential to stimulate future research on executive control of unconscious processing in healthy and clinical populations. PMID:22470329

  13. Risk-Sensitive Control of Pure Jump Process on Countable Space with Near Monotone Cost

    SciTech Connect

    Suresh Kumar, K.; Pal, Chandan

    2013-12-15

    In this article, we study a risk-sensitive control problem with a controlled continuous-time pure jump process on a countable space as the state dynamics. We prove a multiplicative dynamic programming principle and elliptic and parabolic Harnack’s inequalities. Using the multiplicative dynamic programming principle and the Harnack inequalities, we prove the existence and a characterization of an optimal risk-sensitive control under the near monotone condition.

  14. Effect of uniaxial deformation to 50% on the sensitization process in 316 stainless steel

    SciTech Connect

    Ramirez, L.M.; Almanza, E.; Murr, L.E. E-mail: fekberg@utep.edu

    2004-09-15

    The effect of uniaxial deformation to 50% on the degree of sensitization (DOS) in 316 stainless steel was investigated at 625 and 670 deg. C for 5-100 h using the electrochemical potentiokinetic reactivation (EPR) test. The results showed that the deformation accelerated the sensitization/desensitization process, especially at 670 deg. C. However, the material is still sensitized after up to 100 h of aging time. Transmission electron microscopy was used to corroborate these results. The deformed material showed more carbide precipitates (Cr{sub 23}C{sub 6}) at the grain boundaries and twin intersections than did the nondeformed material.

  15. Reward Sensitivity Is Associated with Brain Activity during Erotic Stimulus Processing

    PubMed Central

    Costumero, Victor; Barrós-Loscertales, Alfonso; Bustamante, Juan Carlos; Ventura-Campos, Noelia; Fuentes, Paola; Rosell-Negre, Patricia; Ávila, César

    2013-01-01

    The behavioral approach system (BAS) from Gray’s reinforcement sensitivity theory is a neurobehavioral system involved in the processing of rewarding stimuli that has been related to dopaminergic brain areas. Gray’s theory hypothesizes that the functioning of reward brain areas is modulated by BAS-related traits. To test this hypothesis, we performed an fMRI study where participants viewed erotic and neutral pictures, and cues that predicted their appearance. Forty-five heterosexual men completed the Sensitivity to Reward scale (from the Sensitivity to Punishment and Sensitivity to Reward Questionnaire) to measure BAS-related traits. Results showed that Sensitivity to Reward scores correlated positively with brain activity during reactivity to erotic pictures in the left orbitofrontal cortex, left insula, and right ventral striatum. These results demonstrated a relationship between the BAS and reward sensitivity during the processing of erotic stimuli, filling the gap of previous reports that identified the dopaminergic system as a neural substrate for the BAS during the processing of other rewarding stimuli such as money and food. PMID:23840558

  16. Complex patterns of divergence among green-sensitive (RH2a) African cichlid opsins revealed by Clade model analyses

    PubMed Central

    2012-01-01

    Background Gene duplications play an important role in the evolution of functional protein diversity. Some models of duplicate gene evolution predict complex forms of paralog divergence; orthologous proteins may diverge as well, further complicating patterns of divergence among and within gene families. Consequently, studying the link between protein sequence evolution and duplication requires the use of flexible substitution models that can accommodate multiple shifts in selection across a phylogeny. Here, we employed a variety of codon substitution models, primarily Clade models, to explore how selective constraint evolved following the duplication of a green-sensitive (RH2a) visual pigment protein (opsin) in African cichlids. Past studies have linked opsin divergence to ecological and sexual divergence within the African cichlid adaptive radiation. Furthermore, biochemical and regulatory differences between the RH2aα and RH2aβ paralogs have been documented. It thus seems likely that selection varies in complex ways throughout this gene family. Results Clade model analysis of African cichlid RH2a opsins revealed a large increase in the nonsynonymous-to-synonymous substitution rate ratio (ω) following the duplication, as well as an even larger increase, one consistent with positive selection, for Lake Tanganyikan cichlid RH2aβ opsins. Analysis using the popular Branch-site models, by contrast, revealed no such alteration of constraint. Several amino acid sites known to influence spectral and non-spectral aspects of opsin biochemistry were found to be evolving divergently, suggesting that orthologous RH2a opsins may vary in terms of spectral sensitivity and response kinetics. Divergence appears to be occurring despite intronic gene conversion among the tandemly-arranged duplicates. Conclusions Our findings indicate that variation in selective constraint is associated with both gene duplication and divergence among orthologs in African cichlid RH2a opsins. At

  17. Longitudinal Changes in Behavioral Approach System Sensitivity and Brain Structures Involved in Reward Processing during Adolescence

    PubMed Central

    Urošević, Snežana; Collins, Paul; Muetzel, Ryan; Lim, Kelvin; Luciana, Monica

    2012-01-01

    Adolescence is a period of radical normative changes and increased risk for substance use, mood disorders, and physical injury. Researchers have proposed that increases in reward sensitivity, i.e., sensitivity of the behavioral approach system (BAS), and/or increases in reactivity to all emotional stimuli (i.e., reward and threat sensitivities) lead to these phenomena. The present study is the first longitudinal investigation of changes in reward (i.e., BAS) sensitivity in 9- to 23-year-olds across a two-year follow-up. We found support for increased reward sensitivity from early to late adolescence and evidence for a decline in the early twenties. This decline was accompanied by a decrease in left nucleus accumbens (Nacc) volume, a key structure for reward processing, from the late teens into the early twenties. Furthermore, we found longitudinal increases in sensitivity to reward to be predicted by individual differences in Nacc and medial OFC volumes at baseline in this developmental sample. Similarly, increases in sensitivity to threat (i.e., behavioral inhibition system (BIS) sensitivity) were qualified by sex, with only females experiencing this increase, and were predicted by individual differences in lateral OFC volumes at baseline. PMID:22390662

  18. Analysing agricultural drought vulnerability at sub-district level through exposure, sensitivity and adaptive capacity based composite index

    NASA Astrophysics Data System (ADS)

    Murthy, C. S.; Laxman, B.; Sesha Sai, M. V. R.; Diwakar, P. G.

    2014-11-01

    Information on the agricultural drought vulnerability status of different regions is extremely useful for implementing long-term drought management measures. A quantitative approach for measuring agricultural drought vulnerability at the sub-district level was developed and implemented in the current study, which was carried out in Andhra Pradesh state, India, with data for the main cropping season, i.e., kharif. The contributing indicators represent the exposure, sensitivity, and adaptive capacity components of vulnerability and were drawn from weather, soil, crop, irrigation, and land-holding related data. After performing data normalisation and variance-based weight generation, component-wise composite indices were generated. The Agricultural Drought Vulnerability Index (ADVI) was generated using the three component indices, and a beta distribution was fitted to it. Mandals (sub-district level administrative units) of the state were categorised into five classes: less vulnerable, moderately vulnerable, vulnerable, highly vulnerable, and very highly vulnerable. Districts dominated by vulnerable Mandals showed considerably larger variability in the detrended yields of principal crops compared to the other districts, thus validating the index-based vulnerability status. The current status of agricultural drought vulnerability in the state, based on ADVI, indicated that the vulnerable to very highly vulnerable groups of Mandals represent 54% of all Mandals, about 55% of the agricultural area, and 65% of the rainfed crop area. The variability in agricultural drought vulnerability at the disaggregated level was effectively captured by the ADVI. The vulnerability status map is useful for diagnostic analysis and for formulating vulnerability reduction plans.
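
    The normalisation and variance-based weighting steps can be sketched as below. The indicator values are invented, and the real ADVI combines separately built exposure, sensitivity, and adaptive-capacity indices:

    ```python
    import numpy as np

    def composite_index(indicators):
        # indicators: (n_units, n_indicators) array, higher = more vulnerable,
        # with no constant columns (min-max scaling divides by the range).
        X = np.asarray(indicators, dtype=float)
        Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
        weights = Xn.var(axis=0)        # variance-based weights: indicators that
        weights /= weights.sum()        # discriminate more between units count more
        return Xn @ weights             # composite score per unit, in [0, 1]

    # ADVI-style use (sketch): build one composite per vulnerability component,
    # then combine the three component indices the same way.
    scores = composite_index([[0, 10], [5, 20], [10, 30]])
    ```

    The resulting scores can then be cut into vulnerability classes, for example by fitting a distribution to them as done with the beta distribution above.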

  19. Pushing the surface-enhanced Raman scattering analyses sensitivity by magnetic concentration: a simple non core-shell approach.

    PubMed

    Toma, Sergio H; Santos, Jonnatan J; Araki, Koiti; Toma, Henrique E

    2015-01-15

    A simple and accessible method for molecular analyses down to the picomolar range was realized using self-assembled hybrid superparamagnetic nanostructured materials, instead of complicated SERS substrates such as core-shell, surface-nanostructured, or matrix-embedded gold nanoparticles. A good signal-to-noise ratio was achieved reproducibly even at concentrations down to 5×10^-11 M using methylene blue (MB) and phenanthroline (phen) as model species, exploiting the plasmonic properties of conventional citrate-protected gold nanoparticles and alkylamine-functionalized magnetite nanoparticles. The hot spots were generated by salt-induced aggregation of gold nanoparticles (AuNP) in the presence of the analytes. The AuNP/analyte aggregates were then decorated with small magnetite nanoparticles by electrostatic self-assembly, forming MagSERS hybrid nanostructured materials. SERS peaks were enhanced up to 100 times after magnetic concentration into a circular spot using a magnet, in comparison with the respective dispersion of the nanostructured material. PMID:25542091

  20. Two Model-Based Methods for Policy Analyses of Fine Particulate Matter Control in China: Source Apportionment and Source Sensitivity

    NASA Astrophysics Data System (ADS)

    Li, X.; Zhang, Y.; Zheng, B.; Zhang, Q.; He, K.

    2013-12-01

    Anthropogenic emissions have been controlled in recent years in China to mitigate fine particulate matter (PM2.5) pollution. Recent studies show that sulfur dioxide (SO2)-only control cannot reduce total PM2.5 levels efficiently. Other species such as nitrogen oxide, ammonia, black carbon, and organic carbon may be equally important during particular seasons. Furthermore, each species is emitted from several anthropogenic sectors (e.g., industry, power plant, transportation, residential and agriculture). On the other hand, the contribution of one emission sector to PM2.5 represents the contributions of all species in this sector. In this work, two model-based methods are used to identify the most influential emission sectors and areas for PM2.5. The first method is the source apportionment (SA) based on the Particulate Source Apportionment Technology (PSAT) available in the Comprehensive Air Quality Model with extensions (CAMx) driven by meteorological predictions of the Weather Research and Forecast (WRF) model. The second method is the source sensitivity (SS) based on an adjoint integration technique (AIT) available in the GEOS-Chem model. The SA method attributes simulated PM2.5 concentrations to each emission group, while the SS method calculates their sensitivity to each emission group, accounting for the non-linear relationship between PM2.5 and its precursors. Despite their differences, the complementary nature of the two methods enables a complete analysis of source-receptor relationships to support emission control policies. Our objectives are to quantify the contributions of each emission group/area to PM2.5 in the receptor areas and to intercompare results from the two methods to gain a comprehensive understanding of the role of emission sources in PM2.5 formation. The results will be compared in terms of the magnitudes and rankings of SS or SA of emitted species and emission groups/areas. GEOS-Chem with AIT is applied over East Asia at a horizontal grid

  1. Tailoring Catalytic Activity of Pt Nanoparticles Encapsulated Inside Dendrimers by Tuning Nanoparticle Sizes with Subnanometer Accuracy for Sensitive Chemiluminescence-Based Analyses.

    PubMed

    Lim, Hyojung; Ju, Youngwon; Kim, Joohoon

    2016-05-01

    Here, we report the size-dependent catalysis of Pt dendrimer-encapsulated nanoparticles (DENs) having well-defined sizes over the range of 1-3 nm with subnanometer accuracy for the highly enhanced chemiluminescence of the luminol/H2O2 system. This size-dependent catalysis is ascribed to the differences in the chemical states of the Pt DENs as well as in their surface areas depending on their sizes. Facile and versatile applications of the Pt DENs in diverse oxidase-based assays are demonstrated as efficient catalysts for sensitive chemiluminescence-based analyses. PMID:27032992

  2. European Citizens under Construction: The Bologna Process Analysed from a Governmentality Perspective

    ERIC Educational Resources Information Center

    Fejes, Andreas

    2008-01-01

    This article focuses on problematizing the harmonisation of higher education in Europe today. The overall aim is to analyse the construction of the European citizen and the rationality of governing related to such a construction. The specific focus will be on the rules and standards of reason in higher education reforms which inscribe continuums…

  3. Studying Mathematics Teacher Education: Analysing the Process of Task Variation on Learning

    ERIC Educational Resources Information Center

    Bragg, Leicha A.

    2015-01-01

    Self-study of variations to task design offers a way of analysing how learning takes place. Over several years, variations were made to improve an assessment task completed by final-year teacher candidates in a primary mathematics teacher education subject. This article describes how alterations to a task informed on-going developments in…

  4. The monitoring and control of TRUEX processes. Volume 1, The use of sensitivity analysis to determine key process variables and their control bounds

    SciTech Connect

    Regalbuto, M.C.; Misra, B.; Chamberlain, D.B.; Leonard, R.A.; Vandegrift, G.F.

    1992-04-01

    The Generic TRUEX Model (GTM) was used to design a flowsheet for the TRUEX solvent extraction process that would be used to determine its instrumentation and control requirements. Sensitivity analyses of the key process variables, namely, the aqueous and organic flow rates, feed compositions, and the number of contactor stages, were carried out to assess their impact on the operation of the TRUEX process. Results of these analyses provide a basis for the selection of an instrument and control system and the eventual implementation of a control algorithm. Volume Two of this report is an evaluation of the instruments available for measuring many of the physical parameters. Equations that model the dynamic behavior of the TRUEX process have been generated. These equations can be used to describe the transient or dynamic behavior of the process for a given flowsheet in accordance with the TRUEX model. Further work will be done with the dynamic model to determine how and how quickly the system responds to various perturbations. The use of perturbation analysis early in the design stage will lead to a robust flowsheet, namely, one that will meet all process goals and allow for wide control bounds. The process time delay, that is, the speed with which the system reaches a new steady state, is an important parameter in monitoring and controlling a process. In the future, instrument selection and point-of-variable measurement, now done using the steady-state results reported here, will be reviewed and modified as necessary based on this dynamic method of analysis.
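
    A one-at-a-time finite-difference scan of the kind described above can be sketched in a few lines; the flowsheet function and parameter names below are hypothetical stand-ins for illustration, not the Generic TRUEX Model.

```python
def finite_difference_sensitivity(model, params, rel_step=1e-2):
    """One-at-a-time normalised sensitivities of a scalar model output.

    `model` maps a dict of process parameters to a scalar response
    (e.g. a raffinate concentration); names are illustrative only.
    """
    base = model(params)
    sens = {}
    for name, value in params.items():
        perturbed = dict(params)
        perturbed[name] = value * (1 + rel_step)
        # Logarithmic sensitivity: d ln(output) / d ln(param)
        sens[name] = (model(perturbed) - base) / (base * rel_step)
    return sens

# Toy stand-in for a flowsheet response with known exponents 0.5 and -0.2
toy = lambda p: p["aq_flow"] ** 0.5 * p["org_flow"] ** -0.2
s = finite_difference_sensitivity(toy, {"aq_flow": 4.0, "org_flow": 2.0})
```

    Parameters whose logarithmic sensitivities are largest in magnitude are the ones whose control bounds matter most.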

  5. Analyses developed by Dean Martens for soil carbohydrates, phenols, and amino compounds: Tools for understanding soil processes

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Soil organic matter has largely remained a black box, in part because we cannot quantitatively identify its components nor consequently their roles in soil processes. Dean Martens developed three useful laboratory analyses for quantifying soil concentrations of specific compounds in key biochemical ...

  6. A highly sensitive mean-reverting process in finance and the Euler-Maruyama approximations

    NASA Astrophysics Data System (ADS)

    Wu, Fuke; Mao, Xuerong; Chen, Kan

    2008-12-01

    Empirical studies show that the most successful continuous-time models of the short-term rate in capturing the dynamics are those that allow the volatility of interest changes to be highly sensitive to the level of the rate. However, from the mathematics, the high sensitivity to the level implies that the coefficients do not satisfy the linear growth condition, so we cannot examine its properties by traditional techniques. This paper overcomes the mathematical difficulties due to the nonlinear growth and examines its analytical properties and the convergence of numerical solutions in probability. The convergence result can be used to justify the method within Monte Carlo simulations that compute the expected payoff of financial products. For illustration, we apply our results to compute the value of a bond with interest rate given by the highly sensitive mean-reverting process as well as the value of a single barrier call option with the asset price governed by this process.
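
    The Euler-Maruyama scheme can be sketched for a model of the form dr = κ(θ − r)dt + σ·r^γ dW with γ > 1, the high-sensitivity regime discussed above; the parameter values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def euler_maruyama(r0, kappa, theta, sigma, gamma, T, n, rng):
    """Euler-Maruyama path of dr = kappa*(theta - r)dt + sigma*r**gamma dW.

    gamma > 1 makes volatility highly sensitive to the rate level;
    abs() guards the fractional power against sign flips in the discrete path.
    """
    dt = T / n
    r = np.empty(n + 1)
    r[0] = r0
    for i in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))
        r[i + 1] = r[i] + kappa * (theta - r[i]) * dt + sigma * abs(r[i]) ** gamma * dw
    return r

rng = np.random.default_rng(42)
path = euler_maruyama(r0=0.05, kappa=2.0, theta=0.05, sigma=0.3,
                      gamma=1.5, T=1.0, n=1000, rng=rng)
```

    Averaging exp(-dt · Σ rᵢ) over many such paths gives the Monte Carlo bond value the abstract refers to.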

  7. Preliminary performance assessment for the Waste Isolation Pilot Plant, December 1992. Volume 4: Uncertainty and sensitivity analyses for 40 CFR 191, Subpart B

    SciTech Connect

    Not Available

    1993-08-01

    Before disposing of transuranic radioactive waste in the Waste Isolation Pilot Plant (WIPP), the United States Department of Energy (DOE) must evaluate compliance with applicable long-term regulations of the United States Environmental Protection Agency (EPA). Sandia National Laboratories is conducting iterative performance assessments (PAs) of the WIPP for the DOE to provide interim guidance while preparing for a final compliance evaluation. This volume of the 1992 PA contains results of uncertainty and sensitivity analyses with respect to the EPA's Environmental Protection Standards for Management and Disposal of Spent Nuclear Fuel, High-Level and Transuranic Radioactive Wastes (40 CFR 191, Subpart B). Additional information about the 1992 PA is provided in other volumes. Results of the 1992 uncertainty and sensitivity analyses indicate that, conditional on the modeling assumptions, the choice of parameters selected for sampling, and the assigned parameter-value distributions, the most important parameters for which uncertainty has the potential to affect compliance with 40 CFR 191B are: drilling intensity, intrusion borehole permeability, halite and anhydrite permeabilities, radionuclide solubilities and distribution coefficients, fracture spacing in the Culebra Dolomite Member of the Rustler Formation, porosity of the Culebra, and spatial variability of Culebra transmissivity. Performance with respect to 40 CFR 191B is insensitive to uncertainty in other parameters; however, additional data are needed to confirm that reality lies within the assigned distributions.
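
    In sampling-based PAs of this kind, parameter importance is commonly summarised by rank correlations between the sampled inputs and the computed releases. The sketch below uses a toy response and hypothetical inputs, not the WIPP codes.

```python
import numpy as np

def rank_correlation(x, y):
    """Spearman rank correlation, a standard importance measure in
    sampling-based sensitivity analysis (sketch; assumes no ties)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx**2).sum() * (ry**2).sum()))

rng = np.random.default_rng(1)
# Hypothetical sampled inputs (stand-ins for borehole permeability, solubility)
permeability = rng.lognormal(mean=-30, sigma=1.0, size=200)
solubility = rng.lognormal(mean=-8, sigma=0.5, size=200)
release = permeability**0.8 * solubility**0.1   # toy monotone response
r = rank_correlation(permeability, release)     # near 1: dominant input
```

    Inputs with rank correlations near ±1 are the ones flagged as "most important" in analyses like the one summarised above.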

  8. A near-infrared fluorescent voltage-sensitive dye allows for moderate-throughput electrophysiological analyses of human induced pluripotent stem cell-derived cardiomyocytes

    PubMed Central

    Lopez-Izquierdo, Angelica; Warren, Mark; Riedel, Michael; Cho, Scott; Lai, Shuping; Lux, Robert L.; Spitzer, Kenneth W.; Benjamin, Ivor J.; Jou, Chuanchau J.

    2014-01-01

    Human induced pluripotent stem cell-derived cardiomyocyte (iPSC-CM)-based assays are emerging as a promising tool for the in vitro preclinical screening of QT interval-prolonging side effects of drugs in development. A major impediment to the widespread use of human iPSC-CM assays is the low throughput of the currently available electrophysiological tools. To test the precision and applicability of the near-infrared fluorescent voltage-sensitive dye 1-(4-sulfanatobutyl)-4-{β[2-(di-n-butylamino)-6-naphthyl]butadienyl}quinolinium betaine (di-4-ANBDQBS) for moderate-throughput electrophysiological analyses, we compared simultaneous transmembrane voltage and optical action potential (AP) recordings in human iPSC-CM loaded with di-4-ANBDQBS. Optical AP recordings tracked transmembrane voltage with high precision, generating nearly identical values for AP duration (AP durations at 10%, 50%, and 90% repolarization). Human iPSC-CMs tolerated repeated laser exposure, with stable optical AP parameters recorded over a 30-min study period. Optical AP recordings appropriately tracked changes in repolarization induced by pharmacological manipulation. Finally, di-4-ANBDQBS allowed for moderate-throughput analyses, increasing throughput >10-fold over the traditional patch-clamp technique. We conclude that the voltage-sensitive dye di-4-ANBDQBS allows for high-precision optical AP measurements that markedly increase the throughput for electrophysiological characterization of human iPSC-CMs. PMID:25172899

  9. A near-infrared fluorescent voltage-sensitive dye allows for moderate-throughput electrophysiological analyses of human induced pluripotent stem cell-derived cardiomyocytes.

    PubMed

    Lopez-Izquierdo, Angelica; Warren, Mark; Riedel, Michael; Cho, Scott; Lai, Shuping; Lux, Robert L; Spitzer, Kenneth W; Benjamin, Ivor J; Tristani-Firouzi, Martin; Jou, Chuanchau J

    2014-11-01

    Human induced pluripotent stem cell-derived cardiomyocyte (iPSC-CM)-based assays are emerging as a promising tool for the in vitro preclinical screening of QT interval-prolonging side effects of drugs in development. A major impediment to the widespread use of human iPSC-CM assays is the low throughput of the currently available electrophysiological tools. To test the precision and applicability of the near-infrared fluorescent voltage-sensitive dye 1-(4-sulfanatobutyl)-4-{β[2-(di-n-butylamino)-6-naphthyl]butadienyl}quinolinium betaine (di-4-ANBDQBS) for moderate-throughput electrophysiological analyses, we compared simultaneous transmembrane voltage and optical action potential (AP) recordings in human iPSC-CM loaded with di-4-ANBDQBS. Optical AP recordings tracked transmembrane voltage with high precision, generating nearly identical values for AP duration (AP durations at 10%, 50%, and 90% repolarization). Human iPSC-CMs tolerated repeated laser exposure, with stable optical AP parameters recorded over a 30-min study period. Optical AP recordings appropriately tracked changes in repolarization induced by pharmacological manipulation. Finally, di-4-ANBDQBS allowed for moderate-throughput analyses, increasing throughput >10-fold over the traditional patch-clamp technique. We conclude that the voltage-sensitive dye di-4-ANBDQBS allows for high-precision optical AP measurements that markedly increase the throughput for electrophysiological characterization of human iPSC-CMs. PMID:25172899

  10. Normative Topographic ERP Analyses of Speed of Speech Processing and Grammar Before and After Grammatical Treatment

    PubMed Central

    Yoder, Paul J.; Molfese, Dennis; Murray, Micah M.; Key, Alexandra P. F.

    2013-01-01

    Typically developing (TD) preschoolers and age-matched preschoolers with specific language impairment (SLI) had event-related potentials (ERPs) recorded to four monosyllabic speech sounds prior to treatment and, in the SLI group, after 6 months of grammatical treatment. Before treatment, the TD group processed speech sounds faster than the SLI group. The SLI group increased the speed of their speech processing after treatment. Post-treatment speed of speech processing predicted later impairment in comprehending phrase elaboration in the SLI group. During the treatment phase, change in speed of speech processing predicted growth rate of grammar in the SLI group. PMID:24219693

  11. What Is the deficit in Phonological Processing Deficits: Auditory Sensitivity, Masking, or Category Formation?

    ERIC Educational Resources Information Center

    Nittrouer, Susan; Shune, Samantha; Lowenstein, Joanna H.

    2011-01-01

    Although children with language impairments, including those associated with reading, usually demonstrate deficits in phonological processing, there is minimal agreement as to the source of those deficits. This study examined two problems hypothesized to be possible sources: either poor auditory sensitivity to speech-relevant acoustic properties,…

  12. Integrated process analyses studies on mixed low level and transuranic wastes. Summary report

    SciTech Connect

    1997-12-01

    Options for integrated thermal and nonthermal treatment systems for mixed low-level waste (MLLW) are compared on measures such as total life cycle cost (TLCC), cost sensitivities, risk, energy requirements, final waste volume, and aqueous and gaseous effluents. The comparisons were derived by requiring all conceptual systems to treat the same composition of waste with the same operating efficiency. Thus, the results can be used as a general guideline for the selection of treatment and disposal concepts. However, specific applications of individual systems will require further analysis. The potential for cost-saving options and the research and development opportunities are summarized.

  13. New Geochemical Analyses Reveal Crustal Accretionary Processes at The Overlapping Spreading Center Near 3 N East Pacific Rise

    NASA Astrophysics Data System (ADS)

    Smithka, I. N.; Perfit, M. R.

    2013-12-01

    Mid-ocean ridges (MORs) are the sites of oceanic lithosphere creation and construction. Ridge discontinuities are a global phenomenon but are not as well understood as ridge axes. Geochemical analyses provide insights into upper mantle processes, since elements fractionate during melting and freezing and are retained in material that preserves the source signature. Lavas collected from ridge discontinuities exhibit greater chemical diversity and represent variations in source, melting parameters, and local crustal processes. The small overlapping spreading center (OSC) near the third parallel north on the East Pacific Rise has been superficially analyzed previously, but here we present new isotope analyses and expand our understanding of MOR processes and processes near OSCs. Initial analyses of lavas collected in 2000 on AHA-NEMO2 revealed normal MOR basalt trends in rare earth element enrichments as well as in major element concentrations. Crystal fractionation varies along the tips of both axes, with MgO and TiO2 concentrations increasing towards the OSC basin. Newly analyzed Sr, Nd, and Pb isotope ratios will further constrain the nature of geochemical diversity along axis. As the northern tip seems to be propagating and the southern tip dying, lavas collected from each may reflect two different underlying mantle melting and magma storage processes.

  14. Response sensitivity analysis of the dynamic milling process based on the numerical integration method

    NASA Astrophysics Data System (ADS)

    Ding, Ye; Zhu, Limin; Zhang, Xiaojian; Ding, Han

    2012-09-01

    As one of the bases of gradient-based optimization algorithms, sensitivity analysis is usually required to calculate the derivatives of the system response with respect to the machining parameters. The most widely used approaches for sensitivity analysis are based on time-consuming numerical methods, such as finite difference methods. This paper presents a semi-analytical method for calculation of the sensitivity of the stability boundary in milling. After transforming the delay-differential equation with time-periodic coefficients governing the dynamic milling process into the integral form, the Floquet transition matrix is constructed by using the numerical integration method. Then, the analytical expressions of derivatives of the Floquet transition matrix with respect to the machining parameters are obtained. Thereafter, the classical analytical expression of the sensitivity of matrix eigenvalues is employed to calculate the sensitivity of the stability lobe diagram. The two-degree-of-freedom milling example illustrates the accuracy and efficiency of the proposed method. Compared with the existing methods, the unique merit of the proposed method is that it can be used for analytically computing the sensitivity of the stability boundary in milling, without employing any finite difference methods. Therefore, the high accuracy and high efficiency are both achieved. The proposed method can serve as an effective tool for machining parameter optimization and uncertainty analysis in high-speed milling.
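
    The "classical analytical expression" for the sensitivity of a simple eigenvalue λ with right/left eigenvectors v and w is ∂λ/∂p = wᴴ(∂A/∂p)v / (wᴴv). A minimal numerical sketch with generic matrices (not the milling model's Floquet transition matrix):

```python
import numpy as np

def eigenvalue_sensitivities(A, dA_dp):
    """Eigenvalues of A and their derivatives w.r.t. a parameter p, given
    dA/dp, via the classical simple-eigenvalue formula. With the rows of
    inv(V) as left eigenvectors, w^H v = 1, so the sensitivities are the
    diagonal of inv(V) @ dA/dp @ V. Assumes simple (non-repeated) eigenvalues.
    """
    lam, V = np.linalg.eig(A)
    sens = np.diag(np.linalg.inv(V) @ dA_dp @ V)
    return lam, sens

# Check on A(p) = A0 + p*B at p = 0: the eigenvalues are 2 + p and 5,
# so the exact sensitivities are 1 and 0.
A0 = np.array([[2.0, 1.0], [0.0, 5.0]])
B = np.array([[1.0, 0.0], [0.0, 0.0]])
lam, sens = eigenvalue_sensitivities(A0, B)
```

    In the milling context, A is the Floquet transition matrix and dA/dp its derivative with respect to a machining parameter; the stability boundary moves with the dominant eigenvalue's sensitivity.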

  15. A Sensitivity Analysis Method to Study the Behavior of Complex Process-based Models

    NASA Astrophysics Data System (ADS)

    Brugnach, M.; Neilson, R.; Bolte, J.

    2001-12-01

    The use of process-based models as a tool for scientific inquiry is becoming increasingly relevant in ecosystem studies. Process-based models are artificial constructs that simulate the system by mechanistically mimicking the functioning of its component processes. Structurally, a process-based model can be characterized in terms of its processes and the relationships established among them. Each process comprises a set of functional relationships among several model components (e.g., state variables, parameters and input data). While not encoded explicitly, the dynamics of the model emerge from this set of components and interactions organized in terms of processes. It is the task of the modeler to guarantee that the dynamics generated are appropriate and semantically equivalent to the phenomena being modeled. Despite the availability of techniques to characterize and understand model behavior, they do not suffice to completely and easily understand how a complex process-based model operates. For example, sensitivity analysis studies model behavior by determining the rate of change in model output as parameters or input data are varied. One of the problems with this approach is that it considers the model as a "black box", and it focuses on explaining model behavior by analyzing the input-output relationship. Since these models have a high degree of non-linearity, understanding how the input affects an output can be an extremely difficult task. Operationally, the application of this technique may constitute a challenging task because complex process-based models are generally characterized by a large parameter space. In order to overcome some of these difficulties, we propose a method of sensitivity analysis to be applicable to complex process-based models. This method focuses sensitivity analysis at the process level, and it aims to determine how sensitive the model output is to variations in the processes. Once the processes that exert the major influence in

  16. Enhancing Sensitivity of a Miniature Spectrometer Using a Real-Time Image Processing Algorithm.

    PubMed

    Chandramohan, Sabarish; Avrutsky, Ivan

    2016-05-01

    A real-time image processing algorithm is developed to enhance the sensitivity of a planar single-mode waveguide miniature spectrometer with integrated waveguide gratings. A novel approach of averaging along the arcs in a curved coordinate system is introduced which allows for collecting more light, thereby enhancing the sensitivity. The algorithm is tested using CdSeS/ZnS quantum dots drop-cast on the surface of a single-mode waveguide. Measurements indicate that a monolayer of quantum dots is expected to produce guided mode attenuation approximately 11 times above the noise level. PMID:27170777
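
    The arc-averaging idea can be sketched as below: pixel intensities are averaged along circular arcs of increasing radius, yielding one value per radius. The paper's actual curved coordinate geometry and real-time implementation are more involved; the center, radii and full-circle sampling here are assumptions for illustration.

```python
import numpy as np

def arc_average(image, center, radii, n_samples=360):
    """Average pixel intensities along circular arcs around `center`,
    one mean per radius (sketch of the arc-averaging idea only)."""
    h, w = image.shape
    theta = np.linspace(0, 2 * np.pi, n_samples, endpoint=False)
    spectrum = []
    for r in radii:
        # Nearest-pixel sampling along the arc, clipped to the image bounds
        ys = np.clip((center[0] + r * np.sin(theta)).astype(int), 0, h - 1)
        xs = np.clip((center[1] + r * np.cos(theta)).astype(int), 0, w - 1)
        spectrum.append(image[ys, xs].mean())
    return np.array(spectrum)

img = np.ones((64, 64))  # flat test image
spec = arc_average(img, center=(32, 32), radii=range(5, 20))
```

    Averaging many pixels per radius is what trades spatial redundancy for an improved signal-to-noise ratio.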

  17. Join-Lock-Sensitive Forward Reachability Analysis for Concurrent Programs with Dynamic Process Creation

    NASA Astrophysics Data System (ADS)

    Gawlitza, Thomas Martin; Lammich, Peter; Müller-Olm, Markus; Seidl, Helmut; Wenner, Alexander

    Dynamic Pushdown Networks (DPNs) are a model for parallel programs with (recursive) procedures and dynamic process creation. Constraints on the sequences of spawned processes allow the basic model to be extended with joining of created processes [2]. Orthogonally, DPNs can be extended with nested locking [9]. Reachability of a regular set R of configurations in the presence of stable constraints, as well as reachability without constraints but with nested locking, are based on computing the set of predecessors pre *(R). In the present paper, we present a forward-propagating algorithm for deciding reachability for DPNs. We represent sets of executions by sets of execution trees and show that the set of all execution trees resulting in configurations from R which either allow a lock-sensitive execution or a join-sensitive execution is regular. Here, we rely on basic results about macro tree transducers. As a second contribution, we show that reachability is decidable also for DPNs with both nested locking and joins.

  18. Sensitivity studies for the main r process: β-decay rates

    SciTech Connect

    Mumpower, M.; Cass, J.; Passucci, G.; Aprahamian, A.; Surman, R.

    2014-04-15

    The pattern of isotopic abundances produced in rapid neutron capture, or r-process, nucleosynthesis is sensitive to the nuclear physics properties of thousands of unstable neutron-rich nuclear species that participate in the process. It has long been recognized that some of the most influential pieces of nuclear data for r-process simulations are β-decay lifetimes. In light of experimental advances that have pushed measurement capabilities closer to the classic r-process path, we revisit the role of individual β-decay rates in the r process. We perform β-decay rate sensitivity studies for a main (A > 120) r process in a range of potential astrophysical scenarios. We study the influence of individual rates during (n, γ)-(γ, n) equilibrium and during the post-equilibrium phase where material moves back toward stability. We confirm the widely accepted view that the most important lifetimes are those of nuclei along the r-process path for each astrophysical scenario considered. However, we find in addition that individual β-decay rates continue to shape the final abundance pattern through the post-equilibrium phase, for as long as neutron capture competes with β decay. Many of the lifetimes important for this phase of the r process are within current or near future experimental reach.

  19. Kinetic analyses and mathematical modeling of primary photochemical and photoelectrochemical processes in plant photosystems.

    PubMed

    Vredenberg, Wim

    2011-02-01

    In this paper the model and simulation of primary photochemical and photo-electrochemical reactions in dark-adapted intact plant leaves is presented. A descriptive algorithm has been derived from analyses of variable chlorophyll a fluorescence and P700 oxidation kinetics upon excitation with multi-turnover pulses (MTFs) of variable intensity and duration. These analyses have led to definition and formulation of rate equations that describe the sequence of primary linear electron transfer (LET) steps in photosystem II (PSII) and of cyclic electron transport (CET) in PSI. The model considers heterogeneity in PSII reaction centers (RCs) associated with the S-states of the OEC and incorporates in a dark-adapted state the presence of a 15-35% fraction of Q(B)-nonreducing RCs that probably is identical with the S₀ fraction. The fluorescence induction algorithm (FIA) in the 10 μs-1s excitation time range considers a photochemical O-J-D, a photo-electrochemical J-I and an I-P phase reflecting the response of the variable fluorescence to the electric trans-thylakoid potential generated by the proton pump fuelled by CET in PSI. The photochemical phase incorporates the kinetics associated with the double reduction of the acceptor pair of pheophytin (Phe) and plastoquinone Q(A) [PheQ(A)] in Q(B)-nonreducing RCs and the associated doubling of the variable fluorescence, in agreement with the three-state trapping model (TSTM) of PS II. The decline in fluorescence emission during the so-called SMT in the 1-100s excitation time range, known as the Kautsky curve, is shown to be associated with a substantial decrease of CET-powered proton efflux from the stroma into the chloroplast lumen through the ATP synthase of the photosynthetic machinery. PMID:21070830

  20. Sensitivity of global tropical climate to land surface processes: Mean state and interannual variability

    SciTech Connect

    Ma, Hsi-Yen; Xiao, Heng; Mechoso, C. R.; Xue, Yongkang

    2013-03-01

    This study examines the sensitivity of global tropical climate to land surface processes (LSP) using an atmospheric general circulation model both uncoupled (with prescribed SSTs) and coupled to an oceanic general circulation model. The emphasis is on the interactive soil moisture and vegetation biophysical processes, which have first order influence on the surface energy and water budgets. The sensitivity to those processes is represented by the differences between model simulations, in which two land surface schemes are considered: 1) a simple land scheme that specifies surface albedo and soil moisture availability, and 2) the Simplified Simple Biosphere Model (SSiB), which allows for consideration of interactive soil moisture and vegetation biophysical process. Observational datasets are also employed to assess the reality of model-revealed sensitivity. The mean state sensitivity to different LSP is stronger in the coupled mode, especially in the tropical Pacific. Furthermore, seasonal cycle of SSTs in the equatorial Pacific, as well as ENSO frequency, amplitude, and locking to the seasonal cycle of SSTs are significantly modified and more realistic with SSiB. This outstanding sensitivity of the atmosphere-ocean system develops through changes in the intensity of equatorial Pacific trades modified by convection over land. Our results further demonstrate that the direct impact of land-atmosphere interactions on the tropical climate is modified by feedbacks associated with perturbed oceanic conditions ("indirect effect" of LSP). The magnitude of such indirect effect is strong enough to suggest that comprehensive studies on the importance of LSP on the global climate have to be made in a system that allows for atmosphere-ocean interactions.

  1. Neurodynamics of executive control processes in bilinguals: evidence from ERP and source reconstruction analyses

    PubMed Central

    Heidlmayr, Karin; Hemforth, Barbara; Moutier, Sylvain; Isel, Frédéric

    2015-01-01

    The present study was designed to examine the impact of bilingualism on the neuronal activity in different executive control processes namely conflict monitoring, control implementation (i.e., interference suppression and conflict resolution) and overcoming of inhibition. Twenty-two highly proficient but non-balanced successive French–German bilingual adults and 22 monolingual adults performed a combined Stroop/Negative priming task while event-related potentials (ERPs) were recorded online. The data revealed that the ERP effects were reduced in bilinguals in comparison to monolinguals but only in the Stroop task and limited to the N400 and the sustained fronto-central negative-going potential time windows. This result suggests that bilingualism may impact the process of control implementation rather than the process of conflict monitoring (N200). Critically, our study revealed a differential time course of the involvement of the anterior cingulate cortex (ACC) and the prefrontal cortex (PFC) in conflict processing. While the ACC showed major activation in the early time windows (N200 and N400) but not in the latest time window (late sustained negative-going potential), the PFC became unilaterally active in the left hemisphere in the N400 and the late sustained negative-going potential time windows. Taken together, the present electroencephalography data lend support to a cascading neurophysiological model of executive control processes, in which ACC and PFC may play a determining role. PMID:26124740

  2. Neurodynamics of executive control processes in bilinguals: evidence from ERP and source reconstruction analyses.

    PubMed

    Heidlmayr, Karin; Hemforth, Barbara; Moutier, Sylvain; Isel, Frédéric

    2015-01-01

    The present study was designed to examine the impact of bilingualism on the neuronal activity in different executive control processes namely conflict monitoring, control implementation (i.e., interference suppression and conflict resolution) and overcoming of inhibition. Twenty-two highly proficient but non-balanced successive French-German bilingual adults and 22 monolingual adults performed a combined Stroop/Negative priming task while event-related potentials (ERPs) were recorded online. The data revealed that the ERP effects were reduced in bilinguals in comparison to monolinguals but only in the Stroop task and limited to the N400 and the sustained fronto-central negative-going potential time windows. This result suggests that bilingualism may impact the process of control implementation rather than the process of conflict monitoring (N200). Critically, our study revealed a differential time course of the involvement of the anterior cingulate cortex (ACC) and the prefrontal cortex (PFC) in conflict processing. While the ACC showed major activation in the early time windows (N200 and N400) but not in the latest time window (late sustained negative-going potential), the PFC became unilaterally active in the left hemisphere in the N400 and the late sustained negative-going potential time windows. Taken together, the present electroencephalography data lend support to a cascading neurophysiological model of executive control processes, in which ACC and PFC may play a determining role. PMID:26124740

  3. Efficient simulation of press hardening process through integrated structural and CFD analyses

    NASA Astrophysics Data System (ADS)

    Palaniswamy, Hariharasudhan; Mondalek, Pamela; Wronski, Maciek; Roy, Subir

    2013-12-01

    Press hardened steel parts are being increasingly used in automotive structures for their higher strength to meet safety standards while reducing vehicle weight to improve fuel consumption. However, manufacturing sheet metal parts by the press hardening process to achieve desired properties is extremely challenging, as it involves a complex interaction of plastic deformation, metallurgical change, thermal distribution, and fluid flow. Numerical simulation is critical for successful design of the process and for understanding the interaction among the numerous process parameters that must be controlled to consistently achieve the desired part properties. Until now there has been no integrated commercial software solution that can efficiently model the complete process: forming of the blank, heat transfer between the blank and tool, microstructure evolution in the blank, and heat loss from the tool to the fluid that flows through water channels in the tools. In this study, a numerical solution based on the Altair HyperWorks® product suite, involving RADIOSS®, a non-linear finite element based structural analysis solver, and AcuSolve®, an incompressible fluid flow solver based on the Galerkin Least Squares Finite Element Method, has been utilized to develop an efficient solution for complete press hardening process design and analysis. RADIOSS is used to handle the plastic deformation, heat transfer between the blank and tool, and microstructure evolution in the blank during cooling, while AcuSolve is used to efficiently model heat loss from the tool to the fluid that flows through water channels in the tools. The approach is demonstrated through several case studies.

  4. Efficient simulation of press hardening process through integrated structural and CFD analyses

    SciTech Connect

    Palaniswamy, Hariharasudhan; Mondalek, Pamela; Wronski, Maciek; Roy, Subir

    2013-12-16

    Press hardened steel parts are being increasingly used in automotive structures for their higher strength to meet safety standards while reducing vehicle weight to improve fuel consumption. However, manufacturing sheet metal parts by the press hardening process to achieve desired properties is extremely challenging, as it involves a complex interaction of plastic deformation, metallurgical change, thermal distribution, and fluid flow. Numerical simulation is critical for successful design of the process and for understanding the interaction among the numerous process parameters that must be controlled to consistently achieve the desired part properties. Until now there has been no integrated commercial software solution that can efficiently model the complete process: forming of the blank, heat transfer between the blank and tool, microstructure evolution in the blank, and heat loss from the tool to the fluid that flows through water channels in the tools. In this study, a numerical solution based on the Altair HyperWorks® product suite, involving RADIOSS®, a non-linear finite element based structural analysis solver, and AcuSolve®, an incompressible fluid flow solver based on the Galerkin Least Squares Finite Element Method, has been utilized to develop an efficient solution for complete press hardening process design and analysis. RADIOSS is used to handle the plastic deformation, heat transfer between the blank and tool, and microstructure evolution in the blank during cooling, while AcuSolve is used to efficiently model heat loss from the tool to the fluid that flows through water channels in the tools. The approach is demonstrated through several case studies.

  5. Transcriptome analyses of blood and sugar digestive processes in female Culicoides sonorensis midges (Diptera: Ceratopogonidae)

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Female Culicoides sonorensis Wirth & Jones (Diptera:Ceratopogonidae) midges vector numerous diseases impacting livestock and humans. The molecular physiology of this midge has been under-studied, so our approach was to gain an understanding of basic processes of blood and sucrose digestion using tra...

  6. Analysing Feedback Processes in an Online Teaching and Learning Environment: An Exploratory Study

    ERIC Educational Resources Information Center

    Espasa, Anna; Meneses, Julio

    2010-01-01

    Within the constructivist framework of online distance education the feedback process is considered a key element in teachers' roles because it can promote the regulation of learning. Therefore, faced with the need to guide and train teachers in the kind of feedback to provide and how to provide it, we establish three aims for this research:…

  7. Sensitivity analysis of a process-based ecosystem model: Pinpointing parameterization and structural issues

    NASA Astrophysics Data System (ADS)

    Pappas, Christoforos; Fatichi, Simone; Leuzinger, Sebastian; Wolf, Annett; Burlando, Paolo

    2013-06-01

    Dynamic vegetation models have been widely used for analyzing ecosystem dynamics and their interactions with climate. Their performance has been tested extensively against observations and by model intercomparison studies. In the present analysis, the Lund-Potsdam-Jena General Ecosystem Simulator (LPJ-GUESS), a state-of-the-art ecosystem model, was evaluated by performing a global sensitivity analysis. The study aims at examining potential model limitations, particularly with regard to long-term applications. A detailed sensitivity analysis based on variance decomposition is presented to investigate structural model assumptions and to highlight processes and parameters that cause the highest variability in the output. First- and total-order sensitivity indices were calculated for selected parameters using Sobol's methodology. In order to elucidate the role of climate on model sensitivity, different climate forcings were used based on observations from Switzerland. The results clearly indicate a very high sensitivity of LPJ-GUESS to photosynthetic parameters. Intrinsic quantum efficiency alone is able to explain about 60% of the variability in vegetation carbon fluxes and pools for a wide range of climate forcings. Processes related to light harvesting were also found to be important, together with parameters affecting forest structure (growth, establishment, and mortality). The model shows minor sensitivity to hydrological and soil texture parameters, questioning its skill in representing spatial vegetation heterogeneity at regional or watershed scales. In the light of these results, we discuss the deficiencies of LPJ-GUESS, and possibly those of other structurally similar dynamic vegetation models, and we highlight potential directions for further model improvements.
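The first- and total-order Sobol indices this abstract relies on can be estimated with a standard sampling scheme. A minimal sketch in Python, assuming the common Ishigami test function and the Saltelli/Jansen estimators rather than anything specific to LPJ-GUESS:

```python
import numpy as np

rng = np.random.default_rng(0)

def ishigami(x, a=7.0, b=0.1):
    # Standard sensitivity-analysis test function on [-pi, pi]^3.
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2 + b * x[:, 2]**4 * np.sin(x[:, 0])

n, d = 50_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([fA, fB]))

S1, ST = [], []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                             # A with column i taken from B
    fABi = ishigami(ABi)
    S1.append(np.mean(fB * (fABi - fA)) / var)      # first-order index (Saltelli)
    ST.append(0.5 * np.mean((fA - fABi)**2) / var)  # total-order index (Jansen)

print([round(s, 2) for s in S1], [round(s, 2) for s in ST])
```

For the Ishigami function the analytic first-order indices are roughly 0.31, 0.44 and 0, while the third variable still has a sizeable total-order index through its interaction term, which is exactly the kind of structure a variance decomposition exposes.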

  8. Scientific predictability of solid rocket performance: Analyses of the processing parameters

    NASA Astrophysics Data System (ADS)

    Perez, Daniel Lizarraga

    The objective is to present a computational model of the suspensions composing uncured composite solid propellant. Highly concentrated suspensions of more than 50 pct. solid volume were examined, with attention to bimodal mixtures. A study of propellant processing was conducted to determine how this model can be applied to processing. Experimental work was conducted to supply data for comparison with the computational results; this involved data gathered from an orifice viscometer on viscosity and flow behavior. The model is a tool for studying goodness of mixing throughout the processing stages of the propellant. The study of processing focused on both mixing and casting of the suspension. By studying this model for concentration, velocity and thermal behaviors, a better understanding of how well the propellant composition progresses in processing was obtained. A multiple-mixture approach was taken, involving a continuum description for the mixture and each constituent. A Fortran program was written to construct this routine. It was run on both a VAXstation 3100, Model 40 using the VMS Digital operating system, and a SUN IPX using the SUN UNIX operating system. The code examined two-dimensional monomodal and bimodal mixture flows through a pipe, at concentrations between 65 and 75 pct. Due to the high concentration, it was necessary to apply all inertial and viscous terms within each constituent and the entire mixture. Proper boundary conditions and initial conditions to produce stable runs were found. Both monomodal and bimodal computational results showed good correlation with the experimental data, although a slight dilatation was produced by the program; no dilatation appeared in the experimental work.

  9. Etch Process Sensitivity To An Inductively Coupled Plasma Etcher Treated With Fluorine-Based Plasma

    NASA Astrophysics Data System (ADS)

    Xu, Songlin; Sun, Zhiwen; Qian, Xueyu; Yin, Gerald

    1997-10-01

    A significant etch rate drop after treatment of an etch chamber with fluorine-based plasma has been found for some silicon etch processes on an inductively coupled plasma reactor. This can cause problems in an IC production line when the etch chamber alternates between processes with F-based and F-free chemistry, or needs frequent cleaning with F-plasma. In this work, a systematic study of the root cause of process sensitivity to an etch chamber treated with F-plasma has been conducted. The experimental results show that pressure is a key factor affecting the etch rate drop. Processes at high pressure are more sensitive than those at low pressure because the quenching of neutral reactive species becomes more severe after the F-treatment. O2 addition also increases the etch rate sensitivity, basically due to a higher O2 concentration after F-treatment, which enhances the oxidation of silicon. EDX and XPS elemental analyses of the chamber interior wall reveal a significant composition change after the interaction with F-plasma; the altered surface might accelerate the recombination of free radical species.

  10. Root border cell development is a temperature-insensitive and Al-sensitive process in barley.

    PubMed

    Pan, Jian-Wei; Ye, Dan; Wang, Li-Ling; Hua, Jing; Zhao, Gu-Feng; Pan, Wei-Huai; Han, Ning; Zhu, Mu-Yuan

    2004-06-01

    In vivo and in vitro experiments showed that border cell (BC) survival was dependent on root tip mucigel in barley (Hordeum vulgare L. cv. Hang 981). In aeroponic culture, BC development was an induced process in barley, whereas in hydroponic culture it was a kinetic equilibrium process during which 300-400 BCs were released into the water daily. Root elongation was very sensitive to temperature (10-35 degrees C), but temperature changes had little effect on barley BC development. At 35 degrees C, root elongation ceased whereas BC production still continued, indicating that the two processes might be regulated independently under high temperature (35 degrees C) stress. Fifty microM Al could significantly inhibit BC development by inhibiting pectin methylesterase activity in the root cap of cv. 2000-2 (Al-sensitive) and cv. Humai 16 (Al-tolerant), but 20 microM Al could not block BC development in cv. Humai 16. BCs and their mucigel had a limited role in protecting against Al-induced inhibition of root elongation, but played a significant role in preventing Al from diffusing into the meristems of the root tip and the root cap. Together, these results suggested that BC development was a temperature-insensitive but Al-sensitive process, and that BCs and their mucigel played an important role in protecting root tip and root cap meristems from Al toxicity. PMID:15215510

  11. Multi-criteria analyses of wastewater treatment bio-processes under an uncertainty and a multiplicity of steady states.

    PubMed

    Južnič-Zonta, Zivko; Kocijan, Juš; Flotats, Xavier; Vrečko, Darko

    2012-11-15

    This paper presents a multi-criteria evaluation methodology for determining operating strategies for bio-chemical wastewater treatment plants, based on model analysis under uncertainty that can present multiple steady states. The method is based on Monte Carlo (MC) simulations and expected utility theory, in order to deal with the analysis of choices among risky operating strategies with multi-dimensional outcomes. The motivation is given by a case study using an anaerobic digestion model (ADM) adapted for multiple co-substrates. It is shown how the computational complexity of the multi-criteria analyses can be reduced with an approximation based on Gaussian-process regression, and how a reliability map can be built for a bio-process model under uncertainty and multiplicity. In our uncertainty-analysis case study, the reliability map shows the probability of a biogas-production collapse for a given set of substrate mixture input loads. PMID:23021337
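The cost-saving trick described here, replacing expensive model evaluations inside a Monte Carlo loop with a cheap Gaussian-process regression surrogate, can be sketched in a few lines. The one-dimensional toy model, RBF kernel and length-scale below are illustrative assumptions, not the ADM setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def expensive_model(x):
    # Stand-in for a costly bio-process simulation (hypothetical).
    return np.sin(3 * x) + 0.5 * x

def rbf(a, b, length=0.3):
    # Squared-exponential covariance between two 1-D point sets.
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)

# Train the surrogate on a handful of "expensive" runs.
X_train = np.linspace(0, 2, 12)
y_train = expensive_model(X_train)
K = rbf(X_train, X_train) + 1e-6 * np.eye(len(X_train))  # jitter for stability
alpha = np.linalg.solve(K, y_train)

def surrogate(x):
    # GP posterior mean at new points x.
    return rbf(x, X_train) @ alpha

# Cheap Monte Carlo over the surrogate instead of the model itself.
X_mc = rng.uniform(0, 2, 100_000)
mc_mean = surrogate(X_mc).mean()
print(round(mc_mean, 3))
```

Twelve model runs train the surrogate; the 100,000 Monte Carlo samples then cost essentially nothing, which is the point of the approximation.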

  12. Introspective Minds: Using ALE Meta-Analyses to Study Commonalities in the Neural Correlates of Emotional Processing, Social & Unconstrained Cognition

    PubMed Central

    Schilbach, Leonhard; Bzdok, Danilo; Timmermans, Bert; Fox, Peter T.; Laird, Angela R.; Vogeley, Kai; Eickhoff, Simon B.

    2012-01-01

    Previous research suggests overlap between brain regions that show task-induced deactivations and those activated during the performance of social-cognitive tasks. Here, we present results of quantitative meta-analyses of neuroimaging studies, which confirm a statistical convergence in the neural correlates of social and resting state cognition. Based on the idea that both social and unconstrained cognition might be characterized by introspective processes, which are also thought to be highly relevant for emotional experiences, a third meta-analysis was performed investigating studies on emotional processing. By using conjunction analyses across all three sets of studies, we can demonstrate significant overlap of task-related signal change in dorso-medial prefrontal and medial parietal cortex, brain regions that have, indeed, recently been linked to introspective abilities. Our findings, therefore, provide evidence for the existence of a core neural network, which shows task-related signal change during socio-emotional tasks and during resting states. PMID:22319593

  13. The sensitivity of current and future forest managers to climate-induced changes in ecological processes.

    PubMed

    Seidl, Rupert; Aggestam, Filip; Rammer, Werner; Blennow, Kristina; Wolfslehner, Bernhard

    2016-05-01

    Climate vulnerability of managed forest ecosystems is not only determined by ecological processes but also influenced by the adaptive capacity of forest managers. To better understand adaptive behaviour, we conducted a questionnaire study among current and future forest managers (i.e. active managers and forestry students) in Austria. We found widespread belief in climate change (94.7 % of respondents), and no significant difference between current and future managers. Based on intended responses to climate-induced ecosystem changes, we distinguished four groups: highly sensitive managers (27.7 %), those mainly sensitive to changes in growth and regeneration processes (46.7 %), managers primarily sensitive to regeneration changes (11.2 %), and insensitive managers (14.4 %). Experiences and beliefs with regard to disturbance-related tree mortality were found to particularly influence a manager's sensitivity to climate change. Our findings underline the importance of the social dimension of climate change adaptation, and suggest potentially strong adaptive feedbacks between ecosystems and their managers. PMID:26695393

  14. Microgravity and Materials Processing Facility study (MMPF): Requirements and Analyses of Commercial Operations (RACO) preliminary data release

    NASA Technical Reports Server (NTRS)

    1988-01-01

    This requirements and analyses of commercial operations (RACO) study data release reflects the current status of research activities of the Microgravity and Materials Processing Facility under Modification No. 21 to NASA/MSFC Contract NAS8-36122. Section 1 includes 65 commercial space processing projects suitable for deployment aboard the Space Station. Section 2 contains reports of the R:BASE (TM) electronic data base being used in the study, synopses of the experiments, and a summary of data on the experimental facilities. Section 3 is a discussion of video and data compression techniques used as well as a mission timeline analysis.

  15. Thermal analyses of a materials processing furnace being developed for use with heat pipes

    NASA Technical Reports Server (NTRS)

    Mcanally, J. V.; Robertson, S. J.

    1979-01-01

    A special materials processing furnace is being developed for the forthcoming Spacelab missions to study the solidification, under closely controlled conditions, of various sample materials in the absence of gravity. The samples are to be rod shaped and subjected to both heating and cooling simultaneously. The thermal model is based on a Thermal Analyzer computer program and was made very general to enable the simulation of variations in the furnace design and, hence, serve as an aid in finalizing the design. The thermal model is described and a user's guide is given. Some preliminary results obtained in testing the model are also given.

  16. Quantitative and qualitative analyses of the cell death process in Candida albicans treated by antifungal agents.

    PubMed

    Kim, Kyung Sook; Kim, Young-Sun; Han, Ihn; Kim, Mi-Hyun; Jung, Min Hyung; Park, Hun-Kuk

    2011-01-01

    The death process of Candida albicans was investigated after treatment with the antifungal agents flucytosine and amphotericin B by assessing morphological and biophysical properties associated with cell death. C. albicans was treated for varying time periods (from 6 to 48 hours) and examined by scanning electron microscopy (SEM) and atomic force microscopy (AFM). SEM and AFM images clearly showed changes in morphology and biophysical properties. After drug treatment, the membrane of C. albicans was perforated, deformed, and shrunken. Compared to the control, C. albicans treated with flucytosine was softer and initially showed a greater adhesive force. Conversely, C. albicans treated with amphotericin B was harder and had a lower adhesive force. In both cases, the surface roughness increased as the treatment time increased. The relationships between morphological changes and the drugs were clearly observed by AFM: the surface of C. albicans treated with flucytosine underwent membrane collapse, expansion of holes, and shrinkage, while the membranes of cells treated with amphotericin B peeled off. According to these observations, the death process of C. albicans was divided into 4 phases, CDP(0), CDP(1), CDP(2), and CDP(4), determined based on morphological changes. Our results could be employed to further investigate the antifungal activity of compounds derived from natural sources. PMID:22174777

  17. Glacial Processes on Earth and Mars: New Perspectives from Remote Sensing and Laboratory Analyses

    NASA Astrophysics Data System (ADS)

    Rutledge, Alicia Marie

    Chemical and physical interactions of flowing ice and rock have inexorably shaped planetary surfaces. Weathering in glacial environments is a significant link in biogeochemical cycles --- carbon and strontium --- on Earth, and may have once played an important role in altering Mars' surface. Despite growing recognition of the importance of low-temperature chemical weathering, these processes are still not well understood. Debris-coated glaciers are also present on Mars, emphasizing the need to study ice-related processes in the evolution of planetary surfaces. During Earth's history, subglacial environments are thought to have sheltered communities of microorganisms from extreme climate variations. On Amazonian Mars, glaciers such as lobate debris aprons (LDA) could have hosted chemolithotrophic communities, making Mars' present glaciers candidates for life preservation. This study characterizes glacial processes on both Earth and Mars. Chemical weathering at Robertson Glacier, a small alpine glacier in the Canadian Rocky Mountains, is examined with a multidisciplinary approach. The relative proportions of differing dissolution reactions at various stages in the glacial system are empirically determined using aqueous geochemistry. Synthesis of laboratory and orbital thermal infrared spectroscopy allows identification of dissolution rinds on hand samples and characterization of carbonate dissolution signals at orbital scales, while chemical and morphological evidence for thin, discontinuous weathering rinds at microscales are evident from electron microscopy. Subglacial dissolution rates are found to outpace those of the proglacial till plain; biologically-mediated pyrite oxidation drives the bulk of this acidic weathering. Second, the area-elevation relationship, or hypsometry, of LDA in the midlatitudes of Mars is characterized. These glaciers are believed to have formed ˜500 Ma during a climate excursion. 
Hypsometric measurements of these debris-covered glaciers

  18. A high-throughput contact-hole resolution metric for photoresists:Full-process sensitivity study

    SciTech Connect

    Anderson, Christopher N.; Naulleau, Patrick P.

    2008-01-22

    The ability to accurately quantify the intrinsic resolution of chemically amplified photoresists is critical for the optimization of resists for extreme ultraviolet (EUV) lithography. We have recently reported on two resolution metrics that have been shown to extract resolution numbers consistent with direct observation. In this paper we examine the previously reported contact-hole resolution metric and explore the sensitivity of the metric to potential error sources associated with the experimental side of the resolution extraction process. For EUV exposures at the SEMATECH Berkeley microfield exposure tool, we report a full-process error-bar in extracted resolution of 1.75 nm RMS and verify this result experimentally.
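A "full-process error-bar" of this kind is typically built by combining the error contributions of independent steps in quadrature (root-sum-square). A minimal sketch; the step names and nm values below are invented for illustration and are not the paper's actual error budget:

```python
import math

# Hypothetical RMS error contributions (nm) from individual steps of the
# resolution-extraction process; illustrative values only.
contributions = {
    "dose calibration": 1.0,
    "focus drift": 0.9,
    "SEM metrology": 0.8,
    "fit uncertainty": 0.7,
}

# Independent error sources combine in quadrature (root-sum-square).
total_rms = math.sqrt(sum(v**2 for v in contributions.values()))
print(f"{total_rms:.2f} nm RMS")
```

The quadrature sum grows slowly with the number of small contributions, which is why a handful of sub-nm error sources can still yield a combined error bar of order 1-2 nm.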

  19. Sensitivity analysis of the add-on price estimate for the silicon web growth process

    NASA Technical Reports Server (NTRS)

    Mokashi, A. R.

    1981-01-01

    The web growth process, a silicon-sheet technology option developed for the flat-plate solar array (FSA) project, was examined. Base-case data for the technical and cost parameters are projected for the technical and commercial readiness phases of the FSA project. The process add-on price is analyzed using the base-case data for cost parameters such as equipment, space, direct labor, materials and utilities, and for production parameters such as growth rate and run length, with a computer program developed specifically to perform the sensitivity analysis with improved price estimation. Silicon price, sheet thickness and cell efficiency are also discussed.

  20. Dynamic speckle-interferometer for intracellular processes analyses at high optical magnification

    NASA Astrophysics Data System (ADS)

    Baharev, A. A.; Vladimirov, A. P.; Malygin, A. S.; Mikhailova, Y. A.; Novoselova, I. A.; Yakin, D. I.; Druzhinin, A. V.

    2015-05-01

    In the present work, the dynamics of biospeckles are used to study processes occurring in cells arranged in a single layer. Many diseases are rooted in changes in the structural and functional properties of the molecular components of cells, caused by the influence of external factors and internal functional disorders. The purpose of this work is to test a speckle interferometer designed for the analysis of cellular metabolism in individual cells. The correlation coefficient (η) of optical signals proportional to the radiation intensity I, recorded at two points in time t, is used as a parameter characterizing the metabolic activity of cells. At 320x magnification, for a cell diameter of 20 microns, the value of η can be determined over an area of 6 microns.
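The correlation coefficient η between intensities recorded at two instants can be computed as a Pearson correlation over the analysis window. A sketch with synthetic frames; the window size and the perturbation model for an "active" cell are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def eta(I_t1, I_t2):
    # Pearson correlation of two speckle intensity frames (flattened).
    a, b = I_t1.ravel(), I_t2.ravel()
    return np.corrcoef(a, b)[0, 1]

# Synthetic 6x6-pixel analysis windows: a static speckle field and one
# perturbed by intracellular motion (hypothetical model).
frame = rng.random((6, 6))
static = eta(frame, frame)                              # unchanged field
active = eta(frame, frame + 0.5 * rng.random((6, 6)))   # "metabolically active"

print(round(static, 3), round(active, 3))
```

A static field gives η = 1; the more the intensity pattern decorrelates between the two instants, the lower η falls, which is what makes it usable as a metabolic-activity parameter.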

  1. The Sensitivity of r-PROCESS Nucleosynthesis to the Properties of Neutron-Rich Nuclei

    NASA Astrophysics Data System (ADS)

    Surman, R.; Mumpower, M. R.; Cass, J.; Aprahamian, A.

    2014-09-01

    About half of the heavy elements in the Solar System were created by rapid neutron capture, or r-process, nucleosynthesis. In the r-process, heavy elements are built up via a sequence of neutron captures and beta decays in which an intense neutron flux pushes material out towards the neutron drip line. The nuclear network simulations used to test potential astrophysical scenarios for the r-process therefore require nuclear physics data (masses, beta decay lifetimes, neutron capture rates, fission probabilities) for thousands of nuclei far from stability. Only a small fraction of this data has been experimentally measured. Here we discuss recent sensitivity studies that aim to determine the nuclei whose properties are most crucial for r-process calculations.

  2. Treatment of exhaust fluorescent lamps to recover yttrium: Experimental and process analyses

    SciTech Connect

    De Michelis, Ida; Ferella, Francesco; Varelli, Ennio Fioravante; Veglio, Francesco

    2011-12-15

    Highlights: > Recovery of yttrium from spent fluorescent lamps by sulphuric acid leaching. > The use of sulphuric acid reduces calcium dissolution. > The main contaminants of the fluorescent powder are Si, Pb, Ca and Ba. > Hydrated yttrium oxalate, recovered by selective precipitation, is quite pure (>90%). > We have studied the whole process for the treatment of this dangerous waste (plant capability). - Abstract: The paper deals with the recovery of yttrium from fluorescent powder coming from the dismantling of spent fluorescent tubes. Metals are leached using different acids (nitric, hydrochloric and sulphuric) and ammonia in different leaching tests. These tests show that ammonia is not suitable for recovering yttrium, whereas HNO3 produces toxic vapours. A full factorial design is carried out with HCl and H2SO4 to evaluate the influence of the operating factors. The HCl and H2SO4 leaching systems give similar results in terms of yttrium extraction yield, but the latter reduces calcium extraction, an advantage during downstream recovery of yttrium compounds. The greatest extraction of yttrium is obtained at 20% w/v S/L ratio, 4 N H2SO4 concentration and 90 °C. Yttrium and calcium yields are nearly 85% and 5%, respectively. The analysis of variance shows that acid concentration alone and the interaction between acid and pulp density have a significant positive effect on yttrium solubilization in both the HCl and H2SO4 media. Two models are empirically developed to estimate yttrium and calcium concentrations during leaching. Precipitation tests demonstrate that at least the stoichiometric amount of oxalic acid is necessary to recover yttrium efficiently, and a pure yttrium oxalate n-hydrate can be produced (99% grade). The process is economically feasible if other components of the fluorescent lamps (glass, ferrous and non-ferrous scraps) are recovered after the equipment dismantling and valorized

  3. Spectroscopic analyses of chemical adaptation processes within microalgal biomass in response to changing environments.

    PubMed

    Vogt, Frank; White, Lauren

    2015-03-31

    Via photosynthesis, marine phytoplankton transforms large quantities of inorganic compounds into biomass. This has considerable environmental impacts as microalgae contribute for instance to counter-balancing anthropogenic releases of the greenhouse gas CO2. On the other hand, high concentrations of nitrogen compounds in an ecosystem can lead to harmful algae blooms. In previous investigations it was found that the chemical composition of microalgal biomass is strongly dependent on the nutrient availability. Therefore, it is expected that algae's sequestration capabilities and productivity are also determined by the cells' chemical environments. For investigating this hypothesis, novel analytical methodologies are required which are capable of monitoring live cells exposed to chemically shifting environments followed by chemometric modeling of their chemical adaptation dynamics. FTIR-ATR experiments have been developed for acquiring spectroscopic time series of live Dunaliella parva cultures adapting to different nutrient situations. Comparing experimental data from acclimated cultures to those exposed to a chemically shifted nutrient situation reveals insights in which analyte groups participate in modifications of microalgal biomass and on what time scales. For a chemometric description of these processes, a data model has been deduced which explains the chemical adaptation dynamics explicitly rather than empirically. First results show that this approach is feasible and derives information about the chemical biomass adaptations. Future investigations will utilize these instrumental and chemometric methodologies for quantitative investigations of the relation between chemical environments and microalgal sequestration capabilities. PMID:25813024

  4. Analyses of precipitation processes of BIS(dimethylglyoximato)Ni(II) and related complexes

    NASA Astrophysics Data System (ADS)

    Kozlovskii, M. I.; Wakita, H.; Masuda, I.

    1983-03-01

    Precipitates of Ni(dioximato)2 complexes, where dioximato is the 2,3-butanedione dioximate (dimethylglyoximate: dmgH), 2,3-pentanedione dioximate (ethylmethylglyoximate: emgH) or 1,2-cyclohexanedione dioximate (nioximate: nioxH) monoanion, were formed by direct mixing of NiCl2 and dioxime solutions in the molar ratios [dioxime]/[NiCl2] = 0.57-5.0 for dmgH2, 1.0-2.2 for emgH2, and 0.03-0.09 for nioxH2. The precipitation processes, followed by light-scattering measurements, were found to fit Avrami's equation. This made it possible to obtain the induction periods for the precipitation. The p values, the number of molecules in a "nucleus", were estimated from these induction periods and the evaluated concentrations of the supersaturated solutions of the complexes; these values were 3.58 for Ni(dmgH)2, 2.73 for Ni(emgH)2, and 2.81 for Ni(nioxH)2 precipitates.
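Fitting a precipitation curve to Avrami's equation, X(t) = 1 - exp(-k t^n), is commonly done by linearizing it as ln(-ln(1 - X)) = n ln t + ln k and fitting a straight line. A sketch with noise-free synthetic data; the k and n values are assumed, not those of the Ni complexes:

```python
import numpy as np

# Synthetic transformed-fraction data from Avrami's equation X = 1 - exp(-k t^n).
k_true, n_true = 0.05, 2.7
t = np.linspace(0.5, 10, 40)
X = 1 - np.exp(-k_true * t**n_true)

# Linearize: ln(-ln(1 - X)) = n ln t + ln k, then fit a straight line.
y = np.log(-np.log(1 - X))
n_fit, lnk_fit = np.polyfit(np.log(t), y, 1)
k_fit = np.exp(lnk_fit)

print(round(n_fit, 2), round(k_fit, 3))
```

With real light-scattering data the same fit recovers the Avrami exponent n and rate constant k, from which quantities such as the induction period can then be read off.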

  5. Experimental and theoretical analyses on the ultrasonic cavitation processing of Al-based alloys and nanocomposites

    NASA Astrophysics Data System (ADS)

    Jia, Shian

    Strong evidence shows that the microstructure and mechanical properties of a cast component can be significantly improved if nanoparticles are used as reinforcement to form a metal-matrix nano-composite (MMNC). In this paper, 6061/A356 nanocomposite castings are fabricated using the ultrasonic stirring technology (UST). The 6061/A356 alloy and Al2O3/SiC nanoparticles are used as the matrix alloy and the reinforcement, respectively. Nanoparticles are injected into the molten metal and dispersed by ultrasonic cavitation and acoustic streaming. The applied UST parameters in the current experiments are used to validate a recently developed multiphase Computational Fluid Dynamics (CFD) model, which is used to model the nanoparticle dispersion during UST processing. The CFD model accounts for turbulent fluid flow, heat transfer and the complex interaction between the molten alloy and nanoparticles using the ANSYS Fluent Dense Discrete Phase Model (DDPM). The modeling study includes the effects of the ultrasonic probe location and the initial location where the nanoparticles are injected into the molten alloy. The microstructure, mechanical behavior and mechanical properties of the nanocomposite castings have also been investigated in detail. The current experimental results show that the tensile strength and elongation of the as-cast nanocomposite samples (6061/A356 alloy reinforced by Al2O3 or SiC nanoparticles) are improved. The addition of the Al2O3 or SiC nanoparticles in the 6061/A356 alloy matrix changes the fracture mechanism from brittle-dominated to ductile-dominated.

  6. Adhesion improvement of electroless copper plating on phenolic resin matrix composite through a tin-free sensitization process

    NASA Astrophysics Data System (ADS)

    Wang, Yong; Bian, Cheng; Jing, Xinli

    2013-04-01

    In order to improve the adhesion of electroless copper plating on phenolic resin matrix composite (PRMC), a new and efficient tin-free sensitization process has been developed. Electroless copper plating is achieved in three steps: (i) chemical etching with potassium permanganate solution; (ii) sensitization and activation with glucose and silver nitrate solutions, respectively; and (iii) electroless copper plating. Compared with a sample sensitized with stannous chloride (SnCl2), the copper plating obtained with the tin-free process showed excellent adhesion to the PRMC substrate, but a lower plating rate and conductivity. Additionally, the morphology of the copper plating was affected by the sensitization process, with the tin-free process favoring the formation of large spherical copper polycrystals. Although slightly more complicated, the new sensitization process is low-cost and environmentally friendly, and could be applied to large-scale commercial manufacturing.

  7. Treatment of exhaust fluorescent lamps to recover yttrium: experimental and process analyses.

    PubMed

    De Michelis, Ida; Ferella, Francesco; Varelli, Ennio Fioravante; Vegliò, Francesco

    2011-12-01

    The paper deals with the recovery of yttrium from fluorescent powder produced by dismantling spent fluorescent tubes. Metals are leached using different acids (nitric, hydrochloric and sulphuric) and ammonia in different leaching tests. These tests show that ammonia is not suitable for recovering yttrium, whereas HNO3 produces toxic vapours. A full factorial design is carried out with HCl and H2SO4 to evaluate the influence of the operating factors. The HCl and H2SO4 leaching systems give similar yttrium extraction yields, but the latter reduces calcium extraction, an advantage for the downstream recovery of yttrium compounds. The greatest yttrium extraction is obtained at a 20% w/v S/L ratio, 4 N H2SO4 concentration and 90°C; the yttrium and calcium yields are nearly 85% and 5%, respectively. The analysis of variance shows that acid concentration alone and the interaction between acid concentration and pulp density have a significant positive effect on yttrium solubilization in both HCl and H2SO4 media. Two empirical models are developed to estimate yttrium and calcium concentrations during leaching. Precipitation tests demonstrate that at least the stoichiometric amount of oxalic acid is necessary to recover yttrium efficiently, and that a pure yttrium oxalate n-hydrate can be produced (99% grade). The process is economically feasible if the other components of the fluorescent lamps (glass, ferrous and non-ferrous scrap) are recovered after equipment dismantling and valorized, in addition to the fee usually paid to recycling companies for the collection, treatment or final disposal of such fluorescent powders. PMID:21840197
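    The full factorial analysis mentioned above can be illustrated with a minimal 2-level, 2-factor design: each main effect is the difference between the mean response at the high and low levels, and the interaction uses the product of the coded levels. The coded factors and yield values below are invented for illustration, not the paper's data:

```python
# Sketch: main effects and interaction from a 2^2 full factorial
# leaching experiment (factors: acid concentration A, pulp density B).
import itertools
import numpy as np

# Coded levels (-1 / +1) for the four runs, in standard order
runs = list(itertools.product([-1, 1], repeat=2))
A = np.array([a for a, b in runs], dtype=float)
B = np.array([b for a, b in runs], dtype=float)

# Hypothetical yttrium extraction yields (%) for the four runs
yields = np.array([52.0, 71.0, 58.0, 85.0])

# Effect = mean(response at +1) - mean(response at -1)
effect_A = yields[A == 1].mean() - yields[A == -1].mean()
effect_B = yields[B == 1].mean() - yields[B == -1].mean()
effect_AB = yields[A * B == 1].mean() - yields[A * B == -1].mean()

print(f"main effect A (acid):  {effect_A:+.1f}")
print(f"main effect B (pulp):  {effect_B:+.1f}")
print(f"interaction AB:        {effect_AB:+.1f}")
```

    In a real analysis these effects would be tested for significance against replicate error, as in the paper's analysis of variance.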

  8. Modelling hydrological processes and analysing water-related ecosystem services of Western Siberian lowland basins

    NASA Astrophysics Data System (ADS)

    Schmalz, Britta; Kiesel, Jens; Kruse, Marion; Pfannerstill, Matthias; Sheludkov, Artyom; Khoroshavin, Vitaliy; Veshkurseva, Tatyana; Müller, Felix; Fohrer, Nicola

    2015-04-01

    For discussing and planning the sustainable land management of river basins, stakeholders need suitable information on the spatio-temporal patterns of hydrological components and ecosystem services. The ecosystem services concept, i.e. services provided by ecosystems that contribute to human welfare, provides comprehensive information for sustainable river management. This study shows an approach that uses ecohydrological modelling results to quantify and assess water-related ecosystem services in three lowland river basins in Western Siberia, a region of global significance in terms of carbon sequestration, agricultural production and biodiversity preservation. Using the ecohydrological model SWAT, the three basins Pyschma (16762 km²), Vagai (3348 km²) and Loktinka (373 km²) were modelled along a gradient through the landscape units taiga, pre-taiga and forest steppe. For a correct representation of Siberian lowland hydrology, the consideration of snowmelt and retention of surface runoff, as well as the implementation of a second groundwater aquifer, was of great importance. Good to satisfactory model performance was obtained for the extreme hydrological conditions. The simulated SWAT output variables of different hydrological processes were used as indicators for the two regulating services, water flow regulation and erosion regulation. The model results were translated into a relative ecosystem service valuation scale. The resulting ecosystem service maps show different spatial and seasonal patterns. Although the high-resolution modelling results are averaged out within the aggregated relative valuation scale, seasonal differences can be depicted: during snowmelt, regulation relevance is low, especially for water flow regulation, whereas a very high regulation relevance was calculated for the vegetation period during summer and for the winter period. The SWAT model serves as a suitable quantification method for the assessment of water

  9. Sensitivity study and parameter optimization of OCD tool for 14nm finFET process

    NASA Astrophysics Data System (ADS)

    Zhang, Zhensheng; Chen, Huiping; Cheng, Shiqiu; Zhan, Yunkun; Huang, Kun; Shi, Yaoming; Xu, Yiping

    2016-03-01

    Optical critical dimension (OCD) measurement has been widely demonstrated as an essential metrology method for monitoring advanced IC processes at the 90 nm technology node and beyond. However, the rapidly shrinking critical dimensions of semiconductor devices and the increasing complexity of the manufacturing process bring new challenges to OCD. The measurement precision of OCD technology relies heavily on the optical hardware configuration, the spectral types, and the inherent interactions between incident light and the various materials and topological structures; sensitivity analysis and parameter optimization are therefore critical in OCD applications. This paper presents a method for finding the most sensitive measurement configuration, enhancing metrology precision and reducing the impact of noise to the greatest extent. In this work, the sensitivity of different types of spectra over a series of hardware configurations of incidence and azimuth angles was investigated, so that the optimum hardware measurement configuration and spectrum parameters could be identified. FinFET structures at the 14 nm technology node were constructed to validate the algorithm. This method provides guidance for estimating measurement precision before measuring actual device features and will be beneficial for OCD hardware configuration.

  10. Preliminary Thermal-Mechanical Sizing of Metallic TPS: Process Development and Sensitivity Studies

    NASA Technical Reports Server (NTRS)

    Poteet, Carl C.; Abu-Khajeel, Hasan; Hsu, Su-Yuen

    2002-01-01

    The purpose of this research was to perform sensitivity studies and develop a process to perform thermal and structural analysis and sizing of the latest Metallic Thermal Protection System (TPS) developed at NASA LaRC (Langley Research Center). Metallic TPS is a key technology for reducing the cost of reusable launch vehicles (RLV), offering the combination of increased durability and competitive weights when compared to other systems. Accurate sizing of metallic TPS requires combined thermal and structural analysis. Initial sensitivity studies were conducted using transient one-dimensional finite element thermal analysis to determine the influence of various TPS and analysis parameters on TPS weight. The thermal analysis model was then used in combination with static deflection and failure mode analysis of the sandwich panel outer surface of the TPS to obtain minimum weight TPS configurations at three vehicle stations on the windward centerline of a representative RLV. The coupled nature of the analysis requires an iterative analysis process, which will be described herein. Findings from the sensitivity analysis are reported, along with TPS designs at the three RLV vehicle stations considered.

  11. Integration of sensitivity and bifurcation analysis to detect critical processes in a model combining signalling and cell population dynamics

    NASA Astrophysics Data System (ADS)

    Nikolov, S.; Lai, X.; Liebal, U. W.; Wolkenhauer, O.; Vera, J.

    2010-01-01

    In this article we present and test a strategy to integrate, in a sequential manner, sensitivity analysis, bifurcation analysis and predictive simulations. Our strategy uses some of these methods in a coordinated way, such that information generated in one step feeds into the definition of further analyses and helps refine the structure of the mathematical model. The aim of the method is to help in the design of more informative predictive simulations, which focus on critical model parameters and the biological effects of their modulation. We tested our methodology with a multilevel model accounting for the effect of erythropoietin (Epo)-mediated JAK2-STAT5 signalling in erythropoiesis. Our analysis revealed that time-delays associated with the proliferation-differentiation process are critical to induce pathological sustained oscillations, whereas the modulation of time-delays related to intracellular signalling and hypoxia-controlled physiological dynamics is not enough to induce self-oscillations in the system. Furthermore, our results suggest that the system is able to compensate (through the physiological-level feedback loop on hypoxia) for the partial impairment of intracellular signalling processes (downregulation or overexpression of the Epo receptor complex and STAT5), but cannot control impairment in some critical physiological-level processes, which provokes the emergence of pathological oscillations.

  12. Pain Processing after Social Exclusion and Its Relation to Rejection Sensitivity in Borderline Personality Disorder

    PubMed Central

    Bungert, Melanie; Koppe, Georgia; Niedtfeld, Inga; Vollstädt-Klein, Sabine; Schmahl, Christian

    2015-01-01

    Objective There is a general agreement that physical pain serves as an alarm signal for the prevention of and reaction to physical harm. It has recently been hypothesized that “social pain,” as induced by social rejection or abandonment, may rely on comparable, phylogenetically old brain structures. As plausible as this theory may sound, scientific evidence for this idea is sparse. This study therefore attempts to link both types of pain directly. We studied patients with borderline personality disorder (BPD) because BPD is characterized by opposing alterations in physical and social pain; hyposensitivity to physical pain is associated with hypersensitivity to social pain, as indicated by an enhanced rejection sensitivity. Method Twenty unmedicated female BPD patients and 20 healthy participants (HC, matched for age and education) played a virtual ball-tossing game (cyberball), with the conditions for exclusion, inclusion, and a control condition with predefined game rules. Each cyberball block was followed by a temperature stimulus (with a subjective pain intensity of 60% in half the cases). The cerebral responses were measured by functional magnetic resonance imaging. The Adult Rejection Sensitivity Questionnaire was used to assess rejection sensitivity. Results Higher temperature heat stimuli had to be applied to BPD patients relative to HCs to reach a comparable subjective experience of painfulness in both groups, which suggested a general hyposensitivity to pain in BPD patients. Social exclusion led to a subjectively reported hypersensitivity to physical pain in both groups that was accompanied by an enhanced activation in the anterior insula and the thalamus. In BPD, physical pain processing after exclusion was additionally linked to enhanced posterior insula activation. After inclusion, BPD patients showed reduced amygdala activation during pain in comparison with HC. In BPD patients, higher rejection sensitivity was associated with lower activation

  13. SML resist processing for high-aspect-ratio and high-sensitivity electron beam lithography

    NASA Astrophysics Data System (ADS)

    Mohammad, Mohammad Ali; Dew, Steven K.; Stepanova, Maria

    2013-03-01

    A detailed process characterization of the SML electron beam resist for high-aspect-ratio nanopatterning at high sensitivity is presented. SML contrast curves were generated for methyl isobutyl ketone (MIBK), MIBK/isopropyl alcohol (IPA) (1:3), IPA/water (7:3), n-amyl acetate, xylene, and xylene/methanol (3:1) developers. Using the IPA/water developer, the sensitivity of SML was improved considerably and found to be comparable to the benchmark polymethylmethacrylate (PMMA) resist, without affecting the aspect-ratio performance. Employing 30-keV exposures and ultrasonic IPA/water development, an aspect ratio of 9:1 was achieved in 50-nm half-pitch dense grating patterns, a more than twofold improvement over PMMA. The pattern transfer performance of SML is also addressed through the demonstration of 25-nm lift-off features.

  14. A case study analysing the process of analogy-based learning in a teaching unit about simple electric circuits

    NASA Astrophysics Data System (ADS)

    Paatz, Roland; Ryder, James; Schwedes, Hannelore; Scott, Philip

    2004-09-01

    The purpose of this case study is to analyse the learning processes of a 16-year-old student as she learns about simple electric circuits in response to an analogy-based teaching sequence. Analogical thinking processes are modelled by a sequence of four steps according to Gentner's structure mapping theory (activate base domain, postulate local matches, connect them to a global match, draw candidate inferences). We consider whether Gentner's theory can be used to account for the details of this specific teaching/learning context. The case study involved video-taping teaching and learning activities in a 10th-grade high school course in Germany. Teaching used water flow through pipes as an analogy for electrical circuits. Using Gentner's theory, relational nets were created from the student's statements at different stages of her learning. Overall, these nets reflect the four steps outlined earlier. We also consider to what extent the learning processes revealed by this case study are different from previous analyses of contexts in which no analogical knowledge is available.

  15. Neural Processing of Calories in Brain Reward Areas Can be Modulated by Reward Sensitivity

    PubMed Central

    van Rijn, Inge; Griffioen-Roose, Sanne; de Graaf, Cees; Smeets, Paul A. M.

    2016-01-01

    A food's reward value is dependent on its caloric content. Furthermore, a food's acute reward value also depends on hunger state. The drive to obtain rewards (reward sensitivity), however, differs between individuals. Here, we assessed the association between brain responses to calories in the mouth and trait reward sensitivity in different hunger states. Firstly, we assessed this in data from a functional neuroimaging study (van Rijn et al., 2015), in which participants (n = 30) tasted simple solutions of a non-caloric sweetener with or without a non-sweet carbohydrate (maltodextrin) during hunger and satiety. Secondly, we expanded these analyses to regular drinks by assessing the same relationship in data from a study in which soft drinks sweetened with either sucrose or a non-caloric sweetener were administered during hunger (n = 18) (Griffioen-Roose et al., 2013). First, taste activation by the non-caloric solution/soft drink was subtracted from that by the caloric solution/soft drink to eliminate sweetness effects and retain activation induced by calories. Subsequently, this difference in taste activation was correlated with reward sensitivity as measured with the BAS drive subscale of the Behavioral Activation System (BAS) questionnaire. When participants were hungry and tasted calories from the simple solution, brain activation in the right ventral striatum (caudate), right amygdala and anterior cingulate cortex (bilaterally) correlated negatively with BAS drive scores. In contrast, when participants were satiated, taste responses correlated positively with BAS drive scores in the left caudate. These results were not replicated for soft drinks. Thus, neural responses to oral calories from maltodextrin were modulated by reward sensitivity in reward-related brain areas. This was not the case for sucrose. This may be due to the direct detection of maltodextrin, but not sucrose, in the oral cavity. Also, in a familiar beverage, detection of calories per se may be

  16. Evaluating Processes, Parameters and Observations Using Cross Validation and Computationally Frugal Sensitivity Analysis Methods

    NASA Astrophysics Data System (ADS)

    Foglia, L.; Mehl, S.; Hill, M. C.

    2013-12-01

    Sensitivity analysis methods are used to identify the measurements most likely to provide important information for model development and predictions, and therefore to identify critical processes. Methods range from computationally demanding Monte Carlo and cross-validation methods to very computationally efficient linear methods. The methods are able to account for interrelations between parameters, but some argue that because linear methods neglect the effects of model nonlinearity, they are not worth considering when examining complex, nonlinear models of environmental systems. However, when faced with the computationally demanding models needed to simulate, for example, climate change, the chance of obtaining fundamental insights (such as important parameters and relationships between predictions and parameters) with few model runs is tempting. In the first part of this work, comparisons of local sensitivity analysis and cross-validation are conducted using a nonlinear groundwater model of the Maggia Valley, Southern Switzerland; sensitivity analyses are then applied to an integrated hydrological model of the same system, where the impact of additional processes and of using different sets of observations on the model results is considered; applicability to models of a variety of situations (climate, water quality, water management) is inferred. Results show that the frugal linear methods produced about 70% of the insight with about 2% of the model runs required by the computationally demanding methods. Regarding important observations, the linear methods were not always able to distinguish between moderately important and unimportant observations. However, they consistently identified the most important observations, which are critical to characterize relationships between parameters and to assess the worth of potential new data collection efforts. Importance both for estimating parameters and for predictions of interest was readily identified. The results suggest that it can be advantageous to consider local
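    The computationally frugal linear methods referred to above are built on local scaled sensitivities, which need only one perturbed model run per parameter. A sketch under simplified assumptions; the two-parameter toy model stands in for an expensive groundwater simulation, and all names and values are illustrative:

```python
# Sketch: local sensitivity analysis with one forward-difference run per
# parameter.  Scaled sensitivity ss_ij = (dy_i/dp_j) * p_j is dimensionless
# in the parameter, so parameters of different units can be compared.
import numpy as np

def model(p):
    # Toy "simulated heads" at three observation points
    k, rech = p
    return np.array([rech / k, rech / (2.0 * k), 10.0 * np.sqrt(rech)])

def scaled_sensitivities(model, p, rel_step=1e-4):
    p = np.asarray(p, dtype=float)
    y0 = model(p)
    sens = np.empty((y0.size, p.size))
    for j in range(p.size):
        dp = rel_step * p[j]
        pp = p.copy()
        pp[j] += dp
        sens[:, j] = (model(pp) - y0) / dp * p[j]  # scale by parameter value
    return sens

p = np.array([2.0, 0.5])  # e.g. hydraulic conductivity, recharge
dss = scaled_sensitivities(model, p)
# Composite scaled sensitivity: root-mean-square over the observations
css = np.sqrt(np.mean(dss**2, axis=0))
print("composite scaled sensitivities:", css)
```

    Ranking parameters by composite scaled sensitivity is one frugal way to decide which parameters the observations can actually constrain.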

  17. Sensitivity of Multiangle, Multispectral Polarimetric Remote Sensing Over Open Oceans to Water-Leaving Radiance: Analyses of RSP Data Acquired During the MILAGRO Campaign

    NASA Technical Reports Server (NTRS)

    Chowdhary, Jacek; Cairns, Brian; Waquet, Fabien; Knobelspiesse, Kirk; Ottaviani, Matteo; Redemann, Jens; Travis, Larry; Mishchenko, Michael

    2012-01-01

    For remote sensing of aerosol over the ocean, there is a contribution from light scattered underwater. The brightness and spectrum of this light depends on the biomass content of the ocean, such that variations in the color of the ocean can be observed even from space. Rayleigh scattering by pure sea water, and Rayleigh-Gans type scattering by plankton, causes this light to be polarized with a distinctive angular distribution. To study the contribution of this underwater light polarization to multiangle, multispectral observations of polarized reflectance over ocean, we previously developed a hydrosol model for use in underwater light scattering computations that produces realistic variations of the ocean color and the underwater light polarization signature of pure sea water. In this work we review this hydrosol model, include a correction for the spectrum of the particulate scattering coefficient and backscattering efficiency, and discuss its sensitivity to variations in colored dissolved organic matter (CDOM) and in the scattering function of marine particulates. We then apply this model to measurements of total and polarized reflectance that were acquired over open ocean during the MILAGRO field campaign by the airborne Research Scanning Polarimeter (RSP). Analyses show that our hydrosol model faithfully reproduces the water-leaving contributions to RSP reflectance, and that the sensitivity of these contributions to Chlorophyll a concentration [Chl] in the ocean varies with the azimuth, height, and wavelength of observations. We also show that the impact of variations in CDOM on the polarized reflectance observed by the RSP at low altitude is comparable to or much less than the standard error of this reflectance whereas their effects in total reflectance may be substantial (i.e. up to >30%). Finally, we extend our study of polarized reflectance variations with [Chl] and CDOM to include results for simulated spaceborne observations.

  18. Stress Sensitivity and Stress Generation in Social Anxiety Disorder: A Temporal Process Approach

    PubMed Central

    Farmer, Antonina S.; Kashdan, Todd B.

    2015-01-01

    Dominant theoretical models of social anxiety disorder (SAD) suggest that people who suffer from function-impairing social fears are likely to react more strongly to social stressors. Researchers have examined the reactivity of people with SAD to stressful laboratory tasks, but there is little knowledge about how stress affects their daily lives. We asked 79 adults from the community, 40 diagnosed with SAD and 39 matched healthy controls, to self-monitor their social interactions, social events, and emotional experiences over two weeks using electronic diaries. These data allowed us to examine associations of social events and emotional well-being both within-day and from one day to the next. Using hierarchical linear modeling, we found all participants to report increases in negative affect and decreases in positive affect and self-esteem on days when they experienced more stressful social events. However, people with SAD displayed greater stress sensitivity, particularly in negative emotion reactions to stressful social events, compared to healthy controls. Groups also differed in how previous days’ events influenced sensitivity to current days’ events. Moreover, we found evidence of stress generation in that the SAD group reported more frequent interpersonal stress, though temporal analyses did not suggest greater likelihood of social stress on days following intense negative emotions. Our findings support the role of heightened social stress sensitivity in SAD, highlighting rigidity in reactions and occurrence of stressful experiences from one day to the next. These findings also shed light on theoretical models of emotions and self-esteem in SAD and present important clinical implications. PMID:25688437

  19. A sensitivity study of s-process: the impact of uncertainties from nuclear reaction rates

    NASA Astrophysics Data System (ADS)

    Vinyoles, N.; Serenelli, A.

    2016-01-01

    The slow neutron capture process (s-process) is responsible for the production of about half the elements beyond the Fe peak. The production sites and the conditions under which the different components of the s-process occur are relatively well established. A detailed quantitative understanding of s-process nucleosynthesis may shed light on physical processes, e.g. convection and mixing, taking place in the production sites. For this, it is important that the impact of uncertainties in the nuclear physics is well understood. In this work we perform a study of the sensitivity of s-process nucleosynthesis, with particular emphasis on the main component, to the nuclear reaction rates. Our aims are to quantify the current uncertainties in the production factors of s-process elements originating from nuclear physics, and to identify key nuclear reactions that require more precise experimental determinations. We studied two different production sites in which the s-process occurs with very different neutron exposures: 1) a low-mass, extremely metal-poor star during the He-core flash (neutron densities n_n reaching values of ∼10^14 cm^-3); 2) the TP-AGB phase of a M⊙, Z=0.01 model, the typical site of the main s-process component (n_n up to 10^8-10^9 cm^-3). In the first case, the main variation in the production of s-process elements comes from the neutron poisons, with relative variations around 30%-50%. In the second, the neutron poisons are not as important because of the higher metallicity of the star, which actually acts as a seed; the final errors in the abundances are therefore much lower, around 10%-25%.

  20. Sensitivity analysis of a dry-processed CANDU fuel pellet's design parameters

    SciTech Connect

    Choi, Hangbok; Ryu, Ho Jin

    2007-07-01

    Sensitivity analysis was carried out in order to investigate the effect of a fuel pellet's design parameters on the performance of dry-processed Canada deuterium uranium (CANDU) fuel and to suggest optimum design modifications. Under normal operating conditions, a dry-processed fuel has a higher internal pressure and plastic strain, due to a higher fuel centerline temperature, than a standard natural uranium CANDU fuel. Under the condition that the fuel bundle dimensions do not change, sensitivity calculations were performed on the fuel's design parameters, such as the axial gap, dish depth, gap clearance and plenum volume. The results showed that the internal pressure and plastic strain of the cladding were most effectively reduced if the fuel element's plenum volume was increased. More specifically, the internal pressure and plastic strain of the dry-processed fuel satisfied the design limits of a standard CANDU fuel when the plenum volume was increased by one half of a pellet, 0.5 mm³/K. (authors)

  1. Characterization of contamination through the use of position sensitive detectors and digital image processing

    SciTech Connect

    Shonka, J.J.; DeBord, D.M.; Bennett, T.E.; Weismann, J.J.

    1996-06-01

    This report describes the development of a significant new method for monitoring radioactive surface contamination. A floor monitor prototype has been designed which uses position-sensitive proportional counter (PSPC) based radiation detectors. The system includes a novel operator interface consisting of an enhanced-reality display providing the operator with three-dimensional contours of contamination and background-subtracted stereo clicks. The process software saves electronic files of survey data at very high rates, along with time-stamped video recordings, and provides completely documented surveys in a visualization-oriented data management system. The data management system allows simple re-assembly of strips of data taken with a linear PSPC and allows visualization and treatment of the data using algorithms developed for processing images from earth resource satellites. This report includes a brief history of the development path for the floor monitor, a discussion of position-sensitive proportional counter technology, and details concerning the process software, post-processor and hardware. The last chapter discusses the field tests that were conducted at five sites and an application of the data management system to data not associated with detector systems.

  2. Parameter Sensitivity and Laboratory Benchmarking of a Biogeochemical Process Model for Enhanced Anaerobic Dechlorination

    NASA Astrophysics Data System (ADS)

    Kouznetsova, I.; Gerhard, J. I.; Mao, X.; Barry, D. A.; Robinson, C.; Brovelli, A.; Harkness, M.; Fisher, A.; Mack, E. E.; Payne, J. A.; Dworatzek, S.; Roberts, J.

    2008-12-01

    A detailed model to simulate trichloroethene (TCE) dechlorination in anaerobic groundwater systems has been developed and implemented through PHAST, a robust and flexible geochemical modeling platform. The approach is comprehensive but retains flexibility, such that models of varying complexity can be used to simulate TCE biodegradation in the vicinity of nonaqueous phase liquid (NAPL) source zones. The complete model considers a full suite of biological (e.g., dechlorination, fermentation, sulfate and iron reduction, electron donor competition, toxic inhibition, pH inhibition), physical (e.g., flow and mass transfer) and geochemical processes (e.g., pH modulation, gas formation, mineral interactions). Example simulations with the model demonstrated that the feedback between biological, physical, and geochemical processes is critical. Successful simulation of a thirty-two-month column experiment with site soil and complex groundwater chemistry, exhibiting both anaerobic dechlorination and endogenous respiration, provided confidence in the modeling approach. A comprehensive suite of batch simulations was then conducted to estimate the sensitivity of predicted TCE degradation to the 36 model input parameters. A local sensitivity analysis was first employed to rank the importance of the parameters, revealing that 5 parameters consistently dominated model predictions across a range of performance metrics. A global sensitivity analysis was then performed to evaluate the influence of a variety of full parameter data sets available in the literature. The modeling study was performed as part of the SABRE (Source Area BioREmediation) project, a public/private consortium whose charter is to determine if enhanced anaerobic bioremediation can result in effective and quantifiable treatment of chlorinated solvent DNAPL source areas. The modeling conducted has provided valuable insight into the complex interactions between processes in the evolving biogeochemical systems.
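    One common choice for the global sensitivity screening step described above is the Morris elementary-effects method, which ranks many parameters with relatively few model runs. A minimal sketch under simplified assumptions; the two-parameter placeholder model and all values are invented (the actual study screened 36 parameters of a reactive-transport model):

```python
# Sketch: Morris elementary-effects screening.  Each trajectory perturbs
# every parameter once; mu* (mean absolute elementary effect) ranks the
# parameters by global influence.
import numpy as np

def model(x):
    # Placeholder degradation metric, strongly controlled by x[0]
    return np.exp(-3.0 * x[0]) + 0.1 * x[1]

def morris_mu_star(model, n_params, n_traj=20, delta=0.25, seed=1):
    rng = np.random.default_rng(seed)
    effects = np.zeros((n_traj, n_params))
    for t in range(n_traj):
        x = rng.uniform(0.0, 1.0 - delta, n_params)  # random start point
        y0 = model(x)
        for j in rng.permutation(n_params):          # random perturbation order
            x[j] += delta
            y1 = model(x)
            effects[t, j] = abs(y1 - y0) / delta     # elementary effect
            y0 = y1
    return effects.mean(axis=0)                      # mu* per parameter

mu_star = morris_mu_star(model, n_params=2)
print("mu* per parameter:", mu_star)
```

    Parameters with small mu* can usually be fixed at nominal values, focusing the expensive simulations on the dominant few.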

  3. Sensitivity of mix in Inertial Confinement Fusion simulations to diffusion processes

    NASA Astrophysics Data System (ADS)

    Melvin, Jeremy; Cheng, Baolian; Rana, Verinder; Lim, Hyunkyung; Glimm, James; Sharp, David H.

    2015-11-01

    We explore two themes related to the simulation of mix within an Inertial Confinement Fusion (ICF) implosion, the role of diffusion (viscosity, mass diffusion and thermal conduction) processes and the impact of front tracking on the growth of the hydrodynamic instabilities. Using the University of Chicago HEDP code FLASH, we study the sensitivity of post-shot simulations of a NIC cryogenic shot to the diffusion models and front tracking of the material interfaces. Results of 1D and 2D simulations are compared to experimental quantities and an analysis of the current state of fully integrated ICF simulations is presented.

  4. Analysing the Policy Process.

    ERIC Educational Resources Information Center

    Humes, Walter M.

    1997-01-01

    Examines the recent development of educational policy analysis as a research field within Scottish education. Discusses "inside" and "outside" approaches to policy analysis; the value of theoretical models for making sense of source material; the potential of discourse analysis, illustrated by reference to Foucault and Lyotard; and the need for…

  5. Implementation of Complex Signal Processing Algorithms for Position-Sensitive Microcalorimeters

    NASA Technical Reports Server (NTRS)

    Smith, Stephen J.

    2008-01-01

    We have recently reported on a theoretical digital signal-processing algorithm for improved energy and position resolution in position-sensitive, transition-edge sensor (PoST) X-ray detectors [Smith et al., Nucl. Instr. and Meth. A 556 (2006) 237]. PoSTs consist of one or more transition-edge sensors (TESs) on a large continuous or pixellated X-ray absorber and are under development as an alternative to arrays of single-pixel TESs. PoSTs provide a means to increase the field-of-view for the smallest number of read-out channels. In this contribution we extend the theoretical correlated energy position optimal filter (CEPOF) algorithm (originally developed for two-TES continuous-absorber PoSTs) to investigate its practical implementation on multi-pixel single-TES PoSTs, or Hydras. We use numerically simulated data for a nine-absorber device, which include realistic detector noise, to demonstrate an iterative scheme that enables convergence on the correct photon absorption position and energy without any a priori assumptions. The position sensitivity of the CEPOF implemented on simulated data agrees very well with the theoretically predicted resolution. We discuss practical issues such as the impact of the random arrival phase of the measured data on the performance of the CEPOF. The CEPOF algorithm demonstrates that full-width-at-half-maximum energy resolution of < 8 eV, coupled with position sensitivity down to a few hundred eV, should be achievable for a fully optimized device.
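
    The idea of jointly recovering position and energy from per-position pulse shapes can be sketched with a much-simplified template fit (not the CEPOF algorithm itself; the templates and noise level here are synthetic assumptions):

```python
import numpy as np

# Hypothetical unit-norm pulse templates, one per absorber of a
# nine-absorber device; real templates would come from detector modeling.
rng = np.random.default_rng(0)
templates = rng.standard_normal((9, 256))
templates /= np.linalg.norm(templates, axis=1, keepdims=True)

def estimate_position_energy(pulse, templates):
    """Least-squares fit of each position template to the measured pulse;
    the best-fitting template gives the absorption position, and its
    fitted amplitude gives the energy estimate."""
    # For unit-norm templates, the optimal amplitude is the dot product
    amps = templates @ pulse
    resid = np.linalg.norm(pulse[None, :] - amps[:, None] * templates, axis=1)
    best = int(np.argmin(resid))
    return best, float(amps[best])

# Simulated photon: absorbed at position 4 with energy 6.0 (arb. units)
true_pos, true_energy = 4, 6.0
pulse = true_energy * templates[true_pos] + 0.01 * rng.standard_normal(256)
pos, energy = estimate_position_energy(pulse, templates)
```

    An iterative scheme like the one in the abstract would refine such an estimate jointly rather than assuming the position is known in advance.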

  6. Perceptual face processing in developmental prosopagnosia is not sensitive to the canonical location of face parts.

    PubMed

    Towler, John; Parketny, Joanna; Eimer, Martin

    2016-01-01

    Individuals with developmental prosopagnosia (DP) are strongly impaired in recognizing faces, but it is controversial whether this deficit is linked to atypical visual-perceptual face processing mechanisms. Previous behavioural studies have suggested that face perception in DP might be less sensitive to the canonical spatial configuration of face parts in upright faces. To test this prediction, we recorded event-related brain potentials (ERPs) to intact upright faces and to faces with spatially scrambled parts (eyes, nose, and mouth) in a group of ten participants with DP and a group of ten age-matched control participants with normal face recognition abilities. The face-sensitive N170 component and the vertex positive potential (VPP) were both enhanced and delayed for scrambled as compared to intact faces in the control group. In contrast, N170 and VPP amplitude enhancements to scrambled faces were absent in the DP group. For control participants, the N170 to scrambled faces was also sensitive to feature locations, with larger and delayed N170 components contralateral to the side where all features appeared in a non-canonical position. No such differences were present in the DP group. These findings suggest that spatial templates of the prototypical feature locations within an upright face are selectively impaired in DP. PMID:26649913

  7. What is the deficit in phonological processing deficits: Auditory sensitivity, masking, or category formation?

    PubMed Central

    Nittrouer, Susan; Shune, Samantha; Lowenstein, Joanna H.

    2012-01-01

    Although children with language impairments, including those associated with reading, usually demonstrate deficits in phonological processing, there is minimal agreement as to the source of those deficits. This study examined two problems hypothesized to be possible sources: either poor auditory sensitivity to speech-relevant acoustic properties, mainly formant transitions, or enhanced masking of those properties. Adults and 8-year-olds with and without phonological processing deficits (PPD) participated. Children with PPD demonstrated weaker abilities than children with typical language development (TLD) in reading, sentence recall, and phonological awareness. Dependent measures were: 1) word recognition; 2) discrimination of spectral glides; and 3) phonetic judgments based on spectral and temporal cues. All tasks were conducted in quiet and in noise. Children with PPD showed neither poorer auditory sensitivity nor greater masking than adults and children with TLD, but did demonstrate an unanticipated deficit in category formation for non-speech sounds. These results suggest that these children may have an underlying deficit in perceptually organizing sensory information to form coherent categories. PMID:21109251

  8. Modeling and Estimating Recall Processing Capacity: Sensitivity and Diagnostic Utility in Application to Mild Cognitive Impairment.

    PubMed

    Wenger, Michael K; Negash, Selamawit; Petersen, Ronald C; Petersen, Lyndsay

    2010-02-01

    We investigate the potential for using latency-based measures of retrieval processing capacity to assess changes in performance specific to individuals with mild cognitive impairment (MCI), a reliable precursor state to Alzheimer's disease. Use of these capacity measures is motivated in part by exploration of the effects of atrophy on a computational model of a basic hippocampal circuit. We use this model to suggest that capacity may be a more sensitive indicator of underlying atrophy than speed of processing, and test this hypothesis by adapting a standard behavioral measure of memory (the free and cued selective reminding test, FCSRT) to allow for the collection of cued recall latencies. Participants were drawn from five groups: college-aged, middle-aged, healthy elderly, those with a diagnosis of MCI, and a sample of MCI control participants. The measure of capacity is shown to offer increased classificatory sensitivity relative to the standard behavioral measures, and is also shown to be the behavioral measure that correlated most strongly with hippocampal volume. PMID:20436932
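
    Latency-based capacity measures of this kind are in the spirit of the workload capacity coefficient, which compares cumulative hazards of response-time distributions; the following sketch uses illustrative data and function names, not the FCSRT measure itself:

```python
import math

def cumulative_hazard(latencies, t):
    """Empirical cumulative hazard H(t) = -log S(t), where S(t) is the
    fraction of response latencies exceeding time t."""
    n = len(latencies)
    surviving = sum(1 for x in latencies if x > t)
    s = max(surviving / n, 1.0 / n)  # clamp to avoid log(0) at late times
    return -math.log(s)

def capacity_coefficient(double_cue, single_a, single_b, t):
    """Capacity C(t): > 1 suggests super-capacity processing, < 1 limited
    capacity, relative to the two single-cue conditions combined."""
    denom = cumulative_hazard(single_a, t) + cumulative_hazard(single_b, t)
    return cumulative_hazard(double_cue, t) / denom if denom else float("nan")

# Illustrative latencies (seconds): double-cue recall is faster than
# either single-cue condition, so capacity at t=0.86 exceeds 1.
c = capacity_coefficient(
    [0.40, 0.45, 0.50, 0.55],   # double-cue latencies
    [0.80, 0.85, 0.90, 0.95],   # single cue A
    [0.82, 0.88, 0.93, 0.99],   # single cue B
    t=0.86,
)
```

    In practice such coefficients are estimated across a range of t with smoothed survivor functions rather than raw counts.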

  10. Estimate design sensitivity to process variation for the 14nm node

    NASA Astrophysics Data System (ADS)

    Landié, Guillaume; Farys, Vincent

    2016-03-01

    Looking for the highest density and best performance, the 14nm technological node saw the development of aggressive designs, with design rules as close as possible to the limit of the process. The edge placement error (EPE) budget is now tighter, and Reticle Enhancement Techniques (RET) must take into account the highest number of parameters to be able to achieve the best printability and guarantee yield requirements. Overlay is a parameter that must be taken into account early during design library development to avoid design structures presenting a high risk of performance failure. This paper presents a method that takes into account overlay variation and Resist Image simulation across the process window to estimate design sensitivity to overlay. Areas in the design are classified with specific metrics, from the highest to the lowest overlay sensitivity. This classification can be used to evaluate the robustness of a full-chip product to process variability or to work with designers during design library development. The ultimate goal is to evaluate critical structures in different contexts and report the most critical ones. In this paper, we study layers interacting together, such as Contact/Poly area overlap or Contact/Active distance. ASML-Brion tooling allowed us to simulate the different resist contours and apply the overlay value to one of the layers. Lithography Manufacturability Check (LMC) detectors are then set to extract the desired values for analysis. Two different approaches have been investigated. The first is a systematic overlay, where we apply the same overlay everywhere on the design. The second uses a real overlay map which has been measured and applied to the LMC tools. The data are then post-processed and compared to the design target to create a classification and show the error distribution.
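
    The systematic-overlay approach can be illustrated with a toy geometric check: shift one layer by a fixed overlay in every direction and measure the worst-case loss of Contact/Poly overlap. The rectangles and the 3 nm overlay value below are illustrative assumptions, not simulated resist contours:

```python
def overlap_area(a, b):
    """Axis-aligned rectangle overlap; rects are (x0, y0, x1, y1) in nm."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0.0) * max(h, 0.0)

def shift(rect, dx, dy):
    return (rect[0] + dx, rect[1] + dy, rect[2] + dx, rect[3] + dy)

def overlay_sensitivity(contact, poly, ov=3.0):
    """Relative worst-case loss of Contact/Poly overlap area under a
    systematic overlay of `ov` nm applied to the contact layer."""
    nominal = overlap_area(contact, poly)
    worst = min(overlap_area(shift(contact, dx, dy), poly)
                for dx in (-ov, 0.0, ov) for dy in (-ov, 0.0, ov))
    return (nominal - worst) / nominal if nominal else 1.0

# Hypothetical 20x20 nm contact centered on a 24x24 nm poly landing pad
contact = (2.0, 2.0, 22.0, 22.0)
poly = (0.0, 0.0, 24.0, 24.0)
loss = overlay_sensitivity(contact, poly)
```

    Ranking many such structures by their loss metric gives a classification from most to least overlay-sensitive, analogous in spirit to the LMC-based classification described above.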