DOE Office of Scientific and Technical Information (OSTI.GOV)
Ionescu-Bujor, Mihaela; Jin Xuezhou; Cacuci, Dan G.
2005-09-15
The adjoint sensitivity analysis procedure for augmented systems for application to the RELAP5/MOD3.2 code system is illustrated. Specifically, the adjoint sensitivity model corresponding to the heat structure models in RELAP5/MOD3.2 is derived and subsequently augmented to the two-fluid adjoint sensitivity model (ASM-REL/TF). The end product, called ASM-REL/TFH, comprises the complete adjoint sensitivity model for the coupled fluid dynamics/heat structure packages of the large-scale simulation code RELAP5/MOD3.2. The ASM-REL/TFH model is validated by computing sensitivities to the initial conditions for various time-dependent temperatures in the test bundle of the Quench-04 reactor safety experiment. This experiment simulates the reflooding with water of uncovered, degraded fuel rods, clad with material (Zircaloy-4) that has the same composition and size as that used in typical pressurized water reactors. The most important response for the Quench-04 experiment is the time evolution of the cladding temperature of heated fuel rods. The ASM-REL/TFH model is subsequently used to perform an illustrative sensitivity analysis of this and other time-dependent temperatures within the bundle. The results computed by using the augmented adjoint sensitivity system, ASM-REL/TFH, highlight the reliability, efficiency, and usefulness of the adjoint sensitivity analysis procedure for computing time-dependent sensitivities.
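To make the adjoint idea concrete, here is a minimal sketch, not the RELAP5/ASM-REL/TFH implementation, of computing the sensitivity of a final temperature to its initial condition for a single, hypothetical heat equation; the model, constants and step count are illustrative assumptions.

```python
import numpy as np

# Discrete adjoint sensitivity of a scalar response J = T(t_f) to the initial
# condition T0 for the explicit-Euler discretization of dT/dt = -k (T - T_env).
k, T_env, dt, nt = 0.5, 300.0, 0.05, 200

def forward(T0):
    T = T0
    for _ in range(nt):
        T = T + dt * (-k * (T - T_env))
    return T

# Each step is T_{n+1} = (1 - k dt) T_n + k dt T_env, so sweeping backward through
# the step Jacobians (the adjoint pass) gives dJ/dT0 exactly:
lam = 1.0                      # dJ/dT at final time
for _ in range(nt):
    lam *= (1.0 - k * dt)      # backward sweep through the step Jacobians
print("adjoint  dJ/dT0:", lam)

# Finite-difference check of the adjoint-computed sensitivity
eps = 1e-3
print("fin-diff dJ/dT0:", (forward(350.0 + eps) - forward(350.0 - eps)) / (2 * eps))
```

For a single response and many initial-condition components, one backward sweep like this replaces one forward perturbation run per component, which is the efficiency argument behind the adjoint procedure.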
Vinnakota, Kalyan C; Beard, Daniel A; Dash, Ranjan K
2009-01-01
Identification of a complex biochemical system model requires appropriate experimental data. Models constructed on the basis of data from the literature often contain parameters that are not identifiable with high sensitivity and therefore require additional experimental data to identify those parameters. Here we report the application of a local sensitivity analysis to design experiments that will improve the identifiability of previously unidentifiable model parameters in a model of mitochondrial oxidative phosphorylation and the tricarboxylic acid cycle. Experiments were designed based on measurable biochemical reactants in a dilute suspension of purified cardiac mitochondria with experimentally feasible perturbations to this system. The experimental perturbations and variables yielding the largest number of parameters above a 5% sensitivity level are presented and discussed.
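A minimal sketch of the kind of screening described here, assuming a toy two-parameter model in place of the oxidative phosphorylation model: normalized local sensitivity coefficients are estimated by central finite differences and parameters are counted against the 5% level. All names and values are hypothetical.

```python
import numpy as np

def local_sensitivities(model, p0, t, rel_step=1e-3):
    """Normalized local sensitivity coefficients S_ij = (p_j / y_i) * dy_i/dp_j,
    estimated with central finite differences around the nominal parameters p0."""
    y0 = model(p0, t)
    S = np.zeros((y0.size, p0.size))
    for j, pj in enumerate(p0):
        dp = rel_step * max(abs(pj), 1e-12)
        p_hi, p_lo = p0.copy(), p0.copy()
        p_hi[j] += dp
        p_lo[j] -= dp
        dy = (model(p_hi, t) - model(p_lo, t)) / (2.0 * dp)
        S[:, j] = dy * pj / np.maximum(np.abs(y0), 1e-12)
    return S

# Hypothetical two-parameter decay model standing in for a measurable reactant time course.
def toy_model(p, t):
    k1, k2 = p
    return np.exp(-k1 * t) + 0.5 * np.exp(-k2 * t)

t = np.linspace(0.0, 10.0, 50)
S = local_sensitivities(toy_model, np.array([0.8, 0.1]), t)
# Count parameters whose peak normalized sensitivity exceeds the 5% level.
above_5pct = (np.max(np.abs(S), axis=0) > 0.05).sum()
print("parameters above 5% sensitivity:", above_5pct)
```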
Improvements to the YbF electron electric dipole moment experiment
NASA Astrophysics Data System (ADS)
Sauer, B. E.; Rabey, I. M.; Devlin, J. A.; Tarbutt, M. R.; Ho, C. J.; Hinds, E. A.
2017-04-01
The standard model of particle physics predicts that the permanent electric dipole moment (EDM) of the electron is very nearly zero. Many extensions to the standard model predict an electron EDM just below current experimental limits. We are currently working to improve the sensitivity of the Imperial College YbF experiment. We have implemented combined laser-radiofrequency pumping techniques which both increase the number of molecules that participate in the EDM experiment and increase the probability of detection. Combined, these techniques give nearly two orders of magnitude increase in experimental sensitivity. At this enhanced sensitivity, magnetic effects which were previously negligible become important. We have developed a new way to construct the electrodes for the electric field plates which minimizes the effect of magnetic Johnson noise. The new YbF experiment is expected to be comparable in sensitivity to the most sensitive measurements of the electron EDM to date. We will also discuss laser cooling techniques which promise an even larger increase in sensitivity.
NASA Technical Reports Server (NTRS)
Alexander, J. Iwan D.; Zhang, Y. Q.; Adebiyi, Adebimpe
1989-01-01
Progress on each task is described. Order-of-magnitude analyses related to liquid zone sensitivity and thermo-capillary flow sensitivity are covered. Progress with numerical models of the sensitivity of isothermal liquid zones is described. Progress towards a numerical model of coupled buoyancy-driven and thermo-capillary convection experiments is also described. Interaction with NASA personnel is covered. Results to date are summarized and discussed in terms of the predicted space station acceleration environment. Work planned for the second year is also discussed.
NASA Technical Reports Server (NTRS)
Carrasco, M.; Penpeci-Talgar, C.; Eckstein, M.
2000-01-01
This study is the first to report the benefits of spatial covert attention on contrast sensitivity across a wide range of spatial frequencies when a target alone was presented in the absence of a local post-mask. We used a peripheral precue (a small circle indicating the target location) to explore the effects of covert spatial attention on contrast sensitivity as assessed by orientation discrimination (Experiments 1-4), detection (Experiments 2 and 3) and localization (Experiment 3) tasks. In all four experiments the target (a Gabor patch ranging in spatial frequency from 0.5 to 10 cpd) was presented alone in one of eight possible locations equidistant from fixation. Contrast sensitivity was consistently higher for peripherally- than for neutrally-cued trials, even though we eliminated variables (distracters, global masks, local masks, and location uncertainty) that are known to contribute to an external noise reduction explanation of attention. When observers were presented with vertical and horizontal Gabor patches, an external noise reduction signal detection model accounted for the cueing benefit in a discrimination task (Experiment 1). However, such a model could not account for this benefit when location uncertainty was reduced by: (a) increasing the overall performance level (Experiment 2); (b) increasing stimulus contrast to enable fine discriminations of slightly tilted suprathreshold stimuli (Experiment 3); or (c) presenting a local post-mask (Experiment 4). Given that attentional benefits occurred under conditions that exclude all variables predicted by the external noise reduction model, these results support the signal enhancement model of attention.
NASA Astrophysics Data System (ADS)
Harshan, S.; Roth, M.; Velasco, E.
2014-12-01
Forecasting of urban weather and climate is of great importance as our cities become more populated and as the combined effects of global warming and local land use changes make urban inhabitants more vulnerable to, e.g., heat waves and flash floods. In meso/global scale models, urban parameterization schemes are used to represent urban effects. However, these schemes require a large set of input parameters related to urban morphological and thermal properties. Obtaining all these parameters through direct measurements is usually not feasible. A number of studies have reported on parameter estimation and sensitivity analysis to adjust and determine the most influential parameters for land surface schemes in non-urban areas. Similar work for urban areas is scarce; in particular, studies on urban parameterization schemes in tropical cities have so far not been reported. In order to address the above issues, the town energy balance (TEB) urban parameterization scheme (part of the SURFEX land surface modeling system) was subjected to a sensitivity and optimization/parameter estimation experiment at a suburban site in tropical Singapore. The sensitivity analysis was carried out as a screening test to identify the most sensitive or influential parameters. Thereafter, an optimization/parameter estimation experiment was performed to calibrate the input parameters. The sensitivity experiment was based on the improved Sobol global variance decomposition method. The analysis showed that parameters related to road, roof and soil moisture have a significant influence on the performance of the model. The optimization/parameter estimation experiment was performed using the AMALGAM (a multi-algorithm genetically adaptive multi-objective method) evolutionary algorithm. The experiment showed a remarkable improvement compared to simulations using the default parameter set. The calibrated parameters from this optimization experiment can be used for further model validation studies to identify inherent deficiencies in the model physics.
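As a rough illustration of the variance-based screening step, the sketch below implements the Saltelli/Jansen Monte Carlo estimators of first-order and total Sobol indices in plain numpy and applies them to a hypothetical surrogate standing in for the TEB response; the surrogate, parameter ranges and sample size are assumptions, not the TEB/SURFEX code.

```python
import numpy as np

rng = np.random.default_rng(0)

def surrogate(x):
    # Hypothetical surrogate for a modeled flux (e.g. sensible heat) as a
    # function of three normalized input parameters.
    a1, a2, a3 = x[:, 0], x[:, 1], x[:, 2]
    return np.sin(np.pi * a1) + 0.5 * np.sin(np.pi * a2) ** 2 + 0.1 * a3 ** 4 * np.sin(np.pi * a1)

def sobol_first_total(f, d, n=20000):
    """Monte Carlo estimates of first-order (S) and total (ST) Sobol indices
    using the Saltelli/Jansen estimators with plain random sampling."""
    A = rng.uniform(-1, 1, size=(n, d))
    B = rng.uniform(-1, 1, size=(n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S, ST = np.zeros(d), np.zeros(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # matrix A with column i taken from B
        fABi = f(ABi)
        S[i] = np.mean(fB * (fABi - fA)) / var
        ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var
    return S, ST

S, ST = sobol_first_total(surrogate, d=3)
print("first-order:", np.round(S, 2), "total:", np.round(ST, 2))
```

In a study like the one above, each surrogate evaluation would be replaced by a full model run, which is why a screening design is used before the more expensive calibration step.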
NASA Astrophysics Data System (ADS)
Bray, J. D.
2016-04-01
Various experiments have been conducted to search for the radio emission from ultra-high-energy (UHE) particles interacting in the lunar regolith. Although they have not yielded any detections, they have been successful in establishing upper limits on the flux of these particles. I present a review of these experiments in which I re-evaluate their sensitivity to radio pulses, accounting for effects which were neglected in the original reports, and compare them with prospective near-future experiments. In several cases, I find that past experiments were substantially less sensitive than previously believed. I apply existing analytic models to determine the resulting limits on the fluxes of UHE neutrinos and cosmic rays (CRs). In the latter case, I amend the model to accurately reflect the fraction of the primary particle energy which manifests in the resulting particle cascade, resulting in a substantial improvement in the estimated sensitivity to CRs. Although these models are in need of further refinement, in particular to incorporate the effects of small-scale lunar surface roughness, their application here indicates that a proposed experiment with the LOFAR telescope would test predictions of the neutrino flux from exotic-physics models, and an experiment with a phased-array feed on a large single-dish telescope such as the Parkes radio telescope would allow the first detection of CRs with this technique, with an expected rate of one detection per 140 h.
A one-dimensional interactive soil-atmosphere model for testing formulations of surface hydrology
NASA Technical Reports Server (NTRS)
Koster, Randal D.; Eagleson, Peter S.
1990-01-01
A model representing a soil-atmosphere column in a GCM is developed for off-line testing of GCM soil hydrology parameterizations. Repeating three representative GCM sensitivity experiments with this one-dimensional model demonstrates that, to first order, the model reproduces a GCM's sensitivity to imposed changes in parameterization and therefore captures the essential physics of the GCM. The experiments also show that by allowing feedback between the soil and atmosphere, the model improves on off-line tests that rely on prescribed precipitation, radiation, and other surface forcing.
Feinstein, Brian A; Goldfried, Marvin R; Davila, Joanne
2012-10-01
The current study used path analysis to examine potential mechanisms through which experiences of discrimination influence depressive and social anxiety symptoms. The sample included 218 lesbians and 249 gay men (total N = 467) who participated in an online survey about minority stress and mental health. The proposed model included two potential mediators (internalized homonegativity and rejection sensitivity) as well as a culturally relevant antecedent to experiences of discrimination (childhood gender nonconformity). Results indicated that the data fit the model well, supporting the mediating roles of internalized homonegativity and rejection sensitivity in the associations between experiences of discrimination and symptoms of depression and social anxiety. Results also supported the role of childhood gender nonconformity as an antecedent to experiences of discrimination. Although there were no significant gender differences in overall model fit, some of the associations within the model were significantly stronger for gay men than for lesbians. These findings suggest potential mechanisms through which experiences of discrimination influence well-being among sexual minorities, which has important implications for research and clinical practice with these populations. (PsycINFO Database Record (c) 2012 APA, all rights reserved).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koetke, D.D.; Manweiler, R.W.; Shirvel Stanislaus, T.D.
1993-01-01
The work done on this project was focused on two LAMPF experiments. The MEGA experiment, a high-sensitivity search for the lepton-family-number-violating decay μ → eγ to a sensitivity which, measured in terms of the branching ratio, BR = [μ → eγ
BEATBOX v1.0: Background Error Analysis Testbed with Box Models
NASA Astrophysics Data System (ADS)
Knote, Christoph; Barré, Jérôme; Eckl, Max
2018-02-01
The Background Error Analysis Testbed (BEATBOX) is a new data assimilation framework for box models. Based on the BOX Model eXtension (BOXMOX) to the Kinetic Pre-Processor (KPP), this framework allows users to conduct performance evaluations of data assimilation experiments, sensitivity analyses, and detailed chemical scheme diagnostics from an observing system simulation experiment (OSSE) point of view. The BEATBOX framework incorporates an observation simulator and a data assimilation system with the possibility of choosing ensemble, adjoint, or combined sensitivities. A user-friendly, Python-based interface allows the tuning of many parameters for atmospheric chemistry and data assimilation research as well as for educational purposes, for example the observation error, model covariances, ensemble size, and perturbation distribution in the initial conditions. In this work, the testbed is described and two case studies are presented to illustrate the design of a typical OSSE, data assimilation experiments, a sensitivity analysis, and a method for diagnosing model errors. BEATBOX is released as an open source tool for the atmospheric chemistry and data assimilation communities.
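For intuition about the OSSE setup, here is a toy example in plain Python: a one-box decay model serves as the nature run, synthetic observations are drawn from it, and an ensemble with perturbed initial conditions is updated with a scalar Kalman gain. The model, error values and ensemble size are illustrative assumptions and this is not the BEATBOX/BOXMOX interface.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy OSSE: a first-order chemical decay "box model" dx/dt = -k*x serves as both
# nature run and forecast model; only the initial condition is uncertain.
def box_model(x0, k=0.3, nt=24, dt=1.0):
    x = np.empty(nt)
    x[0] = x0
    for n in range(1, nt):
        x[n] = x[n - 1] * np.exp(-k * dt)
    return x

truth = box_model(10.0)
obs_err = 0.2
obs = truth[::6] + rng.normal(0.0, obs_err, size=truth[::6].size)  # simulated observations

# Ensemble of forecasts with perturbed initial conditions, then a scalar
# ensemble Kalman update of the initial condition using the first observation.
ens_x0 = 10.0 + rng.normal(0.0, 1.5, size=50)
ens = np.array([box_model(x0) for x0 in ens_x0])
Hx = ens[:, 0]                                   # modeled equivalent of obs[0]
K = np.cov(ens_x0, Hx)[0, 1] / (np.var(Hx) + obs_err ** 2)
ens_x0_analysis = ens_x0 + K * (obs[0] - Hx)
print("prior mean:", ens_x0.mean(), "analysis mean:", ens_x0_analysis.mean())
```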
The GammeV suite of experimental searches for axion-like particles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steffen, Jason H.; /Fermilab; Upadhye, Amol
2009-08-01
We report on the design and results of the GammeV search for axion-like particles and for chameleon particles. We also discuss plans for an improved experiment to search for chameleon particles, one which is sensitive to both cosmological and power-law chameleon models. Plans for an improved axion-like particle search using coupled resonant cavities are also presented. This experiment will be more sensitive to axion-like particles than stellar astrophysical models or current helioscope experiments.
Sensitivity of a Simulated Derecho Event to Model Initial Conditions
NASA Astrophysics Data System (ADS)
Wang, Wei
2014-05-01
Since 2003, the MMM division at NCAR has been experimenting with cloud-permitting-scale weather forecasting using the Weather Research and Forecasting (WRF) model. Over the years, we have tested different model physics and tried different initial and boundary conditions. Not surprisingly, we found that the model's forecasts are more sensitive to the initial conditions than to the model physics. In the 2012 real-time experiment, WRF-DART (Data Assimilation Research Testbed) at 15 km was employed to produce initial conditions for a twice-a-day forecast at 3 km. On June 29, this forecast system captured one of the most destructive derecho events on record. In this presentation, we will examine the forecast sensitivity to different model initial conditions and try to understand the important features that may contribute to the success of the forecast.
Reininghaus, Ulrich; Kempton, Matthew J; Valmaggia, Lucia; Craig, Tom K J; Garety, Philippa; Onyejiaka, Adanna; Gayer-Anderson, Charlotte; So, Suzanne H; Hubbard, Kathryn; Beards, Stephanie; Dazzan, Paola; Pariante, Carmine; Mondelli, Valeria; Fisher, Helen L; Mills, John G; Viechtbauer, Wolfgang; McGuire, Philip; van Os, Jim; Murray, Robin M; Wykes, Til; Myin-Germeys, Inez; Morgan, Craig
2016-05-01
While contemporary models of psychosis have proposed a number of putative psychological mechanisms, how these impact on individuals to increase intensity of psychotic experiences in real life, outside the research laboratory, remains unclear. We aimed to investigate whether elevated stress sensitivity, experiences of aberrant novelty and salience, and enhanced anticipation of threat contribute to the development of psychotic experiences in daily life. We used the experience sampling method (ESM) to assess stress, negative affect, aberrant salience, threat anticipation, and psychotic experiences in 51 individuals with first-episode psychosis (FEP), 46 individuals with an at-risk mental state (ARMS) for psychosis, and 53 controls with no personal or family history of psychosis. Linear mixed models were used to account for the multilevel structure of ESM data. In all 3 groups, elevated stress sensitivity, aberrant salience, and enhanced threat anticipation were associated with an increased intensity of psychotic experiences. However, elevated sensitivity to minor stressful events (χ² = 6.3, P = 0.044), activities (χ² = 6.7, P = 0.036), and areas (χ² = 9.4, P = 0.009) and enhanced threat anticipation (χ² = 9.3, P = 0.009) were associated with more intense psychotic experiences in FEP individuals than controls. Sensitivity to outsider status (χ² = 5.7, P = 0.058) and aberrantly salient experiences (χ² = 12.3, P = 0.002) were more strongly associated with psychotic experiences in ARMS individuals than controls. Our findings suggest that stress sensitivity, aberrant salience, and threat anticipation are important psychological processes in the development of psychotic experiences in daily life in the early stages of the disorder. © The Author 2016. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center.
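A minimal sketch of the multilevel modeling approach, assuming synthetic ESM-style data (repeated momentary assessments nested within subjects) and hypothetical variable names; it is not the authors' analysis, only an illustration of a random-intercept mixed model with a stress-by-group interaction.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Synthetic ESM-style data: repeated momentary assessments nested within subjects.
n_subj, n_beep = 60, 40
subj = np.repeat(np.arange(n_subj), n_beep)
group = np.repeat(rng.choice(["control", "ARMS", "FEP"], size=n_subj), n_beep)
stress = rng.normal(0, 1, n_subj * n_beep)
subj_intercept = np.repeat(rng.normal(0, 0.5, n_subj), n_beep)
psychotic = (0.2 * stress + 0.3 * (group == "FEP") * stress
             + subj_intercept + rng.normal(0, 1, n_subj * n_beep))

df = pd.DataFrame({"subject": subj, "group": group, "stress": stress, "psychotic": psychotic})

# Random-intercept model with a stress-by-group interaction, analogous in spirit
# to testing whether momentary stress sensitivity differs between groups.
model = smf.mixedlm("psychotic ~ stress * C(group, Treatment('control'))",
                    df, groups=df["subject"])
result = model.fit()
print(result.summary())
```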
Process modelling for Space Station experiments
NASA Technical Reports Server (NTRS)
Alexander, J. Iwan D.; Rosenberger, Franz; Nadarajah, Arunan; Ouazzani, Jalil; Amiroudine, Sakir
1990-01-01
Examined here is the sensitivity of a variety of space experiments to residual accelerations. In all the cases discussed the sensitivity is related to the dynamic response of a fluid. In some cases the sensitivity can be defined by the magnitude of the response of the velocity field. This response may involve motion of the fluid associated with internal density gradients, or the motion of a free liquid surface. For fluids with internal density gradients, the type of acceleration to which the experiment is sensitive will depend on whether buoyancy driven convection must be small in comparison to other types of fluid motion, or fluid motion must be suppressed or eliminated. In the latter case, the experiments are sensitive to steady and low frequency accelerations. For experiments such as the directional solidification of melts with two or more components, determination of the velocity response alone is insufficient to assess the sensitivity. The effect of the velocity on the composition and temperature field must be considered, particularly in the vicinity of the melt-crystal interface. As far as the response to transient disturbances is concerned, the sensitivity is determined by both the magnitude and frequency of the acceleration and the characteristic momentum and solute diffusion times. The microgravity environment, a numerical analysis of low gravity tolerance of the Bridgman-Stockbarger technique, and modeling crystal growth by physical vapor transport in closed ampoules are discussed.
How does the sensitivity of climate affect stratospheric solar radiation management?
NASA Astrophysics Data System (ADS)
Ricke, K.; Rowlands, D. J.; Ingram, W.; Keith, D.; Morgan, M. G.
2011-12-01
If implementation of proposals to engineer the climate through solar radiation management (SRM) ever occurs, it is likely to be contingent upon climate sensitivity. Despite this, no modeling studies have examined how the effectiveness of SRM forcings differs between typical Atmosphere-Ocean General Circulation Models (AOGCMs) with climate sensitivities close to the Coupled Model Intercomparison Project (CMIP) mean and ones with high climate sensitivities. Here, we use a perturbed physics ensemble modeling experiment to examine variations in the response of climate to SRM under different climate sensitivities. When SRM is used as a substitute for mitigation, its ability to maintain the current climate state gets worse with increased climate sensitivity and with increased concentrations of greenhouse gases. However, our results also demonstrate that the potential of SRM to slow climate change, even at the regional level, grows with climate sensitivity. On average, SRM reduces regional rates of temperature change by more than 90 percent and rates of precipitation change by more than 50 percent in these higher-sensitivity model configurations. To investigate how SRM might behave in models with high climate sensitivity that are also consistent with recent observed climate change, we perform a perturbed physics ensemble (PPE) modelling experiment with the climateprediction.net (cpdn) version of the HadCM3L AOGCM. Like other perturbed physics climate modelling experiments, we simulate past and future climate scenarios using a wide range of model parameter combinations that both reproduce past climate within a specified level of accuracy and simulate future climates with a wide range of climate sensitivities. We chose 43 members ("model versions") from a subset of the 1,550 from the British Broadcasting Corporation (BBC) climateprediction.net project that have data that allow restarts. We use our results to explore how much assessments of SRM that use best-estimate models, and so near-median climate sensitivity, may be ignoring important contingencies associated with implementing SRM in reality. A primary motivation for studying SRM via the injection of aerosols in the stratosphere is to evaluate its potential effectiveness as "insurance" in the case of a higher-than-expected climate response to global warming. We find that this is precisely when SRM appears to be least effective in returning regional climates to their baseline states and reducing regional rates of precipitation change. On the other hand, given the very high regional temperature anomalies associated with rising greenhouse gas concentrations in high-sensitivity models, it is also where SRM is most effective in reducing rates of change relative to a no-SRM alternative.
Benchmark Data Set for Wheat Growth Models: Field Experiments and AgMIP Multi-Model Simulations.
NASA Technical Reports Server (NTRS)
Asseng, S.; Ewert, F.; Martre, P.; Rosenzweig, C.; Jones, J. W.; Hatfield, J. L.; Ruane, A. C.; Boote, K. J.; Thorburn, P.J.; Rotter, R. P.
2015-01-01
The data set includes a current representative management treatment from detailed, quality-tested sentinel field experiments with wheat from four contrasting environments, including Australia, The Netherlands, India and Argentina. Measurements include local daily climate data (solar radiation, maximum and minimum temperature, precipitation, surface wind, dew point temperature, relative humidity, and vapor pressure), soil characteristics, frequent growth measurements, nitrogen in crop and soil, crop and soil water, and yield components. Simulations include results from 27 wheat models and a sensitivity analysis with 26 models and 30 years (1981-2010) for each location, for elevated atmospheric CO2 and temperature changes, a heat stress sensitivity analysis at anthesis, and a sensitivity analysis with soil and crop management variations and a Global Climate Model end-century scenario.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Qing-Guo; Wang, Sai; Zhao, Wen, E-mail: huangqg@itp.ac.cn, E-mail: wangsai@itp.ac.cn, E-mail: wzhao7@ustc.edu.cn
2015-10-01
By taking into account the contamination of foreground radiation, we employ the Fisher matrix to forecast the future sensitivity to the tilt of the power spectrum of primordial tensor perturbations for several ground-based (AdvACT, CLASS, Keck/BICEP3, Simons Array, SPT-3G), balloon-borne (EBEX, Spider) and satellite (CMBPol, COrE, LiteBIRD) B-mode polarization experiments. For the fiducial model n_t = 0, our results show that the satellite experiments give good sensitivity on the tensor tilt n_t, at the level σ(n_t) ≲ 0.1 for r ≳ 2×10⁻³, while the ground-based and balloon-borne experiments give worse sensitivity. By considering the BICEP2/Keck Array and Planck (BKP) constraint on the tensor-to-scalar ratio r, we see that it is impossible for these experiments to test the consistency relation n_t = −r/8 of the canonical single-field slow-roll inflation models.
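A minimal sketch of a Fisher forecast for (r, n_t), assuming a handful of made-up BB band powers with Gaussian errors; the band-power template, noise levels and fiducial values are illustrative assumptions, not the paper's experiment specifications.

```python
import numpy as np

# Toy Fisher forecast for (r, n_t): model a few BB band powers as
# C_ell = r * (ell / ell0) ** n_t * template, with Gaussian band-power errors.
ell = np.array([50.0, 100.0, 200.0, 400.0])
template = 1.0 / ell ** 2            # hypothetical shape of the tensor BB template
sigma_C = 0.02 * template            # hypothetical band-power errors (incl. foreground residuals)
r_fid, nt_fid, ell0 = 0.05, 0.0, 100.0

def band_powers(r, nt):
    return r * (ell / ell0) ** nt * template

# Numerical derivatives of the band powers with respect to (r, n_t).
eps = 1e-4
dC_dr = (band_powers(r_fid + eps, nt_fid) - band_powers(r_fid - eps, nt_fid)) / (2 * eps)
dC_dnt = (band_powers(r_fid, nt_fid + eps) - band_powers(r_fid, nt_fid - eps)) / (2 * eps)

derivs = np.vstack([dC_dr, dC_dnt])
F = derivs @ np.diag(1.0 / sigma_C ** 2) @ derivs.T   # Fisher matrix
cov = np.linalg.inv(F)                                 # parameter covariance forecast
print("sigma(r)   =", np.sqrt(cov[0, 0]))
print("sigma(n_t) =", np.sqrt(cov[1, 1]))
```

The forecast uncertainty on a parameter is the square root of the corresponding diagonal element of the inverse Fisher matrix, which is how limits such as σ(n_t) ≲ 0.1 are obtained in this kind of analysis.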
A Practical Model of Quartz Crystal Microbalance in Actual Applications.
Huang, Xianhe; Bai, Qingsong; Hu, Jianguo; Hou, Dong
2017-08-03
A practical model of the quartz crystal microbalance (QCM) is presented, which considers both the Gaussian distribution characteristic of the mass sensitivity and the influence of the electrodes on the mass sensitivity. The equivalent mass sensitivity of 5 MHz and 10 MHz AT-cut QCMs with different sized electrodes was calculated according to this practical model. The equivalent mass sensitivity of this practical model differs from the Sauerbrey mass sensitivity, and the error between them increases sharply as the electrode radius decreases. A series of experiments in which rigid gold films were plated onto QCMs was carried out, and the results show that this practical model is more accurate than the classical Sauerbrey equation. The practical model based on the equivalent mass sensitivity is convenient and accurate in actual measurements.
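For reference, the classical Sauerbrey relation the abstract compares against is Δf = −C_f Δm/A with C_f = 2 f0² / √(ρ_q μ_q). The sketch below evaluates it and adds an illustrative Gaussian radial weighting as one simple stand-in for a non-uniform mass sensitivity; the film mass, electrode area and Gaussian width are assumptions, not the paper's values.

```python
import numpy as np

# Classical Sauerbrey relation: delta_f = -C_f * delta_m / A, with
# C_f = 2 f0^2 / sqrt(rho_q * mu_q) for an AT-cut quartz crystal.
rho_q = 2.648          # g / cm^3, density of quartz
mu_q = 2.947e11        # g / (cm * s^2), shear modulus of AT-cut quartz

def sauerbrey_df(f0_hz, dm_g, area_cm2):
    c_f = 2.0 * f0_hz ** 2 / np.sqrt(rho_q * mu_q)   # Hz * cm^2 / g
    return -c_f * dm_g / area_cm2

# Hypothetical example: 100 ng of rigid film on a 5 MHz crystal with 0.4 cm^2 active area.
print("Sauerbrey df [Hz]:", sauerbrey_df(5e6, 100e-9, 0.4))

# An assumed Gaussian radial sensitivity profile, illustrating how a non-uniform
# sensitivity lowers the area-averaged response relative to the crystal center:
r = np.linspace(0.0, 0.35, 200)                       # cm, radial position
w = np.exp(-(r / 0.15) ** 2)                          # assumed Gaussian sensitivity profile
effective_fraction = np.trapz(w * 2 * np.pi * r, r) / (np.pi * 0.35 ** 2)
print("area-averaged sensitivity relative to center:", effective_fraction)
```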
Sensitivity of the Boundary Plasma to the Plasma-Material Interface
Canik, John M.; Tang, X. -Z.
2017-01-01
While the sensitivity of the scrape-off layer and divertor plasma to the highly uncertain cross-field transport assumptions is widely recognized, the plasma is also sensitive to the details of the plasma-material interface (PMI) models used as part of comprehensive predictive simulations. In this paper, these PMI sensitivities are studied by varying the relevant sub-models within the SOLPS plasma transport code. Two aspects are explored: the sheath model used as a boundary condition in SOLPS, and the fast particle reflection rates for ions impinging on a material surface. Both of these have been the subject of recent high-fidelity simulation efforts aimed at improving the understanding and prediction of these phenomena. It is found that in both cases quantitative changes to the plasma solution result from modification of the PMI model, with a larger impact in the case of the reflection coefficient variation. This indicates the necessity to better quantify the uncertainties within the PMI models themselves and to perform thorough sensitivity analysis to propagate these throughout the boundary model; this is especially important for validation against experiment, where the error in the simulation is a critical and less-studied piece of the code-experiment comparison.
NASA Astrophysics Data System (ADS)
Bindschadler, Robert
2013-04-01
The SeaRISE (Sea-level Response to Ice Sheet Evolution) project achieved ice-sheet model ensemble responses to a variety of prescribed changes to surface mass balance, basal sliding and ocean boundary melting. Greenland ice sheet models are more sensitive than Antarctic ice sheet models to likely atmospheric changes in surface mass balance, while Antarctic models are most sensitive to basal melting of its ice shelves. An experiment approximating the IPCC's RCP8.5 scenario produces first century contributions to sea level of 22.3 and 7.3 cm from Greenland and Antarctica, respectively, with a range among models of 62 and 17 cm, respectively. By 200 years, these projections increase to 53.2 and 23.4 cm, respectively, with ranges of 79 and 57 cm. The considerable range among models was not only in the magnitude of ice lost, but also in the spatial pattern of response to identical forcing. Despite this variation, the response of any single model to a large range in the forcing intensity was remarkably linear in most cases. Additionally, the results of sensitivity experiments to single types of forcing (i.e., only one of the surface mass balance, or basal sliding, or ocean boundary melting) could be summed to accurately predict any model's result for an experiment when multiple forcings were applied simultaneously. This suggests a limited amount of feedback through the ice sheet's internal dynamics between these types of forcing over the time scale of a few centuries (SeaRISE experiments lasted 500 years).
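The additivity result in the last two sentences can be illustrated with simple arithmetic on a made-up, weakly nonlinear response function (all numbers are hypothetical and not SeaRISE output): anomalies from single-forcing runs are summed and compared with a combined-forcing run.

```python
import numpy as np

# Toy check of additivity: compare a model's response to combined forcing
# with the sum of its responses to each forcing applied alone.
def ice_volume_change(smb_anom, sliding_factor, shelf_melt):
    # Hypothetical, weakly nonlinear response surface standing in for an ice-sheet model.
    return (-3.0 * smb_anom - 8.0 * (sliding_factor - 1.0) - 5.0 * shelf_melt
            - 0.3 * smb_anom * shelf_melt)

control = ice_volume_change(0.0, 1.0, 0.0)
only_smb   = ice_volume_change(0.5, 1.0, 0.0) - control
only_slide = ice_volume_change(0.0, 1.5, 0.0) - control
only_melt  = ice_volume_change(0.0, 1.0, 2.0) - control
combined   = ice_volume_change(0.5, 1.5, 2.0) - control

print("sum of single-forcing anomalies:", only_smb + only_slide + only_melt)
print("combined-forcing anomaly:       ", combined)
```

When the two printed numbers nearly agree, as here, the response is effectively linear in the forcings over the range tested, which is the behaviour the SeaRISE ensemble reports over a few centuries.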
Field warming experiments shed light on the wheat yield response to temperature in China
Zhao, Chuang; Piao, Shilong; Huang, Yao; Wang, Xuhui; Ciais, Philippe; Huang, Mengtian; Zeng, Zhenzhong; Peng, Shushi
2016-01-01
Wheat growth is sensitive to temperature, but the effect of future warming on yield is uncertain. Here, focusing on China, we compiled 46 observations of the sensitivity of wheat yield to temperature change (SY,T, yield change per °C) from field warming experiments and 102 SY,T estimates from local process-based and statistical models. The average SY,T from field warming experiments, local process-based models and statistical models is −0.7±7.8 (±s.d.)% per °C, −5.7±6.5% per °C and 0.4±4.4% per °C, respectively. Moreover, SY,T differs across regions, and warming experiments indicate positive SY,T values in regions where growing-season mean temperature is low and water supply is not limiting, and negative values elsewhere. Gridded crop model simulations from the Inter-Sectoral Impact Model Intercomparison Project appear to capture the spatial pattern of SY,T deduced from warming observations. These results from local manipulative experiments could be used to improve crop models in the future. PMID:27853151
Electrostatic Discharge Initiation Experiments using PVDF Pressure Transducers
1991-12-01
ignition sensitivity. The results are discussed within the context of a preliminary two-phase ignition model of electrostatic initiation.
Sensitivity study of a dynamic thermodynamic sea ice model
NASA Astrophysics Data System (ADS)
Holland, David M.; Mysak, Lawrence A.; Manak, Davinder K.; Oberhuber, Josef M.
1993-02-01
A numerical simulation of the seasonal sea ice cover in the Arctic Ocean and the Greenland, Iceland, and Norwegian seas is presented. The sea ice model is extracted from Oberhuber's (1990) coupled sea ice-mixed layer-isopycnal general circulation model and is written in spherical coordinates. The advantage of such a model over previous sea ice models is that it can be easily coupled to either global atmospheric or ocean general circulation models written in spherical coordinates. In this model, the thermodynamics are a modification of that of Parkinson and Washington (1979), while the dynamics use the full Hibler (1979) viscous-plastic rheology. Monthly thermodynamic and dynamic forcing fields for the atmosphere and ocean are specified. The simulations of the seasonal cycle of ice thickness, compactness, and velocity, for a control set of parameters, compare favorably with the known seasonal characteristics of these fields. A sensitivity study of the control simulation of the seasonal sea ice cover is presented. The sensitivity runs are carried out under three different themes, namely, numerical conditions, parameter values, and physical processes. This last theme refers to experiments in which physical processes are either newly added or completely removed from the model. Approximately 80 sensitivity runs have been performed in which a change from the control run environment has been implemented. Comparisons have been made between the control run and a particular sensitivity run based on time series of the seasonal cycle of the domain-averaged ice thickness, compactness, areal coverage, and kinetic energy. In addition, spatially varying fields of ice thickness, compactness, velocity, and surface temperature for each season are presented for selected experiments. A brief description and discussion of the more interesting experiments are presented. The simulation of the seasonal cycle of Arctic sea ice cover is shown to be robust.
NASA Astrophysics Data System (ADS)
Solman, Silvina A.; Pessacg, Natalia L.
2012-01-01
In this study, the capability of the MM5 model in simulating the main mode of intraseasonal variability during the warm season over South America is evaluated through a series of sensitivity experiments. Several 3-month simulations nested into ERA40 reanalysis were carried out using different cumulus schemes and planetary boundary layer schemes in an attempt to define the optimal combination of physical parameterizations for simulating alternating wet and dry conditions over the La Plata Basin (LPB) and the South Atlantic Convergence Zone regions, respectively. The results were compared with different observational datasets, and model evaluation was performed taking into account the spatial distribution of monthly precipitation and daily statistics of precipitation over the target regions. Though every experiment was able to capture the contrasting behavior of the precipitation during the simulated period, precipitation was largely underestimated, particularly over the LPB region, mainly due to a misrepresentation of the moisture flux convergence. Experiments using grid nudging of the winds above the planetary boundary layer showed a better performance compared with those in which no constraints were imposed on the regional circulation within the model domain. Overall, no single experiment was found to perform best over the entire domain and during the two contrasting months. Which experiment outperforms the others depends on the area of interest, with the simulation using the Grell (Kain-Fritsch) cumulus scheme in combination with the MRF planetary boundary layer scheme being more adequate for subtropical (tropical) latitudes. The ensemble of the sensitivity experiments showed a better performance compared with any individual experiment.
NASA Astrophysics Data System (ADS)
Zou, Guang'an; Wang, Qiang; Mu, Mu
2016-09-01
Sensitive areas for prediction of the Kuroshio large meander using a 1.5-layer, shallow-water ocean model were investigated using the conditional nonlinear optimal perturbation (CNOP) and first singular vector (FSV) methods. A series of sensitivity experiments were designed to test the sensitivity of sensitive areas within the numerical model. The following results were obtained: (1) the effect of initial CNOP and FSV patterns in their sensitive areas is greater than that of the same patterns in randomly selected areas, with the effect of the initial CNOP patterns in CNOP sensitive areas being the greatest; (2) both CNOP- and FSV-type initial errors grow more quickly than random errors; (3) the effect of random errors superimposed on the sensitive areas is greater than that of random errors introduced into randomly selected areas, and initial errors in the CNOP sensitive areas have greater effects on final forecasts. These results reveal that the sensitive areas determined using the CNOP are more sensitive than those of FSV and other randomly selected areas. In addition, ideal hindcasting experiments were conducted to examine the validity of the sensitive areas. The results indicate that reduction (or elimination) of CNOP-type errors in CNOP sensitive areas at the initial time has a greater forecast benefit than the reduction (or elimination) of FSV-type errors in FSV sensitive areas. These results suggest that the CNOP method is suitable for determining sensitive areas in the prediction of the Kuroshio large-meander path.
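A minimal sketch of the FSV idea, assuming an arbitrary random matrix as a stand-in tangent-linear propagator: the first right singular vector is the unit-norm initial perturbation that grows fastest in the L2 norm over the optimization interval. The CNOP requires constrained nonlinear optimization and is not shown here.

```python
import numpy as np

# First singular vector (FSV) of a linear propagator M: the unit-norm initial
# perturbation that maximizes ||M x|| over the optimization interval.
rng = np.random.default_rng(3)
M = rng.normal(size=(6, 6))            # stand-in tangent-linear propagator

U, s, Vt = np.linalg.svd(M)
fsv = Vt[0]                             # fastest-growing initial perturbation
print("leading growth factor:", s[0])
print("FSV (initial-time structure):", np.round(fsv, 2))

# The "sensitive area" is where this perturbation projects most strongly:
print("most sensitive component:", int(np.argmax(np.abs(fsv))))
```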
Balazs, Csaba; Conrad, Jan; Farmer, Ben; ...
2017-10-04
Imaging atmospheric Cherenkov telescopes (IACTs) that are sensitive to potential γ-ray signals from dark matter (DM) annihilation above ~50 GeV will soon be superseded by the Cherenkov Telescope Array (CTA). CTA will have a point source sensitivity an order of magnitude better than currently operating IACTs and will cover a broad energy range between 20 GeV and 300 TeV. Using effective field theory and simplified models to calculate γ-ray spectra resulting from DM annihilation, we compare the prospects to constrain such models with CTA observations of the Galactic center with current and near-future measurements at the Large Hadron Collider (LHC) and direct detection experiments. Here, for DM annihilations via vector or pseudoscalar couplings, CTA observations will be able to probe DM models out of reach of the LHC, and, if DM is coupled to standard fermions by a pseudoscalar particle, beyond the limits of current direct detection experiments.
NASA Astrophysics Data System (ADS)
Fomin, A. K.; Serebrov, A. P.; Zherebtsov, O. M.; Leonova, E. N.; Chaikovskii, M. E.
2017-01-01
We propose an experiment to search for neutron-antineutron oscillations based on the storage of ultracold neutrons (UCN) in a material trap. The sensitivity of the experiment depends mostly on the trap size and the amount of UCN in it. At the Petersburg Nuclear Physics Institute (PNPI), a high-intensity UCN source is planned at the WWR-M reactor, which is expected to provide a UCN density 2-3 orders of magnitude higher than existing sources. The results of simulations of the designed experimental scheme show that the sensitivity can be increased by ~10-40 times compared to the sensitivity of the previous experiment, depending on the model of neutron reflection from the walls.
NASA Astrophysics Data System (ADS)
Tjiputra, Jerry F.; Polzin, Dierk; Winguth, Arne M. E.
2007-03-01
An adjoint method is applied to a three-dimensional global ocean biogeochemical cycle model to optimize the ecosystem parameters on the basis of SeaWiFS surface chlorophyll observations. We showed with identical twin experiments that the model-simulated chlorophyll concentration is sensitive to perturbation of the phytoplankton and zooplankton exudation, herbivore egestion as fecal pellets, zooplankton grazing, and assimilation efficiency parameters. The assimilation of SeaWiFS chlorophyll data significantly improved the prediction of chlorophyll concentration, especially in the high-latitude regions. Experiments that considered regional variations of parameters yielded a high seasonal variance of ecosystem parameters in the high latitudes, but a low variance in the tropical regions. These experiments indicate that the adjoint model is, despite the many uncertainties, generally capable of optimizing sensitive parameters and carbon fluxes in the euphotic zone. The best-fit regional parameters predict a global net primary production of 36 Pg C yr-1, which lies within the range suggested by Antoine et al. (1996). Additional constraints from nutrient data from the World Ocean Atlas showed a further reduction in the model-data misfit and indicate that assimilation with extensive data sets is necessary.
Pulsars Probe the Low-Frequency Gravitational Sky: Pulsar Timing Arrays Basics and Recent Results
NASA Astrophysics Data System (ADS)
Tiburzi, Caterina
2018-03-01
Pulsar Timing Array experiments exploit the clock-like behaviour of an array of millisecond pulsars, with the goal of detecting low-frequency gravitational waves. Pulsar Timing Array experiments have been in operation over the last decade, led by groups in Europe, Australia, and North America. These experiments use the most sensitive radio telescopes in the world, extremely precise pulsar timing models and sophisticated detection algorithms to increase the sensitivity of Pulsar Timing Arrays. No detection of gravitational waves has been made to date with this technique, but Pulsar Timing Array upper limits have already contributed to ruling out some models of galaxy formation. Moreover, a new generation of radio telescopes, such as the Five hundred metre Aperture Spherical Telescope and, in particular, the Square Kilometre Array, will offer a significant improvement in Pulsar Timing Array sensitivity. In this article, we review the basic concepts of Pulsar Timing Array experiments and discuss the latest results from the established Pulsar Timing Array collaborations.
Prospects for testing Lorentz and CPT symmetry with antiprotons
NASA Astrophysics Data System (ADS)
Vargas, Arnaldo J.
2018-03-01
A brief overview of the prospects of testing Lorentz and CPT symmetry with antimatter experiments is presented. The models discussed are applicable to atomic spectroscopy experiments, Penning-trap experiments and gravitational tests. Comments about the sensitivity of the most recent antimatter experiments to the models reviewed here are included. This article is part of the Theo Murphy meeting issue `Antiproton physics in the ELENA era'.
NASA Astrophysics Data System (ADS)
Hoover, D. L.; Wilcox, K.; Young, K. E.
2017-12-01
Droughts are projected to increase in frequency and intensity with climate change, which may have dramatic and prolonged effects on ecosystem structure and function. There are currently hundreds of published, ongoing, and new drought experiments worldwide aimed at assessing ecosystem sensitivities to drought and identifying the mechanisms governing ecological resistance and resilience. However, to date, the results from these experiments have varied widely, and thus patterns of drought sensitivities have been difficult to discern. This lack of consensus at the field scale limits the ability of experiments to help improve land surface models, which often fail to realistically simulate ecological responses to extreme events. This is unfortunate because models offer an alternative, yet complementary approach to increase the spatial and temporal assessment of ecological sensitivities to drought that is not possible in the field due to logistical and financial constraints. Here we examined 89 published drought experiments, along with their associated historical precipitation records, to (1) identify where and how drought experiments have been imposed, (2) determine the extremity of drought treatments in the context of historical climate, and (3) assess the influence of precipitation variability on drought experiments. We found an overall bias in drought experiments towards short-term, extreme experiments in water-limited ecosystems. When placed in the context of local historical precipitation, most experimental droughts were extreme, with 61% below the 5th and 43% below the 1st percentile. Furthermore, we found that interannual precipitation variability had a large and potentially underappreciated effect on drought experiments due to the co-varying nature of control and drought treatments. Thus, detecting ecological effects in experimental droughts is strongly influenced by the interaction between drought treatment magnitude, precipitation variability, and key physiological thresholds. The results from this study have important implications for the design and interpretation of drought experiments as well as for integrating field results with land surface models.
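A minimal sketch of how a drought treatment can be placed in the context of a historical precipitation record, as done in the percentile comparison above; the synthetic 100-year record and the 50% exclusion level are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Place an imposed drought treatment in the context of the historical record:
# what percentile of historical growing-season precipitation does it correspond to?
historical_precip = rng.gamma(shape=4.0, scale=100.0, size=100)   # hypothetical 100-yr record, mm
treatment_reduction = 0.5                                          # e.g. a 50% rainfall exclusion
treatment_precip = np.median(historical_precip) * (1 - treatment_reduction)

pct = stats.percentileofscore(historical_precip, treatment_precip)
print(f"treatment precipitation = {treatment_precip:.0f} mm "
      f"(~{pct:.1f}th percentile of the historical record)")
print("more extreme than the 5th percentile:",
      treatment_precip < np.percentile(historical_precip, 5))
```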
Comparison of simulator fidelity model predictions with in-simulator evaluation data
NASA Technical Reports Server (NTRS)
Parrish, R. V.; Mckissick, B. T.; Ashworth, B. R.
1983-01-01
A full-factorial, in-simulator experiment of a single-axis, multiloop, compensatory pitch tracking task is described. The experiment was conducted to provide data to validate extensions to an analytic, closed-loop model of a real-time digital simulation facility. The results of the experiment, encompassing various simulation fidelity factors such as visual delay, digital integration algorithms, computer iteration rates, control loading bandwidths and proprioceptive cues, and g-seat kinesthetic cues, are compared with predictions obtained from the analytic model incorporating an optimal control model of the human pilot. The in-simulator results demonstrate more sensitivity to the g-seat and to the control loader conditions than was predicted by the model. However, the model predictions are generally upheld, although the predicted magnitudes of the states and of the error terms are sometimes off considerably. Of particular concern is the large sensitivity difference for one control loader condition, as well as the model/in-simulator mismatch in the magnitude of the plant states when the other states match.
NASA Astrophysics Data System (ADS)
Park, Jihoon; Yang, Guang; Satija, Addy; Scheidt, Céline; Caers, Jef
2016-12-01
Sensitivity analysis plays an important role in geoscientific computer experiments, whether for forecasting, data assimilation or model calibration. In this paper we focus on an extension of a method of regionalized sensitivity analysis (RSA) to applications typical in the Earth Sciences. Such applications involve the building of large complex spatial models, the application of computationally extensive forward modeling codes and the integration of heterogeneous sources of model uncertainty. The aim of this paper is to be practical: 1) provide a Matlab code, 2) provide novel visualization methods to aid users in getting a better understanding of the sensitivity, 3) provide a method based on kernel principal component analysis (KPCA) and self-organizing maps (SOM) to account for spatial uncertainty typical in Earth Science applications and 4) provide an illustration on a real field case where the above-mentioned complexities present themselves. We present methods that extend the original RSA method in several ways. First, we present the calculation of conditional effects, defined as the sensitivity of a parameter given a level of another parameter. Second, we show how this conditional effect can be used to choose nominal values or ranges to fix insensitive parameters, aiming to minimally affect uncertainty in the response. Third, we develop a method based on KPCA and SOM to assign a rank to spatial models in order to calculate the sensitivity to spatial variability in the models. A large oil/gas reservoir case is used as an illustration of these ideas.
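A minimal sketch of the ranking idea for spatial uncertainty, assuming synthetic flattened property fields and using only the kernel PCA step (the SOM step is not reproduced); the field size, kernel and score-based ranking rule are illustrative assumptions, and this is not the authors' Matlab code.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(5)

# Assign a scalar "rank" to spatial model realizations so that spatial uncertainty
# can enter a sensitivity analysis as an ordinary parameter. Each realization here is
# a flattened 20x20 property field; the ranking uses the first kernel principal component.
n_real, nx = 200, 20
fields = rng.normal(size=(n_real, nx * nx))
fields += rng.normal(size=(n_real, 1)) * np.linspace(0, 1, nx * nx)  # correlated variability

kpca = KernelPCA(n_components=2, kernel="rbf", gamma=1.0 / (nx * nx))
scores = kpca.fit_transform(fields)
rank = np.argsort(np.argsort(scores[:, 0]))   # rank of each realization along the first KPC
print("realization 0 has rank", rank[0], "of", n_real)
```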
A storage ring experiment to detect a proton electric dipole moment
NASA Astrophysics Data System (ADS)
Anastassopoulos, V.; Andrianov, S.; Baartman, R.; Baessler, S.; Bai, M.; Benante, J.; Berz, M.; Blaskiewicz, M.; Bowcock, T.; Brown, K.; Casey, B.; Conte, M.; Crnkovic, J. D.; D'Imperio, N.; Fanourakis, G.; Fedotov, A.; Fierlinger, P.; Fischer, W.; Gaisser, M. O.; Giomataris, Y.; Grosse-Perdekamp, M.; Guidoboni, G.; Hacıömeroǧlu, S.; Hoffstaetter, G.; Huang, H.; Incagli, M.; Ivanov, A.; Kawall, D.; Kim, Y. I.; King, B.; Koop, I. A.; Lazarus, D. M.; Lebedev, V.; Lee, M. J.; Lee, S.; Lee, Y. H.; Lehrach, A.; Lenisa, P.; Levi Sandri, P.; Luccio, A. U.; Lyapin, A.; MacKay, W.; Maier, R.; Makino, K.; Malitsky, N.; Marciano, W. J.; Meng, W.; Meot, F.; Metodiev, E. M.; Miceli, L.; Moricciani, D.; Morse, W. M.; Nagaitsev, S.; Nayak, S. K.; Orlov, Y. F.; Ozben, C. S.; Park, S. T.; Pesce, A.; Petrakou, E.; Pile, P.; Podobedov, B.; Polychronakos, V.; Pretz, J.; Ptitsyn, V.; Ramberg, E.; Raparia, D.; Rathmann, F.; Rescia, S.; Roser, T.; Kamal Sayed, H.; Semertzidis, Y. K.; Senichev, Y.; Sidorin, A.; Silenko, A.; Simos, N.; Stahl, A.; Stephenson, E. J.; Ströher, H.; Syphers, M. J.; Talman, J.; Talman, R. M.; Tishchenko, V.; Touramanis, C.; Tsoupas, N.; Venanzoni, G.; Vetter, K.; Vlassis, S.; Won, E.; Zavattini, G.; Zelenski, A.; Zioutas, K.
2016-11-01
A new experiment is described to detect a permanent electric dipole moment of the proton with a sensitivity of 10⁻²⁹ e·cm by using polarized "magic" momentum 0.7 GeV/c protons in an all-electric storage ring. Systematic errors relevant to the experiment are discussed and techniques to address them are presented. The measurement is sensitive to new physics beyond the standard model at the scale of 3000 TeV.
Sensitivity of southern hemisphere westerly wind to boundary conditions for the last glacial maximum
NASA Astrophysics Data System (ADS)
Jun, S. Y.; Kim, S. J.; Kim, B. M.
2017-12-01
To examine the change in the SH westerly wind in the LGM, we performed an LGM simulation with sensitivity experiments in which the LGM sea ice in the Southern Ocean (SO), the ice sheet over Antarctica, and the tropical Pacific sea surface temperature were specified in the CAM5 atmospheric general circulation model (GCM). The SH westerly response to LGM boundary conditions in CAM5 was compared with those from CMIP5 LGM simulations. In the CAM5 LGM simulation, the SH westerly wind substantially increases between 40°S and 65°S, while the zonal-mean zonal wind decreases at latitudes higher than 65°S. The position of the SH maximum westerly wind moves poleward by about 8° in the LGM simulation. Sensitivity experiments suggest that the increase in SH westerly winds is mainly due to the increase in sea ice in the SO, which accounts for 60% of the total wind change. In the CMIP5-PMIP3 LGM experiments, most of the models show a slight increase and poleward shift of the SH westerly wind, as in the CAM5 experiment. The increased and poleward-shifted westerly wind in the LGM obtained in the current model result is consistent with previous model results and some lines of proxy evidence, though opposite model responses and proxy evidence exist for the SH westerly wind change.
A New Approach for Coupled GCM Sensitivity Studies
NASA Astrophysics Data System (ADS)
Kirtman, B. P.; Duane, G. S.
2011-12-01
A new multi-model approach for coupled GCM sensitivity studies is presented. The purpose of the sensitivity experiments is to understand why two different coupled models have such large differences in their respective climate simulations. In the application presented here, the differences between the coupled models using the Center for Ocean-Land-Atmosphere Studies (COLA) and the National Center for Atmospheric Research (NCAR) atmospheric general circulation models (AGCMs) are examined. The intent is to isolate which component of the air-sea fluxes is most responsible for the differences between the coupled models and for the errors in their respective coupled simulations. The procedure is to simultaneously couple the two different atmospheric component models to a single ocean general circulation model (OGCM), in this case the Modular Ocean Model (MOM) developed at the Geophysical Fluid Dynamics Laboratory (GFDL). Each atmospheric component model experiences the same SST produced by the OGCM, but the OGCM is simultaneously coupled to both AGCMs using a cross-coupling strategy. In the first experiment, the OGCM is coupled to the heat and fresh water flux from the NCAR AGCM (Community Atmospheric Model; CAM) and the momentum flux from the COLA AGCM. Both AGCMs feel the same SST. In the second experiment, the OGCM is coupled to the heat and fresh water flux from the COLA AGCM and the momentum flux from the CAM AGCM. Again, both atmospheric component models experience the same SST. By comparing these two experimental simulations with control simulations where only one AGCM is used, it is possible to argue which of the flux components are most responsible for the differences in the simulations and their respective errors. Based on these sensitivity experiments, we conclude that the tropical ocean warm bias in the COLA coupled model is due to errors in the heat flux, and that the erroneous westward shift in the tropical Pacific cold tongue minimum in the NCAR model is due to errors in the momentum flux. All the coupled simulations presented here have warm biases along the eastern boundary of the tropical oceans, suggesting that the problem is common to both AGCMs. In terms of interannual variability in the tropical Pacific, the CAM momentum flux is responsible for the erroneous westward extension of the sea surface temperature anomalies (SSTA), and errors in the COLA momentum flux cause the erroneous eastward migration of the El Niño-Southern Oscillation (ENSO) events. These conclusions depend on assuming that the error due to the OGCM can be neglected.
NASA Astrophysics Data System (ADS)
Harshan, Suraj
The main objective of the present thesis is the improvement of the TEB/ISBA (SURFEX) urban land surface model (ULSM) through comprehensive evaluation, sensitivity analysis, and optimization experiments using energy balance and radiative and air temperature data observed during 11 months at a tropical sub-urban site in Singapore. Overall the performance of the model is satisfactory, with a small underestimation of net radiation and an overestimation of sensible heat flux. Weaknesses in predicting the latent heat flux are apparent, with smaller model values during daytime, and the model also significantly underpredicts both the daytime peak and the nighttime storage heat. Surface temperatures of all facets are generally overpredicted. Significant variation exists in the model behaviour between dry and wet seasons. The vegetation parametrization used in the model is inadequate to represent the moisture dynamics, producing unrealistically low latent heat fluxes during a particularly dry period. The comprehensive evaluation of the ULSM shows the need for accurate estimation of input parameter values for the present site. Since obtaining many of these parameters through empirical methods is not feasible, the present study employed a two-step approach aimed at providing information about the most sensitive parameters and an optimized parameter set from model calibration. Two well established sensitivity analysis methods (global: Sobol and local: Morris) and a state-of-the-art multiobjective evolutionary algorithm (Borg) were employed for sensitivity analysis and parameter estimation. Experiments were carried out for three different weather periods. The analysis indicates that roof related parameters are the most important ones in controlling the behaviour of the sensible heat flux and net radiation flux, with roof and road albedo as the most influential parameters. Soil moisture initialization parameters are important in controlling the latent heat flux. The built (town) fraction has a significant influence on all fluxes considered. Comparison between the Sobol and Morris methods shows similar sensitivities, indicating the robustness of the present analysis and that the Morris method can be employed as a computationally cheaper alternative to Sobol's method. Optimization as well as the sensitivity experiments for the three periods (dry, wet and mixed) show a noticeable difference in parameter sensitivity and parameter convergence, indicating inadequacies in model formulation. The existence of a significant proportion of less sensitive parameters might indicate an over-parametrized model. The Borg MOEA showed great promise in optimizing the input parameter set. The optimized model, modified using site-specific values for the thermal roughness length parametrization, shows an improvement in the performance of outgoing longwave radiation flux, overall surface temperature, heat storage flux and sensible heat flux.
A storage ring experiment to detect a proton electric dipole moment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anastassopoulos, V.; Andrianov, S.; Baartman, R.
2016-11-01
A new experiment is described to detect a permanent electric dipole moment of the proton with a sensitivity of 10⁻²⁹ e·cm by using polarized "magic" momentum 0.7 GeV/c protons in an all-electric storage ring. Systematic errors relevant to the experiment are discussed and techniques to address them are presented. The measurement is sensitive to new physics beyond the Standard Model at the scale of 3000 TeV.
Sensitivity studies and laboratory measurements for the laser heterodyne spectrometer experiment
NASA Technical Reports Server (NTRS)
Allario, F.; Katzberg, S. J.; Larsen, J. C.
1980-01-01
Several experiments under development for measuring stratospheric trace gases from Spacelab and satellite platforms are described, involving spectral scanning interferometers and gas filter correlation radiometers (ref. 2) that use limb-scanning solar occultation techniques. An experiment to measure stratospheric trace constituents by laser heterodyne spectroscopy is then presented, together with a summary of sensitivity analyses and supporting laboratory measurements for O3, ClO, and H2O2, in which the instrument transfer function is modeled using a detailed optical receiver design.
Modified Petri net model sensitivity to workload manipulations
NASA Technical Reports Server (NTRS)
White, S. A.; Mackinnon, D. P.; Lyman, J.
1986-01-01
Modified Petri Nets (MPNs) are investigated as a workload modeling tool. The results of an exploratory study of the sensitivity of MPNs to workload manipulations in a dual task are described. Petri nets have been used to represent systems with asynchronous, concurrent and parallel activities (Peterson, 1981). These characteristics led some researchers to suggest the use of Petri nets in workload modeling, where concurrent and parallel activities are common. Petri nets are represented by places and transitions. In the workload application, places represent operator activities and transitions represent events. MPNs have been used to formally represent task events and activities of a human operator in a man-machine system. Some descriptive applications demonstrate the usefulness of MPNs in the formal representation of systems. The general hypothesis here is that, in addition to descriptive applications, MPNs may be useful for workload estimation and prediction. The results are reported of the first of a series of experiments designed to develop and test an MPN system of workload estimation and prediction. This first experiment is a screening test of the MPN model's general sensitivity to changes in workload. Positive results from this experiment will justify the more complicated analyses and techniques necessary for developing a workload prediction system.
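To make the place/transition formalism above concrete, here is a minimal, self-contained sketch of a Petri net in which places stand for operator activities and transitions for task events. The dual-task fragment (tracking vs. monitoring) and all names are hypothetical illustrations, not the MPN tool or tasks used in the study.

```python
# Minimal place/transition Petri net sketch (illustrative only; not the authors' MPN tool).
# Places model operator activities, transitions model task events, as described above.

class PetriNet:
    def __init__(self, places, transitions):
        # places: dict name -> token count
        # transitions: dict name -> (list of input places, list of output places)
        self.places = dict(places)
        self.transitions = dict(transitions)

    def enabled(self, t):
        inputs, _ = self.transitions[t]
        return all(self.places[p] > 0 for p in inputs)

    def fire(self, t):
        if not self.enabled(t):
            raise ValueError(f"transition {t!r} is not enabled")
        inputs, outputs = self.transitions[t]
        for p in inputs:
            self.places[p] -= 1
        for p in outputs:
            self.places[p] += 1

# Hypothetical dual-task fragment: an operator alternates between tracking and monitoring.
net = PetriNet(
    places={"idle": 1, "tracking": 0, "monitoring": 0},
    transitions={
        "start_tracking": (["idle"], ["tracking"]),
        "end_tracking": (["tracking"], ["idle"]),
        "alarm": (["idle"], ["monitoring"]),
        "acknowledge": (["monitoring"], ["idle"]),
    },
)
net.fire("start_tracking")
print(net.places)  # {'idle': 0, 'tracking': 1, 'monitoring': 0}
```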
Inductive reasoning about causally transmitted properties.
Shafto, Patrick; Kemp, Charles; Bonawitz, Elizabeth Baraff; Coley, John D; Tenenbaum, Joshua B
2008-11-01
Different intuitive theories constrain and guide inferences in different contexts. Formalizing simple intuitive theories as probabilistic processes operating over structured representations, we present a new computational model of category-based induction about causally transmitted properties. A first experiment demonstrates undergraduates' context-sensitive use of taxonomic and food web knowledge to guide reasoning about causal transmission and shows good qualitative agreement between model predictions and human inferences. A second experiment demonstrates strong quantitative and qualitative fits to inferences about a more complex artificial food web. A third experiment investigates human reasoning about complex novel food webs where species have known taxonomic relations. Results demonstrate a double-dissociation between the predictions of our causal model and a related taxonomic model [Kemp, C., & Tenenbaum, J. B. (2003). Learning domain structures. In Proceedings of the 25th annual conference of the cognitive science society]: the causal model predicts human inferences about diseases but not genes, while the taxonomic model predicts human inferences about genes but not diseases. We contrast our framework with previous models of category-based induction and previous formal instantiations of intuitive theories, and outline challenges in developing a complete model of context-sensitive reasoning.
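As a toy illustration of induction over causally transmitted properties (not the authors' Bayesian model), the sketch below simulates a property spreading from prey to predator along a small hypothetical food web and estimates conditional probabilities by Monte Carlo; the web, base rate, and transmission probability are all made up.

```python
# Toy illustration of causal transmission over a food web: a property (e.g., a disease)
# spreads along prey -> predator links with some probability, and we ask how likely another
# species is to have it given that one species does. Illustrative only, not the paper's model.
import numpy as np

rng = np.random.default_rng(8)

food_web = {"kelp": [], "urchin": ["kelp"], "otter": ["urchin"], "gull": ["urchin"]}
species = list(food_web)
base_rate, transmit_p = 0.1, 0.7   # assumed rates

def sample_properties():
    has = {s: rng.random() < base_rate for s in species}
    # Decide once, per link, whether transmission would succeed along it.
    link_ok = {(pred, prey): rng.random() < transmit_p
               for pred, preys in food_web.items() for prey in preys}
    for _ in range(len(species)):                  # propagate prey -> predator until stable
        for predator, preys in food_web.items():
            if any(has[p] and link_ok[(predator, p)] for p in preys):
                has[predator] = True
    return has

draws = [sample_properties() for _ in range(20_000)]
given = [d for d in draws if d["urchin"]]
print("P(otter | urchin) =", round(float(np.mean([d["otter"] for d in given])), 2))
print("P(kelp  | urchin) =", round(float(np.mean([d["kelp"] for d in given])), 2))
```

The predictive inference (predator given prey) comes out stronger than the diagnostic one, which is the qualitative signature of causal transmission as opposed to purely taxonomic similarity.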
Crossmodal Semantic Priming by Naturalistic Sounds and Spoken Words Enhances Visual Sensitivity
ERIC Educational Resources Information Center
Chen, Yi-Chuan; Spence, Charles
2011-01-01
We propose a multisensory framework based on Glaser and Glaser's (1989) general reading-naming interference model to account for the semantic priming effect by naturalistic sounds and spoken words on visual picture sensitivity. Four experiments were designed to investigate two key issues: First, can auditory stimuli enhance visual sensitivity when…
A Comparison of Climate Feedback Strength between CO2 Doubling and LGM Experiments
NASA Astrophysics Data System (ADS)
Yoshimori, M.; Yokohata, T.; Abe-Ouchi, A.
2008-12-01
Studies of past climate potentially provide a constraint on the uncertainty of climate sensitivity, but previous studies warn against a simple scaling to the future. The climate sensitivity is determined by various feedback processes, and they may vary with climate states and forcings. In this study, we investigate similarities and differences of feedbacks for CO2 doubling, last glacial maximum (LGM), and LGM greenhouse gas (GHG) forcing experiments, using an atmospheric general circulation model coupled to a slab ocean model. After computing the radiative forcing, the individual feedback strengths (water vapor, lapse rate, albedo, and cloud) are evaluated explicitly. For this particular model, the difference in the climate sensitivity among experiments is attributed to the shortwave cloud feedback, which tends to become weaker or even negative in the cooling experiments. No significant difference is found in the water vapor feedback between warming and cooling experiments by GHGs despite the nonlinear dependence of the Clausius-Clapeyron relation on temperature. The weaker water vapor feedback in the LGM experiment due to a relatively weaker tropical forcing is compensated by the stronger lapse rate feedback due to a relatively stronger extratropical forcing. A hypothesis is proposed which explains the asymmetric cloud response between warming and cooling experiments associated with a displacement of the region of mixed-phase clouds. The difference in the total feedback strength between experiments is, however, relatively small compared to the current intermodel spread, and does not necessarily preclude the use of LGM climate as a future constraint.
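Schematically, the feedback decomposition referred to above can be written as follows (generic notation and sign convention, not the paper's equations): the equilibrium temperature response to a forcing F is set by the sum of the Planck response and the individual feedback parameters, so a weaker or negative shortwave cloud feedback in the cooling experiments lowers the magnitude of the response.

```latex
% Generic feedback decomposition (illustrative notation, not taken from the paper).
\Delta T_{\mathrm{eq}} \;=\; \frac{-F}{\lambda_{\mathrm{Planck}} + \lambda_{\mathrm{WV}} + \lambda_{\mathrm{LR}} + \lambda_{\alpha} + \lambda_{\mathrm{cloud}}}
```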
Assessing model sensitivity and uncertainty across multiple Free-Air CO2 Enrichment experiments.
NASA Astrophysics Data System (ADS)
Cowdery, E.; Dietze, M.
2015-12-01
As atmospheric carbon dioxide levels continue to increase, it is critical that terrestrial ecosystem models can accurately predict ecological responses to the changing environment. Current predictions of net primary productivity (NPP) in response to elevated atmospheric CO2 concentrations are highly variable and contain a considerable amount of uncertainty. It is necessary that we understand which factors are driving this uncertainty. The Free-Air CO2 Enrichment (FACE) experiments have equipped us with a rich data source that can be used to calibrate and validate these model predictions. To identify and evaluate the assumptions causing inter-model differences we performed model sensitivity and uncertainty analysis across ambient and elevated CO2 treatments using the Data Assimilation Linked Ecosystem Carbon (DALEC) model and the Ecosystem Demography Model (ED2), two process-based models ranging from low to high complexity, respectively. These modeled process responses were compared to experimental data from the Kennedy Space Center Open Top Chamber Experiment, the Nevada Desert Free Air CO2 Enrichment Facility, the Rhinelander FACE experiment, the Wyoming Prairie Heating and CO2 Enrichment Experiment, the Duke Forest FACE experiment and the Oak Ridge Experiment on CO2 Enrichment. By leveraging data access proxy and data tilling services provided by the BrownDog data curation project alongside analysis modules available in the Predictive Ecosystem Analyzer (PEcAn), we produced automated, repeatable benchmarking workflows that are generalized to incorporate different sites and ecological models. Combining the observed patterns of uncertainty between the two models with results of the recent FACE model-data synthesis project (FACE-MDS) can help identify which processes need further study and additional data constraints. These findings can be used to inform future experimental design and in turn can provide an informative starting point for data assimilation.
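A minimal sketch of the kind of benchmarking comparison described above: compute the CO2 response ratio of NPP for observations and for a model, each with a bootstrap interval. All numbers are hypothetical placeholders, and the snippet does not use the PEcAn or BrownDog services mentioned in the abstract.

```python
# Minimal FACE-style benchmark sketch: compare modeled and observed NPP responses to
# elevated CO2 as response ratios with bootstrap intervals. Numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical site-level NPP (g C m-2 yr-1) under ambient and elevated CO2.
obs_ambient = np.array([780., 810., 795., 770.])
obs_elevated = np.array([900., 940., 915., 890.])
mod_ambient = np.array([760., 765., 770., 758.])
mod_elevated = np.array([842., 850., 855., 845.])

def response_ratio(elevated, ambient):
    return elevated.mean() / ambient.mean()

def bootstrap_ci(elevated, ambient, n=5000):
    # Paired resampling of plots/years, then 95% percentile interval of the ratio.
    idx = rng.integers(0, len(ambient), size=(n, len(ambient)))
    ratios = elevated[idx].mean(axis=1) / ambient[idx].mean(axis=1)
    return np.percentile(ratios, [2.5, 97.5])

print("observed RR:", response_ratio(obs_elevated, obs_ambient), bootstrap_ci(obs_elevated, obs_ambient))
print("modeled  RR:", response_ratio(mod_elevated, mod_ambient), bootstrap_ci(mod_elevated, mod_ambient))
```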
Calibration of 3D ALE finite element model from experiments on friction stir welding of lap joints
NASA Astrophysics Data System (ADS)
Fourment, Lionel; Gastebois, Sabrina; Dubourg, Laurent
2016-10-01
In order to support the design of a process as complex as Friction Stir Welding (FSW) for the aeronautic industry, numerical simulation software requires (1) developing an efficient and accurate Finite Element (F.E.) formulation that allows predicting welding defects, (2) properly modeling the thermo-mechanical complexity of the FSW process and (3) calibrating the F.E. model against accurate measurements from FSW experiments. This work uses a parallel ALE formulation developed in the Forge® F.E. code to model the different possible defects (flashes and worm holes), while pin and shoulder threads are modeled by a new friction law at the tool/material interface. The FSW experiments require using a complex tool with a scroll on the shoulder, which is instrumented to provide sensitive thermal data close to the joint. Calibration of unknown material thermal coefficients, constitutive equation parameters and the friction model from measured forces, torques and temperatures is carried out using two F.E. models, Eulerian and ALE, to reach a satisfactory agreement assessed by the proper sensitivity of the simulation to process parameters.
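The calibration step described above amounts to fitting unknown coefficients so that simulated forces, torques and temperatures match measurements. The sketch below shows that pattern with a generic least-squares fit; the forward model is a made-up stand-in for the Forge® Eulerian/ALE F.E. models, and all parameter names and values are assumptions.

```python
# Generic parameter-calibration sketch: fit friction/thermal coefficients so that a cheap
# surrogate forward model reproduces measured forces, torques and temperatures.
# The forward model below is a made-up stand-in, not the Forge(R) F.E. models.
import numpy as np
from scipy.optimize import least_squares

measured = np.array([5.1e3, 32.0, 740.0])      # hypothetical force (N), torque (N m), temperature (K)
scales = np.abs(measured)                      # normalize so each observable counts comparably

def forward_model(theta, rpm=1200.0, feed=2.0):
    mu, k_thermal = theta                      # friction coefficient, thermal coefficient (assumed)
    force = 4.0e3 + 2.0e3 * mu * feed
    torque = 25.0 * mu * rpm / 1000.0
    temperature = 300.0 + 360.0 * mu * rpm / 1000.0 / k_thermal
    return np.array([force, torque, temperature])

def residuals(theta):
    return (forward_model(theta) - measured) / scales

fit = least_squares(residuals, x0=[0.5, 1.0], bounds=([0.05, 0.1], [2.0, 10.0]))
print("calibrated parameters:", fit.x, "residual norm:", np.linalg.norm(fit.fun))
```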
Computational modeling of mediator oxidation by oxygen in an amperometric glucose biosensor.
Simelevičius, Dainius; Petrauskas, Karolis; Baronas, Romas; Razumienė, Julija
2014-02-07
In this paper, an amperometric glucose biosensor is modeled numerically. The model is based on non-stationary reaction-diffusion type equations. The model consists of four layers. An enzyme layer lies directly on a working electrode surface. The enzyme layer is attached to an electrode by a polyvinyl alcohol (PVA) coated terylene membrane. This membrane is modeled as a PVA layer and a terylene layer, which have different diffusivities. The fourth layer of the model is the diffusion layer, which is modeled using the Nernst approach. The system of partial differential equations is solved numerically using the finite difference technique. The operation of the biosensor was analyzed computationally with special emphasis on the biosensor response sensitivity to oxygen when the experiment was carried out in aerobic conditions. Particularly, numerical experiments show that the overall biosensor response sensitivity to oxygen is insignificant. The simulation results qualitatively explain and confirm the experimentally observed biosensor behavior.
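A minimal one-dimensional, two-layer analogue of the reaction-diffusion setup described above, solved with explicit finite differences and Michaelis-Menten consumption in the enzyme layer only. Geometry, kinetics and coefficients are illustrative assumptions, not the paper's four-layer model.

```python
# Minimal 1-D, two-layer reaction-diffusion sketch (explicit finite differences).
# Substrate diffuses from the bulk (right boundary) through a membrane layer into an
# enzyme layer on the electrode (left boundary, zero-flux here for simplicity), where it
# is consumed with Michaelis-Menten kinetics. All values are illustrative only.
import numpy as np

L_enz, L_mem = 20e-6, 30e-6                 # layer thicknesses (m), assumed
N = 60
dx = (L_enz + L_mem) / N
x = (np.arange(N) + 0.5) * dx
in_enzyme = x < L_enz

D = np.where(in_enzyme, 3.0e-10, 1.0e-10)   # diffusivities (m^2/s), assumed
Vmax, Km = 1.0e-3, 0.1                      # mol m^-3 s^-1, mol m^-3, assumed
S_bulk = 1.0                                # bulk substrate concentration (mol m^-3)

S = np.zeros(N)
dt = 0.4 * dx**2 / D.max()                  # explicit stability limit
for _ in range(int(30.0 / dt)):             # integrate ~30 s (order of the diffusion time)
    flux = np.zeros(N + 1)                  # fluxes at cell faces; flux[0] = 0 (electrode)
    Dface = 2 * D[:-1] * D[1:] / (D[:-1] + D[1:])
    flux[1:-1] = -Dface * (S[1:] - S[:-1]) / dx
    flux[-1] = -D[-1] * (S_bulk - S[-1]) / (dx / 2)   # Dirichlet bulk value at the right face
    reaction = np.where(in_enzyme, Vmax * S / (Km + S), 0.0)
    S += dt * (-(flux[1:] - flux[:-1]) / dx - reaction)

print("substrate at electrode surface:", S[0])
```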
NASA Technical Reports Server (NTRS)
Chao, Winston C.; Chen, Baode; Tao, Wei-Kuo; Lau, William K. M. (Technical Monitor)
2002-01-01
The sensitivities to surface friction and the Coriolis parameter in tropical cyclogenesis are studied using an axisymmetric version of the Goddard cloud ensemble model. Our experiments demonstrate that tropical cyclogenesis can still occur without surface friction. However, the resulting tropical cyclone has a very unrealistic structure. Surface friction plays an important role in giving tropical cyclones their observed smaller size and diminished intensity. The sensitivity of the cyclogenesis process to surface friction, in terms of kinetic energy growth, has different signs in different phases of the tropical cyclone. Contrary to the notion of Ekman pumping efficiency, which implies a preference for the highest Coriolis parameter in the growth rate if all other parameters are unchanged, our experiments show no such preference.
NASA Astrophysics Data System (ADS)
Wang, Daosheng; Cao, Anzhou; Zhang, Jicai; Fan, Daidu; Liu, Yongzhi; Zhang, Yue
2018-06-01
Based on the theory of inverse problems, a three-dimensional sigma-coordinate cohesive sediment transport model with the adjoint data assimilation is developed. In this model, the physical processes of cohesive sediment transport, including deposition, erosion and advection-diffusion, are parameterized by corresponding model parameters. These parameters are usually poorly known and have traditionally been assigned empirically. By assimilating observations into the model, the model parameters can be estimated using the adjoint method; meanwhile, the data misfit between model results and observations can be decreased. The model developed in this work contains numerous parameters; therefore, it is necessary to investigate the parameter sensitivity of the model, which is assessed by calculating a relative sensitivity function and the gradient of the cost function with respect to each parameter. The results of parameter sensitivity analysis indicate that the model is sensitive to the initial conditions, inflow open boundary conditions, suspended sediment settling velocity and resuspension rate, while the model is insensitive to horizontal and vertical diffusivity coefficients. A detailed explanation of the pattern of sensitivity analysis is also given. In ideal twin experiments, constant parameters are estimated by assimilating 'pseudo' observations. The results show that the sensitive parameters are estimated more easily than the insensitive parameters. The conclusions of this work can provide guidance for the practical applications of this model to simulate sediment transport in the study area.
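The relative sensitivity function used in such analyses is commonly the cost-function gradient normalized by the parameter value. The sketch below illustrates that quantity with a toy cost function and a central-difference gradient standing in for the adjoint-computed gradient; the parameter names are assumptions.

```python
# Sketch of a relative sensitivity function for a scalar cost J(p) with respect to each
# model parameter p_k: S_k = (p_k / J) * dJ/dp_k. The gradient is approximated here by
# central finite differences as a stand-in for the adjoint-computed gradient; the "model"
# is a toy placeholder, not the sigma-coordinate sediment transport model.
import numpy as np

def cost(params):
    # Toy misfit between a fake model output and a fake observation.
    settling_velocity, erosion_rate, diffusivity = params
    model_output = 2.0 * settling_velocity - 0.5 * erosion_rate + 0.1 * np.sqrt(diffusivity)
    observation = 1.2
    return (model_output - observation) ** 2

def relative_sensitivity(cost_fn, params, rel_step=1e-4):
    params = np.asarray(params, dtype=float)
    J0 = cost_fn(params)
    sens = np.zeros_like(params)
    for k, p in enumerate(params):
        h = rel_step * max(abs(p), 1e-12)
        up, dn = params.copy(), params.copy()
        up[k] += h
        dn[k] -= h
        dJdp = (cost_fn(up) - cost_fn(dn)) / (2 * h)
        sens[k] = p / J0 * dJdp
    return sens

print(relative_sensitivity(cost, [0.8, 0.3, 10.0]))
```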
NASA Astrophysics Data System (ADS)
Guadagnini, A.; Riva, M.; Dell'Oca, A.
2017-12-01
We propose to ground sensitivity of uncertain parameters of environmental models on a set of indices based on the main (statistical) moments, i.e., mean, variance, skewness and kurtosis, of the probability density function (pdf) of a target model output. This enables us to perform Global Sensitivity Analysis (GSA) of a model in terms of multiple statistical moments and yields a quantification of the impact of model parameters on features driving the shape of the pdf of model output. Our GSA approach includes the possibility of being coupled with the construction of a reduced complexity model that allows approximating the full model response at a reduced computational cost. We demonstrate our approach through a variety of test cases. These include a commonly used analytical benchmark, a simplified model representing pumping in a coastal aquifer, a laboratory-scale tracer experiment, and the migration of fracturing fluid through a naturally fractured reservoir (source) to reach an overlying formation (target). Our strategy allows discriminating the relative importance of model parameters to the four statistical moments considered. We also provide an appraisal of the error associated with the evaluation of our sensitivity metrics by replacing the original system model through the selected surrogate model. Our results suggest that one might need to construct a surrogate model with increasing level of accuracy depending on the statistical moment considered in the GSA. The methodological framework we propose can assist the development of analysis techniques targeted to model calibration, design of experiment, uncertainty quantification and risk assessment.
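A minimal Monte Carlo sketch of moment-based sensitivity: condition the output on each parameter and measure how far the conditional mean, variance, skewness and kurtosis depart from their unconditional values. The normalized index below is a simplified stand-in for the authors' metrics, and the three-parameter model is a toy.

```python
# Sketch of moment-based global sensitivity indices: for each parameter, compare the moments
# of the output conditioned on that parameter with the unconditional moments.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def model(x):
    # Toy model with three uniform parameters on [0, 1].
    return x[:, 0] ** 2 + 2.0 * x[:, 1] + 0.1 * np.sin(2 * np.pi * x[:, 2])

def moments(y):
    return np.array([y.mean(), y.var(), stats.skew(y), stats.kurtosis(y)])

n_samples, n_bins, n_params = 20000, 20, 3
X = rng.random((n_samples, n_params))
Y = model(X)
m_uncond = moments(Y)

indices = np.zeros((n_params, 4))
for k in range(n_params):
    bins = np.quantile(X[:, k], np.linspace(0, 1, n_bins + 1))
    which = np.clip(np.digitize(X[:, k], bins) - 1, 0, n_bins - 1)
    cond = np.array([moments(Y[which == b]) for b in range(n_bins)])
    # Average absolute shift of each conditional moment, normalized by the unconditional one.
    indices[k] = np.mean(np.abs(cond - m_uncond), axis=0) / np.maximum(np.abs(m_uncond), 1e-12)

for k in range(n_params):
    print(f"x{k}: mean/var/skew/kurt indices =", np.round(indices[k], 3))
```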
NASA Astrophysics Data System (ADS)
Shaw, Jeremy A.; Daescu, Dacian N.
2017-08-01
This article presents the mathematical framework to evaluate the sensitivity of a forecast error aspect to the input parameters of a weak-constraint four-dimensional variational data assimilation system (w4D-Var DAS), extending the established theory from strong-constraint 4D-Var. Emphasis is placed on the derivation of the equations for evaluating the forecast sensitivity to parameters in the DAS representation of the model error statistics, including bias, standard deviation, and correlation structure. A novel adjoint-based procedure for adaptive tuning of the specified model error covariance matrix is introduced. Results from numerical convergence tests establish the validity of the model error sensitivity equations. Preliminary experiments providing a proof-of-concept are performed using the Lorenz multi-scale model to illustrate the theoretical concepts and potential benefits for practical applications.
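In schematic form (generic notation, not the paper's w4D-Var derivation), such forecast sensitivities follow from the chain rule through the analysis:

```latex
% Schematic chain rule for the sensitivity of a scalar forecast error aspect e(x_f)
% to a DAS input parameter \beta (e.g., an element of the model error statistics),
% with x_f = M(x_a); generic notation, not the paper's equations.
\frac{\partial e}{\partial \beta}
  \;=\; \left(\frac{\partial x_a}{\partial \beta}\right)^{\!T} \mathbf{M}^{T}\, \nabla_{x_f} e
```

Here M is the tangent-linear forecast model, so the product of its adjoint with the forecast error gradient is what the adjoint-based procedure propagates back to the DAS inputs.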
Environmental sensitivity: equivocal illness in the context of place.
Fletcher, Christopher M
2006-03-01
This article presents a phenomenologically oriented description of the interaction of illness experience, social context, and place. This is used to explore an outbreak of environmental sensitivities in Nova Scotia, Canada. Environmental Sensitivity (ES) is a popular designation for bodily reactions to mundane environmental stimuli that are insignificant for most people. Mainstream medicine cannot support the popular models of this disease process and consequently illness experience is subject to ambiguity and contestation. As an 'equivocal illness', ES generates considerable social action around the nature, meaning and validity of suffering. Sense of place plays an important role in this process. In this case, the meanings that accrue to illness experience and that produce salient popular disease etiology are grounded in the experience and social construction of the Nova Scotian landscape over time. Shifting representations of place are reflected in illness experience and the meanings that arise around illness are emplaced in landscape.
NASA Astrophysics Data System (ADS)
Hartman, M. T.; Rivère, A.; Battesti, R.; Rizzo, C.
2017-12-01
In this work we present data characterizing the sensitivity of the Biréfringence Magnetique du Vide (BMV) instrument. BMV is an experiment attempting to measure vacuum magnetic birefringence (VMB) via the measurement of an ellipticity induced in a linearly polarized laser field propagating through a birefringent region of vacuum in the presence of an external magnetic field. Correlated measurements of laser noise alongside the measurement in the main detection channel allow us to separate measured sensing noise from the inherent birefringence noise of the apparatus. To this end, we model different sources of sensing noise for cavity-enhanced polarimetry experiments, such as BMV. Our goal is to determine the main sources of noise, clarifying the limiting factors of such an apparatus. We find our noise models are compatible with the measured sensitivity of BMV. In this context, we compare the phase sensitivity of separate-arm interferometers to that of a polarimetry apparatus for the discussion of current and future VMB measurements.
Distinguishing bias from sensitivity effects in multialternative detection tasks.
Sridharan, Devarajan; Steinmetz, Nicholas A; Moore, Tirin; Knudsen, Eric I
2014-08-21
Studies investigating the neural bases of cognitive phenomena increasingly employ multialternative detection tasks that seek to measure the ability to detect a target stimulus or changes in some target feature (e.g., orientation or direction of motion) that could occur at one of many locations. In such tasks, it is essential to distinguish the behavioral and neural correlates of enhanced perceptual sensitivity from those of increased bias for a particular location or choice (choice bias). However, making such a distinction is not possible with established approaches. We present a new signal detection model that decouples the behavioral effects of choice bias from those of perceptual sensitivity in multialternative (change) detection tasks. By formulating the perceptual decision in a multidimensional decision space, our model quantifies the respective contributions of bias and sensitivity to multialternative behavioral choices. With a combination of analytical and numerical approaches, we demonstrate an optimal, one-to-one mapping between model parameters and choice probabilities even for tasks involving arbitrarily large numbers of alternatives. We validated the model with published data from two ternary choice experiments: a target-detection experiment and a length-discrimination experiment. The results of this validation provided novel insights into perceptual processes (sensory noise and competitive interactions) that can accurately and parsimoniously account for observers' behavior in each task. The model will find important application in identifying and interpreting the effects of behavioral manipulations (e.g., cueing attention) or neural perturbations (e.g., stimulation or inactivation) in a variety of multialternative tasks of perception, attention, and decision-making. © 2014 ARVO.
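A generic Monte Carlo illustration of the bias-versus-sensitivity distinction in a ternary detection task (change at location 1, change at location 2, or no change): each location gets its own sensitivity d' and an additive bias applied before a max-plus-criterion decision rule. The rule, criterion and numbers are assumptions for illustration, not the authors' analytical model.

```python
# Generic multidimensional signal-detection illustration for a 3-alternative task.
import numpy as np

rng = np.random.default_rng(2)

def choice_probabilities(d_prime, bias, true_event, n_trials=200_000):
    # Decision variables at the two locations: noise plus signal (if a change occurred there).
    signal = np.zeros(2)
    if true_event in (0, 1):
        signal[true_event] = d_prime[true_event]
    evidence = rng.standard_normal((n_trials, 2)) + signal
    shifted = evidence + bias                  # bias shifts evidence before the max rule
    best = shifted.argmax(axis=1)
    detect = shifted.max(axis=1) > 1.0         # fixed detection criterion (assumed)
    choices = np.where(detect, best, 2)        # 2 = "no change" response
    return np.bincount(choices, minlength=3) / n_trials

d_prime = np.array([2.0, 2.0])   # equal sensitivity at both locations
bias = np.array([0.5, 0.0])      # extra bias toward reporting location 1

for event, label in [(0, "change@1"), (1, "change@2"), (2, "no change")]:
    print(label, np.round(choice_probabilities(d_prime, bias, event), 3))
```

With equal d' at both locations, any asymmetry in the printed choice probabilities is attributable to the bias term alone, which is the decoupling the abstract describes.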
Modeling nearshore morphological evolution at seasonal scale
Walstra, D.-J.R.; Ruggiero, P.; Lesser, G.; Gelfenbaum, G.
2006-01-01
A process-based model is compared with field measurements to test and improve our ability to predict nearshore morphological change at seasonal time scales. The field experiment, along the dissipative beaches adjacent to Grays Harbor, Washington USA, successfully captured the transition between the high-energy erosive conditions of winter and the low-energy beach-building conditions typical of summer. The experiment documented shoreline progradation on the order of 20 m and as much as 175 m of onshore bar migration. Significant alongshore variability was observed in the morphological response of the sandbars over a 4 km reach of coast. A detailed sensitivity analysis suggests that the model results are more sensitive to adjusting the sediment transport associated with asymmetric oscillatory wave motions than to adjusting the transport due to mean currents. Initial results suggest that alongshore variations in the initial bathymetry are partially responsible for the observed alongshore variable morphological response during the experiment. Copyright ASCE 2006.
de Gooijer, C D; Wijffels, R H; Tramper, J
1991-07-01
The modeling of the growth of Nitrobacter agilis cells immobilized in kappa-carrageenan is presented. A detailed description is given of the modeling of internal diffusion and growth of cells in the support matrix in addition to external mass transfer resistance. The model predicts the substrate and biomass profiles in the support as well as the macroscopic oxygen consumption rate of the immobilized biocatalyst in time. The model is tested by experiments with continuously operated airlift loop reactors containing cells immobilized in kappa-carrageenan. The model describes the experimental data very well. It is clearly shown that external mass transfer may not be neglected. Furthermore, a sensitivity analysis of the parameters at their values during the experiments revealed that, apart from the radius of the spheres and the substrate bulk concentration, the external mass transfer resistance coefficient is the most sensitive parameter for our case.
NASA Astrophysics Data System (ADS)
Kovilakam, Mahesh; Mahajan, Salil; Saravanan, R.; Chang, Ping
2017-10-01
We alleviate the bias in the tropospheric vertical distribution of black carbon aerosols (BC) in the Community Atmosphere Model (CAM4) using Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO)-derived vertical profiles. A suite of sensitivity experiments is conducted with 1x, 5x, and 10x the present-day model-estimated BC concentration climatology, with (corrected, CC) and without (uncorrected, UC) the CALIPSO-corrected BC vertical distribution. The globally averaged top-of-the-atmosphere radiative flux perturbation of the CC experiments is ~8-50% smaller compared to the uncorrected (UC) BC experiments, largely due to an increase in low-level clouds. The global average surface temperature increases, the global average precipitation decreases, and the ITCZ moves northward with the increase in BC radiative forcing, irrespective of the vertical distribution of BC. Further, tropical expansion metrics for the poleward extent of the Northern Hemisphere Hadley cell (HC) indicate that simulated HC expansion is not sensitive to existing model biases in BC vertical distribution.
NASA Astrophysics Data System (ADS)
Liu, Yan; Hussain, Tariq; Huang, Fenglei; Duan, Zhuoping
2016-07-01
All solid explosives in practical use are more or less porous. Although it is known that the change in porosity affects the shock sensitivity of solid explosives, the effect of small changes in porosity on the sensitivity needs to be determined for safe and efficient use of explosive materials. In this study, the influence of a small change in porosity on shock initiation and the subsequent detonation growth process of a plastic-bonded explosive PBXC03, composed of 87% cyclotetramethylene-tetranitramine (HMX), 7% triaminotrinitrobenzene (TATB), and 6% Viton by weight, are investigated by shock to detonation transition experiments. Two explosive formulations of PBXC03 having the same initial grain sizes pressed to 98 and 99% of theoretical mass density (1.873 g/cm3) respectively are tested using the in situ manganin piezoresistive pressure gauge technique. Numerical modeling of the experiments is performed using an ignition and growth reactive flow model. Reasonable agreement with the experimental results is obtained by increasing the growth term coefficient in the Lee-Tarver ignition and growth model with porosity. Combining the experimental and simulation results shows that the shock sensitivity increases with porosity for PBXC03 having the same explosive initial grain sizes for the pressures (about 3.1 GPa) applied in the experiments.
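For orientation, the ignition and growth reaction rate is commonly written in the three-term form below (schematic; the exact coefficients and exponents are calibration constants, and the porosity-dependent adjustment reported above acts on the growth coefficient G1).

```latex
% Schematic three-term ignition and growth reaction rate (F = reacted mass fraction,
% p = pressure, \mu = \rho/\rho_0 - 1 the compression). Exponents and coefficients are
% calibration constants; the study above increases the growth coefficient G_1 with porosity.
\frac{dF}{dt} \;=\; I\,(1-F)^{b}\,(\mu - a)^{x}
              \;+\; G_{1}\,(1-F)^{c}\,F^{d}\,p^{y}
              \;+\; G_{2}\,(1-F)^{e}\,F^{g}\,p^{z}
```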
AFFINE-CORRECTED PARADISE: FREE-BREATHING PATIENT-ADAPTIVE CARDIAC MRI WITH SENSITIVITY ENCODING
Sharif, Behzad; Bresler, Yoram
2013-01-01
We propose a real-time cardiac imaging method with parallel MRI that allows for free breathing during imaging and does not require cardiac or respiratory gating. The method is based on the recently proposed PARADISE (Patient-Adaptive Reconstruction and Acquisition Dynamic Imaging with Sensitivity Encoding) scheme. The new acquisition method adapts the PARADISE k-t space sampling pattern according to an affine model of the respiratory motion. The reconstruction scheme involves multi-channel time-sequential imaging with time-varying channels. All model parameters are adapted to the imaged patient as part of the experiment and drive both data acquisition and cine reconstruction. Simulated cardiac MRI experiments using the realistic NCAT phantom show high quality cine reconstructions and robustness to modeling inaccuracies. PMID:24390159
NASA Astrophysics Data System (ADS)
Thomas, R. Q.; Bonan, G. B.; Goodale, C. L.
2012-12-01
In many forest ecosystems, nitrogen deposition is increasing carbon storage and reducing climate warming from fossil fuel emissions. Accurately modeling the forest carbon sequestration response to elevated nitrogen deposition using global biogeochemical models coupled to climate models is therefore important. Here, we use observations of the forest carbon response to both nitrogen fertilization experiments and nitrogen deposition gradients to test and improve a global biogeochemical model (CLM-CN 4.0). We introduce a series of model modifications to the CLM-CN that 1) creates a more closed nitrogen cycle with reduced nitrogen fixation and N gas loss and 2) includes buffering of plant nitrogen uptake and buffering of soil nitrogen available for plants and microbial processes. Overall, the modifications improved the comparison of the model predictions to the observational data by increasing the carbon storage response to historical nitrogen deposition (1850-2004) in temperate forest ecosystems by 144% and reducing the response to nitrogen fertilization. The increased sensitivity to nitrogen deposition was primarily attributable to greater retention of nitrogen deposition in the ecosystem and a greater role of synergy between nitrogen deposition and rising atmospheric CO2. Based on our results, we suggest that nitrogen retention should be an important attribute investigated in model inter-comparisons. To understand the specific ecosystem processes that contribute to the sensitivity of carbon storage to nitrogen deposition, we examined sensitivity to nitrogen deposition in a set of intermediary models that isolate the key differences in model structure between the CLM-CN 4.0 and the modified version. We demonstrate that the nitrogen deposition response was most sensitive to the implementation of a more closed nitrogen cycle and buffered plant uptake of soil mineral nitrogen, and less sensitive to modifications of the canopy scaling of photosynthesis, soil buffering of available nitrogen, and plant buffering of labile nitrogen. By comparing carbon storage sensitivity to observational data from both nitrogen deposition gradients and nitrogen fertilization experiments, we show different observed estimates of sensitivity between these two approaches could be explained by differences in the magnitude and time-scale of nitrogen additions.
NASA Astrophysics Data System (ADS)
Pohl, Benjamin; Douville, Hervé
2011-10-01
The CNRM atmospheric general circulation model Arpege-Climat is relaxed towards atmospheric reanalyses outside the 10°S-32°N 30°W-50°E domain in order to disentangle the regional versus large-scale sources of climatological biases and interannual variability of the West African monsoon (WAM). On the one hand, the main climatological features of the monsoon, including the spatial distribution of summer precipitation, are only weakly improved by the nudging, thereby suggesting the regional origin of the Arpege-Climat biases. On the other hand, the nudging technique is relatively efficient to control the interannual variability of the WAM dynamics, though the impact on rainfall variability is less clear. Additional sensitivity experiments focusing on the strong 1994 summer monsoon suggest that the weak sensitivity of the model biases is not an artifact of the nudging design, but the evidence that regional physical processes are the main limiting factors for a realistic simulation of monsoon circulation and precipitation in the Arpege-Climat model. Sensitivity experiments to soil moisture boundary conditions are also conducted and highlight the relevance of land-atmosphere coupling for the amplification of precipitation biases. Nevertheless, the land surface hydrology is not the main explanation for the model errors that are rather due to deficiencies in the atmospheric physics. The intraseasonal timescale and the model internal variability are discussed in a companion paper.
Noise sensitivity and loudness derivative index for urban road traffic noise annoyance computation.
Gille, Laure-Anne; Marquis-Favre, Catherine; Weber, Reinhard
2016-12-01
Urban road traffic composed of powered-two-wheelers (PTWs), buses, heavy, and light vehicles is a major source of noise annoyance. In order to assess annoyance models considering different acoustical and non-acoustical factors, a laboratory experiment on short-term annoyance due to urban road traffic noise was conducted. At the end of the experiment, participants were asked to rate their noise sensitivity and to describe the noise sequences they heard. This verbalization task highlights that annoyance ratings are highly influenced by the presence of PTWs and by different acoustical features: noise intensity, irregular temporal amplitude variation, regular amplitude modulation, and spectral content. These features, except irregular temporal amplitude variation, are satisfactorily characterized by the loudness, the total energy of tonal components, and the sputtering and nasal indices. Introduction of the temporal derivative of loudness allows successful modeling of perceived amplitude variations. Its contribution to the tested annoyance models is high and seems to be higher than that of the mean loudness index. A multilevel regression is performed to assess annoyance models using selected acoustical indices and noise sensitivity. Three models are found to be promising for further studies that aim to enhance current annoyance models.
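A minimal ordinary least-squares sketch of an annoyance model built from a loudness index, its temporal derivative, and self-reported noise sensitivity. The data are synthetic placeholders and the pooled fit only approximates the multilevel regression used in the study.

```python
# Minimal pooled regression sketch: annoyance ~ loudness + dLoudness/dt + noise sensitivity.
# All variable values are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(3)
n = 120

loudness = rng.uniform(20, 60, n)              # mean loudness index (sone), synthetic
loudness_deriv = rng.uniform(0, 5, n)          # temporal derivative of loudness, synthetic
noise_sensitivity = rng.uniform(0, 10, n)      # self-rated sensitivity, synthetic
annoyance = (0.08 * loudness + 0.6 * loudness_deriv
             + 0.3 * noise_sensitivity + rng.normal(0, 0.5, n))

X = np.column_stack([np.ones(n), loudness, loudness_deriv, noise_sensitivity])
beta, *_ = np.linalg.lstsq(X, annoyance, rcond=None)
print("intercept, loudness, dLoudness/dt, sensitivity coefficients:", np.round(beta, 3))
```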
Studer, M; Stewart, J; Egloff, N; Zürcher, E; von Känel, R; Brodbeck, J; Grosse Holtforth, M
2017-02-01
Increased pain sensitivity is characteristic for patients with chronic pain disorder with somatic and psychological factors (F45.41). Persistent stress can induce, sustain, and intensify pain sensitivity, thereby modulating pain perception. In this context, it would be favorable to investigate which psychosocial stressors are empirically linked to pain sensitivity. The aim of this study was to examine the relationship between psychosocial stressors and pain sensitivity in a naturalistic sample of patients with chronic pain disorder with somatic and psychological factors (F45.41). We assessed 166 patients with chronic pain disorder with somatic and psychological factors (F45.41) at entry into an inpatient pain clinic. Pain sensitivity was measured with a pain provocation test (Algopeg) at the middle finger and earlobe. Stressors assessed were exposure to war experiences, adverse childhood experiences, illness-related inability to work, relationship problems, and potentially life-threatening accidents. Correlation analyses and structural equation modeling were used to examine which stressors showed the strongest prediction of pain sensitivity. Patients exhibited generally heightened pain sensitivity. Both exposure to war and illness-related inability to work showed significant bivariate correlations with pain sensitivity. In addition to age, they also predicted a further increase in pain sensitivity in the structural equation model. Bearing in mind the limitations of this cross-sectional study, these findings may contribute to a better understanding of the link between psychosocial stressors and pain sensitivity.
Lorentz and CPT Tests with Atoms
NASA Astrophysics Data System (ADS)
Vargas Silva, Arnaldo J.
The prospects for using atomic-spectroscopy experiments to test Lorentz and CPT symmetry are investigated. Phenomenological models for Lorentz violation studied in this work include ones with contributions from all quadratic operators for a Dirac fermion in the Lagrange density of the Standard-Model Extension (SME), without restriction on the operator mass dimension. The systems considered include atoms composed of conventional matter, antimatter, and second-generation particles. Generic expressions for the Lorentz-violating energy shifts applicable to a broad range of atomic transitions are obtained. Signals for Lorentz violation that can in principle be studied in spectroscopic experiments are identified from the theoretical corrections to the spectrum. Some of these signals include sidereal and annual variations of atomic transition frequencies measured in a laboratory on the surface of the Earth. Other possibilities include effects produced by changing the orientation of the applied magnetic field or by realizing space-based experiments. Discrepancies in the experimental values for fundamental constants and energy levels based on self-consistent predictions from the Standard Model also offer potential signals for Lorentz violation. The sensitivities of different experiments to distinct sets of coefficients for Lorentz violation are considered. Using atoms composed of different particle species allows measurements of coefficients for Lorentz violation in different fermion sectors of the SME. Experiments comparing hydrogen and antihydrogen can discriminate between coefficients for Lorentz violation that are associated with CPT-odd or CPT-even operators. Additionally, certain systems and transitions are more sensitive to nonminimal operators, while others are particularly sensitive to minimal ones.
Sensitivity Analysis of the Land Surface Model NOAH-MP for Different Model Fluxes
NASA Astrophysics Data System (ADS)
Mai, Juliane; Thober, Stephan; Samaniego, Luis; Branch, Oliver; Wulfmeyer, Volker; Clark, Martyn; Attinger, Sabine; Kumar, Rohini; Cuntz, Matthias
2015-04-01
Land Surface Models (LSMs) use a plenitude of process descriptions to represent the carbon, energy and water cycles. They are highly complex and computationally expensive. Practitioners, however, are often only interested in specific outputs of the model such as latent heat or surface runoff. In model applications like parameter estimation, the most important parameters are then chosen by experience or expert knowledge. Hydrologists interested in surface runoff therefore choose mostly soil parameters, while biogeochemists interested in carbon fluxes focus on vegetation parameters. However, this might lead to the omission of parameters that are important, for example, through strong interactions with the parameters chosen. It also happens during model development that some process descriptions contain fixed values, which are supposedly unimportant parameters. These hidden parameters normally remain undetected although they might be highly relevant during model calibration. Sensitivity analyses are used to identify informative model parameters for a specific model output. Standard methods for sensitivity analysis such as Sobol indexes require large numbers of model evaluations, specifically in the case of many model parameters. We hence propose to first use a recently developed, inexpensive sequential screening method based on Elementary Effects that has proven to identify the relevant informative parameters. This reduces the number of parameters and therefore the number of model evaluations for subsequent analyses such as sensitivity analysis or model calibration. In this study, we quantify parametric sensitivities of the land surface model NOAH-MP, a state-of-the-art LSM used at regional scale as the land surface scheme of the atmospheric Weather Research and Forecasting Model (WRF). NOAH-MP contains multiple process parameterizations yielding a considerable number of parameters (~100). Sensitivities for the three model outputs (a) surface runoff, (b) soil drainage and (c) latent heat are calculated on twelve Model Parameter Estimation Experiment (MOPEX) catchments ranging in size from 1020 to 4421 km2. This allows investigation of parametric sensitivities for distinct hydro-climatic characteristics, emphasizing different land-surface processes. The sequential screening identifies the most informative parameters of NOAH-MP for the different model output variables. The number of parameters is reduced substantially, to approximately 25 for each of the three model outputs. The subsequent Sobol method quantifies the sensitivities of these informative parameters. The study demonstrates the existence of sensitive, important parameters in almost all parts of the model irrespective of the considered output. Soil parameters, for example, are informative for all three output variables, whereas plant parameters are informative not only for latent heat but also for soil drainage, because soil drainage is strongly coupled to transpiration through the soil water balance. These results contrast with the choice of only soil parameters in hydrological studies and only plant parameters in biogeochemical ones. The sequential screening identified several important hidden parameters that carry large sensitivities and hence have to be included during model calibration.
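A minimal sketch of the Elementary Effects (Morris) screening step described above: random one-at-a-time trajectories yield elementary effects per parameter, and the mean absolute effect (mu*) ranks parameters for the subsequent Sobol analysis. The toy five-parameter function stands in for NOAH-MP, which cannot be called here.

```python
# Minimal Morris elementary-effects screening sketch: build random one-at-a-time
# trajectories, compute elementary effects for each parameter, and rank parameters by the
# mean absolute effect (mu*). The toy function stands in for an expensive LSM output.
import numpy as np

rng = np.random.default_rng(4)

def toy_model(x):
    # Stand-in for one model output (e.g., latent heat) as a function of 5 parameters in [0, 1].
    return 3.0 * x[0] + x[1] ** 2 + 0.5 * x[2] * x[0] + 0.05 * x[3]   # x[4] is inert

def morris_mu_star(model, n_params, n_trajectories=50, levels=8):
    delta = levels / (2.0 * (levels - 1))
    effects = np.zeros((n_trajectories, n_params))
    for t in range(n_trajectories):
        x = rng.integers(0, levels // 2, n_params) / (levels - 1)   # random base point on the grid
        y = model(x)
        for k in rng.permutation(n_params):                          # perturb one factor at a time
            x_new = x.copy()
            x_new[k] += delta
            y_new = model(x_new)
            effects[t, k] = (y_new - y) / delta
            x, y = x_new, y_new
    return np.abs(effects).mean(axis=0)                               # mu*: mean absolute effect

mu_star = morris_mu_star(toy_model, n_params=5)
print("mu* per parameter:", np.round(mu_star, 3))
print("screened (most informative first):", np.argsort(mu_star)[::-1])
```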
Plausible rice yield losses under future climate warming.
Zhao, Chuang; Piao, Shilong; Wang, Xuhui; Huang, Yao; Ciais, Philippe; Elliott, Joshua; Huang, Mengtian; Janssens, Ivan A; Li, Tao; Lian, Xu; Liu, Yongwen; Müller, Christoph; Peng, Shushi; Wang, Tao; Zeng, Zhenzhong; Peñuelas, Josep
2016-12-19
Rice is the staple food for more than 50% of the world's population (refs 1-3). Reliable prediction of changes in rice yield is thus central for maintaining global food security. This is an extraordinary challenge. Here, we compare the sensitivity of rice yield to temperature increase derived from field warming experiments and three modelling approaches: statistical models, local crop models and global gridded crop models. Field warming experiments produce a substantial rice yield loss under warming, with an average temperature sensitivity of -5.2 ± 1.4% K⁻¹. Local crop models give a similar sensitivity (-6.3 ± 0.4% K⁻¹), but statistical and global gridded crop models both suggest less negative impacts of warming on yields (-0.8 ± 0.3% and -2.4 ± 3.7% K⁻¹, respectively). Using data from field warming experiments, we further propose a conditional probability approach to constrain the large range of global gridded crop model results for the future yield changes in response to warming by the end of the century (from -1.3% to -9.3% K⁻¹). The constraint implies a more negative response to warming (-8.3 ± 1.4% K⁻¹) and reduces the spread of the model ensemble by 33%. This yield reduction exceeds that estimated by the International Food Policy Research Institute assessment (-4.2 to -6.4% K⁻¹) (ref. 4). Our study suggests that without CO2 fertilization, effective adaptation and genetic improvement, severe rice yield losses are plausible under intensive climate warming scenarios.
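A simplified illustration of the constraint idea described above: weight an ensemble of modeled temperature sensitivities by the likelihood of each value under the field-experiment estimate, then recompute the ensemble mean and spread. The ensemble here is a synthetic placeholder and the weighting is a generic Bayesian-style scheme that may differ from the paper's conditional probability approach.

```python
# Simplified observational-constraint sketch: weight each ensemble member by the likelihood
# of its sensitivity under the field-experiment distribution, then recompute mean and spread.
import numpy as np

rng = np.random.default_rng(5)

obs_mean, obs_sd = -5.2, 1.4                       # field-experiment sensitivity (% per K)
ensemble = rng.uniform(-9.3, -1.3, size=200)       # synthetic stand-in for the gridded-model range

weights = np.exp(-0.5 * ((ensemble - obs_mean) / obs_sd) ** 2)
weights /= weights.sum()

def weighted_mean_sd(x, w):
    mean = np.sum(w * x)
    return mean, np.sqrt(np.sum(w * (x - mean) ** 2))

raw_mean, raw_sd = ensemble.mean(), ensemble.std()
con_mean, con_sd = weighted_mean_sd(ensemble, weights)
print(f"unconstrained: {raw_mean:.1f} +/- {raw_sd:.1f} % per K")
print(f"constrained:   {con_mean:.1f} +/- {con_sd:.1f} % per K "
      f"(spread reduced by {100 * (1 - con_sd / raw_sd):.0f}%)")
```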
Elliott, Elizabeth J.; Yu, Sungduk; Kooperman, Gabriel J.; ...
2016-05-01
The sensitivities of simulated mesoscale convective systems (MCSs) in the central U.S. to microphysics and grid configuration are evaluated here in a global climate model (GCM) that also permits global-scale feedbacks and variability. Since conventional GCMs do not simulate MCSs, studying their sensitivities in a global framework useful for climate change simulations has not previously been possible. To date, MCS sensitivity experiments have relied on controlled cloud resolving model (CRM) studies with limited domains, which avoid internal variability and neglect feedbacks between local convection and larger-scale dynamics. However, recent work with superparameterized (SP) GCMs has shown that eastward propagating MCS-like events are captured when embedded CRMs replace convective parameterizations. This study uses a SP version of the Community Atmosphere Model version 5 (SP-CAM5) to evaluate MCS sensitivities, applying an objective empirical orthogonal function algorithm to identify MCS-like events, and harmonizing composite storms to account for seasonal and spatial heterogeneity. A five-summer control simulation is used to assess the magnitude of internal and interannual variability relative to 10 sensitivity experiments with varied CRM parameters, including ice fall speed, one-moment and two-moment microphysics, and grid spacing. MCS sensitivities were found to be subtle with respect to internal variability, and indicate that ensembles of over 100 storms may be necessary to detect robust differences in SP-GCMs. Furthermore, these results emphasize that the properties of MCSs can vary widely across individual events, and improving their representation in global simulations with significant internal variability may require comparison to long (multidecadal) time series of observed events rather than single season field campaigns.
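A sketch of the kind of objective EOF decomposition used to pick out propagating events: take the SVD of a time-longitude anomaly field and inspect the leading pair of modes. The synthetic field is a placeholder for SP-CAM5 output, and flagging events from the leading principal components is a simplification of the paper's algorithm.

```python
# EOF decomposition via SVD on a synthetic time-longitude (Hovmoller) anomaly field.
import numpy as np

rng = np.random.default_rng(7)
n_time, n_lon = 480, 96
t = np.arange(n_time)[:, None]
lon = np.linspace(0, 2 * np.pi, n_lon)[None, :]

# Synthetic eastward-propagating signal plus noise (placeholder for simulated precipitation).
field = np.cos(3 * lon - 0.2 * t) + 0.5 * rng.standard_normal((n_time, n_lon))
anomaly = field - field.mean(axis=0)

U, S, Vt = np.linalg.svd(anomaly, full_matrices=False)
explained = S ** 2 / np.sum(S ** 2)
print("variance explained by leading EOF pair:", np.round(explained[:2].sum(), 2))

# A propagating signal projects onto a quadrature pair of EOFs; its principal components
# (columns of U scaled by S) can then be thresholded to flag individual events.
pcs = U[:, :2] * S[:2]
print("example PC amplitudes:", np.round(np.hypot(pcs[:, 0], pcs[:, 1])[:5], 2))
```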
The LUX-Zeplin Dark Matter Detector
NASA Astrophysics Data System (ADS)
Mock, Jeremy; Lux-Zeplin (Lz) Collaboration
2016-03-01
The LUX-ZEPLIN (LZ) detector is a second generation dark matter experiment that will operate at the 4850-foot level of the Sanford Underground Research Facility as a follow-up to the LUX detector, currently the world's most sensitive WIMP direct detection experiment. The LZ detector will contain 7 tonnes of active liquid xenon with a 5.6 tonne fiducial mass in the TPC. The TPC is surrounded by an active, instrumented, liquid-xenon "skin" region to veto gammas, then a layer of liquid scintillator to veto neutrons, all contained within a water shield. Modeling the detector is key to understanding the expected background, which in turn leads to a better understanding of the projected sensitivity, currently expected to be 2 × 10⁻⁴⁸ cm² for a 50 GeV WIMP. I will discuss the current status of the LZ experiment as well as its projected sensitivity.
NASA Astrophysics Data System (ADS)
Godinez, H. C.; Rougier, E.; Osthus, D.; Srinivasan, G.
2017-12-01
Fracture propagation plays a key role in a number of applications of interest to the scientific community. From dynamic fracture processes like spall and fragmentation in metals to the detection of gas flow in static fractures in rock and the subsurface, the dynamics of fracture propagation is important to various engineering and scientific disciplines. In this work we apply a global sensitivity analysis to the Hybrid Optimization Software Suite (HOSS), a multi-physics software tool based on the combined finite-discrete element method that is used to describe material deformation and failure (i.e., fracture and fragmentation) under a number of user-prescribed boundary conditions. We explore the sensitivity of HOSS to various model parameters that influence how fractures propagate through a material of interest. The parameters control the softening curve that the model relies on to determine fracturing within each element of the mesh, as well as other internal parameters that influence fracture behavior. The sensitivity method we apply is the Fourier Amplitude Sensitivity Test (FAST), a global sensitivity method used to explore how each parameter influences the simulated fracturing and to determine the key model parameters that have the most impact on the model. We present several sensitivity experiments for different combinations of model parameters and compare against experimental data for verification.
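A plain-NumPy sketch of the classical FAST estimator referred to above: each parameter is driven along a search curve at its own integer frequency, and the first-order index is the share of output variance found at that frequency and its harmonics. The toy function stands in for HOSS, and the frequencies and harmonic count are illustrative rather than optimal interference-free choices.

```python
# Classical FAST first-order indices from search curves and Fourier coefficients.
import numpy as np

def fast_first_order(model, freqs, n_samples=2049, n_harmonics=4):
    freqs = np.asarray(freqs)
    s = np.linspace(-np.pi, np.pi, n_samples, endpoint=False)
    # FAST search curves: x_i(s) in (0, 1), oscillating at frequency freqs[i].
    X = 0.5 + np.arcsin(np.sin(np.outer(freqs, s))) / np.pi
    y = model(X)
    k = np.arange(1, n_samples // 2)
    A = (y[None, :] * np.cos(np.outer(k, s))).mean(axis=1)
    B = (y[None, :] * np.sin(np.outer(k, s))).mean(axis=1)
    spectrum = A ** 2 + B ** 2
    total_variance = 2.0 * spectrum.sum()
    indices = []
    for w in freqs:
        harmonics = w * np.arange(1, n_harmonics + 1)
        indices.append(2.0 * spectrum[harmonics - 1].sum() / total_variance)
    return np.array(indices)

def toy_model(X):
    # Stand-in for a HOSS output (e.g., peak stress before failure) vs. 3 assumed parameters.
    softening_slope, tensile_strength, friction = X
    return 2.0 * tensile_strength + softening_slope ** 2 + 0.1 * friction

print(np.round(fast_first_order(toy_model, freqs=[11, 35, 73]), 3))
```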
NASA Astrophysics Data System (ADS)
Demaria, Eleonora M.; Nijssen, Bart; Wagener, Thorsten
2007-06-01
Current land surface models use increasingly complex descriptions of the processes that they represent. Increase in complexity is accompanied by an increase in the number of model parameters, many of which cannot be measured directly at large spatial scales. A Monte Carlo framework was used to evaluate the sensitivity and identifiability of ten parameters controlling surface and subsurface runoff generation in the Variable Infiltration Capacity model (VIC). Using the Monte Carlo Analysis Toolbox (MCAT), parameter sensitivities were studied for four U.S. watersheds along a hydroclimatic gradient, based on a 20-year data set developed for the Model Parameter Estimation Experiment (MOPEX). Results showed that simulated streamflows are sensitive to three parameters when evaluated with different objective functions. Sensitivity of the infiltration parameter (b) and the drainage parameter (exp) were strongly related to the hydroclimatic gradient. The placement of vegetation roots played an important role in the sensitivity of model simulations to the thickness of the second soil layer (thick2). Overparameterization was found in the base flow formulation indicating that a simplified version could be implemented. Parameter sensitivity was more strongly dictated by climatic gradients than by changes in soil properties. Results showed how a complex model can be reduced to a more parsimonious form, leading to a more identifiable model with an increased chance of successful regionalization to ungauged basins. Although parameter sensitivities are strictly valid for VIC, this model is representative of a wider class of macroscale hydrological models. Consequently, the results and methodology will have applicability to other hydrological models.
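A minimal Monte Carlo regional sensitivity sketch in the spirit of the MCAT-style analysis described above: sample the parameters, score each run against observations, split runs into behavioral and non-behavioral sets, and use the Kolmogorov-Smirnov distance between the two parameter distributions as a sensitivity measure. The toy rainfall-runoff function and parameter names are stand-ins for VIC.

```python
# Regional sensitivity analysis sketch: behavioral/non-behavioral split plus KS distance.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(6)

def toy_runoff_model(params, precip):
    infiltration_b, drainage_exp, layer_thickness = params
    storage_effect = np.tanh(precip / (5.0 * layer_thickness))
    return precip * storage_effect ** infiltration_b * np.exp(-0.1 * drainage_exp)

precip = rng.gamma(2.0, 5.0, size=365)                     # synthetic daily forcing
truth = toy_runoff_model([1.2, 2.0, 1.0], precip)
obs = truth + rng.normal(0, 0.3, truth.size)               # synthetic "observations"

n_runs, names = 2000, ["b", "exp", "thick2"]
samples = rng.uniform([0.1, 0.5, 0.2], [3.0, 5.0, 3.0], size=(n_runs, 3))
rmse = np.array([np.sqrt(np.mean((toy_runoff_model(p, precip) - obs) ** 2)) for p in samples])

behavioral = rmse < np.quantile(rmse, 0.1)                 # best 10% of runs
for k, name in enumerate(names):
    d = ks_2samp(samples[behavioral, k], samples[~behavioral, k]).statistic
    print(f"{name}: KS distance = {d:.2f}")
```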
Linear Mathematical Model for Seam Tracking with an Arc Sensor in P-GMAW Processes
Liu, Wenji; Li, Liangyu; Hong, Ying; Yue, Jianfeng
2017-01-01
Arc sensors have been used in seam tracking and widely studied since the 80s, and commercial arc sensing products for T- and V-shaped grooves have been developed. However, it is difficult to use these arc sensors in narrow gap welding because the arc stability and sensing accuracy are not satisfactory. Pulsed gas metal arc welding (P-GMAW) has been successfully applied in narrow gap welding and all-position welding processes, so it is worthwhile to research P-GMAW arc sensing technology. In this paper, we derived a linear mathematical P-GMAW model for arc sensing, and the assumptions for the model are verified through experiments and finite element methods. Finally, the linear characteristics of the mathematical model were investigated. In torch height changing experiments, uphill experiments, and groove angle changing experiments the P-GMAW arc signals all satisfied the linear rules. In addition, the faster the welding speed, the higher the arc signal sensitivity; the smaller the groove angle, the greater the arc sensitivity. The arc signal variation rate needs to be modified according to the welding power, groove angles, and weaving or rotation speed. PMID:28335425
NASA Technical Reports Server (NTRS)
Stordal, Frode; Garcia, Rolando R.
1987-01-01
The 1-1/2-D model of Holton (1986), which is actually a highly truncated two-dimensional model, describes latitudinal variations of tracer mixing ratios in terms of their projections onto second-order Legendre polynomials. The present study extends the work of Holton by including tracers with photochemical production in the stratosphere (O3 and NOy). It also includes latitudinal variations in the photochemical sources and sinks, improving slightly the calculated global mean profiles for the long-lived tracers studied by Holton and improving substantially the latitudinal behavior of ozone. Sensitivity tests of the dynamical parameters in the model are performed, showing that the response of the model to changes in vertical residual meridional winds and horizontal diffusion coefficients is similar to that of a full two-dimensional model. A simple ozone perturbation experiment shows the model's ability to reproduce large-scale latitudinal variations in total ozone column depletions as well as ozone changes in the chemically controlled upper stratosphere.
NASA Astrophysics Data System (ADS)
Kageyama, Masa; Albani, Samuel; Braconnot, Pascale; Harrison, Sandy P.; Hopcroft, Peter O.; Ivanovic, Ruza F.; Lambert, Fabrice; Marti, Olivier; Peltier, W. Richard; Peterschmitt, Jean-Yves; Roche, Didier M.; Tarasov, Lev; Zhang, Xu; Brady, Esther C.; Haywood, Alan M.; LeGrande, Allegra N.; Lunt, Daniel J.; Mahowald, Natalie M.; Mikolajewicz, Uwe; Nisancioglu, Kerim H.; Otto-Bliesner, Bette L.; Renssen, Hans; Tomas, Robert A.; Zhang, Qiong; Abe-Ouchi, Ayako; Bartlein, Patrick J.; Cao, Jian; Li, Qiang; Lohmann, Gerrit; Ohgaito, Rumi; Shi, Xiaoxu; Volodin, Evgeny; Yoshida, Kohei; Zhang, Xiao; Zheng, Weipeng
2017-11-01
The Last Glacial Maximum (LGM, 21 000 years ago) is one of the suite of paleoclimate simulations included in the current phase of the Coupled Model Intercomparison Project (CMIP6). It is an interval when insolation was similar to the present, but global ice volume was at a maximum, eustatic sea level was at or close to a minimum, greenhouse gas concentrations were lower, atmospheric aerosol loadings were higher than today, and vegetation and land-surface characteristics were different from today. The LGM has been a focus for the Paleoclimate Modelling Intercomparison Project (PMIP) since its inception, and thus many of the problems that might be associated with simulating such a radically different climate are well documented. The LGM state provides an ideal case study for evaluating climate model performance because the changes in forcing and temperature between the LGM and pre-industrial are of the same order of magnitude as those projected for the end of the 21st century. Thus, the CMIP6 LGM experiment could provide additional information that can be used to constrain estimates of climate sensitivity. The design of the Tier 1 LGM experiment (lgm) includes an assessment of uncertainties in boundary conditions, in particular through the use of different reconstructions of the ice sheets and of the change in dust forcing. Additional (Tier 2) sensitivity experiments have been designed to quantify feedbacks associated with land-surface changes and aerosol loadings, and to isolate the role of individual forcings. Model analysis and evaluation will capitalize on the relative abundance of paleoenvironmental observations and quantitative climate reconstructions already available for the LGM.
The importance of wind-flux feedbacks during the November CINDY-DYNAMO MJO event
NASA Astrophysics Data System (ADS)
Riley Dellaripa, Emily; Maloney, Eric; van den Heever, Susan
2015-04-01
High-resolution, large-domain cloud resolving model (CRM) simulations probing the importance of wind-flux feedbacks to Madden-Julian Oscillation (MJO) convection are performed for the November 2011 CINDY-DYNAMO MJO event. The work is motivated by observational analysis from RAMA buoys in the Indian Ocean and TRMM precipitation retrievals that show a positive correlation between MJO precipitation and wind-induced surface fluxes, especially latent heat fluxes, during and beyond the CINDY-DYNAMO time period. Simulations are done using Colorado State University's Regional Atmospheric Modeling System (RAMS). The domain setup is oceanic and spans 1000 km x 1000 km with 1.5 km horizontal resolution and 65 stretched vertical levels, centered on the location of Gan Island - one of the major CINDY-DYNAMO observation points. The model is initialized with ECMWF reanalysis and Aqua MODIS sea surface temperatures. Nudging from ECMWF reanalysis is applied at the domain periphery to encourage realistic evolution of MJO convection. The control experiment is run for the entire month of November so that the suppressed, active, and transitional phases of the MJO are all modeled. In the control experiment, wind-induced surface fluxes are activated through the surface bulk aerodynamic formula and allowed to evolve organically. Sensitivity experiments are done by restarting the control run one week into the simulation and controlling the wind-induced flux feedbacks. In one sensitivity experiment, wind-induced surface flux feedbacks are completely denied, while in another experiment the winds are kept constant at the control simulation's mean surface wind speed. The evolution of convection, especially on the mesoscale, is compared between the control and sensitivity simulations.
NASA Technical Reports Server (NTRS)
Jouzel, Jean; Koster, R. D.; Suozzo, R. J.; Russell, G. L.; White, J. W. C.
1991-01-01
Incorporating the full geochemical cycles of stable water isotopes (HDO and H2O-18) into an atmospheric general circulation model (GCM) allows an improved understanding of global delta-D and delta-O-18 distributions and might even allow an analysis of the GCM's hydrological cycle. A detailed sensitivity analysis using the NASA/Goddard Institute for Space Studies (GISS) model II GCM is presented that examines the nature of isotope modeling. The tests indicate that delta-D and delta-O-18 values in nonpolar regions are not strongly sensitive to details in the model precipitation parameterizations. This result, while implying that isotope modeling has limited potential use in the calibration of GCM convection schemes, also suggests that certain necessarily arbitrary aspects of these schemes are adequate for many isotope studies. Deuterium excess, a second-order variable, does show some sensitivity to precipitation parameterization and thus may be more useful for GCM calibration.
Experimental Searches for Exotic Short-Range Forces Using Mechanical Oscillators
NASA Astrophysics Data System (ADS)
Weisman, Evan
Experimental searches for forces beyond gravity and electromagnetism at short range have attracted a great deal of attention over the last decade. In this thesis I describe the test mass development for two new experiments searching for forces below 1 mm. Both modify a previous experiment that used 1 kHz mechanical oscillators as test masses with a stiff conducting shield between them to suppress backgrounds, a promising technique for probing exceptionally small distances at the limit of instrumental thermal noise. To further reduce thermal noise, one experiment will use plated silicon test masses at cryogenic temperatures. The other experiment, which searches for spin-dependent interactions, will apply the spin-polarizable material Dy3Fe5O12 to the test mass surfaces. This material exhibits orbital compensation of the magnetism associated with its intrinsic electron spin, minimizing magnetic backgrounds. Several plated silicon test mass prototypes were fabricated using photolithography (useful in both experiments), and spin-dependent materials were synthesized with a simple chemical recipe. Both silicon and spin-dependent test masses demonstrate the mechanical and magnetic properties necessary for sensitive experiments. I also describe sensitivity calculations for another proposed spin-dependent experiment, based on a modified search for the electron electric dipole moment, which show unprecedented sensitivity to exotic monopole-dipole forces. Inspired by a finite element model, a study attempting to maximize detector quality factor as a function of geometry is also presented, with experimental results so far not explained by the model.
The visual discrimination of bending.
Norman, J Farley; Wiesemann, Elizabeth Y; Norman, Hideko F; Taylor, M Jett; Craft, Warren D
2007-01-01
The sensitivity of observers to nonrigid bending was evaluated in two experiments. In both experiments, observers were required to discriminate on any given trial which of two bending rods was more elastic. In experiment 1, both rods bent within the same oriented plane, which was either a frontoparallel plane or a plane oriented in depth. In experiment 2, the two rods within any given trial bent in different, randomly chosen orientations in depth. The results of both experiments revealed that human observers are sensitive to, and can reliably detect, relatively small differences in bending (the average Weber fraction across experiments 1 and 2 was 9.0%). The performance of the human observers was compared to that of models that based their elasticity judgments upon either static projected curvature or mean and maximal projected speed. Despite the fact that all of the observers reported compelling 3-D perceptions of bending in depth, their judgments were both qualitatively and quantitatively consistent with the performance of the models. This similarity suggests that relatively straightforward information about the elasticity of simple bending objects is available in projected retinal images.
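A Weber fraction of 9.0% means observers needed roughly a 9% difference in elasticity to discriminate the rods reliably. The sketch below (synthetic responses, assumed threshold, not the study's data) shows one standard way such a threshold is recovered, by fitting a cumulative-Gaussian psychometric function to two-alternative discrimination data:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

rng = np.random.default_rng(2)

# Relative elasticity differences tested, (E_test - E_ref) / E_ref.
deltas = np.array([-0.20, -0.12, -0.06, 0.0, 0.06, 0.12, 0.20])
n_trials = 60
true_sigma = 0.09                          # assumed underlying Weber fraction
p_true = norm.cdf(deltas / true_sigma)     # P("test judged more elastic")
k_resp = rng.binomial(n_trials, p_true)    # simulated responses per level

def psychometric(x, sigma):
    """Cumulative-Gaussian psychometric function with threshold sigma."""
    return norm.cdf(x / sigma)

(sigma_hat,), _ = curve_fit(psychometric, deltas, k_resp / n_trials, p0=[0.1])
print(f"estimated discrimination threshold (Weber fraction): {sigma_hat:.3f}")
```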
Numerical experiments with a wind- and buoyancy-driven two-and-a-half-layer upper ocean model
NASA Astrophysics Data System (ADS)
Cherniawsky, J. Y.; Yuen, C. W.; Lin, C. A.; Mysak, L. A.
1990-09-01
We describe numerical experiments with a limited-domain (15°-67°N, 65° west to east), coarse-resolution, two-and-a-half-layer upper ocean model. The model consists of two active variable-density layers: a Niiler and Kraus (1977) type mixed layer and a pycnocline layer, which overlies a semipassive deep ocean. The mixed layer is forced with a cosine wind stress and Haney-type heat and precipitation-evaporation fluxes, which were derived from zonally averaged climatological (Levitus, 1982) surface temperatures and salinities for the North Atlantic. The second layer is forced from below with (1) Newtonian cooling to climatological temperatures and salinities at the lower boundary, (2) convective adjustment, which occurs whenever the density of the second layer is unstable with respect to climatology, and (3) mass entrainment in areas of strong upwelling, when the deep ocean ventilates through the bottom surface. The sensitivity of this model to changes in its internal (mixed layer) and external (e.g., a Newtonian coupling coefficient) parameters is investigated and compared to the results from a control experiment. We find that the model is not overly sensitive to changes in most of the parameters that were tested, although these results may depend to some extent on the choice of the control experiment.
Rössler, Wulf; Ajdacic-Gross, Vladeta; Rodgers, Stephanie; Haker, Helene; Müller, Mario
2016-04-01
Childhood trauma is a risk factor for the onset of schizophrenic psychosis. Because the psychosis phenotype can be described as a continuum with varying levels of severity and persistence, childhood trauma might likewise increase the risk for psychotic experiences below the diagnostic threshold. But the impact of stressful experiences depends upon their subjective appraisal. Therefore, varying degrees of stress sensitivity may mediate how childhood trauma ultimately affects the occurrence of subclinical psychotic experiences. We investigated this research question in a representative community cohort of 1500 participants. A questionnaire comprising five domains - physical and emotional neglect, as well as physical, emotional, and sexual abuse - was used to assess childhood trauma. Based on different symptoms of subclinical psychotic experiences, we conducted a latent profile analysis (LPA) to derive distinct profiles for such experiences. Path modeling was performed to identify the direct and indirect (via stress sensitivity) pathways from childhood trauma to subclinical psychotic experiences. The LPA revealed four classes - unaffected, anomalous perceptions, odd beliefs and behavior, and combined anomalous perceptions/odd beliefs and behavior - which were all linked to childhood trauma, with the exception of sexual abuse. Moreover, except for physical abuse, childhood trauma was significantly associated with stress sensitivity. Thus, our results revealed that the pathways from emotional neglect/abuse and physical neglect to subclinical psychotic experiences were mediated by stress sensitivity. In conclusion, subclinical psychotic experiences are affected by childhood traumatic experiences in particular through the pathway of heightened subjective stress appraisal.
Rejection Sensitivity, Jealousy, and the Relationship to Interpersonal Aggression.
Murphy, Anna M; Russell, Gemma
2018-07-01
The development and maintenance of interpersonal relationships lead individuals to risk rejection in the pursuit of acceptance. Some individuals are predisposed to experience a hypersensitivity to rejection that is hypothesized to be related to jealous and aggressive reactions within interpersonal relationships. The current study used convenience sampling to recruit 247 young adults to evaluate the relationship between rejection sensitivity, jealousy, and aggression. A mediation model was used to test three hypotheses: higher rejection sensitivity scores would be positively correlated with higher aggression scores (Hypothesis 1); higher rejection sensitivity scores would be positively correlated with higher jealousy scores (Hypothesis 2); jealousy would mediate the relationship between rejection sensitivity and aggression (Hypothesis 3). Study results suggest a tendency for individuals with high rejection sensitivity to experience higher levels of jealousy, and subsequently have a greater propensity for aggression, than individuals with low rejection sensitivity. Future research that substantiates a link between hypersensitivity to rejection, jealousy, and aggression may provide an avenue for prevention, education, or intervention in reducing aggression within interpersonal relationships.
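The mediation logic here (rejection sensitivity → jealousy → aggression) can be sketched as two ordinary least-squares regressions whose coefficients give the indirect effect a*b. The data below are synthetic and the path coefficients are assumptions, not the study's results:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 247  # sample size matching the study; the scores below are synthetic

# Standardized scores: X = rejection sensitivity, M = jealousy, Y = aggression.
x = rng.normal(0.0, 1.0, n)
m = 0.5 * x + rng.normal(0.0, 1.0, n)                 # assumed path a
y = 0.4 * m + 0.2 * x + rng.normal(0.0, 1.0, n)       # assumed paths b and c'

def ols(X, y):
    """Least-squares coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(x, m)[1]                                      # X -> M
b, c_prime = ols(np.column_stack([m, x]), y)[1:3]     # M -> Y (controlling X), direct X -> Y
print(f"a = {a:.2f}, b = {b:.2f}, c' = {c_prime:.2f}")
print(f"indirect (mediated) effect a*b = {a * b:.2f}")
```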
Seasonal hydrologic responses to climate change in the Pacific Northwest
NASA Astrophysics Data System (ADS)
Vano, Julie A.; Nijssen, Bart; Lettenmaier, Dennis P.
2015-04-01
Increased temperatures and changes in precipitation will result in fundamental changes in the seasonal distribution of streamflow in the Pacific Northwest and will have serious implications for water resources management. To better understand local impacts of regional climate change, we conducted model experiments to determine hydrologic sensitivities of annual, seasonal, and monthly runoff to imposed annual and seasonal changes in precipitation and temperature. We used the Variable Infiltration Capacity (VIC) land-surface hydrology model applied at 1/16° latitude-longitude spatial resolution over the Pacific Northwest (PNW), a scale sufficient to support analyses at the hydrologic unit code eight (HUC-8) basin level. These experiments resolve the spatial character of the sensitivity of future water supply to precipitation and temperature changes by identifying the seasons and locations where climate change will have the biggest impact on runoff. The PNW exhibited a diversity of responses, where transitional (intermediate elevation) watersheds experience the greatest seasonal shifts in runoff in response to cool season warming. We also developed a methodology that uses these hydrologic sensitivities as basin-specific transfer functions to estimate future changes in long-term mean monthly hydrographs directly from climate model output of precipitation and temperature. When principles of linearity and superposition apply, these transfer functions can provide feasible first-order estimates of the likely nature of future seasonal streamflow change without performing downscaling and detailed model simulations.
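The transfer-function idea, using precomputed hydrologic sensitivities to map climate-model precipitation and temperature changes directly onto runoff change, amounts to a first-order superposition. The elasticities and temperature sensitivities below are placeholders for illustration, not values from the PNW study:

```python
# First-order estimate of fractional runoff change per month:
#   dQ/Q ~= eps_P * dP/P + S_T * dT,
# valid only where linearity and superposition hold.
eps_P = {"Jan": 1.8, "Apr": 1.5, "Jul": 2.3}        # runoff elasticity to precipitation (assumed)
S_T = {"Jan": 0.04, "Apr": -0.06, "Jul": -0.10}     # fractional runoff change per deg C (assumed)

dP_frac, dT = 0.05, 2.0   # example projection: +5% precipitation, +2 deg C warming

for month in eps_P:
    dQ_frac = eps_P[month] * dP_frac + S_T[month] * dT
    print(f"{month}: estimated change in mean runoff ~ {100 * dQ_frac:+.0f}%")
```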
Sediment fingerprinting experiments to test the sensitivity of multivariate mixing models
NASA Astrophysics Data System (ADS)
Gaspar, Leticia; Blake, Will; Smith, Hugh; Navas, Ana
2014-05-01
Sediment fingerprinting techniques provide insight into the dynamics of sediment transfer processes and support catchment management decisions. As the questions being asked of fingerprinting datasets become increasingly complex, validation of model output and sensitivity tests are increasingly important. This study adopts an experimental approach to explore the validity and sensitivity of mixing model outputs for materials with contrasting geochemical and particle size composition. The experiments reported here focused on (i) the sensitivity of model output to different fingerprint selection procedures and (ii) the influence of source material particle size distributions on model output. Five soils with significantly different geochemistry, soil organic matter and particle size distributions were selected as experimental source materials. A total of twelve sediment mixtures were prepared in the laboratory by combining different quantified proportions of the < 63 µm fraction of the five source soils, i.e. assuming no fluvial sorting of the mixture. The geochemistry of all source and mixture samples (5 source soils and 12 mixed soils) was analysed using X-ray fluorescence (XRF). Tracer properties were selected from 18 elements for which mass concentrations were found to be significantly different between sources. Sets of fingerprint properties that discriminate target sources were selected using a range of different independent statistical approaches (e.g. Kruskal-Wallis test, Discriminant Function Analysis (DFA), Principal Component Analysis (PCA), or correlation matrix). Summary results for the use of the mixing model with the different sets of fingerprint properties for the twelve mixed soils were reasonably consistent with the known initial mixing percentages. Given the experimental nature of the work and the dry mixing of materials, geochemically conservative behavior was assumed for all elements, even for those that might be disregarded in aquatic systems (e.g. P). In general, the best fits between actual and modeled proportions were found using a set of nine tracer properties (Sr, Rb, Fe, Ti, Ca, Al, P, Si, K) derived using DFA coupled with a multivariate stepwise algorithm, with errors between actual and estimated values that did not exceed 6.7 % and goodness-of-fit (GOF) values above 94.5 %. The second set of experiments aimed to explore the sensitivity of model output to variability in the particle size of source materials, assuming that a degree of fluvial sorting of the resulting mixture took place. Most particle size correction procedures assume that grain size effects are consistent across sources and tracer properties, which is not always the case. Consequently, the < 40 µm fraction of selected soil mixtures was analysed to simulate the effect of selective fluvial transport of finer particles, and the results were compared to those for the source materials. Preliminary findings from this experiment demonstrate the sensitivity of the numerical mixing model outputs to different particle size distributions of source material and the variable impact of fluvial sorting on end member signatures used in mixing models. The results suggest that particle size correction procedures require careful scrutiny in the context of variable source characteristics.
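At its core, the mixing model in such studies is a constrained linear un-mixing problem: find non-negative source proportions, summing to one, that best reproduce the tracer concentrations measured in the mixture. A minimal sketch with synthetic tracer data follows (the tracer values and proportions are invented, and the sum-to-one constraint is imposed through a heavily weighted extra equation rather than a formal constrained solver):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(4)

# Tracer concentrations: rows = tracers (e.g., Sr, Rb, Fe, ...), columns = sources.
n_tracers, n_sources = 9, 5
A = rng.uniform(10.0, 100.0, size=(n_tracers, n_sources))

# Laboratory mixture with known proportions (summing to one), plus measurement noise.
p_true = np.array([0.40, 0.25, 0.15, 0.10, 0.10])
b = A @ p_true + rng.normal(0.0, 0.5, n_tracers)

# Non-negative least squares with the sum-to-one constraint appended as a
# heavily weighted extra row.
w = 1e3
A_aug = np.vstack([A, w * np.ones((1, n_sources))])
b_aug = np.append(b, w * 1.0)
p_hat, _ = nnls(A_aug, b_aug)

print("true proportions     :", p_true)
print("estimated proportions:", p_hat.round(3))
```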
Non-robust numerical simulations of analogue extension experiments
NASA Astrophysics Data System (ADS)
Naliboff, John; Buiter, Susanne
2016-04-01
Numerical and analogue models of lithospheric deformation provide significant insight into the tectonic processes that lead to specific structural and geophysical observations. As these two types of models contain distinct assumptions and tradeoffs, investigations drawing conclusions from both can reveal robust links between first-order processes and observations. Recent studies have focused on detailed comparisons between numerical and analogue experiments in both compressional and extensional tectonics, sometimes involving multiple lithospheric deformation codes and analogue setups. While such comparisons often show good agreement on first-order deformation styles, results frequently diverge on second-order structures, such as shear zone dip angles or spacing, and in certain cases even on first-order structures. Here, we present finite-element experiments that are designed to directly reproduce analogue "sandbox" extension experiments at the cm-scale. We use material properties and boundary conditions that are directly taken from analogue experiments and use a Drucker-Prager failure model to simulate shear zone formation in sand. We find that our numerical experiments are highly sensitive to numerous numerical parameters. For example, changes to the numerical resolution, velocity convergence parameters and elemental viscosity averaging commonly produce significant changes in first- and second-order structures accommodating deformation. The sensitivity of the numerical simulations to small parameter changes likely reflects a number of factors, including, but not limited to, high angles of internal friction assigned to sand, complex, unknown interactions between the brittle sand (used as an upper crust equivalent) and viscous silicone (lower crust), highly non-linear strain weakening processes and poor constraints on the cohesion of sand. Our numerical-analogue comparison is hampered by (a) an incomplete knowledge of the fine details of sand failure and sand properties, and (b) likely limitations to the use of a continuum Drucker-Prager model for representing shear zone formation in sand. In some cases our numerical experiments provide reasonable fits to first-order structures observed in the analogue experiments, but the numerical sensitivity to small parameter variations leads us to conclude that the numerical experiments are not robust.
Sensitivity of PBX-9502 after ratchet growth
NASA Astrophysics Data System (ADS)
Mulford, Roberta N.; Swift, Damian
2012-03-01
Ratchet growth, or irreversible thermal expansion of the TATB-based plastic-bonded explosive PBX-9502, leads to increased sensitivity, as a result of increased porosity. The observed increase of between 3.1 and 3.5 volume percent should increase sensitivity according to the published Pop-plots for PBX-9502 [1]. Because of the variable size, shape, and location of the increased porosity, the observed sensitivity of the ratchet-grown sample is less than the sensitivity of a sample pressed to the same density. Modeling of the composite, using a quasi-harmonic EOS for unreacted components [2] and a robust porosity model for variations in density [3], allowed comparison of the initiation observed in experiment with behavior modeled as a function of density. An Arrhenius model was used to describe reaction, and the EOS for products was generated using the CHEETAH code [4]. A 1-D Lagrangian hydrocode was used to model in-material gauge records and the measured turnover to detonation, predicting greater sensitivity to density than observed for ratchet-grown material. This observation is consistent with gauge records indicating intermittent growth of the reactive wave, possibly due to inhomogeneities in density, as observed in SEM images of the material [5].
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kovilakam, Mahesh; Mahajan, Salil; Saravanan, R.
2017-09-13
Here, we alleviate the bias in the tropospheric vertical distribution of black carbon aerosols (BC) in the Community Atmosphere Model (CAM4) using the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO)-derived vertical profiles. A suite of sensitivity experiments are conducted with 1x, 5x, and 10x the present-day model-estimated BC concentration climatology, with (corrected, CC) and without (uncorrected, UC) the CALIPSO-corrected BC vertical distribution. The globally averaged top-of-the-atmosphere radiative flux perturbation of the CC experiments is ~8–50% smaller compared to the uncorrected (UC) BC experiments, largely due to an increase in low-level clouds. The global average surface temperature increases, the global average precipitation decreases, and the ITCZ moves northward with the increase in BC radiative forcing, irrespective of the vertical distribution of BC. Further, tropical expansion metrics for the poleward extent of the Northern Hemisphere Hadley cell (HC) indicate that simulated HC expansion is not sensitive to existing model biases in BC vertical distribution.
Sensitivity experiments with a one-dimensional coupled plume - iceflow model
NASA Astrophysics Data System (ADS)
Beckmann, Johanna; Perette, Mahé; Alexander, David; Calov, Reinhard; Ganopolski, Andrey
2016-04-01
Over the last few decades, the Greenland Ice Sheet mass balance has become increasingly negative, caused by enhanced surface melting and speedup of the marine-terminating outlet glaciers at the ice sheet margins. Glacier speedup has been related, among other factors, to enhanced submarine melting, which in turn is caused by warming of the surrounding ocean and, less obviously, by increased subglacial discharge. While ice-ocean processes potentially play an important role in recent and future mass balance changes of the Greenland Ice Sheet, they remain poorly understood physically. In this work we performed numerical experiments with a one-dimensional plume model coupled to a one-dimensional iceflow model. First, we investigated the sensitivity of the submarine melt rate to changes in ocean properties (ocean temperature and salinity), to the amount of subglacial discharge, and to the geometry of the glacier tongue itself. A second set of experiments investigates the response of the coupled model, i.e. the dynamical response of the outlet glacier to altered submarine melt, which results in a new glacier geometry and updated melt rates.
NASA Astrophysics Data System (ADS)
Rasa, Ehsan; Foglia, Laura; Mackay, Douglas M.; Scow, Kate M.
2013-11-01
Conservative tracer experiments can provide information useful for characterizing various subsurface transport properties. This study examines the effectiveness of three different types of transport observations for sensitivity analysis and parameter estimation of a three-dimensional site-specific groundwater flow and transport model: conservative tracer breakthrough curves (BTCs), first temporal moments of BTCs (m1), and tracer cumulative mass discharge (Md) through control planes combined with hydraulic head observations (h). High-resolution data obtained from a 410-day controlled field experiment at Vandenberg Air Force Base, California (USA), have been used. In this experiment, bromide was injected to create two adjacent plumes monitored at six different transects (perpendicular to groundwater flow) with a total of 162 monitoring wells. A total of 133 different observations of transient hydraulic head, 1,158 of BTC concentration, 23 of first moment, and 36 of mass discharge were used for sensitivity analysis and parameter estimation of nine flow and transport parameters. The importance of each group of transport observations in estimating these parameters was evaluated using sensitivity analysis, and five out of nine parameters were calibrated against these data. Results showed the advantages of using the temporal moments of conservative tracer BTCs and mass discharge as observations for inverse modeling.
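The two derived observation types can be computed directly from a breakthrough curve: the first temporal moment is the concentration-weighted mean arrival time, and cumulative mass discharge is the flux-weighted time integral of concentration across the control plane. A minimal sketch with a synthetic BTC (the curve shape and flux value are assumptions):

```python
import numpy as np

def trap(y, x):
    """Simple trapezoidal integration (kept explicit for clarity)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Synthetic breakthrough curve at a monitoring transect.
t = np.linspace(0.0, 100.0, 501)                  # days
c = np.exp(-(t - 40.0) ** 2 / (2.0 * 8.0 ** 2))   # mg/L, Gaussian-shaped BTC
q = 2.0                                           # assumed water flux through the plane, m3/day

# Zeroth and first temporal moments of the BTC.
m0 = trap(c, t)                     # mg*day/L
m1 = trap(t * c, t) / m0            # days: mean tracer arrival time

# Cumulative mass discharge through the control plane.
Md = trap(q * c, t)                 # concentration * volume units (convert as needed)

print(f"first temporal moment m1 = {m1:.1f} days")
print(f"cumulative mass discharge Md = {Md:.1f} (mg/L * m3)")
```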
Siongco, Angela Cheska; Hohenegger, Cathy; Stevens, Bjorn
2017-02-09
A realistic simulation of the tropical Atlantic precipitation distribution remains a challenge for atmospheric general circulation models, owing to their coarse resolution, which makes it necessary to parameterize convection. During boreal summer, models tend to underestimate the northward shift of the tropical Atlantic rain belt, leading to deficient precipitation over land and an anomalous precipitation maximum over the west Atlantic Ocean. In this study, the model ECHAM6 is used to test the sensitivity of the precipitation biases to convective parameterization and horizontal resolution. Two sets of sensitivity experiments are performed. In the first set of experiments, modifications are applied to the convection scheme in order to investigate the relative roles of the trigger, entrainment, and closure formulations. In the second set, the model is run at high resolution with low-resolution boundary conditions in order to identify the relative contributions of a high-resolution atmosphere, orography, and surface. Results show that the dry bias over land in the model can be reduced by weakening the entrainment rate over land. Over the ocean, it is found that the anomalous precipitation maximum occurs because of model choices that decrease the sensitivity of convection to the monsoon circulation in the east Atlantic. A reduction of the west Atlantic precipitation bias can be achieved by (i) using a moisture convergence closure, (ii) increasing the resolution of orography, or (iii) enhancing the production of deep convection in the east Atlantic. As a result, the biases over land and over ocean do not impact each other.
2012-06-02
regional climate model downscaling, J. Geophys. Res., 117, D11103, doi:10.1029/2012JD017692. 1. Introduction [2] Modeling studies and data analyses ... based on ground and satellite data have demonstrated that the land surface state variables, such as soil moisture, snow, vegetation, and soil temperature ... downscaling rather than simply applying reanalysis data as LBC for both Eta control and sensitivity experiments, as done in many RCM sensitivity studies
Hayes, Brett K; Stephens, Rachel G; Ngo, Jeremy; Dunn, John C
2018-02-01
Three experiments examined the number of qualitatively different processing dimensions needed to account for inductive and deductive reasoning. In each study, participants were presented with arguments that varied in logical validity and consistency with background knowledge (believability), and evaluated them according to deductive criteria (whether the conclusion was necessarily true given the premises) or inductive criteria (whether the conclusion was plausible given the premises). We examined factors including working memory load (Experiments 1 and 2), individual working memory capacity (Experiments 1 and 2), and decision time (Experiment 3), which, according to dual-processing theories, modulate the contribution of heuristic and analytic processes to reasoning. A number of empirical dissociations were found. Argument validity affected deduction more than induction. Argument believability affected induction more than deduction. Lower working memory capacity reduced sensitivity to argument validity and increased sensitivity to argument believability, especially under induction instructions. Reduced decision time led to decreased sensitivity to argument validity. State-trace analyses of each experiment, however, found that only a single underlying dimension was required to explain patterns of inductive and deductive judgments. These results show that the dissociations, which have traditionally been seen as supporting dual-processing models of reasoning, are consistent with a single-process model that assumes a common evidentiary scale for induction and deduction.
Model-independent comparison of annual modulation and total rate with direct detection experiments
NASA Astrophysics Data System (ADS)
Kahlhoefer, Felix; Reindl, Florian; Schäffner, Karoline; Schmidt-Hoberg, Kai; Wild, Sebastian
2018-05-01
The relative sensitivity of different direct detection experiments depends sensitively on the astrophysical distribution and particle physics nature of dark matter, prohibiting a model-independent comparison. The situation changes fundamentally if two experiments employ the same target material. We show that in this case one can compare measurements of an annual modulation and exclusion bounds on the total rate while making no assumptions on astrophysics and no (or only very general) assumptions on particle physics. In particular, we show that the dark matter interpretation of the DAMA/LIBRA signal can be conclusively tested with COSINUS, a future experiment employing the same target material. We find that if COSINUS excludes a dark matter scattering rate of about 0.01 kg⁻¹ day⁻¹ with an energy threshold of 1.8 keV and resolution of 0.2 keV, it will rule out all explanations of DAMA/LIBRA in terms of dark matter scattering off sodium and/or iodine.
On the use of through-fall exclusion experiments to filter model hypotheses.
NASA Astrophysics Data System (ADS)
Fisher, R.
2015-12-01
One key threat to the continued existence of large tropical forest carbon reservoirs is the increasing severity of drought across Amazonian forests, observed in climate model predictions, in recent extreme drought events, and in the more chronic lengthening of the dry season of southeastern Amazonia. Model comprehension of these systems is in its infancy, particularly with regard to the sensitivities of model output to the representation of hydraulic strategies in tropical forest systems. Here we use data from the ongoing 14-year-old Caxiuana through-fall exclusion experiment in eastern Brazil to filter a set of representations of the costs and benefits of alternative hydraulic strategies. In representations where there is a high resource cost to hydraulic resilience, the trait-filtering CLM4.5(ED) model selects vegetation types that are sensitive to drought. Conversely, where drought tolerance is inexpensive, a more robust ecosystem emerges from the vegetation dynamics prediction. Thus, trait trade-off relationships have an impact on rainforest drought tolerance. It is possible to constrain the more realistic scenarios using outputs from the drought experiments. Better prediction would likely result from a more comprehensive understanding of the costs and benefits of alternative plant strategies.
Long-term sensitivity of soil carbon turnover to warming.
Knorr, W; Prentice, I C; House, J I; Holland, E A
2005-01-20
The sensitivity of soil carbon to warming is a major uncertainty in projections of carbon dioxide concentration and climate. Experimental studies overwhelmingly indicate increased soil organic carbon (SOC) decomposition at higher temperatures, resulting in increased carbon dioxide emissions from soils. However, recent findings have been cited as evidence against increased soil carbon emissions in a warmer world. In soil warming experiments, the initially increased carbon dioxide efflux returns to pre-warming rates within one to three years, and apparent carbon pool turnover times are insensitive to temperature. It has already been suggested that the apparent lack of temperature dependence could be an artefact due to neglecting the extreme heterogeneity of soil carbon, but no explicit model has yet been presented that can reconcile all the above findings. Here we present a simple three-pool model that partitions SOC into components with different intrinsic turnover rates. Using this model, we show that the results of all the soil-warming experiments are compatible with long-term temperature sensitivity of SOC turnover: they can be explained by rapid depletion of labile SOC combined with the negligible response of non-labile SOC on experimental timescales. Furthermore, we present evidence that non-labile SOC is more sensitive to temperature than labile SOC, implying that the long-term positive feedback of soil decomposition in a warming world may be even stronger than predicted by global models.
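The behaviour the authors describe, a transient CO2 pulse driven by rapid depletion of labile carbon while slower pools barely respond on experimental timescales, can be reproduced with a minimal three-pool model with Q10 temperature scaling. The pool sizes, turnover times, and Q10 values below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

# Three SOC pools with very different intrinsic turnover times (years).
tau = np.array([1.0, 30.0, 1000.0])     # labile, intermediate, passive (assumed)
stock = np.array([0.5, 5.0, 50.0])      # kg C m-2 (assumed)
q10 = np.array([2.0, 2.2, 2.5])         # assumed; slower pools given higher sensitivity
inputs = stock / tau                    # litter inputs balancing decay at equilibrium

def respiration(stock, warming):
    """Decay fluxes with warming-scaled first-order rate constants."""
    k = (1.0 / tau) * q10 ** (warming / 10.0)
    return k * stock

dt, years, warming = 0.1, 30, 4.0       # time step (yr), run length, step warming (deg C)
flux = []
for _ in range(int(years / dt)):
    r = respiration(stock, warming)
    stock = stock + dt * (inputs - r)   # pools relax toward a new, smaller equilibrium
    flux.append(r.sum())

print(f"soil CO2 efflux just after warming: {flux[0]:.3f} kg C m-2 yr-1")
print(f"soil CO2 efflux after {years} years: {flux[-1]:.3f} kg C m-2 yr-1")
```

With these assumed values the efflux spikes immediately after warming and then relaxes back within a few years as the labile pool is depleted, even though all pools remain temperature sensitive in the long run.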
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Guoping; Mayes, Melanie; Parker, Jack C
2010-01-01
We implemented the widely used CXTFIT code in Excel to provide flexibility and added sensitivity and uncertainty analysis functions to improve transport parameter estimation and to facilitate model discrimination for multi-tracer experiments on structured soils. Analytical solutions for one-dimensional equilibrium and nonequilibrium convection-dispersion equations were coded as VBA functions so that they could be used as ordinary math functions in Excel for forward predictions. Macros with user-friendly interfaces were developed for optimization, sensitivity analysis, uncertainty analysis, error propagation, response surface calculation, and Monte Carlo analysis. As a result, any parameter with transformations (e.g., dimensionless, log-transformed, species-dependent reactions, etc.) could be estimated with uncertainty and sensitivity quantification for multiple tracer data at multiple locations and times. Prior information and observation errors could be incorporated into the weighted nonlinear least squares method with a penalty function. Users are able to change selected parameter values and view the results via embedded graphics, resulting in a flexible tool applicable to modeling transport processes and to teaching students about parameter estimation. The code was verified by comparison with a number of benchmarks from CXTFIT 2.0. It was applied to improve parameter estimation for four typical tracer experiment data sets in the literature using multi-model evaluation and comparison. Additional examples were included to illustrate the flexibility and advantages of CXTFIT/Excel. The VBA macros were designed for general-purpose use and could be applied to any parameter estimation/model calibration problem in which the forward solution is implemented in Excel. A step-by-step tutorial, example Excel files, and the code are provided as supplemental material.
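The core of such a tool is an analytical forward solution of the convection-dispersion equation wrapped in a nonlinear least-squares fit. The sketch below uses Python rather than VBA and the classic Ogata-Banks-type solution for a continuous step input; the observation distance, parameter values, and noise level are assumptions for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

x = 30.0  # observation distance (cm), assumed

def cde_step(t, v, D):
    """1-D equilibrium convection-dispersion equation, continuous step input
    (Ogata-Banks form); returns relative concentration C/C0 at distance x."""
    t = np.maximum(t, 1e-9)
    a = (x - v * t) / (2.0 * np.sqrt(D * t))
    b = (x + v * t) / (2.0 * np.sqrt(D * t))
    return 0.5 * (erfc(a) + np.exp(np.minimum(v * x / D, 700.0)) * erfc(b))

# Synthetic "observed" breakthrough data standing in for a tracer experiment.
rng = np.random.default_rng(5)
t_obs = np.linspace(1.0, 120.0, 40)                        # hours
c_obs = cde_step(t_obs, v=0.5, D=1.2) + rng.normal(0.0, 0.01, t_obs.size)

# Nonlinear least-squares estimation of pore-water velocity v and dispersion D.
(v_hat, D_hat), cov = curve_fit(cde_step, t_obs, c_obs, p0=[0.3, 1.0],
                                bounds=([1e-6, 1e-6], [5.0, 50.0]))
v_err, D_err = np.sqrt(np.diag(cov))
print(f"v = {v_hat:.3f} +/- {v_err:.3f} cm/h, D = {D_hat:.3f} +/- {D_err:.3f} cm2/h")
```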
Decisions reduce sensitivity to subsequent information.
Bronfman, Zohar Z; Brezis, Noam; Moran, Rani; Tsetsos, Konstantinos; Donner, Tobias; Usher, Marius
2015-07-07
Behavioural studies over half a century indicate that making categorical choices alters beliefs about the state of the world. People seem biased to confirm previous choices and to suppress contradicting information. These choice-dependent biases imply a fundamental bound on human rationality. However, it remains unclear whether these effects extend to lower-level decisions, and little is known about the computational mechanisms underlying them. Building on the framework of sequential-sampling models of decision-making, we developed novel psychophysical protocols that enable us to dissect quantitatively how choices affect the way decision-makers accumulate additional noisy evidence. We find robust choice-induced biases in the accumulation of abstract numerical (experiment 1) and low-level perceptual (experiment 2) evidence. These biases deteriorate estimations of the mean value of the numerical sequence (experiment 1) and reduce the likelihood of revising decisions (experiment 2). Computational modelling reveals that choices trigger a reduction of sensitivity to subsequent evidence via multiplicative gain modulation, rather than shifting the decision variable towards the chosen alternative in an additive fashion. Our results thus show that categorical choices alter the evidence accumulation mechanism itself, rather than just its outcome, rendering the decision-maker less sensitive to new information.
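The computational claim, that an interim choice multiplicatively reduces the gain applied to later evidence rather than adding a constant pull toward the chosen option, can be contrasted in a toy sequential-sampling simulation. The drift, noise, gain, and bias values below are arbitrary illustrative choices, not fitted model parameters:

```python
import numpy as np

rng = np.random.default_rng(6)

def accumulate(samples, gain_after=1.0, bias_after=0.0, t_choice=10):
    """Accumulate noisy evidence; at t_choice an interim categorical choice is
    made, after which later samples are scaled (gain) and/or a constant pull
    toward the chosen option is added (bias)."""
    dv = float(np.sum(samples[:t_choice]))
    choice = 1.0 if dv >= 0.0 else -1.0
    for s in samples[t_choice:]:
        dv += gain_after * s + bias_after * choice
    return dv, choice

# Weak positive drift plus noise; 20 evidence samples per trial.
trials = 5000
samples = 0.05 + rng.normal(0.0, 1.0, size=(trials, 20))

for label, gain, bias in [("no modulation ", 1.0, 0.0),
                          ("gain reduction", 0.5, 0.0),
                          ("additive bias ", 1.0, 0.1)]:
    revised = []
    for s in samples:
        final, choice = accumulate(s, gain, bias)
        revised.append(np.sign(final) != choice)
    print(f"{label}: P(revise interim choice) = {np.mean(revised):.3f}")
```

Both mechanisms lower the revision rate in this toy setup; distinguishing them empirically requires examining how sensitivity to late evidence scales with its strength, which is what the psychophysical protocols described above were designed to probe.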
Chromatic detection from cone photoreceptors to V1 neurons to behavior in rhesus monkeys.
Hass, Charles A; Angueyra, Juan M; Lindbloom-Brown, Zachary; Rieke, Fred; Horwitz, Gregory D
2015-01-01
Chromatic sensitivity cannot exceed limits set by noise in the cone photoreceptors. To determine how close neurophysiological and psychophysical chromatic sensitivity come to these limits, we developed a parameter-free model of stimulus encoding in the cone outer segments, and we compared the sensitivity of the model to the psychophysical sensitivity of monkeys performing a detection task and to the sensitivity of individual V1 neurons. Modeled cones had a temporal impulse response and a noise power spectrum that were derived from in vitro recordings of macaque cones, and V1 recordings were made during performance of the detection task. The sensitivity of the simulated cone mosaic, the V1 neurons, and the monkeys were tightly yoked for low-spatiotemporal-frequency isoluminant modulations, indicating high-fidelity signal transmission for this class of stimuli. Under the conditions of our experiments and the assumptions for our model, the signal-to-noise ratio for these stimuli dropped by a factor of ∼3 between the cones and perception. Populations of weakly correlated V1 neurons narrowly exceeded the monkeys' chromatic sensitivity but fell well short of the cones' chromatic sensitivity, suggesting that most of the behavior-limiting noise lies between the cone outer segments and the output of V1. The sensitivity gap between the cones and behavior for achromatic stimuli was larger than for chromatic stimuli, indicating greater postreceptoral noise. The cone mosaic model provides a means to compare visual sensitivity across disparate stimuli and to identify sources of noise that limit visual sensitivity.
Experimental and modeling studies of small molecule chemistry in expanding spherical flames
NASA Astrophysics Data System (ADS)
Santner, Jeffrey
Accurate models of flame chemistry are required in order to predict emissions and flame properties, so that clean, efficient engines can be designed more easily. There are three primary methods used to improve such combustion chemistry models - theoretical reaction rate calculations, elementary reaction rate experiments, and combustion system experiments. This work contributes to model improvement through the third method - measurements and analysis of the laminar burning velocity at constraining conditions. Modern combustion systems operate at high pressure with strong exhaust gas dilution in order to improve efficiency and reduce emissions. Additionally, flames under these conditions are sensitized to elementary reaction rates, such that measurements constrain modeling efforts. The measurement conditions of the present work lie within this intersection between applications and fundamental science. Experiments utilize a new pressure-release, heated spherical combustion chamber with a variety of fuels (high-hydrogen-content fuels, formaldehyde (via 1,3,5-trioxane), and C2 fuels) at pressures from 0.5-25 atm, often with dilution by water vapor or carbon dioxide to flame temperatures below 2000 K. The constraining ability of these measurements depends on their uncertainty. Thus, the present work includes a novel analytical estimate of the effects of thermal radiative heat loss on burning velocity measurements in spherical flames. For 1,3,5-trioxane experiments, global measurements are sufficiently sensitive to elementary reaction rates that optimization techniques are employed to indirectly measure the reaction rates of HCO consumption. Besides the influence of flame chemistry on propagation, this work also explores the chemistry involved in the production of nitric oxide, a harmful pollutant, within flames. We find significant differences among available chemistry models, both in mechanistic structure and in quantitative reaction rates. There is a lack of well-defined measurements of nitric oxide formation at high temperatures, contributing to disagreement between chemical models. This work accomplishes several goals. It identifies disagreements in pollutant formation chemistry. It creates a novel database of burning velocity measurements at relevant, sensitive conditions. It presents a simple, conservative estimate of radiation-induced measurement uncertainty in spherical flames. Finally, it utilizes systems-level flame experiments to indirectly measure elementary reaction rates.
New Directions in the Study of Early Experience.
ERIC Educational Resources Information Center
Bertenthal, Bennett I; Campos, Joseph J.
1987-01-01
Reviews Greenough, Black, and Wallace's (1987) conceptual framework for understanding the effects of early experience and sensitive periods on development, and illustrates the applicability of their model with recent data on the consequences for animals and human infants of the acquisition of self-produced locomotion. (BN)
Analyzing the Discovery Potential for Light Dark Matter.
Izaguirre, Eder; Krnjaic, Gordan; Schuster, Philip; Toro, Natalia
2015-12-18
In this Letter, we determine the present status of sub-GeV thermal dark matter annihilating through standard model mixing, with special emphasis on interactions through the vector portal. Within representative simple models, we carry out a complete and precise calculation of the dark matter abundance and of all available constraints. We also introduce a concise framework for comparing different experimental approaches, and use this comparison to identify important ranges of dark matter mass and couplings to better explore in future experiments. The requirement that dark matter be a thermal relic sets a sharp sensitivity target for terrestrial experiments, and so we highlight complementary experimental approaches that can decisively reach this milestone sensitivity over the entire sub-GeV mass range.
Gooseff, M.N.; Bencala, K.E.; Scott, D.T.; Runkel, R.L.; McKnight, Diane M.
2005-01-01
The transient storage model (TSM) has been widely used in studies of stream solute transport and fate, with an increasing emphasis on reactive solute transport. In this study we perform sensitivity analyses of a conservative TSM and two different reactive solute transport models (RSTMs), one that includes first-order decay in the stream and the storage zone, and a second that considers sorption of a reactive solute on streambed sediments. Two previously analyzed data sets are examined with a focus on the reliability of these RSTMs in characterizing stream and storage zone solute reactions. Sensitivities of simulations to parameters within and among reaches, parameter coefficients of variation, and correlation coefficients are computed and analyzed. Our results indicate that (1) simulated values have the greatest sensitivity to parameters within the same reach, (2) simulated values are also sensitive to parameters in reaches immediately upstream and downstream (inter-reach sensitivity), (3) simulated values have decreasing sensitivity to parameters in reaches farther downstream, and (4) in-stream reactive solute data provide adequate information to resolve effective storage zone reaction parameters, given the model formulations. Simulations of reactive solutes are shown to be equally sensitive to the transport parameters and the effective reaction parameters of the model, evidence of the control of physical transport on reactive solute dynamics. Similar to conservative transport analysis, reactive solute simulations appear to be most sensitive to data collected during the rising and falling limbs of the concentration breakthrough curve.
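A minimal sketch of how such parameter sensitivities can be obtained is given below, using a single-cell simplification of the transient storage exchange equations (main-channel flushing rate Q/V, exchange coefficient alpha, and cross-sectional area ratio A/As) and finite-difference perturbations; the parameter values and injection are assumptions, and the full OTIS-style advection-dispersion routing across multiple reaches is omitted:

```python
import numpy as np

def tsm_btc(params, t, c_in):
    """Single-cell transient storage model: main-channel concentration c and
    storage-zone concentration cs, driven by an upstream input c_in(t)."""
    q_over_v, alpha, a_over_as = params
    c, cs = 0.0, 0.0
    out = np.empty_like(t)
    dt = t[1] - t[0]
    for i, ci in enumerate(c_in):
        dc = q_over_v * (ci - c) + alpha * (cs - c)
        dcs = alpha * a_over_as * (c - cs)
        c, cs = c + dt * dc, cs + dt * dcs
        out[i] = c
    return out

t = np.arange(0.0, 24.0, 0.01)                     # hours
c_in = np.where((t > 1.0) & (t < 3.0), 1.0, 0.0)   # 2-hour upstream pulse
base = np.array([0.8, 0.15, 2.0])                  # [Q/V, alpha, A/As] per hour (assumed)
names = ["Q/V", "alpha", "A/As"]

c0 = tsm_btc(base, t, c_in)
for j, name in enumerate(names):
    p = base.copy()
    p[j] *= 1.01                                   # 1% perturbation
    sens = (tsm_btc(p, t, c_in) - c0) / (0.01 * base[j])   # finite-difference dC/dp
    print(f"max |sensitivity| of in-stream C to {name}: {np.max(np.abs(sens)):.3f}")
```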
Assessment of Forecast Sensitivity to Observation and Its Application to Satellite Radiances
NASA Astrophysics Data System (ADS)
Ide, K.
2017-12-01
Forecast sensitivity to observations provides a practical and useful metric for assessing observation impact without conducting computationally intensive data-denial experiments. Quite often, complex data assimilation systems use a simplified version of the forecast sensitivity formulation based on ensembles. In this talk, we first present a comparison of forecast sensitivity for 4DVar, Hybrid-4DEnVar, and 4DEnKF, with and without such simplifications, using a highly nonlinear model. We then present results of ensemble forecast sensitivity to satellite radiance observations for Hybrid-4DEnVar using NOAA's Global Forecast System.
NASA Astrophysics Data System (ADS)
Christian, Kenneth E.; Brune, William H.; Mao, Jingqiu; Ren, Xinrong
2018-02-01
Making sense of modeled atmospheric composition requires not only comparison to in situ measurements but also knowing and quantifying the sensitivity of the model to its input factors. Using a global sensitivity method involving the simultaneous perturbation of many chemical transport model input factors, we find the model uncertainty for ozone (O3), hydroxyl radical (OH), and hydroperoxyl radical (HO2) mixing ratios, and apportion this uncertainty to specific model inputs for the DC-8 flight tracks corresponding to the NASA Intercontinental Chemical Transport Experiment (INTEX) campaigns of 2004 and 2006. In general, when uncertainties in modeled and measured quantities are accounted for, we find agreement between modeled and measured oxidant mixing ratios with the exception of ozone during the Houston flights of the INTEX-B campaign and HO2 for the flights over the northernmost Pacific Ocean during INTEX-B. For ozone and OH, modeled mixing ratios were most sensitive to a bevy of emissions, notably lightning NOx, various surface NOx sources, and isoprene. HO2 mixing ratios were most sensitive to CO and isoprene emissions as well as the aerosol uptake of HO2. With ozone and OH being generally overpredicted by the model, we find better agreement between modeled and measured vertical profiles when reducing NOx emissions from surface as well as lightning sources.
A clinical perspective on a pain neuroscience education approach to manual therapy.
Louw, Adriaan; Nijs, Jo; Puentedura, Emilio J
2017-07-01
In recent years, there has been an increased interest in pain neuroscience education (PNE) in physical therapy. There is growing evidence for the efficacy of PNE to decrease pain, disability, fear-avoidance, pain catastrophization, limited movement, and health care utilization in people struggling with pain. PNE teaches people in pain more about the biology and physiology of their pain experience, including processes such as central sensitization, peripheral sensitization, allodynia, inhibition, facilitation, neuroplasticity, and more. PNE's neurobiological model often finds itself at odds with traditional biomedical models used in physical therapy. Traditional biomedical models, focusing on anatomy, pathoanatomy, and biomechanics, have been shown to have limited efficacy in helping people understand their pain, especially chronic pain, and may in fact increase a person's pain experience by increasing fear-avoidance and pain catastrophization. An area of physical therapy where the biomedical model is used heavily is manual therapy. This contrast between PNE and manual therapy has seemingly polarized followers of each approach, leading some to see PNE as a 'hands-off' approach and even to categorize patients as needing either PNE (with no hands-on treatment) or hands-on treatment with no PNE. In this paper, we explore the notion of PNE and manual therapy co-existing. PNE research has shown immediate effects on various clinical signs and symptoms associated with central sensitization. Using a model of sensitization (innocuous, noxious, and allodynia), we argue that PNE can be used within a manual therapy model, especially when treating someone whose nervous system has become increasingly hypervigilant. Level of Evidence: VII.
NASA Astrophysics Data System (ADS)
Gorringe, T. P.; Hertzog, D. W.
2015-09-01
The muon is playing a unique role in sub-atomic physics. Studies of muon decay both determine the overall strength and establish the chiral structure of weak interactions, as well as setting extraordinary limits on charged-lepton-flavor-violating processes. Measurements of the muon's anomalous magnetic moment offer singular sensitivity to the completeness of the standard model and the predictions of many speculative theories. Spectroscopy of muonium and muonic atoms gives unmatched determinations of fundamental quantities including the magnetic moment ratio μμ/μp, lepton mass ratio mμ/me, and proton charge radius rp. Also, muon capture experiments are exploring elusive features of weak interactions involving nucleons and nuclei. We will review the experimental landscape of contemporary high-precision and high-sensitivity experiments with muons. One focus is the novel methods and ingenious techniques that achieve such precision and sensitivity in recent, present, and planned experiments. Another focus is the uncommonly broad and topical range of questions in atomic, nuclear and particle physics that such experiments explore.
Involving mental health service users in suicide-related research: a qualitative inquiry model.
Lees, David; Procter, Nicholas; Fassett, Denise; Handley, Christine
2016-03-01
To describe the research model developed and successfully deployed as part of a multi-method qualitative study investigating suicidal service-users' experiences of mental health nursing care. Quality mental health care is essential to limiting the occurrence and burden of suicide; however, there is a lack of relevant research informing practice in this context. Research utilising first-person accounts of suicidality is of particular importance to expanding the existing evidence base, but conducting such research ethically is challenging. The model discussed here illustrates specific and more generally applicable principles for qualitative research on sensitive topics involving potentially vulnerable service-users. Research with mental health service users who have first-person experience of suicidality requires stakeholder and institutional support, researcher competency, and careful attention to participant recruitment, consent, confidentiality, support, and protection. Research with service users into their experiences of sensitive issues such as suicidality can yield rich and valuable data, and may also provide positive experiences of collaboration and inclusivity. If these challenges are not met, the objectification and marginalisation of service-users may be reinforced, and limitations in the evidence base and service provision may be perpetuated.
Effects of Asymmetric Cultural Experiences on the Auditory Pathway: Evidence from Music
Wong, Patrick C. M.; Perrachione, Tyler K.; Margulis, Elizabeth Hellmuth
2009-01-01
Cultural experiences come in many different forms, such as immersion in a particular linguistic community, exposure to faces of people with different racial backgrounds, or repeated encounters with music of a particular tradition. In most circumstances, these cultural experiences are asymmetric, meaning one type of experience occurs more frequently than other types (e.g., a person raised in India will likely encounter the Indian todi scale more so than a Westerner). In this paper, we will discuss recent findings from our laboratories that reveal the impact of short- and long-term asymmetric musical experiences on how the nervous system responds to complex sounds. We will discuss experiments examining how musical experience may facilitate the learning of a tone language, how musicians develop neural circuitries that are sensitive to musical melodies played on their instrument of expertise, and how even everyday listeners who have little formal training are particularly sensitive to music of their own culture(s). An understanding of these cultural asymmetries is useful in formulating a more comprehensive model of auditory perceptual expertise that considers how experiences shape auditory skill levels. Such a model has the potential to aid in the development of rehabilitation programs for the efficacious treatment of neurologic impairments. PMID:19673772
A Sensitivity Analysis of fMRI Balloon Model.
Zayane, Chadia; Laleg-Kirati, Taous Meriem
2015-01-01
Functional magnetic resonance imaging (fMRI) allows the mapping of brain activation through measurements of the Blood Oxygenation Level Dependent (BOLD) contrast. Characterizing the pathway from the input stimulus to the output BOLD signal requires the selection of an adequate hemodynamic model and the satisfaction of some specific conditions while conducting the experiment and calibrating the model. This paper focuses on the identifiability of the Balloon hemodynamic model. By identifiability, we mean the ability to estimate the model parameters accurately given the input and the output measurements. Previous studies of the Balloon model have added prior knowledge in some form, either by choosing prior distributions for the parameters, freezing some of them, or seeking the solution as a projection onto a natural basis of some vector space. In these studies, identification was generally assessed using event-related paradigms. This paper justifies the need for adding such knowledge and for choosing certain paradigms, and complements the few existing identifiability studies through a global sensitivity analysis of the Balloon model in the case of a blocked-design experiment.
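As a minimal illustration of a sampling-based global sensitivity analysis, the sketch below screens two parameters of a gamma-shaped stand-in for a hemodynamic response (not the full Balloon ODE system) using standardized regression coefficients. The parameter names, ranges, and the peak-amplitude output are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def bold_peak(tau, alpha, t=np.linspace(0.0, 30.0, 301)):
    """Peak amplitude of a simplified gamma-like haemodynamic response.
    This is a two-parameter stand-in, not the Balloon ODE system."""
    h = t ** alpha * np.exp(-t / tau)
    return h.max()

n = 5000
tau = rng.uniform(0.8, 1.6, n)      # assumed time-constant range (illustrative)
alpha = rng.uniform(4.0, 8.0, n)    # assumed shape-parameter range (illustrative)
y = np.array([bold_peak(tv, av) for tv, av in zip(tau, alpha)])

# Standardized regression coefficients as a simple global sensitivity measure:
# regress the standardized output on the standardized inputs.
X = np.column_stack([(tau - tau.mean()) / tau.std(),
                     (alpha - alpha.mean()) / alpha.std()])
beta, *_ = np.linalg.lstsq(X, (y - y.mean()) / y.std(), rcond=None)
print(dict(zip(["tau", "alpha"], np.round(beta, 3))))
```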
Moment-based metrics for global sensitivity analysis of hydrological systems
NASA Astrophysics Data System (ADS)
Dell'Oca, Aronne; Riva, Monica; Guadagnini, Alberto
2017-12-01
We propose new metrics to assist global sensitivity analysis, GSA, of hydrological and Earth systems. Our approach allows assessing the impact of uncertain parameters on main features of the probability density function, pdf, of a target model output, y. These include the expected value of y, the spread around the mean and the degree of symmetry and tailedness of the pdf of y. Since reliable assessment of higher-order statistical moments can be computationally demanding, we couple our GSA approach with a surrogate model, approximating the full model response at a reduced computational cost. Here, we consider the generalized polynomial chaos expansion (gPCE), other model reduction techniques being fully compatible with our theoretical framework. We demonstrate our approach through three test cases, including an analytical benchmark, a simplified scenario mimicking pumping in a coastal aquifer and a laboratory-scale conservative transport experiment. Our results allow ascertaining which parameters can impact some moments of the model output pdf while being uninfluential to others. We also investigate the error associated with the evaluation of our sensitivity metrics by replacing the original system model through a gPCE. Our results indicate that the construction of a surrogate model with increasing level of accuracy might be required depending on the statistical moment considered in the GSA. The approach is fully compatible with (and can assist the development of) analysis techniques employed in the context of reduction of model complexity, model calibration, design of experiment, uncertainty quantification and risk assessment.
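The sketch below illustrates the underlying idea of moment-based sensitivity metrics: compare conditional moments of the output (given one uncertain parameter) with their unconditional counterparts. It uses plain Monte Carlo with binning on an invented analytic test function rather than a gPCE surrogate, so it mirrors the spirit of the proposed metrics, not their exact formulation.

```python
import numpy as np

rng = np.random.default_rng(2)

def model(x1, x2, x3):
    """Analytic test function standing in for a hydrological model output."""
    return x1 ** 2 + 0.5 * x2 + 0.1 * x3 + 0.3 * x1 * x2

n, nbins = 200_000, 25
X = rng.uniform(-1.0, 1.0, size=(n, 3))
y = model(X[:, 0], X[:, 1], X[:, 2])

mean_y, var_y = y.mean(), y.var()

for i, name in enumerate(["x1", "x2", "x3"]):
    edges = np.quantile(X[:, i], np.linspace(0, 1, nbins + 1))
    idx = np.clip(np.digitize(X[:, i], edges) - 1, 0, nbins - 1)
    cond_mean = np.array([y[idx == b].mean() for b in range(nbins)])
    cond_var = np.array([y[idx == b].var() for b in range(nbins)])
    # Moment-based indices: average absolute shift of the conditional
    # mean/variance away from their unconditional counterparts.
    ama_mean = np.mean(np.abs(cond_mean - mean_y)) / abs(mean_y)
    ama_var = np.mean(np.abs(cond_var - var_y)) / var_y
    print(f"{name}: mean-based index {ama_mean:.3f}, variance-based index {ama_var:.3f}")
```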
NASA Astrophysics Data System (ADS)
Riley, W. J.; Tang, J.
2014-12-01
We hypothesize that the large observed variability in decomposition temperature sensitivity and carbon use efficiency arises from interactions between temperature, microbial biogeochemistry, and mineral surface sorptive reactions. To test this hypothesis, we developed a numerical model that integrates the Dynamic Energy Budget concept for microbial physiology, microbial trait-based community structure and competition, process-specific thermodynamically based temperature sensitivity, a non-linear mineral sorption isotherm, and enzyme dynamics. We show that, because mineral surfaces interact with substrates, enzymes, and microbes, both temperature sensitivity and microbial carbon use efficiency are hysteretic and highly variable. Further, by mimicking the traditional approach to interpreting soil incubation observations, we demonstrate that the conventional labile and recalcitrant substrate characterization of temperature sensitivity is flawed. In a 4 K temperature perturbation experiment, our fully dynamic model predicted more variable but weaker carbon-climate feedbacks than did the static temperature sensitivity and carbon use efficiency model when forced with yearly, daily, and hourly variable temperatures. These results imply that current Earth system models likely over-estimate the response of soil carbon stocks to global warming.
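For reference, the conventional "apparent Q10" interpretation of incubation data that the abstract critiques reduces to a one-line calculation; the respiration rates and temperatures below are invented numbers used only to show the arithmetic.

```python
def apparent_q10(r1, r2, t1, t2):
    """Apparent Q10 from respiration rates r1, r2 measured at temperatures t1, t2 (deg C)."""
    return (r2 / r1) ** (10.0 / (t2 - t1))

# Illustrative numbers: a 4 K warming raising respiration from 1.0 to 1.35 units
print(apparent_q10(1.0, 1.35, 15.0, 19.0))   # ~2.1
```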
NASA Technical Reports Server (NTRS)
Rind, D.; Healy, R.; Parkinson, C.; Martinson, D.
1995-01-01
As a first step in investigating the effects of sea ice changes on the climate sensitivity to doubled atmospheric CO2, the authors use a standard simple sea ice model while varying the sea ice distributions and thicknesses in the control run. Thinner ice amplifies the atmospheric temperature sensitivity in these experiments by about 15% (to a warming of 4.8 C), because it is easier for the thinner ice to be removed as the climate warms. Thus, its impact on sensitivity is similar to that of greater sea ice extent in the control run, which provides more opportunity for sea ice reduction. An experiment with sea ice not allowed to change between the control and doubled CO2 simulations illustrates that the total effect of sea ice on surface air temperature changes, including cloud cover and water vapor feedbacks that arise in response to sea ice variations, amounts to 37% of the temperature sensitivity to the CO2 doubling, accounting for 1.56 C of the 4.17 C global warming. This is about four times larger than the sea ice impact when no feedbacks are allowed. The different experiments produce a range of results for southern high latitudes with the hydrologic budget over Antarctica implying sea level increases of varying magnitude or no change. These results highlight the importance of properly constraining the sea ice response to climate perturbations, necessitating the use of more realistic sea ice and ocean models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elliott, Elizabeth J.; Yu, Sungduk; Kooperman, Gabriel J.
The sensitivities of simulated mesoscale convective systems (MCSs) in the central U.S. to microphysics and grid configuration are evaluated here in a global climate model (GCM) that also permits global-scale feedbacks and variability. Since conventional GCMs do not simulate MCSs, studying their sensitivities in a global framework useful for climate change simulations has not previously been possible. To date, MCS sensitivity experiments have relied on controlled cloud resolving model (CRM) studies with limited domains, which avoid internal variability and neglect feedbacks between local convection and larger-scale dynamics. However, recent work with superparameterized (SP) GCMs has shown that eastward propagating MCS-like events are captured when embedded CRMs replace convective parameterizations. This study uses a SP version of the Community Atmosphere Model version 5 (SP-CAM5) to evaluate MCS sensitivities, applying an objective empirical orthogonal function algorithm to identify MCS-like events, and harmonizing composite storms to account for seasonal and spatial heterogeneity. A five-summer control simulation is used to assess the magnitude of internal and interannual variability relative to 10 sensitivity experiments with varied CRM parameters, including ice fall speed, one-moment and two-moment microphysics, and grid spacing. MCS sensitivities were found to be subtle with respect to internal variability, and indicate that ensembles of over 100 storms may be necessary to detect robust differences in SP-GCMs. Furthermore, these results emphasize that the properties of MCSs can vary widely across individual events, and improving their representation in global simulations with significant internal variability may require comparison to long (multidecadal) time series of observed events rather than single season field campaigns.
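The identification step rests on an empirical orthogonal function (EOF) decomposition; the sketch below shows the generic EOF/PCA step via an SVD of a synthetic space-time anomaly field. The propagating signal, grid sizes, and noise level are invented, and the study's objective identification algorithm involves more than this single step.

```python
import numpy as np

rng = np.random.default_rng(3)

ntime, nspace = 240, 60
t = np.arange(ntime)[:, None]
x = np.arange(nspace)[None, :]

# Synthetic eastward-propagating anomaly plus noise (illustrative only)
field = np.cos(2 * np.pi * (x / 30.0 - t / 12.0)) + 0.5 * rng.standard_normal((ntime, nspace))

anom = field - field.mean(axis=0)            # remove the time mean at each point
U, s, Vt = np.linalg.svd(anom, full_matrices=False)

explained = s ** 2 / np.sum(s ** 2)
eof1, pc1 = Vt[0], U[:, 0] * s[0]            # leading spatial pattern and its time series
print("variance explained by EOF1:", round(explained[0], 2))
```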
NASA Astrophysics Data System (ADS)
Roth, Aurora; Hock, Regine; Schuler, Thomas V.; Bieniek, Peter A.; Pelto, Mauri; Aschwanden, Andy
2018-03-01
Assessing and modeling precipitation in mountainous areas remains a major challenge in glacier mass balance modeling. Observations are typically scarce and reanalysis data and similar climate products are too coarse to accurately capture orographic effects. Here we use the linear theory of orographic precipitation model (LT model) to downscale winter precipitation from a regional climate model over the Juneau Icefield, one of the largest ice masses in North America (>4000 km2), for the period 1979-2013. The LT model is physically-based yet computationally efficient, combining airflow dynamics and simple cloud microphysics. The resulting 1 km resolution precipitation fields show substantially reduced precipitation on the northeastern portion of the icefield compared to the southwestern side, a pattern that is not well captured in the coarse resolution (20 km) WRF data. Net snow accumulation derived from the LT model precipitation agrees well with point observations across the icefield. To investigate the robustness of the LT model results, we perform a series of sensitivity experiments varying hydrometeor fall speeds, the horizontal resolution of the underlying grid, and the source of the meteorological forcing data. The resulting normalized spatial precipitation pattern is similar for all sensitivity experiments, but local precipitation amounts vary strongly, with greatest sensitivity to variations in snow fall speed. Results indicate that the LT model has great potential to provide improved spatial patterns of winter precipitation for glacier mass balance modeling purposes in complex terrain, but ground observations are necessary to constrain model parameters to match total amounts.
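A heavily reduced sketch of the linear-theory idea is shown below: terrain is Fourier transformed, multiplied by a transfer function containing an uplift-sensitivity factor and cloud conversion/fallout time delays, and transformed back, with negative values truncated. Vertical airflow dynamics are neglected here (the moist-layer-depth factor is dropped), and all parameter values and the Gaussian-hill terrain are assumptions; the LT model used in the study is more complete.

```python
import numpy as np

# Gaussian hill terrain on a 1 km grid (illustrative)
n, dx = 256, 1000.0
xx, yy = np.meshgrid(np.arange(n) * dx, np.arange(n) * dx)
h = 1500.0 * np.exp(-((xx - 1.28e5) ** 2 + (yy - 1.28e5) ** 2) / (2 * 2.0e4 ** 2))

U, V = 10.0, 2.0                 # background wind (m/s), assumed
Cw = 0.004                       # uplift sensitivity factor, assumed
tau_c, tau_f = 1000.0, 1000.0    # cloud conversion / fallout times (s), assumed
P_bg = 1.0e-4                    # background precipitation rate, assumed

k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
K, L = np.meshgrid(k, k)
sigma = U * K + V * L            # intrinsic frequency seen by the flow over terrain

h_hat = np.fft.fft2(h)
# Reduced LT transfer function: orographic source delayed by conversion and fallout
P_hat = (Cw * 1j * sigma * h_hat) / ((1 + 1j * sigma * tau_c) * (1 + 1j * sigma * tau_f))
P = np.maximum(np.real(np.fft.ifft2(P_hat)) + P_bg, 0.0)

print("max precipitation rate (arbitrary units):", P.max())
```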
USDA-ARS?s Scientific Manuscript database
The sensitivity of trajectories from experiments in which volumetric values of soil moisture were changed with respect to control values were analyzed during three different synoptic episodes in June 2006. The MM5 and Noah land surface models were used to simulate the response of the planetary boun...
ERIC Educational Resources Information Center
Caldwell-Harris, Catherine L.; Lancaster, Alia; Ladd, D. Robert; Dediu, Dan; Christiansen, Morten H.
2015-01-01
This study examined whether musical training, ethnicity, and experience with a natural tone language influenced sensitivity to tone while listening to an artificial tone language. The language was designed with three tones, modeled after level-tone African languages. Participants listened to a 15-min random concatenation of six 3-syllable words.…
Residual acceleration data on IML-1: Development of a data reduction and dissemination plan
NASA Technical Reports Server (NTRS)
Rogers, Melissa J. B.; Alexander, J. Iwan D.
1993-01-01
The research performed consisted of three stages: (1) identification of sensitive IML-1 experiments and sensitivity ranges by order of magnitude estimates, numerical modeling, and investigator input; (2) research and development towards reduction, supplementation, and dissemination of residual acceleration data; and (3) implementation of the plan on existing acceleration databases.
Peterson, Daniel J; Gill, W Drew; Dose, John M; Hoover, Donald B; Pauly, James R; Cummins, Elizabeth D; Burgess, Katherine C; Brown, Russell W
2017-05-15
Neonatal quinpirole (NQ) treatment of rats increases dopamine D2 receptor sensitivity, and this increase persists throughout the animal's lifetime. In Experiment 1, we analyzed the role of α7 and α4β2 nicotinic receptors (nAChRs) in nicotine behavioral sensitization and in the brain-derived neurotrophic factor (BDNF) response to nicotine in NQ- and neonatally saline (NS)-treated rats. In Experiment 2, we analyzed changes in α7 and α4β2 nAChR density in the nucleus accumbens (NAcc) and dorsal striatum in NQ and NS animals sensitized to nicotine. Male and female Sprague-Dawley rats were neonatally treated with quinpirole (1 mg/kg) or saline from postnatal days (P)1-21. Animals were given ip injections of either saline or nicotine (0.5 mg/kg free base) every second day from P33 to P49 and tested on behavioral sensitization. Before each injection, animals were ip administered the α7 nAChR antagonist methyllycaconitine (MLA; 2 or 4 mg/kg) or the α4β2 nAChR antagonist dihydro beta erythroidine (DhβE; 1 or 3 mg/kg). Results revealed that NQ enhanced nicotine sensitization and that this enhancement was blocked by DhβE. MLA blocked the enhanced nicotine sensitization in NQ animals, but did not block nicotine sensitization itself. NQ enhanced the NAcc BDNF response to nicotine, which was blocked by both antagonists. In Experiment 2, NQ enhanced nicotine sensitization and enhanced α4β2, but not α7, nAChR upregulation in the NAcc. These results suggest a relationship between accumbal BDNF and α4β2 nAChRs and their role in the behavioral response to nicotine in the NQ model, which has relevance to schizophrenia, a behavioral disorder with high rates of tobacco smoking. Copyright © 2017. Published by Elsevier B.V.
Development and Testing of Neutron Cross Section Covariance Data for SCALE 6.2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marshall, William BJ J; Williams, Mark L; Wiarda, Dorothea
2015-01-01
Neutron cross-section covariance data are essential for many sensitivity/uncertainty and uncertainty quantification assessments performed both within the TSUNAMI suite and more broadly throughout the SCALE code system. The release of ENDF/B-VII.1 included a more complete set of neutron cross-section covariance data: these data form the basis for a new cross-section covariance library to be released in SCALE 6.2. A range of testing is conducted to investigate the properties of these covariance data and ensure that the data are reasonable. These tests include examination of the uncertainty in critical experiment benchmark model k_eff values due to nuclear data uncertainties, as well as similarity assessments of irradiated pressurized water reactor (PWR) and boiling water reactor (BWR) fuel with suites of critical experiments. The contents of the new covariance library, the testing performed, and the behavior of the new covariance data are described in this paper. The neutron cross-section covariances can be combined with a sensitivity data file generated using the TSUNAMI suite of codes within SCALE to determine the uncertainty in system k_eff caused by nuclear data uncertainties. The Verified, Archived Library of Inputs and Data (VALID) maintained at Oak Ridge National Laboratory (ORNL) contains over 400 critical experiment benchmark models, and sensitivity data are generated for each of these models. The nuclear data uncertainty in k_eff is generated for each experiment, and the resulting uncertainties are tabulated and compared to the differences in measured and calculated results. The magnitude of the uncertainty for categories of nuclides (such as actinides, fission products, and structural materials) is calculated for irradiated PWR and BWR fuel to quantify the effect of covariance library changes between the SCALE 6.1 and 6.2 libraries. One of the primary applications of sensitivity/uncertainty methods within SCALE is the assessment of similarities between benchmark experiments and safety applications. This is described by a c_k value for each experiment with each application. Several studies have analyzed typical c_k values for a range of critical experiments compared with hypothetical irradiated fuel applications. The c_k value is sensitive to the cross-section covariance data because the contribution of each nuclide is influenced by its uncertainty; large uncertainties indicate more likely bias sources and are thus given more weight. Changes in c_k values resulting from different covariance data can be used to examine and assess underlying data changes. These comparisons are performed for PWR and BWR fuel in storage and transportation systems.
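The two quantities described here follow from standard sensitivity/uncertainty algebra: the nuclear-data-induced uncertainty in k_eff via the "sandwich" product of a sensitivity vector with a covariance matrix, and c_k as the correlation of the shared data-induced uncertainties of an application and an experiment. The sketch below uses made-up three-group sensitivity vectors and covariance values, not SCALE/TSUNAMI data.

```python
import numpy as np

# Relative sensitivity vectors (dk/k per dSigma/Sigma) for an application and an
# experiment, collapsed to three notional nuclide/reaction/energy groups.
# All numbers are invented for illustration.
S_app = np.array([0.30, -0.12, 0.05])
S_exp = np.array([0.28, -0.10, 0.02])

# Relative covariance matrix of the corresponding nuclear data (invented).
C = np.array([[4.0e-4, 1.0e-4, 0.0],
              [1.0e-4, 9.0e-4, 0.0],
              [0.0,    0.0,    2.5e-3]])

def data_uncertainty(S, C):
    """Relative uncertainty in k_eff from the sandwich rule var = S^T C S."""
    return np.sqrt(S @ C @ S)

def ck(Sa, Se, C):
    """Correlation of data-induced uncertainties shared by application and experiment."""
    return (Sa @ C @ Se) / (data_uncertainty(Sa, C) * data_uncertainty(Se, C))

print(f"k_eff uncertainty (application): {100 * data_uncertainty(S_app, C):.3f} %")
print(f"c_k(application, experiment):   {ck(S_app, S_exp, C):.3f}")
```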
Double Beta Decay - Physics Beyond the Standard Model Now, and in Future (GENIUS)
NASA Astrophysics Data System (ADS)
Klapdor-Kleingrothaus, H. V.
Nuclear double beta decay provides an extraordinarily broad potential for searching for physics beyond the Standard Model, already probing the TeV scale on which new physics should manifest itself. These possibilities are reviewed here. First, the results of present-generation experiments are presented. The most sensitive of them - the Heidelberg-Moscow experiment in the Gran Sasso - now probes the electron neutrino mass in the sub-eV region and will reach a limit of ~0.1 eV in a few years. Based to a large extent on the theoretical work of the Heidelberg Double Beta Group over the last two years, results are also obtained for SUSY models (R-parity breaking, sneutrino mass), leptoquarks (leptoquark-Higgs coupling), compositeness, right-handed W boson mass and others. These results are comfortably competitive with corresponding results from high-energy accelerators such as the TEVATRON and HERA. Second, future perspectives of double beta (ββ) research are discussed. A new Heidelberg experimental proposal (GENIUS) is presented which would increase the sensitivity to Majorana neutrino masses from the present level of at best 0.1 eV down to 0.01 or even 0.001 eV. Its physics potential would represent a breakthrough into the multi-TeV range for many beyond-Standard-Model scenarios. Its sensitivity to neutrino oscillation parameters would exceed that of all present terrestrial neutrino oscillation experiments and of those planned for the future. It would further, already in a first step, cover almost the full MSSM parameter space for the prediction of neutralinos as cold dark matter, making the experiment competitive with the LHC in the search for supersymmetry.
NASA Astrophysics Data System (ADS)
Jiang, Shanchao; Wang, Jing; Sui, Qingmei
2015-11-01
A novel tilt sensor that can distinguish the circumferential direction of inclination is demonstrated by mounting two strain-sensitive fiber Bragg gratings (FBGs) on two orthogonal triangular cantilever beams and using a third FBG as a temperature-compensation element. Based on spatial vectors and space geometry, a theoretical calculation model of the proposed FBG tilt sensor is established, from which the azimuth and tilt angle of the inclined direction can be obtained. To determine its measurement characteristics, a calibration experiment on a prototype of the proposed FBG tilt sensor is carried out. Analysis of the temperature-sensitivity experiment data shows that the proposed FBG tilt sensor exhibits excellent temperature-compensation characteristics. In the 2-D tilt-angle experiment, the tilt-measurement sensitivities of the two strain-sensing FBGs are 140.85°/nm and 101.01°/nm over a wide range of 60°. Furthermore, the azimuth and tilt angle of the inclined direction can be obtained by the proposed FBG tilt sensor, as verified in a circumferential-angle experiment. The experimental data show that the relative errors of the azimuth are 0.55% (positive direction) and 1.14% (negative direction), and the relative errors of the tilt angle are all less than 3%. The experimental results confirm that the proposed FBG tilt sensor, which distinguishes the circumferential direction of inclination, achieves azimuth and tilt-angle measurement with a wide measuring range and high accuracy.
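Under a linear small-angle assumption, the decoding step can be sketched as below: two temperature-compensated wavelength shifts are scaled by the reported sensitivities into orthogonal tilt components, and the azimuth and total tilt follow from plane geometry. The wavelength shifts used here are made-up inputs, and the paper's full spatial-vector model is more involved than this simplification.

```python
import math

# Reported sensitivities of the two orthogonal FBGs (degrees of tilt per nm of shift)
K1, K2 = 140.85, 101.01

def decode_tilt(dlam1_nm, dlam2_nm):
    """Return (tilt angle, azimuth) in degrees from two FBG wavelength shifts,
    assuming a linear small-angle response along two orthogonal axes."""
    theta_x = K1 * dlam1_nm          # tilt component sensed by FBG 1
    theta_y = K2 * dlam2_nm          # tilt component sensed by FBG 2
    tilt = math.hypot(theta_x, theta_y)
    azimuth = math.degrees(math.atan2(theta_y, theta_x)) % 360.0
    return tilt, azimuth

print(decode_tilt(0.10, 0.05))       # illustrative shifts of 0.10 nm and 0.05 nm
```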
Dunthorn, Jason; Dyer, Robert M; Neerchal, Nagaraj K; McHenry, Jonathan S; Rajkondawar, Parimal G; Steingraber, Gary; Tasch, Uri
2015-11-01
Lameness remains a significant cause of production losses and a growing welfare concern, and it may be a greater economic burden than clinical mastitis. The need for accurate, continuous, automated detection systems continues to grow because the US prevalence of lameness is 12.5%, while individual herds may experience prevalences of 27.8-50.8%. To that end, the first force-plate system, restricted to the vertical dimension, identified lame cows with 85% specificity and 52% sensitivity. These results led to the hypothesis that adding the transverse and longitudinal dimensions could improve the sensitivity of lameness detection. To address this hypothesis, we upgraded the original force-plate system to measure ground reaction forces (GRFs) across three directions. GRFs and locomotion scores were generated from randomly selected cows, and logistic regression was used to develop a model that characterised the relationships of locomotion scores to the GRFs. This preliminary study showed that 76 variables across 3 dimensions produced a model with greater than 90% sensitivity, specificity, and area under the receiver operating characteristic curve (AUC). The result was a marked improvement over the 52% sensitivity and 85% specificity previously observed with the 1-dimensional model and over the 45% sensitivity reported for visual observations. Validation of model accuracy continues, with the goal of finalising accurate automated methods of lameness detection.
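The modelling step described here amounts to fitting a logistic regression of lameness status on GRF-derived variables and scoring it with sensitivity, specificity, and AUC; the sketch below does this on synthetic three-axis GRF features generated for illustration, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(4)

# Synthetic GRF features (e.g., peak vertical/transverse/longitudinal forces and
# asymmetries) for 400 cows; lame cows get shifted feature means.
n, p = 400, 12
lame = rng.integers(0, 2, n)
X = rng.standard_normal((n, p)) + 0.8 * lame[:, None] * rng.uniform(0.2, 1.0, p)

X_tr, X_te, y_tr, y_te = train_test_split(X, lame, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

prob = clf.predict_proba(X_te)[:, 1]
tn, fp, fn, tp = confusion_matrix(y_te, (prob > 0.5).astype(int)).ravel()
print(f"sensitivity {tp / (tp + fn):.2f}  specificity {tn / (tn + fp):.2f}  "
      f"AUC {roc_auc_score(y_te, prob):.2f}")
```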
NASA Astrophysics Data System (ADS)
Núñez, M.; Robie, T.; Vlachos, D. G.
2017-10-01
Kinetic Monte Carlo (KMC) simulation provides insights into catalytic reactions unobtainable with either experiments or mean-field microkinetic models. Sensitivity analysis of KMC models assesses the robustness of the predictions to parametric perturbations and identifies rate determining steps in a chemical reaction network. Stiffness in the chemical reaction network, a ubiquitous feature, demands lengthy run times for KMC models and renders efficient sensitivity analysis based on the likelihood ratio method unusable. We address the challenge of efficiently conducting KMC simulations and performing accurate sensitivity analysis in systems with unknown time scales by employing two acceleration techniques: rate constant rescaling and parallel processing. We develop statistical criteria that ensure sufficient sampling of non-equilibrium steady state conditions. Our approach provides the twofold benefit of accelerating the simulation itself and enabling likelihood ratio sensitivity analysis, which provides further speedup relative to finite difference sensitivity analysis. As a result, the likelihood ratio method can be applied to real chemistry. We apply our methodology to the water-gas shift reaction on Pt(111).
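A minimal Gillespie-type KMC sketch of the rescaling idea is given below for a toy network in which the fast adsorption/desorption pair is divided by a common scale factor to close the time-scale gap while preserving its quasi-equilibrium ratio. The network, rate constants, and scale factor are invented and are unrelated to the water-gas shift mechanism studied in the paper; the rescaled run samples the slow conversion step far more often for the same number of events.

```python
import numpy as np

rng = np.random.default_rng(5)

def kmc(scale=1.0, n_sites=200, n_events=20_000):
    """Toy lattice KMC: fast A adsorption/desorption, slow A* -> B conversion.
    Dividing the fast pair by `scale` narrows the time-scale gap while keeping
    its quasi-equilibrium ratio (the rescaling idea, in caricature)."""
    k_ads, k_des, k_rxn = 1.0e4 / scale, 2.0e4 / scale, 1.0
    empty, a_ads, b_made, t = n_sites, 0, 0, 0.0
    for _ in range(n_events):
        rates = np.array([k_ads * empty, k_des * a_ads, k_rxn * a_ads])
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        event = rng.choice(3, p=rates / total)
        if event == 0:       # adsorption: A(g) + * -> A*
            empty -= 1; a_ads += 1
        elif event == 1:     # desorption: A* -> A(g) + *
            empty += 1; a_ads -= 1
        else:                # slow step:  A* -> B(g) + *
            a_ads -= 1; empty += 1; b_made += 1
    return b_made, b_made / t   # number of B events and turnover rate

print("unscaled  (B events, rate):", kmc(scale=1.0))
print("rescaled  (B events, rate):", kmc(scale=100.0))
```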
Modelling the effect of GRP78 on anti-oestrogen sensitivity and resistance in breast cancer
Parmar, Jignesh H.; Cook, Katherine L.; Shajahan-Haq, Ayesha N.; Clarke, Pamela A. G.; Tavassoly, Iman; Clarke, Robert; Tyson, John J.; Baumann, William T.
2013-01-01
Understanding the origins of resistance to anti-oestrogen drugs is of critical importance to many breast cancer patients. Recent experiments show that knockdown of GRP78, a key gene in the unfolded protein response (UPR), can re-sensitize resistant cells to anti-oestrogens, and overexpression of GRP78 in sensitive cells can cause them to become resistant. These results appear to arise from the operation and interaction of three cellular systems: the UPR, autophagy and apoptosis. To determine whether our current mechanistic understanding of these systems is sufficient to explain the experimental results, we built a mathematical model of the three systems and their interactions. We show that the model is capable of reproducing previously published experimental results and some new data gathered specifically for this paper. The model provides us with a tool to better understand the interactions that bring about anti-oestrogen resistance and the effects of GRP78 on both sensitive and resistant breast cancer cells. PMID:24511377
Constitutive equation of friction based on the subloading-surface concept
Ueno, Masami; Kuwayama, Takuya; Suzuki, Noriyuki; Yonemura, Shigeru; Yoshikawa, Nobuo
2016-01-01
The subloading-friction model is capable of describing static friction, the smooth transition from static to kinetic friction and the recovery to static friction after sliding stops or sliding velocity decreases. This causes a negative rate sensitivity (i.e. a decrease in friction resistance with increasing sliding velocity). A generalized subloading-friction model is formulated in this article by incorporating the concept of overstress for viscoplastic sliding velocity into the subloading-friction model to describe not only negative rate sensitivity but also positive rate sensitivity (i.e. an increase in friction resistance with increasing sliding velocity) at a general sliding velocity ranging from quasi-static to impact sliding. The validity of the model is verified by numerical experiments and comparisons with test data obtained from friction tests using a lubricated steel specimen. PMID:27493570
Spatial Language and the Embedded Listener Model in Parents’ Input to Children
Ferrara, Katrina; Silva, Malena; Wilson, Colin; Landau, Barbara
2015-01-01
Language is a collaborative act: in order to communicate successfully, speakers must generate utterances that are not only semantically valid, but also sensitive to the knowledge state of the listener. Such sensitivity could reflect use of an “embedded listener model,” where speakers choose utterances on the basis of an internal model of the listeners’ conceptual and linguistic knowledge. In this paper, we ask whether parents’ spatial descriptions incorporate an embedded listener model that reflects their children’s understanding of spatial relations and spatial terms. Adults described the positions of targets in spatial arrays to their children or to the adult experimenter. Arrays were designed so that targets could not be identified unless spatial relationships within the array were encoded and described. Parents of 3–4 year-old children encoded relationships in ways that were well-matched to their children’s level of spatial language. These encodings differed from those of the same relationships in speech to the adult experimenter (Experiment 1). By contrast, parents of individuals with severe spatial impairments (Williams syndrome) did not show clear evidence of sensitivity to their children’s level of spatial language (Experiment 2). The results provide evidence for an embedded listener model in the domain of spatial language, and indicate conditions under which the ability to model listener knowledge may be more challenging. PMID:26717804
Spatial Language and the Embedded Listener Model in Parents' Input to Children.
Ferrara, Katrina; Silva, Malena; Wilson, Colin; Landau, Barbara
2016-11-01
Language is a collaborative act: To communicate successfully, speakers must generate utterances that are not only semantically valid but also sensitive to the knowledge state of the listener. Such sensitivity could reflect the use of an "embedded listener model," where speakers choose utterances on the basis of an internal model of the listener's conceptual and linguistic knowledge. In this study, we ask whether parents' spatial descriptions incorporate an embedded listener model that reflects their children's understanding of spatial relations and spatial terms. Adults described the positions of targets in spatial arrays to their children or to the adult experimenter. Arrays were designed so that targets could not be identified unless spatial relationships within the array were encoded and described. Parents of 3-4-year-old children encoded relationships in ways that were well-matched to their children's level of spatial language. These encodings differed from those of the same relationships in speech to the adult experimenter (Experiment 1). In contrast, parents of individuals with severe spatial impairments (Williams syndrome) did not show clear evidence of sensitivity to their children's level of spatial language (Experiment 2). The results provide evidence for an embedded listener model in the domain of spatial language and indicate conditions under which the ability to model listener knowledge may be more challenging. Copyright © 2015 Cognitive Science Society, Inc.
The Active Role of the Ocean in the Temporal Evolution of Climate Sensitivity
Garuba, Oluwayemi A.; Lu, Jian; Liu, Fukai; ...
2017-11-30
Here, the temporal evolution of the effective climate sensitivity is shown to be influenced by the changing pattern of sea surface temperature (SST) and ocean heat uptake (OHU), which in turn have been attributed to ocean circulation changes. A set of novel experiments are performed to isolate the active role of the ocean by comparing a fully coupled CO2 quadrupling community Earth System Model (CESM) simulation against a partially coupled one, where the effect of the ocean circulation change and its impact on surface fluxes are disabled. The active OHU is responsible for the reduced effective climate sensitivity and weaker surface warming response in the fully coupled simulation. The passive OHU excites qualitatively similar feedbacks to CO2 quadrupling in a slab ocean model configuration due to the similar SST spatial pattern response in both experiments. Additionally, the nonunitary forcing efficacy of the active OHU (1.7) explains the very different net feedback parameters in the fully and partially coupled responses.
The Active Role of the Ocean in the Temporal Evolution of Climate Sensitivity
NASA Astrophysics Data System (ADS)
Garuba, Oluwayemi A.; Lu, Jian; Liu, Fukai; Singh, Hansi A.
2018-01-01
The temporal evolution of the effective climate sensitivity is shown to be influenced by the changing pattern of sea surface temperature (SST) and ocean heat uptake (OHU), which in turn have been attributed to ocean circulation changes. A set of novel experiments are performed to isolate the active role of the ocean by comparing a fully coupled CO2 quadrupling community Earth System Model (CESM) simulation against a partially coupled one, where the effect of the ocean circulation change and its impact on surface fluxes are disabled. The active OHU is responsible for the reduced effective climate sensitivity and weaker surface warming response in the fully coupled simulation. The passive OHU excites qualitatively similar feedbacks to CO2 quadrupling in a slab ocean model configuration due to the similar SST spatial pattern response in both experiments. Additionally, the nonunitary forcing efficacy of the active OHU (1.7) explains the very different net feedback parameters in the fully and partially coupled responses.
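For orientation, the bookkeeping behind "effective climate sensitivity" and the net feedback parameter can be sketched from the global-mean energy budget, lambda = (F - N) / dT, together with an efficacy-weighted variant in which the ocean-heat-uptake term is multiplied by an efficacy factor. The forcing, imbalance, warming, and efficacy numbers below are invented placeholders, not output from the CESM experiments.

```python
def feedback_parameter(F, N, dT, efficacy=1.0):
    """Net feedback parameter lambda (W m^-2 K^-1) from forcing F and top-of-atmosphere
    imbalance N (both W m^-2), warming dT (K), and an OHU efficacy factor."""
    return (F - efficacy * N) / dT

def effective_sensitivity(F2x, lam):
    """Effective climate sensitivity (K) for a doubling forcing F2x (W m^-2)."""
    return F2x / lam

# Invented illustrative numbers for a 4xCO2-style experiment
F, N, dT, F2x = 7.0, 2.0, 5.0, 3.7
for eff in (1.0, 1.7):
    lam = feedback_parameter(F, N, dT, efficacy=eff)
    print(f"efficacy {eff}: lambda = {lam:.2f} W m-2 K-1, "
          f"effective sensitivity = {effective_sensitivity(F2x, lam):.2f} K")
```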
Predictions of Cockpit Simulator Experimental Outcome Using System Models
NASA Technical Reports Server (NTRS)
Sorensen, J. A.; Goka, T.
1984-01-01
This study involved predicting the outcome of a cockpit simulator experiment where pilots used cockpit displays of traffic information (CDTI) to establish and maintain in-trail spacing behind a lead aircraft during approach. The experiments were run on the NASA Ames Research Center multicab cockpit simulator facility. Prior to the experiments, a mathematical model of the pilot/aircraft/CDTI flight system was developed which included relative in-trail and vertical dynamics between aircraft in the approach string. This model was used to construct a digital simulation of the string dynamics including response to initial position errors. The model was then used to predict the outcome of the in-trail following cockpit simulator experiments. Outcome included performance and sensitivity to different separation criteria. The experimental results were then used to evaluate the model and its prediction accuracy. Lessons learned in this modeling and prediction study are noted.
NASA Astrophysics Data System (ADS)
Samper, J.; Dewonck, S.; Zheng, L.; Yang, Q.; Naves, A.
Diffusion of inert and reactive tracers (DIR) is an experimental program performed by ANDRA at Bure underground research laboratory in Meuse/Haute Marne (France) to characterize diffusion and retention of radionuclides in Callovo-Oxfordian (C-Ox) argillite. In situ diffusion experiments were performed in vertical boreholes to determine diffusion and retention parameters of selected radionuclides. C-Ox clay exhibits a mild diffusion anisotropy due to stratification. Interpretation of in situ diffusion experiments is complicated by several non-ideal effects caused by the presence of a sintered filter, a gap between the filter and borehole wall and an excavation disturbed zone (EdZ). The relevance of such non-ideal effects and their impact on estimated clay parameters have been evaluated with numerical sensitivity analyses and synthetic experiments having similar parameters and geometric characteristics as real DIR experiments. Normalized dimensionless sensitivities of tracer concentrations at the test interval have been computed numerically. Tracer concentrations are found to be sensitive to all key parameters. Sensitivities are tracer dependent and vary with time. These sensitivities are useful to identify which are the parameters that can be estimated with less uncertainty and find the times at which tracer concentrations begin to be sensitive to each parameter. Synthetic experiments generated with prescribed known parameters have been interpreted automatically with INVERSE-CORE 2D and used to evaluate the relevance of non-ideal effects and ascertain parameter identifiability in the presence of random measurement errors. Identifiability analysis of synthetic experiments reveals that data noise makes difficult the estimation of clay parameters. Parameters of clay and EdZ cannot be estimated simultaneously from noisy data. Models without an EdZ fail to reproduce synthetic data. Proper interpretation of in situ diffusion experiments requires accounting for filter, gap and EdZ. Estimates of the effective diffusion coefficient and the porosity of clay are highly correlated, indicating that these parameters cannot be estimated simultaneously. Accurate estimation of De and porosities of clay and EdZ is only possible when the standard deviation of random noise is less than 0.01. Small errors in the volume of the circulation system do not affect clay parameter estimates. Normalized sensitivities as well as the identifiability analysis of synthetic experiments provide additional insight on inverse estimation of in situ diffusion experiments and will be of great benefit for the interpretation of real DIR in situ diffusion experiments.
Modeling and simulation of deformation of hydrogels responding to electric stimulus.
Li, Hua; Luo, Rongmo; Lam, K Y
2007-01-01
A model for the simulation of pH-sensitive hydrogels is refined in this paper to extend its application to electric-sensitive hydrogels; the result is termed the refined multi-effect-coupling electric-stimulus (rMECe) model. By reformulating the fixed-charge density and considering finite deformation, the rMECe model is able to predict the responsive deformations of the hydrogels when they are immersed in a bath solution subject to an externally applied electric field. The rMECe model consists of nonlinear partial differential governing equations with chemo-electro-mechanical coupling effects and a fixed-charge density that incorporates the electric-field effect. By comparing simulations with experimental data extracted from the literature, the model is verified to be accurate and stable. The rMECe model provides quantitative deformation analysis of electric-sensitive hydrogels. The influences of several physical parameters, including the externally applied electric voltage, the initial fixed-charge density, the hydrogel strip thickness, and the ionic strength and valence of the surrounding solution, on the displacement and average curvature of the hydrogels are discussed in detail.
NASA Astrophysics Data System (ADS)
Sierra, Carlos A.; Trumbore, Susan E.; Davidson, Eric A.; Vicca, Sara; Janssens, I.
2015-03-01
The sensitivity of soil organic matter decomposition to global environmental change is a topic of prominent relevance for the global carbon cycle. Decomposition depends on multiple factors that are being altered simultaneously as a result of global environmental change; therefore, it is important to study the sensitivity of the rates of soil organic matter decomposition with respect to multiple and interacting drivers. In this manuscript, we present an analysis of the potential response of decomposition rates to simultaneous changes in temperature and moisture. To address this problem, we first present a theoretical framework to study the sensitivity of soil organic matter decomposition when multiple driving factors change simultaneously. We then apply this framework to models and data at different levels of abstraction: (1) to a mechanistic model that addresses the limitation of enzyme activity by simultaneous effects of temperature and soil water content, the latter controlling substrate supply and oxygen concentration for microbial activity; (2) to different mathematical functions used to represent temperature and moisture effects on decomposition in biogeochemical models. To contrast model predictions at these two levels of organization, we compiled different data sets of observed responses in field and laboratory studies. Then we applied our conceptual framework to: (3) observations of heterotrophic respiration at the ecosystem level; (4) laboratory experiments looking at the response of heterotrophic respiration to independent changes in moisture and temperature; and (5) ecosystem-level experiments manipulating soil temperature and water content simultaneously.
NASA Technical Reports Server (NTRS)
Wigley, D. A.
1982-01-01
Nitronic 40 was chosen for the construction of Pathfinder I, an R & D model for use in the National Transonic Facility, because of its good mechanical properties at cryogenic temperatures. Nitronic 40 contains delta ferrite and is in a sensitized condition. Heat treatments carried out to remove residual stresses also caused further sensitization. Experiments showed that heat treatment followed by cryoquenching removed the sensitization without creating residual stresses. Heat treatment at temperatures of 2200 F was used to remove the delta ferrite but with little success and at the cost of massive grain growth. The implications of using degraded Nitronic 40 for cryogenic wind tunnel models are discussed, together with possible acceptance criteria.
NASA Astrophysics Data System (ADS)
Ban, G.; Bison, G.; Bodek, K.; Daum, M.; Fertl, M.; Franke, B.; Grujić, Z. D.; Heil, W.; Horras, M.; Kasprzak, M.; Kermaidic, Y.; Kirch, K.; Koch, H.-C.; Komposch, S.; Kozela, A.; Krempel, J.; Lauss, B.; Lefort, T.; Mtchedlishvili, A.; Pignol, G.; Piegsa, F. M.; Prashanth, P.; Quéméner, G.; Rawlik, M.; Rebreyend, D.; Ries, D.; Roccia, S.; Rozpedzik, D.; Schmidt-Wellenburg, P.; Severijns, N.; Weis, A.; Wyszynski, G.; Zejma, J.; Zsigmond, G.
2018-07-01
We report on a laser-based 199Hg co-magnetometer deployed in an experiment searching for a permanent electric dipole moment of the neutron. We demonstrate a more than five-fold increase in signal-to-noise ratio in a direct comparison measurement with its 204Hg discharge-bulb-based predecessor. An improved data model for the extraction of important system parameters, such as the degrees of absorption and polarization, is derived. Laser- and lamp-based data sets can be consistently described by the improved model, which permits comparison of measurements using the two different light sources and explains the increase in magnetometer performance. The laser-based magnetometer satisfies the magnetic field sensitivity requirements for the next generation of nEDM experiments.
Testing the Model of Stigma Communication with a Factorial Experiment in an Interpersonal Context
Smith, Rachel A.
2014-01-01
Stigmas may regulate intergroup relationships; they may also influence interpersonal actions. This study extends the previous test of the model of stigma communication (Smith, 2012) with a factorial experiment in which the outcomes refer to a hypothetical acquaintance. New affective reactions, sympathy and frustration, and a new personality trait, disgust sensitivity, were explored. In addition, perceived severity and susceptibility of the infection were included as alternative mechanisms explaining the effects. The results (n = 318) showed that message content, message reactions (emotional and cognitive), and disgust sensitivity predicted intentions to regulate the infected acquaintance’s interactions and lifestyle (R2 = .79) and participants’ likelihood of telling others about the acquaintance’s infection (R2 = .35). The findings generally provided support for MSC and directions for improvement. PMID:25425853
SVM-based automatic diagnosis method for keratoconus
NASA Astrophysics Data System (ADS)
Gao, Yuhong; Wu, Qiang; Li, Jing; Sun, Jiande; Wan, Wenbo
2017-06-01
Keratoconus is a progressive corneal disease that can lead to serious myopia and astigmatism, or even to corneal transplantation, if it worsens. Early detection of keratoconus is therefore extremely important for monitoring and controlling the condition. In this paper, we propose an automatic diagnosis algorithm for keratoconus that discriminates normal eyes from keratoconus eyes. We select the parameters obtained by Oculyzer as the corneal features, which characterize the cornea both directly and indirectly. In our experiment, 289 normal cases and 128 keratoconus cases are divided into training and test sets. Performing far better than the other kernels, the linear SVM kernel achieves a sensitivity of 94.94% and a specificity of 97.87% when all parameters are used to train the model. In single-parameter experiments with the linear kernel, elevation (92.03% sensitivity, 98.61% specificity) and thickness (97.28% sensitivity, 97.82% specificity) showed good classification ability. Combining corneal elevation and thickness, the proposed method reaches 97.43% sensitivity and 99.19% specificity. The experiments demonstrate that the proposed automatic diagnosis method is feasible and reliable.
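The classification-and-evaluation step can be sketched with scikit-learn's linear-kernel SVM, reporting sensitivity and specificity on a held-out split. The synthetic "elevation" and "thickness" features below are generated for illustration only and are not Oculyzer measurements.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(6)

# Synthetic two-feature data set: 289 "normal" and 128 "keratoconus" eyes.
# Keratoconus eyes get higher elevation and thinner corneas on average.
n_norm, n_kc = 289, 128
elev = np.r_[rng.normal(5, 3, n_norm), rng.normal(25, 8, n_kc)]
thick = np.r_[rng.normal(545, 25, n_norm), rng.normal(470, 35, n_kc)]
X = np.column_stack([elev, thick])
y = np.r_[np.zeros(n_norm), np.ones(n_kc)]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="linear")).fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print(f"sensitivity {tp / (tp + fn):.3f}, specificity {tn / (tn + fp):.3f}")
```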
Modelling of resonant MEMS magnetic field sensor with electromagnetic induction sensing
NASA Astrophysics Data System (ADS)
Liu, Song; Xu, Huaying; Xu, Dehui; Xiong, Bin
2017-06-01
This paper presents an analytical model of resonant MEMS magnetic field sensor with electromagnetic induction sensing. The resonant structure vibrates in square extensional (SE) mode. By analyzing the vibration amplitude and quality factor of the resonant structure, the magnetic field sensitivity as a function of device structure parameters and encapsulation pressure is established. The developed analytical model has been verified by comparing calculated results with experiment results and the deviation between them is only 10.25%, which shows the feasibility of the proposed device model. The model can provide theoretical guidance for further design optimization of the sensor. Moreover, a quantitative study of the magnetic field sensitivity is conducted with respect to the structure parameters and encapsulation pressure based on the proposed model.
ERIC Educational Resources Information Center
Carlson, Laurie A.; Harper, Kelly S.
2011-01-01
Service provision to gay, lesbian, bisexual, and transgender (GLBT) older adults is a dynamic and sensitive area, requiring rigorous and extensive inquiry and action. Examining the readiness and assets of organizations serving GLBT older adults requires not only heart and sensitivity but also resources and a clear vision. The Community Readiness…
ERIC Educational Resources Information Center
Vermeersch, Hans; T'Sjoen, Guy; Kaufman, Jean-Marc; Vincke, John; Bracke, Piet
2010-01-01
Based on Boyce and Ellis's model on "context" and "biological sensitivity to the context", this article analyzes the interaction between the experience of daily hassles and experimentally induced cardiovascular reactivity as an indicator of stress reactivity, in explaining risk taking and self-esteem. This study found, in a…
NASA Astrophysics Data System (ADS)
Dukhovskoy, Dmitry; Bourassa, Mark
2017-04-01
Ocean processes in the Nordic Seas and northern North Atlantic are strongly controlled by air-sea heat and momentum fluxes. The predominantly cyclonic, large-scale atmospheric circulation brings the deep ocean layer up to the surface preconditioning the convective sites in the Nordic Seas for deep convection. In winter, intensive cooling and possibly salt flux from newly formed sea ice erodes the near-surface stratification and the mixed layer merges with the deeper domed layer, exposing the very weakly stratified deep water mass to direct interaction with the atmosphere. Surface wind is one of the atmospheric parameters required for estimating momentum and turbulent heat fluxes to the sea ice and ocean surface. In the ocean models forced by atmospheric analysis, errors in surface wind fields result in errors in air-sea heat and momentum fluxes, water mass formation, ocean circulation, as well as volume and heat transport in the straits. The goal of the study is to assess discrepancies across the wind vector fields from reanalysis data sets and scatterometer-derived gridded products over the Nordic Seas and northern North Atlantic and to demonstrate possible implications of these differences for ocean modeling. The analyzed data sets include the reanalysis data from the National Center for Environmental Prediction Reanalysis 2 (NCEPR2), Climate Forecast System Reanalysis (CFSR), Arctic System Reanalysis (ASR) and satellite wind products Cross-Calibrated Multi-Platform (CCMP) wind product version 1.1 and recently released version 2.0, and Remote Sensing Systems QuikSCAT data. Large-scale and mesoscale characteristics of winds are compared at interannual, seasonal, and synoptic timescales. Numerical sensitivity experiments are conducted with a coupled ice-ocean model forced by different wind fields. The sensitivity experiments demonstrate differences in the net surface heat fluxes during storm events. Next, it is hypothesized that discrepancies in the wind vorticity fields should manifest different behaviors of the isopycnals in the Nordic Seas. Time evolution of isopycnal depths in the sensitivity experiments forced by different wind fields is discussed. Results of these sensitivity experiments demonstrate a relationship between the isopycnal surfaces and the wind stress curl. The numerical experiments are also analyzed to investigate the relationship between the East Greenland Current and the wind stress curl over the Nordic Seas. The transport of the current at this location has substantial contribution from wind-driven large-scale circulation. This wind-driven part of the East Greenland Current is a western-intensified return flow of a wind-driven cyclonic gyre in the central Nordic Seas. The numerical experiments with different wind fields reveal notable sensitivity of the East Greenland Current to differences in the wind forcing.
A new framework for climate sensitivity and prediction: a modelling perspective
NASA Astrophysics Data System (ADS)
Ragone, Francesco; Lucarini, Valerio; Lunkeit, Frank
2016-03-01
The sensitivity of climate models to increasing CO2 concentration and the climate response at decadal time-scales are still major factors of uncertainty for the assessment of the long- and short-term effects of anthropogenic climate change. While the relatively slow progress on these issues is partly due to the inherent inaccuracies of numerical climate models, it also hints at the need for stronger theoretical foundations for the problem of studying climate sensitivity and performing climate change predictions with numerical models. Here we demonstrate that it is possible to use Ruelle's response theory to predict the impact of an arbitrary CO2 forcing scenario on the global surface temperature of a general circulation model. Response theory puts the concept of climate sensitivity on firm theoretical grounds, and addresses rigorously the problem of predictability at different time-scales. Conceptually, these results show that performing climate change experiments with general circulation models is a well-defined problem from a physical and mathematical point of view. Practically, they show that considering one single CO2 forcing scenario is enough to construct operators able to predict the response of climatic observables to any other CO2 forcing scenario, without the need to perform additional numerical simulations. We also introduce a general relationship between climate sensitivity and climate response at different time scales, thus providing an explicit definition of the inertia of the system at different time scales. This technique also allows one to study systematically, for a large variety of forcing scenarios, the time horizon at which the climate change signal (in an ensemble sense) becomes statistically significant. While what we report here refers to the linear response, the general theory allows for treating nonlinear effects as well. These results pave the way for redesigning and interpreting climate change experiments from a radically new perspective.
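The practical recipe implied here can be sketched as follows: estimate the linear response (Green's) function from a single forcing experiment, then predict the response to any other scenario by convolution. The toy two-timescale response function, the step and ramp scenarios, and all numbers below are invented; the actual study works with a general circulation model.

```python
import numpy as np

dt, n = 1.0, 300                       # years, record length
t = np.arange(n) * dt

# "True" Green's function of a toy linear system (two relaxation time scales)
G_true = 0.6 / 4.0 * np.exp(-t / 4.0) + 0.4 / 80.0 * np.exp(-t / 80.0)

def respond(G, f):
    """Linear response: discrete convolution of Green's function with forcing."""
    return dt * np.convolve(G, f)[:len(f)]

# Step 1: run a single "experiment" with step forcing and record the response.
f_step = np.ones(n)
T_step = respond(G_true, f_step)

# Step 2: recover the Green's function as the time derivative of the step response.
G_est = np.gradient(T_step, dt)

# Step 3: predict the response to a different scenario (a 100-year ramp) and
# compare with the directly simulated response.
f_ramp = np.clip(t / 100.0, 0.0, 1.0)
err = np.max(np.abs(respond(G_est, f_ramp) - respond(G_true, f_ramp)))
print("max prediction error:", round(err, 4))
```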
The sensitivity of numerically simulated climates to land-surface boundary conditions
NASA Technical Reports Server (NTRS)
Mintz, Y.
1982-01-01
Eleven sensitivity experiments that were made with general circulation models to see how land-surface boundary conditions can influence the rainfall, temperature, and motion fields of the atmosphere are discussed. In one group of experiments, different soil moistures or albedos are prescribed as time-invariant boundary conditions. In a second group, different soil moistures or different albedos are initially prescribed, and the soil moisture (but not the albedo) is allowed to change with time according to the governing equations for soil moisture. In a third group, the results of constant versus time-dependent soil moistures are compared.
New Target for Cosmic Axion Searches.
Baumann, Daniel; Green, Daniel; Wallisch, Benjamin
2016-10-21
Future cosmic microwave background experiments have the potential to probe the density of relativistic species at the subpercent level. This sensitivity allows light thermal relics to be detected up to arbitrarily high decoupling temperatures. Conversely, the absence of a detection would require extra light species never to have been in equilibrium with the Standard Model. In this Letter, we exploit this feature to demonstrate the sensitivity of future cosmological observations to the couplings of axions to photons, gluons, and charged fermions. In many cases, the constraints achievable from cosmology will surpass existing bounds from laboratory experiments and astrophysical observations by orders of magnitude.
Maniscalco, Brian; Peters, Megan A K; Lau, Hakwan
2016-04-01
Zylberberg, Barttfeld, and Sigman (Frontiers in Integrative Neuroscience, 6:79, 2012) found that confidence decisions, but not perceptual decisions, are insensitive to evidence against a selected perceptual choice. We present a signal detection theoretic model to formalize this insight, which gave rise to a counter-intuitive empirical prediction: that depending on the observer's perceptual choice, increasing task performance can be associated with decreasing metacognitive sensitivity (i.e., the trial-by-trial correspondence between confidence and accuracy). The model also provides an explanation as to why metacognitive sensitivity tends to be less than optimal in actual subjects. These predictions were confirmed robustly in a psychophysics experiment. In a second experiment we found that, in at least some subjects, the effects were replicated even under performance feedback designed to encourage optimal behavior. However, some subjects did show improvement under feedback, suggesting the tendency to ignore evidence against a selected perceptual choice may be a heuristic adopted by the perceptual decision-making system, rather than reflecting inherent biological limitations. We present a Bayesian modeling framework that explains why this heuristic strategy may be advantageous in real-world contexts.
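The core insight can be reproduced in a few lines of signal detection simulation: when confidence is based only on the evidence favouring the chosen alternative, rather than on the balance of evidence, metacognitive sensitivity (summarized here as the type-2 AUC of confidence predicting accuracy) drops even though choice accuracy is unchanged. The evidence distributions, d', and trial count below are arbitrary choices, not the paper's experimental parameters.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)

n, d_prime = 100_000, 1.5
stim = rng.integers(0, 2, n)                      # which alternative was presented

# Evidence for alternatives 0 and 1; the presented one gets the higher mean.
e0 = rng.normal(np.where(stim == 0, d_prime, 0.0), 1.0)
e1 = rng.normal(np.where(stim == 1, d_prime, 0.0), 1.0)

choice = (e1 > e0).astype(int)
correct = (choice == stim).astype(int)
e_chosen = np.where(choice == 1, e1, e0)
e_unchosen = np.where(choice == 1, e0, e1)

conf_balance = e_chosen - e_unchosen              # uses all the evidence
conf_positive = e_chosen                          # ignores evidence against the choice

print("accuracy              :", round(correct.mean(), 3))
print("type-2 AUC (balance)  :", round(roc_auc_score(correct, conf_balance), 3))
print("type-2 AUC (positive) :", round(roc_auc_score(correct, conf_positive), 3))
```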
Spectral envelope sensitivity of musical instrument sounds.
Gunawan, David; Sen, D
2008-01-01
It is well known that the spectral envelope is a perceptually salient attribute in musical instrument timbre perception. While a number of studies have explored discrimination thresholds for changes to the spectral envelope, the question of how sensitivity varies as a function of center frequency and bandwidth for musical instruments has yet to be addressed. In this paper a two-alternative forced-choice experiment was conducted to observe perceptual sensitivity to modifications made on trumpet, clarinet and viola sounds. The experiment involved attenuating 14 frequency bands for each instrument in order to determine discrimination thresholds as a function of center frequency and bandwidth. The results indicate that perceptual sensitivity is governed by the first few harmonics and sensitivity does not improve when extending the bandwidth any higher. However, sensitivity was found to decrease if changes were made only to the higher frequencies and continued to decrease as the distorted bandwidth was widened. The results are analyzed and discussed with respect to two other spectral envelope discrimination studies in the literature as well as what is predicted from a psychoacoustic model.
A Small Range Six-Axis Accelerometer Designed with High Sensitivity DCB Elastic Element
Sun, Zhibo; Liu, Jinhao; Yu, Chunzhan; Zheng, Yili
2016-01-01
This paper describes a small range six-axis accelerometer (the measurement range of the sensor is ±g) with a high sensitivity DCB (Double Cantilever Beam) elastic element. The sensor is developed based on a parallel mechanism because of its reliability. The accuracy of the sensor is affected by its sensitivity characteristics. To improve the sensitivity, a DCB structure is applied as the elastic element. Through dynamic analysis, the dynamic model of the accelerometer is established using the Lagrange equation, and the mass matrix and stiffness matrix are obtained by a partial derivative calculation and a conservative congruence transformation, respectively. By simplifying the structure of the accelerometer, a model of the free vibration is achieved, and the parameters of the sensor are designed based on the model. Through stiffness analysis of the DCB structure, the deflection curve of the beam is calculated. Compared with the result obtained using a finite element analysis simulation in ANSYS Workbench, the coincidence rate of the maximum deflection is 89.0% along the x-axis, 88.3% along the y-axis and 87.5% along the z-axis. Through strain analysis of the DCB elastic element, the sensitivity of the beam is obtained. According to the experimental results, the accuracy of the theoretical analysis is 90.4% along the x-axis, 74.9% along the y-axis and 78.9% along the z-axis. The measurement errors of the linear accelerations ax, ay and az in the experiments are 2.6%, 0.6% and 1.31%, respectively. The experiments show that the accelerometer with the DCB elastic element exhibits good sensitivity and precision. PMID:27657089
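To illustrate the kind of free-vibration analysis described above, the sketch below solves the generalized eigenvalue problem K x = ω² M x once mass and stiffness matrices are in hand; the 3-DOF matrices are placeholders, not the DCB elastic element's actual matrices from the paper.

```python
import numpy as np
from scipy.linalg import eigh

# Free vibration: K x = omega^2 M x  (generalized eigenvalue problem).
# Illustrative 3-DOF mass and stiffness matrices (placeholders only).
M = np.diag([0.012, 0.012, 0.015])                 # kg
K = np.array([[ 4.0e4, -1.0e4,  0.0],
              [-1.0e4,  4.0e4, -1.0e4],
              [ 0.0,   -1.0e4,  3.0e4]])           # N/m

eigvals, modes = eigh(K, M)                        # omega^2 in ascending order
freqs_hz = np.sqrt(eigvals) / (2.0 * np.pi)
print("natural frequencies [Hz]:", np.round(freqs_hz, 1))
```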
Light-mediated predation by northern squawfish on juvenile Chinook salmon
Petersen, James H.; Gadomski, Dena M.
1994-01-01
Northern squawfish Ptychocheilus oregonensis cause significant mortality of juvenile salmon in the lower Columbia River Basin (U.S.A.). The effects of light intensity on this predator-prey interaction were examined with laboratory experiments and modelling studies. In laboratory experiments, the rate of capture of subyearling chinook salmon Oncorhynchus tshawytscha by northern squawfish was inversely related to light intensity. In a large raceway, about five times more salmon were captured during 4 h periods of relative darkness (0.03 lx) than during periods with high light intensity (160 lx). The rate of predation could be manipulated by increasing or decreasing light intensity. A simulation model was developed for visual predators that encounter, attack, and capture juvenile salmon, whose schooling behaviour was light-sensitive. The model was fitted to laboratory results using a Monte Carlo filtering procedure. Model-predicted predation rate was especially sensitive to the visual range of predators at low light intensity and to predator search speed at high light intensity. Modelling results also suggested that predation by northern squawfish on juvenile salmon may be highest across a narrow window of light intensity.
Prospect theory reflects selective allocation of attention.
Pachur, Thorsten; Schulte-Mecklenbeck, Michael; Murphy, Ryan O; Hertwig, Ralph
2018-02-01
There is a disconnect in the literature between analyses of risky choice based on cumulative prospect theory (CPT) and work on predecisional information processing. One likely reason is that for expectation models (e.g., CPT), it is often assumed that people behaved only as if they conducted the computations leading to the predicted choice and that the models are thus mute regarding information processing. We suggest that key psychological constructs in CPT, such as loss aversion and outcome and probability sensitivity, can be interpreted in terms of attention allocation. In two experiments, we tested hypotheses about specific links between CPT parameters and attentional regularities. Experiment 1 used process tracing to monitor participants' predecisional attention allocation to outcome and probability information. As hypothesized, individual differences in CPT's loss-aversion, outcome-sensitivity, and probability-sensitivity parameters (estimated from participants' choices) were systematically associated with individual differences in attention allocation to outcome and probability information. For instance, loss aversion was associated with the relative attention allocated to loss and gain outcomes, and a more strongly curved weighting function was associated with less attention allocated to probabilities. Experiment 2 manipulated participants' attention to losses or gains, causing systematic differences in CPT's loss-aversion parameter. This result indicates that attention allocation can to some extent cause choice regularities that are captured by CPT. Our findings demonstrate an as-if model's capacity to reflect characteristics of information processing. We suggest that the observed CPT-attention links can be harnessed to inform the development of process models of risky choice.
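As a hedged illustration of the CPT constructs mentioned above (loss aversion, outcome sensitivity, probability sensitivity), the sketch below evaluates a simple single-gain/single-loss gamble, for which the cumulative decision weights reduce to w(p) for each outcome; the parameter values are the conventional Tversky-Kahneman estimates, not those fitted in the experiments.

```python
import numpy as np

def cpt_value(x, alpha=0.88, lam=2.25):
    """Value function: outcome sensitivity alpha, loss aversion lam."""
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, x**alpha, -lam * (-x)**alpha)

def cpt_weight(p, gamma=0.61):
    """One-parameter probability weighting function (Tversky & Kahneman form)."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

# Subjective value of a mixed gamble: win 100 with p=.5, lose 100 with p=.5
outcomes = np.array([100.0, -100.0])
probs = np.array([0.5, 0.5])
V = np.sum(cpt_weight(probs) * cpt_value(outcomes))
print(f"CPT value of the 50/50 +-100 gamble: {V:.1f}")   # negative => rejected
```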
NASA Astrophysics Data System (ADS)
Dümenil Gates, Lydia; Ließ, Stefan
2001-10-01
For two reasons it is important to study the sensitivity of the global climate to changes in the vegetation cover over land. First, in the real world, changes in the vegetation cover may have regional and global implications. Second, in numerical simulations, the sensitivity of the simulated climate may depend on the specific parameterization schemes employed in the model and on the model's large-scale systematic errors. The Max Planck Institute's global general circulation model ECHAM4 has been used to study the sensitivity of the local and global climate during a full annual cycle to deforestation and afforestation in the Mediterranean region. The deforestation represents an extreme desertification scenario for this region. The changes in the afforestation experiment are based on the pattern of the vegetation cover 2000 years before present, when the climate in the Mediterranean was more humid. The comparison of the deforestation integration to the control shows a slight cooling at the surface and reduced precipitation during the summer, as a result of reduced plant evapotranspiration and reduced evaporation from the assumed eroded soils. There is no significant signal during the winter season due to the stronger influence of the mid-latitude baroclinic disturbances. In general, the results of the afforestation experiment are opposite to those of the deforestation case. A significant response was found in the vicinity of grid points where the land surface characteristics were modified. The response in the Sahara in the afforestation experiment is in agreement with the results from other general circulation model studies.
NASA Astrophysics Data System (ADS)
Park, Subok; Badano, Aldo; Gallas, Brandon D.; Myers, Kyle J.
2007-03-01
Previously, a non-prewhitening matched filter (NPWMF) incorporating a model for the contrast sensitivity of the human visual system was introduced by Badano et al. for modeling human performance in detection tasks with different viewing angles and white-noise backgrounds. However, NPWMF observers do not perform well in detection tasks involving complex backgrounds, since they do not account for background randomness. A channelized-Hotelling observer (CHO) using difference-of-Gaussians (DOG) channels has been shown to track human performance well in detection tasks using lumpy backgrounds. In this work, a CHO with DOG channels, incorporating the model of human contrast sensitivity, was developed in a similar way. We call this new observer a contrast-sensitive CHO (CS-CHO). The Barten model was the basis of our human contrast sensitivity model. The Barten model was multiplied by a scalar, which was varied to control the thresholding effect of the contrast sensitivity on luminance-valued images and hence the performance-prediction ability of the CS-CHO. The performance of the CS-CHO was compared to the average human performance from the psychophysical study by Park et al., where the task was to detect a known Gaussian signal in non-Gaussian distributed lumpy backgrounds. Six different signal-intensity values were used in this study. We chose the free parameter of our model to match the mean human performance in the detection experiment at the strongest signal intensity. Then we compared the model to human performance at the five other signal-intensity values in order to see if the performance of the CS-CHO matched human performance. Our results indicate that the CS-CHO with the chosen scalar for the contrast sensitivity predicts human performance closely as a function of signal intensity.
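A compact sketch of a channelized Hotelling observer with difference-of-Gaussians channels, the core machinery behind the CS-CHO; the white-noise stand-in backgrounds, channel widths, and signal parameters are assumptions, and the contrast-sensitivity weighting of the CS-CHO is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64                                          # image side length (pixels)
y, x = np.mgrid[:N, :N] - N // 2
r = np.hypot(x, y)

def dog_channel(sigma, ratio=1.67):
    """Radially symmetric difference-of-Gaussians channel profile."""
    g1 = np.exp(-0.5 * (r / sigma) ** 2)
    g2 = np.exp(-0.5 * (r / (ratio * sigma)) ** 2)
    ch = g1 / g1.sum() - g2 / g2.sum()
    return ch.ravel()

U = np.stack([dog_channel(s) for s in (2, 4, 8, 16)], axis=1)   # pixels x channels
signal = 0.3 * np.exp(-0.5 * (r / 3.0) ** 2).ravel()            # known Gaussian signal

def sample_images(n, with_signal):
    g = rng.normal(0, 1, (n, N * N))            # white-noise stand-in backgrounds
    return g + signal if with_signal else g

n = 1000
v0 = sample_images(n, False) @ U                # channel outputs, signal absent
v1 = sample_images(n, True) @ U                 # channel outputs, signal present

dv = v1.mean(0) - v0.mean(0)                    # mean channel-output difference
S = 0.5 * (np.cov(v0.T) + np.cov(v1.T))         # pooled channel covariance
w = np.linalg.solve(S, dv)                      # Hotelling template in channel space
t0, t1 = v0 @ w, v1 @ w
snr = (t1.mean() - t0.mean()) / np.sqrt(0.5 * (t1.var() + t0.var()))
print(f"channelized Hotelling observer detectability ~ {snr:.2f}")
```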
NASA Technical Reports Server (NTRS)
Lim, J. T.; Wilkerson, G. G.; Raper, C. D. Jr; Gold, H. J.
1990-01-01
A differential equation model of vegetative growth of the soya bean plant (Glycine max (L.) Merrill cv. 'Ransom') was developed to account for plant growth in a phytotron system under variation of root temperature and nitrogen concentration in nutrient solution. The model was tested by comparing model outputs with data from four different experiments. Model predictions agreed fairly well with measured plant performance over a wide range of root temperatures and over a range of nitrogen concentrations in nutrient solution between 0.5 and 10.0 mmol NO3- in the phytotron environment. Sensitivity analyses revealed that the model was most sensitive to changes in parameters relating to carbohydrate concentration in the plant and nitrogen uptake rate.
HCIT Contrast Performance Sensitivity Studies: Simulation Versus Experiment
NASA Technical Reports Server (NTRS)
Sidick, Erkin; Shaklan, Stuart; Krist, John; Cady, Eric J.; Kern, Brian; Balasubramanian, Kunjithapatham
2013-01-01
Using NASA's High Contrast Imaging Testbed (HCIT) at the Jet Propulsion Laboratory, we have experimentally investigated the sensitivity of dark hole contrast in a Lyot coronagraph for the following factors: 1) Lateral and longitudinal translation of an occulting mask; 2) An opaque spot on the occulting mask; 3) Sizes of the controlled dark hole area. Also, we compared the measured results with simulations obtained using both MACOS (Modeling and Analysis for Controlled Optical Systems) and PROPER optical analysis programs with full three-dimensional near-field diffraction analysis to model HCIT's optical train and coronagraph.
Mixing-model Sensitivity to Initial Conditions in Hydrodynamic Predictions
NASA Astrophysics Data System (ADS)
Bigelow, Josiah; Silva, Humberto; Truman, C. Randall; Vorobieff, Peter
2017-11-01
Amagat and Dalton mixing-models were studied to compare their thermodynamic prediction of shock states. Numerical simulations with the Sandia National Laboratories shock hydrodynamic code CTH modeled University of New Mexico (UNM) shock tube laboratory experiments shocking a 1:1 molar mixture of helium (He) and sulfur hexafluoride (SF6). Five input parameters were varied for sensitivity analysis: driver section pressure, driver section density, test section pressure, test section density, and mixture ratio (mole fraction). We show via incremental Latin hypercube sampling (LHS) analysis that significant differences exist between Amagat and Dalton mixing-model predictions. The differences observed in predicted shock speeds, temperatures, and pressures grow more pronounced with higher shock speeds. Supported by NNSA Grant DE-0002913.
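A brief sketch of how the five inputs named above might be sampled with Latin hypercube sampling; the parameter bounds and the placeholder "shock model" are hypothetical, standing in for CTH runs with either mixing model.

```python
import numpy as np
from scipy.stats import qmc

# Five varied inputs: driver pressure/density, test-section pressure/density,
# and He:SF6 mole fraction. Bounds below are placeholders, not the UNM values.
names = ["p_driver", "rho_driver", "p_test", "rho_test", "x_He"]
lower = [1.0e6, 1.0, 1.0e4, 0.05, 0.45]
upper = [3.0e6, 3.0, 5.0e4, 0.20, 0.55]

sampler = qmc.LatinHypercube(d=len(names), seed=0)
X = qmc.scale(sampler.random(n=64), lower, upper)   # 64 stratified samples

def shock_model(row):
    """Placeholder for a hydrodynamic run; returns an illustrative 'shock speed'."""
    p_d, rho_d, p_t, rho_t, x_he = row
    return np.sqrt((p_d - p_t) / (rho_d + rho_t))    # not a real equation of state

speeds = np.apply_along_axis(shock_model, 1, X)
print("design shape:", X.shape, " mean placeholder speed:", speeds.mean().round(1))
```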
Ely, D. Matthew
2006-01-01
Recharge is a vital component of the ground-water budget, and methods for estimating it range from extremely complex to relatively simple. The most commonly used techniques, however, are limited by the scale of application. One method that can be used to estimate ground-water recharge relies on process-based models that compute distributed water budgets on a watershed scale. These models should be evaluated to determine which model parameters are the dominant controls on simulated ground-water recharge. Seven existing watershed models from different humid regions of the United States were chosen to analyze the sensitivity of simulated recharge to model parameters. Parameter sensitivities were determined using a nonlinear regression computer program to generate a suite of diagnostic statistics. The statistics identify the model parameters that have the greatest effect on simulated ground-water recharge and allow the hydrologic system responses to those parameters to be compared and contrasted. Simulated recharge in the Lost River and Big Creek watersheds in Washington State was sensitive to small changes in air temperature. The Hamden watershed model in west-central Minnesota was developed to investigate the relations that wetlands and other landscape features have with runoff processes. Excess soil moisture in the Hamden watershed simulation was preferentially routed to wetlands, instead of to the ground-water system, resulting in little sensitivity of recharge to any parameters. Simulated recharge in the North Fork Pheasant Branch watershed, Wisconsin, demonstrated the greatest sensitivity to parameters related to evapotranspiration. Three watersheds were simulated as part of the Model Parameter Estimation Experiment (MOPEX). Parameter sensitivities for the MOPEX watersheds, Amite River, Louisiana and Mississippi, English River, Iowa, and South Branch Potomac River, West Virginia, were similar and most sensitive to small changes in air temperature and a user-defined flow-routing parameter. Although the primary objective of this study was to identify, by geographic region, the importance of the parameter value to the simulation of ground-water recharge, the secondary objectives proved valuable for future modeling efforts. A rigorous sensitivity analysis can (1) make the calibration process more efficient, (2) guide additional data collection, (3) identify model limitations, and (4) explain simulated results.
NASA Technical Reports Server (NTRS)
Baker, W. E.; Paegle, J.
1983-01-01
An examination is undertaken of the sensitivity of short-term Southern Hemisphere circulation prediction to tropical wind data and tropical latent heat release. The data assimilation experiments employ the Goddard Laboratory for Atmospheric Sciences' fourth-order general circulation model. Two of the experiments are identical except that one uses tropical wind data while the other does not. A third experiment starts from the same initial conditions as the forecast with tropical winds but suppresses tropical latent heat release.
OPTIMISM Experiment and Development of Space-qualified Seismometers in France
NASA Technical Reports Server (NTRS)
Lognonne, P.; Karczewski, J. F.
1993-01-01
The OPTIMISM experiment will put two magnetometers and two seismometers on the Martian surface in 1995, within the framework of the Mars '94 mission. The seismometers are housed in the two small surface stations. The seismometer sensitivity will be better than 10^-9 g at 1 Hz, two orders of magnitude higher than the Viking seismometer sensitivity. A priori waveform modeling for seismic signals on Mars shows that it will be sufficient to detect quakes with a seismic moment greater than 10^15 N m everywhere on Mars. Such events, according to the hypothesis of a thermoelastic cooling of the Martian lithosphere, are expected to occur at a rate close to one per week and may therefore be observed within the 1-year lifetime of the experiment. Other aspects of the experiment are discussed.
Compassionate School Model: Creating Trauma Sensitive Schools
ERIC Educational Resources Information Center
Wilson, Mary A.
2013-01-01
Children who are victims of adverse childhood experiences may display behaviors in school that hinder their ability to develop socially and academically. The purpose of this research study was to determine the potential effectiveness of the Compassionate School training model. This study was a program evaluation that examined staff training…
Acquisition of Automatic Imitation Is Sensitive to Sensorimotor Contingency
ERIC Educational Resources Information Center
Cook, Richard; Press, Clare; Dickinson, Anthony; Heyes, Cecilia
2010-01-01
The associative sequence learning model proposes that the development of the mirror system depends on the same mechanisms of associative learning that mediate Pavlovian and instrumental conditioning. To test this model, two experiments used the reduction of automatic imitation through incompatible sensorimotor training to assess whether mirror…
EVALUATION AND SENSITIVITY ANALYSES RESULTS OF THE MESOPUFF II MODEL WITH CAPTEX MEASUREMENTS
The MESOPUFF II regional Lagrangian puff model has been evaluated and tested against measurements from the Cross-Appalachian Tracer Experiment (CAPTEX) database in an effort to assess its ability to simulate the transport and dispersion of a nonreactive, nondepositing tracer plu...
System parameter identification from projection of inverse analysis
NASA Astrophysics Data System (ADS)
Liu, K.; Law, S. S.; Zhu, X. Q.
2017-05-01
The output of a system due to a change of its parameters is often approximated with the sensitivity matrix from the first-order Taylor series. The system output can be measured in practice, but the perturbation in the system parameters is usually not available. Inverse sensitivity analysis can be adopted to estimate the unknown system parameter perturbation from the difference between the observed output data and the corresponding analytical output data calculated from the original system model. The inverse sensitivity analysis is revisited in this paper with improvements based on Principal Component Analysis of the analytical data calculated from the known system model. The identification equation is projected into a subspace of principal components of the system output, and the sensitivity of the inverse analysis is improved with an iterative model updating procedure. The proposed method is numerically validated with a planar truss structure and with dynamic experiments on a seven-storey planar steel frame. Results show that it is robust to measurement noise, and that the location and extent of stiffness perturbation can be identified with better accuracy compared with the conventional response sensitivity-based method.
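A minimal sketch of the projected inverse sensitivity idea on a toy linear problem; the sensitivity matrix, noise level, and single-step solution (the paper iterates this with model updating) are simplifying assumptions, not the truss or steel-frame models used in the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear setting: the system output responds to a parameter perturbation
# dtheta through a sensitivity matrix S (first-order Taylor approximation).
n_out, n_par = 200, 4
S = rng.normal(0.0, 1.0, (n_out, n_par))
dtheta_true = np.array([0.0, -0.08, 0.05, 0.0])          # sparse "damage" pattern
y_model = np.zeros(n_out)                                 # analytical (unperturbed) output
y_meas = y_model + S @ dtheta_true + rng.normal(0, 0.02, n_out)

# Principal components of the analytical sensitivity data
U, _, _ = np.linalg.svd(S, full_matrices=False)
k = n_par                                                 # retain the leading components
P = U[:, :k]                                              # projection onto the PC subspace

# Project the identification equation S dtheta = y_meas - y_model into the
# subspace and solve the reduced least-squares problem (single step shown).
residual = y_meas - y_model
dtheta_est = np.linalg.lstsq(P.T @ S, P.T @ residual, rcond=None)[0]
print("true perturbation:     ", dtheta_true)
print("estimated perturbation:", dtheta_est.round(3))
```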
Surface filling-in and contour interpolation contribute independently to Kanizsa figure formation.
Chen, Siyi; Glasauer, Stefan; Müller, Hermann J; Conci, Markus
2018-04-30
To explore mechanisms of object integration, the present experiments examined how completion of illusory contours and surfaces modulates the sensitivity of localizing a target probe. Observers had to judge whether a briefly presented dot probe was located inside or outside the region demarcated by inducer elements that grouped to form variants of an illusory, Kanizsa-type figure. From the resulting psychometric functions, we determined observers' discrimination thresholds as a sensitivity measure. Experiment 1 showed that sensitivity was systematically modulated by the amount of surface and contour completion afforded by a given configuration. Experiments 2 and 3 presented stimulus variants that induced an (occluded) object without clearly defined bounding contours, which gave rise to a relative sensitivity increase for surface variations on their own. Experiments 4 and 5 were performed to rule out that these performance modulations were simply attributable to variable distances between critical local inducers or to costs in processing an interrupted contour. Collectively, the findings provide evidence for a dissociation between surface and contour processing, supporting a model of object integration in which completion is instantiated by feedforward processing that independently renders surface filling-in and contour interpolation and a feedback loop that integrates these outputs into a complete whole.
Stress in adolescence and drugs of abuse in rodent models: Role of dopamine, CRF, and HPA axis
Burke, Andrew R.; Miczek, Klaus A.
2014-01-01
Rationale: Research on adolescence and drug abuse increased substantially in the past decade. However, drug-addiction related behaviors following stressful experiences during adolescence are less studied. We focus on rodent models of adolescent stress cross-sensitization to drugs of abuse. Objectives: Review the ontogeny of behavior, dopamine, corticotropin-releasing factor (CRF), and the hypothalamic pituitary adrenal (HPA) axis in adolescent rodents. We evaluate evidence that stressful experiences during adolescence engender hypersensitivity to drugs of abuse and offer potential neural mechanisms. Results and Conclusions: Much evidence suggests that final maturation of behavior, dopamine systems, and HPA axis occurs during adolescence. Stress during adolescence increases amphetamine- and ethanol-stimulated locomotion, preference, and self-administration under many conditions. The influence of adolescent stress on subsequent cocaine- and nicotine-stimulated locomotion and preference is less clear. The type of adolescent stress, temporal interval between stress and testing, species, sex, and the drug tested are key methodological determinants for successful cross-sensitization procedures. The sensitization of the mesolimbic dopamine system is proposed to underlie stress cross-sensitization to drugs of abuse in both adolescents and adults through modulation by CRF. Reduced levels of mesocortical dopamine appear to be a unique consequence of social stress during adolescence. Adolescent stress may reduce the final maturation of cortical dopamine through D2 dopamine receptor regulation of dopamine synthesis or glucocorticoid-facilitated pruning of cortical dopamine fibers. Certain rodent models of adolescent adversity are useful for determining neural mechanisms underlying the cross-sensitization to drugs of abuse. PMID:24370534
Cui, Jian; Zhao, Xue-Hong; Wang, Yan; Xiao, Ya-Bing; Jiang, Xue-Hui; Dai, Li
2014-01-01
Flow injection-hydride generation-atomic fluorescence spectrometry is widely used in the health, environmental, geological and metallurgical fields because of its high sensitivity, wide measurement range and fast analytical speed. However, optimizing the method is difficult because many parameters affect its sensitivity and broadening, and optimal conditions have generally been sought through repeated experiments. The present paper proposes a mathematical model relating the operating parameters to the sensitivity and broadening coefficients, derived from the law of conservation of mass and based on the characteristics of the hydride chemical reaction and the composition of the system. The model proved accurate when theoretical simulations were compared with experimental results for an arsanilic acid standard solution. Finally, the paper presents a relation map between the parameters and the sensitivity/broadening coefficients and concludes that the gas-liquid separator (GLS) volume, carrier solution flow rate and sample loop volume are the factors that most affect the sensitivity and broadening coefficients. Optimizing these three factors with this relation map improved the relative sensitivity by a factor of 2.9 and reduced the relative broadening to 0.76 of its original value. The model can provide theoretical guidance for the optimization of the experimental conditions.
Unpacking buyer-seller differences in valuation from experience: A cognitive modeling approach.
Pachur, Thorsten; Scheibehenne, Benjamin
2017-12-01
People often indicate a higher price for an object when they own it (i.e., as sellers) than when they do not (i.e., as buyers), a phenomenon known as the endowment effect. We develop a cognitive modeling approach to formalize, disentangle, and compare alternative psychological accounts (e.g., loss aversion, loss attention, strategic misrepresentation) of such buyer-seller differences in pricing decisions of monetary lotteries. To also be able to test possible buyer-seller differences in memory and learning, we study pricing decisions from experience, obtained with the sampling paradigm, where people learn about a lottery's payoff distribution from sequential sampling. We first formalize different accounts as models within three computational frameworks (reinforcement learning, instance-based learning theory, and cumulative prospect theory), and then fit the models to empirical selling and buying prices. In Study 1 (a reanalysis of published data with hypothetical decisions), models assuming buyer-seller differences in response bias (implementing a strategic-misrepresentation account) performed best; models assuming buyer-seller differences in choice sensitivity or memory (implementing a loss-attention account) generally fared worst. In a new experiment involving incentivized decisions (Study 2), models assuming buyer-seller differences in both outcome sensitivity (as proposed by a loss-aversion account) and response bias performed best. In both Studies 1 and 2, the models implemented in cumulative prospect theory performed best. Model recovery studies validated our cognitive modeling approach, showing that the models can be distinguished rather well. In summary, our analysis supports a loss-aversion account of the endowment effect, but also reveals a substantial contribution of simple response bias.
Singularity problems of the power law for modeling creep compliance
NASA Technical Reports Server (NTRS)
Dillard, D. A.; Hiel, C.
1985-01-01
An explanation is offered for the extreme sensitivity that has been observed in the power law parameters of the T300/934 graphite epoxy material system during experiments to evaluate its viscoelastic response. It is shown that the singularity associated with the power law can explain this sensitivity as well as the observed variability in the calculated parameters. Techniques for minimizing errors are suggested.
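For readers unfamiliar with the singularity being invoked, a short illustration assuming the standard power-law creep compliance form (not quoted from the report):

```latex
% Assumed power-law creep compliance and its rate near t -> 0+
\begin{align}
  D(t) &= D_0 + D_1\, t^{n}, \qquad 0 < n < 1, \\
  \frac{\mathrm{d}D}{\mathrm{d}t} &= n\, D_1\, t^{\,n-1} \;\to\; \infty
  \quad \text{as } t \to 0^{+}.
\end{align}
```

Because the compliance rate diverges as t approaches zero, small errors in early-time data or in the assumed time origin translate into large changes in the fitted values of D0, D1, and n, consistent with the sensitivity and variability described above.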
A Comparison of Metamodeling Techniques via Numerical Experiments
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2016-01-01
This paper presents a comparative analysis of a few metamodeling techniques using numerical experiments for the single input-single output case. These experiments enable comparison of the models' predictions with the phenomenon they aim to describe as more data are made available. These techniques include (i) prediction intervals associated with a least squares parameter estimate, (ii) Bayesian credible intervals, (iii) Gaussian process models, and (iv) interval predictor models. Aspects being compared are computational complexity, accuracy (i.e., the degree to which the resulting prediction conforms to the actual Data Generating Mechanism), reliability (i.e., the probability that new observations will fall inside the predicted interval), sensitivity to outliers, extrapolation properties, ease of use, and asymptotic behavior. The numerical experiments describe typical application scenarios that challenge the underlying assumptions supporting most metamodeling techniques.
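As one concrete example of the metamodeling techniques compared above (a Gaussian process model with a 95% prediction interval), the sketch below uses a hypothetical single input-single output data generating mechanism; the kernel choice, noise level, and coverage check are assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)

# Hypothetical data-generating mechanism with observation noise
f = lambda x: np.sin(3 * x) + 0.3 * x
x_train = rng.uniform(0, 3, 15)[:, None]
y_train = f(x_train).ravel() + rng.normal(0, 0.1, 15)

kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(x_train, y_train)

x_new = np.linspace(0, 4, 9)[:, None]            # includes mild extrapolation
mean, std = gp.predict(x_new, return_std=True)
lower, upper = mean - 1.96 * std, mean + 1.96 * std
truth = f(x_new).ravel()
covered = (truth >= lower) & (truth <= upper)    # reliability: empirical coverage
print("empirical coverage of the 95% intervals:", covered.mean())
```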
Improved Upper Ocean/Sea Ice Modeling in the GISS GCM for Investigating Climate Change
NASA Technical Reports Server (NTRS)
1997-01-01
This project built on our previous results in which we highlighted the importance of sea ice in overall climate sensitivity by determining that for both warming and cooling climates, when sea ice was not allowed to change, climate sensitivity was reduced by 35-40%. We also modified the Goddard Institute for Space Studies (GISS) 8 deg x 10 deg atmospheric General Circulation Model (GCM) to include an upper-ocean/sea-ice model involving the Semtner three-layer ice/snow thermodynamic model, the Price et al. (1986) ocean mixed layer model and a general upper ocean vertical advection/diffusion scheme for maintaining and fluxing properties across the pycnocline. This effort, in addition to improving the sea ice representation in the AGCM, revealed a number of sensitive components of the sea ice/ocean system. For example, the ability to flux heat through the ice/snow properly is critical in order to resolve the surface temperature properly, since small errors in this lead to unrestrained climate drift. The present project, summarized in this report, had as its objectives: (1) introducing a series of sea ice and ocean improvements aimed at overcoming remaining weaknesses in the GCM sea ice/ocean representation, and (2) performing a series of sensitivity experiments designed to evaluate the climate sensitivity of the revised model to both Antarctic and Arctic sea ice, determine the sensitivity of the climate response to initial ice distribution, and investigate the transient response to doubling CO2.
NASA Technical Reports Server (NTRS)
Watkins, A. Neal; Buck, Gregory M.; Leighty, Bradley D.; Lipford, William E.; Oglesby, Donald M.
2008-01-01
Pressure Sensitive Paint (PSP) and Temperature Sensitive Paint (TSP) were used to visualize and quantify the surface interactions of reaction control system (RCS) jets on the aft body of capsule reentry vehicle shapes. The first model tested was an Apollo-like configuration and was used to focus primarily on the effects of the forward facing roll and yaw jets. The second model tested was an early Orion Crew Module configuration blowing only out of its forward-most yaw jet, which was expected to have the most intense aerodynamic heating augmentation on the model surface. This paper will present the results from the experiments, which show that with proper system design, both PSP and TSP are effective tools for studying these types of interaction in hypersonic testing environments.
Crossmodal semantic priming by naturalistic sounds and spoken words enhances visual sensitivity.
Chen, Yi-Chuan; Spence, Charles
2011-10-01
We propose a multisensory framework based on Glaser and Glaser's (1989) general reading-naming interference model to account for the semantic priming effect by naturalistic sounds and spoken words on visual picture sensitivity. Four experiments were designed to investigate two key issues: First, can auditory stimuli enhance visual sensitivity when the sound leads the picture as well as when they are presented simultaneously? And, second, do naturalistic sounds (e.g., a dog's "woofing") and spoken words (e.g., /dɔg/) elicit similar semantic priming effects? Here, we estimated participants' sensitivity and response criterion using signal detection theory in a picture detection task. The results demonstrate that naturalistic sounds enhanced visual sensitivity when the onset of the sounds led that of the picture by 346 ms (but not when the sounds led the pictures by 173 ms, nor when they were presented simultaneously, Experiments 1-3A). At the same SOA, however, spoken words did not induce semantic priming effects on visual detection sensitivity (Experiments 3B and 4A). When using a dual picture detection/identification task, both kinds of auditory stimulus induced a similar semantic priming effect (Experiment 4B). Therefore, we suggest that there needs to be sufficient processing time for the auditory stimulus to access its associated meaning to modulate visual perception. In addition, the interactions between pictures and the two types of sounds depend not only on their processing route to access semantic representations, but also on the response to be made to fulfill the requirements of the task.
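For reference, a small helper computing the signal detection indices mentioned above (sensitivity d′ and response criterion c) from detection counts; the counts shown are hypothetical, not data from these experiments.

```python
from scipy.stats import norm

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' and response criterion c for a yes/no detection task,
    with a standard 1/(2N) correction applied to perfect rates."""
    n_sig = hits + misses
    n_noise = false_alarms + correct_rejections
    h = min(max(hits / n_sig, 0.5 / n_sig), 1 - 0.5 / n_sig)
    f = min(max(false_alarms / n_noise, 0.5 / n_noise), 1 - 0.5 / n_noise)
    d_prime = norm.ppf(h) - norm.ppf(f)
    criterion = -0.5 * (norm.ppf(h) + norm.ppf(f))
    return d_prime, criterion

# Hypothetical counts for one condition (e.g., sound leading the picture by 346 ms)
print(sdt_indices(hits=78, misses=22, false_alarms=30, correct_rejections=70))
```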
NASA Astrophysics Data System (ADS)
Kawase, Hiroaki; Hara, Masayuki; Yoshikane, Takao; Ishizaki, Noriko N.; Uno, Fumichika; Hatsushika, Hiroaki; Kimura, Fujio
2013-11-01
The Sea of Japan side of Central Japan is one of the heaviest snowfall areas in the world. We investigate near-future snow cover changes on the Sea of Japan side using a regional climate model. We perform pseudo global warming (PGW) downscaling based on five global climate models (GCMs). The changes in snow cover strongly depend on elevation; the decrease in the ratio of snow cover is larger at lower elevations. The decrease ratios of the maximum accumulated snowfall over short periods, such as 1 day, are smaller than those over longer periods, such as 1 week. We conduct PGW experiments focusing on specific periods when a 2 K warming at 850 hPa is projected by the individual GCMs (PGW-2K85). The PGW-2K85 experiments show different changes in precipitation, resulting in different snow cover changes in spite of similar warming conditions. Simplified sensitivity experiments that assume homogeneous warming of the atmosphere (2 K) and the sea surface show that the altitude dependency of the snow cover changes is similar to that in the PGW-2K85 experiments, while the uncertainty of changes in sea surface temperature influences the snow cover changes at both lower and higher elevations. The decrease in snowfall is, however, underestimated in the simplified sensitivity experiments as compared with the PGW experiments. Most GCMs project an increase in dry static stability and some GCMs project an anticyclonic anomaly over Central Japan, indicating the inhibition of precipitation, including snowfall, in the PGW experiments.
NASA Technical Reports Server (NTRS)
Walker, Ryan Thomas; Holland, David; Parizek, Byron R.; Alley, Richard B.; Nowicki, Sophie M. J.; Jenkins, Adrian
2013-01-01
Thermodynamic flowline and plume models for the ice shelf-ocean system simplify the ice and ocean dynamics sufficiently to allow extensive exploration of parameters affecting ice-sheet stability while including key physical processes. Comparison between geophysically and laboratory-based treatments of ice-ocean interface thermodynamics shows reasonable agreement between calculated melt rates, except where steep basal slopes and relatively high ocean temperatures are present. Results are especially sensitive to the poorly known drag coefficient, highlighting the need for additional field experiments to constrain its value. These experiments also suggest that if the ice-ocean interface near the grounding line is steeper than some threshold, further steepening of the slope may drive higher entrainment that limits buoyancy, slowing the plume and reducing melting; if confirmed, this will provide a stabilizing feedback on ice sheets under some circumstances.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mizuno, T
2004-09-03
Cosmic-ray background fluxes were modeled based on existing measurements and theories and are presented here. The model, originally developed for the Gamma-ray Large Area Space Telescope (GLAST) Balloon Experiment, covers the entire solid angle (4π sr), the sensitive energy range of the instrument (~10 MeV to 100 GeV) and the abundant components (proton, alpha, e-, e+, μ-, μ+ and gamma). It is expressed in analytic functions in which modulations due to the solar activity and the Earth geomagnetism are parameterized. Although the model is intended to be used primarily for the GLAST Balloon Experiment, model functions in low-Earth orbit are also presented and can be used for other high energy astrophysical missions. The model has been validated via comparison with the data of the GLAST Balloon Experiment.
Chen, Xiaojuan; Chen, Zhihua; Wang, Xun; Huo, Chan; Hu, Zhiquan; Xiao, Bo; Hu, Mian
2016-07-01
The present study focused on the application of anaerobic digestion model no. 1 (ADM1) to simulate biogas production from Hydrilla verticillata. Model simulation was carried out by implementing ADM1 in the AQUASIM 2.0 software. Sensitivity analysis based on the absolute-relative sensitivity function was used to select the most sensitive parameters for estimation. Among all the kinetic parameters, the disintegration constant (kdis), the hydrolysis constant of protein (khyd_pr), the Monod maximum specific substrate uptake rates (km_aa, km_ac, km_h2) and the half-saturation constants (Ks_aa, Ks_ac) affect biogas production significantly; these were optimized by fitting the model equations to data obtained from batch experiments. After parameter estimation, the ADM1 model was able to predict the experimental daily biogas production and biogas composition well. The simulated evolution of organic acids, bacteria concentrations and inhibition effects also helped to provide insight into the reaction mechanisms.
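A minimal sketch of an absolute-relative sensitivity calculation of the kind described above, applied to a toy first-order biogas curve rather than the full ADM1 implementation in AQUASIM; the functional form and parameter values are assumptions.

```python
import numpy as np

# Cumulative biogas from a first-order toy model (placeholder for ADM1):
# B(t) = B0 * (1 - exp(-k_dis * t)). Parameters and units are illustrative.
t = np.linspace(0, 40, 200)                      # days

def biogas(params):
    B0, k_dis = params["B0"], params["k_dis"]
    return B0 * (1.0 - np.exp(-k_dis * t))

base = {"B0": 350.0, "k_dis": 0.25}              # assumed mL/gVS and 1/d

def abs_rel_sensitivity(params, name, h=1e-3):
    """Absolute-relative sensitivity p * dy/dp, via central finite differences."""
    up, dn = dict(params), dict(params)
    up[name] *= (1 + h)
    dn[name] *= (1 - h)
    dydp = (biogas(up) - biogas(dn)) / (2 * h * params[name])
    return params[name] * dydp

for name in base:
    s = abs_rel_sensitivity(base, name)
    print(f"{name:6s}  mean |p*dy/dp| = {np.abs(s).mean():8.2f}")
```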
The Aqua-planet Experiment (APE): Response to Changed Meridional SST Profile
NASA Technical Reports Server (NTRS)
Williamson, David L.; Blackburn, Michael; Nakajima, Kensuke; Ohfuchi, Wataru; Takahashi, Yoshiyuki O.; Hayashi, Yoshi-Yuki; Nakamura, Hisashi; Ishiwatari, Masaki; Mcgregor, John L.; Borth, Hartmut;
2013-01-01
This paper explores the sensitivity of Atmospheric General Circulation Model (AGCM) simulations to changes in the meridional distribution of sea surface temperature (SST). The simulations are for an aqua-planet, a water-covered Earth with no land, orography or sea-ice and with specified zonally symmetric SST. Simulations from 14 AGCMs developed for Numerical Weather Prediction and climate applications are compared. Four experiments are performed to study the sensitivity to the meridional SST profile. These profiles range from one in which the SST gradient continues to the equator to one which is flat approaching the equator, all with the same maximum SST at the equator. The zonal mean circulation of all models shows strong sensitivity to the latitudinal distribution of SST. The Hadley circulation weakens and shifts poleward as the SST profile flattens in the tropics. One question of interest is the formation of a double versus a single ITCZ. There is a large variation between models in the strength of the ITCZ and in where in the SST experiment sequence they transition from a single to a double ITCZ. The SST profiles are defined such that as the equatorial SST gradient flattens, the maximum gradient increases and moves poleward. This leads to a weakening of the mid-latitude jet accompanied by a poleward shift of the jet core. Also considered are tropical wave activity and tropical precipitation frequency distributions. The details of each vary greatly between models, both with a given SST and in the response to the change in SST. One additional experiment is included to examine the sensitivity to an off-equatorial SST maximum. The upward branch of the Hadley circulation follows the SST maximum off the equator. The models that form a single precipitation maximum when the maximum SST is on the equator shift the precipitation maximum off the equator and keep it centered over the SST maximum. Those that form a double ITCZ with a minimum over the equatorial SST maximum shift the double structure off the equator, keeping the minimum over the maximum SST. In both situations only modest changes appear in the shifted profile of zonal average precipitation. When the upward branch of the Hadley circulation moves into the hemisphere with the SST maximum, the zonal average zonal, meridional and vertical winds all indicate that the Hadley cell in the other hemisphere dominates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Huiying; Hou, Zhangshuan; Huang, Maoyi
The Community Land Model (CLM) represents physical, chemical, and biological processes of the terrestrial ecosystems that interact with climate across a range of spatial and temporal scales. As CLM includes numerous sub-models and associated parameters, the high-dimensional parameter space presents a formidable challenge for quantifying uncertainty and improving the Earth system predictions needed to assess environmental changes and risks. This study aims to evaluate the potential of transferring hydrologic model parameters in CLM through sensitivity analyses and classification across watersheds from the Model Parameter Estimation Experiment (MOPEX) in the United States. The sensitivity of CLM-simulated water and energy fluxes to hydrological parameters across 431 MOPEX basins is first examined using an efficient stochastic sampling-based sensitivity analysis approach. Linear, interaction, and high-order nonlinear impacts are all identified via statistical tests and stepwise backward-removal parameter screening. The basins are then classified according to their parameter sensitivity patterns (internal attributes), and separately according to their hydrologic indices/attributes (external hydrologic factors), using a principal component analysis (PCA) and expectation-maximization (EM) based clustering approach. Similarities and differences among the parameter sensitivity-based classification system (S-Class), the hydrologic indices-based classification (H-Class), and the Köppen climate classification system (K-Class) are discussed. Within each S-Class with similar parameter sensitivity characteristics, similar inversion modeling setups can be used for parameter calibration, and the parameters and their contribution or significance to water and energy cycling may also be more transferable. This classification study provides guidance on identifiable parameters and on parameterization and inverse model design for CLM, but the methodology is applicable to other models. Inverting parameters at representative sites belonging to the same class can significantly reduce parameter calibration efforts.
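A short sketch of the PCA-plus-EM clustering step described above, using scikit-learn's GaussianMixture (an EM implementation) on a synthetic stand-in for the basin-by-parameter sensitivity table; the array shapes, component counts, and random data are assumptions, not the study's actual indices.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)

# Stand-in for a basin-by-metric sensitivity table (431 basins x 12 metrics);
# in the study these would be CLM parameter-sensitivity indices per MOPEX basin.
X = rng.normal(0, 1, (431, 12))

Z = StandardScaler().fit_transform(X)            # put the metrics on a common scale
pcs = PCA(n_components=4).fit_transform(Z)       # retain the leading components

gmm = GaussianMixture(n_components=5, covariance_type="full",
                      random_state=0).fit(pcs)   # EM-based clustering
labels = gmm.predict(pcs)                        # S-class membership per basin
print("basins per sensitivity class:", np.bincount(labels))
```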
Sensitivity Studies in Gyro-fluid Simulation
NASA Astrophysics Data System (ADS)
Ross, D. W.; Dorland, W.; Beer, M. A.; Hammett, G. W.
1998-11-01
Transport models [1] derived from gyrofluid simulation [2] have been successful in predicting general confinement scalings. Specific fluxes and turbulent spectra, however, can depend sensitively on the plasma configuration and profiles, particularly in experiments with transients. Here, we step back from initial studies on Alcator C-Mod [3] and DIII-D [4] to investigate the sensitivity of simulations to variations in density, temperature (and their gradients) of each plasma species. We discuss the role of electric field shear, and the construction of local transport models for experimental comparison. In accompanying papers [5] we investigate comparisons with the experiments. *Supported by USDOE Grants DE-FG03-95ER54296, and DE-AC02-76CHO3073. [1] M. Kotschenreuther et al., Phys. Plasmas 2, 2381 (1995). [2] M. A. Beer et al, Phys. Plasmas 2, 2687 (1995). [3] D. W. Ross et al., Transport Task Force, Atlanta, 1998. [4] R. V. Bravenec et al., in Proc. 25th EPS Conf. on Contr. Fusion and Plasma Phys., Prague (1998). [5] R. V. Bravenec et al. and W. L. Rowan et al., these proceedings.
Snow measurement Using P-Band Signals of Opportunity Reflectometry
NASA Astrophysics Data System (ADS)
Shah, R.; Yueh, S. H.; Xu, X.; Elder, K.
2017-12-01
Snow water storage on land is a critical parameter of the water cycle. In this study, we develop methods for estimating reflectance from bistatic scattering of digital communication Signals of Opportunity (SoOp) across the available microwave spectrum from VHF to Ka band, and we show results from proof-of-concept experiments at the Fraser Experimental Forest, Colorado, designed to relate SoOp phase and reflectivity to a snow-covered soil surface. The forward modeling of this scenario is presented, and multiple sensitivity studies were conducted. Available SoOp receiver data, along with a network of in situ sensor measurements collected since January 2016, are used to validate the theoretical modeling results. In the winter seasons of 2016 and 2017, we conducted a field experiment using VHF/UHF-band illuminating sources to detect SWE and surface reflectivity. The amplitude of the reflectivity showed sensitivity to the wetness of the snow pack and to the ground reflectivity, while the phase showed sensitivity to SWE. This concept can be helpful for measuring snow water storage on land globally.
NASA Technical Reports Server (NTRS)
Brown, James L.
2014-01-01
Examined is the sensitivity of separation extent, wall pressure, and heating to variation of the primary input flow parameters, such as Mach and Reynolds numbers and shock strength, for 2D and axisymmetric hypersonic shock-wave/turbulent boundary layer interactions obtained by Navier-Stokes methods using the SST turbulence model. Baseline parametric sensitivity response is provided in part by comparison with vetted experiments, and in part through updated correlations based on free interaction theory concepts. A recent database compilation of hypersonic 2D shock-wave/turbulent boundary layer experiments, extensively used in a prior related uncertainty analysis, provides the foundation for this updated correlation approach, as well as for more conventional validation. The primary CFD method for this work is DPLR, one of NASA's real-gas aerothermodynamic production RANS codes. Comparisons are also made with CFL3D, one of NASA's mature perfect-gas RANS codes. Deficiencies in the predicted separation response of RANS/SST solutions to parametric variations of test conditions are summarized, along with recommendations for future turbulence modeling approaches.
Studies of Missing Energy Decays at Belle II
NASA Astrophysics Data System (ADS)
Guan, Yinghui
The Belle II experiment at the SuperKEKB collider is a major upgrade of the KEK “B factory” facility in Tsukuba, Japan. The machine is designed for an instantaneous luminosity of 8 × 10^35 cm^-2 s^-1, and the experiment is expected to accumulate a data sample of about 50 ab^-1. With this amount of data, decays sensitive to physics beyond the Standard Model can be studied with unprecedented precision. One promising set of modes are physics processes with missing energy such as B+ → τ+ν, B → D(*)τν, and B → K(*)νν̄ decays. The B → K(*)νν̄ decay provides one of the cleanest experimental probes of the flavour-changing neutral current process b → sνν̄, which is sensitive to physics beyond the Standard Model. However, the missing energy of the neutrinos in the final state makes the measurement challenging and requires full reconstruction of the spectator B meson in e+e- → Υ(4S) → BB̄ events. This report discusses the expected sensitivities of Belle II for these rare decays.
ERIC Educational Resources Information Center
Mitchell, James K.; Carter, William E.
2000-01-01
Describes using a computer statistical software package called Minitab to model the sensitivity of several microbes to the disinfectant NaOCl (Clorox) using the Kirby-Bauer technique. Each group of students collects data from one microbe, conducts regression analyses, then chooses the best-fit model based on the highest r-values obtained.…
Weaker soil carbon-climate feedbacks resulting from microbial and abiotic interactions
NASA Astrophysics Data System (ADS)
Tang, Jinyun; Riley, William J.
2015-01-01
The large uncertainty in soil carbon-climate feedback predictions has been attributed to the incorrect parameterization of decomposition temperature sensitivity (Q10) and microbial carbon use efficiency. Empirical experiments have found that these parameters vary spatiotemporally, but such variability is not included in current ecosystem models. Here we use a thermodynamically based decomposition model to test the hypothesis that this observed variability arises from interactions between temperature, microbial biogeochemistry, and mineral surface sorptive reactions. We show that because mineral surfaces interact with substrates, enzymes and microbes, both Q10 and microbial carbon use efficiency are hysteretic (so that neither can be represented by a single static function) and the conventional labile and recalcitrant substrate characterization with static temperature sensitivity is flawed. In a 4-K temperature perturbation experiment, our fully dynamic model predicted more variable but weaker soil carbon-climate feedbacks than did the static Q10 and static carbon use efficiency model when forced with yearly, daily and hourly variable temperatures. These results imply that current Earth system models probably overestimate the response of soil carbon stocks to global warming. Future ecosystem models should therefore consider the dynamic interactions between sorptive mineral surfaces, substrates and microbial processes.
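For concreteness, the conventional static-Q10 formulation that the abstract argues against can be written in a few lines; the reference respiration rate, reference temperature, and Q10 value below are illustrative, not values from the study.

```python
import numpy as np

def respiration(T, R_ref=1.0, T_ref=15.0, Q10=2.0):
    """Static-Q10 temperature response: R(T) = R_ref * Q10**((T - T_ref) / 10)."""
    return R_ref * Q10 ** ((T - T_ref) / 10.0)

# Apparent Q10 recovered from two temperatures; with a static model it is
# constant, whereas the abstract argues the effective value is hysteretic.
T1, T2 = 10.0, 20.0
apparent_q10 = (respiration(T2) / respiration(T1)) ** (10.0 / (T2 - T1))
print("apparent Q10 under the static model:", apparent_q10)
```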
Model-data integration for developing the Cropland Carbon Monitoring System (CCMS)
NASA Astrophysics Data System (ADS)
Jones, C. D.; Bandaru, V.; Pnvr, K.; Jin, H.; Reddy, A.; Sahajpal, R.; Sedano, F.; Skakun, S.; Wagle, P.; Gowda, P. H.; Hurtt, G. C.; Izaurralde, R. C.
2017-12-01
The Cropland Carbon Monitoring System (CCMS) has been initiated to improve regional estimates of carbon fluxes from croplands in the conterminous United States through integration of terrestrial ecosystem modeling, use of remote-sensing products and publicly available datasets, and development of improved landscape and management databases. In order to develop these improved carbon flux estimates, experimental datasets are essential for evaluating the skill of estimates, characterizing the uncertainty of these estimates, characterizing parameter sensitivities, and calibrating specific modeling components. Experiments were sought that included flux-tower measurement of CO2 fluxes under production of major agronomic crops. Currently, data have been collected from 17 experiments comprising 117 site-years from 12 unique locations. Calibration of terrestrial ecosystem model parameters using available crop productivity and net ecosystem exchange (NEE) measurements resulted in improvements in RMSE of NEE predictions of between 3.78% and 7.67%, while improvements in RMSE for yield ranged from -1.85% to 14.79%. Model sensitivities were dominated by parameters related to leaf area index (LAI) and spring growth, demonstrating considerable capacity for model improvement through development and integration of remote-sensing products. Subsequent analyses will assess the impact of such integrated approaches on the skill of cropland carbon flux estimates.
Sensitivities of Greenland ice sheet volume inferred from an ice sheet adjoint model
NASA Astrophysics Data System (ADS)
Heimbach, P.; Bugnion, V.
2009-04-01
We present a new and original approach to understanding the sensitivity of the Greenland ice sheet to key model parameters and environmental conditions. At the heart of this approach is the use of an adjoint ice sheet model. Since its introduction by MacAyeal (1992), the adjoint method has become a widespread means of fitting ice stream models to the increasing number and diversity of satellite observations, and of estimating uncertain model parameters such as basal conditions. However, no attempt has been made to extend this method to comprehensive ice sheet models. As a first step toward the use of adjoints of comprehensive three-dimensional ice sheet models, we have generated an adjoint of the ice sheet model SICOPOLIS of Greve (1997). The adjoint was generated by means of the automatic differentiation (AD) tool TAF. The AD tool generates exact source code representing the tangent linear and adjoint model of the nonlinear parent model provided. Model sensitivities are given by the partial derivatives of a scalar-valued model diagnostic with respect to the controls, and can be efficiently calculated via the adjoint. By way of example, we determine the sensitivity of the total Greenland ice volume to various control variables, such as spatial fields of basal flow parameters, surface and basal forcings, and initial conditions. Reliability of the adjoint was tested through finite-difference perturbation calculations for various control variables and perturbation regions. Besides confirming qualitative aspects of ice sheet sensitivities, such as expected regional variations, we detect regions where model sensitivities are seemingly unexpected or counter-intuitive, albeit "real" in the sense of actual model behavior. An example is the inferred regions where sensitivities of ice sheet volume to the basal sliding coefficient are positive, i.e. where a local increase in the basal sliding parameter increases the ice sheet volume. Similarly, positive ice temperature sensitivities are found in certain parts of the ice sheet (in most regions the sensitivity is negative, i.e. an increase in temperature decreases ice sheet volume), the detection of which would have been highly unlikely if only conventional perturbation experiments had been used. An effort to generate an efficient adjoint with the newly developed open-source AD tool OpenAD is also under way. Available adjoint code generation tools now open up a variety of novel model applications, notably with regard to sensitivity and uncertainty analyses and ice sheet state estimation or data assimilation.
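A toy version of the finite-difference verification described above: an analytic (adjoint-equivalent) gradient of a scalar diagnostic is checked against central differences; the quadratic diagnostic and random matrices stand in for the ice-volume diagnostic and its control fields, and are not part of SICOPOLIS or TAF.

```python
import numpy as np

# Toy scalar diagnostic J(theta) = 0.5 * ||A theta - b||^2 with analytic
# (adjoint-equivalent) gradient A^T (A theta - b).
rng = np.random.default_rng(5)
A = rng.normal(0, 1, (50, 8))
b = rng.normal(0, 1, 50)

def J(theta):
    r = A @ theta - b
    return 0.5 * r @ r

def grad_adjoint(theta):
    return A.T @ (A @ theta - b)

theta = rng.normal(0, 1, 8)
g = grad_adjoint(theta)

# Finite-difference verification, as in the perturbation tests described above
eps = 1e-6
g_fd = np.array([(J(theta + eps * e) - J(theta - eps * e)) / (2 * eps)
                 for e in np.eye(8)])
print("max |adjoint - finite difference|:", np.abs(g - g_fd).max())
```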
The effect of model uncertainty on cooperation in sensorimotor interactions
Grau-Moya, J.; Hez, E.; Pezzulo, G.; Braun, D. A.
2013-01-01
Decision-makers have been shown to rely on probabilistic models for perception and action. However, these models can be incorrect or partially wrong in which case the decision-maker has to cope with model uncertainty. Model uncertainty has recently also been shown to be an important determinant of sensorimotor behaviour in humans that can lead to risk-sensitive deviations from Bayes optimal behaviour towards worst-case or best-case outcomes. Here, we investigate the effect of model uncertainty on cooperation in sensorimotor interactions similar to the stag-hunt game, where players develop models about the other player and decide between a pay-off-dominant cooperative solution and a risk-dominant, non-cooperative solution. In simulations, we show that players who allow for optimistic deviations from their opponent model are much more likely to converge to cooperative outcomes. We also implemented this agent model in a virtual reality environment, and let human subjects play against a virtual player. In this game, subjects' pay-offs were experienced as forces opposing their movements. During the experiment, we manipulated the risk sensitivity of the computer player and observed human responses. We found not only that humans adaptively changed their level of cooperation depending on the risk sensitivity of the computer player but also that their initial play exhibited characteristic risk-sensitive biases. Our results suggest that model uncertainty is an important determinant of cooperation in two-player sensorimotor interactions. PMID:23945266
NASA Astrophysics Data System (ADS)
De Kauwe, M. G.; Medlyn, B.; Walker, A.; Zaehle, S.; Pendall, E.; Norby, R. J.
2017-12-01
Multifactor experiments are often advocated as important for advancing models, yet to date, such models have only been tested against single-factor experiments. We applied 10 models to the multifactor Prairie Heating and CO2 Enrichment (PHACE) experiment in Wyoming, USA. Our goals were to investigate how multifactor experiments can be used to constrain models and to identify a road map for model improvement. We found models performed poorly in ambient conditions: comparison with data highlighted model failures particularly with respect to carbon allocation, phenology, and the impact of water stress on phenology. Performance against the observations from single-factors treatments was also relatively poor. In addition, similar responses were predicted for different reasons across models: there were large differences among models in sensitivity to water stress and, among the nitrogen cycle models, nitrogen availability during the experiment. Models were also unable to capture observed treatment effects on phenology: they overestimated the effect of warming on leaf onset and did not allow CO2-induced water savings to extend the growing season length. Observed interactive (CO2 × warming) treatment effects were subtle and contingent on water stress, phenology, and species composition. As the models did not correctly represent these processes under ambient and single-factor conditions, little extra information was gained by comparing model predictions against interactive responses. We outline a series of key areas in which this and future experiments could be used to improve model predictions of grassland responses to global change.
Victimization experiences and the stabilization of victim sensitivity
Gollwitzer, Mario; Süssenbach, Philipp; Hannuschke, Marianne
2015-01-01
People reliably differ in the extent to which they are sensitive to being victimized by others. Importantly, “victim sensitivity” predicts how people behave in social dilemma situations: Victim-sensitive individuals are less likely to trust others and more likely to behave uncooperatively—especially in socially uncertain situations. This pattern can be explained with the sensitivity to mean intentions (SeMI) model, according to which victim sensitivity entails a specific and asymmetric sensitivity to contextual cues that are associated with untrustworthiness. Recent research is largely in line with the model’s prediction, but some issues have remained conceptually unresolved so far. For instance, it is unclear why and how victim sensitivity becomes a stable trait and which developmental and cognitive processes are involved in such stabilization. In the present article, we will discuss the psychological processes that contribute to a stabilization of victim sensitivity within persons, both across the life span (“ontogenetic stabilization”) and across social situations (“actual-genetic stabilization”). Our theoretical framework starts from the assumption that experiences of being exploited threaten a basic need, the need to trust. This need is so fundamental that experiences that threaten it receive a considerable amount of attention and trigger strong affective reactions. Associative learning processes can then explain (a) how certain contextual cues (e.g., facial expressions) become conditioned stimuli that elicit equally strong responses, (b) why these contextual untrustworthiness cues receive much more attention than, for instance, trustworthiness cues, and (c) how these cues shape spontaneous social expectations (regarding other people’s intentions). Finally, avoidance learning can explain why these cognitive processes gradually stabilize and become a trait: the trait which is referred to as victim sensitivity. PMID:25926806
Piao, Shilong; Sitch, Stephen; Ciais, Philippe; Friedlingstein, Pierre; Peylin, Philippe; Wang, Xuhui; Ahlström, Anders; Anav, Alessandro; Canadell, Josep G; Cong, Nan; Huntingford, Chris; Jung, Martin; Levis, Sam; Levy, Peter E; Li, Junsheng; Lin, Xin; Lomas, Mark R; Lu, Meng; Luo, Yiqi; Ma, Yuecun; Myneni, Ranga B; Poulter, Ben; Sun, Zhenzhong; Wang, Tao; Viovy, Nicolas; Zaehle, Soenke; Zeng, Ning
2013-07-01
The purpose of this study was to evaluate 10 process-based terrestrial biosphere models that were used for the IPCC Fifth Assessment Report. The simulated gross primary productivity (GPP) is compared with flux-tower-based estimates by Jung et al. [Journal of Geophysical Research 116 (2011) G00J07] (JU11). The apparent sensitivity of net primary productivity (NPP) to climate variability and atmospheric CO2 trends is diagnosed from each model's output using statistical functions. The temperature sensitivity is compared against results from ecosystem field warming experiments. The CO2 sensitivity of NPP is compared to the results from four Free-Air CO2 Enrichment (FACE) experiments. The simulated global net biome productivity (NBP) is compared with the residual land sink (RLS) of the global carbon budget from Friedlingstein et al. [Nature Geoscience 3 (2010) 811] (FR10). We found that models produce a higher GPP (133 ± 15 Pg C yr-1) than JU11 (118 ± 6 Pg C yr-1). In response to rising atmospheric CO2 concentration, modeled NPP increases on average by 16% (5-20%) per 100 ppm, a slightly larger apparent sensitivity of NPP to CO2 than that measured at the FACE experiment locations (13% per 100 ppm). Global NBP differs markedly among individual models, although the mean value of 2.0 ± 0.8 Pg C yr-1 is remarkably close to the mean value of RLS (2.1 ± 1.2 Pg C yr-1). The interannual variability in modeled NBP is significantly correlated with that of RLS for the period 1980-2009. Both model-to-model and interannual variation in modeled GPP are larger than in modeled NBP, owing to the strong coupling between ecosystem respiration and GPP in the models, which induces a positive correlation between the two. The average linear regression slope of global NBP vs. temperature across the 10 models is -3.0 ± 1.5 Pg C yr-1 °C-1, within the uncertainty of the value derived from RLS (-3.9 ± 1.1 Pg C yr-1 °C-1). However, 9 of 10 models overestimate the regression slope of NBP vs. precipitation, compared with the slope of the observed RLS vs. precipitation. With most models lacking processes that control GPP and NBP in addition to CO2 and climate, the agreement between modeled and observation-based GPP and NBP can be fortuitous. Carbon-nitrogen interactions (only separable in one model) significantly influence the simulated response of the carbon cycle to temperature and atmospheric CO2 concentration, suggesting that nutrient limitations should be included in the next generation of terrestrial biosphere models. © 2013 Blackwell Publishing Ltd.
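A minimal sketch, on synthetic placeholder data, of how an apparent NPP sensitivity can be diagnosed from annual model output with simple statistical functions (here an ordinary multiple linear regression of NPP on CO2 and temperature); none of the numbers below come from the ten models evaluated above.

import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1980, 2010)

# Synthetic annual drivers and a synthetic NPP series (placeholders, not model output)
co2 = 339.0 + 1.7 * (years - 1980)                                  # ppm
temp = 0.02 * (years - 1980) + rng.normal(0.0, 0.15, years.size)    # temperature anomaly, deg C
npp = 55.0 + 0.08 * (co2 - co2.mean()) + 1.5 * temp + rng.normal(0.0, 0.8, years.size)  # Pg C yr-1

# "Apparent sensitivities" from the regression NPP ~ 1 + CO2 + T
X = np.column_stack([np.ones_like(co2), co2, temp])
beta, *_ = np.linalg.lstsq(X, npp, rcond=None)
pct_per_100ppm = beta[1] * 100.0 / npp.mean() * 100.0   # percent change in NPP per 100 ppm CO2
print(f"apparent CO2 sensitivity of NPP: {pct_per_100ppm:.1f}% per 100 ppm")
print(f"apparent temperature sensitivity: {beta[2]:.2f} Pg C yr-1 per deg C")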
Improved Analysis of Earth System Models and Observations using Simple Climate Models
NASA Astrophysics Data System (ADS)
Nadiga, B. T.; Urban, N. M.
2016-12-01
Earth system models (ESMs) are the most comprehensive tools we have to study climate change and develop climate projections. However, the computational infrastructure required and the cost incurred in running such ESMs preclude their direct use in conjunction with a wide variety of tools that can further our understanding of climate. These tools range from dynamical-systems methods that give insight into the underlying flow structure and topology, to applied mathematical and statistical techniques central to quantifying stability, sensitivity, uncertainty, and predictability, to machine learning tools that are now being rapidly developed or improved. Our approach to facilitating the use of such models is to analyze output of ESM experiments (cf. CMIP) using a range of simpler models that consider integral balances of important quantities such as mass and/or energy in a Bayesian framework. We highlight the use of this approach in the context of the uptake of heat by the world oceans in the ongoing global warming. Indeed, since in excess of 90% of the anomalous radiative forcing due to greenhouse gas emissions is sequestered in the world oceans, the nature of ocean heat uptake crucially determines the surface warming that is realized (cf. climate sensitivity). Nevertheless, ESMs themselves are never run long enough to directly assess climate sensitivity. So, we consider a range of models based on integral balances--balances that have to be realized in all first-principles-based models of the climate system, including the most detailed state-of-the-art climate simulations. The models range from simple models of energy balance to those that consider dynamically important ocean processes such as the conveyor-belt circulation (Meridional Overturning Circulation, MOC), North Atlantic Deep Water (NADW) formation, the Antarctic Circumpolar Current (ACC) and eddy mixing. Results from Bayesian analysis of such models using both ESM experiments and actual observations are presented. One such result points to the importance of direct sequestration of heat below 700 m, a process that is not allowed for in the simple models that have been traditionally used to deduce climate sensitivity.
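A minimal sketch of the general approach, assuming a zero-dimensional energy-balance model with one feedback parameter and one effective heat capacity, synthetic "observations", and a brute-force grid posterior with flat priors; the model structure and every parameter value are illustrative assumptions, not those used in the study.

import numpy as np

# Zero-dimensional energy balance: C dT/dt = F(t) - lambda * T
def simulate(lam, C, forcing, dt=1.0):
    T = np.zeros_like(forcing)
    for k in range(1, forcing.size):
        T[k] = T[k - 1] + dt * (forcing[k - 1] - lam * T[k - 1]) / C
    return T

years = np.arange(1850, 2021)
forcing = 3.7 * (years - 1850) / (2020 - 1850)    # crude linear ramp to 3.7 W m-2

# Synthetic "observations" from known parameters plus noise (stand-in for data or ESM output)
rng = np.random.default_rng(2)
sigma = 0.08
T_obs = simulate(1.2, 8.0, forcing) + rng.normal(0.0, sigma, years.size)

# Grid-based Bayesian inference with flat priors on (lambda, C)
lams = np.linspace(0.5, 2.5, 81)                  # feedback parameter, W m-2 K-1
Cs = np.linspace(4.0, 16.0, 61)                   # effective heat capacity, W yr m-2 K-1
logpost = np.empty((lams.size, Cs.size))
for i, lam in enumerate(lams):
    for j, C in enumerate(Cs):
        resid = T_obs - simulate(lam, C, forcing)
        logpost[i, j] = -0.5 * np.sum((resid / sigma) ** 2)   # Gaussian log-likelihood

post = np.exp(logpost - logpost.max())
post /= post.sum()
lam_marg = post.sum(axis=1)                       # marginal posterior over lambda
print("posterior-mean feedback parameter:", round(float(np.sum(lams * lam_marg)), 2), "W m-2 K-1")
print("posterior-mean implied equilibrium sensitivity:",
      round(float(np.sum((3.7 / lams) * lam_marg)), 2), "K per CO2 doubling")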
NASA Technical Reports Server (NTRS)
2002-01-01
Cosmic-ray background fluxes were modeled based on existing measurements and theories and are presented here. The model, originally developed for the Gamma-ray Large Area Space Telescope (GLAST) Balloon Experiment, covers the entire solid angle (4π sr), the sensitive energy range of the instrument (approximately 10 MeV to 100 GeV) and abundant components (proton, alpha, e-, e+, μ-, μ+ and gamma). It is expressed in analytic functions in which modulations due to the solar activity and the Earth geomagnetism are parameterized. Although the model is intended to be used primarily for the GLAST Balloon Experiment, model functions in low-Earth orbit are also presented and can be used for other high energy astrophysical missions. The model has been validated via comparison with the data of the GLAST Balloon Experiment.
A sensitivity analysis of "Forests on the Edge: Housing Development on America's Private Forests."
Eric M. White; Ralph J. Alig; Lisa G. Mahal; David M. Theobald
2009-01-01
The original Forests on the Edge report (FOTE 1) indicated that 44.2 million acres of private forest land was projected to experience substantial increases in residential development in the coming decades. In this study, we examined the sensitivity of the FOTE 1 results to four factors: (1) use of updated private land and forest cover spatial data and a revised model...
Sensitivity Analysis Applied to Atomic Data Used for X-ray Spectrum Synthesis
NASA Technical Reports Server (NTRS)
Kallman, Tim
2006-01-01
A great deal of work has been devoted to the accumulation of accurate quantities describing atomic processes for use in analysis of astrophysical spectra. But in many situations of interest the interpretation of a quantity which is observed, such as a line flux, depends on the results of a modeling or spectrum synthesis code. The results of such a code depend in turn on many atomic rates or cross sections, and the sensitivity of the observable quantity to the various rates and cross sections may be non-linear and if so cannot easily be derived analytically. In such cases the most practical approach to understanding the sensitivity of observables to atomic cross sections is to perform numerical experiments, by calculating models with various rates perturbed by random (but known) factors. In addition, it is useful to compare the results of such experiments with some sample observations, in order to focus attention on the rates which are of the greatest relevance to real observations. In this paper I will present some attempts to carry out this program, focussing on two sample datasets taken with the Chandra HETG. I will discuss the sensitivity of synthetic spectra to atomic data affecting ionization balance, temperature, and line opacity or emissivity, and discuss the implications for the ultimate goal of inferring astrophysical parameters.
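A minimal sketch of the numerical-experiment strategy described above (perturb the rates by random but recorded factors and see how an observable responds), with a toy two-ion ionization-balance "spectral model" standing in for the spectrum synthesis code; the rate values and the lognormal spread are arbitrary assumptions.

import numpy as np

# Toy observable: a line flux that depends on an ionization balance set by a few rates
def line_flux(rates):
    ionize, recomb, excite = rates
    frac_ion = ionize / (ionize + recomb)         # two-ion ionization balance
    return frac_ion * excite                      # emissivity proportional to excitation rate

base = np.array([2.0e-9, 5.0e-10, 1.0e-8])        # baseline rate coefficients (arbitrary units)
rng = np.random.default_rng(3)
n_trials = 2000

# Perturb every rate by a random (but recorded) factor, lognormal with ~30% spread
factors = rng.lognormal(mean=0.0, sigma=0.3, size=(n_trials, 3))
fluxes = np.array([line_flux(base * f) for f in factors])

# Rank the rates by how strongly the observable tracks each perturbation factor
for name, col in zip(["ionization", "recombination", "excitation"], factors.T):
    r = np.corrcoef(np.log(col), np.log(fluxes))[0, 1]
    print(f"{name:15s} log-log correlation with line flux: {r:+.2f}")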
Searching for light dark matter with the SLAC millicharge experiment.
Diamond, M; Schuster, P
2013-11-27
New sub-GeV gauge forces ("dark photons") that kinetically mix with the photon provide a promising scenario for MeV-GeV dark matter and are the subject of a program of searches at fixed-target and collider facilities around the world. In such models, dark photons produced in collisions may decay invisibly into dark-matter states, thereby evading current searches. We reexamine results of the SLAC mQ electron beam dump experiment designed to search for millicharged particles and find that it was strongly sensitive to any secondary beam of dark matter produced by electron-nucleus collisions in the target. The constraints are competitive for dark photon masses in the ~1-30 MeV range, covering part of the parameter space that can reconcile the apparent (g-2)_μ anomaly. Simple adjustments to the original SLAC search for millicharges may extend sensitivity to cover a sizable portion of the remaining (g-2)_μ anomaly-motivated region. The mQ sensitivity is therefore complementary to ongoing searches for visible decays of dark photons. Compared to existing direct-detection searches, mQ sensitivity to electron-dark-matter scattering cross sections is more than an order of magnitude better for a significant range of masses and couplings in simple models.
NASA Astrophysics Data System (ADS)
Shellito, C. J.; Sloan, L. C.
2004-12-01
A major turnover in benthic marine and terrestrial fauna marks the Initial Eocene Thermal Maximum (IETM) (~55 Ma), a period of ~150 ky in which there was a rapid rise in deep sea and high latitude sea surface temperatures by 5-8 °C. Curiously, no major responses to this warming in the terrestrial floral record have been detected to date. Here, we present results from experiments examining the response of the global distribution of vegetation to changes in climate at the IETM using the NCAR Land Surface Model (LSM1.2) integrated with a dynamic global vegetation model (DGVM). DGVMs allow vegetation to respond to and interact with climate, and thus, provide a unique new method for addressing questions regarding feedbacks between the ecosystem and climate in Earth's past. However, there are a number of drawbacks to using these models that can affect interpretation of results. More specifically, these drawbacks involve uncertainties in the application of modern plant functional types to paleo-flora simulations, inaccuracies in the model climatology used to drive the DGVM, and lack of available detail regarding paleo-geography and paleo-soil type for use in model boundary conditions. For a better understanding of these drawbacks, we present results from a series of tests in the NCAR LSM-DGVM which examine (1) the effect of removing C4 grasses from the available plant functional types in the model; (2) model sensitivity to a change in soil texture; and (3) model sensitivity to a change in the value of pCO2 used in the photosynthetic rate equations. We consider our DGVM results for the IETM in light of output from these sensitivity experiments.
Experiment of constraining symmetry energy at supra-saturation density with π-/π+ at HIRFL-CSR
NASA Astrophysics Data System (ADS)
Zhang, Ming; Xiao, Zhi-Gang; Zhu, Sheng-Jiang
2010-08-01
The feasibility of an experiment to constrain the symmetry energy Esym(ρ) at supra-saturation densities via the π-/π+ probe at the external target experiment of phase I (ETE(I)), with partial coverage at forward angles at HIRFL-CSR, is studied for the first time using the isospin- and momentum-dependent hadronic transport model IBUU04. Based on the transport simulation of Au+Au collisions at 400 MeV/u, it is found that the differential π-/π+ ratios are more sensitive to Esym(ρ) at forward angles in the laboratory reference frame, compared with the widely proposed total yield ratio. Even with insufficient coverage at lower transverse momentum, the sensitivity of the π-/π+ ratio to Esym(ρ) at high density is maintained, indicating that the ETE(I) under construction in Lanzhou provides the possibility of performing the experiment to probe the asymmetric nuclear equation of state.
NASA Astrophysics Data System (ADS)
Otto-Bliesner, Bette L.; Braconnot, Pascale; Harrison, Sandy P.; Lunt, Daniel J.; Abe-Ouchi, Ayako; Albani, Samuel; Bartlein, Patrick J.; Capron, Emilie; Carlson, Anders E.; Dutton, Andrea; Fischer, Hubertus; Goelzer, Heiko; Govin, Aline; Haywood, Alan; Joos, Fortunat; LeGrande, Allegra N.; Lipscomb, William H.; Lohmann, Gerrit; Mahowald, Natalie; Nehrbass-Ahles, Christoph; Pausata, Francesco S. R.; Peterschmitt, Jean-Yves; Phipps, Steven J.; Renssen, Hans; Zhang, Qiong
2017-11-01
Two interglacial epochs are included in the suite of Paleoclimate Modeling Intercomparison Project (PMIP4) simulations in the Coupled Model Intercomparison Project (CMIP6). The experimental protocols for simulations of the mid-Holocene (midHolocene, 6000 years before present) and the Last Interglacial (lig127k, 127 000 years before present) are described here. These equilibrium simulations are designed to examine the impact of changes in orbital forcing at times when atmospheric greenhouse gas levels were similar to those of the preindustrial period and the continental configurations were almost identical to modern ones. These simulations test our understanding of the interplay between radiative forcing and atmospheric circulation, and the connections among large-scale and regional climate changes giving rise to phenomena such as land-sea contrast and high-latitude amplification in temperature changes, and responses of the monsoons, as compared to today. They also provide an opportunity, through carefully designed additional sensitivity experiments, to quantify the strength of atmosphere, ocean, cryosphere, and land-surface feedbacks. Sensitivity experiments are proposed to investigate the role of freshwater forcing in triggering abrupt climate changes within interglacial epochs. These feedback experiments naturally lead to a focus on climate evolution during interglacial periods, which will be examined through transient experiments. Analyses of the sensitivity simulations will also focus on interactions between extratropical and tropical circulation, and the relationship between changes in mean climate state and climate variability on annual to multi-decadal timescales. The comparative abundance of paleoenvironmental data and of quantitative climate reconstructions for the Holocene and Last Interglacial make these two epochs ideal candidates for systematic evaluation of model performance, and such comparisons will shed new light on the importance of external feedbacks (e.g., vegetation, dust) and the ability of state-of-the-art models to simulate climate changes realistically.
Career Decision-Making and the Military Family: Toward a Comprehensive Model
1990-03-01
specify and test the model. Appendices A through E contain the papers on the five major sub-models. ... II. IN SEARCH OF A COMPREHENSIVE MODEL ... Employee ... employees (e.g., Mobley, et al., 1979). In other words, the civilian has considerable opportunity to incorporate experience content into career ... belongingness) to growth needs (esteem, self-actualization). The emphasis is upon display of sensitivity, responsiveness and accommodation by management ...
Shock Initiation Characteristics of an Aluminized DNAN/RDX Melt-Cast Explosive
NASA Astrophysics Data System (ADS)
Cao, Tong-Tang; Zhou, Lin; Zhang, Xiang-Rong; Zhang, Wei; Miao, Fei-Chao
2017-10-01
Shock sensitivity is one of the key parameters for newly developed, 2,4-dinitroanisole (DNAN)-based, melt-cast explosives. For this paper, a series of shock initiation experiments were conducted using a one-dimensional Lagrangian system with a manganin piezoresistive pressure gauge technique to evaluate the shock sensitivity of an aluminized DNAN/cyclotrimethylenetrinitramine (RDX) melt-cast explosive. This study fully investigated the effects of particle size distributions in both RDX and aluminum, as well as the RDX's crystal quality on the shock sensitivity of the aluminized DNAN/RDX melt-cast explosive. Ultimately, the shock sensitivity of the aluminized DNAN/RDX melt-cast explosives increases when the particle size decreases in both RDX and aluminum. Additionally, shock sensitivity increases when the RDX's crystal quality decreases. In order to simulate these effects, an Ignition and Growth (I&G) reactive flow model was calibrated. This calibrated I&G model was able to predict the shock initiation characteristics of the aluminized DNAN/RDX melt-cast explosive.
Xu, Yu; Zhao, Libo; Jiang, Zhuangde; Ding, Jianjun; Peng, Niancai; Zhao, Yulong
2016-01-01
For improving the tradeoff between the sensitivity and the resonant frequency of piezoresistive accelerometers, the dependency between the stress of the piezoresistor and the displacement of the structure is taken into consideration in this paper. In order to weaken the dependency, a novel structure with suspended piezoresistive beams (SPBs) is designed, and a theoretical model is established for calculating the location of SPBs, the stress of SPBs and the resonant frequency of the whole structure. Finite element method (FEM) simulations, comparative simulations and experiments are carried out to verify the good agreement with the theoretical model. It is demonstrated that increasing the sensitivity greatly without sacrificing the resonant frequency is possible in the piezoresistive accelerometer design. Therefore, the proposed structure with SPBs is potentially a novel option for improving the tradeoff between the sensitivity and the resonant frequency of piezoresistive accelerometers. PMID:26861343
Alam, Maksudul; Deng, Xinwei; Philipson, Casandra; Bassaganya-Riera, Josep; Bisset, Keith; Carbo, Adria; Eubank, Stephen; Hontecillas, Raquel; Hoops, Stefan; Mei, Yongguo; Abedi, Vida; Marathe, Madhav
2015-01-01
Agent-based models (ABM) are widely used to study immune systems, providing a procedural and interactive view of the underlying system. The interaction of components and the behavior of individual objects is described procedurally as a function of the internal states and the local interactions, which are often stochastic in nature. Such models typically have complex structures and consist of a large number of modeling parameters. Determining the key modeling parameters which govern the outcomes of the system is very challenging. Sensitivity analysis plays a vital role in quantifying the impact of modeling parameters in massively interacting systems, including large complex ABM. The high computational cost of executing simulations impedes running experiments with exhaustive parameter settings. Existing techniques of analyzing such a complex system typically focus on local sensitivity analysis, i.e. one parameter at a time, or a close “neighborhood” of particular parameter settings. However, such methods are not adequate to measure the uncertainty and sensitivity of parameters accurately because they overlook the global impacts of parameters on the system. In this article, we develop novel experimental design and analysis techniques to perform both global and local sensitivity analysis of large-scale ABMs. The proposed method can efficiently identify the most significant parameters and quantify their contributions to outcomes of the system. We demonstrate the proposed methodology for ENteric Immune SImulator (ENISI), a large-scale ABM environment, using a computational model of immune responses to Helicobacter pylori colonization of the gastric mucosa. PMID:26327290
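A minimal, self-contained sketch of why a one-parameter-at-a-time (local) analysis can miss what a global analysis captures, using a toy stochastic outcome with a built-in two-parameter interaction; this is not the ENISI simulator, and the plain Monte Carlo sampling is only a stand-in for the designs developed in the article.

import numpy as np

# Toy stochastic "simulation outcome": p3 only matters when p1 is large (an interaction)
def outcome(p1, p2, p3, rng):
    return p2 + 4.0 * p1 * p3 + rng.normal(0.0, 0.05)

rng = np.random.default_rng(4)
base = dict(p1=0.1, p2=0.5, p3=0.5)

# Local, one-at-a-time analysis around the baseline setting
for name in base:
    lo, hi = dict(base), dict(base)
    lo[name], hi[name] = base[name] - 0.05, base[name] + 0.05
    effect = outcome(**hi, rng=rng) - outcome(**lo, rng=rng)
    print(f"one-at-a-time effect of {name}: {effect:+.3f}")

# Crude global analysis: sample the whole unit cube and ask how much output
# variance each parameter explains on its own
n = 20000
P = rng.uniform(0.0, 1.0, size=(n, 3))
Y = np.array([outcome(*row, rng=rng) for row in P])
for k, name in enumerate(["p1", "p2", "p3"]):
    r2 = np.corrcoef(P[:, k], Y)[0, 1] ** 2
    print(f"global variance explained by {name} alone: {r2:.2f}")

At the chosen baseline the one-at-a-time effect of p3 looks negligible, yet globally p3 accounts for a substantial share of the output variance; this is the kind of discrepancy the global techniques described above are designed to expose.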
NASA Astrophysics Data System (ADS)
Beminiwattha, Rakitha; Moller Collaboration
2017-09-01
Parity Violating Electron Scattering (PVES) is an extremely successful precision frontier tool that has been used for testing the Standard Model (SM) and understanding nucleon structure. Several generations of highly successful PVES programs at SLAC, MIT-Bates, MAMI-Mainz, and Jefferson Lab have contributed to the understanding of nucleon structure and testing the SM. But missing phenomena like matter-antimatter asymmetry, neutrino flavor oscillations, and dark matter and energy suggest that the SM is only a 'low energy' effective theory. The MOLLER experiment at Jefferson Lab will measure the weak charge of the electron, Q_W^e = 1 - 4 sin^2(θ_W), with a precision of 2.4% by measuring the parity violating asymmetry in electron-electron (Møller) scattering, and will be sensitive to subtle but measurable deviations from precisely calculable predictions of the SM. The MOLLER experiment will provide the best contact-interaction search for leptons at either low or high energy, making it a probe of physics beyond the Standard Model with sensitivity to mass scales of new PV physics up to 7.5 TeV. An overview of the experiment and recent pre-R&D progress will be reported.
Improved Upper Ocean/Sea Ice Modeling in the GISS GCM for Investigating Climate Change
NASA Technical Reports Server (NTRS)
1998-01-01
This project built on our previous results in which we highlighted the importance of sea ice in overall climate sensitivity by determining that for both warming and cooling climates, when sea ice was not allowed to change, climate sensitivity was reduced by 35-40%. We also modified the GISS 8 deg x 10 deg atmospheric GCM to include an upper-ocean/sea-ice model involving the Semtner three-layer ice/snow thermodynamic model, the Price et al. (1986) ocean mixed layer model and a general upper ocean vertical advection/diffusion scheme for maintaining and fluxing properties across the pycnocline. This effort, in addition to improving the sea ice representation in the AGCM, revealed a number of sensitive components of the sea ice/ocean system. For example, the ability to flux heat through the ice/snow properly is critical in order to resolve the surface temperature properly, since small errors in this lead to unrestrained climate drift. The present project, summarized in this report, had as its objectives: (1) introducing a series of sea ice and ocean improvements aimed at overcoming remaining weaknesses in the GCM sea ice/ocean representation, and (2) performing a series of sensitivity experiments designed to evaluate the climate sensitivity of the revised model to both Antarctic and Arctic sea ice, determine the sensitivity of the climate response to initial ice distribution, and investigate the transient response to doubling CO2.
The pitch of vibrato tones: a model based on instantaneous frequency decomposition.
Mesz, Bruno A; Eguia, Manuel C
2009-07-01
We study vibrato as the more ubiquitous manifestation of a nonstationary tone that can evoke a single overall pitch. Some recent results using nonsymmetrical vibrato tones suggest that the perceived pitch could be governed by some stability-sensitive mechanism. For nonstationary sounds the adequate tools are time-frequency representations (TFRs). We show that a recently proposed TFR could be the simplest framework to explain this hypothetical stability-sensitive mechanism. We propose a one-parameter model within this framework that is able to predict previously reported results and we present new results obtained from psychophysical experiments performed in our laboratory.
Bulk nuclear properties from dynamical description of heavy-ion collisions
NASA Astrophysics Data System (ADS)
Hong, Jun
Mapping out the equation of state (EOS) of nuclear matter is a long-standing problem in nuclear physics. Both experimentalists and theoretical physicists spare no effort in improving our understanding of the EOS. In this thesis, we examine observables sensitive to the EOS within the pBUU transport model based on the Boltzmann equation. By comparing theoretical predictions with experimental data, we arrive at new constraints for the EOS. Further, we propose novel promising observables for analysis of future experimental data. One set of observables that we examine within the pBUU model are pion yields. First, we find that net pion yields in central heavy-ion collisions (HIC) are strongly sensitive to the momentum dependence of the isoscalar nuclear mean field. We reexamine the momentum dependence that is assumed in the Boltzmann equation model for the collisions and optimize that dependence to describe the FOPI measurements of pion yields from the Au+Au collisions at different beam energies. Alas, such an optimized dependence yields a somewhat weaker baryonic elliptic flow than seen in measurements. Subsequently, we use the same pBUU model to generate predictions for baryonic elliptic flow observable in HIC, while varying the incompressibility of nuclear matter. In parallel, we test the sensitivity of pion multiplicity to the density dependence of the EOS, and in particular to incompressibility, and optimize that dependence to describe both the elliptic flow and pion yields. Upon arriving at acceptable regions of density dependence of pressure and energy, we compare our constraints on the EOS with those recently obtained by the joint experiment-theory effort FOPI-IQMD. We should mention that, for the more advanced observables from HIC, there remain discrepancies of up to 30%, depending on energy, between the theory and experiment, indicating the limitations of the transport theory. Next, we explore the impact of the density dependence of the symmetry energy on observables, motivated by experiments aiming at constraining the symmetry energy. In contradiction to the IBUU and ImIQMD models in the literature, which claim sensitivity of net charged-pion yields to the density dependence of the symmetry energy, albeit in opposite directions, we find practically no such sensitivity in pBUU. However, we find a rather dramatic sensitivity of the differential high-energy charged-pion yield ratio to that density dependence, which can be qualitatively understood, and we propose that this differential ratio be used in future experiments to constrain the symmetry energy. Finally, we present the Gaussian phase-space representation method for studying strongly correlated systems. This approach allows one to follow the time evolution of quantum many-body systems with large Hilbert spaces through stochastic sampling, provided the interactions are two-body in nature. We demonstrate the advantage of the Gaussian phase-space representation method in coping with the notorious numerical sign problem for fermion systems. Lastly, we discuss the difficulty of stabilizing the system during its time evolution within the Gaussian phase-space method.
Liu, Lizhe; Pilles, Bert M; Gontcharov, Julia; Bucher, Dominik B; Zinth, Wolfgang
2016-01-21
UV-induced formation of the cyclobutane pyrimidine dimer (CPD) lesion is investigated by stationary and time-resolved photosensitization experiments. The photosensitizer 2'-methoxyacetophenone with high intersystem crossing efficiency and large absorption cross-section in the UV-A range was used. A diffusion controlled reaction model is presented. Time-resolved experiments confirmed the validity of the reaction model and provided information on the dynamics of the triplet sensitization process. With a series of concentration dependent stationary illumination experiments, we determined the quantum efficiency for CPD formation from the triplet state of the thymine dinucleotide TpT to be 4 ± 0.2%.
Corley, Michael J; Caruso, Michael J; Takahashi, Lorey K
2012-01-18
Posttraumatic stress disorder (PTSD) is characterized by stress-induced symptoms including exaggerated fear memories, hypervigilance and hyperarousal. However, we are unaware of an animal model that investigates these hallmarks of PTSD, especially in relation to fear extinction and habituation. Therefore, to develop a valid animal model of PTSD, we exposed rats to different intensities of footshock stress to determine their effects on either auditory predator odor fear extinction or habituation of fear sensitization. In Experiment 1, rats were exposed to acute footshock stress (no shock control, 0.4 mA, or 0.8 mA) immediately prior to auditory fear conditioning training involving the pairing of auditory clicks with a cloth containing cat odor. When presented with the conditioned auditory clicks in the next 5 days of extinction testing conducted in a runway apparatus with a hide box, rats in the two shock groups engaged in higher levels of freezing and head-out vigilance-like behavior from the hide box than the no shock control group. This increase in fear behavior during extinction testing was likely due to auditory activation of the conditioned fear state because Experiment 2 demonstrated that conditioned fear behavior was not broadly increased in the absence of the conditioned auditory stimulus. Experiment 3 was then conducted to determine whether acute exposure to stress induces a habituation-resistant sensitized fear state. We found that rats exposed to 0.8 mA footshock stress and subsequently tested for 5 days in the runway hide box apparatus with presentations of nonassociative auditory clicks exhibited high initial levels of freezing, followed by head-out behavior and culminating in the occurrence of locomotor hyperactivity. In addition, Experiment 4 indicated that without delivery of nonassociative auditory clicks, 0.8 mA footshock-stressed rats did not exhibit robust increases in sensitized freezing and locomotor hyperactivity, although head-out vigilance-like behavior continued to be observed. In summary, our animal model provides novel information on the effects of different intensities of footshock stress, auditory-predator odor fear conditioning, and their interactions on facilitating either extinction-resistant or habituation-resistant fear-related behavior. These results lay the foundation for exciting new investigations of the hallmarks of PTSD that include the stress-induced formation and persistence of traumatic memories and sensitized fear. Copyright © 2011 Elsevier Inc. All rights reserved.
Statistical Surrogate Modeling of Atmospheric Dispersion Events Using Bayesian Adaptive Splines
NASA Astrophysics Data System (ADS)
Francom, D.; Sansó, B.; Bulaevskaya, V.; Lucas, D. D.
2016-12-01
Uncertainty in the inputs of complex computer models, including atmospheric dispersion and transport codes, is often assessed via statistical surrogate models. Surrogate models are computationally efficient statistical approximations of expensive computer models that enable uncertainty analysis. We introduce Bayesian adaptive spline methods for producing surrogate models that capture the major spatiotemporal patterns of the parent model, while satisfying all the necessities of flexibility, accuracy and computational feasibility. We present novel methodological and computational approaches motivated by a controlled atmospheric tracer release experiment conducted at the Diablo Canyon nuclear power plant in California. Traditional methods for building statistical surrogate models often do not scale well to experiments with large amounts of data. Our approach is well suited to experiments involving large numbers of model inputs, large numbers of simulations, and functional output for each simulation. Our approach allows us to perform global sensitivity analysis with ease. We also present an approach to calibration of simulators using field data.
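A minimal sketch of the surrogate idea (fit a cheap statistical approximation to a limited number of expensive simulator runs, then use the surrogate for dense sensitivity sweeps), with an ordinary quadratic response surface standing in for Bayesian adaptive splines and a stand-in "simulator" function; all names and input ranges are illustrative assumptions.

import numpy as np

# Stand-in for an expensive simulator: output depends nonlinearly on two inputs
def simulator(release_rate, wind_speed):
    return release_rate * np.exp(-0.5 * wind_speed) + 0.1 * wind_speed ** 2

rng = np.random.default_rng(5)
n_runs = 60                                       # pretend each run is expensive
X = rng.uniform([0.5, 1.0], [2.0, 8.0], size=(n_runs, 2))
y = simulator(X[:, 0], X[:, 1])

# Cheap surrogate: quadratic polynomial response surface fitted by least squares
def features(X):
    r, w = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(r), r, w, r * w, r ** 2, w ** 2])

coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)

def surrogate(Xnew):
    return features(Xnew) @ coef

# Use the surrogate (not the simulator) for a dense Monte Carlo sensitivity sweep
Xmc = rng.uniform([0.5, 1.0], [2.0, 8.0], size=(100000, 2))
ymc = surrogate(Xmc)
print("surrogate RMS error on the training runs:", float(np.sqrt(np.mean((surrogate(X) - y) ** 2))))
print("output variance explained by release rate:", np.corrcoef(Xmc[:, 0], ymc)[0, 1] ** 2)
print("output variance explained by wind speed:", np.corrcoef(Xmc[:, 1], ymc)[0, 1] ** 2)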
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henderson-Sellers, A.
Land-surface schemes developed for incorporation into global climate models include parameterizations that are not yet fully validated and depend upon the specification of a large (20-50) number of ecological and soil parameters, the values of which are not yet well known. There are two methods of investigating the sensitivity of a land-surface scheme to prescribed values: simple one-at-a-time changes or factorial experiments. Factorial experiments offer information about interactions between parameters and are thus a more powerful tool. Here the results of a suite of factorial experiments are reported. These are designed (i) to illustrate the usefulness of this methodology and (ii) to identify factors important to the performance of complex land-surface schemes. The Biosphere-Atmosphere Transfer Scheme (BATS) is used and its sensitivity is considered (a) to prescribed ecological and soil parameters and (b) to atmospheric forcing used in the off-line tests undertaken. Results indicate that the most important atmospheric forcings are mean monthly temperature and the interaction between mean monthly temperature and total monthly precipitation, although fractional cloudiness and other parameters are also important. The most important ecological parameters are vegetation roughness length, soil porosity, and a factor describing the sensitivity of the stomatal resistance of vegetation to the amount of photosynthetically active solar radiation and, to a lesser extent, soil and vegetation albedos. Two-factor interactions including vegetation roughness length are more important than many of the 23 specified single factors. The results of factorial sensitivity experiments such as these could form the basis for intercomparison of land-surface parameterization schemes and for field experiments and satellite-based observation programs aimed at improving evaluation of important parameters.
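A minimal sketch of a two-level factorial sensitivity experiment, estimating main effects and two-factor interactions from all 2^3 corner runs of a toy land-surface response; the factor names and the response function are illustrative assumptions, not BATS.

import numpy as np
from itertools import product

# Toy "land-surface" response (e.g. annual evaporation) to three coded factors in {-1, +1}:
# roughness length (z0), soil porosity (phi), stomatal-resistance factor (rs).
def response(z0, phi, rs):
    return 100.0 + 8.0 * z0 + 5.0 * phi - 3.0 * rs + 6.0 * z0 * phi   # built-in z0 x phi interaction

design = np.array(list(product([-1, 1], repeat=3)))    # full 2^3 factorial design
y = np.array([response(*row) for row in design])
names = ["z0", "phi", "rs"]

# Main effects: mean response at the +1 level minus mean response at the -1 level
for k, name in enumerate(names):
    effect = y[design[:, k] == 1].mean() - y[design[:, k] == -1].mean()
    print(f"main effect of {name:3s}: {effect:+.1f}")

# Two-factor interactions: the same contrast applied to the product column
for a, b in [(0, 1), (0, 2), (1, 2)]:
    prod_col = design[:, a] * design[:, b]
    inter = y[prod_col == 1].mean() - y[prod_col == -1].mean()
    print(f"{names[a]} x {names[b]} interaction: {inter:+.1f}")

In this toy case the z0 x phi interaction comes out larger than the rs main effect, which is exactly the kind of information a one-at-a-time sensitivity test cannot provide.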
A global sensitivity analysis approach for morphogenesis models.
Boas, Sonja E M; Navarro Jimenez, Maria I; Merks, Roeland M H; Blom, Joke G
2015-11-21
Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such 'black-box' models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, provided also new insights in the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all 'black-box' models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
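A minimal sketch of a variance-based global sensitivity analysis, using standard Monte Carlo estimators of first-order (Saltelli-type) and total-order (Jansen-type) Sobol indices on a toy stand-in for a morphogenesis model output; the parameter names, the model function, and the sample size are illustrative assumptions.

import numpy as np

# Toy stand-in for a morphogenesis output (e.g. number of sprouts) as a function of
# three cell-behaviour parameters scaled to [0, 1]
def model(X):
    adhesion, chemotaxis, proliferation = X[:, 0], X[:, 1], X[:, 2]
    return chemotaxis + 0.3 * adhesion + 2.0 * adhesion * proliferation

rng = np.random.default_rng(6)
N, k = 50000, 3
A = rng.uniform(size=(N, k))
B = rng.uniform(size=(N, k))
yA, yB = model(A), model(B)
varY = np.var(np.concatenate([yA, yB]))

for i, name in enumerate(["adhesion", "chemotaxis", "proliferation"]):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                               # resample only parameter i
    yABi = model(ABi)
    S1 = np.mean(yB * (yABi - yA)) / varY             # first-order index (Saltelli et al. 2010)
    ST = 0.5 * np.mean((yA - yABi) ** 2) / varY       # total-order index (Jansen 1999)
    print(f"{name:13s}  first-order = {S1:5.2f}   total = {ST:5.2f}")

The gap between a parameter's total-order and first-order indices is a direct measure of how much of its influence comes through interactions, the quantity highlighted in the analysis above.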
Model Forecast Skill and Sensitivity to Initial Conditions in the Seasonal Sea Ice Outlook
NASA Technical Reports Server (NTRS)
Blanchard-Wrigglesworth, E.; Cullather, R. I.; Wang, W.; Zhang, J.; Bitz, C. M.
2015-01-01
We explore the skill of predictions of September Arctic sea ice extent from dynamical models participating in the Sea Ice Outlook (SIO). Forecasts submitted in August, at roughly 2 month lead times, are skillful. However, skill is lower in forecasts submitted to SIO, which began in 2008, than in hindcasts (retrospective forecasts) of the last few decades. The multimodel mean SIO predictions offer slightly higher skill than the single-model SIO predictions, but neither beats a damped persistence forecast at longer than 2 month lead times. The models are largely unsuccessful at predicting each other, indicating a large difference in model physics and/or initial conditions. Motivated by this, we perform an initial condition sensitivity experiment with four SIO models, applying a fixed -1 m perturbation to the initial sea ice thickness. The significant range of the response among the models suggests that different model physics make a significant contribution to forecast uncertainty.
Computational Modeling of Photocatalysts for CO2 Conversion Applications
NASA Astrophysics Data System (ADS)
Tafen, De; Matranga, Christopher
2013-03-01
To make photocatalytic conversion approaches efficient, economically practical, and industrially scalable, catalysts capable of utilizing visible and near infrared photons need to be developed. Recently, a series of CdSe and PbS quantum dot-sensitized TiO2 heterostructures have been synthesized, characterized, and tested for reduction of CO2 under visible light. Following these experiments, we use density functional theory to model these heterostructured catalysts and investigate their CO2 catalytic activity. In particular, we study the nature of the heterostructure interface, charge transport/electron transfer, active sites and the electronic structures of these materials. The results will be presented and compared to experiments. The improvement of our understanding of the properties of these materials will aid not only the development of more robust, visible light active photocatalysts for carbon management applications, but also the development of quantum dot-sensitized semiconductor solar cells with high efficiencies in solar-to-electrical energy conversion.
Central and rear-edge populations can be equally vulnerable to warming
NASA Astrophysics Data System (ADS)
Bennett, Scott; Wernberg, Thomas; Arackal Joy, Bijo; de Bettignies, Thibaut; Campbell, Alexandra H.
2015-12-01
Rear (warm) edge populations are often considered more susceptible to warming than central (cool) populations because of the warmer ambient temperatures they experience, but this overlooks the potential for local variation in thermal tolerances. Here we provide conceptual models illustrating how sensitivity to warming is affected throughout a species' geographical range for locally adapted and non-adapted populations. We test these models for a range-contracting seaweed using observations from a marine heatwave and a 12-month experiment, translocating seaweeds among central, present and historic range edge locations. Growth, reproductive development and survivorship display different temperature thresholds among central and rear-edge populations, but share a 2.5 °C anomaly threshold. Range contraction, therefore, reflects variation in local anomalies rather than differences in absolute temperatures. This demonstrates that warming sensitivity can be similar throughout a species geographical range and highlights the importance of incorporating local adaptation and acclimatization into climate change vulnerability assessments.
NASA Technical Reports Server (NTRS)
Kurzeja, R. J.; Haggard, K. V.; Grose, W. L.
1981-01-01
Three experiments have been performed using a three-dimensional, spectral quasi-geostrophic model in order to investigate the sensitivity of ozone transport to tropospheric orographic and thermal effects and to the zonal wind distribution. In the first experiment, the ozone distribution averaged over the last 30 days of a 60 day transport simulation was determined; in the second experiment, the transport simulation was repeated, but nonzonal orographic and thermal forcing was omitted; and in the final experiment, the simulation was conducted with the intensity and position of the stratospheric jets altered by addition of a Newtonian cooling term to the zonal-mean diabatic heating rate. Results of the three experiments are summarized by comparing the zonal-mean ozone distribution, the amplitude of eddy geopotential height, the zonal winds, and zonal-mean diabatic heating.
Evaluation of a scale-model experiment to investigate long-range acoustic propagation
NASA Technical Reports Server (NTRS)
Parrott, Tony L.; Mcaninch, Gerry L.; Carlberg, Ingrid A.
1987-01-01
Tests were conducted to evaluate the feasibility of using a scale-model experiment situated in an anechoic facility to investigate long-range sound propagation over ground terrain. For a nominal scale factor of 100:1, attenuations along a linear array of six microphones colinear with a continuous-wave type of sound source were measured over a range from 10 to 160 wavelengths for a nominal test frequency of 10 kHz. Most tests were made for a hard model surface (plywood), but limited tests were also made for a soft model surface (plywood with felt). For grazing-incidence propagation over the hard surface, measured and predicted attenuation trends were consistent for microphone locations out to between 40 and 80 wavelengths. Beyond 80 wavelengths, significant variability was observed that was caused by disturbances in the propagation medium. Also, there was evidence of extraneous propagation-path contributions to data irregularities at more remote microphones. Sensitivity studies for the hard-surface configuration indicated a 2.5 dB change in the relative excess attenuation for a systematic error in source and microphone elevations on the order of 1 mm. For the soft-surface model, no comparable sensitivity was found.
NASA Astrophysics Data System (ADS)
Grilli, Nicolo; Dandekar, Akshay; Koslowski, Marisol
2017-06-01
The development of high explosive materials requires constitutive models that are able to predict the influence of microstructure and loading conditions on shock sensitivity. In this work a model at the continuum-scale for the polymer-bonded explosive constituted of β-HMX particles embedded in a Sylgard matrix is developed. It includes a Murnaghan equation of state, a crystal plasticity model, based on power-law slip rate and hardening, and a phase field damage model based on crack regularization. The temperature increase due to chemical reactions is introduced by a heat source term, which is validated using results from reactive molecular dynamics simulations. An initial damage field representing pre-existing voids and cracks is used in the simulations to understand the effect of these inhomogeneities on the damage propagation and shock sensitivity. We show the predictions of the crystal plasticity model and the effect of the HMX crystal orientation on the shock initiation and on the dissipated plastic work and damage propagation. The simulation results are validated with ultra-fast dynamic transmission electron microscopy experiments and x-ray experiments carried out at Purdue University.
Impact of the quenching of gA on the sensitivity of 0νββ experiments
NASA Astrophysics Data System (ADS)
Suhonen, Jouni
2017-11-01
Detection of the neutrinoless ββ (0νββ) decay is of high priority in the particle- and neutrino-physics communities. The detectability of this decay mode is strongly influenced by the value of the weak axial-vector coupling constant gA. The recent nuclear-model analyses of β and ββ decays suggest that the value of gA could be dramatically quenched, reaching ratios of gA^free/gA ≈ 4, where gA^free = 1.27 is the free, neutron-decay, value of gA. The effects of this quenching appear devastating for the sensitivity of the present and future 0νββ experiments since the fourth power of this ratio scales the 0νββ half-lives. This, in turn, could lead to some two orders of magnitude less sensitivity for the 0νββ experiments. In the present article it is shown that by using a consistent approach to both the two-neutrino ββ and 0νββ decays by the proton-neutron quasiparticle random-phase approximation, the feared two-orders-of-magnitude reduction in the sensitivity of the 0νββ experiments actually shrinks to a reduction by factors in the range 2-6. This certainly has dramatic consequences for the potential to detect the 0νββ decay.
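The scaling quoted above can be made explicit; this is only a restatement of the numbers in the abstract, written out as a formula:

T_{1/2}^{0\nu} \propto g_A^{-4} \quad\Longrightarrow\quad \left( \frac{g_A^{\mathrm{free}}}{g_A} \right)^{4} = 4^{4} = 256 \approx 10^{2.4},

so a naive quenching by a factor of 4 lengthens the expected half-life, and hence degrades the sensitivity, by roughly two orders of magnitude, whereas the consistent pnQRPA treatment reduces the effective loss to factors of about 2-6.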
[Experimental analysis of some determinants of inductive reasoning].
Ono, K
1989-02-01
Three experiments were conducted from a behavioral perspective to investigate the determinants of inductive reasoning and to compare some methodological differences. The dependent variable used in these experiments was the threshold of confident response (TCR), which was defined as "the minimal sample size required to establish generalization from instances." Experiment 1 examined the effects of population size on inductive reasoning, and the results from 35 college students showed that the TCR varied in proportion to the logarithm of population size. In Experiment 2, 30 subjects showed distinct sensitivity to both prior probability and base-rate. The results from 70 subjects who participated in Experiment 3 showed that the TCR was affected by its consequences (risk condition), and especially, that humans were sensitive to a loss situation. These results demonstrate the sensitivity of humans to statistical variables in inductive reasoning. Furthermore, methodological comparison indicated that the experimentally observed values of TCR were close to, but not as precise as the optimal values predicted by Bayes' model. On the other hand, the subjective TCR estimated by subjects was highly discrepant from the observed TCR. These findings suggest that various aspects of inductive reasoning can be fruitfully investigated not only from subjective estimations such as probability likelihood but also from an objective behavioral perspective.
Probing flavor models with 76Ge-based experiments on neutrinoless double-β decay
NASA Astrophysics Data System (ADS)
Agostini, Matteo; Merle, Alexander; Zuber, Kai
2016-04-01
The physics impact of a staged approach for double-β decay experiments based on 76Ge is studied. The scenario considered relies on realistic time schedules envisioned by the Gerda and the Majorana collaborations, which are jointly working towards the realization of a future larger scale 76Ge experiment. Intermediate stages of the experiments are conceived to perform quasi background-free measurements, and different data sets can be reliably combined to maximize the physics outcome. The sensitivity for such a global analysis is presented, with focus on how neutrino flavor models can be probed already with preliminary phases of the experiments. The synergy between theory and experiment yields strong benefits for both sides: the model predictions can be used to sensibly plan the experimental stages, and results from intermediate stages can be used to constrain whole groups of theoretical scenarios. This strategy clearly generates added value to the experimental efforts, while at the same time it allows valuable physics results to be achieved as early as possible.
A Mass Spectrometric Analysis Method Based on PPCA and SVM for Early Detection of Ovarian Cancer.
Wu, Jiang; Ji, Yanju; Zhao, Ling; Ji, Mengying; Ye, Zhuang; Li, Suyi
2016-01-01
Background. Surface-enhanced laser desorption/ionization time-of-flight mass spectrometry (SELDI-TOF-MS) technology plays an important role in the early diagnosis of ovarian cancer. However, the raw MS data is high-dimensional and redundant. Therefore, it is necessary to study rapid and accurate detection methods from the massive MS data. Methods. The clinical data set used in the experiments for early cancer detection consisted of 216 SELDI-TOF-MS samples. An MS analysis method based on probabilistic principal components analysis (PPCA) and support vector machine (SVM) was proposed and applied to the ovarian cancer early classification in the data set. Additionally, using the same data set, we also established a traditional PCA-SVM model. Finally, we compared the two models in detection accuracy, specificity, and sensitivity. Results. Using independent training and testing experiments 10 times to evaluate the ovarian cancer detection models, the average prediction accuracy, sensitivity, and specificity of the PCA-SVM model were 83.34%, 82.70%, and 83.88%, respectively. In contrast, those of the PPCA-SVM model were 90.80%, 92.98%, and 88.97%, respectively. Conclusions. The PPCA-SVM model had better detection performance, and, combined with SELDI-TOF-MS technology, shows promise for early clinical detection and diagnosis of ovarian cancer.
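A minimal sketch of the reported pipeline (dimensionality reduction followed by an SVM), with probabilistic PCA implemented in its standard closed form (Tipping and Bishop) and purely synthetic random "spectra"; the use of scikit-learn here is an assumption of convenience rather than the authors' implementation, and accuracy printed from such toy data carries no clinical meaning.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)

# Synthetic stand-in for SELDI-TOF-MS spectra: 216 samples x 1000 m/z bins, two classes
n, d, q = 216, 1000, 10
labels = rng.integers(0, 2, size=n)
X = np.outer(labels, rng.normal(0.0, 1.0, d)) + rng.normal(0.0, 1.0, size=(n, d))

def ppca_project(X, q):
    """Closed-form probabilistic PCA: posterior mean of the q-dimensional latents."""
    Xc = X - X.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    evals, evecs = evals[::-1], evecs[:, ::-1]              # descending order
    sigma2 = evals[q:].mean()                               # isotropic noise variance estimate
    W = evecs[:, :q] * np.sqrt(np.maximum(evals[:q] - sigma2, 0.0))
    M = W.T @ W + sigma2 * np.eye(q)
    return Xc @ W @ np.linalg.inv(M)                        # E[z | x] for each sample

Z = ppca_project(X, q)
Ztr, Zte, ytr, yte = train_test_split(Z, labels, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(Ztr, ytr)
print("toy-data test accuracy:", clf.score(Zte, yte))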
Wen, Jessica; Koo, Soh Myoung; Lape, Nancy
2018-02-01
While predictive models of transdermal transport have the potential to reduce human and animal testing, microscopic stratum corneum (SC) model output is highly dependent on idealized SC geometry, transport pathway (transcellular vs. intercellular), and penetrant transport parameters (e.g., compound diffusivity in lipids). Most microscopic models are limited to a simple rectangular brick-and-mortar SC geometry and do not account for variability across delivery sites, hydration levels, and populations. In addition, these models rely on transport parameters obtained from pure theory, parameter fitting to match in vivo experiments, and time-intensive diffusion experiments for each compound. In this work, we develop a microscopic finite element model that allows us to probe model sensitivity to variations in geometry, transport pathway, and hydration level. Given the dearth of experimentally-validated transport data and the wide range in theoretically-predicted transport parameters, we examine the model's response to a variety of transport parameters reported in the literature. Results show that model predictions are strongly dependent on all aforementioned variations, resulting in order-of-magnitude differences in lag times and permeabilities for distinct structure, hydration, and parameter combinations. This work demonstrates that universally predictive models cannot fully succeed without employing experimentally verified transport parameters and individualized SC structures. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
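To see how order-of-magnitude differences in lag time and permeability can arise from parameter variation alone, the classical homogeneous-membrane relations (steady-state permeability kp = K*D/h and diffusion lag time t_lag = h^2/(6*D)) can be evaluated over plausible ranges; this deliberately ignores the brick-and-mortar microstructure modeled in the paper, and every value below is an illustrative assumption.

import numpy as np

def membrane(D_cm2_s, K, h_cm):
    """Homogeneous-membrane steady-state permeability (cm/s) and lag time (hours)."""
    kp = K * D_cm2_s / h_cm
    t_lag_h = h_cm ** 2 / (6.0 * D_cm2_s) / 3600.0
    return kp, t_lag_h

# Sweep illustrative diffusivities and membrane thicknesses at a fixed partition coefficient
for D in (1e-10, 1e-9, 1e-8):          # lipid-phase diffusivity, cm^2/s
    for h in (10e-4, 20e-4):           # 10-20 um stratum corneum thickness, in cm
        kp, t_lag = membrane(D, K=0.1, h_cm=h)
        print(f"D={D:.0e} cm2/s  h={h * 1e4:.0f} um  kp={kp:.1e} cm/s  lag={t_lag:7.3f} h")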
Kim, Chi Hun; Romberg, Carola; Hvoslef-Eide, Martha; Oomen, Charlotte A; Mar, Adam C; Heath, Christopher J; Berthiaume, Andrée-Anne; Bussey, Timothy J; Saksida, Lisa M
2015-11-01
The hippocampus is implicated in many of the cognitive impairments observed in conditions such as Alzheimer's disease (AD) and schizophrenia (SCZ). Often, mice are the species of choice for models of these diseases and for the study of the relationship between brain and behaviour more generally. Thus, automated and efficient hippocampal-sensitive cognitive tests for the mouse are important for developing therapeutic targets for these diseases and for understanding brain-behaviour relationships. One promising option is to adapt the touchscreen-based trial-unique nonmatching-to-location (TUNL) task that has been shown to be sensitive to hippocampal dysfunction in the rat. This study aims to adapt the TUNL task for use in mice and to test the hippocampus dependency of the task. TUNL training protocols were altered such that C57BL/6 mice were able to acquire the task. Following acquisition, dysfunction of the dorsal hippocampus (dHp) was induced using a fibre-sparing excitotoxin, and the effects of manipulation of several task parameters were examined. Mice could acquire the TUNL task using training optimised for the mouse (experiment 1). TUNL was found to be sensitive to dHp dysfunction in the mouse (experiments 2, 3 and 4). In addition, we observed that performance of the dHp dysfunction group was consistently somewhat lower when sample locations were presented in the centre of the screen. This study opens up the possibility of testing both mouse and rat models on this flexible and hippocampus-sensitive touchscreen task.
Shedding light on neutrino masses with dark forces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batell, Brian; Pospelov, Maxim; Shuve, Brian
2016-08-08
Heavy right-handed neutrinos, N, provide the simplest explanation for the origin of light neutrino masses and mixings. If M_N is at or below the weak scale, direct experimental discovery of these states is possible at accelerator experiments such as the LHC or new dedicated beam dump experiments; in these experiments, N decays after traversing a macroscopic distance from the collision point. The experimental sensitivity to right-handed neutrinos is significantly enhanced if there is a new “dark” gauge force connecting them to the Standard Model (SM), and detection of N can be the primary discovery mode for the new dark force itself. We take the well-motivated example of a B–L gauge symmetry and analyze the sensitivity to displaced decays of N produced via the new gauge interaction in two experiments: the LHC and the proposed SHiP beam dump experiment. In the most favorable case in which the mediator can be produced on-shell and decays to right-handed neutrinos (pp → X + V_{B–L} → X + NN), the sensitivity reach is controlled by the square of the B–L gauge coupling. Here, we demonstrate that these experiments could access neutrino parameters responsible for the observed SM neutrino masses and mixings in the most straightforward implementation of the see-saw mechanism.
Models for Total-Dose Radiation Effects in Non-Volatile Memory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campbell, Philip Montgomery; Wix, Steven D.
The objective of this work is to develop models to predict radiation effects in non-volatile memory: flash memory and ferroelectric RAM (FRAM). In flash memory, experiments have found that the internal high-voltage generators (charge pumps) are the most sensitive to radiation damage. Models are presented for radiation effects in charge pumps that reproduce the experimental results. Floating-gate models are developed for the memory cell in two types of flash memory devices, by Intel and Samsung. These models utilize Fowler-Nordheim tunneling and hot electron injection to charge and erase the floating gate. Erase times are calculated from the models and compared with experimental results for different radiation doses. FRAM is less sensitive to radiation than flash memory, but measurements show that above 100 krad FRAM suffers from a large increase in leakage current. A model for this effect is developed which compares closely with the measurements.
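Since the floating-gate models above charge and erase the gate via Fowler-Nordheim tunneling, a small numeric sketch of the standard F-N current-density expression may help convey why erase behavior is so sensitive to the oxide field. The constants, the 3.1 eV barrier, and the field values below are assumed textbook-typical values, not parameters taken from this report.

```python
# Sketch of Fowler-Nordheim tunneling current density through a tunnel oxide:
# J = (A / phi) * E^2 * exp(-B * phi^1.5 / E), with E in V/cm, phi in eV, J in A/cm^2.
# A, B, and the barrier height are assumed textbook-typical values.
import math

A = 1.54e-6   # A*eV/V^2 (assumed)
B = 6.83e7    # V/(cm*eV^1.5) (assumed)

def fn_current_density(E_v_per_cm, phi_eV=3.1):
    """Fowler-Nordheim current density (A/cm^2) for oxide field E and barrier phi."""
    return (A / phi_eV) * E_v_per_cm ** 2 * math.exp(-B * phi_eV ** 1.5 / E_v_per_cm)

# The exponential field dependence makes erase times highly sensitive to anything
# (e.g., radiation-induced oxide charge) that shifts the effective oxide field.
for E in (8e6, 9e6, 1e7):   # oxide fields, V/cm
    print(f"E = {E:.1e} V/cm -> J = {fn_current_density(E):.3e} A/cm^2")
```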
Warming experiments underpredict plant phenological responses to climate change
Wolkovich, Elizabeth M.; Cook, Benjamin I.; Allen, Jenica M.; Crimmins, Theresa M.; Betancourt, Julio L.; Travers, Steven E.; Pau, Stephanie; Regetz, James; Davies, T. Jonathan; Kraft, Nathan J.B.; Ault, Toby R.; Bolmgren, Kjell; Mazer, Susan J.; McCabe, Gregory J.; McGill, Brian J.; Parmesan, Camille; Salamin, Nicolas; Schwartz, Mark D.; Cleland, Elsa E.
2012-01-01
Warming experiments are increasingly relied on to estimate plant responses to global climate change. For experiments to provide meaningful predictions of future responses, they should reflect the empirical record of responses to temperature variability and recent warming, including advances in the timing of flowering and leafing. We compared phenology (the timing of recurring life history events) in observational studies and warming experiments spanning four continents and 1,634 plant species using a common measure of temperature sensitivity (change in days per degree Celsius). We show that warming experiments underpredict advances in the timing of flowering and leafing by 8.5-fold and 4.0-fold, respectively, compared with long-term observations. For species that were common to both study types, the experimental results did not match the observational data in sign or magnitude. The observational data also showed that species that flower earliest in the spring have the highest temperature sensitivities, but this trend was not reflected in the experimental data. These significant mismatches seem to be unrelated to the study length or to the degree of manipulated warming in experiments. The discrepancy between experiments and observations, however, could arise from complex interactions among multiple drivers in the observational data, or it could arise from remediable artefacts in the experiments that result in lower irradiance and drier soils, thus dampening the phenological responses to manipulated warming. Our results introduce uncertainty into ecosystem models that are informed solely by experiments and suggest that responses to climate change that are predicted using such models should be re-evaluated.
NASA Technical Reports Server (NTRS)
Perlwitz, Jan; Tegen, Ina; Miller, Ron L.
2000-01-01
The sensitivity of the soil dust aerosol cycle to the radiative forcing by soil dust aerosols is studied. Four experiments with the NASA/GISS atmospheric general circulation model, which includes a soil dust aerosol model, are compared, all using a prescribed climatological sea surface temperature as the lower boundary condition. In one experiment, dust is included as a dynamic tracer only (without interacting with radiation), whereas dust interacts with radiation in the other simulations. Although the single scattering albedo of dust particles is prescribed to be globally uniform in the experiments with radiatively active dust, a different single scattering albedo is used in those experiments to estimate whether regional variations in dust optical properties, corresponding to variations in mineralogical composition among different source regions, are important for the soil dust cycle and the climate state. On a global scale, the radiative forcing by dust generally causes a reduction in the atmospheric dust load corresponding to a decreased dust source flux. That is, there is a negative feedback in the climate system due to the radiative effect of dust. The dust source flux and its changes were analyzed in more detail for the main dust source regions. This analysis shows that the reduction varies both with the season and with the single scattering albedo of the dust particles. By examining the correlation with the surface wind, it was found that the dust emission from the Saharan/Sahelian source region and from the Arabian peninsula, along with the sensitivity of the emission to the single scattering albedo of dust particles, are related to large-scale circulation patterns, in particular to the trade winds during Northern Hemisphere winter and to the Indian monsoon circulation during summer. In the other regions, such relations to the large-scale circulation were not found. There, the dependence of dust deflation on radiative forcing by dust particles is probably dominated by physical processes with short time scales. The experiments show that dust radiative forcing can lead to significant changes both in the soil dust cycle and in the climate state. To estimate dust concentration and radiative forcing by dust more accurately, dust size distributions and dust single scattering albedo in the model should be a function of the source region, because dust concentration and climate response to dust radiative forcing are sensitive to dust radiative parameters.
A Python Interface for the Dakota Iterative Systems Analysis Toolkit
NASA Astrophysics Data System (ADS)
Piper, M.; Hutton, E.; Syvitski, J. P.
2016-12-01
Uncertainty quantification is required to improve the accuracy, reliability, and accountability of Earth science models. Dakota is a software toolkit, developed at Sandia National Laboratories, that provides an interface between models and a library of analysis methods, including support for sensitivity analysis, uncertainty quantification, optimization, and calibration techniques. Dakota is a powerful tool, but its learning curve is steep: the user not only must understand the structure and syntax of the Dakota input file, but also must develop intermediate code, called an analysis driver, that allows Dakota to run a model. The CSDMS Dakota interface (CDI) is a Python package that wraps and extends Dakota's user interface. It simplifies the process of configuring and running a Dakota experiment. A user can program against the CDI, allowing a Dakota experiment to be scripted. The CDI creates Dakota input files and provides a generic analysis driver. Any model written in Python that exposes a Basic Model Interface (BMI), as well as any model componentized in the CSDMS modeling framework, automatically works with the CDI. The CDI has a plugin architecture, so models written in other languages, or those that don't expose a BMI, can be accessed by the CDI by programmatically extending a template; an example is provided in the CDI distribution. Currently, six analysis methods from the much larger Dakota library have been implemented as examples. To demonstrate the CDI, we performed an uncertainty quantification experiment with the HydroTrend hydrological water balance and transport model. In the experiment, we evaluated the response of long-term suspended sediment load at the river mouth (Qs) to uncertainty in two input parameters, annual mean temperature (T) and precipitation (P), over a series of 100-year runs, using the polynomial chaos method. Through Dakota, we calculated moments, local and global (Sobol') sensitivity indices, and probability density and cumulative distribution functions for the response.
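To make the "analysis driver" idea concrete, the schematic below shows the pattern Dakota relies on: the iterator writes a parameters file, invokes a driver script that runs the model, and reads responses back from a results file. This is not the CDI's actual API or Dakota's exact file format; the simple "name value" layout and the run_hydrology_model function are hypothetical stand-ins.

```python
#!/usr/bin/env python
# Schematic analysis driver: read a parameters file written by the iterator, run the
# model, and write the response back. The "name value" layout below is a placeholder
# rather than Dakota's exact format, and run_hydrology_model is a hypothetical
# stand-in for a BMI-wrapped model such as HydroTrend.
import sys

def run_hydrology_model(T, P):
    """Hypothetical response: long-term sediment load as a toy function of T and P."""
    return 100.0 + 8.0 * T + 0.05 * P

def main(params_path, results_path):
    params = {}
    with open(params_path) as f:
        for line in f:
            name, value = line.split()
            params[name] = float(value)

    qs = run_hydrology_model(params["T"], params["P"])

    with open(results_path, "w") as f:
        f.write(f"{qs}\n")   # one response value per line

if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])
```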
Body size, swimming speed, or thermal sensitivity? Predator-imposed selection on amphibian larvae.
Gvoždík, Lumír; Smolinský, Radovan
2015-11-02
Many animals rely on their escape performance during predator encounters. Because of its dependence on body size and temperature, escape velocity is fully characterized by three measures, absolute value, size-corrected value, and its response to temperature (thermal sensitivity). The primary target of the selection imposed by predators is poorly understood. We examined predator (dragonfly larva)-imposed selection on prey (newt larvae) body size and characteristics of escape velocity using replicated and controlled predation experiments under seminatural conditions. Specifically, because these species experience a wide range of temperatures throughout their larval phases, we predict that larvae achieving high swimming velocities across temperatures will have a selective advantage over more thermally sensitive individuals. Nonzero selection differentials indicated that predators selected for prey body size and both absolute and size-corrected maximum swimming velocity. Comparison of selection differentials with control confirmed selection only on body size, i.e., dragonfly larvae preferably preyed on small newt larvae. Maximum swimming velocity and its thermal sensitivity showed low group repeatability, which contributed to non-detectable selection on both characteristics of escape performance. In the newt-dragonfly larvae interaction, body size plays a more important role than maximum values and thermal sensitivity of swimming velocity during predator escape. This corroborates the general importance of body size in predator-prey interactions. The absence of an appropriate control in predation experiments may lead to potentially misleading conclusions about the primary target of predator-imposed selection. Insights from predation experiments contribute to our understanding of the link between performance and fitness, and further improve mechanistic models of predator-prey interactions and food web dynamics.
Low energy probes of PeV scale sfermions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Altmannshofer, Wolfgang; Harnik, Roni; Zupan, Jure
2013-11-27
We derive bounds on squark and slepton masses in the mini-split supersymmetry scenario using low energy experiments. In this setup, gauginos are at the TeV scale, while sfermions are heavier by a loop factor. We cover the most sensitive low energy probes, including electric dipole moments (EDMs), meson oscillations, and charged lepton flavor violation (LFV) transitions. A leading-log resummation of the large logarithms of the gluino-to-sfermion mass ratio is performed. Sensitivity to PeV squark masses is obtained at present from kaon mixing measurements. A number of observables, including neutron EDMs, μ → e transitions, and charmed meson mixing, will start probing sfermion masses in the 100-1000 TeV range with the projected improvements in the experimental sensitivities. We also discuss the implications of our results for a variety of models that address the flavor hierarchy of quarks and leptons. We find that EDM searches will be a robust probe of models in which fermion masses are generated radiatively, while LFV searches remain sensitive to simple texture-based flavor models.
Mensi, Skander; Hagens, Olivier; Gerstner, Wulfram; Pozzorini, Christian
2016-02-01
The way in which single neurons transform input into output spike trains has fundamental consequences for network coding. Theories and modeling studies based on standard Integrate-and-Fire models implicitly assume that, in response to increasingly strong inputs, neurons modify their coding strategy by progressively reducing their selective sensitivity to rapid input fluctuations. Combining mathematical modeling with in vitro experiments, we demonstrate that, in L5 pyramidal neurons, the firing threshold dynamics adaptively adjust the effective timescale of somatic integration in order to preserve sensitivity to rapid signals over a broad range of input statistics. For that, a new Generalized Integrate-and-Fire model featuring nonlinear firing threshold dynamics and conductance-based adaptation is introduced that outperforms state-of-the-art neuron models in predicting the spiking activity of neurons responding to a variety of in vivo-like fluctuating currents. Our model allows for efficient parameter extraction and can be analytically mapped to a Generalized Linear Model in which both the input filter--describing somatic integration--and the spike-history filter--accounting for spike-frequency adaptation--dynamically adapt to the input statistics, as experimentally observed. Overall, our results provide new insights on the computational role of different biophysical processes known to underlie adaptive coding in single neurons and support previous theoretical findings indicating that the nonlinear dynamics of the firing threshold due to Na+-channel inactivation regulate the sensitivity to rapid input fluctuations.
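A caricature of the adaptive-threshold mechanism invoked above can be written as a few lines of Euler integration: a leaky integrate-and-fire voltage plus a firing threshold that jumps after each spike and relaxes back toward baseline. This is a didactic sketch, not the published GIF model, and every parameter value is illustrative rather than fitted to L5 pyramidal neurons.

```python
# Leaky integrate-and-fire neuron with an adaptive (spike-triggered) firing threshold.
# All parameters are illustrative placeholders, not fitted values.
import numpy as np

def simulate(I, dt=1e-4, tau_m=20e-3, R=100e6, E_L=-70e-3,
             V_reset=-70e-3, theta0=-50e-3, dtheta=5e-3, tau_theta=50e-3):
    V, theta = E_L, theta0
    spike_times = []
    for k, i_in in enumerate(I):
        V += dt / tau_m * (E_L - V + R * i_in)       # leaky integration of the input
        theta += dt / tau_theta * (theta0 - theta)   # threshold relaxes to its baseline
        if V >= theta:                               # spike: reset V, raise the threshold
            spike_times.append(k * dt)
            V = V_reset
            theta += dtheta
    return np.array(spike_times)

rng = np.random.default_rng(1)
t = np.arange(0.0, 2.0, 1e-4)
I = 0.25e-9 + 0.1e-9 * rng.standard_normal(t.size)   # noisy step current, amperes
print(f"{simulate(I).size} spikes in 2 s of simulated input")
```

Because the threshold rides up with sustained depolarization, the mean input level is partly "subtracted away" and only fast fluctuations push the voltage across threshold, which is the intuition behind the preserved sensitivity to rapid signals described above.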
PeTTSy: a computational tool for perturbation analysis of complex systems biology models.
Domijan, Mirela; Brown, Paul E; Shulgin, Boris V; Rand, David A
2016-03-10
Over the last decade sensitivity analysis techniques have been shown to be very useful for analysing complex and high-dimensional Systems Biology models. However, many of the currently available toolboxes have either used parameter sampling, been focused on a restricted set of model observables of interest, studied optimisation of an objective function, or have not dealt with multiple simultaneous model parameter changes where the changes can be permanent or temporary. Here we introduce our new, freely downloadable toolbox, PeTTSy (Perturbation Theory Toolbox for Systems). PeTTSy is a package for MATLAB which implements a wide array of techniques for the perturbation theory and sensitivity analysis of large and complex ordinary differential equation (ODE) based models. PeTTSy is a comprehensive modelling framework that introduces a number of new approaches and that fully addresses analysis of oscillatory systems. It performs sensitivity analysis of models with respect to perturbations of parameters, where the perturbation timing, strength, length and overall shape can be controlled by the user. This can be done in a system-global setting, namely, the user can determine how many parameters to perturb, by how much and for how long. PeTTSy also offers the user the ability to explore the effect of the parameter perturbations on many different types of outputs: period, phase (timing of peak) and model solutions. PeTTSy can be employed on a wide range of mathematical models including free-running and forced oscillators and signalling systems. To enable experimental optimisation using the Fisher Information Matrix, it efficiently allows one to combine multiple variants of a model (i.e. a model with multiple experimental conditions) in order to determine the value of new experiments. It is especially useful in the analysis of large and complex models involving many variables and parameters. PeTTSy is a comprehensive tool for analysing large and complex models of regulatory and signalling systems. It allows for simulation and analysis of models under a variety of environmental conditions and for experimental optimisation of complex combined experiments. With its unique set of tools it makes a valuable addition to the current library of sensitivity analysis toolboxes. We believe that this software will be of great use to the wider biological, systems biology and modelling communities.
Maluf, Renato Sergio; Burlandy, Luciene; Santarelli, Mariana; Schottz, Vanessa; Speranza, Juliana Simões
2015-08-01
This paper explores the possibilities of the nutrition-sensitive agriculture approach in the context of the programs and actions towards promoting food and nutrition sovereignty and security in Brazil. To analyze the links between nutrition and agriculture, this paper presents the conceptual framework related to food and nutrition security, and stresses the correlations among concepts, institutional structures and program design in Brazil. Dominant models of food production and consumption are scrutinized in the light of these relationships. This paper also highlights differences amongst different ways to promote nutrition-sensitive agriculture through food-acquisition programs from family farmers, experiences in agro-ecology and bio-fortification programs. In the closing remarks, the paper draws some lessons learned from the Brazilian experience that highlight the advantages of family farming and rapid food production, distribution and consumption cycles in order to promote access to an affordable, diversified and more adequate diet in nutritional terms.
Li, Fu Hua; Yao, Kun; Lv, Wen Ying; Liu, Guo Guang; Chen, Ping; Huang, Hao Ping; Kang, Ya Pu
2015-04-01
The photodegradation of ibuprofen (IBP) in aqueous media was studied. The degradation mechanism, the reaction kinetics, and the toxicity of the photolysis products of IBP under UV-Vis irradiation were investigated by dissolved oxygen experiments, quenching experiments of reactive oxygen species (ROS), and toxicity evaluation utilizing Vibrio fischeri. The results demonstrated that the IBP degradation process could be fitted by a pseudo-first-order kinetics model. The degradation of IBP by UV-Vis irradiation included direct photolysis and self-sensitization via ROS. The presence of dissolved oxygen inhibited the photodegradation of IBP, which indicated that direct photolysis was more rapid than the self-sensitization. The contribution rates of ·OH and ¹O₂ to self-sensitization were 21.8 % and 38.6 %, respectively. Ibuprofen generated a number of intermediate products during photodegradation that were more toxic than the parent compound.
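Where the abstract states that degradation followed pseudo-first-order kinetics, the rate constant is simply the slope of ln(C/C0) versus time. A minimal fitting sketch is shown below; the time points and concentrations are synthetic placeholders, not the paper's data.

```python
# Sketch of a pseudo-first-order fit, ln(C/C0) = -k*t, for photodegradation data.
# The time points and concentrations below are synthetic placeholders.
import numpy as np

t = np.array([0.0, 10.0, 20.0, 30.0, 45.0, 60.0])   # irradiation time, min
C = np.array([10.0, 7.9, 6.1, 4.9, 3.4, 2.4])       # IBP concentration, mg/L (hypothetical)

slope, intercept = np.polyfit(t, np.log(C / C[0]), 1)
k = -slope                                           # pseudo-first-order rate constant, 1/min
print(f"k = {k:.3f} min^-1, t_1/2 = {np.log(2) / k:.1f} min")
```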
Gold nanoparticles: enhanced optical trapping and sensitivity coupled with significant heating.
Seol, Yeonee; Carpenter, Amanda E; Perkins, Thomas T
2006-08-15
Gold nanoparticles appear to be superior handles in optical trapping assays. We demonstrate that relatively large gold particles (radius R_b = 50 nm) indeed yield a sixfold enhancement in trapping efficiency and detection sensitivity as compared to similar-sized polystyrene particles. However, optical absorption by gold at the most common trapping wavelength (1064 nm) induces dramatic heating (266 °C/W). We determined this heating by comparing trap stiffness from three different methods in conjunction with detailed modeling. Due to this heating, gold nanoparticles are not useful for temperature-sensitive optical-trapping experiments, but may serve as local molecular heaters. Also, such particles, with their increased detection sensitivity, make excellent probes for certain zero-force biophysical assays.
NASA Astrophysics Data System (ADS)
Huang, Min; Carmichael, Gregory R.; Pierce, R. Bradley; Jo, Duseong S.; Park, Rokjin J.; Flemming, Johannes; Emmons, Louisa K.; Bowman, Kevin W.; Henze, Daven K.; Davila, Yanko; Sudo, Kengo; Eiof Jonson, Jan; Tronstad Lund, Marianne; Janssens-Maenhout, Greet; Dentener, Frank J.; Keating, Terry J.; Oetjen, Hilke; Payne, Vivienne H.
2017-05-01
The recent update on the US National Ambient Air Quality Standards (NAAQS) of the ground-level ozone (O3) can benefit from a better understanding of its source contributions in different US regions during recent years. In the Hemispheric Transport of Air Pollution experiment phase 1 (HTAP1), various global models were used to determine the O3 source-receptor (SR) relationships among three continents in the Northern Hemisphere in 2001. In support of the HTAP phase 2 (HTAP2) experiment that studies more recent years and involves higher-resolution global models and regional models' participation, we conduct a number of regional-scale Sulfur Transport and dEposition Model (STEM) air quality base and sensitivity simulations over North America during May-June 2010. STEM's top and lateral chemical boundary conditions were downscaled from three global chemical transport models' (i.e., GEOS-Chem, RAQMS, and ECMWF C-IFS) base and sensitivity simulations in which the East Asian (EAS) anthropogenic emissions were reduced by 20 %. The mean differences between STEM surface O3 sensitivities to the emission changes and its corresponding boundary condition model's are smaller than those among its boundary condition models, in terms of the regional/period-mean (< 10 %) and the spatial distributions. An additional STEM simulation was performed in which the boundary conditions were downscaled from a RAQMS (Realtime Air Quality Modeling System) simulation without EAS anthropogenic emissions. The scalability of O3 sensitivities to the size of the emission perturbation is spatially varying, and the full (i.e., based on a 100 % emission reduction) source contribution obtained from linearly scaling the North American mean O3 sensitivities to a 20 % reduction in the EAS anthropogenic emissions may be underestimated by at least 10 %. The three boundary condition models' mean O3 sensitivities to the 20 % EAS emission perturbations are ˜ 8 % (May-June 2010)/˜ 11 % (2010 annual) lower than those estimated by eight global models, and the multi-model ensemble estimates are higher than the HTAP1 reported 2001 conditions. GEOS-Chem sensitivities indicate that the EAS anthropogenic NOx emissions matter more than the other EAS O3 precursors to the North American O3, qualitatively consistent with previous adjoint sensitivity calculations. In addition to the analyses on large spatial-temporal scales relative to the HTAP1, we also show results on subcontinental and event scales that are more relevant to the US air quality management. The EAS pollution impacts are weaker during observed O3 exceedances than on all days in most US regions except over some high-terrain western US rural/remote areas. Satellite O3 (TES, JPL-IASI, and AIRS) and carbon monoxide (TES and AIRS) products, along with surface measurements and model calculations, show that during certain episodes stratospheric O3 intrusions and the transported EAS pollution influenced O3 in the western and the eastern US differently. Free-running (i.e., without chemical data assimilation) global models underpredicted the transported background O3 during these episodes, posing difficulties for STEM to accurately simulate the surface O3 and its source contribution. 
Although we effectively improved the modeled O3 by incorporating satellite O3 (OMI and MLS) and evaluated the quality of the HTAP2 emission inventory with the Royal Netherlands Meteorological Institute-Ozone Monitoring Instrument (KNMI-OMI) nitrogen dioxide, using observations to evaluate and improve O3 source attribution still remains to be further explored.
Detecting potential impacts of deep subsurface CO2 injection on shallow drinking water
NASA Astrophysics Data System (ADS)
Smyth, R. C.; Yang, C.; Romanak, K.; Mickler, P. J.; Lu, J.; Hovorka, S. D.
2012-12-01
Presented here are results from one aspect of collective research conducted at Gulf Coast Carbon Center, BEG, Jackson School at UT Austin. The biggest hurdle to public acceptance of CCS is to show that drinking water resources will not be impacted. Since the late 1990s our group has been supported by US DOE NETL and private industry to research how best to detect potential impacts to shallow (0 to ~0.25 km) subsurface drinking water from deep (~1 to 3.5 km) injection of CO2. Work has included, and continues to include, (1) field sampling and testing, (2) laboratory batch experiments, and (3) geochemical modeling. The objective has been to identify the most sensitive geochemical indicators using data from research-level investigations, which can be economically applied at an industrial scale. The worst-case scenario would be introduction of CO2 directly into drinking water from a leaking wellbore at a brownfield site. This is unlikely for a properly screened and/or maintained site, but needs to be considered. Our results show aquifer matrix (carbonate vs. clastic) to be critical to interpretation of pH and carbonate (DIC, Alkalinity, and δ13C of DIC) parameters because of the influence of water-rock reaction (buffering vs. non-buffering) on aqueous geochemistry. Field groundwater sampling sites to date are the Cranfield, MS and SACROC, TX CO2-EOR oilfields. Two major aquifer types are represented, one dominated by silicate (Cranfield) and the other by carbonate (SACROC) water-rock reactions. We tested the sensitivity of geochemical indicators (pH, DIC, Alkalinity, and δ13C of DIC) by modeling the effects of increasing pCO2 on aqueous geochemistry, and by laboratory batch experiments, both with the partial pressure of CO2 gas (pCO2) at 1×10^5 Pa (1 atm). Aquifer matrix and groundwater data provided constraints for the geochemical models. We used results from modeling and batch experiments to rank geochemical parameter sensitivity to increased pCO2 into weakly, mildly and strongly sensitive categories for both aquifer systems. DIC concentration is strongly sensitive to increased pCO2 for both aquifers; however, CO2 outgassing during sampling complicates direct field measurement of DIC. Interpretation of data from in-situ push-pull aquifer tests is ongoing and will be used to augment results summarized here. We are currently designing groundwater monitoring plans for two additional industrial-scale sites where we will further test the sensitivity and utility of our sampling approach.
Zhou, Xuhui; Xu, Xia; Zhou, Guiyao; Luo, Yiqi
2018-02-01
Temperature sensitivity of soil organic carbon (SOC) decomposition is one of the major uncertainties in predicting climate-carbon (C) cycle feedback. Results from previous studies are highly contradictory with old soil C decomposition being more, similarly, or less sensitive to temperature than decomposition of young fractions. The contradictory results are partly from difficulties in distinguishing old from young SOC and their changes over time in the experiments with or without isotopic techniques. In this study, we have conducted a long-term field incubation experiment with deep soil collars (0-70 cm in depth, 10 cm in diameter of PVC tubes) for excluding root C input to examine apparent temperature sensitivity of SOC decomposition under ambient and warming treatments from 2002 to 2008. The data from the experiment were infused into a multi-pool soil C model to estimate intrinsic temperature sensitivity of SOC decomposition and C residence times of three SOC fractions (i.e., active, slow, and passive) using a data assimilation (DA) technique. As active SOC with the short C residence time was progressively depleted in the deep soil collars under both ambient and warming treatments, the residences times of the whole SOC became longer over time. Concomitantly, the estimated apparent and intrinsic temperature sensitivity of SOC decomposition also became gradually higher over time as more than 50% of active SOC was depleted. Thus, the temperature sensitivity of soil C decomposition in deep soil collars was positively correlated with the mean C residence times. However, the regression slope of the temperature sensitivity against the residence time was lower under the warming treatment than under ambient temperature, indicating that other processes also regulated temperature sensitivity of SOC decomposition. These results indicate that old SOC decomposition is more sensitive to temperature than young components, making the old C more vulnerable to future warmer climate. © 2017 John Wiley & Sons Ltd.
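The coupling between pool residence times and apparent temperature sensitivity described above can be illustrated with a toy multi-pool model. The sketch below steps a three-pool (active/slow/passive) first-order decay model with Q10 temperature scaling; the pool sizes, residence times, and Q10 value are illustrative placeholders, not the data-assimilated estimates from the study.

```python
# Toy three-pool soil C model (active/slow/passive) with Q10 temperature scaling.
# Pool sizes, residence times, and Q10 are illustrative placeholders.
import numpy as np

def step_day(pools, tau_years, T_soil, T_ref=15.0, q10=2.0):
    """One daily step of first-order decay with k = Q10^((T - Tref)/10) / tau."""
    k = q10 ** ((T_soil - T_ref) / 10.0) / np.asarray(tau_years)   # per year
    return pools * np.exp(-k / 365.0)

pools = np.array([200.0, 1500.0, 3000.0])   # g C m^-2 in active, slow, passive pools
tau = np.array([0.5, 25.0, 1000.0])          # residence times, years

for _ in range(365 * 7):                     # seven years with root C inputs excluded
    pools = step_day(pools, tau, T_soil=17.0)   # +2 C above the 15 C reference

# As the active pool is depleted, the C-weighted mean residence time of what remains
# grows, mirroring the shift toward older, slower C described in the abstract.
print("remaining pools (g C m^-2):", np.round(pools, 1))
print("C-weighted mean residence time (yr):",
      round(float(np.sum(pools * tau) / np.sum(pools)), 1))
```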
Moment-Tensor Spectra of Source Physics Experiments (SPE) Explosions in Granite
NASA Astrophysics Data System (ADS)
Yang, X.; Cleveland, M.
2016-12-01
We perform frequency-domain moment tensor inversions of Source Physics Experiments (SPE) explosions conducted in granite during Phase I of the experiment. We test the sensitivity of source moment-tensor spectra to factors such as the velocity model, selected dataset and smoothing and damping parameters used in the inversion to constrain the error bound of inverted source spectra. Using source moments and corner frequencies measured from inverted source spectra of these explosions, we develop a new explosion P-wave source model that better describes observed source spectra of these small and over-buried chemical explosions detonated in granite than classical explosion source models derived mainly from nuclear-explosion data. In addition to source moment and corner frequency, we analyze other features in the source spectra to investigate their physical causes.
NASA Technical Reports Server (NTRS)
Hoge, F. E.; Swift, R. N.
1983-01-01
Airborne lidar oil spill experiments carried out to determine the practicability of the AOFSCE (absolute oil fluorescence spectral conversion efficiency) computational model are described. The results reveal that the model is suitable over a considerable range of oil film thicknesses provided the fluorescence efficiency of the oil does not approach the minimum detection sensitivity limitations of the lidar system. Separate airborne lidar experiments to demonstrate measurement of the water column Raman conversion efficiency are also conducted to ascertain the ultimate feasibility of converting such relative oil fluorescence to absolute values. Whereas the AOFSCE model is seen as highly promising, further airborne water column Raman conversion efficiency experiments with improved temporal or depth-resolved waveform calibration and software deconvolution techniques are thought necessary for a final determination of suitability.
Rupprecht, Elizabeth A; Kueny, Clair Reynolds; Shoss, Mindy K; Metzger, Andrew J
2016-10-01
We challenge the intuitive belief that greater leader sensitivity is always associated with desirable outcomes for employees and organizations. Specifically, we argue that followers' idiosyncratic desires for, and perceptions of, leader sensitivity behaviors play a key role in how followers react to their leader's sensitivity. Moreover, these resulting affective experiences are likely to have important consequences for organizations, specifically as they relate to employee counterproductive work behavior (CWB). Drawing from supplies-values (S-V) fit theory and the stressor-emotion model of CWB, the current study focuses on the affective and behavioral consequences of fit between subordinates' ideal leader sensitivity behavior preferences and subordinates' perceptions of their actual leader's sensitivity behaviors. Polynomial regression analyses reveal that congruence between ideal and actual leader sensitivity influences employee negative affect and, consequently, engagement in counterproductive work behavior. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
PMT waveform modeling at the Daya Bay experiment
NASA Astrophysics Data System (ADS)
Sören, Jetter; Dan, Dwyer; Jiang, Wen-Qi; Liu, Da-Wei; Wang, Yi-Fang; Wang, Zhi-Min; Wen, Liang-Jian
2012-08-01
Detailed measurements of Hamamatsu R5912 photomultiplier signals are presented, including the single photoelectron charge response, waveform shape, nonlinearity, saturation, overshoot, oscillation, prepulsing, and afterpulsing. The results were used to build a detailed model of the PMT signal characteristics over a wide range of light intensities. Including the PMT model in simulated Daya Bay particle interactions shows no significant systematic effects that are detrimental to the experimental sensitivity.
NASA Technical Reports Server (NTRS)
Li, Xiao-Fan; Sui, C.-H.; Lau, K.-M.; Tao, W.-K.
2004-01-01
Prognostic cloud schemes are increasingly used in weather and climate models in order to better treat cloud-radiation processes. Simplifications are often made in such schemes for computational efficiency, such as the scheme used in the National Centers for Environmental Prediction models, which excludes some microphysical processes and precipitation-radiation interaction. In this study, sensitivity tests with a 2D cloud resolving model are carried out to examine effects of the excluded microphysical processes and precipitation-radiation interaction on tropical thermodynamics and cloud properties. The model is integrated for 10 days with the imposed vertical velocity derived from the Tropical Ocean Global Atmosphere Coupled Ocean-Atmosphere Response Experiment. The experiment excluding the depositional growth of snow from cloud ice shows anomalous growth of cloud ice and a more than 20% increase in fractional cloud cover, indicating that the lack of the depositional snow growth causes an unrealistically large mixing ratio of cloud ice. The experiment excluding the precipitation-radiation interaction displays a significant cooling and drying bias. The analysis of heat and moisture budgets shows that the simulation without the interaction produces a more stable upper troposphere and a more unstable mid and lower troposphere than does the simulation with the interaction. Thus, the suppressed growth of ice clouds in the upper troposphere and stronger radiative cooling in the mid and lower troposphere are responsible for the cooling bias, and less evaporation of rain associated with the large-scale subsidence induces the drying in the mid and lower troposphere.
NASA Astrophysics Data System (ADS)
Morandage, Shehan; Schnepf, Andrea; Vanderborght, Jan; Javaux, Mathieu; Leitner, Daniel; Laloy, Eric; Vereecken, Harry
2017-04-01
Root traits are increasingly important in breeding new crop varieties. For example, longer and fewer lateral roots are suggested to improve drought resistance of wheat. Thus, detailed root architectural parameters are important. However, classical field sampling of roots only provides more aggregated information such as root length density (coring), root counts per area (trenches) or root arrival curves at certain depths (rhizotubes). We investigate the possibility of obtaining information about the root system architecture of plants from classical field-based root sampling schemes, based on sensitivity analysis and inverse parameter estimation. This methodology was developed based on a virtual experiment where a root architectural model was used to simulate root system development in a field, parameterized for winter wheat. This information provided the ground truth, which is normally unknown in a real field experiment. The three sampling schemes (coring, trenching, and rhizotubes) were applied virtually, and the aggregated information was computed. The Morris OAT global sensitivity analysis method was then performed to determine the most sensitive parameters of the root architecture model for the three different sampling methods. The estimated means and standard deviations of the elementary effects of a total of 37 parameters were evaluated. Upper and lower bounds of the parameters were obtained based on literature and published data of winter wheat root architectural parameters. Root length density profiles from coring, arrival curve characteristics observed in rhizotubes, and root counts in grids of the trench profile method were evaluated statistically to investigate the influence of each parameter using five different error functions. The number of branches, insertion angle, inter-nodal distance, and elongation rates are the most sensitive parameters, and the parameter sensitivity varies slightly with depth. Most parameters and their interactions with the other parameters show highly nonlinear effects on the model output. The most sensitive parameters will be subject to inverse estimation from the virtual field sampling data using the DREAM(ZS) algorithm. The estimated parameters can then be compared with the ground truth in order to determine the suitability of the sampling schemes to identify specific traits or parameters of the root growth model.
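The Morris elementary-effects screening described above can be reproduced in outline with the SALib package, assuming its current sample/analyze interface. The three parameters, their bounds, and the toy response below are placeholders standing in for the 37-parameter root architecture model and the aggregated sampling outputs.

```python
# Sketch of Morris OAT screening with SALib (interface assumed); the parameters,
# bounds, and toy response are placeholders, not the study's 37-parameter setup.
import numpy as np
from SALib.sample.morris import sample as morris_sample
from SALib.analyze.morris import analyze as morris_analyze

problem = {
    "num_vars": 3,
    "names": ["elongation_rate", "branch_density", "insertion_angle"],
    "bounds": [[0.5, 3.0], [0.5, 5.0], [30.0, 90.0]],
}

def toy_response(x):
    """Stand-in for an aggregated sampling output, e.g. root length density at depth."""
    elong, branches, angle = x
    return elong * branches * np.cos(np.radians(angle) / 2.0)

X = morris_sample(problem, N=50, num_levels=4)          # OAT trajectories
Y = np.apply_along_axis(toy_response, 1, X)

Si = morris_analyze(problem, X, Y, num_levels=4)        # mu*, sigma per parameter
for name, mu_star, sigma in zip(problem["names"], Si["mu_star"], Si["sigma"]):
    print(f"{name:16s} mu* = {mu_star:7.3f}  sigma = {sigma:7.3f}")
```

Large mu* flags an influential parameter, while large sigma relative to mu* indicates nonlinearity or interactions, matching the screening logic of the virtual experiment.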
Analysis of Darwin Rainfall Data: Implications on Sampling Strategy
NASA Technical Reports Server (NTRS)
Rafael, Qihang Li; Bras, Rafael L.; Veneziano, Daniele
1996-01-01
Rainfall data collected by radar in the vicinity of Darwin, Australia, have been analyzed in terms of their mean, variance, autocorrelation of area-averaged rain rate, and diurnal variation. It is found that, when compared with the well-studied GATE (Global Atmospheric Research Program Atlantic Tropical Experiment) data, Darwin rainfall has a larger coefficient of variation (CV), a faster reduction of CV with increasing area size, weaker temporal correlation, and a strong diurnal cycle and intermittence. The coefficient of variation for Darwin rainfall has a larger magnitude and exhibits larger spatial variability over the sea portion than over the land portion within the area of radar coverage. Stationary and nonstationary models have been used to study the sampling errors associated with space-based rainfall measurement. The nonstationary model shows that the sampling error is sensitive to the starting sampling time for some sampling frequencies, due to the diurnal cycle of rain, but not for others. Sampling experiments using data also show such sensitivity. When the errors are averaged over starting time, the results of the experiments and the stationary and nonstationary models match each other very closely. In the small areas for which data are available for both Darwin and GATE, the sampling error is expected to be larger for Darwin due to its larger CV.
NASA Astrophysics Data System (ADS)
Arellano, A. F., Jr.; Tang, W.
2017-12-01
Assimilating observational data of chemical constituents into a modeling system is a powerful approach for assessing changes in atmospheric composition and estimating associated emissions. However, the results of such chemical data assimilation (DA) experiments are largely subject to various key factors such as: a) a priori information, b) error specification and representation, and c) structural biases in the modeling system. Here we investigate the sensitivity of an ensemble-based data assimilation state and emission estimates to these key factors. We focus on investigating the assimilation performance of the Community Earth System Model (CESM)/CAM-Chem with the Data Assimilation Research Testbed (DART) in representing biomass burning plumes in Amazonia during the 2008 fire season. We conduct the following ensemble DA MOPITT CO experiments: 1) use of NCAR's monthly-average FINN surface fire emissions, 2) use of daily FINN surface fire emissions, 3) use of daily FINN emissions with climatological injection heights, and 4) use of perturbed FINN emission parameters to represent uncertainties not only in combustion activity but also in combustion efficiency. We show key diagnostics of assimilation performance for these experiments and verify them against available ground-based and aircraft-based measurements.
NASA Astrophysics Data System (ADS)
Miller, D. O.; Brune, W. H.
2017-12-01
Accurate estimation of secondary organic aerosol (SOA) in atmospheric models is a major research challenge due to the complexity of the chemical and physical processes involved in SOA formation and continuous aging. The primary uncertainties of SOA models include those associated with the formation of gas-phase products, the conversion between gas phase and particle phase, the aging mechanisms of SOA, and other processes related to heterogeneous and particle-phase reactions. To address this challenge, we use a modular modeling framework that combines both simple and near-explicit gas-phase reactions and a two-dimensional volatility basis set (2D-VBS) to simulate the formation and evolution of SOA. Global sensitivity analysis is used to assess the relative importance of the model input parameters. In addition, the model is compared to measurements from the Focused Isoprene eXperiment at the California Institute of Technology (FIXCIT).
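For intuition on the volatility-basis-set part of the framework, the sketch below performs equilibrium gas-particle partitioning in a one-dimensional VBS, with each bin partitioning as xi_i = 1/(1 + C*_i/C_OA) and the total organic aerosol mass found by fixed-point iteration. The bins, loadings, and seed aerosol are illustrative, and the 2D-VBS aging dimension used in the study is not represented.

```python
# Equilibrium gas-particle partitioning in a 1-D volatility basis set:
# xi_i = 1 / (1 + Cstar_i / C_OA), with C_OA solved by fixed-point iteration.
# Bin saturation concentrations and loadings are illustrative placeholders.
import numpy as np

Cstar = np.array([0.01, 0.1, 1.0, 10.0, 100.0, 1000.0])   # saturation conc., ug/m3
Ctotal = np.array([0.2, 0.3, 0.5, 1.0, 2.0, 4.0])          # gas + particle per bin, ug/m3
C_seed = 0.5                                                # pre-existing OA, ug/m3

C_OA = C_seed + 0.5 * Ctotal.sum()           # initial guess
for _ in range(100):                         # iterate to self-consistency
    xi = 1.0 / (1.0 + Cstar / C_OA)          # particle-phase fraction of each bin
    C_OA_new = C_seed + np.sum(xi * Ctotal)
    if abs(C_OA_new - C_OA) < 1e-9:
        break
    C_OA = C_OA_new

print("particle-phase fractions per bin:", np.round(xi, 3))
print(f"total OA = {C_OA:.2f} ug/m3")
```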
Psychophysically based model of surface gloss perception
NASA Astrophysics Data System (ADS)
Ferwerda, James A.; Pellacini, Fabio; Greenberg, Donald P.
2001-06-01
In this paper we introduce a new model of surface appearance that is based on quantitative studies of gloss perception. We use image synthesis techniques to conduct experiments that explore the relationships between the physical dimensions of glossy reflectance and the perceptual dimensions of glossy appearance. The product of these experiments is a psychophysically-based model of surface gloss, with dimensions that are both physically and perceptually meaningful and scales that reflect our sensitivity to gloss variations. We demonstrate that the model can be used to describe and control the appearance of glossy surfaces in synthetic images, allowing prediction of gloss matches and quantification of gloss differences. This work represents some initial steps toward developing psychophysical models of the goniometric aspects of surface appearance to complement widely-used colorimetric models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, Wenhua; Sui, Chung-Hsiung; Fan, Jiwen
Cloud microphysical properties and precipitation over the Tibetan Plateau (TP) are unique because of the high terrain, clean atmosphere, and sufficient water vapor. With dual-polarization precipitation radar and cloud radar measurements during the Third Tibetan Plateau Atmospheric Scientific Experiment (TIPEX-III), the microphysics and precipitation simulated by the Weather Research and Forecasting model (WRF) with the Chinese Academy of Meteorological Sciences (CAMS) microphysics and other microphysical schemes are investigated through a typical plateau rainfall event on 22 July 2014. Results show that the WRF-CAMS simulation reasonably reproduces the spatial distribution of 24-h accumulated precipitation, but has limitations in simulating the time evolution of precipitation rates. The model-calculated polarimetric radar variables have biases as well, suggesting bias in modeled hydrometeor types. The raindrop sizes in the convective region are larger than those in the stratiform region, as indicated by the smaller intercept of the raindrop size distribution in the former. The sensitivity experiments show that precipitation processes are sensitive to changes in the warm rain processes of condensation and nucleated droplet size (but less sensitive to the evaporation process). Increasing droplet condensation produces the best area-averaged rain rate during the weak-convection period compared with the observation, suggesting a considerable bias in thermodynamics in the baseline simulation. Increasing the initial cloud droplet size causes the rain rate to be reduced by half, an opposite effect to that of increasing droplet condensation.
Numerical Modelling of Smouldering Combustion as a Remediation Technology for NAPL Source Zones
NASA Astrophysics Data System (ADS)
Macphee, S. L.; Pironi, P.; Gerhard, J. I.; Rein, G.
2009-05-01
Smouldering combustion of non-aqueous phase liquids (NAPLs) is a novel concept that has significant potential for the remediation of contaminated industrial sites. Many common NAPLs, including coal tar, solvents, oils and petrochemicals are combustible and capable of generating substantial amounts of heat when burned. Smouldering is a flameless form of combustion in which a condensed phase fuel undergoes surface oxidation reactions within a porous matrix. Gerhard et al., 2006 (Eos Trans., 87(52), Fall Meeting Suppl. H24A) presented proof-of-concept experiments demonstrating the successful destruction of NAPLs embedded in a porous medium via smouldering. Pironi et al., 2008 (Eos Trans., 89(53), Fall Meet. Suppl. H34C) presented a series of column experiments illustrating the self-sustaining nature of the NAPL smouldering process and examined its sensitivity to a variety of key system parameters. In this work, a numerical model capable of simulating the propagation of a smouldering front in NAPL-contaminated porous media is presented. The model couples the multiphase flow code DNAPL3D-MT [Gerhard and Grant, 2007] with an analytical model for fire propagation [Richards, 1995]. The fire model is modified in this work for smouldering behaviour; in particular, incorporating a correlation of the velocity of the smouldering front to key parameters such as contaminant type, NAPL saturation, water saturation, porous media type and air injection rate developed from the column experiments. NAPL smouldering simulations are then validated against the column experiments. Furthermore, multidimensional simulations provide insight into scaling up the remediation process and are valuable for evaluating process sensitivity at the scales of in situ pilot and field applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Justin; Hund, Lauren
2017-02-01
Dynamic compression experiments are being performed on complicated materials using increasingly complex drivers. The data produced in these experiments are beginning to reach a regime where traditional analysis techniques break down, requiring the solution of an inverse problem. A common measurement in dynamic experiments is an interface velocity as a function of time, and often this functional output can be simulated using a hydrodynamics code. Bayesian model calibration is a statistical framework to estimate inputs into a computational model in the presence of multiple uncertainties, making it well suited to measurements of this type. In this article, we apply Bayesian model calibration to high pressure (250 GPa) ramp compression measurements in tantalum. We address several issues specific to this calibration, including the functional nature of the output as well as parameter and model discrepancy identifiability. Specifically, we propose scaling the likelihood function by an effective sample size rather than modeling the autocorrelation function to accommodate the functional output, and propose sensitivity analyses using the notion of `modularization' to assess the impact of experiment-specific nuisance input parameters on estimates of material properties. We conclude that the proposed Bayesian model calibration procedure results in simple, fast, and valid inferences on the equation of state parameters for tantalum.
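The likelihood-scaling idea mentioned above can be sketched in a few lines: estimate an effective sample size from the residual autocorrelation and temper an independent-Gaussian log-likelihood by n_eff/n. This is only a schematic of the general approach, not the paper's exact procedure; the simulated velocity trace, the noise model, and the truncation rule are all assumptions.

```python
# Temper a Gaussian log-likelihood for a correlated velocity trace by n_eff / n,
# with n_eff = n / (1 + 2 * sum_k rho_k) estimated from residual autocorrelation.
# Schematic only; the data, noise, and truncation rule below are assumptions.
import numpy as np

def effective_sample_size(residuals, max_lag=200):
    r = residuals - residuals.mean()
    n = r.size
    acf = np.correlate(r, r, mode="full")[n - 1:] / (np.arange(n, 0, -1) * r.var())
    rho = acf[1:max_lag]
    if np.any(rho < 0):                       # truncate the sum at the first negative lag
        rho = rho[:int(np.argmax(rho < 0))]
    return n / (1.0 + 2.0 * rho.sum())

def scaled_gaussian_loglik(obs, sim, sigma):
    resid = obs - sim
    n_eff = effective_sample_size(resid)
    loglik = -0.5 * np.sum((resid / sigma) ** 2) - resid.size * np.log(sigma)
    return (n_eff / resid.size) * loglik      # temper the likelihood by the ESS ratio

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 2000)
sim = np.tanh(5.0 * t)                                        # stand-in velocity trace
obs = sim + 0.0005 * np.cumsum(rng.standard_normal(t.size))   # correlated "noise"
print(f"scaled log-likelihood: {scaled_gaussian_loglik(obs, sim, sigma=0.02):.1f}")
```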
NASA Astrophysics Data System (ADS)
Zhou, L.; Baker, K. R.; Napelenok, S. L.; Elleman, R. A.; Urbanski, S. P.
2016-12-01
Biomass burning, including wildfires and prescribed burns, strongly impacts the global carbon cycle and is of increasing concern due to its potential impacts on ambient air quality. This modelling study focuses on the evolution of carbonaceous compounds during a prescribed burning experiment and assesses the impacts of burning on local to regional air quality. The Community Multiscale Air Quality (CMAQ) model is used to conduct 4 and 2 km grid resolution simulations of prescribed burning experiments in southeast Washington state and western Idaho in summer 2013. Ground and airborne measurements from the field experiment are used to evaluate the model's performance in capturing surface and aloft impacts from the burning events. Phase partitioning of organic compounds in the plume is studied, as it is a crucial step towards understanding the fate of carbonaceous compounds. Sensitivity analyses of ambient concentrations and deposition to emissions are conducted for organic carbon, elemental carbon and ozone to estimate the impacts of fire on air quality.
Stress Sensitivity and Stress Generation in Social Anxiety Disorder: A Temporal Process Approach
Farmer, Antonina S.; Kashdan, Todd B.
2015-01-01
Dominant theoretical models of social anxiety disorder (SAD) suggest that people who suffer from function-impairing social fears are likely to react more strongly to social stressors. Researchers have examined the reactivity of people with SAD to stressful laboratory tasks, but there is little knowledge about how stress affects their daily lives. We asked 79 adults from the community, 40 diagnosed with SAD and 39 matched healthy controls, to self-monitor their social interactions, social events, and emotional experiences over two weeks using electronic diaries. These data allowed us to examine associations of social events and emotional well-being both within-day and from one day to the next. Using hierarchical linear modeling, we found all participants to report increases in negative affect and decreases in positive affect and self-esteem on days when they experienced more stressful social events. However, people with SAD displayed greater stress sensitivity, particularly in negative emotion reactions to stressful social events, compared to healthy controls. Groups also differed in how previous days’ events influenced sensitivity to current days’ events. Moreover, we found evidence of stress generation in that the SAD group reported more frequent interpersonal stress, though temporal analyses did not suggest greater likelihood of social stress on days following intense negative emotions. Our findings support the role of heightened social stress sensitivity in SAD, highlighting rigidity in reactions and occurrence of stressful experiences from one day to the next. These findings also shed light on theoretical models of emotions and self-esteem in SAD and present important clinical implications. PMID:25688437
Simulation of the Intercontinental Transport, Aging, and Removal of a Boreal Fire Smoke Plume
NASA Astrophysics Data System (ADS)
Ghan, S. J.; Chapman, E. G.; Easter, R. C.; Reid, J. S.; Justice, C.
2003-12-01
Back trajectories suggest that an elevated absorbing aerosol plume observed over Oklahoma in May 2003 can be traced to intense forest fires in Siberia two weeks earlier. The Fire Locating and Modeling of Burning Emissions (FLAMBE) product is used to estimate smoke emissions from those fires. The Model for Integrated Research on Atmospheric Global Exchanges (MIRAGE) is used to simulate the transport, aging, radiative properties, and removal of the aerosol. The simulated aerosol optical depth is compared with satellite retrievals, and the vertical structure of the plume is compared with in situ measurements. Sensitivity experiments are performed to determine the sensitivity of the simulated plume to uncertainty in the emissions vertical profile, mass flux, size distribution, and composition.
Optical skin friction measurement technique in hypersonic wind tunnel
NASA Astrophysics Data System (ADS)
Chen, Xing; Yao, Dapeng; Wen, Shuai; Pan, Junjie
2016-10-01
Shear-sensitive liquid-crystal coatings (SSLCCs) have the optical characteristic of being sensitive to the applied shear stress. Based on this, a novel technique is developed to measure the shear stress on a model surface, in both magnitude and direction, in hypersonic flow. The optical skin friction measurement system was built at the China Academy of Aerospace Aerodynamics (CAAA). A series of experiments on a hypersonic vehicle was performed in a CAAA wind tunnel. The global skin friction distribution of the model, which reveals complicated flow structures, is discussed, and a brief mechanism analysis and an evaluation of the optical measurement technique are given.
Beauty and the beholder: the role of visual sensitivity in visual preference
Spehar, Branka; Wong, Solomon; van de Klundert, Sarah; Lui, Jessie; Clifford, Colin W. G.; Taylor, Richard P.
2015-01-01
For centuries, the essence of aesthetic experience has remained one of the most intriguing mysteries for philosophers, artists, art historians and scientists alike. Recently, views emphasizing the link between aesthetics, perception and brain function have become increasingly prevalent (Ramachandran and Hirstein, 1999; Zeki, 1999; Livingstone, 2002; Ishizu and Zeki, 2013). The link between art and the fractal-like structure of natural images has also been highlighted (Spehar et al., 2003; Graham and Field, 2007; Graham and Redies, 2010). Motivated by these claims and our previous findings that humans display a consistent preference across various images with fractal-like statistics, here we explore the possibility that observers’ preference for visual patterns might be related to their sensitivity to such patterns. We measure sensitivity to simple visual patterns (sine-wave gratings varying in spatial frequency and random textures with varying scaling exponent) and find that these sensitivities are highly correlated with the visual preferences exhibited by the same observers. Although we do not attempt to offer a comprehensive neural model of aesthetic experience, we demonstrate a strong relationship between visual sensitivity and preference for simple visual patterns. Broadly speaking, our results support assertions that there is a close relationship between aesthetic experience and the sensory coding of natural stimuli. PMID:26441611
Ethical Sensitivity in Nursing Ethical Leadership: A Content Analysis of Iranian Nurses' Experiences
Esmaelzadeh, Fatemeh; Abbaszadeh, Abbas; Borhani, Fariba; Peyrovi, Hamid
2017-01-01
Background: Considering that many nursing actions affect other people’s health and life, sensitivity to ethics in nursing practice is highly important for ethical leaders, who serve as role models. Objective: The study aims to explore ethical sensitivity in ethical nursing leaders in Iran. Method: This was a qualitative study based on conventional content analysis, conducted in 2015. Data were collected using in-depth, semi-structured interviews with 20 Iranian nurses. The participants were chosen using purposive sampling. Data were analyzed using conventional content analysis. In order to increase the accuracy and integrity of the data, Lincoln and Guba's criteria were considered. Results: Fourteen sub-categories and five main categories emerged. Main categories consisted of sensitivity to care, sensitivity to errors, sensitivity to communication, sensitivity in decision making and sensitivity to ethical practice. Conclusion: Ethical sensitivity appears to be a valuable attribute for ethical nurse leaders, having an important effect on various aspects of professional practice and helping the development of ethics in nursing practice. PMID:28584564
NASA Astrophysics Data System (ADS)
McGuire, A. D.
2016-12-01
The Model Integration Group of the Permafrost Carbon Network (see http://www.permafrostcarbon.org/) has conducted studies to evaluate the sensitivity of offline terrestrial permafrost and carbon models to both historical and projected climate change. These studies indicate that there is a wide range of (1) the initial states of permafrost extent and carbon stocks simulated by these models and (2) the responses of permafrost extent and carbon stocks to both historical and projected climate change. In this study, we synthesize what has been learned about the variability in initial states among models and the driving factors that contribute to variability in the sensitivity of responses. We conclude the talk with a discussion of efforts needed by (1) the modeling community to standardize structural representation of permafrost and carbon dynamics among models that are used to evaluate the permafrost carbon feedback and (2) the modeling and observational communities to jointly develop data sets and methodologies to more effectively benchmark models.
The analysis sensitivity to tropical winds from the Global Weather Experiment
NASA Technical Reports Server (NTRS)
Paegle, J.; Paegle, J. N.; Baker, W. E.
1986-01-01
The global scale divergent and rotational flow components of the Global Weather Experiment (GWE) are diagnosed from three different analyses of the data. The rotational flow shows closer agreement between the analyses than does the divergent flow. Although the major outflow and inflow centers are similarly placed in all analyses, the global kinetic energy of the divergent wind varies by about a factor of 2 between different analyses while the global kinetic energy of the rotational wind varies by only about 10 percent between the analyses. A series of real data assimilation experiments has been performed with the GLA general circulation model using different amounts of tropical wind data during the First Special Observing Period of the Global Weather Experiment. In experiment 1, all available tropical wind data were used; in the second experiment, tropical wind data were suppressed; and in the third and fourth experiments, only tropical wind data with westerly and easterly components, respectively, were assimilated. The rotational wind appears to be more sensitive to the presence or absence of tropical wind data than the divergent wind. It appears that the model, given only extratropical observations, generates excessively strong upper tropospheric westerlies. These biases are sufficiently pronounced to amplify the globally integrated rotational flow kinetic energy by about 10 percent and the global divergent flow kinetic energy by about a factor of 2. Including only easterly wind data in the tropics is more effective in controlling the model error than including only westerly wind data. This conclusion is especially noteworthy because approximately twice as many upper tropospheric westerly winds were available in these cases as easterly winds.
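For reference, the rotational/divergent partition used in this diagnosis follows the standard Helmholtz decomposition of the horizontal wind; the sketch below is the textbook form, not necessarily the exact discretization used in the paper:

```latex
% Helmholtz decomposition into rotational (streamfunction psi) and divergent
% (velocity potential chi) parts, with the corresponding kinetic energies.
\mathbf{v} = \mathbf{v}_{\psi} + \mathbf{v}_{\chi}
           = \hat{\mathbf{k}}\times\nabla\psi + \nabla\chi,
\qquad
\zeta = \nabla^{2}\psi, \quad \delta = \nabla^{2}\chi,
\qquad
K_{\psi} = \tfrac{1}{2}\,\overline{\lvert\nabla\psi\rvert^{2}}, \quad
K_{\chi} = \tfrac{1}{2}\,\overline{\lvert\nabla\chi\rvert^{2}}.
```

In these terms, the factor-of-2 spread quoted above refers to K_chi (divergent kinetic energy) and the roughly 10 percent spread to K_psi.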
Determination and impact of surface radiative processes for TOGA COARE
NASA Technical Reports Server (NTRS)
Curry, Judith A.; Ackerman, Thomas; Rossow, William B.; Webster, Peter J.
1991-01-01
Experiments using atmospheric general circulation models have shown that the atmospheric circulation is very sensitive to small changes in sea surface temperature in the tropical western Pacific Ocean warm pool region. The mutual sensitivity of the ocean and the atmosphere in the warm pool region places stringent requirements on models of the coupled ocean atmosphere system. At present, diagnostic studies using available data sets have been unable to balance the surface energy budget in the warm pool region to better than 50 to 80 W/sq m. The Tropical Ocean Global Atmosphere (TOGA) Coupled Ocean Atmosphere Response Experiment (COARE) is an observation and modelling program that aims specifically at the elucidation of the physical processes that determine the mean and transient state of the warm pool region and the manner in which the warm pool interacts with the global ocean and atmosphere. This project focuses on one very important aspect of the ocean atmosphere interface component of TOGA COARE, namely the temporal and spatial variability of surface radiative fluxes in the warm pool region.
Shock Initiation Experiments with Ignition and Growth Modeling on the HMX-Based Explosive LX-14
NASA Astrophysics Data System (ADS)
Vandersall, Kevin S.; Dehaven, Martin R.; Strickland, Shawn L.; Tarver, Craig M.; Springer, H. Keo; Cowan, Matt R.
2017-06-01
Shock initiation experiments on the HMX-based explosive LX-14 were performed to obtain in-situ pressure gauge data, characterize the run-distance-to-detonation behavior, and provide a basis for Ignition and Growth reactive flow modeling. A 101 mm diameter gas gun was utilized to initiate the explosive charges with manganin piezoresistive pressure gauge packages placed between sample disks pressed to different densities (1.57 or 1.83 g/cm³, corresponding to 85% or 99% of theoretical maximum density (TMD), respectively). The shock sensitivity was found to increase with decreasing density as expected. Ignition and Growth model parameters were derived that yielded reasonable agreement with the experimental data at both initial densities. The shock sensitivity at the tested densities will be compared to prior work published on other HMX-based formulations. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. This work was funded in part by the Joint DoD-DOE Munitions Program.
NASA Astrophysics Data System (ADS)
Rosland, R.; Strand, Ø.; Alunno-Bruscia, M.; Bacher, C.; Strohmeier, T.
2009-08-01
A Dynamic Energy Budget (DEB) model for simulation of growth and bioenergetics of blue mussels (Mytilus edulis) has been tested in three low seston sites in southern Norway. The observations comprise four datasets from laboratory experiments (physiological and biometrical mussel data) and three datasets from in situ growth experiments (biometrical mussel data). Additional in situ data from commercial farms in southern Norway were used for estimation of biometrical relationships in the mussels. Three DEB parameters (shape coefficient, half saturation coefficient, and somatic maintenance rate coefficient) were estimated from experimental data, and the estimated parameters were complemented with parameter values from literature to establish a basic parameter set. Model simulations based on the basic parameter set and site-specific environmental forcing matched fairly well with observations, but the model was not successful in simulating growth at the extreme low seston regimes in the laboratory experiments in which the long period of negative growth caused negative reproductive mass. Sensitivity analysis indicated that the model was moderately sensitive to changes in the parameter values and initial conditions. The results show the robust properties of the DEB model as it manages to simulate mussel growth in several independent datasets from a common basic parameter set. However, the results also demonstrate limitations of Chl a as a food proxy for blue mussels and limitations of the DEB model to simulate long term starvation. Future work should aim at establishing better food proxies and improving the model formulations of the processes involved in food ingestion and assimilation. The current DEB model should also be elaborated to allow shrinking in the structural tissue in order to produce more realistic growth simulations during long periods of starvation.
Online and offline tools for head movement compensation in MEG.
Stolk, Arjen; Todorovic, Ana; Schoffelen, Jan-Mathijs; Oostenveld, Robert
2013-03-01
Magnetoencephalography (MEG) is measured above the head, which makes it sensitive to variations of the head position with respect to the sensors. Head movements blur the topography of the neuronal sources of the MEG signal, increase localization errors, and reduce statistical sensitivity. Here we describe two novel and readily applicable methods that compensate for the detrimental effects of head motion on the statistical sensitivity of MEG experiments. First, we introduce an online procedure that continuously monitors head position. Second, we describe an offline analysis method that takes into account the head position time-series. We quantify the performance of these methods in the context of three different experimental settings, involving somatosensory, visual and auditory stimuli, assessing both individual and group-level statistics. The online head localization procedure allowed for optimal repositioning of the subjects over multiple sessions, resulting in a 28% reduction of the variance in dipole position and an improvement of up to 15% in statistical sensitivity. Offline incorporation of the head position time-series into the general linear model resulted in improvements of group-level statistical sensitivity between 15% and 29%. These tools can substantially reduce the influence of head movement within and between sessions, increasing the sensitivity of many cognitive neuroscience experiments. Copyright © 2012 Elsevier Inc. All rights reserved.
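A minimal sketch of the offline idea, regressing head-position parameters out of trial-level MEG estimates with an ordinary-least-squares GLM, is shown below. The function name, the per-trial data layout and the six-parameter head description are illustrative assumptions, not the implementation used in the paper.

```python
# Hedged sketch: remove head-position confounds from trial-level MEG estimates
# via an ordinary-least-squares GLM before computing group statistics.
import numpy as np

def remove_headpos_confounds(data, headpos):
    """data: (n_trials, n_channels) trial estimates; headpos: (n_trials, n_params)
    head-position parameters (e.g. translations/rotations per trial)."""
    X = np.column_stack([np.ones(len(headpos)), headpos])   # intercept + confounds
    beta, *_ = np.linalg.lstsq(X, data, rcond=None)         # fit GLM per channel
    cleaned = data - headpos @ beta[1:]                      # subtract confound fit, keep mean
    return cleaned

# usage with synthetic numbers (one channel, 200 trials, 6 head-position parameters)
rng = np.random.default_rng(0)
headpos = rng.normal(size=(200, 6))
signal = rng.normal(size=(200, 1))
data = signal + 0.5 * (headpos @ rng.normal(size=(6, 1))) + 0.1 * rng.normal(size=(200, 1))
print(np.corrcoef(signal[:, 0], remove_headpos_confounds(data, headpos)[:, 0])[0, 1])
```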
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hagos, Samson M.; Feng, Zhe; Burleyson, Casey D.
Regional cloud-permitting model simulations of cloud populations observed during the 2011 ARM Madden-Julian Oscillation Investigation Experiment/Dynamics of the Madden-Julian Oscillation (AMIE/DYNAMO) field campaign are evaluated against radar and ship-based measurements. The sensitivity of model-simulated surface rain rate statistics to parameters and parameterizations of hydrometeor sizes in five commonly used WRF microphysics schemes is examined. It is shown that at 2 km grid spacing, the model generally overestimates rain rate from large and deep convective cores. Sensitivity runs that vary parameters affecting rain drop or ice particle size distributions (e.g., a more aggressive break-up process) generally reduce the bias in rain-rate and boundary layer temperature statistics as the smaller particles become more vulnerable to evaporation. Furthermore, significant improvement in the convective rain-rate statistics is observed when the horizontal grid spacing is reduced to 1 km and 0.5 km, while the statistics worsen at 4 km grid spacing, as increased turbulence enhances evaporation. The results suggest that modulation of evaporation processes, through parameterization of turbulent mixing and break-up of hydrometeors, may provide a potential avenue for correcting cloud statistics and associated boundary layer temperature biases in regional and global cloud-permitting model simulations.
NASA Astrophysics Data System (ADS)
Chevuturi, Amulya; Turner, Andrew G.; Woolnough, Steve J.; Martin, Gill
2017-04-01
In this study we investigate the development of biases over the Indian region in summer hindcasts of the UK Met Office coupled initialised global seasonal forecasting system, GloSea5-GC2. Previous work has demonstrated the rapid evolution of strong monsoon circulation biases over India from seasonal forecasts initialised in early May, together with coupled strong easterly wind biases on the equator. These mean state biases lead to strong precipitation errors during the monsoon over the subcontinent. We analyse a set of three springtime start dates for the 20-year hindcast period (1992-2011) and fifteen total ensemble members for each year. We use comparisons with a variety of observations to assess the evolution of the mean state biases over the Indian land surface. All biases within the model develop rapidly, particularly surface heat and radiation flux biases. Strong biases are present within the model climatology from the pre-monsoon (May) in the surface heat fluxes over India (higher sensible / lower latent heat fluxes) when compared to observed estimates. The early evolution of such biases prior to onset rains suggests possible problems with the land surface scheme or soil moisture errors. Further analysis of soil moisture over the Indian land surface shows a dry bias present from the beginning of the hindcasts during the pre-monsoon. This lasts until after the monsoon develops (July), after which there is a wet bias over the region. Soil moisture used for initialization of the model also shows a dry bias when compared against observed estimates, which may lead to the same bias in the model. The early dry bias in the model may reduce local moisture availability through surface evaporation and thus may limit precipitation recycling. On this premise, we test the sensitivity of the monsoon in the model to higher soil moisture forcing. We run sensitivity experiments in the atmosphere-only version of the model, initialized using gridpoint-wise annual soil moisture maxima over the Indian land surface. We plan to analyse the response of these sensitivity experiments in terms of the seasonal forecasting of surface heat fluxes and, subsequently, monsoon precipitation.
NASA Astrophysics Data System (ADS)
Niezgodzki, Igor; Knorr, Gregor; Lohmann, Gerrit; Tyszka, Jarosław; Markwick, Paul J.
2017-09-01
We investigate the impact of different CO2 levels and different subarctic gateway configurations on the surface temperatures during the latest Cretaceous using the Earth System Model COSMOS. The simulated temperatures are compared with the surface temperature reconstructions based on a recent compilation of the latest Cretaceous proxies. In our numerical experiments, the CO2 level ranges from 1 to 6 times the preindustrial (PI) CO2 level of 280 ppm. On a global scale, the most reasonable match between modeling and proxy data is obtained for the experiments with 3 to 5 × PI CO2 concentrations. However, the simulated low- (high-) latitude temperatures are too high (low) as compared to the proxy data. The moderate CO2 level scenarios might be more realistic if we take into account proxy data and the dead zone effect criterion. Furthermore, we test if the model-data discrepancies can be caused by too simplistic proxy-data interpretations. This is distinctly seen at high latitudes, where most proxies are biased toward summer temperatures. Additional sensitivity experiments with different ocean gateway configurations and constant CO2 level indicate only minor surface temperature changes (< 1°C) on a global scale, with higher values (up to 8°C) on a regional scale. These findings imply that modeled and reconstructed temperature gradients are to a large degree only qualitatively comparable, providing challenges for the interpretation of proxy data and/or model sensitivity. With respect to the latter, our results suggest that an assessment of greenhouse worlds is best constrained by temperatures in the midlatitudes.
NASA Astrophysics Data System (ADS)
Meyer, P. D.; Yabusaki, S.; Curtis, G. P.; Ye, M.; Fang, Y.
2011-12-01
A three-dimensional, variably-saturated flow and multicomponent biogeochemical reactive transport model of uranium bioremediation was used to generate synthetic data. The 3-D model was based on a field experiment at the U.S. Dept. of Energy Rifle Integrated Field Research Challenge site that used acetate biostimulation of indigenous metal reducing bacteria to catalyze the conversion of aqueous uranium in the +6 oxidation state to immobile solid-associated uranium in the +4 oxidation state. A key assumption in past modeling studies at this site was that a comprehensive reaction network could be developed largely through one-dimensional modeling. Sensitivity analyses and parameter estimation were completed for a 1-D reactive transport model abstracted from the 3-D model to test this assumption, to identify parameters with the greatest potential to contribute to model predictive uncertainty, and to evaluate model structure and data limitations. Results showed that sensitivities of key biogeochemical concentrations varied in space and time, that model nonlinearities and/or parameter interactions have a significant impact on calculated sensitivities, and that the complexity of the model's representation of processes affecting Fe(II) in the system may make it difficult to correctly attribute observed Fe(II) behavior to modeled processes. Non-uniformity of the 3-D simulated groundwater flux and averaging of the 3-D synthetic data for use as calibration targets in the 1-D modeling resulted in systematic errors in the 1-D model parameter estimates and outputs. This occurred despite using the same reaction network for 1-D modeling as used in the data-generating 3-D model. Predictive uncertainty of the 1-D model appeared to be significantly underestimated by linear parameter uncertainty estimates.
Bayesian component separation: The Planck experience
NASA Astrophysics Data System (ADS)
Wehus, Ingunn Kathrine; Eriksen, Hans Kristian
2018-05-01
Bayesian component separation techniques have played a central role in the data reduction process of Planck. The most important strength of this approach is its global nature, in which a parametric and physical model is fitted to the data. Such physical modeling allows the user to constrain very general data models, and jointly probe cosmological, astrophysical and instrumental parameters. This approach also supports statistically robust goodness-of-fit tests in terms of data-minus-model residual maps, which are essential for identifying residual systematic effects in the data. The main challenges are high code complexity and computational cost. Whether or not these costs are justified for a given experiment depends on its final uncertainty budget. We therefore predict that the importance of Bayesian component separation techniques is likely to increase with time for intensity mapping experiments, similar to what has happened in the CMB field, as observational techniques mature, and their overall sensitivity improves.
Ma, Jun; Liu, Lei; Ge, Sai; Xue, Qiang; Li, Jiangshan; Wan, Yong; Hui, Xinminnan
2018-03-01
A quantitative description of aerobic waste degradation is important in evaluating landfill waste stability and economic management. This research aimed to develop a coupling model to predict the degree of aerobic waste degradation. On the basis of the first-order kinetic equation and the law of conservation of mass, we first developed the coupling model of aerobic waste degradation that considered temperature, initial moisture content and air injection volume to simulate and predict the chemical oxygen demand in the leachate. Three different laboratory experiments on aerobic waste degradation were simulated to test the model applicability. Parameter sensitivity analyses were conducted to evaluate the reliability of parameters. The coupling model can simulate aerobic waste degradation, and the obtained simulation agreed with the corresponding results of the experiment. Comparison of the experiment and simulation demonstrated that the coupling model is a new approach to predict aerobic waste degradation and can be considered as the basis for selecting the economic air injection volume and appropriate management in the future.
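A minimal sketch of a first-order kinetic description of leachate COD with multiplicative modifiers for temperature, initial moisture content and air injection volume is given below; the functional forms and parameter values are illustrative assumptions, not the calibrated coupling model reported by the authors.

```python
# Hedged sketch of first-order decay of leachate COD under aerobic degradation,
# with illustrative modifiers for temperature, moisture and air injection rate.
import numpy as np

def cod_timeseries(t_days, cod0=20000.0, k_ref=0.05, temp_c=35.0,
                   moisture=0.55, air_rate=0.1):
    f_temp = 1.07 ** (temp_c - 35.0)     # Arrhenius-like temperature factor (assumed)
    f_moist = moisture / 0.55            # relative to an assumed reference moisture content
    f_air = air_rate / 0.1               # relative to an assumed reference injection rate
    k = k_ref * f_temp * f_moist * f_air # effective first-order rate constant (1/day)
    return cod0 * np.exp(-k * np.asarray(t_days, dtype=float))

print(cod_timeseries([0, 10, 30, 60], temp_c=40.0))  # COD in mg/L, illustrative only
```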
Dynamic Experiments and Constitutive Model Performance for Polycarbonate
2014-07-01
[Extraction fragment from the report; only figure-list and partial body text survive: Figure 23 shows a parameter sensitivity study as numerical contours of axial stress (with the alpha or beta phase disabled; positive stress is tensile, negative compressive), and the traditional Taylor cylinder impact experiment is described as achieving large strain and high-strain-rate deformation under hydrostatic compression.]
ERIC Educational Resources Information Center
Daumas, Stephanie; Sandin, Johan; Chen, Karen S.; Kobayashi, Dione; Tulloch, Jane; Martin, Stephen J.; Games, Dora; Morris, Richard G. M.
2008-01-01
Two experiments were conducted to investigate the possibility of faster forgetting by PDAPP mice (a well-established model of Alzheimer's disease as reported by Games and colleagues in an earlier paper). Experiment 1, using mice aged 13-16 mo, confirmed the presence of a deficit in a spatial reference memory task in the water maze by hemizygous…
Habilitation thesis on STT and Higgs searches in WH production (in FRENCH)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sonnenschein, Lars
The detector of the D0 experiment at the proton-antiproton collider Tevatron in Run II is discussed in detail. The performance of the collider and the experiment is presented. Standard model Higgs searches with integrated luminosities between 260 pb⁻¹ and 950 pb⁻¹ and their combination are performed. No deviation from the SM background expectation has been observed. Sensitivity prospects at the Tevatron are shown.
Klement, William; Wilk, Szymon; Michalowski, Wojtek; Farion, Ken J; Osmond, Martin H; Verter, Vedat
2012-03-01
Using an automatic data-driven approach, this paper develops a prediction model that achieves more balanced performance (in terms of sensitivity and specificity) than the Canadian Assessment of Tomography for Childhood Head Injury (CATCH) rule, when predicting the need for computed tomography (CT) imaging of children after a minor head injury. CT is widely considered an effective tool for evaluating patients with minor head trauma who have potentially suffered serious intracranial injury. However, its use poses possible harmful effects, particularly for children, due to exposure to radiation. Safety concerns, along with issues of cost and practice variability, have led to calls for the development of effective methods to decide when CT imaging is needed. Clinical decision rules represent such methods and are normally derived from the analysis of large prospectively collected patient data sets. The CATCH rule was created by a group of Canadian pediatric emergency physicians to support the decision of referring children with minor head injury to CT imaging. The goal of the CATCH rule was to maximize the sensitivity of predictions of potential intracranial lesion while keeping specificity at a reasonable level. After extensive analysis of the CATCH data set, characterized by severe class imbalance, and after a thorough evaluation of several data mining methods, we derived an ensemble of multiple Naive Bayes classifiers as the prediction model for CT imaging decisions. In the first phase of the experiment we compared the proposed ensemble model to other ensemble models employing rule-, tree- and instance-based member classifiers. Our prediction model demonstrated the best performance in terms of AUC, G-mean and sensitivity measures. In the second phase, using a bootstrapping experiment similar to that reported by the CATCH investigators, we showed that the proposed ensemble model achieved a more balanced predictive performance than the CATCH rule with an average sensitivity of 82.8% and an average specificity of 74.4% (vs. 98.1% and 50.0% for the CATCH rule respectively). Automatically derived prediction models cannot replace a physician's acumen. However, they help establish reference performance indicators for the purpose of developing clinical decision rules so the trade-off between prediction sensitivity and specificity is better understood. Copyright © 2011 Elsevier B.V. All rights reserved.
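The sketch below illustrates the general shape of such a model: an ensemble of Naive Bayes classifiers trained on bootstrap resamples of an imbalanced data set, scored by sensitivity and specificity. The surrogate data, member count and voting rule are assumptions for illustration; this is not the authors' pipeline or the CATCH data set.

```python
# Hedged sketch: bootstrap ensemble of Gaussian Naive Bayes members on an
# imbalanced surrogate data set, evaluated by sensitivity and specificity.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=2000, n_features=10, weights=[0.9, 0.1],
                           random_state=0)              # imbalanced surrogate data
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

rng = np.random.default_rng(0)
members = []
for _ in range(25):                                      # bootstrap-resampled NB members
    idx = rng.choice(len(Xtr), size=len(Xtr), replace=True)
    members.append(GaussianNB().fit(Xtr[idx], ytr[idx]))

votes = np.mean([m.predict(Xte) for m in members], axis=0)
pred = (votes >= 0.5).astype(int)                        # majority vote

tp = np.sum((pred == 1) & (yte == 1)); fn = np.sum((pred == 0) & (yte == 1))
tn = np.sum((pred == 0) & (yte == 0)); fp = np.sum((pred == 1) & (yte == 0))
print("sensitivity", tp / (tp + fn), "specificity", tn / (tn + fp))
```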
Knight, Christopher G.; Knight, Sylvia H. E.; Massey, Neil; Aina, Tolu; Christensen, Carl; Frame, Dave J.; Kettleborough, Jamie A.; Martin, Andrew; Pascoe, Stephen; Sanderson, Ben; Stainforth, David A.; Allen, Myles R.
2007-01-01
In complex spatial models, as used to predict the climate response to greenhouse gas emissions, parameter variation within plausible bounds has major effects on model behavior of interest. Here, we present an unprecedentedly large ensemble of >57,000 climate model runs in which 10 parameters, initial conditions, hardware, and software used to run the model all have been varied. We relate information about the model runs to large-scale model behavior (equilibrium sensitivity of global mean temperature to a doubling of carbon dioxide). We demonstrate that effects of parameter, hardware, and software variation are detectable, complex, and interacting. However, we find most of the effects of parameter variation are caused by a small subset of parameters. Notably, the entrainment coefficient in clouds is associated with 30% of the variation seen in climate sensitivity, although both low and high values can give high climate sensitivity. We demonstrate that the effect of hardware and software is small relative to the effect of parameter variation and, over the wide range of systems tested, may be treated as equivalent to that caused by changes in initial conditions. We discuss the significance of these results in relation to the design and interpretation of climate modeling experiments and large-scale modeling more generally. PMID:17640921
NASA Astrophysics Data System (ADS)
Zhao, Chun; Huang, Maoyi; Fast, Jerome D.; Berg, Larry K.; Qian, Yun; Guenther, Alex; Gu, Dasa; Shrivastava, Manish; Liu, Ying; Walters, Stacy; Pfister, Gabriele; Jin, Jiming; Shilling, John E.; Warneke, Carsten
2016-05-01
Current climate models still have large uncertainties in estimating biogenic trace gases, which can significantly affect atmospheric chemistry and secondary aerosol formation that ultimately influences air quality and aerosol radiative forcing. These uncertainties result from many factors, including uncertainties in land surface processes and specification of vegetation types, both of which can affect the simulated near-surface fluxes of biogenic volatile organic compounds (BVOCs). In this study, the latest version of the Model of Emissions of Gases and Aerosols from Nature (MEGAN v2.1) is coupled within the land surface scheme CLM4 (Community Land Model version 4.0) in the Weather Research and Forecasting model with chemistry (WRF-Chem). In this implementation, MEGAN v2.1 shares a consistent vegetation map with CLM4 for estimating BVOC emissions. This is unlike MEGAN v2.0 in the public version of WRF-Chem, which uses a stand-alone vegetation map that differs from the one used by the land surface schemes. This improved modeling framework is used to investigate the impact of two land surface schemes, CLM4 and Noah, on BVOCs and to examine the sensitivity of BVOCs to vegetation distributions in California. The measurements collected during the Carbonaceous Aerosol and Radiative Effects Study (CARES) and the California Nexus of Air Quality and Climate Experiment (CalNex) conducted in June of 2010 provided an opportunity to evaluate the simulated BVOCs. Sensitivity experiments show that land surface schemes do influence the simulated BVOCs, but the impact is much smaller than that of vegetation distributions. This study indicates that more effort is needed to obtain the most appropriate and accurate land cover data sets for climate and air quality models in terms of simulating BVOCs, oxidant chemistry and, consequently, secondary organic aerosol formation.
Digital PCR Modeling for Maximal Sensitivity, Dynamic Range and Measurement Precision
Majumdar, Nivedita; Wessel, Thomas; Marks, Jeffrey
2015-01-01
The great promise of digital PCR is the potential for unparalleled precision enabling accurate measurements for genetic quantification. A challenge associated with digital PCR experiments, when testing unknown samples, is to perform experiments at dilutions allowing the detection of one or more targets of interest at a desired level of precision. While theory states that optimal precision (Po) is achieved by targeting ~1.59 mean copies per partition (λ), and that dynamic range (R) includes the space spanning one positive (λL) to one negative (λU) result from the total number of partitions (n), these results are tempered for the practitioner seeking to construct digital PCR experiments in the laboratory. A mathematical framework is presented elucidating the relationships between precision, dynamic range, number of partitions, interrogated volume, and sensitivity in digital PCR. The impact that false reaction calls and volumetric variation have on sensitivity and precision is next considered. The resultant effects on sensitivity and precision are established via Monte Carlo simulations reflecting the real-world likelihood of encountering such scenarios in the laboratory. The simulations provide insight to the practitioner on how to adapt experimental loading concentrations to counteract any one of these conditions. The framework is augmented with a method of extending the dynamic range of digital PCR, with and without increasing n, via the use of dilutions. An example experiment demonstrating the capabilities of the framework is presented enabling detection across 3.33 logs of starting copy concentration. PMID:25806524
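The core Poisson relations behind these statements can be sketched as follows; this toy calculation covers only the ideal case and omits the false-call and volumetric-variation effects treated in the paper.

```python
# Hedged sketch of basic digital PCR statistics: estimate mean copies per
# partition (lambda) from the fraction of positive partitions, and its relative
# precision (CV) via the delta method. Illustrative only.
import numpy as np

def dpcr_estimate(n_positive, n_partitions):
    p = n_positive / n_partitions
    lam = -np.log(1.0 - p)                              # Poisson estimate of copies/partition
    cv = np.sqrt(p / (1.0 - p)) / (lam * np.sqrt(n_partitions))  # delta-method CV of lambda-hat
    return lam, cv

# low, near-optimal (lambda ~ 1.6), and near-saturated loadings on 20,000 partitions
for n_pos in (100, 16000, 19990):
    lam, cv = dpcr_estimate(n_pos, 20000)
    print(f"positives={n_pos:6d}  lambda={lam:7.4f}  CV={100 * cv:6.2f}%")
```

The middle case sits close to the ~1.59 copies-per-partition optimum mentioned above, which is where the computed CV is smallest for a fixed number of partitions.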
Covey, Curt; Lucas, Donald D.; Tannahill, John; ...
2013-07-01
Modern climate models contain numerous input parameters, each with a range of possible values. Since the volume of parameter space increases exponentially with the number of parameters N, it is generally impossible to directly evaluate a model throughout this space even if just 2-3 values are chosen for each parameter. Sensitivity screening algorithms, however, can identify input parameters having relatively little effect on a variety of output fields, either individually or in nonlinear combination. This can aid both model development and the uncertainty quantification (UQ) process. Here we report results from a parameter sensitivity screening algorithm hitherto untested in climate modeling, the Morris one-at-a-time (MOAT) method. This algorithm drastically reduces the computational cost of estimating sensitivities in a high dimensional parameter space because the sample size grows linearly rather than exponentially with N. It nevertheless samples over much of the N-dimensional volume and allows assessment of parameter interactions, unlike traditional elementary one-at-a-time (EOAT) parameter variation. We applied both EOAT and MOAT to the Community Atmosphere Model (CAM), assessing CAM’s behavior as a function of 27 uncertain input parameters related to the boundary layer, clouds, and other subgrid scale processes. For radiation balance at the top of the atmosphere, EOAT and MOAT rank most input parameters similarly, but MOAT identifies a sensitivity that EOAT underplays for two convection parameters that operate nonlinearly in the model. MOAT’s ranking of input parameters is robust to modest algorithmic variations, and it is qualitatively consistent with model development experience. Supporting information is also provided at the end of the full text of the article.
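A toy sketch of the elementary-effects idea behind MOAT screening is given below. The trajectory construction is simplified (one-at-a-time steps from a random base point rather than the full Morris design), and the three-parameter function is a stand-in, not CAM.

```python
# Hedged sketch of Morris-style elementary-effects screening on a toy function.
import numpy as np

def toy_model(x):                       # nonlinear toy "model" output
    return x[0] + 2.0 * x[1] ** 2 + x[0] * x[2]

def moat_screen(model, n_params, n_traj=50, delta=0.2, seed=0):
    rng = np.random.default_rng(seed)
    effects = [[] for _ in range(n_params)]
    for _ in range(n_traj):
        x = rng.uniform(0.0, 1.0 - delta, size=n_params)   # random base point in [0,1]^d
        y0 = model(x)
        for i in rng.permutation(n_params):                # perturb one parameter at a time
            x_step = x.copy()
            x_step[i] += delta
            effects[i].append((model(x_step) - y0) / delta)
    mu_star = [np.mean(np.abs(e)) for e in effects]        # mean |elementary effect|
    sigma = [np.std(e) for e in effects]                   # spread => nonlinearity/interactions
    return mu_star, sigma

mu_star, sigma = moat_screen(toy_model, n_params=3)
print("mu* =", np.round(mu_star, 3), " sigma =", np.round(sigma, 3))
```

Large mu* flags an influential parameter; a sigma that is large relative to mu* flags nonlinear or interacting behaviour, which is the distinction the abstract draws between MOAT and simple EOAT variation.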
A shorter and more specific oral sensitization-based experimental model of food allergy in mice.
Bailón, Elvira; Cueto-Sola, Margarita; Utrilla, Pilar; Rodríguez-Ruiz, Judith; Garrido-Mesa, Natividad; Zarzuelo, Antonio; Xaus, Jordi; Gálvez, Julio; Comalada, Mònica
2012-07-31
Cow's milk protein allergy (CMPA) is one of the most prevalent human food-borne allergies, particularly in children. Experimental animal models have become critical tools with which to perform research on new therapeutic approaches and on the molecular mechanisms involved. However, oral food allergen sensitization in mice requires several weeks and is usually associated with unspecific immune responses. To overcome these inconveniences, we have developed a new food allergy model that takes only two weeks while retaining the main characteristics of the allergic response to food antigens. The new model is characterized by oral sensitization of weaned Balb/c mice with 5 doses of purified cow's milk protein (CMP) plus cholera toxin (CT) for only two weeks and a subsequent challenge with an intraperitoneal administration of the allergen at the end of the sensitization period. In parallel, we studied a conventional protocol that lasts for seven weeks, and also the non-specific effects exerted by CT in both protocols. The shorter protocol achieves a similar clinical score as the original food allergy model without macroscopically affecting gut morphology or physiology. Moreover, the shorter protocol caused an increased IL-4 production and a more selective antigen-specific IgG1 response. Finally, the extended CT administration during the sensitization period of the conventional protocol is responsible for the exacerbated immune response observed in that model. Therefore, the new model presented here allows a reduction not only in experimental time but also in the number of animals required per experiment while maintaining the features of conventional allergy models. We propose that the new protocol reported will contribute to advancing allergy research. Copyright © 2012 Elsevier B.V. All rights reserved.
Adaptive significance of natural variations in maternal care in rats: a translational perspective
Beery, Annaliese K.; Francis, Darlene D.
2011-01-01
A wealth of data from the last fifty years documents the potency of early life experiences, including maternal care, in shaping developing offspring. A majority of this research has focused on the developing stress axis and stress-sensitive behaviors in hopes of identifying factors impacting resilience and risk-sensitivity. The power of early life experience to shape later development is profound and has the potential to increase the fitness of individuals for their environments. Current findings in a rat maternal care paradigm highlight the complex and dynamic relation between early experiences and a variety of outcomes. In this review we propose adaptive hypotheses for alternate maternal strategies and resulting offspring phenotypes, and ways to distinguish between these hypotheses. We also provide evidence underscoring the critical role of context in interpreting the adaptive significance of early experiences. If our goal is to identify risk factors relevant to humans, we must better explore the role of the social and physical environment in our basic animal models. PMID:21458485
NASA Astrophysics Data System (ADS)
Shi, Xiaoxu; Lohmann, Gerrit
2017-09-01
A coupled atmosphere-ocean-sea ice model is applied to investigate to what degree the area-thickness distribution of new ice formed in open water affects the ice and ocean properties. Two sensitivity experiments are performed which modify the horizontal-to-vertical aspect ratio of open-water ice growth. The resulting changes in the Arctic sea-ice concentration strongly affect the surface albedo, the ocean heat release to the atmosphere, and the sea-ice production. The changes are further amplified through a positive feedback mechanism among the Arctic sea ice, the Atlantic Meridional Overturning Circulation (AMOC), and the surface air temperature in the Arctic, as the Fram Strait sea ice import influences the freshwater budget in the North Atlantic Ocean. Anomalies in sea-ice transport lead to changes in sea surface properties of the North Atlantic and the strength of the AMOC. For the Southern Ocean, the most pronounced change is a warming along the Antarctic Circumpolar Current (ACC), owing to the interhemispheric bipolar seesaw linked to AMOC weakening. Another insight of this study lies in the improvement of our climate model. The ocean component FESOM is a newly developed ocean-sea ice model with an unstructured, multi-resolution mesh. We find that the subpolar sea-ice boundary in the Northern Hemisphere can be improved by tuning the process of open-water ice growth, which strongly influences the sea ice concentration in the marginal ice zone, the North Atlantic circulation, salinity and Arctic sea ice volume. Since the distribution of new ice on open water relies on many uncertain parameters and the knowledge of the detailed processes is currently too crude, it is a challenge to implement the processes realistically into models. Based on our sensitivity experiments, we conclude that there is a pronounced uncertainty related to open-water sea ice growth which could significantly affect the sensitivity of the climate system.
Ligmann-Zielinska, Arika; Kramer, Daniel B; Spence Cheruvelil, Kendra; Soranno, Patricia A
2014-01-01
Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system.
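A minimal sketch of the variance-decomposition step is shown below: first-order sensitivity indices estimated as the variance of conditional means of a toy stochastic model's output. The binning estimator, plain Monte Carlo sampling and toy model are illustrative simplifications of the quasi-random, ABM-based analysis described above.

```python
# Hedged sketch of variance-based (first-order) sensitivity indices for a
# stochastic toy model, via the variance of conditional means per input.
import numpy as np

rng = np.random.default_rng(1)

def toy_abm(x, noise=0.1):                       # stand-in for a stochastic ABM output
    return x[..., 0] + 0.5 * x[..., 1] ** 2 + noise * rng.normal(size=x.shape[:-1])

n, d = 20000, 3
x = rng.uniform(size=(n, d))                     # plain Monte Carlo (quasi-random in the study)
y = toy_abm(x)
var_y = y.var()

first_order = []
for i in range(d):                               # crude S1: variance of conditional means
    bins = np.digitize(x[:, i], np.linspace(0.0, 1.0, 21))
    cond_means = [y[bins == b].mean() for b in range(1, 21)]
    first_order.append(np.var(cond_means) / var_y)

print("first-order indices ~", np.round(first_order, 2))
```

Inputs whose index is near zero (the third input here) are candidates for fixing at nominal values, which is the kind of simplification the framework above uses to build parsimonious ABMs.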
Estimates of effects of residual acceleration on USML-1 experiments
NASA Technical Reports Server (NTRS)
Naumann, Robert J.
1995-01-01
The purpose of this study effort was to develop analytical models to describe the effects of residual accelerations on the experiments to be carried on the first U.S. Microgravity Lab mission (USML-1) and to test the accuracy of these models by comparing the pre-flight predicted effects with the post-flight measured effects. After surveying the experiments to be performed on USML-1, it became evident that the anticipated residual accelerations during the USML-1 mission were well below the threshold for most of the primary experiments and all of the secondary (Glovebox) experiments, and that the only set of experiments that could provide quantifiable effects, and thus provide a definitive test of the analytical models, was the three melt growth experiments using the Bridgman-Stockbarger type Crystal Growth Furnace (CGF). This class of experiments is by far the most sensitive to low level quasi-steady accelerations that are unavoidable on spacecraft operating in low earth orbit. Because of this, they have been the drivers for the acceleration requirements imposed on the Space Station. Therefore, it is appropriate that the models on which these requirements are based are tested experimentally. Also, since solidification proceeds directionally over a long period of time, the solidified ingot provides a more or less continuous record of the effects from acceleration disturbances.
Leuco-crystal-violet micelle gel dosimeters: Component effects on dose-rate dependence
NASA Astrophysics Data System (ADS)
Xie, J. C.; Katz, E. A. B.; Alexander, K. M.; Schreiner, L. J.; McAuley, K. B.
2017-05-01
Designed experiments were performed to produce empirical models for the dose sensitivity, initial absorbance, and dose-rate dependence, respectively, for leucocrystal violet (LCV) micelle gel dosimeters containing cetyltrimethylammonium bromide (CTAB) and 2,2,2-trichloroethanol (TCE). Previous gels of this type showed dose-rate dependent behaviour, producing an ~18% increase in dose sensitivity between dose rates of 100 and 600 cGy min⁻¹. Our models predict that the dose rate dependence can be reduced by increasing the concentration of TCE, CTAB and LCV. Increasing concentrations of LCV and CTAB produces a significant increase in dose sensitivity with a corresponding increase in initial absorbance. An optimization procedure was used to determine a nearly dose-rate independent gel which maintained high sensitivity and low initial absorbance. This gel, which contains 33 mM CTAB, 1.25 mM LCV, and 96 mM TCE in 25 mM trichloroacetic acid and 4 wt% gelatin, showed an increase in dose sensitivity of only 4% between dose rates of 100 and 600 cGy min⁻¹, and provides an 80% greater dose sensitivity compared to Jordan's standard gels with similar initial absorbance.
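The kind of empirical model produced by such designed experiments can be sketched as a least-squares fit of main effects and two-factor interactions to coded factor levels; the design points and response values below are synthetic placeholders, not the measured dosimeter data.

```python
# Hedged sketch: fit an empirical response-surface style model of dose sensitivity
# to a 2^3 factorial design in coded CTAB, LCV and TCE levels (synthetic data).
import numpy as np

# coded factor levels (-1/+1) for CTAB, LCV, TCE and a synthetic response
X_coded = np.array([[s1, s2, s3] for s1 in (-1, 1) for s2 in (-1, 1) for s3 in (-1, 1)],
                   dtype=float)
y = (1.0 + 0.30 * X_coded[:, 0] + 0.25 * X_coded[:, 1] + 0.10 * X_coded[:, 2]
     + 0.05 * X_coded[:, 0] * X_coded[:, 1])        # placeholder "dose sensitivity"

# design matrix: intercept, main effects, two-factor interactions
inter = np.column_stack([X_coded[:, i] * X_coded[:, j]
                         for i in range(3) for j in range(i + 1, 3)])
A = np.column_stack([np.ones(len(y)), X_coded, inter])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(coef, 3))   # recovered intercept, main-effect and interaction coefficients
```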
Social Regulation of Leukocyte Homeostasis: The Role of Glucocorticoid Sensitivity
Cole, Steve W.
2010-01-01
Recent small-scale genomics analyses suggest that physiologic regulation of pro-inflammatory gene expression by endogenous glucocorticoids may be compromised in individuals who experience chronic social isolation. This could potentially contribute to the elevated prevalence of inflammation-related disease previously observed in social isolates. The present study assessed the relationship between leukocyte distributional sensitivity to glucocorticoid regulation and subjective social isolation in a large population-based sample of older adults. Initial analyses confirmed that circulating neutrophil percentages were elevated, and circulating lymphocyte and monocyte percentages were suppressed, in direct proportion to circulating cortisol levels. However, leukocyte distributional sensitivity to endogenous glucocorticoids was abrogated in individuals reporting either occasional or frequent experiences of subjective social isolation. This finding held in both nonparametric univariate analyses and in multivariate linear models controlling for a variety of biological, social, behavioral, and psychological confounders. The present results suggest that social factors may alter immune cell sensitivity to physiologic regulation by the hypothalamic-pituitary-adrenal axis in ways that could ultimately contribute to the increased physical health risks associated with social isolation. PMID:18394861
Parenting predicts Strange Situation cortisol reactivity among children adopted internationally.
DePasquale, Carrie E; Raby, K Lee; Hoye, Julie; Dozier, Mary
2018-03-01
The functioning of the hypothalamic pituitary adrenal (HPA) axis can be altered by adverse early experiences. Recent studies indicate that children who were adopted internationally after experiencing early institutional rearing and unstable caregiving exhibit blunted HPA reactivity to stressful situations. The present study examined whether caregiving experiences post-adoption further modulate children's HPA responses to stress. Parental sensitivity during naturalistic parent-child play interactions was assessed for 66 children (M age = 17.3 months, SD = 4.6) within a year of being adopted internationally. Approximately 8 months later, children's salivary cortisol levels were measured immediately before as well as 15 and 30 min after a series of brief separations from the mother in an unfamiliar laboratory setting. Latent growth curve modeling indicated that experiencing more parental sensitivity predicted increased cortisol reactivity to the stressor. Although half the families received an intervention designed to improve parental sensitivity, the intervention did not significantly alter children's cortisol outcomes. These findings suggest that post-adoption parental sensitivity may help normalize the HPA response to stress among children adopted internationally. Copyright © 2018 Elsevier Ltd. All rights reserved.
De Kauwe, Martin G; Medlyn, Belinda E; Walker, Anthony P; Zaehle, Sönke; Asao, Shinichi; Guenet, Bertrand; Harper, Anna B; Hickler, Thomas; Jain, Atul K; Luo, Yiqi; Lu, Xingjie; Luus, Kristina; Parton, William J; Shu, Shijie; Wang, Ying-Ping; Werner, Christian; Xia, Jianyang; Pendall, Elise; Morgan, Jack A; Ryan, Edmund M; Carrillo, Yolima; Dijkstra, Feike A; Zelikova, Tamara J; Norby, Richard J
2017-09-01
Multifactor experiments are often advocated as important for advancing terrestrial biosphere models (TBMs), yet to date, such models have only been tested against single-factor experiments. We applied 10 TBMs to the multifactor Prairie Heating and CO2 Enrichment (PHACE) experiment in Wyoming, USA. Our goals were to investigate how multifactor experiments can be used to constrain models and to identify a road map for model improvement. We found models performed poorly in ambient conditions; there was a wide spread in simulated above-ground net primary productivity (range: 31-390 g C m⁻² yr⁻¹). Comparison with data highlighted model failures particularly with respect to carbon allocation, phenology, and the impact of water stress on phenology. Performance against the observations from single-factor treatments was also relatively poor. In addition, similar responses were predicted for different reasons across models: there were large differences among models in sensitivity to water stress and, among the N cycle models, N availability during the experiment. Models were also unable to capture observed treatment effects on phenology: they overestimated the effect of warming on leaf onset and did not allow CO2-induced water savings to extend the growing season length. Observed interactive (CO2 × warming) treatment effects were subtle and contingent on water stress, phenology, and species composition. As the models did not correctly represent these processes under ambient and single-factor conditions, little extra information was gained by comparing model predictions against interactive responses. We outline a series of key areas in which this and future experiments could be used to improve model predictions of grassland responses to global change. © 2017 John Wiley & Sons Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Kauwe, Martin G.; Medlyn, Belinda E.; Walker, Anthony P.
Multi-factor experiments are often advocated as important for advancing terrestrial biosphere models (TBMs), yet to date such models have only been tested against single-factor experiments. We applied 10 TBMs to the multi-factor Prairie Heating and CO2 Enrichment (PHACE) experiment in Wyoming, USA. Our goals were to investigate how multi-factor experiments can be used to constrain models, and to identify a road map for model improvement. We found models performed poorly in ambient conditions; there was a wide spread in simulated above-ground net primary productivity (range: 31-390 g C m⁻² yr⁻¹). Comparison with data highlighted model failures particularly in respect to carbon allocation, phenology, and the impact of water stress on phenology. Performance against single-factor treatments was also relatively poor. In addition, similar responses were predicted for different reasons across models: there were large differences among models in sensitivity to water stress and, among the nitrogen cycle models, nitrogen availability during the experiment. Models were also unable to capture observed treatment effects on phenology: they over-estimated the effect of warming on leaf onset and did not allow CO2-induced water savings to extend growing season length. Observed interactive (CO2 × warming) treatment effects were subtle and contingent on water stress, phenology and species composition. Since the models did not correctly represent these processes under ambient and single-factor conditions, little extra information was gained by comparing model predictions against interactive responses. Finally, we outline a series of key areas in which this and future experiments could be used to improve model predictions of grassland responses to global change.
De Kauwe, Martin G.; Medlyn, Belinda E.; Walker, Anthony P.; ...
2017-02-01
Multi-factor experiments are often advocated as important for advancing terrestrial biosphere models (TBMs), yet to date such models have only been tested against single-factor experiments. We applied 10 TBMs to the multi-factor Prairie Heating and CO2 Enrichment (PHACE) experiment in Wyoming, USA. Our goals were to investigate how multi-factor experiments can be used to constrain models, and to identify a road map for model improvement. We found models performed poorly in ambient conditions; there was a wide spread in simulated above-ground net primary productivity (range: 31-390 g C m−2 yr−1). Comparison with data highlighted model failures particularly with respect to carbon allocation, phenology, and the impact of water stress on phenology. Performance against the single-factor treatments was also relatively poor. In addition, similar responses were predicted for different reasons across models: there were large differences among models in sensitivity to water stress and, among the nitrogen cycle models, nitrogen availability during the experiment. Models were also unable to capture observed treatment effects on phenology: they overestimated the effect of warming on leaf onset and did not allow CO2-induced water savings to extend growing season length. Observed interactive (CO2 × warming) treatment effects were subtle and contingent on water stress, phenology and species composition. Since the models did not correctly represent these processes under ambient and single-factor conditions, little extra information was gained by comparing model predictions against interactive responses. Finally, we outline a series of key areas in which this and future experiments could be used to improve model predictions of grassland responses to global change.
Multiple angles on the sterile neutrino - a combined view of cosmological and oscillation limits
NASA Astrophysics Data System (ADS)
Guzowski, Pawel
2017-09-01
The possible existence of sterile neutrinos is an important unresolved question for both particle physics and cosmology. Data sensitive to a sterile neutrino come from both particle physics experiments and astrophysical measurements of the Cosmic Microwave Background. In this study, we address the question of whether these two contrasting data sets provide complementary information about sterile neutrinos. We focus on the muon disappearance oscillation channel, taking data from the MINOS, IceCube and Planck experiments, converting the limits into particle physics and cosmological parameter spaces, to illustrate the different regions of parameter space where the data sets have the best sensitivity. For the first time, we combine the data sets into a single analysis to illustrate how the limits on the parameters of the sterile-neutrino model are strengthened. We investigate how data from a future accelerator neutrino experiment (SBN) will be able to further constrain this picture.
McCauley, Peter; Kalachev, Leonid V; Mollicone, Daniel J; Banks, Siobhan; Dinges, David F; Van Dongen, Hans P A
2013-12-01
Recent experimental observations and theoretical advances have indicated that the homeostatic equilibrium for sleep/wake regulation--and thereby sensitivity to neurobehavioral impairment from sleep loss--is modulated by prior sleep/wake history. This phenomenon was predicted by a biomathematical model developed to explain changes in neurobehavioral performance across days in laboratory studies of total sleep deprivation and sustained sleep restriction. The present paper focuses on the dynamics of neurobehavioral performance within days in this biomathematical model of fatigue. Without increasing the number of model parameters, the model was updated by incorporating time-dependence in the amplitude of the circadian modulation of performance. The updated model was calibrated using a large dataset from three laboratory experiments on psychomotor vigilance test (PVT) performance, under conditions of sleep loss and circadian misalignment; and validated using another large dataset from three different laboratory experiments. The time-dependence of circadian amplitude resulted in improved goodness-of-fit in night shift schedules, nap sleep scenarios, and recovery from prior sleep loss. The updated model predicts that the homeostatic equilibrium for sleep/wake regulation--and thus sensitivity to sleep loss--depends not only on the duration but also on the circadian timing of prior sleep. This novel theoretical insight has important implications for predicting operator alertness during work schedules involving circadian misalignment such as night shift work.
Numerical experiments on short-term meteorological effects on solar variability
NASA Technical Reports Server (NTRS)
Somerville, R. C. J.; Hansen, J. E.; Stone, P. H.; Quirk, W. J.; Lacis, A. A.
1975-01-01
A set of numerical experiments was conducted to test the short-range sensitivity of a large atmospheric general circulation model to changes in solar constant and ozone amount. On the basis of the results of 12-day sets of integrations with very large variations in these parameters, it is concluded that realistic variations would produce insignificant meteorological effects. Any causal relationships between solar variability and weather, for time scales of two weeks or less, rely upon changes in parameters other than solar constant or ozone amounts, or upon mechanisms not yet incorporated in the model.
Eng, Jason W.-L.; Reed, Chelsey B.; Kokolus, Kathleen M.; Pitoniak, Rosemarie; Utley, Adam; Bucsek, Mark J.; Ma, Wen Wee; Repasky, Elizabeth A.; Hylander, Bonnie L.
2015-01-01
Cancer research relies heavily on murine models for evaluating the anti-tumour efficacy of therapies. Here we show that the sensitivity of several pancreatic tumour models to cytotoxic therapies is significantly increased when mice are housed at a thermoneutral ambient temperature of 30 °C compared with the standard temperature of 22 °C. Further, we find that baseline levels of norepinephrine as well as the levels of several anti-apoptotic molecules are elevated in tumours from mice housed at 22 °C. The sensitivity of tumours to cytotoxic therapies is also enhanced by administering a β-adrenergic receptor antagonist to mice housed at 22 °C. These data demonstrate that standard housing causes a degree of cold stress sufficient to impact the signalling pathways related to tumour-cell survival and affect the outcome of pre-clinical experiments. Furthermore, these data highlight the significant role of host physiological factors in regulating the sensitivity of tumours to therapy. PMID:25756236
NASA Astrophysics Data System (ADS)
Eng, Jason W.-L.; Reed, Chelsey B.; Kokolus, Kathleen M.; Pitoniak, Rosemarie; Utley, Adam; Bucsek, Mark J.; Ma, Wen Wee; Repasky, Elizabeth A.; Hylander, Bonnie L.
2015-03-01
Cancer research relies heavily on murine models for evaluating the anti-tumour efficacy of therapies. Here we show that the sensitivity of several pancreatic tumour models to cytotoxic therapies is significantly increased when mice are housed at a thermoneutral ambient temperature of 30 °C compared with the standard temperature of 22 °C. Further, we find that baseline levels of norepinephrine as well as the levels of several anti-apoptotic molecules are elevated in tumours from mice housed at 22 °C. The sensitivity of tumours to cytotoxic therapies is also enhanced by administering a β-adrenergic receptor antagonist to mice housed at 22 °C. These data demonstrate that standard housing causes a degree of cold stress sufficient to impact the signalling pathways related to tumour-cell survival and affect the outcome of pre-clinical experiments. Furthermore, these data highlight the significant role of host physiological factors in regulating the sensitivity of tumours to therapy.
Zhao, Yueyuan; Zhang, Xuefeng; Zhu, Fengcai; Jin, Hui; Wang, Bei
2016-08-02
Objective: To estimate the cost-effectiveness of hepatitis E vaccination among pregnant women in epidemic regions. Methods: A decision tree model was constructed to evaluate the cost-effectiveness of 3 hepatitis E virus vaccination strategies from a societal perspective. The model parameters were estimated on the basis of published studies and experts' experience. Sensitivity analysis was used to evaluate the uncertainties of the model. Results: Vaccination was more economically effective on the basis of the incremental cost-effectiveness ratio (ICER < 3 times China's per capita gross domestic product per quality-adjusted life year); moreover, screening and vaccination had higher QALYs and lower costs compared with universal vaccination. No parameters significantly impacted the ICER in one-way sensitivity analysis, and probabilistic sensitivity analysis also showed screening and vaccination to be the dominant strategy. Conclusion: Screening and vaccination is the most economical strategy for pregnant women in epidemic regions; however, further studies are necessary to confirm the efficacy and safety of the hepatitis E vaccines.
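As a rough illustration of the incremental cost-effectiveness comparison described above, the following Python sketch computes ICERs for three hypothetical strategies and applies a willingness-to-pay threshold of three times per-capita GDP; all costs, QALY totals and the GDP figure are invented placeholders, not values from the study.

# Hypothetical cost (USD) and QALY totals per 1000 pregnant women; not study data.
strategies = {
    "no vaccination":          {"cost": 1_200_000, "qaly": 20_000},
    "screening + vaccination": {"cost": 1_050_000, "qaly": 20_150},
    "universal vaccination":   {"cost": 1_400_000, "qaly": 20_160},
}
gdp_per_capita = 9_000               # assumed; threshold = 3 x GDP per QALY gained
threshold = 3 * gdp_per_capita

base = strategies["no vaccination"]
for name, s in strategies.items():
    if name == "no vaccination":
        continue
    d_cost = s["cost"] - base["cost"]
    d_qaly = s["qaly"] - base["qaly"]
    if d_cost <= 0 and d_qaly >= 0:
        print(f"{name}: dominant (cheaper and more effective than no vaccination)")
    else:
        icer = d_cost / d_qaly
        print(f"{name}: ICER = {icer:,.0f} per QALY "
              f"({'cost-effective' if icer < threshold else 'not cost-effective'})")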
Mesoscale Assimilation of TMI Rainfall Data with 4DVAR: Sensitivity Studies
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Pu, Zhaoxia
2003-01-01
Sensitivity studies are performed on the assimilation of TRMM (Tropical Rainfall Measurement Mission) Microwave Imager (TMI) derived rainfall data into a mesoscale model using a four-dimensional variational data assimilation (4DVAR) technique. A series of numerical experiments is conducted to evaluate the impact of TMI rainfall data on the numerical simulation of Hurricane Bonnie (1998). The results indicate that rainfall data assimilation is sensitive to the error characteristics of the data and the inclusion of physics in the adjoint and forward models. In addition, assimilating the rainfall data alone is helpful for producing a more realistic eye and rain bands in the hurricane but does not ensure improvements in hurricane intensity forecasts. Further study indicated that it is necessary to incorporate TMI rainfall data together with other types of data such as wind data into the model, in which case the inclusion of the rainfall data further improves the intensity forecast of the hurricane. This implies that proper constraints may be needed for rainfall assimilation.
NASA Astrophysics Data System (ADS)
Kodama, C.; Noda, A. T.; Satoh, M.
2012-06-01
This study presents an assessment of three-dimensional structures of hydrometeors simulated by the NICAM, global nonhydrostatic atmospheric model without cumulus parameterization, using multiple satellite data sets. A satellite simulator package (COSP: the CFMIP Observation Simulator Package) is employed to consistently compare model output with ISCCP, CALIPSO, and CloudSat satellite observations. Special focus is placed on high thin clouds, which are not observable in the conventional ISCCP data set, but can be detected by the CALIPSO observations. For the control run, the NICAM simulation qualitatively captures the geographical distributions of the high, middle, and low clouds, even though the horizontal mesh spacing is as coarse as 14 km. The simulated low cloud is very close to that of the CALIPSO low cloud. Both the CloudSat observations and NICAM simulation show a boomerang-type pattern in the radar reflectivity-height histogram, suggesting that NICAM realistically simulates the deep cloud development process. A striking difference was found in the comparisons of high thin cirrus, showing overestimated cloud and higher cloud top in the model simulation. Several model sensitivity experiments are conducted with different cloud microphysical parameters to reduce the model-observation discrepancies in high thin cirrus. In addition, relationships among clouds, Hadley circulation, outgoing longwave radiation and precipitation are discussed through the sensitivity experiments.
Absolute neutrino mass measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolf, Joachim
2011-10-06
The neutrino mass plays an important role in particle physics, astrophysics and cosmology. In recent years the detection of neutrino flavour oscillations proved that neutrinos carry mass. However, oscillation experiments are only sensitive to the mass-squared difference of the mass eigenvalues. In contrast to cosmological observations and neutrino-less double beta decay (0ν2β) searches, single β-decay experiments provide a direct, model-independent way to determine the absolute neutrino mass by measuring the energy spectrum of decay electrons at the endpoint region with high accuracy. Currently the best kinematic upper limits on the neutrino mass of 2.2 eV have been set by two experiments in Mainz and Troitsk, using tritium as beta emitter. The next generation tritium β-decay experiment KATRIN is currently under construction in Karlsruhe/Germany by an international collaboration. KATRIN intends to improve the sensitivity by one order of magnitude to 0.2 eV. The investigation of a second isotope (187Re) is being pursued by the international MARE collaboration using micro-calorimeters to measure the beta spectrum. The technology needed to reach 0.2 eV sensitivity is still in the R&D phase. This paper reviews the present status of neutrino-mass measurements with cosmological data, 0ν2β decay and single β-decay.
NASA Technical Reports Server (NTRS)
Hou, Gene
2004-01-01
The focus of this research is on the development of analysis and sensitivity analysis equations for nonlinear, transient heat transfer problems modeled by p-version, time-discontinuous finite element approximation. The resulting matrix equation of the state equation is simply in the form of A(x)x = c, representing a single-step, time-marching scheme. The Newton-Raphson method is used to solve the nonlinear equation. Examples are first provided to demonstrate the accuracy characteristics of the resultant finite element approximation. A direct differentiation approach is then used to compute the thermal sensitivities of a nonlinear heat transfer problem. The report shows that only minimal coding effort is required to enhance the analysis code with the sensitivity analysis capability.
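The direct differentiation step can be illustrated with a small, self-contained Python sketch. The two-unknown system, its parameter a, and the right-hand side below are invented for illustration and are not the report's equations: the nonlinear state equation A(x)x = c is solved by Newton-Raphson, and the converged Jacobian is reused to solve for the sensitivity dx/da.

import numpy as np

# Illustrative nonlinear system A(x; a) x = c with a state-dependent "conductivity".
a = 0.1
c = np.array([1.0, 2.0])

def residual(x, a_val):
    A = np.array([[2.0 + a_val * x[0], -1.0],
                  [-1.0, 2.0 + a_val * x[1]]])
    return A @ x - c

def jacobian(x, a_val):
    # dR_i/dx_k = A_ik + sum_j (dA_ij/dx_k) x_j; only A_kk depends on x_k here,
    # which adds a_val * x_k to each diagonal entry.
    J = np.array([[2.0 + a_val * x[0], -1.0],
                  [-1.0, 2.0 + a_val * x[1]]])
    J[0, 0] += a_val * x[0]
    J[1, 1] += a_val * x[1]
    return J

def newton_solve(a_val):
    x = np.zeros(2)
    for _ in range(25):                     # Newton-Raphson iteration on the state
        dx = np.linalg.solve(jacobian(x, a_val), -residual(x, a_val))
        x += dx
        if np.linalg.norm(dx) < 1e-12:
            break
    return x

x = newton_solve(a)

# Direct differentiation: J * (dx/da) = -dR/da, and here dR_i/da = x_i**2.
dx_da = np.linalg.solve(jacobian(x, a), -(x ** 2))

# Finite-difference check of the computed sensitivity
eps = 1e-6
print("dx/da (direct differentiation):", dx_da)
print("dx/da (finite difference):     ", (newton_solve(a + eps) - x) / eps)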
NASA Astrophysics Data System (ADS)
Chen, Tao; Ye, Meng-li; Liu, Shu-liang; Deng, Yan
2018-03-01
In view of the principle underlying the occurrence of cross-sensitivity, a series of calibration experiments is carried out to solve the cross-sensitivity problem of embedded fiber Bragg gratings (FBGs) using the reference grating method. Moreover, an ultrasonic-vibration-assisted grinding (UVAG) model is established, and finite element analysis (FEA) is carried out under the monitoring environment of the embedded temperature measurement system. In addition, the related temperature acquisition tests are set up in accordance with the requirements of the reference grating method. Finally, comparative analyses of the simulation and experimental results are performed, and it may be concluded that the reference grating method can effectively solve the cross-sensitivity of embedded FBGs.
Liang, X; Wang, Z-Y; Liu, H-Y; Lin, Q; Wang, Z; Liu, Y
2015-01-01
To investigate adult attachment status in first-time mothers, and stability and/or changes in maternal sensitivity during infancy. A longitudinal study using quantitative and qualitative methods, and statistical modelling. Three home visits were undertaken when the infant was approximately six, nine and 14 months old. The Adult-to-Parental Attachment Experience Survey was used, and scores for three dimensions were obtained: secure-autonomous, preoccupied and dismissive. Maternal sensitivity was assessed at each time point using the Maternal Behaviour Q-Sort by observing interaction between the mother and infant at home. The settings were homes and community settings in greater metropolitan Beijing, North China. Participants were 83 mothers and infants born in 2010 enrolled in this study. Data were missing for one or more time points in 20 cases. The mean score for maternal sensitivity tended to increase from six to 14 months. Post-hoc analyses of one-way repeated-measures analysis of variance revealed that maternal sensitivity was significantly higher at 14 months than at six or nine months. An unconditional latent growth model (LGM) of maternal sensitivity, estimated using the Bayesian approach, provided a good fit for the data. Using three attachment-related variables as predictors in the conditional LGM, the model fitting indices were found to be sufficient, and the results suggested that the secure score positively predicted the intercept of the growth model, and the dismissive score negatively predicted both the intercept and slope of the growth model. Maternal sensitivity increased over time during infancy. Furthermore, individual differences existed in the developmental trajectory, which was influenced by maternal attachment status. Knowledge about attachment-related differences in the trajectory of first-time mothers' sensitivity to infants may help midwives and doctors to provide individualised information and support, with special attention given to mothers with a dismissive attachment status. Copyright © 2014 Elsevier Ltd. All rights reserved.
The Microminipig as an Animal Model for Influenza A Virus Infection
Nakajima, Noriko; Shibata, Masatoshi; Takahashi, Kenta; Sato, Yuko; Kiso, Maki; Yamayoshi, Seiya; Ito, Mutsumi; Enya, Satoko; Otake, Masayoshi; Kangawa, Akihisa; da Silva Lopes, Tiago Jose; Ito, Hirotaka; Hasegawa, Hideki
2016-01-01
ABSTRACT Pigs are considered a mixing vessel for the generation of novel pandemic influenza A viruses through reassortment because of their susceptibility to both avian and human influenza viruses. However, experiments to understand reassortment in pigs in detail have been limited because experiments with regular-sized pigs are difficult to perform. Miniature pigs have been used as an experimental animal model, but they are still large and require relatively large cages for housing. The microminipig is one of the smallest miniature pigs used for experiments. Introduced in 2010, microminipigs weigh around 10 kg at an early stage of maturity (6 to 7 months old) and are easy to handle. To evaluate the microminipig as an animal model for influenza A virus infection, we compared the receptor distribution of 10-week-old male pigs (Yorkshire Large White) and microminipigs. We found that both animals have SAα2,3Gal and SAα2,6Gal in their respiratory tracts, with similar distributions of both receptor types. We further found that the sensitivity of microminipigs to influenza A viruses was the same as that of larger miniature pigs. Our findings indicate that the microminipig could serve as a novel model animal for influenza A virus infection. IMPORTANCE The microminipig is one of the smallest miniature pigs in the world and is used as an experimental animal model for life science research. In this study, we evaluated the microminipig as a novel animal model for influenza A virus infection. The distribution of influenza virus receptors in the respiratory tract of the microminipig was similar to that of the pig, and the sensitivity of microminipigs to influenza A viruses was the same as that of miniature pigs. Our findings suggest that microminipigs represent a novel animal model for influenza A virus infection. PMID:27807225
Lagarde, Mylene; Pagaiya, Nonglak; Tangcharoensathian, Viroj; Blaauw, Duane
2013-12-01
This study investigates heterogeneity in Thai doctors' job preferences at the beginning of their career, with a view to inform the design of effective policies to retain them in rural areas. A discrete choice experiment was designed and administered to 198 young doctors. We analysed the data using several specifications of a random parameter model to account for various sources of preference heterogeneity. By modelling preference heterogeneity, we showed how sensitivity to different incentives varied in different sections of the population. In particular, doctors from rural backgrounds were more sensitive than others to a 45% salary increase and having a post near their home province, but they were less sensitive to a reduction in the number of on-call nights. On the basis of the model results, the effects of two types of interventions were simulated: introducing various incentives and modifying the population structure. The results of the simulations provide multiple elements for consideration for policy-makers interested in designing effective interventions. They also underline the interest of modelling preference heterogeneity carefully. Copyright © 2013 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Önal, Orkun; Ozmenci, Cemre; Canadinc, Demircan
2014-09-01
A multi-scale modeling approach was applied to predict the impact response of a strain rate sensitive high-manganese austenitic steel. The roles of texture, geometry and strain rate sensitivity were successfully taken into account all at once by coupling crystal plasticity and finite element (FE) analysis. Specifically, crystal plasticity was utilized to obtain the multi-axial flow rule at different strain rates based on the experimental deformation response under uniaxial tensile loading. The equivalent stress - equivalent strain response was then incorporated into the FE model for the sake of a more representative hardening rule under impact loading. The current results demonstrate that reliable predictions can be obtained by proper coupling of crystal plasticity and FE analysis even if the experimental flow rule of the material is acquired under uniaxial loading and at moderate strain rates that are significantly slower than those attained during impact loading. Furthermore, the current findings also demonstrate the need for an experiment-based multi-scale modeling approach for the sake of reliable predictions of the impact response.
The acquisition process of musical tonal schema: implications from connectionist modeling.
Matsunaga, Rie; Hartono, Pitoyo; Abe, Jun-Ichi
2015-01-01
Using connectionist modeling, we address fundamental questions concerning the acquisition process of musical tonal schema of listeners. Compared to models of previous studies, our connectionist model (Learning Network for Tonal Schema, LeNTS) was better equipped to fulfill three basic requirements. Specifically, LeNTS was equipped with a learning mechanism, bound by culture-general properties, and trained by sufficient melody materials. When exposed to Western music, LeNTS acquired musical 'scale' sensitivity early and 'harmony' sensitivity later. The order of acquisition of scale and harmony sensitivities shown by LeNTS was consistent with the culture-specific acquisition order shown by musically westernized children. The implications of these results for the acquisition process of a tonal schema of listeners are as follows: (a) the acquisition process may entail small and incremental changes, rather than large and stage-like changes, in corresponding neural circuits; (b) the speed of schema acquisition may mainly depend on musical experiences rather than maturation; and (c) the learning principles of schema acquisition may be culturally invariant while the acquired tonal schemas are varied with exposed culture-specific music.
The acquisition process of musical tonal schema: implications from connectionist modeling
Matsunaga, Rie; Hartono, Pitoyo; Abe, Jun-ichi
2015-01-01
Using connectionist modeling, we address fundamental questions concerning the acquisition process of musical tonal schema of listeners. Compared to models of previous studies, our connectionist model (Learning Network for Tonal Schema, LeNTS) was better equipped to fulfill three basic requirements. Specifically, LeNTS was equipped with a learning mechanism, bound by culture-general properties, and trained by sufficient melody materials. When exposed to Western music, LeNTS acquired musical ‘scale’ sensitivity early and ‘harmony’ sensitivity later. The order of acquisition of scale and harmony sensitivities shown by LeNTS was consistent with the culture-specific acquisition order shown by musically westernized children. The implications of these results for the acquisition process of a tonal schema of listeners are as follows: (a) the acquisition process may entail small and incremental changes, rather than large and stage-like changes, in corresponding neural circuits; (b) the speed of schema acquisition may mainly depend on musical experiences rather than maturation; and (c) the learning principles of schema acquisition may be culturally invariant while the acquired tonal schemas are varied with exposed culture-specific music. PMID:26441725
Direction selectivity of blowfly motion-sensitive neurons is computed in a two-stage process.
Borst, A; Egelhaaf, M
1990-01-01
Direction selectivity of motion-sensitive neurons is generally thought to result from the nonlinear interaction between the signals derived from adjacent image points. Modeling of motion-sensitive networks, however, reveals that such elements may still respond to motion in a rather poor directionally selective way. Direction selectivity can be significantly enhanced if the nonlinear interaction is followed by another processing stage in which the signals of elements with opposite preferred directions are subtracted from each other. Our electrophysiological experiments in the fly visual system suggest that here direction selectivity is acquired in such a two-stage process. PMID:2251278
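A toy Python sketch of the two-stage idea (illustrative parameters, not a model of the fly circuit): each correlation-type subunit multiplies one input with a low-pass-filtered copy of its neighbour, and a second stage subtracts the mirror-symmetric subunits. The half-detector output still responds to both motion directions, whereas the opponent output changes sign with direction.

import numpy as np

def lowpass(sig, tau, dt):
    # simple first-order low-pass filter (discrete Euler update)
    out = np.zeros_like(sig)
    for i in range(1, sig.size):
        out[i] = out[i - 1] + dt / tau * (sig[i - 1] - out[i - 1])
    return out

dt, tau = 1e-3, 30e-3                      # assumed time step and filter constant
t = np.arange(0.0, 2.0, dt)
f_temporal = 2.0                           # Hz, assumed stimulus frequency

for direction in (+1, -1):                 # +1: one direction, -1: the opposite
    phase = 2 * np.pi * f_temporal * t
    s1 = np.sin(phase)                              # signal at image point 1
    s2 = np.sin(phase - direction * np.pi / 4)      # phase-shifted neighbour
    half_a = np.mean(lowpass(s1, tau, dt) * s2)     # one correlation subunit
    half_b = np.mean(lowpass(s2, tau, dt) * s1)     # mirror-symmetric subunit
    print(f"direction {direction:+d}: half-detector {half_a:+.3f}, "
          f"opponent output {half_a - half_b:+.3f}")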
Acceleration sensitivity of micromachined pressure sensors
NASA Astrophysics Data System (ADS)
August, Richard; Maudie, Theresa; Miller, Todd F.; Thompson, Erik
1999-08-01
Pressure sensors serve a variety of automotive applications, some of which may experience high levels of acceleration, such as tire pressure monitoring. To design pressure sensors for high-acceleration environments, it is important to understand their sensitivity to acceleration, especially if thick encapsulation layers are used to isolate the device from the hostile environment in which it resides. This paper describes a modeling approach to determine sensitivity to acceleration that is very general and is applicable to different device designs and configurations. It also describes the results of device testing of a capacitive surface-micromachined pressure sensor at constant acceleration levels from 500 to 2000 g.
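A very general back-of-envelope estimate in the spirit of such modeling (not the paper's model; all material properties and thicknesses below are assumed): the diaphragm plus any encapsulation layer above it carries an areal mass density, and under acceleration that mass produces an equivalent pressure that can be compared with the sensor's pressure resolution.

# Areal mass density (kg/m^2) times acceleration (m/s^2) gives an equivalent pressure (Pa).
rho_si, t_si = 2330.0, 20e-6        # silicon diaphragm density and thickness (assumed)
rho_gel, t_gel = 1000.0, 500e-6     # encapsulation gel density and thickness (assumed)
g = 9.81
areal_mass = rho_si * t_si + rho_gel * t_gel

for accel_in_g in (500, 1000, 2000):
    p_equiv = areal_mass * accel_in_g * g
    print(f"{accel_in_g:5d} g -> equivalent pressure {p_equiv:8.1f} Pa")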
Low-Speed Pressure Sensitive Paint Studies
NASA Technical Reports Server (NTRS)
Owen, Brown; Mehta, Rabindra; Nixon, David (Technical Monitor)
1998-01-01
A series of low speed (M less than 0.2) experiments using University of Washington Fib-07 Pressure Sensitive Paint (PSP) have been conducted at NASA Ames on a NACA 0012 airfoil. Significant improvements in results have been shown: PSP calibration errors of the improved data (with pressure taps as a reference) now agree with theoretical error limits. Additional measurements on the 0012 airfoil using Temperature Sensitive Paint have been made. These TSP measurements now fully quantify the impact of temporal temperature changes on model surfaces on PSP measurements. Finally, simultaneous PSP - TSP measurements have been performed, allowing in-situ temperature correction of PSP data with good results.
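An in-situ PSP calibration of the kind referred to above can be sketched in a few lines of Python (tap pressures, coefficients and noise level are invented): the intensity ratio is assumed to follow a Stern-Volmer-type relation I_ref/I = A + B(P/P_ref), A and B are fitted from a handful of pressure-tap points, and the fit is then inverted over the painted surface.

import numpy as np

rng = np.random.default_rng(1)
P_ref = 101_325.0                                          # reference pressure, Pa
tap_P = np.array([0.80, 0.90, 1.00, 1.05, 1.10]) * P_ref   # tap pressures (assumed)
true_A, true_B = 0.15, 0.85                                # "true" paint coefficients (assumed)
tap_ratio = true_A + true_B * tap_P / P_ref + rng.normal(0.0, 0.002, tap_P.size)

B, A = np.polyfit(tap_P / P_ref, tap_ratio, 1)   # linear least-squares fit of the tap data
image_ratio = np.array([0.92, 1.00, 1.07])       # I_ref/I at painted locations (invented)
image_P = (image_ratio - A) / B * P_ref          # invert the calibration for the image
print("fitted A, B:", round(A, 3), round(B, 3))
print("recovered pressures (Pa):", image_P.round(0))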
NASA Astrophysics Data System (ADS)
Adloff, Markus; Reick, Christian H.; Claussen, Martin
2018-04-01
In simulations with the MPI Earth System Model, we study the feedback between the terrestrial carbon cycle and atmospheric CO2 concentrations under ice age and interglacial conditions. We find different sensitivities of terrestrial carbon storage to rising CO2 concentrations in the two settings. This result is obtained by comparing the transient response of the terrestrial carbon cycle to a fast and strong atmospheric CO2 concentration increase (roughly 900 ppm) in Coupled Climate Carbon Cycle Model Intercomparison Project (C4MIP)-type simulations starting from climates representing the Last Glacial Maximum (LGM) and pre-industrial times (PI). In this set-up we disentangle terrestrial contributions to the feedback from the carbon-concentration effect, acting biogeochemically via enhanced photosynthetic productivity when CO2 concentrations increase, and the carbon-climate effect, which affects the carbon cycle via greenhouse warming. We find that the carbon-concentration effect is larger under LGM than PI conditions because photosynthetic productivity is more sensitive when starting from the lower, glacial CO2 concentration and CO2 fertilization saturates later. This leads to a larger productivity increase in the LGM experiment. Concerning the carbon-climate effect, it is the PI experiment in which land carbon responds more sensitively to the warming under rising CO2 because at the already initially higher temperatures, tropical plant productivity deteriorates more strongly and extratropical carbon is respired more effectively. Consequently, land carbon losses increase faster in the PI than in the LGM case. Separating the carbon-climate and carbon-concentration effects, we find that they are almost additive for our model set-up; i.e. their synergy is small in the global sum of carbon changes. Together, the two effects result in an overall strength of the terrestrial carbon cycle feedback that is almost twice as large in the LGM experiment as in the PI experiment. For PI, ocean and land contributions to the total feedback are of similar size, while in the LGM case the terrestrial feedback is dominant.
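The bookkeeping behind the carbon-concentration/carbon-climate separation and the synergy check can be written down in a few lines; the land-carbon changes, CO2 increase and warming used below are invented placeholders, not MPI-ESM results.

d_co2 = 900.0       # atmospheric CO2 increase, ppm (as in the described forcing)
d_temp = 4.0        # warming in the radiatively coupled run, K (assumed)
dC_bgc = 350.0      # land carbon change, biogeochemically coupled run, PgC (assumed)
dC_rad = -120.0     # land carbon change, radiatively coupled run, PgC (assumed)
dC_full = 240.0     # land carbon change, fully coupled run, PgC (assumed)

beta = dC_bgc / d_co2      # carbon-concentration sensitivity, PgC per ppm
gamma = dC_rad / d_temp    # carbon-climate sensitivity, PgC per K
synergy = dC_full - (dC_bgc + dC_rad)

print(f"beta = {beta:.2f} PgC/ppm, gamma = {gamma:.1f} PgC/K")
print(f"synergy = {synergy:.1f} PgC (a small value means the two effects are nearly additive)")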
Mensi, Skander; Hagens, Olivier; Gerstner, Wulfram; Pozzorini, Christian
2016-01-01
The way in which single neurons transform input into output spike trains has fundamental consequences for network coding. Theories and modeling studies based on standard Integrate-and-Fire models implicitly assume that, in response to increasingly strong inputs, neurons modify their coding strategy by progressively reducing their selective sensitivity to rapid input fluctuations. Combining mathematical modeling with in vitro experiments, we demonstrate that, in L5 pyramidal neurons, the firing threshold dynamics adaptively adjust the effective timescale of somatic integration in order to preserve sensitivity to rapid signals over a broad range of input statistics. For that, a new Generalized Integrate-and-Fire model featuring nonlinear firing threshold dynamics and conductance-based adaptation is introduced that outperforms state-of-the-art neuron models in predicting the spiking activity of neurons responding to a variety of in vivo-like fluctuating currents. Our model allows for efficient parameter extraction and can be analytically mapped to a Generalized Linear Model in which both the input filter—describing somatic integration—and the spike-history filter—accounting for spike-frequency adaptation—dynamically adapt to the input statistics, as experimentally observed. Overall, our results provide new insights on the computational role of different biophysical processes known to underlie adaptive coding in single neurons and support previous theoretical findings indicating that the nonlinear dynamics of the firing threshold due to Na+-channel inactivation regulate the sensitivity to rapid input fluctuations. PMID:26907675
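A minimal Python sketch of the qualitative mechanism, a leaky integrate-and-fire unit whose firing threshold jumps after each spike and relaxes back (all constants invented; the published GIF model additionally includes conductance-based adaptation and stochastic spike emission):

import numpy as np

rng = np.random.default_rng(0)
dt, T = 1e-4, 2.0                            # time step and duration, s
steps = int(T / dt)
tau_m, tau_th = 20e-3, 100e-3                # membrane and threshold time constants
theta0, d_theta = 1.0, 0.3                   # resting threshold and post-spike jump
V, theta = 0.0, theta0
I = 1.2 + 0.4 * rng.standard_normal(steps)   # noisy input drive (arbitrary units)

spike_times = []
for i in range(steps):
    V += dt / tau_m * (-V + I[i])            # leaky integration of the input
    theta += dt / tau_th * (theta0 - theta)  # threshold relaxes back toward theta0
    if V >= theta:                           # spike: reset voltage, raise threshold
        spike_times.append(i * dt)
        V = 0.0
        theta += d_theta
print(f"{len(spike_times)} spikes, mean rate {len(spike_times) / T:.1f} Hz")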
Design and experiment of data-driven modeling and flutter control of a prototype wing
NASA Astrophysics Data System (ADS)
Lum, Kai-Yew; Xu, Cai-Lin; Lu, Zhenbo; Lai, Kwok-Leung; Cui, Yongdong
2017-06-01
This paper presents an approach for data-driven modeling of aeroelasticity and its application to flutter control design of a wind-tunnel wing model. Modeling is centered on system identification of unsteady aerodynamic loads using computational fluid dynamics data, and adopts a nonlinear multivariable extension of the Hammerstein-Wiener system. The formulation is in modal coordinates of the elastic structure, and yields a reduced-order model of the aeroelastic feedback loop that is parametrized by airspeed. Flutter suppression is thus cast as a robust stabilization problem over uncertain airspeed, for which a low-order H∞ controller is computed. The paper discusses in detail parameter sensitivity and observability of the model, the former to justify the chosen model structure, and the latter to provide a criterion for physical sensor placement. Wind tunnel experiments confirm the validity of the modeling approach and the effectiveness of the control design.
Analysis of Seasonal Chlorophyll-a Using An Adjoint Three-Dimensional Ocean Carbon Cycle Model
NASA Astrophysics Data System (ADS)
Tjiputra, J.; Winguth, A.; Polzin, D.
2004-12-01
The misfit between a numerical ocean model and observations can be reduced using data assimilation. This can be achieved by optimizing the model parameter values using an adjoint model. The adjoint model minimizes the model-data misfit by estimating the sensitivity or gradient of the cost function with respect to initial conditions, boundary conditions, or parameters. The adjoint technique was used to assimilate seasonal chlorophyll-a data from the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) satellite into the marine biogeochemical model HAMOCC5.1. An Identical Twin Experiment (ITE) was conducted to test the robustness of the model and the nonlinearity level of the forward model. The ITE successfully recovered most of the perturbed parameters to their initial values and identified the most sensitive ecosystem parameters, which contribute significantly to model-data bias. The regional assimilations of SeaWiFS chlorophyll-a data into the model were able to reduce the model-data misfit (i.e. the cost function) significantly. The cost function reduction mostly occurred in the high latitudes (e.g. the model-data misfit in the northern region during the summer season was reduced by 54%). On the other hand, the equatorial regions appear to be relatively stable, with no strong reduction in cost function. The optimized parameter set is used to forecast the carbon fluxes between marine ecosystem compartments (e.g. phytoplankton, zooplankton, nutrients, particulate organic carbon, and dissolved organic carbon). The a posteriori model run using the regional best-fit parameterization yields approximately 36 PgC/yr of global net primary production in the euphotic zone.
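The identical-twin idea can be illustrated with a toy variational fit in Python (the two-parameter logistic "phytoplankton" model, observation times and noise are invented, and scipy's quasi-Newton optimizer stands in for the adjoint-based gradient): synthetic observations are generated with known parameters, the parameters are perturbed, and minimising a quadratic model-data cost function recovers them.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
t_obs = np.linspace(0.0, 30.0, 16)                  # observation times, days

def simulate(params, t):
    mu, m = params                                  # growth and loss rates, 1/day
    P = np.empty_like(t)
    P[0] = 0.1
    for i in range(1, t.size):                      # forward Euler integration
        dt = t[i] - t[i - 1]
        P[i] = P[i - 1] + dt * (mu * P[i - 1] * (1.0 - P[i - 1]) - m * P[i - 1])
    return P

true_params = (0.6, 0.1)
obs = simulate(true_params, t_obs) + rng.normal(0.0, 0.02, t_obs.size)

def cost(params):                                   # quadratic model-data misfit
    return 0.5 * np.sum((simulate(params, t_obs) - obs) ** 2)

fit = minimize(cost, x0=[0.3, 0.3], method="L-BFGS-B",
               bounds=[(0.01, 2.0), (0.01, 1.0)])
print("true parameters:     ", true_params)
print("recovered parameters:", fit.x.round(3))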
STS-40 orbital acceleration research experiment flight results during a typical sleep period
NASA Technical Reports Server (NTRS)
Blanchard, Robert C.; Nicholson, John Y.; Ritter, James R.
1992-01-01
The Orbital Acceleration Research Experiment (OARE), an electrostatic accelerometer package with complete on-orbit calibration capabilities, was flown aboard the Shuttle on STS-40. The instrument is designed to measure and record the Shuttle aerodynamic acceleration environment from the free-molecule flow regime through the rarefied-flow transition into the hypersonic continuum regime. Because of its sensitivity, the OARE instrument detects aerodynamic behavior of the Shuttle while in low Earth orbit. A 2-h orbital time period on day seven of the mission, when the crew was asleep and other spacecraft activities were at a minimum, was examined. Examination of the model with the flight data shows the instrument to be sensitive to all major expected low-frequency acceleration phenomena; however, some erratic instrument bias behavior persists in two axes. In these axes, the OARE data can be made to match a comprehensive atmospheric-aerodynamic model by making bias adjustments and slight linear corrections for drift.
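The kind of bias adjustment and linear drift correction mentioned above amounts to a least-squares fit of the measured-minus-modelled signal; the synthetic numbers below (signal amplitude, bias, drift, noise) are invented for illustration.

import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 7200.0, 721)                       # 2-hour window, s
orbital_period = 5400.0                                 # s, roughly one low Earth orbit
model = 1e-6 * np.sin(2 * np.pi * t / orbital_period)   # modelled aero signal (assumed)
bias, drift = 3e-7, 5e-11                               # instrument bias and drift (assumed)
measured = model + bias + drift * t + 2e-8 * rng.standard_normal(t.size)

fit_drift, fit_bias = np.polyfit(t, measured - model, 1)  # fit residual = bias + drift*t
corrected = measured - (fit_bias + fit_drift * t)
print(f"fitted bias {fit_bias:.2e}, fitted drift {fit_drift:.2e}")
print(f"rms misfit to the model after correction: {np.std(corrected - model):.2e}")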
Swanson, William H; Dul, Mitchell W; Horner, Douglas G; Liu, Tiffany; Tran, Irene
2014-01-20
To develop perimetric stimuli for which sensitivities are more resistant to reduced retinal illumination than current clinical perimeters. Fifty-four people free of eye disease were dilated and tested monocularly. For each test, retinal illumination was attenuated with neutral density (ND) filters, and a standard adaptation model was fit to derive mean and SEM for the adaptation parameter (NDhalf). For different stimuli, t-tests on NDhalf were used to assess significance of differences in consistency with Weber's law. Three experiments used custom Gaussian-windowed contrast sensitivity perimetry (CSP). Experiment 1 used CSP-1, with a Gaussian temporal pulse, a spatial frequency of 0.375 cyc/deg (cpd), and SD of 1.5°. Experiment 1 also used the Humphrey Matrix perimeter, with the N-30 test using 0.25 cpd and 25 Hz flicker. Experiment 2 used a rectangular temporal pulse, SDs of 0.25° and 0.5°, and spatial frequencies of 0.0 and 1.0 cpd. Experiment 3 used CSP-2, with 5-Hz flicker, SDs from 0.5° to 1.8°, and spatial frequencies from 0.14 to 0.50 cpd. In Experiment 1, CSP-1 was more consistent with Weber's law (NDhalf ± SEM = 1.86 ± 0.08 log unit) than N-30 (NDhalf = 1.03 ± 0.03 log unit; t > 9, P < 0.0001). All stimuli used in Experiments 2 and 3 had comparable consistency with Weber's law (NDhalf = 1.49-1.69 log unit; t < 2). Perimetric sensitivities were consistent with Weber's law when higher temporal frequencies were avoided.
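A hedged sketch of how an adaptation parameter like NDhalf can be estimated: sensitivities measured through increasing neutral-density attenuation are fitted with a simple saturating adaptation curve. The functional form, data points and noise below are assumed for illustration and may differ from the model actually used in the study.

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)

def adaptation(nd, log_s_max, nd_half):
    # flat (Weber-like) for nd << nd_half, falling off once attenuation exceeds nd_half
    return log_s_max - np.log10(1.0 + 10.0 ** (nd - nd_half))

nd_levels = np.array([0.0, 0.6, 1.2, 1.8, 2.4])      # neutral-density filters, log units
log_sens = adaptation(nd_levels, 1.8, 1.6) + rng.normal(0.0, 0.03, nd_levels.size)

popt, pcov = curve_fit(adaptation, nd_levels, log_sens, p0=[1.5, 1.0])
nd_half, nd_half_sem = popt[1], np.sqrt(pcov[1, 1])
print(f"fitted ND_half = {nd_half:.2f} +/- {nd_half_sem:.2f} log units")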
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mueller, Don; Rearden, Bradley T; Hollenbach, Daniel F
2009-02-01
The Radiochemical Development Facility at Oak Ridge National Laboratory has been storing solid materials containing 233U for decades. Preparations are under way to process these materials into a form that is inherently safe from a nuclear criticality safety perspective. This will be accomplished by down-blending the 233U materials with depleted or natural uranium. At the request of the U.S. Department of Energy, a study has been performed using the SCALE sensitivity and uncertainty analysis tools to demonstrate how these tools could be used to validate nuclear criticality safety calculations of selected process and storage configurations. ISOTEK nuclear criticality safety staff provided four models that are representative of the criticality safety calculations for which validation will be needed. The SCALE TSUNAMI-1D and TSUNAMI-3D sequences were used to generate energy-dependent keff sensitivity profiles for each nuclide and reaction present in the four safety analysis models, also referred to as the applications, and in a large set of critical experiments. The SCALE TSUNAMI-IP module was used together with the sensitivity profiles and the cross-section uncertainty data contained in the SCALE covariance data files to propagate the cross-section uncertainties (Δσ/σ) to keff uncertainties (Δk/k) for each application model. The SCALE TSUNAMI-IP module was also used to evaluate the similarity of each of the 672 critical experiments with each application. Results of the uncertainty analysis and similarity assessment are presented in this report. A total of 142 experiments were judged to be similar to application 1, and 68 experiments were judged to be similar to application 2. None of the 672 experiments were judged to be adequately similar to applications 3 and 4. Discussion of the uncertainty analysis and similarity assessment is provided for each of the four applications. Example upper subcritical limits (USLs) were generated for application 1 based on trending of the energy of average lethargy of neutrons causing fission, trending of the TSUNAMI similarity parameters, and use of data adjustment techniques.
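The propagation and similarity steps rest on two standard formulas that are easy to sketch (the three-group sensitivity vectors and covariance matrix below are invented toy numbers, not SCALE data): the relative keff variance is the sandwich product S C S^T, and the similarity coefficient c_k between an application and an experiment is the corresponding covariance-weighted correlation.

import numpy as np

# Toy relative covariance matrix for three nuclide-reaction pairs (assumed values)
C = np.array([[4.0e-4, 1.0e-4, 0.0],
              [1.0e-4, 9.0e-4, 0.0],
              [0.0,    0.0,    1.0e-4]])
S_app = np.array([0.30, -0.10, 0.05])   # application keff sensitivities (assumed)
S_exp = np.array([0.28, -0.08, 0.20])   # experiment keff sensitivities (assumed)

var_app = S_app @ C @ S_app             # sandwich rule: relative variance of keff
var_exp = S_exp @ C @ S_exp
c_k = (S_app @ C @ S_exp) / np.sqrt(var_app * var_exp)

print(f"application dk/k = {np.sqrt(var_app):.4f}")
print(f"similarity c_k with the experiment = {c_k:.3f}")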
Application of the pressure sensitive paint technique to steady and unsteady flow
NASA Technical Reports Server (NTRS)
Shimbo, Y.; Mehta, R.; Cantwell, B.
1996-01-01
Pressure sensitive paint is a newly-developed optical measurement technique with which one can get a continuous pressure distribution in much shorter time and lower cost than a conventional pressure tap measurement. However, most of the current pressure sensitive paint applications are restricted to steady pressure measurement at high speeds because of the small signal-to-noise ratio at low speed and a slow response to pressure changes. In the present study, three phases of work have been completed to extend the application of the pressure sensitive paint technique to low-speed testing and to investigate the applicability of the paint technique to unsteady flow. First the measurement system using a commercially available PtOEP/GP-197 pressure sensitive paint was established and applied to impinging jet measurements. An in-situ calibration using only five pressure tap data points was applied and the results showed good repeatability and good agreement with conventional pressure tap measurements on the whole painted area. The overall measurement accuracy in these experiments was found to be within 0.1 psi. The pressure sensitive paint technique was then applied to low-speed wind tunnel tests using a 60 deg delta wing model with leading edge blowing slots. The technical problems encountered in low-speed testing were resolved by using a high grade CCD camera and applying corrections to improve the measurement accuracy. Even at 35 m/s, the paint data not only agreed well with conventional pressure tap measurements but also clearly showed the suction region generated by the leading edge vortices. The vortex breakdown was also detected at alpha=30 deg. It was found that a pressure difference of 0.2 psi was required for a quantitative pressure measurement in this experiment and that temperature control or a parallel temperature measurement is necessary if thermal uniformity does not hold on the model. Finally, the pressure sensitive paint was applied to a periodically changing pressure field with a 12.8s time period. A simple first-order pole model was applied to deal with the phase lag of the paint. The unsteady pressure estimated from the time-changing pressure sensitive paint data agreed well with the pressure transducer data in regions of higher pressure and showed the possibility of extending the technique to unsteady pressure measurements. However, the model still needs further refinement based on the physics of the oxygen diffusion into the paint layer and the oxygen quenching on the paint luminescence.
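The first-order pole idea can be sketched as follows (time constant, pressure amplitude and sampling are invented): the paint signal y lags the true pressure p according to dy/dt = (p - y)/tau, so, given an estimate of tau, the unsteady pressure can be recovered from the measured signal and its time derivative.

import numpy as np

dt = 0.05                                           # s, assumed sampling interval
t = np.arange(0.0, 60.0, dt)
p_true = 1.0 + 0.2 * np.sin(2 * np.pi * t / 12.8)   # 12.8 s period, as in the test
tau = 2.0                                           # paint response time constant, s (assumed)

y = np.empty_like(t)                                # simulated (lagging) paint signal
y[0] = p_true[0]
for i in range(1, t.size):
    y[i] = y[i - 1] + dt * (p_true[i - 1] - y[i - 1]) / tau

p_est = y + tau * np.gradient(y, dt)                # invert the first-order lag model
print(f"max error, uncorrected: {np.max(np.abs(y - p_true)):.3f}")
print(f"max error, corrected:   {np.max(np.abs(p_est - p_true)):.3f}")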
A MiniBooNE Accelerator-Produced (sub)-GeV Dark Matter Search
NASA Astrophysics Data System (ADS)
Thornton, Remington; MiniBooNE-DM Collaboration
2016-09-01
Cosmological observations indicate that our universe contains dark matter (DM), yet we have no measurements of its microscopic properties. Whereas the gravitational interaction of DM is well understood, its interaction with the Standard Model is not. Direct detection experiments search for a nuclear recoil interaction produced by a DM relic particle and have a low-mass sensitivity edge of order 1 GeV. To detect DM with mass below 1 GeV, either the sensitivity of the experiments needs to be improved or accelerators producing boosted low-mass DM must be used. Using neutrino detectors to search for low-mass DM is logical due to the similarity of the DM and ν signatures in the detector. The MiniBooNE experiment, located at Fermilab on the Booster Neutrino Beamline, ran for 10 years in ν and ν̄ modes and is already well understood, making it desirable to search for accelerator-produced boosted low-mass DM. A search for DM produced by 8 GeV protons hitting a steel beam-dump has finished, collecting 1.86 × 10²⁰ POT. The final analysis, containing 90% confidence limits and a model-independent fit, will be presented.
Acquisition of automatic imitation is sensitive to sensorimotor contingency.
Cook, Richard; Press, Clare; Dickinson, Anthony; Heyes, Cecilia
2010-08-01
The associative sequence learning model proposes that the development of the mirror system depends on the same mechanisms of associative learning that mediate Pavlovian and instrumental conditioning. To test this model, two experiments used the reduction of automatic imitation through incompatible sensorimotor training to assess whether mirror system plasticity is sensitive to contingency (i.e., the extent to which activation of one representation predicts activation of another). In Experiment 1, residual automatic imitation was measured following incompatible training in which the action stimulus was a perfect predictor of the response (contingent) or not at all predictive of the response (noncontingent). A contingency effect was observed: There was less automatic imitation indicative of more learning in the contingent group. Experiment 2 replicated this contingency effect and showed that, as predicted by associative learning theory, it can be abolished by signaling trials in which the response occurs in the absence of an action stimulus. These findings support the view that mirror system development depends on associative learning and indicate that this learning is not purely Hebbian. If this is correct, associative learning theory could be used to explain, predict, and intervene in mirror system development.
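A toy associative-learning sketch of why contingency matters (Rescorla-Wagner-style updating with a context cue; learning rate, trial numbers and probabilities are invented, and this is not a model of the specific training task): when the action stimulus perfectly predicts the response, the stimulus acquires strong associative strength, whereas when the response is equally likely without the stimulus, the context absorbs the learning and the stimulus acquires little.

import numpy as np

rng = np.random.default_rng(5)
alpha = 0.1                                  # learning rate (assumed)

def train(p_resp_with_stim, p_resp_context_alone, n_trials=1000):
    v_stim, v_context = 0.0, 0.0
    for _ in range(n_trials):
        stim_present = rng.random() < 0.5
        p = p_resp_with_stim if stim_present else p_resp_context_alone
        outcome = 1.0 if rng.random() < p else 0.0
        prediction = v_context + (v_stim if stim_present else 0.0)
        error = outcome - prediction          # shared prediction error
        v_context += alpha * error
        if stim_present:
            v_stim += alpha * error
    return v_stim

print(f"contingent training:    V(stimulus) = {train(1.0, 0.0):.2f}")
print(f"noncontingent training: V(stimulus) = {train(0.5, 0.5):.2f}")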
Modeling the effects of argument length and validity on inductive and deductive reasoning.
Rotello, Caren M; Heit, Evan
2009-09-01
In an effort to assess models of inductive reasoning and deductive reasoning, the authors, in 3 experiments, examined the effects of argument length and logical validity on evaluation of arguments. In Experiments 1a and 1b, participants were given either induction or deduction instructions for a common set of stimuli. Two distinct effects were observed: Induction judgments were more affected by argument length, and deduction judgments were more affected by validity. In Experiment 2, fluency was manipulated by displaying the materials in a low-contrast font, leading to increased sensitivity to logical validity. Several variants of 1-process and 2-process models of reasoning were assessed against the results. A 1-process model that assumed the same scale of argument strength underlies induction and deduction was not successful. A 2-process model that assumed separate, continuous informational dimensions of apparent deductive validity and associative strength gave the more successful account. (c) 2009 APA, all rights reserved.
NASA Astrophysics Data System (ADS)
Yoshida, K.; Naoe, H.
2016-12-01
Whether climate models reproduce the Quasi-Biennial Oscillation (QBO) appropriately is important for assessing the QBO impact on climate change, such as global warming and solar-related variation. However, few models generated a QBO in the Coupled Model Intercomparison Project Phase 5 (CMIP5). This study focuses on the dynamical structure of the QBO and its sensitivity to the background wind pattern and model configuration. We present preliminary results of experiments designed by "Towards Improving the QBO in Global Climate Models (QBOi)", an activity of the Stratosphere-troposphere Processes And their Role in Climate (SPARC) project, performed with the Meteorological Research Institute earth system model, MRI-ESM2. The simulations were performed under present-day climate conditions, under repeated annual-cycle conditions with various CO2 levels and sea surface temperatures, and as QBO hindcasts. In the present-climate simulation, zonal wind in the equatorial stratosphere generally exhibits realistic QBO behavior. Equatorial zonal wind variability associated with the QBO is overestimated in the upper stratosphere and underestimated in the lower stratosphere. In MRI-ESM2, the QBO behavior is mainly driven by the gravity wave drag parametrization (GWDP) introduced in Hines (1997). Compared with reanalyses, a shortage of resolved wave forcing is found, especially in the equatorial lower stratosphere. These discrepancies can be attributed to differences in wave forcing, background wind pattern and model configuration. We intend to show results of additional sensitivity experiments to examine how model configuration and background wind pattern affect the resolved wave source, wave propagation characteristics, and QBO behavior.
Sensitivity of Vadose Zone Water Fluxes to Climate Shifts in Arid Settings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pfletschinger, H.; Prömmel, K.; Schüth, C.
2014-01-01
Vadose zone water fluxes in arid settings are investigated regarding their sensitivity to hydraulic soil parameters and meteorological data. The study is based on the inverse modeling of highly defined soil column experiments and subsequent scenario modeling comparing different climate projections for a defined arid region. In arid regions, groundwater resources are prone to depletion due to excessive water use and little recharge potential. Especially in sand dune areas, groundwater recharge is highly dependent on vadose zone properties and corresponding water fluxes. Nevertheless, vadose zone water fluxes under arid conditions are hard to determine owing to, among other reasons, deep vadose zones with generally low fluxes and only sporadic high infiltration events. In this study, we present an inverse model of infiltration experiments accounting for variably saturated nonisothermal water fluxes to estimate effective hydraulic and thermal parameters of dune sands. A subsequent scenario modeling links the results of the inverse model with projections of a global climate model until 2100. The scenario modeling clearly showed the high dependency of groundwater recharge on precipitation amounts and intensities, whereas temperature increases are only of minor importance for deep infiltration. However, simulated precipitation rates are still affected by high uncertainties in the response to the hydrological input data of the climate model. Thus, higher certainty in the prediction of precipitation patterns is a major future goal for climate modeling to constrain future groundwater management strategies in arid regions.
Nicholas C. Coops; Richard H. Waring; Todd A. Schroeder
2009-01-01
Although long-lived tree species experience considerable environmental variation over their life spans, their geographical distributions reflect sensitivity mainly to mean monthly climatic conditions. We introduce an approach that incorporates a physiologically based growth model to illustrate how a half-dozen tree species differ in their responses to monthly variation...
Huang, Min; Carmichael, Gregory R.; Pierce, R. Bradley; Jo, Duseong S.; Park, Rokjin J.; Flemming, Johannes; Emmons, Louisa K.; Bowman, Kevin W.; Henze, Daven K.; Davila, Yanko; Sudo, Kengo; Jonson, Jan Eiof; Lund, Marianne Tronstad; Janssens-Maenhout, Greet; Dentener, Frank J.; Keating, Terry J.; Oetjen, Hilke; Payne, Vivienne H.
2018-01-01
The recent update on the US National Ambient Air Quality Standards (NAAQS) of the ground-level ozone (O3) can benefit from a better understanding of its source contributions in different US regions during recent years. In the Hemispheric Transport of Air Pollution experiment phase 1 (HTAP1), various global models were used to determine the O3 source–receptor (SR) relationships among three continents in the Northern Hemisphere in 2001. In support of the HTAP phase 2 (HTAP2) experiment that studies more recent years and involves higher-resolution global models and regional models’ participation, we conduct a number of regional-scale Sulfur Transport and dEposition Model (STEM) air quality base and sensitivity simulations over North America during May–June 2010. STEM’s top and lateral chemical boundary conditions were downscaled from three global chemical transport models’ (i.e., GEOS-Chem, RAQMS, and ECMWF C-IFS) base and sensitivity simulations in which the East Asian (EAS) anthropogenic emissions were reduced by 20 %. The mean differences between STEM surface O3 sensitivities to the emission changes and its corresponding boundary condition model’s are smaller than those among its boundary condition models, in terms of the regional/period-mean (<10 %) and the spatial distributions. An additional STEM simulation was performed in which the boundary conditions were downscaled from a RAQMS (Realtime Air Quality Modeling System) simulation without EAS anthropogenic emissions. The scalability of O3 sensitivities to the size of the emission perturbation is spatially varying, and the full (i.e., based on a 100% emission reduction) source contribution obtained from linearly scaling the North American mean O3 sensitivities to a 20% reduction in the EAS anthropogenic emissions may be underestimated by at least 10 %. The three boundary condition models’ mean O3 sensitivities to the 20% EAS emission perturbations are ~8% (May–June 2010)/~11% (2010 annual) lower than those estimated by eight global models, and the multi-model ensemble estimates are higher than the HTAP1 reported 2001 conditions. GEOS-Chem sensitivities indicate that the EAS anthropogenic NOx emissions matter more than the other EAS O3 precursors to the North American O3, qualitatively consistent with previous adjoint sensitivity calculations. In addition to the analyses on large spatial–temporal scales relative to the HTAP1, we also show results on subcontinental and event scales that are more relevant to the US air quality management. The EAS pollution impacts are weaker during observed O3 exceedances than on all days in most US regions except over some high-terrain western US rural/remote areas. Satellite O3 (TES, JPL–IASI, and AIRS) and carbon monoxide (TES and AIRS) products, along with surface measurements and model calculations, show that during certain episodes stratospheric O3 intrusions and the transported EAS pollution influenced O3 in the western and the eastern US differently. Free-running (i.e., without chemical data assimilation) global models underpredicted the transported background O3 during these episodes, posing difficulties for STEM to accurately simulate the surface O3 and its source contribution.
Although we effectively improved the modeled O3 by incorporating satellite O3 (OMI and MLS) and evaluated the quality of the HTAP2 emission inventory with the Royal Netherlands Meteorological Institute–Ozone Monitoring Instrument (KNMI–OMI) nitrogen dioxide, using observations to evaluate and improve O3 source attribution still remains to be further explored. PMID:29780406
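The scalability caveat amounts to simple arithmetic, sketched below with invented numbers (the ozone responses are placeholders, not values from the study): linearly scaling the response to a 20 % emission cut by a factor of five is compared with the response to removing the emissions entirely.

d_o3_20pct = 0.30      # ppb surface O3 decrease for a 20 % EAS emission cut (assumed)
d_o3_100pct = 1.70     # ppb decrease when EAS anthropogenic emissions are removed (assumed)

scaled_estimate = 5.0 * d_o3_20pct
underestimate = 1.0 - scaled_estimate / d_o3_100pct
print(f"linear scaling: {scaled_estimate:.2f} ppb vs zero-out run: {d_o3_100pct:.2f} ppb")
print(f"the scaled estimate underestimates the full contribution by {underestimate:.0%}")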
The Cloud Feedback Model Intercomparison Project (CFMIP) contribution to CMIP6
Webb, Mark J.; Andrews, Timothy; Bodas-Salcedo, Alejandro; ...
2017-01-01
The primary objective of CFMIP is to inform future assessments of cloud feedbacks through improved understanding of cloud–climate feedback mechanisms and better evaluation of cloud processes and cloud feedbacks in climate models. However, the CFMIP approach is also increasingly being used to understand other aspects of climate change, and so a second objective has now been introduced, to improve understanding of circulation, regional-scale precipitation, and non-linear changes. CFMIP is supporting ongoing model inter-comparison activities by coordinating a hierarchy of targeted experiments for CMIP6, along with a set of cloud-related output diagnostics. CFMIP contributes primarily to addressing the CMIP6 questions "How does the Earth system respond to forcing?" and "What are the origins and consequences of systematic model biases?" and supports the activities of the WCRP Grand Challenge on Clouds, Circulation and Climate Sensitivity. A compact set of Tier 1 experiments is proposed for CMIP6 to address this question: (1) what are the physical mechanisms underlying the range of cloud feedbacks and cloud adjustments predicted by climate models, and which models have the most credible cloud feedbacks? Additional Tier 2 experiments are proposed to address the following questions. (2) Are cloud feedbacks consistent for climate cooling and warming, and if not, why? (3) How do cloud-radiative effects impact the structure, the strength and the variability of the general atmospheric circulation in present and future climates? (4) How do responses in the climate system due to changes in solar forcing differ from changes due to CO2, and is the response sensitive to the sign of the forcing? (5) To what extent is regional climate change per CO2 doubling state-dependent (non-linear), and why? (6) Are climate feedbacks during the 20th century different to those acting on long-term climate change and climate sensitivity? (7) How do regional climate responses (e.g. in precipitation) and their uncertainties in coupled models arise from the combination of different aspects of CO2 forcing and sea surface warming? CFMIP also proposes a number of additional model outputs in the CMIP DECK, CMIP6 Historical and CMIP6 CFMIP experiments, including COSP simulator outputs and process diagnostics, to address the following questions. How well do clouds and other relevant variables simulated by models agree with observations? What physical processes and mechanisms are important for a credible simulation of clouds, cloud feedbacks and cloud adjustments in climate models? Which models have the most credible representations of processes relevant to the simulation of clouds? How do clouds and their changes interact with other elements of the climate system?
The Cloud Feedback Model Intercomparison Project (CFMIP) contribution to CMIP6.
NASA Technical Reports Server (NTRS)
Webb, Mark J.; Andrews, Timothy; Bodas-Salcedo, Alejandro; Bony, Sandrine; Bretherton, Christopher S.; Chadwick, Robin; Chepfer, Helene; Douville, Herve; Good, Peter; Kay, Jennifer E.;
2017-01-01
The primary objective of CFMIP is to inform future assessments of cloud feedbacks through improved understanding of cloud-climate feedback mechanisms and better evaluation of cloud processes and cloud feedbacks in climate models. However, the CFMIP approach is also increasingly being used to understand other aspects of climate change, and so a second objective has now been introduced, to improve understanding of circulation, regional-scale precipitation, and non-linear changes. CFMIP is supporting ongoing model inter-comparison activities by coordinating a hierarchy of targeted experiments for CMIP6, along with a set of cloud-related output diagnostics. CFMIP contributes primarily to addressing the CMIP6 questions 'How does the Earth system respond to forcing?' and 'What are the origins and consequences of systematic model biases?' and supports the activities of the WCRP Grand Challenge on Clouds, Circulation and Climate Sensitivity. A compact set of Tier 1 experiments is proposed for CMIP6 to address this question: (1) what are the physical mechanisms underlying the range of cloud feedbacks and cloud adjustments predicted by climate models, and which models have the most credible cloud feedbacks? Additional Tier 2 experiments are proposed to address the following questions. (2) Are cloud feedbacks consistent for climate cooling and warming, and if not, why? (3) How do cloud-radiative effects impact the structure, the strength and the variability of the general atmospheric circulation in present and future climates? (4) How do responses in the climate system due to changes in solar forcing differ from changes due to CO2, and is the response sensitive to the sign of the forcing? (5) To what extent is regional climate change per CO2 doubling state-dependent (non-linear), and why? (6) Are climate feedbacks during the 20th century different to those acting on long-term climate change and climate sensitivity? (7) How do regional climate responses (e.g. in precipitation) and their uncertainties in coupled models arise from the combination of different aspects of CO2 forcing and sea surface warming? CFMIP also proposes a number of additional model outputs in the CMIP DECK, CMIP6 Historical and CMIP6 CFMIP experiments, including COSP simulator outputs and process diagnostics to address the following questions. 1. How well do clouds and other relevant variables simulated by models agree with observations? 2. What physical processes and mechanisms are important for a credible simulation of clouds, cloud feedbacks and cloud adjustments in climate models? 3. Which models have the most credible representations of processes relevant to the simulation of clouds? 4. How do clouds and their changes interact with other elements of the climate system?
Initialization shock in decadal hindcasts due to errors in wind stress over the tropical Pacific
NASA Astrophysics Data System (ADS)
Pohlmann, Holger; Kröger, Jürgen; Greatbatch, Richard J.; Müller, Wolfgang A.
2017-10-01
Low prediction skill in the tropical Pacific is a common problem in decadal prediction systems, especially for lead years 2-5, for which skill in many systems is lower than in uninitialized experiments. On the other hand, the tropical Pacific is of almost worldwide climate relevance through its teleconnections with other tropical and extratropical regions and is also of importance for global mean temperature. Understanding the causes of the reduced prediction skill is thus of major interest for decadal climate predictions. We look into the problem of reduced prediction skill by analyzing the Max Planck Institute Earth System Model (MPI-ESM) decadal hindcasts for the fifth phase of the Coupled Model Intercomparison Project and by performing a sensitivity experiment in which hindcasts are initialized from a model run forced only by surface wind stress. In both systems, sea surface temperature variability in the tropical Pacific is successfully initialized, but most skill is lost at lead years 2-5. The sensitivity experiment enables us to pin down the reason for the reduced prediction skill in MPI-ESM to errors in the wind stress used for the initialization. A spurious trend in the wind stress forcing displaces the equatorial thermocline in MPI-ESM unrealistically. When the climate model is then switched into its forecast mode, the recovery process triggers artificial El Niño and La Niña events at the surface. Our results demonstrate the importance of realistic wind stress products for the initialization of decadal predictions.
Evaluation of microarray data normalization procedures using spike-in experiments
Rydén, Patrik; Andersson, Henrik; Landfors, Mattias; Näslund, Linda; Hartmanová, Blanka; Noppa, Laila; Sjöstedt, Anders
2006-01-01
Background Recently, a large number of methods for the analysis of microarray data have been proposed, but there are few comparisons of their relative performances. By using so-called spike-in experiments, it is possible to characterize the analyzed data and thereby enable comparisons of different analysis methods. Results A spike-in experiment using eight in-house produced arrays was used to evaluate established and novel methods for filtration, background adjustment, scanning, channel adjustment, and censoring. The S-plus package EDMA, a stand-alone tool providing characterization of analyzed cDNA-microarray data obtained from spike-in experiments, was developed and used to evaluate 252 normalization methods. For all analyses, the sensitivities at low false positive rates were observed together with estimates of the overall bias and the standard deviation. In general, there was a trade-off between the ability of the analyses to identify differentially expressed genes (i.e. the analyses' sensitivities) and their ability to provide unbiased estimators of the desired ratios. Virtually all analyses underestimated the magnitude of the regulations; often less than 50% of the true regulation was observed. Moreover, the bias depended on the underlying mRNA concentration; low concentration resulted in high bias. Many of the analyses had relatively low sensitivities, but analyses that used either the constrained model (i.e. a procedure that combines data from several scans) or partial filtration (a novel method for treating data from so-called not-found spots) had, with few exceptions, high sensitivities. These methods gave considerably higher sensitivities than some commonly used analysis methods. Conclusion The use of spike-in experiments is a powerful approach for evaluating microarray preprocessing procedures. Analyzed data are characterized by properties of the observed log-ratios and the analysis' ability to detect differentially expressed genes. If bias is not a major problem, we recommend the use of either the CM-procedure or partial filtration. PMID:16774679
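Because spike-in experiments provide known true regulations, the evaluation quantities mentioned above (sensitivity at a low false-positive rate, and bias of the estimated log-ratios) are straightforward to compute. The sketch below uses simulated data and generic thresholds; it does not reproduce the EDMA package or the paper's arrays.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated spike-in experiment: known true log2-ratios for 1000 genes,
# 50 of which are truly regulated (|log2 ratio| = 1); values are illustrative.
n_genes, n_regulated = 1000, 50
true_log2 = np.zeros(n_genes)
true_log2[:n_regulated] = 1.0
# Observed log-ratios: attenuated (biased toward zero) and noisy.
observed = 0.5 * true_log2 + rng.normal(0.0, 0.25, n_genes)

# Bias: regulated genes appear compressed toward zero.
bias = observed[:n_regulated].mean() - true_log2[:n_regulated].mean()
print(f"mean bias for regulated genes: {bias:+.2f} log2 units")

# Sensitivity at a low false-positive rate: choose the threshold so that
# at most 1% of the unregulated genes are called significant.
threshold = np.quantile(np.abs(observed[n_regulated:]), 0.99)
sensitivity = np.mean(np.abs(observed[:n_regulated]) > threshold)
print(f"sensitivity at ~1% FPR: {sensitivity:.0%}")
```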
Advanced Numerical Model for Irradiated Concrete
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giorla, Alain B.
In this report, we establish a numerical model for concrete exposed to irradiation to address these three critical points. The model accounts for creep in the cement paste and its coupling with damage, temperature and relative humidity. The shift in failure mode with the loading rate is also properly represented. The numerical model for creep has been validated and calibrated against different experiments in the literature [Wittmann, 1970, Le Roy, 1995]. Results from a simplified model are shown to showcase the ability of numerical homogenization to simulate irradiation effects in concrete. In future work, the complete model will be applied to the analysis of the irradiation experiments of Elleuch et al. [1972] and Kelly et al. [1969]. This requires a careful examination of the experimental environmental conditions, as in both cases certain critical information is missing, including the relative humidity history. A sensitivity analysis will be conducted to provide lower and upper bounds of the concrete expansion under irradiation, and to check whether the scatter in the simulated results matches that found in the experiments. The numerical and experimental results will be compared in terms of expansion and loss of mechanical stiffness and strength. Both effects should be captured accordingly by the model to validate it. Once the model has been validated on these two experiments, it can be applied to simulate concrete from nuclear power plants. To do so, the materials used in these concretes must be as well characterized as possible. The main parameters required are the mechanical properties of each constituent in the concrete (aggregates, cement paste), namely the elastic modulus, the creep properties, the tensile and compressive strength, the thermal expansion coefficient, and the drying shrinkage. These can be either measured experimentally, estimated from the initial composition in the case of cement paste, or back-calculated from mechanical tests on concrete. If some are unknown, a sensitivity analysis must be carried out to provide lower and upper bounds of the material behaviour. Finally, the model can be used as a basis to formulate a macroscopic material model for concrete subject to irradiation, which later can be used in structural analyses to estimate the structural impact of irradiation on nuclear power plants.
NASA Astrophysics Data System (ADS)
Kirkegaard, Casper; Foged, Nikolaj; Auken, Esben; Christiansen, Anders Vest; Sørensen, Kurt
2012-09-01
Helicopter-borne time domain EM systems historically measure only the Z-component of the secondary field, whereas fixed-wing systems often measure all field components. For the latter systems the X-component is often used to map discrete conductors, whereas it finds little use in the mapping of layered settings. Measuring the horizontal X-component with an offset-loop helicopter system probes the earth with a complementary sensitivity function that is very different from that of the Z-component, and could potentially be used to improve the resolution of layered structures in one-dimensional modeling. This area is largely unexplored in terms of quantitative results in the literature, since measuring and inverting X-component data from a helicopter system is not straightforward: the signal strength is low, the noise level is high, the signal is very sensitive to the instrument pitch, and the sensitivity function also has a complex lateral behavior. The basis of our study is a state-of-the-art inversion scheme, using a local 1D forward model description, in combination with experience gathered from extending the SkyTEM system to measure the X-component. By means of a 1D sensitivity analysis we argue that, in principle, the resolution of layered structures can be improved by including an X-component signal in a 1D inversion, given the prerequisite that a low-pass filter of suitably low cut-off frequency can be employed. In presenting our practical experience with modifying the SkyTEM system we discuss why this prerequisite unfortunately can be very difficult to fulfill in practice. Having discussed instrumental limitations, we show what can be obtained in practice using actual field data. Here, we demonstrate how the issue of high sensitivity towards instrument pitch can be overcome by including the pitch angle as an inversion parameter, and how joint inversion of the Z- and X-components produces virtually the same model result as the Z-component alone. We conclude that adding the helicopter-system X-component to a 1D inversion can be used to facilitate higher confidence in the layered result, as the requirements for fitting the data within a 1D model envelope become more stringent and the model result is thus less prone to misinterpretation.
Characterization of a developmental toxicity dose-response model.
Faustman, E M; Wellington, D G; Smith, W P; Kimmel, C A
1989-01-01
The Rai and Van Ryzin dose-response model proposed for teratology experiments has been characterized for its appropriateness and applicability in modeling the dichotomous response data from developmental toxicity studies. Modifications were made in the initial probability statements to reflect more accurately biological events underlying developmental toxicity. Data sets used for the evaluation were obtained from the National Toxicology Program and U.S. EPA laboratories. The studies included developmental evaluations of ethylene glycol, diethylhexyl phthalate, di- and triethylene glycol dimethyl ethers, and nitrofen in rats, mice, or rabbits. Graphic examination and statistical evaluation demonstrate that this model is sensitive to the data when compared to directly measured experimental outcomes. The model was used to interpolate to low-risk dose levels, and comparisons were made between the values obtained and the no-observed-adverse-effect levels (NOAELs) divided by an uncertainty factor. Our investigation suggests that the Rai and Van Ryzin model is sensitive to the developmental toxicity end points, prenatal deaths, and malformations, and appears to model closely their relationship to dose. PMID:2707204
Privacy preserving data anonymization of spontaneous ADE reporting system dataset.
Lin, Wen-Yang; Yang, Duen-Chuan; Wang, Jie-Teng
2016-07-18
To facilitate long-term safety surveillance of marketed drugs, many spontaneous reporting systems (SRSs) of ADR events have been established worldwide. Since the data collected by SRSs contain sensitive personal health information that should be protected to prevent the identification of individuals, this raises the issue of privacy-preserving data publishing (PPDP), that is, how to sanitize (anonymize) raw data before publishing. Although much work has been done on PPDP, very few studies have focused on protecting the privacy of SRS data, and none of the existing anonymization methods is well suited to SRS datasets, which contain characteristics such as rare events, multiple records per individual, and multi-valued sensitive attributes. We propose a new privacy model called MS(k, θ*)-bounding for protecting published spontaneous ADE reporting data from privacy attacks. Our model has the flexibility of varying privacy thresholds, i.e., θ*, for different sensitive values and takes the characteristics of SRS data into consideration. We also propose an anonymization algorithm for sanitizing the raw data to meet the requirements specified through the proposed model. Our algorithm adopts a greedy clustering strategy to group the records into clusters, conforming to an innovative anonymization metric that aims to minimize the privacy risk while maintaining the data utility for ADR detection. An empirical study was conducted using the FAERS dataset from 2004Q1 to 2011Q4. We compared our model with four prevailing methods, including k-anonymity, (X, Y)-anonymity, multi-sensitive l-diversity, and (α, k)-anonymity, evaluated via two measures, Danger Ratio (DR) and Information Loss (IL), and considered three different threshold-setting scenarios for θ*: uniform, level-wise, and frequency-based. We also conducted experiments to inspect the impact of anonymized data on the strength of discovered ADR signals. With all three threshold settings for sensitive values, our method successfully prevented the disclosure of sensitive values (nearly all observed DRs are zero) without sacrificing too much data utility. With non-uniform threshold settings, level-wise or frequency-based, our MS(k, θ*)-bounding exhibits the best data utility and the least privacy risk among all the models. The experiments conducted on selected ADR signals from MedWatch show that only very small differences in signal strength (PRR or ROR) were observed. The results show that our method can effectively prevent the disclosure of patient-sensitive information without sacrificing data utility for ADR signal detection. In summary, we propose a new privacy model for protecting SRS data that possess characteristics overlooked by contemporary models, and an anonymization algorithm to sanitize SRS data in accordance with the proposed model. Empirical evaluation on the real SRS dataset, i.e., FAERS, shows that our method can effectively solve the privacy problem in SRS data without influencing the ADR signal strength.
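The precise definition of MS(k, θ*)-bounding is given in the article itself; as a rough, hedged reading of the abstract, an anonymized group satisfies the model if it contains at least k records and, for every sensitive value s, the fraction of records in the group carrying s does not exceed its threshold θ*(s). The check below implements that reading with hypothetical data and is not the authors' algorithm.

```python
from collections import Counter

def satisfies_ms_k_theta(group_sensitive_values, k, theta):
    """Hedged check of an MS(k, theta*)-style condition on one anonymized group.

    group_sensitive_values: sensitive values (e.g., ADE terms) of the records in the group.
    k: minimum group size.
    theta: dict mapping each sensitive value to its disclosure threshold theta*(s).
    """
    n = len(group_sensitive_values)
    if n < k:
        return False
    counts = Counter(group_sensitive_values)
    # Every sensitive value must stay below its own threshold within the group.
    return all(counts[s] / n <= theta.get(s, 1.0) for s in counts)

# Hypothetical group of 6 reports and value-specific thresholds.
group = ["nausea", "nausea", "liver failure", "headache", "headache", "nausea"]
thresholds = {"liver failure": 0.2, "nausea": 0.6, "headache": 0.5}
print(satisfies_ms_k_theta(group, k=5, theta=thresholds))                     # True
print(satisfies_ms_k_theta(group, k=5, theta={**thresholds, "nausea": 0.4}))  # False: nausea in 3/6 > 0.4
```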
Remote sensing of mineral dust aerosol using AERI during the UAE2: A modeling and sensitivity study
NASA Astrophysics Data System (ADS)
Hansell, R. A.; Liou, K. N.; Ou, S. C.; Tsay, S. C.; Ji, Q.; Reid, J. S.
2008-09-01
Numerical simulations and sensitivity studies have been performed to assess the potential for using brightness temperature spectra from a ground-based Atmospheric Emitted Radiance Interferometer (AERI) during the United Arab Emirates Unified Aerosol Experiment (UAE2) for detecting/retrieving mineral dust aerosol. A methodology for separating dust from clouds and retrieving the dust IR optical depths was developed by exploiting differences between their spectral absorptive powers in prescribed thermal IR window subbands. Dust microphysical models were constructed using in situ data from the UAE2 and prior field studies while composition was modeled using refractive index data sets for minerals commonly observed around the UAE region including quartz, kaolinite, and calcium carbonate. The T-matrix, finite difference time domain (FDTD), and Lorenz-Mie light scattering programs were employed to calculate the single scattering properties for three dust shapes: oblate spheroids, hexagonal plates, and spheres. We used the Code for High-resolution Accelerated Radiative Transfer with Scattering (CHARTS) radiative transfer program to investigate sensitivity of the modeled AERI spectra to key dust and atmospheric parameters. Sensitivity studies show that characterization of the thermodynamic boundary layer is crucial for accurate AERI dust detection/retrieval. Furthermore, AERI sensitivity to dust optical depth is manifested in the strong subband slope dependence of the window region. Two daytime UAE2 cases were examined to demonstrate the present detection/retrieval technique, and we show that the results compare reasonably well to collocated AERONET Sun photometer/MPLNET micropulse lidar measurements. Finally, sensitivity of the developed methodology to the AERI's estimated MgCdTe detector nonlinearity was evaluated.
An individual reproduction model sensitive to milk yield and body condition in Holstein dairy cows.
Brun-Lafleur, L; Cutullic, E; Faverdin, P; Delaby, L; Disenhaus, C
2013-08-01
To simulate the consequences of management in dairy herds, the use of individual-based herd models is very useful and has become common. Reproduction is a key driver of milk production and herd dynamics, whose influence has been magnified by the decrease in reproductive performance over the last decades. Moreover, feeding management influences milk yield (MY) and body reserves, which in turn influence reproductive performance. Therefore, our objective was to build an up-to-date animal reproduction model sensitive to both MY and body condition score (BCS). A dynamic and stochastic individual reproduction model was built mainly from data of a single recent long-term experiment. This model covers the whole reproductive process and is composed of a succession of discrete stochastic events, mainly calving, ovulations, conception and embryonic loss. Each reproductive step is sensitive to MY or BCS levels or changes. The model takes into account recent evolutions of reproductive performance, particularly concerning the calving-to-first-ovulation interval, cyclicity (normal cycle length, prevalence of prolonged luteal phase), oestrus expression and pregnancy (conception, early and late embryonic loss). A sensitivity analysis of the model to MY and BCS at calving was performed. The simulated performance was compared with observed data from the database used to build the model and from the literature to validate the model. Despite comprising a whole series of reproductive steps, the model made it possible to simulate realistic global reproduction outputs. It simulated well the overall reproductive performance observed in farms in terms of both success rate (recalving rate) and reproduction delays (calving interval). This model is intended to be integrated into herd simulation models to test the impact of management strategies on herd reproductive performance, and thus on calving patterns and culling rates.
NASA Astrophysics Data System (ADS)
Pitman, Andrew J.; Yang, Zong-Liang; Henderson-Sellers, Ann
1993-10-01
The sensitivity of a land surface scheme to the distribution of precipitation within a general circulation model's grid element is investigated. Earlier experiments which showed considerable sensitivity of the runoff and evaporation simulation to the distribution of precipitation are repeated in the light of other results which show no sensitivity of evaporation to the distribution of precipitation. Results show that while the earlier results over-estimated the sensitivity of the surface hydrology to the precipitation distribution, the general conclusion that the system is sensitive is supported. It is found that changing the distribution of precipitation from falling over 100% of the grid square to falling over 10% leads to a reduction in evaporation from 1578 mm y-1 to 1195 mm y-1, while runoff increases from 278 mm y-1 to 602 mm y-1. The sensitivity is explained in terms of evaporation being dominated by available energy when precipitation falls over nearly the entire grid square, but by moisture availability (mainly intercepted water) when it falls over little of the grid square. These results also indicate that earlier work using stand-alone forcing to drive land surface schemes ‘off-line’, and to investigate the sensitivity of land surface codes to various parameters, leads to results which are non-repeatable in single column simulations.
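The mechanism described (interception-dominated evaporation when rain falls on a small fraction of the grid square) can be illustrated with a deliberately crude single-bucket sketch. The interception capacity, potential evaporation rate, and partitioning rules below are assumptions for illustration, not the scheme used in the paper.

```python
# Crude illustration of why the wetted fraction matters: the same grid-mean rain,
# concentrated on a small fraction of the grid box, produces intense local rain
# that exceeds the local interception/infiltration capacity and runs off, while
# the dry fraction contributes little evaporation.  All parameters are invented.
def partition(grid_mean_rain_mm, wet_fraction, local_capacity_mm=20.0, pot_evap_mm=4.0):
    local_rain = grid_mean_rain_mm / wet_fraction          # intensity where rain actually falls
    local_runoff = max(0.0, local_rain - local_capacity_mm)
    runoff = local_runoff * wet_fraction                   # back to grid-mean units
    stored = grid_mean_rain_mm - runoff
    # Wet fraction evaporates near the potential rate; dry fraction is moisture-limited.
    evap = min(stored, pot_evap_mm * wet_fraction + 0.2 * pot_evap_mm * (1 - wet_fraction))
    return evap, runoff

for f in (1.0, 0.1):
    e, r = partition(grid_mean_rain_mm=5.0, wet_fraction=f)
    print(f"wet fraction {f:.0%}: evaporation {e:.2f} mm/day, fast runoff {r:.2f} mm/day")
```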
Mind the Gap: Two Dissociable Mechanisms of Temporal Processing in the Auditory System
Anderson, Lucy A.
2016-01-01
High temporal acuity of auditory processing underlies perception of speech and other rapidly varying sounds. A common measure of auditory temporal acuity in humans is the threshold for detection of brief gaps in noise. Gap-detection deficits, observed in developmental disorders, are considered evidence for “sluggish” auditory processing. Here we show, in a mouse model of gap-detection deficits, that auditory brain sensitivity to brief gaps in noise can be impaired even without a general loss of central auditory temporal acuity. Extracellular recordings in three different subdivisions of the auditory thalamus in anesthetized mice revealed a stimulus-specific, subdivision-specific deficit in thalamic sensitivity to brief gaps in noise in experimental animals relative to controls. Neural responses to brief gaps in noise were reduced, but responses to other rapidly changing stimuli unaffected, in lemniscal and nonlemniscal (but not polysensory) subdivisions of the medial geniculate body. Through experiments and modeling, we demonstrate that the observed deficits in thalamic sensitivity to brief gaps in noise arise from reduced neural population activity following noise offsets, but not onsets. These results reveal dissociable sound-onset-sensitive and sound-offset-sensitive channels underlying auditory temporal processing, and suggest that gap-detection deficits can arise from specific impairment of the sound-offset-sensitive channel. SIGNIFICANCE STATEMENT The experimental and modeling results reported here suggest a new hypothesis regarding the mechanisms of temporal processing in the auditory system. Using a mouse model of auditory temporal processing deficits, we demonstrate the existence of specific abnormalities in auditory thalamic activity following sound offsets, but not sound onsets. These results reveal dissociable sound-onset-sensitive and sound-offset-sensitive mechanisms underlying auditory processing of temporally varying sounds. Furthermore, the findings suggest that auditory temporal processing deficits, such as impairments in gap-in-noise detection, could arise from reduced brain sensitivity to sound offsets alone. PMID:26865621
NASA Astrophysics Data System (ADS)
Ragi, K. B.; Patel, R.
2015-12-01
A great deal of research has focused on deforestation scenarios in the tropical rainforests. Although these efforts are useful for understanding the forests' response to climate, a systematic understanding of the uncertainties in the representation of vegetation-related physical processes, obtained through sensitivity studies, is a prerequisite for understanding the real role of vegetation in changing the climate. It is understood that dense vegetation fluxes energy and moisture to the atmosphere. However, how much a specific process, or group of processes, in the surface conditions of a specific area contributes to fluxes of energy, moisture and tracers is unknown, owing to the lack of process sensitivity studies, and uncertain, owing to malfunctioning process representations. In this presentation, we identify, through process sensitivity studies, a faulty parameterization that affects the energy and moisture fluxes to the atmosphere. The model we employed is the Common Land Model 2014, and the area we chose is the Congolese rainforest. Through sensitivity studies in land surface models (LSMs), especially in dense forest regions, we traced the flaw to the leaf boundary layer resistance (LBLR). The LBLR is parameterized with a constant heat transfer coefficient, a characteristic dimension of leaves, and the friction velocity; this is too simple because it overlooks significant complex physics of turbulence and of the canopy roughness boundary layer. Our sensitivity results show the deficiency of this formulation, and we have instead formulated a canopy boundary layer resistance that depends on variables such as LAI, roughness length and vegetation temperature, using appropriate thermo-fluid dynamical principles. We are running sensitivity experiments with the new formulation to set parameter values for data not yet available. This effort should lead to better physics for land-use change studies and motivates the retrieval of new parameters, such as leaf mass per area and the specific heat capacity of vegetation, from satellite or field experiments.
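For context, a CLM-style leaf boundary layer resistance of the kind criticized above depends only on a fixed transfer coefficient, a fixed characteristic leaf dimension, and the friction velocity. The sketch below uses the constants commonly documented for CLM-family models, but they are assumptions here, and the formula may differ from the exact CoLM 2014 implementation.

```python
import numpy as np

# Hedged sketch of a CLM-style leaf boundary layer resistance (LBLR).
# The constants (C_v, d_leaf) are commonly documented CLM defaults; treat them
# as assumptions rather than the exact values used in the study's model.
C_V = 0.01     # turbulent transfer coefficient between canopy surface and air, m s^-1/2
D_LEAF = 0.04  # characteristic dimension (width) of leaves, m

def leaf_boundary_layer_resistance(ustar):
    """LBLR in s m^-1 as a function of friction velocity ustar (m s^-1)."""
    return np.sqrt(D_LEAF / ustar) / C_V

# The resistance responds only to friction velocity; canopy structure (LAI,
# roughness length) and vegetation temperature do not enter, which is the
# limitation motivating the canopy-boundary-layer reformulation in the abstract.
for ustar in (0.1, 0.3, 0.6):
    print(f"u* = {ustar:.1f} m/s  ->  r_b = {leaf_boundary_layer_resistance(ustar):.1f} s/m")
```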
NASA Astrophysics Data System (ADS)
Smith, S.; Ullman, D. J.; He, F.; Carlson, A. E.; Marzeion, B.; Maussion, F.
2017-12-01
Understanding the behavior of the world's glaciers during previous interglaciations is key to interpreting the sensitivity and behavior of the cryosphere under scenarios of future anthropogenic warming. Previous studies of the Last Interglaciation (LIG, 130 ka to 116 ka) indicate elevated global temperatures and higher sea levels than the Holocene, but most assessments of the impact on the cryosphere have focused on the mass balance and volume change of polar ice sheets. In assessing sea-level sources, most studies assume complete deglaciation of global glaciers, but this has yet to be tested. In addition, the significant changes in orbital forcing during the LIG and the associated impacts on climate seasonality and variability may have led to unique glacier evolution. Here, we explore the effect of LIG climate on the global glacier budget. We employ the Open Global Glacier Model (OGGM), forced by simulated LIG equilibrium climate anomalies (127 ka) from the Community Climate System Model Version 3 (CCSM3). OGGM is a glacier mass balance and dynamics model specifically designed to reconstruct global glacier volume change. Our simulations have been conducted in an equilibrium state to determine the effect of the prolonged climate forcing of the LIG. Due to unknown flow characteristics of glaciers during the LIG, we explore the parametric uncertainty in the mass balance and flow sensitivity parameters. As a point of comparison, we also conduct a series of simulations using forcing anomalies from the CCSM3 mid-Holocene (6 ka) experiment. Results from both experiments show that glacier mass balance is highly sensitive to these sensitivity parameters, pointing to the need for glacier margin calibration in OGGM for paleoclimate applications.
NASA Astrophysics Data System (ADS)
Hubert, Olivier; Lazreg, Said
2017-02-01
A growing interest of the automotive industry in the use of high-performance steels is observed. These materials are obtained through complex manufacturing processes whose parameter fluctuations lead to strong variations of microstructure and mechanical properties. On-line magnetic non-destructive monitoring is a relevant response to this problem, but it requires fast models sensitive to the different parameters of the forming process. Plastic deformation is one of these important parameters. Indeed, ferromagnetic materials are known to be sensitive to stress application and especially to plastic strains. In this paper, a macroscopic approach using kinematic hardening is proposed to model this behavior, considering a plastically strained material as a two-phase system. The relationship between kinematic hardening and residual stress is defined in this framework. Since the stress fields are multiaxial, a uniaxial equivalent stress is calculated and introduced into the so-called magneto-mechanical multidomain modeling to represent the effect of plastic strain. The modeling approach is complemented by many experiments involving magnetic and magnetostrictive measurements. They are carried out with or without applied stress, using a dual-phase steel deformed at different levels. The main interest of this material is that the mechanically hard phase, the soft phase and the kinematic hardening can be clearly identified thanks to simple experiments. It is shown how this model can be extended to single-phase materials.
NASA Astrophysics Data System (ADS)
Li, Shuai; Wang, Yiping; Wang, Tao; Yang, Xue; Deng, Yadong; Su, Chuqi
2017-05-01
Thermoelectric generators (TEGs) have become a topic of interest for vehicle exhaust energy recovery. Electrical power generation is deeply influenced by the temperature difference, temperature uniformity and topological structure of TEGs. When dimpled surfaces are adopted in heat exchangers, the heat transfer rates can be augmented with a minimal pressure drop. However, the temperature distribution shows a large gradient along the flow direction, which has adverse effects on power generation. In the current study, the heat exchanger performance was studied in a computational fluid dynamics (CFD) model. The dimple depth, dimple print diameter, and channel height were chosen as design variables. The objective function was defined as a combination of average temperature, temperature uniformity and pressure loss. The optimal Latin hypercube method was used, as a design-of-experiments method, to determine the experimental points in order to analyze the sensitivity of the design variables. A Kriging surrogate model was built and verified against the database resulting from the CFD simulations. A multi-island genetic algorithm was used to optimize the channel structure in the heat exchanger based on the surrogate model. The results showed that the average temperature of the heat exchanger was most sensitive to the dimple depth. The pressure loss and temperature uniformity were most sensitive to the channel rear height, h2. With an optimal design of the channel structure, the temperature uniformity can be greatly improved compared with the initial exchanger, while the additional pressure loss also increased.
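The surrogate-based workflow described (optimal Latin hypercube sampling, a Kriging surrogate fitted to CFD results, and a genetic-algorithm search on the surrogate) can be sketched with standard scientific-Python tools. The objective function below is a synthetic stand-in for the CFD model, scipy's differential evolution stands in for the multi-island genetic algorithm, and scikit-learn's Gaussian process stands in for the Kriging implementation used in the paper; the variable bounds are invented.

```python
import numpy as np
from scipy.stats import qmc
from scipy.optimize import differential_evolution
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Design variables: dimple depth, dimple print diameter, channel height (mm, hypothetical bounds).
bounds = np.array([[0.5, 3.0], [5.0, 15.0], [5.0, 20.0]])

def expensive_cfd_stand_in(x):
    """Placeholder for the CFD-derived combined objective (temperature level,
    uniformity and pressure loss).  Purely synthetic."""
    depth, diam, height = x
    return (depth - 1.8) ** 2 + 0.02 * (diam - 9.0) ** 2 + 0.05 * (height - 8.0) ** 2

# 1. Latin hypercube design of experiments.
sampler = qmc.LatinHypercube(d=3, seed=1)
X = qmc.scale(sampler.random(n=40), bounds[:, 0], bounds[:, 1])
y = np.array([expensive_cfd_stand_in(x) for x in X])

# 2. Kriging (Gaussian process) surrogate of the objective.
surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=[1.0, 3.0, 3.0]),
                                     normalize_y=True).fit(X, y)

# 3. Global search on the cheap surrogate (evolutionary optimizer as a GA stand-in).
result = differential_evolution(lambda x: surrogate.predict(x.reshape(1, -1))[0],
                                bounds=list(map(tuple, bounds)), seed=2)
print("surrogate optimum (depth, diameter, height):", np.round(result.x, 2))
```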
Erev, Ido; Ert, Eyal; Plonsky, Ori; Cohen, Doron; Cohen, Oded
2017-07-01
Experimental studies of choice behavior document distinct, and sometimes contradictory, deviations from maximization. For example, people tend to overweight rare events in 1-shot decisions under risk, and to exhibit the opposite bias when they rely on past experience. The common explanations of these results assume that the contradicting anomalies reflect situation-specific processes that involve the weighting of subjective values and the use of simple heuristics. The current article analyzes 14 choice anomalies that have been described by different models, including the Allais, St. Petersburg, and Ellsberg paradoxes, and the reflection effect. Next, it uses a choice prediction competition methodology to clarify the interaction between the different anomalies. It focuses on decisions under risk (known payoff distributions) and under ambiguity (unknown probabilities), with and without feedback concerning the outcomes of past choices. The results demonstrate that it is not necessary to assume situation-specific processes. The distinct anomalies can be captured by assuming high sensitivity to the expected return and 4 additional tendencies: pessimism, bias toward equal weighting, sensitivity to payoff sign, and an effort to minimize the probability of immediate regret. Importantly, feedback increases sensitivity to probability of regret. Simple abstractions of these assumptions, variants of the model Best Estimate and Sampling Tools (BEAST), allow surprisingly accurate ex ante predictions of behavior. Unlike the popular models, BEAST does not assume subjective weighting functions or cognitive shortcuts. Rather, it assumes the use of sampling tools and reliance on small samples, in addition to the estimation of the expected values. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
A comparative study of the constitutive models for silicon carbide
NASA Astrophysics Data System (ADS)
Ding, Jow-Lian; Dwivedi, Sunil; Gupta, Yogendra
2001-06-01
Most of the constitutive models for polycrystalline silicon carbide were developed and evaluated using data from either normal plate impact or Hopkinson bar experiments. At ISP, extensive efforts have been made to gain detailed insight into the shocked state of the silicon carbide (SiC) using innovative experimental methods, viz., lateral stress measurements, in-material unloading measurements, and combined compression shear experiments. The data obtained from these experiments provide some unique information for both developing and evaluating material models. In this study, these data for SiC were first used to evaluate some of the existing models to identify their strength and possible deficiencies. Motivated by both the results of this comparative study and the experimental observations, an improved phenomenological model was developed. The model incorporates pressure dependence of strength, rate sensitivity, damage evolution under both tension and compression, pressure confinement effect on damage evolution, stiffness degradation due to damage, and pressure dependence of stiffness. The model developments are able to capture most of the material features observed experimentally, but more work is needed to better match the experimental data quantitatively.
Drought resilience across ecologically dominant species: An experiment-model integration approach
NASA Astrophysics Data System (ADS)
Felton, A. J.; Warren, J.; Ricciuto, D. M.; Smith, M. D.
2017-12-01
Poorly understood are the mechanisms contributing to variability in ecosystem recovery following drought. Grasslands of the central U.S. are ecologically and economically important ecosystems, yet are also highly sensitive to drought. Although characteristics of these ecosystems change across gradients of temperature and precipitation, a consistent feature among these systems is the presence of highly abundant, dominant grass species that control biomass production. As a result, the incorporation of these species' traits into terrestrial biosphere models may constrain predictions amid increases in climatic variability. Here we report the results of a modeling-experiment (MODEX) research approach. We investigated the physiological, morphological and growth responses of the dominant grass species from each of the four major grasslands of the central U.S. (ranging from tallgrass prairie to desert grassland) following severe drought. Despite significant differences in baseline values, full recovery in leaf physiological function was evident across species and was consistently driven by the production of new leaves. Further, recovery in whole-plant carbon uptake tended to be driven by shifts in allocation from belowground to aboveground structures. However, there was clear variability among species in the magnitude of this dynamic as well as in the relative allocation to stem versus leaf production. As a result, all species harbored the physiological capacity to recover from drought, yet we posit that variability in the recovery of whole-plant carbon uptake is more strongly driven by variability in the sensitivity of species' morphology to soil moisture increases. The next step of this project will be to incorporate these and other existing data on these species and ecosystems into the Community Land Model in an effort to test the sensitivity of this model to these data.
Modeling of copper sorption onto GFH and design of full-scale GFH adsorbers.
Steiner, Michele; Pronk, Wouter; Boller, Markus A
2006-03-01
During rain events, copper wash-off from copper roofs results in environmental hazards. In this study, columns filled with granulated ferric hydroxide (GFH) were used to treat copper-containing roof runoff. It was shown that copper could be removed to a high extent. A model was developed to describe this removal process. The model was based on the Two Region Model (TRM), extended with an additional diffusion zone. The extended model was able to describe the copper removal in long-term experiments (up to 125 days) with variable flow rates reflecting realistic runoff events. The four parameters of the model were estimated from data obtained in specific column experiments designed for maximum sensitivity to each parameter. After model validation, the parameter set was used for the design of full-scale adsorbers. These full-scale adsorbers show high removal rates over extended periods of time.
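For orientation, the standard two-region (mobile–immobile) transport model that the authors extend is commonly written as below; sorption/retardation terms and the paper's additional diffusion zone are omitted, and the notation is the conventional one rather than the authors'.

\[
\theta_m \frac{\partial C_m}{\partial t} + \theta_{im} \frac{\partial C_{im}}{\partial t}
  = \theta_m D \frac{\partial^2 C_m}{\partial x^2} - q \frac{\partial C_m}{\partial x},
\qquad
\theta_{im} \frac{\partial C_{im}}{\partial t} = \alpha \left( C_m - C_{im} \right),
\]

where \(C_m\) and \(C_{im}\) are the solute (here copper) concentrations in the mobile and immobile water regions, \(\theta_m\) and \(\theta_{im}\) the corresponding water contents, \(D\) the hydrodynamic dispersion coefficient, \(q\) the Darcy flux, and \(\alpha\) a first-order mass-transfer coefficient between the two regions.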
Modelling crop yield, soil organic C and P under variable long-term fertilizer management in China
NASA Astrophysics Data System (ADS)
Zhang, Jie; Xu, Guang; Xu, Minggang; Balkovič, Juraj; Azevedo, Ligia B.; Skalský, Rastislav; Wang, Jinzhou; Yu, Chaoqing
2016-04-01
Phosphorus (P) is a major limiting nutrient for plant growth. P, as a nonrenewable resource and the controlling factor of aquatic eutrophication, is critical for food security and humanity's future, and concerns sustainable resource use and environmental impacts. It is thus essential to find an integrated and effective approach to optimize phosphorus fertilizer application in the agro-ecosystem while maintaining crop yield and minimizing environmental risk. Crop P models have been used to simulate plant-soil interactions but are rarely validated against scattered long-term fertilizer control field experiments. We employed a process-based model, the Environmental Policy Integrated Climate model (EPIC), to simulate grain yield, soil organic carbon (SOC) and soil available P based upon 8 field experiments in China with an 11-year dataset, representing typical Chinese soil types and agro-ecosystems of different regions. Four treatments were measured and modelled: N, P, and K fertilizer (NPK); no fertilizer (CK); N and K fertilizer (NK); and N, P, K and manure (NPKM). A series of sensitivity tests was conducted to analyze the sensitivity of grain yields and soil available P to sequential fertilizer rates in typical humid, normal and drought years. Our results indicated that the EPIC model showed significant agreement for simulated grain yields, with R2=0.72, index of agreement (d)=0.87, modeling efficiency (EF)=0.68, p<0.01, and for SOC, with R2=0.70, d=0.86, EF=0.59, p<0.01. EPIC simulated soil available P moderately well and captured the temporal changes in soil P reservoirs. Both crop yields and soil available P were found to be more sensitive to fertilizer P rates in humid years than in drought years, and soil available P is closely linked to concentrated rainfall. This study concludes that the EPIC model has great potential to simulate the P cycle in croplands in China and can be used to explore optimum management practices.
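The agreement statistics quoted (R², Willmott's index of agreement d, and the Nash–Sutcliffe modeling efficiency EF) have standard definitions; the sketch below computes them for arbitrary paired observations and simulations and is independent of the EPIC runs themselves. The toy yield values are illustrative only.

```python
import numpy as np

def agreement_stats(obs, sim):
    """R^2 (squared Pearson correlation), Willmott's index of agreement d,
    and Nash-Sutcliffe modeling efficiency EF for paired obs/sim arrays."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    obar = obs.mean()
    d = 1.0 - np.sum((sim - obs) ** 2) / np.sum((np.abs(sim - obar) + np.abs(obs - obar)) ** 2)
    ef = 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obar) ** 2)
    return r ** 2, d, ef

# Hypothetical grain yields (t/ha), observed vs. simulated.
obs = [4.1, 5.3, 6.0, 3.8, 5.9, 4.7]
sim = [4.4, 5.0, 6.3, 3.5, 5.6, 5.0]
r2, d, ef = agreement_stats(obs, sim)
print(f"R2 = {r2:.2f}, d = {d:.2f}, EF = {ef:.2f}")
```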
Continuous model for the rock-scissors-paper game between bacteriocin producing bacteria.
Neumann, Gunter; Schuster, Stefan
2007-06-01
In this work, important aspects of bacteriocin-producing bacteria and their interplay are elucidated. Various attempts to model the resistant, producer and sensitive Escherichia coli strains in the so-called rock-scissors-paper (RSP) game have been made in the literature. The question arose whether there is a continuous model with a cyclic structure that admits an oscillatory dynamics as observed in various experiments. The May-Leonard system admits a Hopf bifurcation, which is, however, degenerate and hence inadequate. The traditional differential equation model of the RSP game cannot be applied to the bacteriocin system either, because it involves positive interaction terms. In this paper, a plausible competitive Lotka-Volterra system model of the RSP game is presented and the dynamics generated by that model is analyzed. For the first time, a continuous, spatially homogeneous model is established that describes the competitive interaction between bacteriocin-producing, resistant and sensitive bacteria; the interaction terms have negative coefficients. In some experiments, for example in mice cultures, migration seemed to be essential for the reinfection in the RSP cycle, and statistical and spatial effects such as migration and mutation are often regarded as essential for periodicity. Our model gives rise to oscillatory dynamics in the RSP game without such effects. Here, a normal-form description of the limit cycle and conditions for its stability are derived. The toxicity of the bacteriocin is used as a bifurcation parameter. Exact parameter ranges are obtained for which a stable (robust) limit cycle and a stable heteroclinic cycle exist in the three-species game. These parameters are in good accordance with the observed relations for the E. coli strains. The roles of growth rate and growth yield of the three strains are discussed. Numerical calculations show that the sensitive strain, which might be regarded as the weakest, can have the longest sojourn times.
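A minimal competitive Lotka–Volterra system with the cyclic (rock–scissors–paper) dominance structure described, and with all interaction terms entering negatively, can be integrated as below. The growth rates and competition coefficients are illustrative and are not the calibrated E. coli parameters of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Competitive Lotka-Volterra model of producer (P), resistant (R), sensitive (S)
# strains with a cyclic dominance structure; all interactions are competitive
# (negative), as emphasized in the abstract.  Coefficient values are invented.
r = np.array([1.0, 0.9, 1.1])            # intrinsic growth rates
A = np.array([[1.0, 0.7, 1.5],           # competition matrix alpha_ij (all positive,
              [1.5, 1.0, 0.7],           # entering the equations with a minus sign)
              [0.7, 1.5, 1.0]])

def rsp(t, x):
    # dx_i/dt = r_i * x_i * (1 - sum_j alpha_ij * x_j)
    return r * x * (1.0 - A @ x)

sol = solve_ivp(rsp, (0.0, 200.0), [0.4, 0.3, 0.3], rtol=1e-8)
print("final densities (P, R, S):", np.round(sol.y[:, -1], 3))
```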
ERIC Educational Resources Information Center
Levine, Kenneth J.; Garland, Michelle E.
2015-01-01
This paper examines how the study-abroad experience enhances intercultural communication competence. This study used Bennett's (1986, 1993) model of ethnorelative typology of acceptance, adaptation, and integration to explore intercultural communication competency. Central to intercultural communication competency is intercultural sensitivity and…
Ostafin, Brian D; Marlatt, G Alan; Troop-Gordon, Wendy
2010-03-01
Motivational models of addiction typically propose that alcohol and drugs are desired because of their hedonic effects (i.e., increasing pleasure or reducing distress). In contrast, the incentive-sensitization theory proposes that wanting motivation and liking motivation are separable and that after repeated substance use, motivation shifts from liking to wanting. Using a sample of 85 at-risk drinkers (as defined by the National Institute on Alcohol Abuse and Alcoholism), in the current study we examined the separability of liking motivation and wanting motivation for alcohol and whether years of drinking experience was associated with an increased role for wanting motivation and a decreased role for liking motivation. Consumption was measured with a free-drinking task. Wanting motivation was assessed immediately before drinking, and liking was assessed immediately after drinking had begun. The results indicated that (a) wanting motivation predicted variance of consumption unique from that accounted for by liking motivation, (b) longer drinking experience was associated with a decreased relation between liking motivation and consumption, and (c) longer drinking experience was not associated with an increased relation between wanting motivation and consumption. The results provide partial support for the incentive-sensitization theory.
Attachment Insecurity Predicts Punishment Sensitivity in Anorexia Nervosa.
Keating, Charlotte; Castle, David J; Newton, Richard; Huang, Chia; Rossell, Susan L
2016-10-01
Individuals with anorexia nervosa (AN) experience insecure attachment. We investigated whether insecure attachment is associated with punishment and reward sensitivity in women with AN. Women with AN (n = 24) and comparison women (CW; n = 26) completed The Eating Disorder Examination Questionnaire, Depression Anxiety Stress Scale, The Attachment Style Questionnaire, and Sensitivity to Punishment/Sensitivity to Reward Questionnaire. Participants with AN returned higher ratings for insecure attachment (anxious and avoidant) experiences and greater sensitivity to punishment (p = 0.001) than CW. In AN, sensitivity to punishment was positively correlated with anxious attachment and negative emotionality but not eating disorder symptoms. Regression analysis revealed that anxious attachment independently predicted punishment sensitivity in AN. Anxious attachment experiences are related to punishment sensitivity in AN, independent of negative emotionality and eating disorder symptoms. Results support ongoing investigation of the contribution of attachment experiences in treatment and recovery.
NASA Astrophysics Data System (ADS)
Mera, Roberto J.; Niyogi, Dev; Buol, Gregory S.; Wilkerson, Gail G.; Semazzi, Fredrick H. M.
2006-11-01
Land use/land cover change (LCLUC) induced effects on regional weather and climate patterns and the associated plant response or agricultural productivity are coupled processes. Some of the basic responses to climate change can be detected via changes in radiation (R), precipitation (P), and temperature (T). Past studies indicate that each of these three variables can affect the LCLUC response and agricultural productivity. This study seeks to address the following question: what is the effect of individual versus simultaneous changes in R, P, and T on plant response, such as crop yields, in a C3 and a C4 plant? This question is addressed by conducting model experiments for soybean (C3) and maize (C4) crops using the DSSAT (Decision Support System for Agrotechnology Transfer) CROPGRO (soybean) and CERES-Maize (maize) models. These models were configured over an agricultural experiment station in Clayton, NC [35.65°N, 78.5°W]. Observed weather and field conditions corresponding to 1998 were used as the control. In the first set of experiments, the CROPGRO (soybean) and CERES-Maize (maize) responses to individual changes in R and P (25%, 50%, 75%, 150%) and T (±1, ±2 °C) with respect to the control were studied. In the second set, R, P, and T were simultaneously changed by 50%, 150%, and ±2 °C, and the interactions and direct effects of individual versus simultaneous variable changes were analyzed. For the model setting and the prescribed environmental changes, results from the first set of experiments indicate: (i) precipitation changes were most sensitive and directly affected yield and water loss due to evapotranspiration; (ii) radiation changes had a non-linear effect and were not as prominent as precipitation changes; (iii) temperature had a limited impact and the response was non-linear; (iv) soybean and maize responded differently to R, P, and T, with maize being more sensitive. The results from the second set of experiments indicate that simultaneous change analyses do not necessarily agree with those from individual changes, particularly for temperature changes. Our analysis indicates that for the changing climate, precipitation (hydrological), temperature, and radiative feedbacks show a non-linear effect on yield. The results also indicate that for studying the feedback between the land surface and atmospheric changes, (i) there is a need for performing simultaneous parameter changes in the response assessment of cropping patterns and crop yield based on ensembles of projected climate change, and (ii) C3 crops are generally considered more sensitive than C4; however, the temperature-radiation related changes shown in this study also effected significant changes in C4 crops. Future studies assessing LCLUC impacts, including those from agricultural cropping patterns and other LCLUC-climate couplings, should advance beyond the sensitivity mode and consider multivariable, ensemble approaches to identify the vulnerability and feedbacks in estimating climate-related impacts.
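The individual-versus-simultaneous perturbation design described can be enumerated programmatically. The sketch below only builds the scenario list (a weather-scaling table) and does not call the DSSAT models, whose inputs and APIs are not reproduced here.

```python
from itertools import product

# Individual perturbations: one variable changed at a time relative to the 1998 control.
radiation_factors = [0.25, 0.50, 0.75, 1.50]   # fractions of control R
precip_factors    = [0.25, 0.50, 0.75, 1.50]   # fractions of control P
temp_offsets_degC = [-2, -1, +1, +2]           # additive changes to control T

individual = ([("R", f, "P", 1.0, "dT", 0) for f in radiation_factors]
              + [("R", 1.0, "P", f, "dT", 0) for f in precip_factors]
              + [("R", 1.0, "P", 1.0, "dT", dt) for dt in temp_offsets_degC])

# Simultaneous perturbations: all combinations of R in {50%, 150%}, P in {50%, 150%}, T +/- 2 C.
simultaneous = list(product([0.5, 1.5], [0.5, 1.5], [-2, +2]))

print(f"{len(individual)} single-variable runs, {len(simultaneous)} combined runs")
```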
NASA Astrophysics Data System (ADS)
Marseille, Gert-Jan; Stoffelen, Ad; Barkmeijer, Jan
2008-03-01
Lacking an established methodology to test the potential impact of prospective extensions to the global observing system (GOS) in real atmospheric cases, we developed such a method, called the Sensitivity Observing System Experiment (SOSE). For example, since the GOS is non-uniform, it is of interest to investigate the benefit of complementary observing systems filling its gaps. In a SOSE, adjoint sensitivity structures are used to define a pseudo-true atmospheric state for the simulation of the prospective observing system. Next, the synthetic observations are used together with real observations from the existing GOS in a state-of-the-art Numerical Weather Prediction (NWP) model to assess the potential added value of the new observing system. Unlike full observing system simulation experiments (OSSEs), a SOSE can be applied to real extreme events that were badly forecast operationally and only requires the simulation of the new instrument. As such, SOSE is an effective tool, for example, to define observation requirements for extensions to the GOS. These observation requirements may serve as input for the design of an operational network of prospective observing systems. In a companion paper we use SOSE to simulate potential future spaceborne Doppler Wind Lidar (DWL) scenarios and assess their capability to sample meteorologically sensitive areas not well captured by the current GOS, in particular over the Northern Hemisphere oceans.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steinbrink, Nicholas M.N.; Weinheimer, Christian; Glück, Ferenc
The KATRIN experiment aims to determine the absolute neutrino mass by measuring the endpoint region of the tritium β-spectrum. As a large-scale experiment with a sharp energy resolution, high source luminosity and low background it may also be capable of testing certain theories of neutrino interactions beyond the standard model (SM). An example of a non-SM interaction are right-handed currents mediated by right-handed W bosons in the left-right symmetric model (LRSM). In this extension of the SM, an additional SU(2)R symmetry in the high-energy limit is introduced, which naturally includes sterile neutrinos and predicts the seesaw mechanism. In tritium β decay, this leads to an additional term from interference between left- and right-handed interactions, which enhances or suppresses certain regions near the endpoint of the beta spectrum. In this work, the sensitivity of KATRIN to right-handed currents is estimated for the scenario of a light sterile neutrino with a mass of some eV. This analysis has been performed with a Bayesian analysis using Markov Chain Monte Carlo (MCMC). The simulations show that, in principle, KATRIN will be able to set sterile neutrino mass-dependent limits on the interference strength. The sensitivity is significantly increased if the Q value of the β decay can be sufficiently constrained. However, the sensitivity is not high enough to improve current upper limits from right-handed W boson searches at the LHC.
NASA Astrophysics Data System (ADS)
Chen, Weiting; Yi, Xi; Zhao, Huijuan; Gao, Feng
2014-09-01
We present a novel dual-wavelength diffuse optical imaging system which can perform 2-D or 3-D imaging quickly and with high sensitivity for monitoring the dynamic change of optical parameters. A newly proposed lock-in photon-counting detection method was adopted for weak optical signal collection, which offered excellent performance as well as a simplified geometry. The fundamental principles of lock-in photon-counting detection are demonstrated in detail, and its feasibility was verified by a linearity experiment. The systemic performance of the prototype setup was experimentally assessed, including stray light rejection and inherent interference. Results showed that the system possessed superior anti-interference capability (under 0.58% in a darkroom) compared with traditional photon-counting detection, and the crosstalk between the two wavelengths was lower than 2.28%. For a comprehensive assessment, 2-D phantom experiments on a relatively large model (4 cm diameter) were conducted. Different absorption targets were imaged to investigate detection sensitivity. The reconstructed images under all conditions were of good quality, with a desirable SNR. A study of image quality vs. integration time put forward a way to achieve higher SNR at the cost of measuring speed. In summary, the newly developed system showed great potential in improving detection sensitivity as well as measuring speed. This should enable substantial progress in dynamically tracking the blood concentration distribution in many clinical areas, such as small animal disease modeling, human brain activity research and thick tissue (for example, breast) diagnosis.
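Lock-in detection of a photon-counting signal amounts to demodulating the binned count stream at the source-modulation frequency. The sketch below simulates Poisson counts with a weak modulated component buried in a constant background and recovers its amplitude; all rates, frequencies and bin widths are invented for illustration and do not describe the authors' hardware.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated photon-count time series: weak modulated signal + strong steady background.
f_mod = 1_000.0                      # source modulation frequency, Hz (assumed)
bin_width = 1e-4                     # counting bin width, s
t = np.arange(0.0, 10.0, bin_width)  # 10 s of data
signal_rate = 200.0 * (1 + np.sin(2 * np.pi * f_mod * t))   # counts/s, modulated signal
background_rate = 5_000.0                                    # counts/s, unmodulated stray light
counts = rng.poisson((signal_rate + background_rate) * bin_width)

# Digital lock-in: correlate the counts with quadrature references at f_mod.
ref_i = np.sin(2 * np.pi * f_mod * t)
ref_q = np.cos(2 * np.pi * f_mod * t)
I = np.mean(counts * ref_i)
Q = np.mean(counts * ref_q)
amplitude_counts_per_bin = 2.0 * np.hypot(I, Q)   # recovered modulation amplitude per bin
print(f"recovered modulated rate: {amplitude_counts_per_bin / bin_width:.0f} counts/s (true: 200)")
```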
NASA Astrophysics Data System (ADS)
Steinbrink, Nicholas M. N.; Glück, Ferenc; Heizmann, Florian; Kleesiek, Marco; Valerius, Kathrin; Weinheimer, Christian; Hannestad, Steen
2017-06-01
The KATRIN experiment aims to determine the absolute neutrino mass by measuring the endpoint region of the tritium β-spectrum. As a large-scale experiment with a sharp energy resolution, high source luminosity and low background it may also be capable of testing certain theories of neutrino interactions beyond the standard model (SM). An example of a non-SM interaction is right-handed currents mediated by right-handed W bosons in the left-right symmetric model (LRSM). In this extension of the SM, an additional SU(2)R symmetry in the high-energy limit is introduced, which naturally includes sterile neutrinos and predicts the seesaw mechanism. In tritium β decay, this leads to an additional term from interference between left- and right-handed interactions, which enhances or suppresses certain regions near the endpoint of the beta spectrum. In this work, the sensitivity of KATRIN to right-handed currents is estimated for the scenario of a light sterile neutrino with a mass of some eV. The analysis has been performed in a Bayesian framework using Markov Chain Monte Carlo (MCMC). The simulations show that, in principle, KATRIN will be able to set sterile neutrino mass-dependent limits on the interference strength. The sensitivity is significantly increased if the Q value of the β decay can be sufficiently constrained. However, the sensitivity is not high enough to improve current upper limits from right-handed W boson searches at the LHC.
NASA Astrophysics Data System (ADS)
Kouznetsova, I.; Gerhard, J. I.; Mao, X.; Barry, D. A.; Robinson, C.; Brovelli, A.; Harkness, M.; Fisher, A.; Mack, E. E.; Payne, J. A.; Dworatzek, S.; Roberts, J.
2008-12-01
A detailed model to simulate trichloroethene (TCE) dechlorination in anaerobic groundwater systems has been developed and implemented through PHAST, a robust and flexible geochemical modeling platform. The approach is comprehensive but retains flexibility such that models of varying complexity can be used to simulate TCE biodegradation in the vicinity of nonaqueous phase liquid (NAPL) source zones. The complete model considers a full suite of biological (e.g., dechlorination, fermentation, sulfate and iron reduction, electron donor competition, toxic inhibition, pH inhibition), physical (e.g., flow and mass transfer) and geochemical processes (e.g., pH modulation, gas formation, mineral interactions). Example simulations with the model demonstrated that the feedback between biological, physical, and geochemical processes is critical. Successful simulation of a thirty-two-month column experiment with site soil, complex groundwater chemistry, and exhibiting both anaerobic dechlorination and endogenous respiration, provided confidence in the modeling approach. A comprehensive suite of batch simulations was then conducted to estimate the sensitivity of predicted TCE degradation to the 36 model input parameters. A local sensitivity analysis was first employed to rank the importance of parameters, revealing that 5 parameters consistently dominated model predictions across a range of performance metrics. A global sensitivity analysis was then performed to evaluate the influence of a variety of full parameter data sets available in the literature. The modeling study was performed as part of the SABRE (Source Area BioREmediation) project, a public/private consortium whose charter is to determine if enhanced anaerobic bioremediation can result in effective and quantifiable treatment of chlorinated solvent DNAPL source areas. The modelling conducted has provided valuable insight into the complex interactions between processes in the evolving biogeochemical systems, particularly at the laboratory scale.
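A minimal sketch of how a one-at-a-time local sensitivity ranking over many parameters might be organized; the parameter names and the toy metric are hypothetical and do not come from the PHAST model described above:

```python
import numpy as np

def local_sensitivity_ranking(model, params, rel_step=0.01):
    """Rank parameters by the normalized local sensitivity |(p/y) * dy/dp|,
    estimated with one-at-a-time finite differences around a base point.
    `model` maps a parameter dict to a scalar performance metric
    (e.g. fraction of TCE degraded); purely illustrative here."""
    base = model(params)
    scores = {}
    for name, value in params.items():
        perturbed = dict(params)
        perturbed[name] = value * (1.0 + rel_step)
        dy = model(perturbed) - base
        dp = value * rel_step
        scores[name] = abs((value / base) * (dy / dp)) if base != 0 else abs(dy / dp)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy stand-in for the biogeochemical model: the metric depends strongly on k_dechlor.
toy = lambda p: p["k_dechlor"] * 2.0 + p["K_donor"] * 0.1 + p["pH_opt"] * 0.01
print(local_sensitivity_ranking(toy, {"k_dechlor": 0.5, "K_donor": 1.0, "pH_opt": 7.0}))
```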
Artusa, D. R.; Azzolini, O.; Balata, M.; ...
2014-10-15
Neutrinoless double beta decay (0νββ) is one of the most sensitive probes for physics beyond the Standard Model, providing unique information on the nature of neutrinos. In this paper we review the status and outlook for bolometric 0νββ decay searches. We summarize recent advances in background suppression demonstrated using bolometers with simultaneous readout of heat and light signals. We simulate several configurations of a future CUORE-like bolometer array which would utilize these improvements and present the sensitivity reach of a hypothetical next-generation bolometric 0νββ experiment. We demonstrate that a bolometric experiment with the isotope mass of about 1 ton is capable of reaching the sensitivity to the effective Majorana neutrino mass (|m_ee|) of order 10-20 meV, thus completely exploring the so-called inverted neutrino mass hierarchy region. In conclusion, we highlight the main challenges and identify priorities for an R&D program addressing them.
Design of a Model Execution Framework: Repetitive Object-Oriented Simulation Environment (ROSE)
NASA Technical Reports Server (NTRS)
Gray, Justin S.; Briggs, Jeffery L.
2008-01-01
The ROSE framework was designed to facilitate complex system analyses. It completely divorces the model execution process from the model itself. By doing so ROSE frees the modeler to develop a library of standard modeling processes such as Design of Experiments, optimizers, parameter studies, and sensitivity studies which can then be applied to any of their available models. The ROSE framework accomplishes this by means of a well defined API and object structure. Both the API and object structure are presented here with enough detail to implement ROSE in any object-oriented language or modeling tool.
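The separation of execution process from model that ROSE promotes can be illustrated with a small object-oriented sketch; the class and method names below are hypothetical and are not the actual ROSE API:

```python
class Model:
    """Minimal model interface (hypothetical, not the ROSE object structure)."""
    def set_inputs(self, **inputs):
        self.inputs = inputs
    def execute(self):
        raise NotImplementedError
    def get_outputs(self):
        return self.outputs

class ParabolaModel(Model):
    """Toy model: y = (x - 2)^2."""
    def execute(self):
        x = self.inputs["x"]
        self.outputs = {"y": (x - 2.0) ** 2}

class ParameterStudy:
    """A reusable execution process that can drive any Model instance."""
    def __init__(self, model, param_name, values):
        self.model, self.param_name, self.values = model, param_name, values
    def run(self):
        results = []
        for v in self.values:
            self.model.set_inputs(**{self.param_name: v})
            self.model.execute()
            results.append((v, self.model.get_outputs()))
        return results

# The same ParameterStudy process could be reused with any other Model subclass.
print(ParameterStudy(ParabolaModel(), "x", [0.0, 1.0, 2.0, 3.0]).run())
```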
[Comparison of red edge parameters of winter wheat canopy under late frost stress].
Wu, Yong-feng; Hu, Xin; Lü, Guo-hua; Ren, De-chao; Jiang, Wei-guo; Song, Ji-qing
2014-08-01
In the present study, late frost experiments were implemented under a range of subfreezing temperatures (-1 to -9 degrees C) using a field movable climate chamber (FMCC) and a cold climate chamber, respectively. Based on the spectra of the winter wheat canopy measured at noon on the first day after the frost experiments, the red edge parameters REP, Dr, SDr, Dr(min), Dr/Dr(min) and Dr/SDr were extracted using the maximum first derivative spectrum method (FD), the linear four-point interpolation method (FPI), the polynomial fitting method (POLY), the inverted Gaussian fitting method (IG) and the linear extrapolation technique (LE), respectively. The capacity of the red edge parameters to detect late frost stress was evaluated in terms of earliness, sensitivity and stability through correlation analysis, linear regression modeling and fluctuation analysis. The results indicate that, except for REP calculated by the FPI and IG methods in Experiment 1, REP from all other methods was correlated with frost temperatures (P < 0.05). Among these, the significance levels (P) of the POLY and LE methods all reached 0.01. Except for the POLY method in Experiment 2, Dr/SDr from all other methods was significantly correlated with frost temperatures (P < 0.01). REP showed a trend to shift toward shorter wavelengths with decreasing temperature; the lower the temperature, the more obvious the trend. Of all the methods, REP calculated by the LE method had the highest correlation with frost temperatures, indicating that LE is the best method for REP extraction. In Experiments 1 and 2, only Dr(min) and Dr/Dr(min) calculated by the FD method simultaneously met the requirements for earliness (their correlations with frost temperatures reached a significance level of P < 0.01), sensitivity (the absolute value of the slope of the fluctuation coefficient is greater than 2.0) and stability (their correlations with frost temperatures always kept a consistent direction). Dr/SDr calculated from the FD and IG methods always had a low sensitivity in Experiment 2; in Experiment 1, the sensitivity of Dr/SDr from FD was moderate and that from IG was high. REP calculated by the LE method had the lowest sensitivity in the two experiments. Overall, Dr(min) and Dr/Dr(min) calculated by the FD method have the strongest capacity to detect frost temperature, which will be helpful for research on early diagnosis of late frost injury to winter wheat.
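Two of the extraction methods mentioned above lend themselves to a compact illustration. The sketch below assumes a standard formulation of the maximum first derivative (FD) method and of the Guyot-Baret linear four-point interpolation (FPI); it is illustrative only and is not the processing chain used in the study:

```python
import numpy as np

def rep_max_first_derivative(wavelengths, reflectance):
    """Red edge position as the wavelength of the maximum first derivative
    of reflectance in the 680-760 nm window (FD method)."""
    d = np.gradient(reflectance, wavelengths)
    mask = (wavelengths >= 680) & (wavelengths <= 760)
    return wavelengths[mask][np.argmax(d[mask])]

def rep_four_point_interpolation(r670, r700, r740, r780):
    """Red edge position by linear four-point interpolation:
    REP = 700 + 40 * (R_re - R700) / (R740 - R700), with R_re = (R670 + R780) / 2."""
    r_re = 0.5 * (r670 + r780)
    return 700.0 + 40.0 * (r_re - r700) / (r740 - r700)

# Toy spectrum: a sigmoid red edge centred near 720 nm.
wl = np.arange(650, 801, 1.0)
refl = 0.05 + 0.45 / (1.0 + np.exp(-(wl - 720.0) / 10.0))
print(rep_max_first_derivative(wl, refl))
print(rep_four_point_interpolation(refl[wl == 670][0], refl[wl == 700][0],
                                   refl[wl == 740][0], refl[wl == 780][0]))
```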
Testing Photoionization Calculations Using Chandra X-ray Spectra
NASA Technical Reports Server (NTRS)
Kallman, Tim
2008-01-01
A great deal of work has been devoted to the accumulation of accurate quantities describing atomic processes for use in the analysis of astrophysical spectra. But in many situations of interest the interpretation of an observed quantity, such as a line flux, depends on the results of a modeling or spectrum synthesis code. The results of such a code depend in turn on many atomic rates or cross sections, and the sensitivity of the observable quantity to the various rates and cross sections may be non-linear and, if so, cannot easily be derived analytically. In such cases the most practical approach to understanding the sensitivity of observables to atomic cross sections is to perform numerical experiments, by calculating models with various rates perturbed by random (but known) factors. In addition, it is useful to compare the results of such experiments with some sample observations, in order to focus attention on the rates which are of the greatest relevance to real observations. In this paper I will present some attempts to carry out this program, focusing on two sample datasets taken with the Chandra HETG. I will discuss the sensitivity of synthetic spectra to atomic data affecting ionization balance, temperature, and line opacity or emissivity, and discuss the implications for the ultimate goal of inferring astrophysical parameters.
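The perturbation strategy described above can be sketched generically: multiply each rate by a random but recorded factor, recompute the observable, and score each rate by how strongly its factor correlates with the result. The code below is a toy illustration with hypothetical rate names, not the spectrum synthesis code itself:

```python
import numpy as np

def rate_perturbation_experiment(observable, base_rates, n_trials=200, spread=0.3, seed=1):
    """Perturb each rate by a random (but recorded) log-normal factor, recompute
    the observable, and score each rate by the correlation between its
    perturbation factor and the observable across trials."""
    rng = np.random.default_rng(seed)
    names = list(base_rates)
    factors = rng.lognormal(mean=0.0, sigma=spread, size=(n_trials, len(names)))
    outputs = np.array([
        observable({n: base_rates[n] * f for n, f in zip(names, row)})
        for row in factors
    ])
    scores = {n: abs(np.corrcoef(np.log(factors[:, j]), outputs)[0, 1])
              for j, n in enumerate(names)}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy "line flux": sensitive to the photoionization rate, weakly to recombination.
toy_flux = lambda r: r["photoionization"] ** 1.5 / (1.0 + 0.1 * r["recombination"])
print(rate_perturbation_experiment(toy_flux, {"photoionization": 1.0, "recombination": 1.0}))
```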
NASA Astrophysics Data System (ADS)
Zhang, Guohe; Lai, Junhua; Kong, Yanmei; Jiao, Binbin; Yun, Shichang; Ye, Yuxin
2018-05-01
Ultra-low-pressure applications of Pirani gauges need significant improvement in sensitivity and an expansion of the measurable low-pressure limit. However, the performance of Pirani gauges in the high-vacuum regime remains a critical concern, since a high percentage of gaseous thermal conduction is an essential requirement. In this work, the heat transfer mechanism of a micro-Pirani gauge packaged in a non-hermetic chamber was investigated and compared with that of the gauge before wafer-level packaging. The cavity effect, which is extremely important for the efficient detection of low pressure, was analyzed numerically and experimentally, considering the influence of pressure, temperature, and the effective heat transfer area in the micro-Pirani gauge chamber. The thermal conduction model is validated with experimental data from MEMS Pirani gauges with and without capping. It is found that natural gaseous convection in the chamber, governed by the Rayleigh number, should be taken into consideration. The experimental and model-calculated results show that the thermal resistance increases in the molecular regime, and increases further after capping due to the suppression of gaseous convection. Gaseous thermal conduction accounts for an increasing percentage of the total thermal conduction at low pressure, while it changes little at high pressure after capping; this cavity effect improves the sensitivity of the Pirani gauge in the high-vacuum regime.
NASA Technical Reports Server (NTRS)
McCaul, Eugene W., Jr.; Case, Jonathan L.; Zavodsky, Bradley T.; Srikishen, Jayanthi; Medlin, Jeffrey M.; Wood, Lance
2014-01-01
Inspection of output from various configurations of high-resolution, explicit convection forecast models such as the Weather Research and Forecasting (WRF) model indicates significant sensitivity to the choices of model physics parameterizations employed. Some of the largest apparent sensitivities are related to the specifications of the cloud microphysics and planetary boundary layer physics packages. In addition, these sensitivities appear to be especially pronounced for the weakly-sheared, multicell modes of deep convection characteristic of the Deep South of the United States during the boreal summer. Possible ocean-land sensitivities also argue for further examination of the impacts of using unique ocean-land surface initialization datasets provided by the NASA Short-term Prediction Research and Transition (SPoRT) Center to select NOAA/NWS weather forecast offices. To obtain better quantitative understanding of these sensitivities and also to determine the utility of the ocean-land initialization data, we have executed matrices of regional WRF forecasts for selected convective events near Mobile, AL (MOB), and Houston, TX (HGX). The matrices consist of identically initialized WRF 24-h forecasts using any of eight microphysics choices and any of three planetary boundary layer choices. The resulting 24 simulations performed for each event within either the MOB or HGX regions are then compared to identify the sensitivities of various convective storm metrics to the physics choices. Particular emphasis is placed on sensitivities of precipitation timing, intensity, and coverage, as well as amount and coverage of lightning activity diagnosed from storm kinematics and graupel in the mixed phase layer. The results confirm impressions gleaned from study of the behavior of variously configured WRF runs contained in the ensembles produced each spring at the Center for the Analysis and Prediction of Storms, but with the benefit of more straightforward control of the physics package choices. The design of the experiments thus allows for more direct interpretation of the sensitivities to each possible physics combination. The results should assist forecasters in their efforts to anticipate and correct for possible biases in simulated WRF convection patterns, and help the modeling community refine their model parameterizations.
NASA Technical Reports Server (NTRS)
McCaul, E. W., Jr.; Case, J. L.; Zavodsky, B. T.; Srikishen, J.; Medlin, J. M.; Wood, L.
2014-01-01
Inspection of output from various configurations of high-resolution, explicit convection forecast models such as the Weather Research and Forecasting (WRF) model indicates significant sensitivity to the choices of model physics parameterizations employed. Some of the largest apparent sensitivities are related to the specifications of the cloud microphysics and planetary boundary layer physics packages. In addition, these sensitivities appear to be especially pronounced for the weakly-sheared, multicell modes of deep convection characteristic of the Deep South of the United States during the boreal summer. Possible ocean-land sensitivities also argue for further examination of the impacts of using unique ocean-land surface initialization datasets provided by the NASA Short-term Prediction Research and Transition (SPoRT) Center to select NOAA/NWS weather forecast offices. To obtain better quantitative understanding of these sensitivities and also to determine the utility of the ocean-land initialization data, we have executed matrices of regional WRF forecasts for selected convective events near Mobile, AL (MOB), and Houston, TX (HGX). The matrices consist of identically initialized WRF 24-h forecasts using any of eight microphysics choices and any of three planetary boundary layer choices. The resulting 24 simulations performed for each event within either the MOB or HGX regions are then compared to identify the sensitivities of various convective storm metrics to the physics choices. Particular emphasis is placed on sensitivities of precipitation timing, intensity, and coverage, as well as amount and coverage of lightning activity diagnosed from storm kinematics and graupel in the mixed phase layer. The results confirm impressions gleaned from study of the behavior of variously configured WRF runs contained in the ensembles produced each spring at the Center for the Analysis and Prediction of Storms, but with the benefit of more straightforward control of the physics package choices. The design of the experiments thus allows for more direct interpretation of the sensitivities to each possible physics combination. The results should assist forecasters in their efforts to anticipate and correct for possible biases in simulated WRF convection patterns, and help the modeling community refine their model parameterizations.
NASA Astrophysics Data System (ADS)
Wu, C.; Liu, X.; Zhang, K.; Diao, M.; Gettelman, A.
2016-12-01
Cirrus clouds in the upper troposphere play a key role in the Earth's radiation budget, and their radiative forcing depends strongly on the number concentration and size distribution of ice particles. In this study we evaluate the cloud microphysical properties simulated by the Community Atmosphere Model version 5.4 (CAM5) against the Small Particles in Cirrus (SPartICus) observations over the ARM Southern Great Plains (SGP) site between January and June 2010. The model simulation is performed using specified dynamics to keep the prognostic meteorology (U, V, and T) close to the GEOS-5 analysis. Model results collocated with the SPartICus flight tracks in space and time are directly compared with the observations. We compare CAM5-simulated ice crystal number concentration (Ni), ice particle size distribution, ice water content (IWC), and the co-variances of Ni with temperature and vertical velocity against statistics from the SPartICus observations. All analyses are restricted to T ≤ -40°C and to a 6°×6° area centered at SGP. Model sensitivity tests are performed with different ice nucleation mechanisms and with the effects of pre-existing ice crystals, to reflect the uncertainties in cirrus parameterizations. In addition, different threshold sizes for the autoconversion of cloud ice to snow (Dcs) are also tested. We find that (1) distinctly high Ni (100-1000 L-1) often occurred in the observations but is significantly underestimated in the model, which may be due to the smaller relative humidity with respect to ice (RHi) in the simulation that could suppress homogeneous nucleation; (2) a positive correlation exists between Ni and vertical velocity variance (σw) at horizontal scales up to 50 km in the observations, and the model can reproduce this relationship but tends to underestimate Ni when σw is relatively small; and (3) simulated Ni differs greatly among the sensitivity experiments, and simulated IWC is also sensitive to the cirrus parameterizations but to a lesser extent. Moreover, the model produces much better ice particle sizes in terms of number-mean diameter (Dnm) but significantly underestimates Ni and IWC for all the designed sensitivity experiments. Our results suggest that better representation of environmental conditions (e.g., RHi and water vapor) is needed to improve the formation and evolution of ice clouds in the model.
Farris, Samantha G; Uebelacker, Lisa A; Brown, Richard A; Price, Lawrence H; Desaulniers, Julie; Abrantes, Ana M
2017-12-01
Smoking increases risk of early morbidity and mortality, and risk is compounded by physical inactivity. Anxiety sensitivity (fear of anxiety-relevant somatic sensations) is a cognitive factor that may amplify the subjective experience of exertion (effort) during exercise, subsequently resulting in lower engagement in physical activity. We examined the effect of anxiety sensitivity on ratings of perceived exertion (RPE) and physiological arousal (heart rate) during a bout of exercise among low-active treatment-seeking smokers. Adult daily smokers (n = 157; M age = 44.9, SD = 11.13; 69.4% female) completed the Rockport 1.0 mile submaximal treadmill walk test. RPE and heart rate were assessed during the walk test. Multi-level modeling was used to examine the interactive effect of anxiety sensitivity × time on RPE and on heart rate at five time points during the walk test. There were significant linear and cubic time × anxiety sensitivity effects for RPE. High anxiety sensitivity was associated with greater initial increases in RPE during the walk test, with stabilized ratings towards the last 5 min, whereas low anxiety sensitivity was associated with lower initial increase in RPE which stabilized more quickly. The linear time × anxiety sensitivity effect for heart rate was not significant. Anxiety sensitivity is associated with increasing RPE during moderate-intensity exercise. Persistently rising RPE observed for smokers with high anxiety sensitivity may contribute to the negative experience of exercise, resulting in early termination of bouts of prolonged activity and/or decreased likelihood of future engagement in physical activity.
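A multilevel growth model of this general form can be written with statsmodels; the simulated data, variable names, and polynomial time terms below are illustrative assumptions, not the study's dataset or its final model specification:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated long-format data: 5 RPE ratings per subject (illustrative only).
rng = np.random.default_rng(42)
n_subj, n_time = 60, 5
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_time),
    "time": np.tile(np.arange(n_time, dtype=float), n_subj),
    "anx_sens": np.repeat(rng.normal(0, 1, n_subj), n_time),
})
df["rpe"] = (9 + 1.2 * df["time"] + 0.5 * df["anx_sens"] * df["time"]
             + rng.normal(0, 1, len(df)))

# Random-intercept growth model with anxiety sensitivity x time
# (linear, quadratic, cubic) interaction terms.
model = smf.mixedlm("rpe ~ anx_sens * (time + I(time**2) + I(time**3))",
                    df, groups=df["subject"])
print(model.fit().summary())
```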
Structural development and web service based sensitivity analysis of the Biome-BGC MuSo model
NASA Astrophysics Data System (ADS)
Hidy, Dóra; Balogh, János; Churkina, Galina; Haszpra, László; Horváth, Ferenc; Ittzés, Péter; Ittzés, Dóra; Ma, Shaoxiu; Nagy, Zoltán; Pintér, Krisztina; Barcza, Zoltán
2014-05-01
Studying greenhouse gas exchange, in particular the carbon dioxide sink and source character of ecosystems, is still a highly relevant research topic in biogeochemistry. During the past few years research has focused on managed ecosystems, because human intervention has an important role in shaping the land surface through agricultural management, land use change, and other practices. In spite of considerable developments, current biogeochemical models still carry substantial uncertainties in quantifying the greenhouse gas exchange processes of managed ecosystems. Therefore, it is an important task to develop and test process-based biogeochemical models. Biome-BGC is a widely used, popular biogeochemical model that simulates the storage and flux of water, carbon, and nitrogen between the ecosystem and the atmosphere, and within the components of terrestrial ecosystems. Biome-BGC was originally developed by the Numerical Terradynamic Simulation Group (NTSG) of the University of Montana (http://www.ntsg.umt.edu/project/biome-bgc), and several other researchers have used and modified it in the past. Our research group extended Biome-BGC version 4.1.1 to substantially improve the ability of the model to simulate the carbon and water cycle in real managed ecosystems. The modifications included structural improvements of the model (e.g., implementation of a multilayer soil module and drought-related plant senescence; improved model phenology). Besides these improvements, management modules and annually varying options were introduced and implemented (simulating mowing, grazing, planting, harvest, ploughing, application of fertilizers, and forest thinning). Dynamic (annually varying) whole-plant mortality was also enabled in the model to support more realistic simulation of forest stand development and natural disturbances. In the most recent model version, separate pools have been defined for fruit. The model version which contains all former and new developments is referred to as Biome-BGC MuSo (Biome-BGC with multi-soil layer). Within the frame of the BioVeL project (http://www.biovel.eu), an open-source and domain-independent scientific workflow management system (http://www.taverna.org.uk) is used to support 'in silico' experimentation and easy applicability of different models, including Biome-BGC MuSo. Workflows can be built upon functionally linked sets of web services, such as retrieval of meteorological datasets and other parameters; preparation of single-run or spatial-run model simulations; desktop-grid-based Monte Carlo experiments with parallel processing; model sensitivity analysis; etc. The newly developed, Monte Carlo experiment based sensitivity analysis is described in this study, and results are presented on the differences in sensitivity between the original and the developed Biome-BGC model.
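A Monte Carlo sensitivity analysis of this kind can be sketched generically as sampling parameters from prescribed ranges, running the model for each draw, and ranking parameters by the rank correlation between each parameter and the output; the parameter names and the toy flux below are hypothetical, not Biome-BGC MuSo inputs:

```python
import numpy as np
from scipy.stats import spearmanr

def monte_carlo_sensitivity(model, param_ranges, n_samples=500, seed=7):
    """Sample each parameter uniformly within its range, evaluate the model,
    and rank parameters by |Spearman rank correlation| with the output."""
    rng = np.random.default_rng(seed)
    names = list(param_ranges)
    samples = np.column_stack([rng.uniform(lo, hi, n_samples)
                               for lo, hi in param_ranges.values()])
    outputs = np.array([model(dict(zip(names, row))) for row in samples])
    ranks = {n: abs(spearmanr(samples[:, j], outputs)[0]) for j, n in enumerate(names)}
    return sorted(ranks.items(), key=lambda kv: kv[1], reverse=True)

# Toy stand-in for an annual carbon flux responding to three parameters.
toy_flux = lambda p: 2.0 * p["max_stomatal_cond"] - 0.5 * p["fine_root_cn"] + 0.05 * p["sla"]
print(monte_carlo_sensitivity(toy_flux, {"max_stomatal_cond": (0.001, 0.01),
                                         "fine_root_cn": (30, 70),
                                         "sla": (20, 50)}))
```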
Final Results from the Jefferson Lab Qweak Experiment
NASA Astrophysics Data System (ADS)
Smith, Gregory
2017-09-01
The Qweak collaboration has unblinded our final result. We briefly describe the polarized electron-proton elastic scattering experiment used to extract the asymmetries measured in the two distinct running periods which constituted the experiment. The precision obtained on the final combined asymmetry is +/- 9.3 ppb. Some of the backgrounds and corrections applied in the experiment will be explained and quantified. We then provide the results of several methods we have used to extract consistent values of the proton's weak charge Q_W^p from our asymmetry measurements. We also present results for the strange and axial form factors obtained from a fit to existing parity-violating electron scattering data. In conjunction with existing atomic parity violation results on 133Cs we extract the vector weak quark couplings C1u and C1d. The latter are combined to obtain the neutron's weak charge. From the proton's weak charge we obtain a result for sin²θW at the energy scale of our experiment, a sensitive SM test of the running of sin²θW. We also show the mass reach for new beyond-the-Standard-Model physics obtained from our determination of the proton's weak charge and its uncertainty, and discuss sensitivity to specific models. This work was supported by the U.S. Department of Energy, Office of Science, under Contract DE-AC05-06OR23177, the Natural Sciences and Engineering Research Council of Canada (NSERC), and the National Science Foundation (NSF).
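For reference, the standard tree-level electroweak relations connecting the quark couplings quoted above to the nucleon weak charges (textbook definitions, not results taken from this talk) are:

```latex
Q_W^p = -2\,(2C_{1u} + C_{1d}) \simeq 1 - 4\sin^2\theta_W, \qquad
Q_W^n = -2\,(C_{1u} + 2C_{1d})
```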
A radon progeny deposition model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rielage, Keith; Elliott, Steven R; Hime, Andrew
2010-12-01
The next generation low-background detectors operating underground aim for unprecedented low levels of radioactive backgrounds. Although the radioactive decays of airborne radon (particularly ²²²Rn) and its subsequent progeny present in an experiment are potential backgrounds, also problematic is the deposition of radon progeny on detector materials. Exposure to radon at any stage of assembly of an experiment can result in surface contamination by progeny supported by the long half life (22 y) of ²¹⁰Pb on sensitive locations of a detector. An understanding of the potential surface contamination from deposition will enable requirements of radon-reduced air and clean room environments for the assembly of low background experiments. It is known that there are a number of environmental factors that govern the deposition of progeny onto surfaces. However, existing models have not explored the impact of some environmental factors important for low background experiments. A test stand has been constructed to deposit radon progeny on various surfaces under a controlled environment in order to develop a deposition model. Results from this test stand and the resulting deposition model are presented.
NASA Astrophysics Data System (ADS)
Demuzere, M.; De Ridder, K.; van Lipzig, N. P. M.
2008-08-01
During the ESCOMPTE campaign (Experience sur Site pour COntraindre les Modeles de Pollution atmospherique et de Transport d'Emissions), a 4-day intensive observation period was selected to evaluate the Advanced Regional Prediction System (ARPS), a nonhydrostatic meteorological mesoscale model that was optimized with a parameterization for thermal roughness length to better represent urban surfaces. The evaluation shows that the ARPS model is able to correctly reproduce temperature, wind speed, and wind direction for one urban and two rural measurement stations. Furthermore, simulated heat fluxes show good agreement with the observations, although the simulated sensible heat fluxes were initially too low for the urban stations. In order to improve the latter, different roughness length parameterization schemes were tested, combined with various thermal admittance values. This sensitivity study showed that the Zilitinkevich scheme combined with an intermediate value of thermal admittance performs best.
NASA Astrophysics Data System (ADS)
Cowdery, E.; Dietze, M.
2016-12-01
As atmospheric carbon dioxide levels continue to increase, it is critical that terrestrial ecosystem models can accurately predict ecological responses to the changing environment. Current predictions of net primary productivity (NPP) in response to elevated atmospheric CO2 concentration are highly variable and contain a considerable amount of uncertainty. The Predictive Ecosystem Analyzer (PEcAn) is an informatics toolbox that wraps around an ecosystem model and can be used to help identify which factors drive uncertainty. We tested a suite of models (LPJ-GUESS, MAESPA, GDAY, CLM5, DALEC, ED2), which represent a range from low to high structural complexity, across a range of Free-Air CO2 Enrichment (FACE) experiments: the Kennedy Space Center Open Top Chamber Experiment, the Rhinelander FACE experiment, the Duke Forest FACE experiment, and the Oak Ridge Experiment on CO2 Enrichment. These tests were implemented in a novel benchmarking workflow that is automated, repeatable, and generalized to incorporate different sites and ecological models. Observational data from the FACE experiments represent a first test of this flexible, extensible approach aimed at providing repeatable tests of model process representation. To identify and evaluate the assumptions causing inter-model differences we used PEcAn to perform model sensitivity and uncertainty analysis, not only to assess the components of NPP, but also to examine system processes such as nutrient uptake and water use. Combining the observed patterns of uncertainty between multiple models with results of the recent FACE model-data synthesis project (FACE-MDS) can help identify which processes need further study and additional data constraints. These findings can be used to inform future experimental design and in turn provide an informative starting point for data assimilation.
Resilience and rejection sensitivity mediate long-term outcomes of parental divorce.
Schaan, Violetta K; Vögele, Claus
2016-11-01
Increasing divorce rates leave more and more children to deal with the separation of their parents. Recent research suggests that children of divorced parents more often experience psychological and physical symptoms than children of non-divorced parents. The processes that mediate the relationship between parental divorce and ill-health, however, are still elusive. This study investigated the mediating role of psychological factors such as resilience and rejection sensitivity on the long-term consequences of parental divorce in young adults. One hundred and ninety-nine participants (mean age 22.3 years) completed an online survey, including measures of mental health, childhood trauma, resilience, and rejection sensitivity. Participants with divorced parents (33%) reported increased levels of psychological symptoms, childhood trauma, and rejection sensitivity, and lower levels of resilience. The association between parental divorce and mental health was fully mediated by resilience, rejection sensitivity, and childhood trauma. The mediation model explained up to 44% of the total variance in mental health symptoms. Resilience and rejection sensitivity are crucial factors for successful coping with the experience of parental separation. Prevention programs that help to boost children's resilience might help to reduce the long-term effects of parental divorce on their attachment style (e.g., rejection sensitivity), thereby improving their mental health in the long run. Furthermore, the results call for parental awareness and counseling to target and reduce the observed increased level of childhood trauma. Limitations concern the cross-sectional and retrospective design of the study.
A radon daughter deposition model for low background experiments
NASA Astrophysics Data System (ADS)
Rielage, K.; Guiseppe, V. E.; Mastbaum, A.; Elliott, S. R.; Hime, A.
2009-05-01
The next generation low-background detectors operating underground, such as dark matter searches and neutrinoless double-beta decay, aim for unprecedented low levels of radioactive backgrounds. Although the radioactive decays of airborne radon (particularly ^222Rn) and its subsequent daughters present in an experiment are potential backgrounds, more troublesome is the deposition of radon daughters on detector materials. Exposure to radon at any stage of assembly of an experiment can result in surface contamination by daughters supported by the long half life (22 y) of ^210Pb on sensitive locations of a detector. An understanding of the potential surface contamination will enable requirements of radon-reduced air and clean room environments for the assembly of low background experiments. It is known that there are a number of environmental factors that govern the deposition of daughters onto surfaces. However, existing models have not explored the impact of some environmental factors important for low background experiments. A test stand has been constructed to deposit radon daughters on various surfaces under a controlled environment in order to develop a deposition model. Results from this test stand and the resulting deposition model will be presented.
NASA Astrophysics Data System (ADS)
Prime, M. B.; Vaughan, D. E.; Preston, D. L.; Buttler, W. T.; Chen, S. R.; Oró, D. M.; Pack, C.
2014-05-01
Experiments applying a supported shock through mating surfaces (Atwood number = 1) with geometrical perturbations have been proposed for studying strength at strain rates up to 10⁷/s using Richtmyer-Meshkov (RM) instabilities. Buttler et al. recently reported experimental results for RM instability growth in copper, but with an unsupported shock applied by high explosives and the geometrical perturbations on the opposite free surface (Atwood number = -1). This novel configuration allowed detailed experimental observation of the instability growth and arrest. We present results and interpretation from numerical simulations of the Buttler RM instability experiments. Highly-resolved, two-dimensional simulations were performed using a Lagrangian hydrocode and the Preston-Tonks-Wallace (PTW) strength model. The model predictions show good agreement with the data. The numerical simulations are used to examine various assumptions previously made in an analytical model and to estimate the sensitivity of such experiments to material strength.
Aun, Marcelo Vivolo; Saraiva-Romanholo, Beatriz Mangueira; de Almeida, Francine Maria; Brüggemann, Thayse Regina; Kalil, Jorge; Martins, Milton de Arruda; Arantes-Costa, Fernanda Magalhães; Giavina-Bianchi, Pedro
2015-01-01
ABSTRACT Objective To develop a new experimental model of chronic allergic pulmonary disease induced by house dust mite, with marked production of specific immunoglobulin E (IgE), eosinophilic inflammatory infiltrate in the airways and remodeling, comparing two different routes of sensitization. Methods The protocol lasted 30 days. BALB/c mice were divided into six groups and were sensitized subcutaneously or intraperitoneally with saline (negative control) or Dermatophagoides pteronyssinus (Der p) at 50 or 500 mcg in three injections. Subsequently they underwent intranasal challenge with Der p or saline for 7 days and were sacrificed 24 hours after the last challenge. We evaluated the titration of specific anti-Der p IgE, eosinophilic density in the peribronchovascular space, and airway remodeling. Results Animals sensitized by either the intraperitoneal or the subcutaneous route produced specific anti-Der p IgE. Peribronchovascular eosinophilia increased only in mice receiving the lower doses of Der p. However, only the group sensitized with Der p 50 mcg by the subcutaneous route showed significant airway remodeling. Conclusion In this murine model of asthma, both sensitization routes led to the production of specific IgE and eosinophilia in the airways. However, only the subcutaneous route was able to induce remodeling. Furthermore, the lower doses of Der p used in sensitization performed better than the higher ones, suggesting immune tolerance. Further studies are required to evaluate the efficacy of this model in the development of bronchial hyperresponsiveness, but it can already be used in experiments aimed at developing new therapeutic drugs or immunotherapeutic strategies. PMID:26761554
NASA Technical Reports Server (NTRS)
Hancock, G. D.; Waite, W. P.
1984-01-01
Two experiments were performed employing swept-frequency microwaves for the purpose of investigating the reflectivity from soil volumes containing both discontinuous and continuous changes in subsurface soil moisture content. Discontinuous moisture profiles were artificially created in the laboratory, while continuous moisture profiles were induced into the soil of test plots by the environment of an agricultural field. The reflectivity for both the laboratory and field experiments was measured using bi-static reflectometers operated over the frequency ranges of 1.0 to 2.0 GHz and 4.0 to 8.0 GHz. Reflectivity models that consider the discontinuous and continuous moisture profiles within the soil volume were developed and compared with the results of the experiments. This comparison shows good agreement between the smooth-surface models and the measurements. In particular, the comparison of the smooth-surface multi-layer model for continuous moisture profiles with the field experiment measurements points out the sensitivity of the specular component of the scattered electromagnetic energy to the movement of moisture in the soil.
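The smooth-surface layered reflectivity idea can be illustrated with the standard coherent thin-film formula for a single soil layer over a half-space at normal incidence; this is a generic sketch with assumed permittivity values, not the authors' multi-layer model:

```python
import numpy as np

def two_layer_reflectivity(eps_top, eps_sub, thickness_m, freq_hz):
    """Smooth-surface, normal-incidence power reflectivity of a soil profile
    with a top layer (eps_top, given thickness) over a half-space (eps_sub),
    using the standard coherent thin-film combination of Fresnel coefficients."""
    c = 3.0e8
    n0, n1, n2 = 1.0, np.sqrt(eps_top + 0j), np.sqrt(eps_sub + 0j)
    r01 = (n0 - n1) / (n0 + n1)          # air / top-layer interface
    r12 = (n1 - n2) / (n1 + n2)          # top-layer / substrate interface
    beta = 2.0 * np.pi * freq_hz / c * n1 * thickness_m  # one-way phase through the layer
    r = (r01 + r12 * np.exp(-2j * beta)) / (1.0 + r01 * r12 * np.exp(-2j * beta))
    return np.abs(r) ** 2

# Dry 5 cm layer (eps ~ 4) over wet soil (eps ~ 20), probed at 1.4 GHz.
print(two_layer_reflectivity(4.0, 20.0, 0.05, 1.4e9))
```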
Weber, Alain; Braybrook, Siobhan; Huflejt, Michal; Mosca, Gabriella; Routier-Kierzkowska, Anne-Lise; Smith, Richard S
2015-06-01
Growth in plants results from the interaction between genetic and signalling networks and the mechanical properties of cells and tissues. There has been a recent resurgence in research directed at understanding the mechanical aspects of growth, and their feedback on genetic regulation. This has been driven in part by the development of new micro-indentation techniques to measure the mechanical properties of plant cells in vivo. However, the interpretation of indentation experiments remains a challenge, since the measured force results from a combination of turgor pressure, cell wall stiffness, and cell and indenter geometry. In order to interpret the measurements, an accurate mechanical model of the experiment is required. Here, we used a plant cell system with a simple geometry, Nicotiana tabacum Bright Yellow-2 (BY-2) cells, to examine the sensitivity of micro-indentation to a variety of mechanical and experimental parameters. Using a finite-element mechanical model, we found that, for indentations of a few microns on turgid cells, the measurements were mostly sensitive to turgor pressure and the radius of the cell, and not to the exact indenter shape or elastic properties of the cell wall. By complementing indentation experiments with osmotic experiments to measure the elastic strain in turgid cells, we could fit the model to both turgor pressure and cell wall elasticity. This allowed us to interpret apparent stiffness values in terms of meaningful physical parameters that are relevant for morphogenesis. © The Author 2015. Published by Oxford University Press on behalf of the Society for Experimental Biology.
Distorted neutrino oscillations from time varying cosmic fields
NASA Astrophysics Data System (ADS)
Krnjaic, Gordan; Machado, Pedro A. N.; Necib, Lina
2018-04-01
Cold, ultralight (≪ eV) bosonic fields can induce fast temporal variation in neutrino couplings, thereby distorting neutrino oscillations. In this paper, we exploit this effect to introduce a novel probe of neutrino time variation and dark matter at long-baseline experiments. We study several representative observables and find that current and future experiments, including DUNE and JUNO, are sensitive to a wide range of model parameters over many decades in mass reach and time-variation periodicity.
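A toy two-flavor illustration of the effect: modulate the mixing angle with a slow cosine (standing in for an ultralight field) and average the standard vacuum survival probability over the field phase. All parameter values below are illustrative only:

```python
import numpy as np

def survival_probability(theta, dm2_ev2, L_km, E_gev):
    """Standard two-flavor vacuum survival probability
    (dm2 in eV^2, baseline L in km, energy E in GeV)."""
    return 1.0 - np.sin(2 * theta) ** 2 * np.sin(1.267 * dm2_ev2 * L_km / E_gev) ** 2

def time_averaged_survival(theta0, dtheta, dm2_ev2, L_km, E_gev, n_phase=1000):
    """Average over the phase of a slow cosine modulation of the mixing angle,
    theta(t) = theta0 + dtheta * cos(phi), as a toy model of an ultralight field."""
    phi = np.linspace(0, 2 * np.pi, n_phase, endpoint=False)
    return survival_probability(theta0 + dtheta * np.cos(phi), dm2_ev2, L_km, E_gev).mean()

# DUNE-like baseline and energy, purely illustrative oscillation parameters.
theta23, dm31 = np.radians(45.0), 2.5e-3
print(survival_probability(theta23, dm31, 1300.0, 2.5))
print(time_averaged_survival(theta23, np.radians(5.0), dm31, 1300.0, 2.5))
```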
White, André O.; Rauhut, Anthony S.
2014-01-01
The present experiments examined the effects of prazosin, a selective α1-adrenergic receptor antagonist, on the development of methamphetamine conditioned hyperactivity and context-specific sensitization. Mice received an injection of vehicle (distilled water) or prazosin (0.5, 1.0 or 2.0 mg/kg) 30 minutes prior to a second injection of vehicle (saline) or methamphetamine (1.0 mg/kg) during the conditioning sessions (Experiment 1). Following the conditioning sessions, mice were tested for conditioned hyperactivity and then tested for context-specific sensitization. In subsequent experiments, mice received an injection of vehicle (distilled water) or prazosin (2.0 mg/kg) immediately (Experiment 2) or 24 hours (Experiment 3) after the conditioning sessions and then tested for conditioned hyperactivity and context-specific sensitization. Prazosin dose-dependently blocked the development of methamphetamine conditioned hyperactivity and context-specific sensitization when administered prior to the methamphetamine during the conditioning phase; however nonspecific motor impairments also were observed (Experiment 1). Immediate (Experiment 2), but not the 24-hour delay (Experiment 3), post-session administration of prazosin attenuated the development of methamphetamine conditioned hyperactivity and context-specific sensitization. Nonspecific motor impairments were not observed in these latter experiments. Collectively, these results suggest that the α1-adrenergic receptor mediates the development of methamphetamine-conditioned hyperactivity and context-specific sensitization, perhaps by altering memory consolidation and/or reconsolidation processes. PMID:24487011
Deafness Simulation: A Model for Enhancing Awareness and Sensitivity among Hearing Educators.
ERIC Educational Resources Information Center
Sevigny-Skyer, Solange C.; Dagel, Delbert D.
1990-01-01
The National Technical Institute for the Deaf developed and implemented a school-based deafness simulation project for hearing faculty members called "Keeping in Touch." Faculty members wore tinnitus maskers which produced a moderate-to-severe hearing loss and subsequently discussed their experiences, feelings, and communication…
A study of material damping in large space structures
NASA Technical Reports Server (NTRS)
Highsmith, A. L.; Allen, D. H.
1989-01-01
A constitutive model was developed for predicting damping as a function of damage in continuous fiber reinforced laminated composites. The damage model is a continuum formulation, and uses internal state variables to quantify damage and its subsequent effect on material response. The model is sensitive to the stacking sequence of the laminate. Given appropriate baseline data from unidirectional material, and damping as a function of damage in one crossply laminate, damping can be predicted as a function of damage in other crossply laminates. Agreement between theory and experiment was quite good. A micromechanics model was also developed for examining the influence of damage on damping. This model explicitly includes crack surfaces. The model provides reasonable predictions of bending stiffness as a function of damage. Damping predictions are not in agreement with the experiment. This is thought to be a result of dissipation mechanisms such as friction, which are not presently included in the analysis.
Modeling of Dense Plasma Effects in Short-Pulse Laser Experiments
NASA Astrophysics Data System (ADS)
Walton, Timothy; Golovkin, Igor; Macfarlane, Joseph; Prism Computational Sciences, Madison, WI Team
2016-10-01
Warm and Hot Dense Matter produced in short-pulse laser experiments can be studied with new high resolving power x-ray spectrometers. Data interpretation requires accurate modeling of the early-time heating dynamics and the radiation conditions that are generated. Producing synthetic spectra requires a model that describes the major physical processes that occur inside the target, including the hot-electron generation and relaxation phases and the effect of target heating. An important issue concerns the sensitivity of the predicted K-line shifts to the continuum lowering model that is used. We will present a set of PrismSPECT spectroscopic simulations using various continuum lowering models: Hummer/Mihalas, Stewart-Pyatt, and Ecker-Kroll, and discuss their effect on the formation of K-shell features. We will also discuss recently implemented models for dense plasma shifts for H-like, He-like and neutral systems.
EURODELTA-Trends, a multi-model experiment of air quality hindcast in Europe over 1990-2010
NASA Astrophysics Data System (ADS)
Colette, Augustin; Andersson, Camilla; Manders, Astrid; Mar, Kathleen; Mircea, Mihaela; Pay, Maria-Teresa; Raffort, Valentin; Tsyro, Svetlana; Cuvelier, Cornelius; Adani, Mario; Bessagnet, Bertrand; Bergström, Robert; Briganti, Gino; Butler, Tim; Cappelletti, Andrea; Couvidat, Florian; D'Isidoro, Massimo; Doumbia, Thierno; Fagerli, Hilde; Granier, Claire; Heyes, Chris; Klimont, Zig; Ojha, Narendra; Otero, Noelia; Schaap, Martijn; Sindelarova, Katarina; Stegehuis, Annemiek I.; Roustan, Yelva; Vautard, Robert; van Meijgaard, Erik; Garcia Vivanco, Marta; Wind, Peter
2017-09-01
The EURODELTA-Trends multi-model chemistry-transport experiment has been designed to facilitate a better understanding of the evolution of air pollution and its drivers for the period 1990-2010 in Europe. The main objective of the experiment is to assess the efficiency of air pollutant emissions mitigation measures in improving regional-scale air quality. The present paper formulates the main scientific questions and policy issues being addressed by the EURODELTA-Trends modelling experiment with an emphasis on how the design and technical features of the modelling experiment answer these questions. The experiment is designed in three tiers, with increasing degrees of computational demand in order to facilitate the participation of as many modelling teams as possible. The basic experiment consists of simulations for the years 1990, 2000, and 2010. Sensitivity analysis for the same three years using various combinations of (i) anthropogenic emissions, (ii) chemical boundary conditions, and (iii) meteorology complements it. The most demanding tier consists of two complete time series from 1990 to 2010, simulated using either time-varying emissions for corresponding years or constant emissions. Eight chemistry-transport models have contributed with calculation results to at least one experiment tier, and five models have - to date - completed the full set of simulations (and 21-year trend calculations have been performed by four models). The modelling results are publicly available for further use by the scientific community. The main expected outcomes are (i) an evaluation of the models' performances for the three reference years, (ii) an evaluation of the skill of the models in capturing observed air pollution trends for the 1990-2010 time period, (iii) attribution analyses of the respective role of driving factors (e.g. emissions, boundary conditions, meteorology), (iv) a dataset based on a multi-model approach, to provide more robust model results for use in impact studies related to human health, ecosystem, and radiative forcing.
Impact and damage of an armor composite
NASA Astrophysics Data System (ADS)
Resnyansky, A. D.; Parry, S.; Bourne, N. K.; Townsend, D.; James, B. J.
2015-06-01
The use of carbon fiber composites under shock and impact loading in aerospace, defense and automotive applications is increasingly important. Therefore prediction of the composite behavior and damage in these conditions is critical. Influence of anisotropy, fiber orientation and the rate of loading during the impact is considered in the present study and validated by comparison with experiments. The experiments deal with the plane, ballistic and Taylor impacts accompanied by high-speed photography observations and tomography of recovered samples. The CTH hydrocode is employed as the modeling platform with an advanced rate sensitive material model used for description of the deformation and damage of the transversely isotropic composite material.
Determinants of Prosocial Behavior in Included Versus Excluded Contexts
Cuadrado, Esther; Tabernero, Carmen; Steinel, Wolfgang
2016-01-01
Prosocial behavior (PSB) is increasingly becoming necessary as more and more individuals experience exclusion. In this context it is important to understand the motivational determinants of PSB. Here we report two experiments which analyzed the influence of dispositional (prosocialness; rejection sensitivity) and motivational variables (prosocial self-efficacy; prosocial collective efficacy; trust; anger; social affiliation motivation) on PSB, first under neutral conditions (Study 1) and then under inclusion or exclusion conditions (Study 2). Both studies provided evidence for the predicted mediation of PSB. Results under neutral as well as inclusion and exclusion conditions supported our predictive model of PSB. In the model, dispositional variables predicted motivational variables, which in turn predicted PSB. We showed that the investigated variables predicted PSB; this suggests that to promote PSB one could (1) foster prosocialness, prosocial self- and collective efficacy, trust in others and affiliation motivation and (2) try to reduce negative feelings and the tendency to dread rejection in an attempt to reduce the negative impact that these variables have on PSB. Moreover, the few differences that emerged in the model between the inclusion and exclusion contexts suggest that in interventions with excluded individuals special emphasis should be placed on addressing rejection sensitivity and lack of trust. PMID:26779103
Model intra-comparison of transboundary sulfate loadings over springtime east Asia
NASA Astrophysics Data System (ADS)
Goto, D.; Ohara, T.; Nakajima, T.; Takemura, T.; Kajino, M.; Dai, T.; Matsui, H.; Takami, A.; Hatakeyama, S.; Aoki, K.; Sugimoto, N.; Shimizu, A.
2013-12-01
Over east Asia, the spatial gradient of sulfate aerosols from source to outflow regions has not been fully evaluated by simulations. In the present study, we ran a global aerosol-transport model (SPRINTARS) for April 2006 to investigate the spatial gradient of sulfate aerosols using multiple measurements, including surface mass concentration, aerosol optical thickness, and vertical profiles of extinction coefficients for spherical particles. We also performed sensitivity experiments to estimate possible uncertainties in sulfate mass loadings caused by macrophysical processes: emission inventory, dynamic core, and spatial resolution. Although the difference in surface sulfate mass concentrations over east Asia among the experiments was large, none of the simulations in the present study, nor the regional models, reproduced the spatial gradient of surface sulfate from the source region over China to the outflow regions in Japan. The sensitivity of the surface sulfate concentration to the different macrophysical factors differs from that of the column sulfate loading, especially in the marine boundary layer (MBL). Therefore, properly simulating transboundary air pollution over east Asia requires the use of multiple measurements in both the source and outflow regions, especially in the MBL during polluted days.
Determinants of Prosocial Behavior in Included Versus Excluded Contexts.
Cuadrado, Esther; Tabernero, Carmen; Steinel, Wolfgang
2015-01-01
Prosocial behavior (PSB) is increasingly becoming necessary as more and more individuals experience exclusion. In this context it is important to understand the motivational determinants of PSB. Here we report two experiments which analyzed the influence of dispositional (prosocialness; rejection sensitivity) and motivational variables (prosocial self-efficacy; prosocial collective efficacy; trust; anger; social affiliation motivation) on PSB, first under neutral conditions (Study 1) and then under inclusion or exclusion conditions (Study 2). Both studies provided evidence for the predicted mediation of PSB. Results under neutral as well as inclusion and exclusion conditions supported our predictive model of PSB. In the model, dispositional variables predicted motivational variables, which in turn predicted PSB. We showed that the investigated variables predicted PSB; this suggests that to promote PSB one could (1) foster prosocialness, prosocial self- and collective efficacy, trust in others and affiliation motivation and (2) try to reduce negative feelings and the tendency to dread rejection in an attempt to reduce the negative impact that these variables have on PSB. Moreover, the few differences that emerged in the model between the inclusion and exclusion contexts suggest that in interventions with excluded individuals special emphasis should be placed on addressing rejection sensitivity and lack of trust.
Heaslip, Vanessa; Hean, Sarah; Parker, Jonathan
2016-08-09
To present a new etemic model of vulnerability. Despite vulnerability being identified as a core consequence of health and health experiences, there has been little research exploring the meaning of vulnerability as a concept. Yet, being vulnerable is known to have dire physical/mental health consequences. It is therefore a fundamental issue for nurses to address. To date, the meaning of the term vulnerability has been influenced by the work of Spiers (Journal of Advanced Nursing, 31, 2000, 715; The Essential Concepts of Nursing: Building Blocks for Practice, 2005, Elsevier, London). Spiers identified two aspects of vulnerability: the etic (external judgment of another person's vulnerability) and the emic (internal lived experience of vulnerability). This approach has led to a plethora of research which has explored the etic (external judgment) of vulnerability and rendered the internal lived (or emic) experience invisible. Consequences of this, for marginalised communities such as Gypsy Roma Travellers, include a lack of culturally sensitive services compounding health inequalities. Position paper. Drawing upon a qualitative phenomenological research study exploring the lived experience of vulnerability from a Gypsy Roma Travelling community (published previously), this paper presents a new model of vulnerability. This etemic model of vulnerability values both external and internal dimensions of vulnerability and argues for a fusion of these two opposing perspectives. If nurses and other health- and social care professionals wish to develop practice that is successful in engaging with Gypsy Roma Travellers, then there is a need to both understand and respect their community. This can be achieved through an etemic approach to understanding their vulnerability, achieved by eliciting lived experience alongside the appreciation of epidemiological studies. If nurses and health practitioners used this etemic approach to practice then it would enable both the development and delivery of culturally sensitive services facilitating health access to this community. Only then will their poor health status be successfully addressed. © 2016 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Thomas, Yoann; Dumas, Franck; Andréfouët, Serge
2016-12-01
The black-lip pearl oyster (Pinctada margaritifera) is cultured extensively to produce black pearls, especially in French Polynesia atoll lagoons. This aquaculture relies on spat collection, a process that experiences spatial and temporal variability and needs to be optimized by understanding which factors influence recruitment. Here, we investigate the sensitivity of P. margaritifera larval dispersal to both physical and biological factors in the lagoon of Ahe atoll. Coupling a validated 3D larval dispersal model, a bioenergetics larval growth model following Dynamic Energy Budget (DEB) theory, and a population dynamics model, we investigate the variability of lagoon-scale connectivity patterns and recruitment potential. The relative contribution of reared and wild broodstock to the lagoon-scale recruitment potential is also investigated. Sensitivity analyses highlighted the major effect of the broodstock population structure, as well as the sensitivity of larval supply and subsequent settlement potential to the larval mortality rate and to inter-individual growth variability. The application of the growth model clarifies how trophic conditions determine the larval supply and connectivity patterns. These results provide new insight into the dynamics and recruitment of bottom-dwelling populations in atoll lagoons, and we discuss how to take advantage of these findings and numerical models for pearl oyster management.
Climate simulations and projections with a super-parameterized climate model
Stan, Cristiana; Xu, Li
2014-07-01
The mean climate and its variability are analyzed in a suite of numerical experiments with a fully coupled general circulation model in which subgrid-scale moist convection is explicitly represented through embedded 2D cloud-system resolving models. Control simulations forced by the present day, fixed atmospheric carbon dioxide concentration are conducted using two horizontal resolutions and validated against observations and reanalyses. The mean state simulated by the higher resolution configuration has smaller biases. Climate variability also shows some sensitivity to resolution, but not as uniformly as in the case of the mean state. The interannual and seasonal variability are better represented in the simulation at lower resolution, whereas the subseasonal variability is more accurate in the higher resolution simulation. The equilibrium climate sensitivity of the model is estimated from a simulation forced by an abrupt quadrupling of the atmospheric carbon dioxide concentration. The equilibrium climate sensitivity temperature of the model is 2.77 °C, and this value is slightly smaller than the mean value (3.37 °C) of contemporary models using conventional representation of cloud processes. As a result, the climate change simulation forced by the representative concentration pathway 8.5 scenario projects an increase in the frequency of severe droughts over most of North America.
Adaptation and Sensitization to Proteotoxic Stress
Leak, Rehana K.
2014-01-01
Although severe stress can elicit toxicity, mild stress often elicits adaptations. Here we review the literature on stress-induced adaptations versus stress sensitization in models of neurodegenerative diseases. We also describe our recent findings that chronic proteotoxic stress can elicit adaptations if the dose is low but that high-dose proteotoxic stress sensitizes cells to subsequent challenges. In these experiments, long-term, low-dose proteasome inhibition elicited protection in a superoxide dismutase-dependent manner. In contrast, acute, high-dose proteotoxic stress sensitized cells to subsequent proteotoxic challenges by eliciting catastrophic loss of glutathione. However, even in the latter model of synergistic toxicity, several defensive proteins were upregulated by severe proteotoxicity. This led us to wonder whether high-dose proteotoxic stress can elicit protection against subsequent challenges in astrocytes, a cell type well known for their resilience. In support of this new hypothesis, we found that the astrocytes that survived severe proteotoxicity became harder to kill. The adaptive mechanism was glutathione dependent. If these findings can be generalized to the human brain, similar endogenous adaptations may help explain why neurodegenerative diseases are so delayed in appearance and so slow to progress. In contrast, sensitization to severe stress may explain why defenses eventually collapse in vulnerable neurons. PMID:24659932
Methamphetamine-induced behavioral sensitization in a rodent model of posttraumatic stress disorder.
Eagle, Andrew L; Perrine, Shane A
2013-07-01
Single prolonged stress (SPS) is a rodent model of posttraumatic stress disorder (PTSD)-like characteristics. Given that PTSD is frequently comorbid with substance abuse and dependence, including methamphetamine (METH), the current study sought to investigate the effects of SPS on METH-induced behavioral sensitization. In experiment 1, Sprague-Dawley rats were subjected to SPS or control treatment and subsequently tested across four sessions of an escalating METH dosing paradigm. METH was injected (i.p.) in escalating doses (0, 0.032, 0.1, 0.32, 1.0, and 3.2 mg/kg; dissolved in saline) every 15 min and ambulatory activity was recorded. In experiment 2, SPS- and control-treated rats were injected (i.p.) with either saline or METH (5 mg/kg) for five consecutive daily sessions and tested for stereotypy as well as ambulatory activity. Two days later, all animals were injected with a challenge dose of METH (2.5 mg/kg) and again tested for activity. No differences in the acute response to METH were observed between SPS and controls. SPS enhanced METH-induced ambulatory activity across sessions, compared to controls. METH-induced stereotypy increased across sessions, indicative of behavioral sensitization; however, SPS attenuated, rather than enhanced, this effect, suggesting that SPS may prevent the development of stereotypy sensitization. Collectively, results show that SPS increases repeated METH-induced ambulatory activity while preventing the transition across sessions from ambulatory activity to stereotypy. These findings suggest that SPS alters drug-induced neuroplasticity associated with behavioral sensitization to METH, which may reflect an effect on the shared neurocircuitry underlying PTSD and substance dependence. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Van Bockstal, Pieter-Jan; Mortier, Séverine Thérèse F C; Corver, Jos; Nopens, Ingmar; Gernaey, Krist V; De Beer, Thomas
2018-02-01
Pharmaceutical batch freeze-drying is commonly used to improve the stability of biological therapeutics. The primary drying step is regulated by the dynamic settings of the adaptable process variables, shelf temperature T_s and chamber pressure P_c. Mechanistic modelling of the primary drying step yields the optimal dynamic combination of these adaptable process variables as a function of time. According to Good Modelling Practices, a Global Sensitivity Analysis (GSA) is essential for appropriate model building. In this study, both a regression-based and a variance-based GSA were conducted on a validated mechanistic primary drying model to estimate the impact of several model input parameters on two output variables, the product temperature at the sublimation front T_i and the sublimation rate ṁ_sub. T_s was identified as the most influential parameter for both T_i and ṁ_sub, followed by P_c and the dried product mass transfer resistance α_Rp for T_i and ṁ_sub, respectively. The GSA findings were experimentally validated for ṁ_sub via a Design of Experiments (DoE) approach. The results indicated that GSA is a very useful tool for evaluating the impact of different process variables on the model outcome, leading to essential process knowledge without the need for time-consuming experiments (e.g., DoE). Copyright © 2017 Elsevier B.V. All rights reserved.
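As an illustration of the regression-based GSA mentioned above, the sketch below computes standardized regression coefficients from Monte Carlo samples; the input ranges and the response function are invented placeholders, not the validated mechanistic primary drying model:

```python
# Hypothetical sketch of a regression-based global sensitivity analysis:
# standardized regression coefficients (SRCs) from Monte Carlo sampling.
# The response function is a made-up stand-in for the primary drying model.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
# Assumed input ranges: shelf temperature Ts (K), chamber pressure Pc (Pa),
# dried-product resistance Rp (arbitrary units).
Ts = rng.uniform(243.0, 273.0, n)
Pc = rng.uniform(5.0, 20.0, n)
Rp = rng.uniform(1.0, 5.0, n)

def sublimation_rate(Ts, Pc, Rp):
    """Toy response: increases with shelf temperature, decreases with
    chamber pressure and product resistance (illustrative only)."""
    return 0.02 * (Ts - 240.0) / Rp - 0.001 * Pc

y = sublimation_rate(Ts, Pc, Rp)

X = np.column_stack([Ts, Pc, Rp])
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)   # standardized regression coefficients

for name, coef in zip(["Ts", "Pc", "Rp"], src):
    print(f"SRC({name}) = {coef:+.3f}")
```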
Automated Optimization of Potential Parameters
Di Pierro, Michele; Elber, Ron
2013-01-01
An algorithm and software to refine parameters of empirical energy functions according to condensed phase experimental measurements are discussed. The algorithm is based on sensitivity analysis and local minimization of the differences between experiment and simulation as a function of potential parameters. It is illustrated for a toy problem of alanine dipeptide and is applied to folding of the peptide WAAAH. The helix fraction is highly sensitive to the potential parameters while the slope of the melting curve is not. The sensitivity variations make it difficult to satisfy both observations simultaneously. We conjecture that there is no set of parameters that reproduces experimental melting curves of short peptides that are modeled with the usual functional form of a force field. PMID:24015115
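A minimal sketch of the refinement loop described above, local minimization of the experiment-simulation mismatch with the fitted Jacobian doubling as a sensitivity matrix, is given below; the observables, parameters, and the toy simulate() function are assumptions, not the authors' force-field machinery:

```python
# Hypothetical sketch: refine potential parameters by locally minimizing the
# mismatch between simulated and experimental observables.  The "simulate"
# function is a toy stand-in; real use would wrap a condensed-phase simulation.
import numpy as np
from scipy.optimize import least_squares

experimental = np.array([0.62, 0.35])      # e.g. helix fraction, melting-curve slope

def simulate(params):
    eps, sigma = params
    # Invented smooth mapping from parameters to observables (illustrative only).
    return np.array([0.5 + 0.3 * np.tanh(eps - 1.0), 0.4 - 0.05 * sigma])

def residuals(params):
    return simulate(params) - experimental

fit = least_squares(residuals, x0=[1.0, 1.0], diff_step=1e-4)
print("refined parameters:", fit.x)
# The Jacobian estimated during the fit doubles as a sensitivity matrix:
print("sensitivities d(observable)/d(parameter):\n", fit.jac)
```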
Direct drive: Simulations and results from the National Ignition Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Radha, P. B., E-mail: rbah@lle.rochester.edu; Hohenberger, M.; Edgell, D. H.
Direct-drive implosion physics is being investigated at the National Ignition Facility. The primary goal of the experiments is twofold: to validate modeling related to implosion velocity and to estimate the magnitude of hot-electron preheat. Implosion experiments indicate that the energetics is well modeled when cross-beam energy transfer (CBET) is included in the simulation and an overall multiplier to the CBET gain factor is employed; time-resolved scattered light and scattered-light spectra display the correct trends. Trajectories from backlit images are well modeled, although those from measured self-emission images indicate increased shell thickness and reduced shell density relative to simulations. Sensitivity analyses indicate that the most likely cause for the density reduction is nonuniformity growth seeded by laser imprint and not laser-energy coupling. Hot-electron preheat is at tolerable levels in the ongoing experiments, although it is expected to increase after the mitigation of CBET. Future work will include continued model validation, imprint measurements, and mitigation of CBET and hot-electron preheat.
Risk-Based Fire Safety Experiment Definition for Manned Spacecraft
NASA Technical Reports Server (NTRS)
Apostolakis, G. E.; Ho, V. S.; Marcus, E.; Perry, A. T.; Thompson, S. L.
1989-01-01
Risk methodology is used to define experiments to be conducted in space which will help to construct and test the models required for accident sequence identification. The development of accident scenarios is based on the realization that whether damage occurs depends on the time competition of two processes: the ignition and creation of an adverse environment, and the detection and suppression activities. If the fire grows and causes damage faster than it is detected and suppressed, then an accident occurred. The proposed integrated experiments will provide information on individual models that apply to each of the above processes, as well as previously unidentified interactions and processes, if any. Initially, models that are used in terrestrial fire risk assessments are considered. These include heat and smoke release models, detection and suppression models, as well as damage models. In cases where the absence of gravity substantially invalidates a model, alternate models will be developed. Models that depend on buoyancy effects, such as the multizone compartment fire models, are included in these cases. The experiments will be performed in a variety of geometries simulating habitable areas, racks, and other spaces. These simulations will necessitate theoretical studies of scaling effects. Sensitivity studies will also be carried out including the effects of varying oxygen concentrations, pressures, fuel orientation and geometry, and air flow rates. The experimental apparatus described herein includes three major modules: the combustion, the fluids, and the command and power modules.
Swanson, William H.; Dul, Mitchell W.; Horner, Douglas G.; Liu, Tiffany; Tran, Irene
2014-01-01
Purpose. To develop perimetric stimuli for which sensitivities are more resistant to reduced retinal illumination than current clinical perimeters. Methods. Fifty-four people free of eye disease were dilated and tested monocularly. For each test, retinal illumination was attenuated with neutral density (ND) filters, and a standard adaptation model was fit to derive mean and SEM for the adaptation parameter (NDhalf). For different stimuli, t-tests on NDhalf were used to assess significance of differences in consistency with Weber's law. Three experiments used custom Gaussian-windowed contrast sensitivity perimetry (CSP). Experiment 1 used CSP-1, with a Gaussian temporal pulse, a spatial frequency of 0.375 cyc/deg (cpd), and SD of 1.5°. Experiment 1 also used the Humphrey Matrix perimeter, with the N-30 test using 0.25 cpd and 25 Hz flicker. Experiment 2 used a rectangular temporal pulse, SDs of 0.25° and 0.5°, and spatial frequencies of 0.0 and 1.0 cpd. Experiment 3 used CSP-2, with 5-Hz flicker, SDs from 0.5° to 1.8°, and spatial frequencies from 0.14 to 0.50 cpd. Results. In Experiment 1, CSP-1 was more consistent with Weber's law (NDhalf ± SEM = 1.86 ± 0.08 log unit) than N-30 (NDhalf = 1.03 ± 0.03 log unit; t > 9, P < 0.0001). All stimuli used in Experiments 2 and 3 had comparable consistency with Weber's law (NDhalf = 1.49–1.69 log unit; t < 2). Conclusions. Perimetric sensitivities were consistent with Weber's law when higher temporal frequencies were avoided. PMID:24370832
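The adaptation fits reported above can be illustrated with a simple half-saturation model of sensitivity versus neutral-density attenuation; both the functional form and the synthetic data below are assumptions, not the study's exact "standard adaptation model":

```python
# Hypothetical sketch: fit an adaptation parameter (NDhalf) to log contrast
# sensitivity measured through neutral-density (ND) filters.  The assumed
# model keeps sensitivity flat (Weber's law) at low attenuation and rolls it
# off once retinal illuminance falls below a half-saturation level.
import numpy as np
from scipy.optimize import curve_fit

def adaptation_model(nd, log_s_max, nd_half):
    # Sensitivity proportional to I / (I + I_half), with I = 10**(-nd).
    return log_s_max + np.log10(10.0 ** (-nd) / (10.0 ** (-nd) + 10.0 ** (-nd_half)))

nd_filters = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
log_sens = np.array([1.70, 1.69, 1.66, 1.55, 1.35, 1.02, 0.63])  # synthetic data

params, cov = curve_fit(adaptation_model, nd_filters, log_sens, p0=[1.7, 1.8])
log_s_max, nd_half = params
sem = np.sqrt(np.diag(cov))
print(f"NDhalf = {nd_half:.2f} +/- {sem[1]:.2f} log units")
```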
Flight simulator fidelity assessment in a rotorcraft lateral translation maneuver
NASA Technical Reports Server (NTRS)
Hess, R. A.; Malsbury, T.; Atencio, A., Jr.
1992-01-01
A model-based methodology for assessing flight simulator fidelity in closed-loop fashion is exercised in analyzing a rotorcraft low-altitude maneuver for which flight test and simulation results were available. The addition of a handling qualities sensitivity function to a previously developed model-based assessment criteria allows an analytical comparison of both performance and handling qualities between simulation and flight test. Model predictions regarding the existence of simulator fidelity problems are corroborated by experiment. The modeling approach is used to assess analytically the effects of modifying simulator characteristics on simulator fidelity.
The CHASE laboratory search for chameleon dark energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steffen, Jason H.; /Fermilab
2010-11-01
A scalar field is a favorite candidate for the particle responsible for dark energy. However, few theoretical means exist that can simultaneously explain the observed acceleration of the Universe and evade tests of gravity. The chameleon mechanism, whereby the properties of a particle depend upon the local environment, is one possible avenue. We present the results of the Chameleon Afterglow Search (CHASE) experiment, a laboratory probe for chameleon dark energy. CHASE marks a significant improvement over other searches for chameleons, both in terms of its sensitivity to the photon/chameleon coupling and in its sensitivity to the classes of chameleon dark energy models and standard power-law models. Since chameleon dark energy is virtually indistinguishable from a cosmological constant, CHASE tests dark energy models in a manner not accessible to astronomical surveys.
The Key Role of Pain Catastrophizing in the Disability of Patients with Acute Back Pain.
Ramírez-Maestre, C; Esteve, R; Ruiz-Párraga, G; Gómez-Pérez, L; López-Martínez, A E
2017-04-01
This study investigated the role of anxiety sensitivity, resilience, pain catastrophizing, depression, pain fear-avoidance beliefs, and pain intensity in patients with acute back pain-related disability. Two hundred and thirty-two patients with acute back pain completed questionnaires on anxiety sensitivity, resilience, pain catastrophizing, fear-avoidance beliefs, depression, pain intensity, and disability. A structural equation modelling analysis revealed that anxiety sensitivity was associated with pain catastrophizing, and resilience was associated with lower levels of depression. Pain catastrophizing was positively associated with fear-avoidance beliefs and pain intensity. Depression was associated with fear-avoidance beliefs, but was not associated with pain intensity. Finally, catastrophizing, fear-avoidance beliefs, and pain intensity were positively and significantly associated with acute back pain-related disability. Although fear-avoidance beliefs and pain intensity were associated with disability, the results showed that pain catastrophizing was a central variable in the pain experience and had significant direct associations with disability when pain was acute. Anxiety sensitivity appeared to be an important antecedent of catastrophizing, whereas the influence of resilience on the acute back pain experience was limited to its relationship with depression.
Proposal to search for mu- N -> e- N with a single event sensitivity below 10 -16
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carey, R.M.; Lynch, K.R.; Miller, J.P.
2008-10-01
We propose a new experiment, Mu2e, to search for charged lepton flavor violation with unprecedented sensitivity. We will measure the ratio of the rate of coherent neutrinoless conversion of a negatively charged muon into an electron in the field of a nucleus to the rate of the muon capture process, $R_{\mu e} = \Gamma(\mu^- + A(Z,N) \to e^- + A(Z,N)) / \Gamma(\mu^- + A(Z,N) \to \nu_\mu + A(Z-1,N))$, with a sensitivity $R_{\mu e} \le 6 \times 10^{-17}$ at 90% CL. This is almost a four order-of-magnitude improvement over the existing limit. The observation of such a process would be unambiguous evidence of physics beyond the Standard Model. Since the discovery of the muon in 1936, physicists have attempted to answer I.I. Rabi's famous question: 'Who ordered that?' Why is there a muon? What role does it play in the larger questions of why there are three families and flavors of quarks, leptons, and neutrinos? We know quarks mix through a mechanism described by the Cabibbo-Kobayashi-Maskawa matrix, which has been studied for forty years. Neutrino mixing has been observed in the last decade, but mixing among the family of charged leptons has never been seen. The current limits are of order $10^{-11}$-$10^{-13}$, so the process is rare indeed. Why is such an experiment important and timely? A major motivation for experiments at the Large Hadron Collider (LHC) is the possible observation of supersymmetric particles in the TeV mass range. Many of these supersymmetric models predict a $\mu$-$e$ conversion signal at $R_{\mu e} \approx 10^{-15}$. We propose to search for $\mu$-$e$ conversion at a sensitivity that exceeds this by more than an order of magnitude. The LHC may not be able to conclusively distinguish among supersymmetric models, so Mu2e will provide invaluable information should the LHC observe a signal. In the case where the LHC finds no evidence of supersymmetry, or other beyond-the-standard-model physics, Mu2e will probe for new physics at mass scales up to $10^{4}$ TeV, far beyond the reach of any planned accelerator.
Active Rack Isolation System Program and Technical Status
NASA Technical Reports Server (NTRS)
Bushnell, Glenn; Fialho, Ian; Allen, James; Quraishi, Naveed
2000-01-01
The Boeing Active Rack Isolation System (ARIS) is one of the means used to isolate acceleration-sensitive scientific experiments from structurally transmitted disturbances aboard the International Space Station. The presentation provides an overview of ARIS and technical issues associated with the development of the active control system. An overview of ARIS analytical models is presented along with recent isolation performance predictions made using these models. Issues associated with commanding and capturing ARIS data are discussed and possible future options based on the ARIS ISS Characterization Experiment (ICE) Payload On-orbit Processor (POP) are outlined. An overview of the ARIS-ICE experiment scheduled to fly on ISS Flight 6A is presented. The presentation concludes with a discussion of recent- developmental work that includes passive rack damping, umbilical redesigns and advanced multivariable control design methods.
Status of the neutrino mass experiment KATRIN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bornschein, L.; Bornschein, B.; Sturm, M.
The most sensitive way to determine the neutrino mass scale without further assumptions is to measure the shape of a tritium beta spectrum near its kinematic end-point. Tritium is the nucleus of choice because of its low endpoint energy, superallowed decay, and simple atomic structure. Within an international collaboration, the Karlsruhe Tritium Neutrino experiment (KATRIN) is currently being built up at KIT. KATRIN will allow a model-independent measurement of the neutrino mass scale with an expected sensitivity of 0.2 eV/c² (90% CL). KATRIN will use a source of ultrapure molecular tritium. This contribution presents the status of the KATRIN experiment, focusing on its Calibration and Monitoring System (CMS), which is the last component still subject to research and development. After a brief overview of the KATRIN experiment in Section II, the CMS is introduced in Section III. In Section IV, Beta Induced X-Ray Spectroscopy (BIXS), the method of choice for monitoring the tritium activity of the KATRIN source, is described and first results are presented.
Retrieval of tropospheric carbon monoxide for the MOPITT experiment
NASA Astrophysics Data System (ADS)
Pan, Liwen; Gille, John C.; Edwards, David P.; Bailey, Paul L.; Rodgers, Clive D.
1998-12-01
A retrieval method for deriving the tropospheric carbon monoxide (CO) profile and column amount under clear sky conditions has been developed for the Measurements of Pollution In The Troposphere (MOPITT) instrument, scheduled for launch in 1998 onboard the EOS-AM1 satellite. This paper presents a description of the method along with analyses of retrieval information content. These analyses characterize the forward measurement sensitivity, the contribution of a priori information, and the retrieval vertical resolution. Ensembles of tropospheric CO profiles were compiled both from aircraft in situ measurements and from chemical model results and were used in retrieval experiments to characterize the method and to study the sensitivity to different parameters. Linear error analyses were carried out in parallel with the ensemble experiments. Results of these experiments and analyses indicate that MOPITT CO column measurements will have better than 10% precision, and CO profile measurement will have approximately three pieces of independent information that will resolve 3-5 tropospheric layers to approximately 10% precision. These analyses are important for understanding MOPITT data, both for application of data in tropospheric chemistry studies and for comparison with in situ measurements.
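The "pieces of independent information" quoted above are conventionally measured by the trace of the retrieval averaging kernel. A sketch in the Rodgers optimal-estimation formalism follows; the Jacobian and covariances are invented placeholders rather than actual MOPITT quantities:

```python
# Hypothetical sketch: degrees of freedom for signal (DFS) from the averaging
# kernel A of a linear optimal-estimation retrieval.  The Jacobian K and the
# covariances below are invented placeholders, not MOPITT weighting functions.
import numpy as np

n_levels, n_channels = 7, 4
rng = np.random.default_rng(2)
K = rng.normal(0.0, 1.0, (n_channels, n_levels))      # measurement Jacobian
S_e = 0.01 * np.eye(n_channels)                       # measurement-error covariance
S_a = 0.25 * np.eye(n_levels)                         # a priori covariance

# Gain matrix and averaging kernel: A = G K, with
# G = (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1
Se_inv = np.linalg.inv(S_e)
G = np.linalg.inv(K.T @ Se_inv @ K + np.linalg.inv(S_a)) @ K.T @ Se_inv
A = G @ K

dfs = np.trace(A)   # number of independent pieces of information in the profile
print(f"Degrees of freedom for signal: {dfs:.2f}")
```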
NASA Astrophysics Data System (ADS)
Song, X.; Chen, X.; Dai, H.; Hammond, G. E.; Song, H. S.; Stegen, J.
2016-12-01
The hyporheic zone is an active region for biogeochemical processes such as carbon and nitrogen cycling, where groundwater and surface water mix and interact with each other with distinct biogeochemical and thermal properties. The biogeochemical dynamics within the hyporheic zone are driven by both river water and groundwater hydraulic dynamics, which are directly affected by climate change scenarios. In addition, the hydraulic and thermal properties of local sediments and microbial and chemical processes also play important roles in biogeochemical dynamics. Thus, for a comprehensive understanding of the biogeochemical processes in the hyporheic zone, a coupled thermo-hydro-biogeochemical model is needed. As multiple uncertainty sources are involved in the integrated model, it is important to identify its key modules/parameters through sensitivity analysis. In this study, we develop a 2D cross-section model of the hyporheic zone at the DOE Hanford site adjacent to the Columbia River and use this model to quantify module and parametric sensitivity in the assessment of climate change. To achieve this purpose, we (1) develop a facies-based groundwater flow and heat transfer model that incorporates facies geometry and heterogeneity characterized from a field data set, (2) derive multiple reaction networks/pathways from batch experiments with in situ samples and integrate temperature-dependent reactive transport modules into the flow model, (3) assign multiple climate change scenarios to the coupled model by analyzing historical river stage data, and (4) apply a variance-based global sensitivity analysis to quantify scenario, module, and parameter uncertainty in a hierarchical manner. The objectives of the research are to (1) identify the key controlling factors of the coupled thermo-hydro-biogeochemical model in the assessment of climate change, and (2) quantify the carbon consumption in the hyporheic zone under different climate change scenarios.
A verification and validation effort for high explosives at Los Alamos National Lab (u)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scovel, Christina A; Menikoff, Ralph S
2009-01-01
We have started a project to verify and validate ASC codes used to simulate detonation waves in high explosives. Since there are no non-trivial analytic solutions, we are going to compare simulated results with experimental data that cover a wide range of explosive phenomena. The intent is to compare both different codes and different high explosives (HE) models. The first step is to test the products equation of state used for the HE models. For this purpose, the cylinder test, flyer plate, and plate-push experiments are being used. These experiments sample different regimes in thermodynamic phase space: the CJ isentrope for the cylinder tests, the isentrope behind an overdriven detonation wave for the flyer plate experiment, and expansion following a reflected CJ detonation for the plate-push experiment, which is sensitive to the Gruneisen coefficient. The results of our findings for PBX 9501 are presented here.
Review and assessment of turbulence models for hypersonic flows
NASA Astrophysics Data System (ADS)
Roy, Christopher J.; Blottner, Frederick G.
2006-10-01
Accurate aerodynamic prediction is critical for the design and optimization of hypersonic vehicles. Turbulence modeling remains a major source of uncertainty in the computational prediction of aerodynamic forces and heating for these systems. The first goal of this article is to update the previous comprehensive review of hypersonic shock/turbulent boundary-layer interaction experiments published in 1991 by Settles and Dodson (Hypersonic shock/boundary-layer interaction database. NASA CR 177577, 1991). In their review, Settles and Dodson developed a methodology for assessing experiments appropriate for turbulence model validation and critically surveyed the existing hypersonic experiments. We limit the scope of our current effort by considering only two-dimensional (2D)/axisymmetric flows in the hypersonic flow regime where calorically perfect gas models are appropriate. We extend the prior database of recommended hypersonic experiments (on four 2D and two 3D shock-interaction geometries) by adding three new geometries. The first two geometries, the flat plate/cylinder and the sharp cone, are canonical, zero-pressure gradient flows which are amenable to theory-based correlations, and these correlations are discussed in detail. The third geometry added is the 2D shock impinging on a turbulent flat plate boundary layer. The current 2D hypersonic database for shock-interaction flows thus consists of nine experiments on five different geometries. The second goal of this study is to review and assess the validation usage of various turbulence models on the existing experimental database. Here we limit the scope to one- and two-equation turbulence models where integration to the wall is used (i.e., we omit studies involving wall functions). A methodology for validating turbulence models is given, followed by an extensive evaluation of the turbulence models on the current hypersonic experimental database. A total of 18 one- and two-equation turbulence models are reviewed, and results of turbulence model assessments for the six models that have been extensively applied to the hypersonic validation database are compiled and presented in graphical form. While some of the turbulence models do provide reasonable predictions for the surface pressure, the predictions for surface heat flux are generally poor, and often in error by a factor of four or more. In the vast majority of the turbulence model validation studies we review, the authors fail to adequately address the numerical accuracy of the simulations (i.e., discretization and iterative error) and the sensitivities of the model predictions to freestream turbulence quantities or near-wall y+ mesh spacing. We recommend new hypersonic experiments be conducted which (1) measure not only surface quantities but also mean and fluctuating quantities in the interaction region and (2) provide careful estimates of both random experimental uncertainties and correlated bias errors for the measured quantities and freestream conditions. For the turbulence models, we recommend that a wide-range of turbulence models (including newer models) be re-examined on the current hypersonic experimental database, including the more recent experiments. Any future turbulence model validation efforts should carefully assess the numerical accuracy and model sensitivities. In addition, model corrections (e.g., compressibility corrections) should be carefully examined for their effects on a standard, low-speed validation database. 
Finally, as new experiments or direct numerical simulation data become available with information on mean and fluctuating quantities, they should be used to improve the turbulence models and thus increase their predictive capability.
Development of a bioenergetics model for the threespine stickleback Gasterosteus aculeatus
Hovel, Rachel A.; Beauchamp, David A.; Hansen, Adam G.; Sorel, Mark H.
2016-01-01
The Threespine Stickleback Gasterosteus aculeatus is widely distributed across northern hemisphere ecosystems, has ecological influence as an abundant planktivore, and is commonly used as a model organism, but the species lacks a comprehensive model to describe bioenergetic performance in response to varying environmental or ecological conditions. This study parameterized a bioenergetics model for the Threespine Stickleback using laboratory measurements to determine mass- and temperature-dependent functions for maximum consumption and routine respiration costs. Maximum consumption experiments were conducted across a range of temperatures from 7.5°C to 23.0°C and a range of fish weights from 0.5 to 4.5 g. Respiration experiments were conducted across a range of temperatures from 8°C to 28°C. Model sensitivity was consistent with other comparable models in that the mass-dependent parameters for maximum consumption were the most sensitive. Growth estimates based on the Threespine Stickleback bioenergetics model suggested that 22°C is the optimal temperature for growth when food is not limiting. The bioenergetics model performed well when used to predict independent, paired measures of consumption and growth observed from a separate wild population of Threespine Sticklebacks. Predicted values for consumption and growth (expressed as percent body weight per day) only deviated from observed values by 2.0%. Our model should provide insight into the physiological performance of this species across a range of environmental conditions and be useful for quantifying the trophic impact of this species in food webs containing other ecologically or economically important species.
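Wisconsin-style bioenergetics models typically express maximum consumption as an allometric mass term multiplied by a dome-shaped temperature scalar; the sketch below uses that generic form with placeholder parameter values, not the fitted stickleback values reported in the study:

```python
# Hypothetical sketch of a Wisconsin-style maximum consumption term:
# Cmax = a * W**b * f(T), where f(T) is a dome-shaped temperature multiplier.
# All parameter values are illustrative placeholders, not the fitted values
# reported for Threespine Stickleback.
import numpy as np

def temperature_scalar(T, T_opt=22.0, T_max=28.0, theta=2.3):
    """Dome-shaped multiplier: equals 1 at T_opt and falls toward 0 near T_max."""
    V = np.clip((T_max - T) / (T_max - T_opt), 0.0, None)
    Z = np.log(theta) * (T_max - T_opt)
    X = (Z ** 2) * (1.0 + (1.0 + 40.0 / Z) ** 0.5) ** 2 / 400.0
    return (V ** X) * np.exp(X * (1.0 - V))

def c_max(weight_g, T, a=0.3, b=-0.25):
    """Maximum specific consumption (g prey / g fish / day), illustrative only."""
    return a * weight_g ** b * temperature_scalar(T)

for T in (8, 15, 22, 26):
    print(f"T = {T:2d} C: Cmax(2 g fish) = {c_max(2.0, T):.3f} g/g/day")
```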
NASA Astrophysics Data System (ADS)
Sarris, Theo S.; Close, Murray; Abraham, Phillip
2018-03-01
A test using Rhodamine WT and heat as tracers, conducted over a 78 day period in a strongly heterogeneous alluvial aquifer, was used to evaluate the utility of the combined observation dataset for aquifer characterization. A highly parameterized model was inverted, with concentration and temperature time-series as calibration targets. Groundwater heads recorded during the experiment were boundary dependent and were ignored during the inversion process. The inverted model produced a high-resolution depiction of the hydraulic conductivity and porosity fields. Statistical properties of these fields are in very good agreement with estimates from previous studies at the site. Spatially distributed sensitivity analysis suggests that both solute and heat transport were most sensitive to the hydraulic conductivity and porosity fields and less sensitive to dispersivity and the thermal distribution factor, with sensitivity to porosity greatly reduced outside the monitored area. The issues of model over-parameterization and non-uniqueness are addressed through identifiability analysis. Longitudinal dispersivity and the thermal distribution factor are highly identifiable; however, spatially distributed parameters are only identifiable near the injection point. Temperature-related density effects became observable for both heat and solute as the temperature anomaly increased above 12 degrees centigrade, and affected down-gradient propagation. Finally, we demonstrate that high-frequency and spatially dense temperature data cannot inform a dual porosity model in the absence of frequent solute concentration measurements.
NASA Astrophysics Data System (ADS)
Burrows, S. M.; Liu, X.; Elliott, S.; Easter, R. C.; Singh, B.; Rasch, P. J.
2015-12-01
Submicron marine aerosol particles are frequently observed to contain substantial fractions of organic material, hypothesized to enter the atmosphere as part of the primary sea spray aerosol formed through bubble bursting. This organic matter in sea spray aerosol may affect cloud condensation nuclei and ice nuclei concentrations in the atmosphere, particularly in remote marine regions. Members of our team have developed a new, mechanistic representation of the enrichment of sea spray aerosol with organic matter, the OCEANFILMS parameterization (Burrows et al., 2014). This new representation uses fields from an ocean biogeochemistry model to predict properties of the emitted aerosol. We have recently implemented the OCEANFILMS representation of sea spray aerosol composition into the Community Atmosphere Model (CAM), and performed sensitivity experiments and comparisons with alternate formulations. Early results from these sensitivity simulations will be shown, including impacts on aerosols, clouds, and radiation. References: Burrows, S. M., Ogunro, O., Frossard, A. A., Russell, L. M., Rasch, P. J., and Elliott, S. M.: A physically based framework for modeling the organic fractionation of sea spray aerosol from bubble film Langmuir equilibria, Atmos. Chem. Phys., 14, 13601-13629, doi:10.5194/acp-14-13601-2014, 2014.
NASA Technical Reports Server (NTRS)
McLachlan, B. G.; Bell, J. H.; Park, H.; Kennelly, R. A.; Schreiner, J. A.; Smith, S. C.; Strong, J. M.; Gallery, J.; Gouterman, M.
1995-01-01
The pressure-sensitive paint method was used in the test of a high-sweep oblique wing model, conducted in the NASA Ames 9- by 7-ft Supersonic Wind Tunnel. Surface pressure data was acquired from both the luminescent paint and conventional pressure taps at Mach numbers between M = 1.6 and 2.0. In addition, schlieren photographs of the outer flow were used to determine the location of shock waves impinging on the model. The results show that the luminescent pressure-sensitive paint can capture both global and fine features of the static surface pressure field. Comparison with conventional pressure tap data shows good agreement between the two techniques, and that the luminescent paint data can be used to make quantitative measurements of the pressure changes over the model surface. The experiment also demonstrates the practical considerations and limitations that arise in the application of this technique under supersonic flow conditions in large-scale facilities, as well as the directions in which future research is necessary in order to make this technique a more practical wind-tunnel testing tool.
Lower-Stratospheric Control of the Frequency of Sudden Stratospheric Warming Events
NASA Astrophysics Data System (ADS)
Martineau, Patrick; Chen, Gang; Son, Seok-Woo; Kim, Joowan
2018-03-01
The sensitivity of stratospheric polar vortex variability to the basic-state stratospheric temperature profile is investigated by performing a parameter sweep experiment with a dry dynamical core general circulation model where the equilibrium temperature profiles in the polar lower and upper stratosphere are systematically varied. It is found that stratospheric variability is more sensitive to the temperature distribution in the lower stratosphere than in the upper stratosphere. In particular, a cold lower stratosphere favors a strong time-mean polar vortex with a large daily variability, promoting frequent sudden stratospheric warming events in the model runs forced with both wavenumber-1 and wavenumber-2 topographies. This sensitivity is explained by the control exerted by the lower-stratospheric basic state onto fluxes of planetary-scale wave activity from the troposphere to the stratosphere, confirming that the lower stratosphere can act like a valve for the upward propagation of wave activity. It is further shown that with optimal model parameters, stratospheric polar vortex climatology and variability mimicking Southern and Northern Hemisphere conditions are obtained with both wavenumber-1 and wavenumber-2 topographies.
Fast Atmosphere-Ocean Model Runs with Large Changes in CO2
NASA Technical Reports Server (NTRS)
Russell, Gary L.; Lacis, Andrew A.; Rind, David H.; Colose, Christopher; Opstbaum, Roger F.
2013-01-01
How does climate sensitivity vary with the magnitude of climate forcing? This question was investigated with the use of a modified coupled atmosphere-ocean model, whose stability was improved so that the model would accommodate large radiative forcings yet be fast enough to reach rapid equilibrium. Experiments were performed in which atmospheric CO2 was multiplied by powers of 2, from 1/64 to 256 times the 1950 value. From 8 to 32 times the 1950 CO2, the climate sensitivity for doubling CO2 reaches 8 °C due to increases in water vapor absorption and cloud top height and to reductions in low-level cloud cover. As the CO2 amount increases further, sensitivity drops as cloud cover and planetary albedo stabilize. No water vapor-induced runaway greenhouse caused by increased CO2 was found for the range of CO2 examined. With CO2 at or below 1/8 of the 1950 value, runaway sea ice does occur as the planet cascades to a snowball Earth climate with fully ice covered oceans and global mean surface temperatures near -30 °C.
Lessening Sensitivity: Student Experiences of Teaching and Learning Sensitive Issues
ERIC Educational Resources Information Center
Lowe, Pam
2015-01-01
Despite growing interest in learning and teaching as emotional activities, there is still very little research on experiences of sensitive issues. Using qualitative data from students from a range of social science disciplines, this study investigates student's experiences. The paper highlights how, although they found it difficult and distressing…
Multicultural Experience and Intercultural Sensitivity among South Korean Adolescents
ERIC Educational Resources Information Center
Park, Jung-Suh
2013-01-01
This study examined experience with multicultural contact and the intercultural sensitivity of majority adolescents in South Korean society, one that is rapidly shifting toward a more multicultural environment. It also analyzed the influence of these multicultural experiences on intercultural sensitivity. The results of the analysis revealed a…
Barrett, Brendan T.; Panesar, Gurvinder K.; Scally, Andrew J.; Pacey, Ian E.
2013-01-01
Background Adults with amblyopia (‘lazy eye’), long-standing strabismus (ocular misalignment) or both typically do not experience visual symptoms because the signal from the weaker eye is given less weight than the signal from its fellow. Here we examine the contribution of the weaker eye of individuals with strabismus and amblyopia with both eyes open and with the deviating eye in its anomalous motor position. Methodology/Results The task consisted of a blue-on-yellow detection task along a horizontal line across the central 50 degrees of the visual field. We compare the results obtained in ten individuals with strabismic amblyopia with those from ten visual normals. At each field location in each participant, we examined how the sensitivity exhibited under binocular conditions compared with sensitivity from four predictions, (i) a model of binocular summation, (ii) the average of the monocular sensitivities, (iii) dominant-eye sensitivity or (iv) non-dominant-eye sensitivity. The proportion of field locations for which the binocular summation model provided the best description of binocular sensitivity was similar in normals (50.6%) and amblyopes (48.2%). Average monocular sensitivity matched binocular sensitivity in 14.1% of amblyopes’ field locations compared to 8.8% of normals’. Dominant-eye sensitivity explained sensitivity at 27.1% of field locations in amblyopes but 21.2% in normals. Non-dominant-eye sensitivity explained sensitivity at 10.6% of field locations in amblyopes but 19.4% in normals. Binocular summation provided the best description of the sensitivity profile in 6/10 amblyopes compared to 7/10 of normals. In three amblyopes, dominant-eye sensitivity most closely reflected binocular sensitivity (compared to two normals) and in the remaining amblyope, binocular sensitivity approximated to an average of the monocular sensitivities. Conclusions Our results suggest a strong positive contribution in habitual viewing from the non-dominant eye in strabismic amblyopes. This is consistent with evidence from other sources that binocular mechanisms are frequently intact in strabismic and amblyopic individuals. PMID:24205005
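The per-location comparison of binocular sensitivity against the four candidate predictions can be sketched as below; the quadratic-summation rule and the synthetic sensitivities are assumptions standing in for the authors' binocular summation model and data:

```python
# Hypothetical sketch: at each visual-field location, pick which of four
# candidate predictions best matches the measured binocular sensitivity.
# Quadratic summation is assumed here as the "binocular summation" rule.
import numpy as np

def best_prediction(s_dom, s_nondom, s_bin):
    preds = {
        "summation": np.sqrt(s_dom ** 2 + s_nondom ** 2),  # assumed quadratic rule
        "average":   (s_dom + s_nondom) / 2.0,
        "dominant":  s_dom,
        "non-dominant": s_nondom,
    }
    return min(preds, key=lambda k: abs(preds[k] - s_bin))

# Synthetic sensitivities (linear units) at a few field locations.
dom    = np.array([10.0, 8.0, 12.0, 6.0])
nondom = np.array([ 9.0, 3.0, 11.0, 6.5])
binoc  = np.array([13.2, 8.1, 16.5, 6.4])

labels = [best_prediction(d, n, b) for d, n, b in zip(dom, nondom, binoc)]
print(labels)   # ['summation', 'dominant', 'summation', 'non-dominant']
```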
White, André O; Rauhut, Anthony S
2014-04-15
The present experiments examined the effects of prazosin, a selective α₁-adrenergic receptor antagonist, on the development of methamphetamine conditioned hyperactivity and context-specific sensitization. Mice received an injection of vehicle (distilled water) or prazosin (0.5, 1.0 or 2.0 mg/kg) 30 min prior to a second injection of vehicle (saline) or methamphetamine (1.0 mg/kg) during the conditioning sessions (Experiment 1). Following the conditioning sessions, mice were tested for conditioned hyperactivity and then tested for context-specific sensitization. In subsequent experiments, mice received an injection of vehicle (distilled water) or prazosin (2.0 mg/kg) immediately (Experiment 2) or 24 h (Experiment 3) after the conditioning sessions and then tested for conditioned hyperactivity and context-specific sensitization. Prazosin dose-dependently blocked the development of methamphetamine conditioned hyperactivity and context-specific sensitization when administered prior to the methamphetamine during the conditioning phase; however nonspecific motor impairments also were observed (Experiment 1). Immediate (Experiment 2), but not the 24-h delay (Experiment 3), post-session administration of prazosin attenuated the development of methamphetamine conditioned hyperactivity and context-specific sensitization. Nonspecific motor impairments were not observed in these latter experiments. Collectively, these results suggest that the α₁-adrenergic receptor mediates the development of methamphetamine-conditioned hyperactivity and context-specific sensitization, perhaps by altering memory consolidation and/or reconsolidation processes. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Wang, Qifeng
The performance of pressure sensitive adhesives (PSAs) depends strongly on the viscoelastic properties of the adhesive material itself and the surface that it is placed into contact with. In this work we use a multiple-oscillatory test with a microindentation apparatus that is able to quantify the mechanical response of adhesive materials in the linear regime, and also in the highly strained regime where the adhesive layer has cavitated to form mechanically isolated fibrils. The experiments involved the use of hemispherical indenters made of glass or polyethylene, brought into contact with a thin adhesive layer and then retracted, with a comprehensive displacement history. A set of model acrylic emulsion-based PSAs was used in the experiments, which show a surprising degree of elastic character at high strain. The experimental results suggest that an adhesive failure criterion based on the stored elastic energy is appropriate for these systems. The primary effect of the substrate is to modify the maximum strain at which adhesive detachment from the indenter occurs.
Measuring end-of-life care outcomes prospectively.
Steinhauser, Karen E
2005-01-01
This paper discusses the state of the science in prospective measurement in end-of-life research and identifies particular areas for focused attention. Topics include defining the scope of inquiry, evaluating experiences of patients too ill to communicate, the role of proxy and family response, measurement sensitivity to change, the role of theory in guiding measurement efforts, evaluating relationships between domains of end-of-life experience, and measurement of cultural comprehensiveness. The state of the science calls for future research to (1) conduct longitudinal studies to capture transitions in end-of-life trajectories; (2) evaluate the quality of proxy reporting as it varies by rater relationship, domain, and over time; (3) use state-of-the-art psychometric and longitudinal techniques to validate measures and to assess sensitivity to change; (4) develop further and test conceptual models of the experience of dying; (5) study the inter-relatedness of multiple dimensions of end-of-life trajectories; (6) compile updated information evaluating available measurement tools; and (7) conduct population-based research with attention to ethnic and age diversity.
Model-experiment interaction to improve representation of phosphorus limitation in land models
NASA Astrophysics Data System (ADS)
Norby, R. J.; Yang, X.; Cabugao, K. G. M.; Childs, J.; Gu, L.; Haworth, I.; Mayes, M. A.; Porter, W. S.; Walker, A. P.; Weston, D. J.; Wright, S. J.
2015-12-01
Carbon-nutrient interactions play important roles in regulating terrestrial carbon cycle responses to atmospheric and climatic change. None of the CMIP5 models has included routines to represent the phosphorus (P) cycle, although P is commonly considered to be the most limiting nutrient in highly productive, lowland tropical forests. Model simulations with the Community Land Model (CLM-CNP) show that inclusion of P coupling leads to a smaller CO2 fertilization effect and warming-induced CO2 release from tropical ecosystems, but there are important uncertainties in the P model, and improvements are limited by a dearth of data. Sensitivity analysis identifies the relative importance of P cycle parameters in determining P availability and P limitation, and thereby helps to define the critical measurements to make in field campaigns and manipulative experiments. To improve estimates of P supply, parameters that describe maximum amount of labile P in soil and sorption-desorption processes are necessary for modeling the amount of P available for plant uptake. Biochemical mineralization is poorly constrained in the model and will be improved through field observations that link root traits to mycorrhizal activity, phosphatase activity, and root depth distribution. Model representation of P demand by vegetation, which currently is set by fixed stoichiometry and allometric constants, requires a different set of data. Accurate carbon cycle modeling requires accurate parameterization of the photosynthetic machinery: Vc,max and Jmax. Relationships between the photosynthesis parameters and foliar nutrient (N and P) content are being developed, and by including analysis of covariation with other plant traits (e.g., specific leaf area, wood density), we can provide a basis for more dynamic, trait-enabled modeling. With this strong guidance from model sensitivity and uncertainty analysis, field studies are underway in Puerto Rico and Panama to collect model-relevant data on P supply and demand functions. New FACE and soil warming experiments in P-limited ecosystems in subtropical Australia, and tropical Brazil, Puerto Rico, and Panama will provide important benchmarks for the performance of P-enabled models under future conditions.
Distorted neutrino oscillations from time varying cosmic fields
Krnjaic, Gordan; Machado, Pedro A. N.; Necib, Lina
2018-04-16
Cold, ultralight ($\ll$ eV) bosonic fields can induce fast temporal variation in neutrino couplings, thereby distorting neutrino oscillations. In this paper, we exploit this effect to introduce a novel probe of neutrino time variation and dark matter at long-baseline experiments. We study several representative observables and find that current and future experiments, including DUNE and JUNO, are sensitive to a wide range of model parameters over many decades in mass reach and time-variation periodicity.
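The distortion described above can be illustrated with a two-flavor survival probability whose mixing angle is modulated in time by an ultralight background field; the functional form and all parameter values below are illustrative assumptions, not the paper's model:

```python
# Hypothetical sketch: two-flavor survival probability with a mixing angle
# modulated in time by an ultralight scalar background.  All parameter values
# are illustrative, not taken from the paper.
import numpy as np

DM2 = 2.5e-3      # eV^2, assumed atmospheric mass splitting
THETA0 = 0.72     # rad, unperturbed mixing angle
EPS = 0.05        # fractional modulation amplitude (assumed)
TAU = 3600.0      # s, modulation period set by the scalar field mass (assumed)

def survival_probability(E_GeV, L_km, t_s):
    theta = THETA0 * (1.0 + EPS * np.sin(2.0 * np.pi * t_s / TAU))
    return 1.0 - np.sin(2.0 * theta) ** 2 * np.sin(1.267 * DM2 * L_km / E_GeV) ** 2

# The probability drifts over a day at fixed baseline/energy (DUNE-like numbers).
times = np.linspace(0.0, 86400.0, 5)
print([round(survival_probability(2.5, 1300.0, t), 4) for t in times])
```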
Medical Education to Enhance Critical Consciousness: Facilitators' Experiences.
Zaidi, Zareen; Vyas, Rashmi; Verstegen, Danielle; Morahan, Page; Dornan, Tim
2017-11-01
To analyze educators' experiences of facilitating cultural discussions in two global health professions education programs and what these experiences had taught them about critical consciousness. A multicultural research team conducted in-depth interviews with 16 faculty who had extensive experience facilitating cultural discussions. They analyzed transcripts of the interviews thematically, drawing sensitizing insights from Gramsci's theory of cultural hegemony. Collaboration and conversation helped the team self-consciously examine their positions toward the data set and be critically reflexive. Participant faculty used their prior experience facilitating cultural discussions to create a "safe space" in which learners could develop critical consciousness. During multicultural interactions they recognized and explicitly addressed issues related to power differentials, racism, implicit bias, and gender bias. They noted the need to be "facile in attending to pain" as learners brought up traumatic experiences and other sensitive issues including racism and the impact of power dynamics. They built relationships with learners by juxtaposing and exploring the sometimes-conflicting norms of different cultures. Participants were reflective about their own understanding and tendency to be biased. They aimed to break free of such biases while role modeling how to have the courage to speak up. Experience had given facilitators in multicultural programs an understanding of their responsibility to promote critical consciousness and social justice. How faculty without prior experience or expertise could develop those values and skills is a topic for future research.
Use of a computer model in the understanding of erythropoietic control mechanisms
NASA Technical Reports Server (NTRS)
Dunn, C. D. R.
1978-01-01
During an eight-week visit, approximately 200 simulations using the computer model for the regulation of erythropoiesis were carried out in four general areas, including simulation of hypoxia and dehydration with the human model and evaluation of the simulation of dehydration using the mouse model. The experiments led to two considerations for the models: firstly, a direct relationship between erythropoietin concentration and bone marrow sensitivity to the hormone and, secondly, a partial correction of tissue hypoxia prior to compensation by an increased hematocrit. This latter change in particular produced a better simulation of the effects of hypoxia on plasma erythropoietin concentrations.
Exploring tropical forest vegetation dynamics using the FATES model
NASA Astrophysics Data System (ADS)
Koven, C. D.; Fisher, R.; Knox, R. G.; Chambers, J.; Kueppers, L. M.; Christoffersen, B. O.; Davies, S. J.; Dietze, M.; Holm, J.; Massoud, E. C.; Muller-Landau, H. C.; Powell, T.; Serbin, S.; Shuman, J. K.; Walker, A. P.; Wright, S. J.; Xu, C.
2017-12-01
Tropical forest vegetation dynamics represent a critical climate feedback in the Earth system, which is poorly represented in current global modeling approaches. We discuss recent progress on exploring these dynamics using the Functionally Assembled Terrestrial Ecosystem Simulator (FATES), a demographic vegetation model for the CESM and ACME ESMs. We will discuss benchmarks of FATES predictions for forest structure against inventory sites, sensitivity of FATES predictions of size and age structure to model parameter uncertainty, and experiments using the FATES model to explore PFT competitive dynamics and the dynamics of size and age distributions in responses to changing climate and CO2.
Posttest RELAP4 analysis of LOFT experiment L1-4
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grush, W.H.; Holmstrom, H.L.O.
Results of posttest analysis of LOFT loss-of-coolant experiment L1-4 with the RELAP4 code are presented. The results are compared with the pretest prediction and the test data. Differences between the RELAP4 model used for this analysis and that used for the pretest prediction are in the areas of initial conditions, nodalization, emergency core cooling system, broken loop hot leg, and steam generator secondary. In general, these changes made only minor improvement in the comparison of the analytical results to the data. Also presented are the results of a limited study of LOFT downcomer modeling which compared the performance of the conventional single downcomer model with that of the new split downcomer model. A RELAP4 sensitivity calculation with artificially elevated emergency core coolant temperature was performed to highlight the need for an ECC mixing model in RELAP4.
1991-08-30
…physiologic states. Physiologic perturbations were performed to test the sensitivity of the model system to detect effects of minoxidil-mediated… Minoxidil, leukotriene D4, tape stripping of stratum corneum, and topical ether-ethanol experiments produced statistically significant increases (52 to…).
Sensitivity of Assimilated Tropical Tropospheric Ozone to the Meteorological Analyses
NASA Technical Reports Server (NTRS)
Hayashi, Hiroo; Stajner, Ivanka; Pawson, Steven; Thompson, Anne M.
2002-01-01
Tropical tropospheric ozone fields from two different experiments performed with an off-line ozone assimilation system developed in NASA's Data Assimilation Office (DAO) are examined. Assimilated ozone fields from the two experiments are compared with the collocated ozone profiles from the Southern Hemispheric Additional Ozonesondes (SHADOZ) network. Results are presented for 1998. The ozone assimilation system includes a chemistry-transport model, which uses analyzed winds from the Goddard Earth Observing System (GEOS) Data Assimilation System (DAS). The two experiments use wind fields from different versions of GEOS DAS: an operational version of the GEOS-2 system and a prototype of the GEOS-4 system. While both versions of the DAS utilize the Physical-space Statistical Analysis System and use comparable observations, they use entirely different general circulation models and data insertion techniques. The shape of the annual-mean vertical profile of the assimilated ozone fields is sensitive to the meteorological analyses, with the GEOS-4-based ozone being closest to the observations. This indicates that the resolved transport in GEOS-4 is more realistic than in GEOS-2. Remaining uncertainties include quantification of the representation of sub-grid-scale processes in the transport calculations, which plays an important role in the locations and seasons where convection dominates the transport.
Nonlinear mathematical modeling and sensitivity analysis of hydraulic drive unit
NASA Astrophysics Data System (ADS)
Kong, Xiangdong; Yu, Bin; Quan, Lingxiao; Ba, Kaixian; Wu, Liujie
2015-09-01
Previous sensitivity analysis studies are not accurate enough and have limited reference value, because their mathematical models are relatively simple, changes in the load and in the initial displacement of the piston are ignored, and no experimental verification is conducted. In view of these deficiencies, a nonlinear mathematical model is established in this paper, including the dynamic characteristics of the servo valve, nonlinear pressure-flow characteristics, the initial displacement of the servo cylinder piston, and friction nonlinearity. The transfer function block diagram for closed-loop position control of the hydraulic drive unit is built, as well as the state equations. By deriving the time-varying coefficient matrix and the time-varying free-term matrix of the sensitivity equations, the sensitivity equations based on the nonlinear mathematical model are obtained. Using the structural parameters of the hydraulic drive unit, the working parameters, the fluid transmission characteristics, and measured friction-velocity curves, simulations of the hydraulic drive unit are performed on the MATLAB/Simulink platform with displacement steps of 2 mm, 5 mm, and 10 mm. The simulation results indicate that the developed nonlinear mathematical model is adequate, based on comparison of the experimental and simulated step-response curves under different constant loads. The sensitivity function time-history curves of seventeen parameters are then obtained from the state-vector time-history curves of the step response. The maximum displacement variation percentage and the sum of the absolute values of the displacement variations over the sampling time are both taken as sensitivity indexes. These sensitivity index values are calculated and shown visually in histograms under different working conditions, and their trends are analyzed. The sensitivity index values of four measurable parameters (supply pressure, proportional gain, initial position of the servo cylinder piston, and load force) are then verified experimentally on a hydraulic drive unit test platform; the experiments show that the sensitivity analysis results obtained through simulation are close to the test results. This research reveals the sensitivity characteristics of each parameter of the hydraulic drive unit and identifies the main and secondary performance-affecting parameters under different working conditions, providing a theoretical foundation for control compensation and structural optimization of the hydraulic drive unit.
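The two sensitivity indexes defined above can be computed directly from nominal and perturbed displacement histories, as in the sketch below; the first-order lag used to generate the step responses is a toy stand-in for the nonlinear hydraulic drive unit model:

```python
# Hypothetical sketch of the two sensitivity indexes described above:
# (1) maximum displacement-variation percentage and (2) sum of the absolute
# displacement variations over the sampling window.  The first-order lag used
# to generate the step responses is a toy stand-in for the nonlinear model.
import numpy as np

def step_response(t, gain, tau, step=2.0):
    """Toy closed-loop position response to a 2 mm displacement step."""
    return step * gain * (1.0 - np.exp(-t / tau))

t = np.linspace(0.0, 1.0, 1001)                    # 1 s sampling window
x_nom = step_response(t, gain=1.00, tau=0.05)      # nominal parameters
x_pert = step_response(t, gain=1.00, tau=0.055)    # one parameter perturbed by +10%

dx = x_pert - x_nom
idx_max = np.max(np.abs(dx)) / np.max(np.abs(x_nom)) * 100.0   # index 1 (%)
idx_sum = np.sum(np.abs(dx))                                    # index 2

print(f"max variation: {idx_max:.2f} %, summed |variation|: {idx_sum:.2f} mm·samples")
```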
Preserving Differential Privacy in Degree-Correlation based Graph Generation
Wang, Yue; Wu, Xintao
2014-01-01
Enabling accurate analysis of social network data while preserving differential privacy has been challenging because graph features such as the clustering coefficient often have high sensitivity, unlike traditional aggregate functions (e.g., count and sum) on tabular data. In this paper, we study the problem of enforcing edge differential privacy in graph generation. The idea is to enforce differential privacy on graph model parameters learned from the original network and then generate graphs for release using the graph model with the privatized parameters. In particular, we develop a differential-privacy-preserving graph generator based on the dK-graph generation model. We first derive from the original graph the parameters (i.e., degree correlations) used in the dK-graph model, then enforce edge differential privacy on the learned parameters, and finally use the dK-graph model with the perturbed parameters to generate graphs. For the 2K-graph model, we enforce edge differential privacy by calibrating noise based on the smooth sensitivity rather than the global sensitivity. By doing so, we achieve the strict differential privacy guarantee with noise of smaller magnitude. We conduct experiments on four real networks and compare the performance of our private dK-graph models with the stochastic Kronecker graph generation model in terms of the utility-privacy tradeoff. Empirical evaluations show that the developed private dK-graph generation models significantly outperform the approach based on the stochastic Kronecker generation model. PMID:24723987
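For orientation, the sketch below shows the general shape of the perturbation step applied to dK-2 (joint degree) counts, but with a plain Laplace mechanism and a hypothetical sensitivity bound `delta_f`; the paper's actual approach calibrates the noise to the smooth sensitivity, which permits smaller noise at the same privacy level.

```python
# Illustrative sketch only: Laplace perturbation of dK-2 counts under an
# assumed sensitivity bound; not the smooth-sensitivity calibration of the paper.
import numpy as np
from collections import Counter

def dk2_series(edges, degree):
    """Count edges by the (unordered) degree pair of their endpoints."""
    return Counter(tuple(sorted((degree[u], degree[v]))) for u, v in edges)

def privatize(dk2, epsilon, delta_f):
    """Add Laplace(delta_f / epsilon) noise to each degree-pair count."""
    rng = np.random.default_rng(0)
    return {pair: max(0.0, c + rng.laplace(scale=delta_f / epsilon))
            for pair, c in dk2.items()}

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
degree = Counter(u for e in edges for u in e)          # node degrees
noisy = privatize(dk2_series(edges, degree), epsilon=1.0, delta_f=4.0)
print(noisy)
```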
STS-40 orbital acceleration research experiment flight results during a typical sleep period
NASA Technical Reports Server (NTRS)
Blanchard, R. C.; Nicholson, J. Y.; Ritter, J. R.
1992-01-01
The Orbital Acceleration Research Experiment (OARE), an electrostatic accelerometer package with complete on-orbit calibration capabilities, was flown for the first time aboard the Space Shuttle on STS-40. This is also the first time an accelerometer package with nano-g sensitivity and a calibration facility has flown aboard the Space Shuttle. The instrument is designed to measure and record the Space Shuttle aerodynamic acceleration environment from the free-molecule flow regime through the rarefied flow transition into the hypersonic continuum regime. Because of its sensitivity, the OARE instrument detects aerodynamic behavior of the Space Shuttle while in low-Earth orbit. A 2-hour orbital period on day seven of the mission, when the crew was asleep and other spacecraft activities were at a minimum, was examined. During the flight, a 'trimmed-mean' filter was used to produce high-quality, low-frequency data which were successfully stored aboard the Space Shuttle in the OARE data storage system. Initial review of the data indicated that, although the expected precision was achieved, some equipment problems occurred, resulting in uncertain accuracy. An acceleration model which includes aerodynamic, gravity-gradient, and rotational effects was constructed and compared with flight data. Examination of the model with the flight data shows the instrument to be sensitive to all major expected low-frequency acceleration phenomena; however, some erratic instrument bias behavior persists in two axes. In these axes, the OARE data can be made to match a comprehensive atmospheric-aerodynamic model by making bias adjustments and slight linear corrections for drift. The other axis does not exhibit these difficulties and gives good agreement with the acceleration model.
Rohani, S Alireza; Ghomashchi, Soroush; Agrawal, Sumit K; Ladak, Hanif M
2017-03-01
Finite-element models of the tympanic membrane are sensitive to the Young's modulus of the pars tensa. The aim of this work is to estimate the Young's modulus under a different experimental paradigm than currently used on the human tympanic membrane. These additional values could potentially be used by the auditory biomechanics community for building consensus. The Young's modulus of the human pars tensa was estimated through inverse finite-element modelling of an in-situ pressurization experiment. The experiments were performed on three specimens with a custom-built pressurization unit at a quasi-static pressure of 500 Pa. The shape of each tympanic membrane before and after pressurization was recorded using a Fourier transform profilometer. The samples were also imaged using micro-computed tomography to create sample-specific finite-element models. For each sample, the Young's modulus was then estimated by numerically optimizing its value in the finite-element model so simulated pressurized shapes matched experimental data. The estimated Young's modulus values were 2.2 MPa, 2.4 MPa and 2.0 MPa, and are similar to estimates obtained using in-situ single-point indentation testing. The estimates were obtained under the assumptions that the pars tensa is linearly elastic, uniform, isotropic with a thickness of 110 μm, and the estimates are limited to quasi-static loading. Estimates of pars tensa Young's modulus are sensitive to its thickness and inclusion of the manubrial fold. However, they do not appear to be sensitive to optimization initialization, height measurement error, pars flaccida Young's modulus, and tympanic membrane element type (shell versus solid). Copyright © 2017 Elsevier B.V. All rights reserved.
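A minimal sketch of the inverse estimation loop described above is given below. The actual study uses sample-specific finite-element models built from micro-CT geometry; here a clamped circular-plate formula stands in for the FE solver purely to make the optimization loop concrete, and all numbers are illustrative assumptions.

```python
# Toy inverse-modelling loop: optimize Young's modulus so a simulated
# pressurized shape matches a "measured" shape (synthetic data here).
import numpy as np
from scipy.optimize import minimize_scalar

p, a, h, nu = 500.0, 4.5e-3, 110e-6, 0.3          # pressure (Pa), radius (m), thickness (m), Poisson ratio
r = np.linspace(0.0, a, 50)                        # radial sample points

def deflection(E):
    """Toy stand-in for the FE model: clamped circular plate under uniform pressure."""
    D = E * h**3 / (12.0 * (1.0 - nu**2))
    return p * a**4 / (64.0 * D) * (1.0 - (r / a) ** 2) ** 2

rng = np.random.default_rng(0)
measured = deflection(2.2e6) + rng.normal(0.0, 1e-6, r.size)   # synthetic "profilometry" data

def misfit(E):
    return np.sum((deflection(E) - measured) ** 2)             # sum-of-squares shape error

result = minimize_scalar(misfit, bounds=(0.5e6, 10e6), method="bounded")
print(f"Estimated Young's modulus: {result.x / 1e6:.2f} MPa")
```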
NASA Astrophysics Data System (ADS)
Dibike, Y. B.; Eum, H. I.; Prowse, T. D.
2017-12-01
Flows originating from alpine-dominated cold-region watersheds typically experience extended winter low flows followed by spring snowmelt and summer rainfall-driven high flows. In a warmer climate, there will be a temperature-induced shift in precipitation from snow towards rain as well as changes in snowmelt timing, affecting the frequency of extreme high- and low-flow events, which could significantly alter ecosystem services. This study examines the potential changes in the frequency and severity of hydrologic extremes in the Athabasca River watershed in Alberta, Canada, based on the Variable Infiltration Capacity (VIC) hydrologic model and selected, statistically downscaled climate change scenario data from the latest Coupled Model Intercomparison Project (CMIP5). The sensitivity of these projected changes is also examined by applying different extreme-flow analysis methods. The hydrological model projections show an overall increase in mean annual streamflow in the watershed and a corresponding shift of the freshet timing to an earlier period. Most streams are projected to experience increases during the winter and spring seasons and decreases during the summer and early fall seasons, with an overall projected increase in extreme high flows, especially for low-frequency events. While the middle and lower parts of the watershed are characterised by projected increases in extreme high flows, the high-elevation alpine region is mainly characterised by corresponding decreases in extreme low-flow events. However, the magnitude of projected changes in extreme flows varies over a wide range, especially for low-frequency events, depending on the climate scenario and period of analysis, and sometimes in a nonlinear way. Nonetheless, the sensitivity of the projected changes to the statistical method of analysis is found to be relatively small compared to the inter-model variability.
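As one example of the kind of extreme-flow analysis compared in the study, the sketch below fits a GEV distribution to annual maximum flows for a baseline and a future period and compares 100-year return levels. The data here are synthetic, and the annual-maxima/GEV choice is only one of several plausible methods, not necessarily the ones used by the authors.

```python
# Hedged sketch: compare 100-year return levels from GEV fits to synthetic
# baseline and future annual maxima.
import numpy as np
from scipy.stats import genextreme

def return_level(annual_maxima, return_period_years):
    """Fit a GEV to annual maxima and return the T-year return level."""
    shape, loc, scale = genextreme.fit(annual_maxima)
    return genextreme.ppf(1.0 - 1.0 / return_period_years, shape, loc, scale)

rng = np.random.default_rng(1)
baseline = rng.gumbel(loc=900.0, scale=150.0, size=30)   # synthetic annual maxima (m^3/s)
future = rng.gumbel(loc=980.0, scale=180.0, size=30)     # synthetic future-period maxima

change = return_level(future, 100) / return_level(baseline, 100) - 1.0
print(f"Projected change in the 100-year flood: {change:+.1%}")
```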
NASA Astrophysics Data System (ADS)
Lopez-Yglesias, Xerxes
Part I: Particles are a key feature of planetary atmospheres. On Earth they represent the greatest source of uncertainty in the global energy budget. This uncertainty can be addressed by making more measurements, by improving the theoretical analysis of measurements, and by better modeling basic particle nucleation and initial particle growth within an atmosphere. This work will focus on the latter two methods of improvement. Uncertainty in measurements is largely due to particle charging. Accurate descriptions of particle charging are challenging because one deals with particles in a gas as opposed to a vacuum, so different length scales come into play. Previous studies have considered the effects of transition between the continuum and kinetic regimes and the effects of two- and three-body interactions within the kinetic regime. These studies, however, use questionable assumptions about the charging process which result in skewed observations and bias in the proposed dynamics of aerosol particles. These assumptions affect both the ions and particles in the system. Ions are assumed to be point monopoles that have a single characteristic speed rather than follow a distribution. Particles are assumed to be perfect conductors that have up to five elementary charges on them. The effects of three-body (ion-molecule-particle) interactions are also overestimated. By revising this theory so that the basic physical attributes of both ions and particles and their interactions are better represented, we are able to make more accurate predictions of particle charging in both the kinetic and continuum regimes. The same revised theory that was used above to model ion charging can also be applied to the flux of neutral vapor-phase molecules to a particle or initial cluster. Using these results we can model the vapor flux to a neutral or charged particle due to diffusion and electromagnetic interactions. In many classical theories currently applied to these models, the finite size of the molecule and the electromagnetic interaction between the molecule and the particle, especially in the neutral-particle case, are completely ignored or, as is often the case for a permanent-dipole vapor species, strongly underestimated. Comparing our model to these classical models, we determine an "enhancement factor" to characterize how important the addition of these physical parameters and processes is to the understanding of particle nucleation and growth. Part II: Whispering gallery mode (WGM) optical biosensors are capable of extraordinarily sensitive specific and non-specific detection of species suspended in a gas or fluid. Recent experimental results suggest that these devices may attain single-molecule sensitivity to protein solutions in the form of stepwise shifts in their resonance wavelength, λ_R, but present sensor models predict much smaller steps than were reported. This study examines the physical interaction between a WGM sensor and a molecule adsorbed to its surface, exploring assumptions made in previous efforts to model WGM sensor behavior, and describing computational schemes that model the experiments for which single-protein sensitivity was reported. The resulting model is used to simulate sensor performance, within constraints imposed by the limited material property data.
On this basis, we conclude that nonlinear optical effects would be needed to attain the reported sensitivity, and that, in the experiments for which extreme sensitivity was reported, a bound protein experiences optical energy fluxes too high for such effects to be ignored.
Uncertainty and sensitivity analysis of fission gas behavior in engineering-scale fuel modeling
Pastore, Giovanni; Swiler, L. P.; Hales, Jason D.; ...
2014-10-12
The role of uncertainties in fission gas behavior calculations as part of engineering-scale nuclear fuel modeling is investigated using the BISON fuel performance code and a recently implemented physics-based model for the coupled fission gas release and swelling. Through the integration of BISON with the DAKOTA software, a sensitivity analysis of the results to selected model parameters is carried out based on UO2 single-pellet simulations covering different power regimes. The parameters are varied within ranges representative of the relative uncertainties and consistent with the information from the open literature. The study leads to an initial quantitative assessment of the uncertainty in fission gas behavior modeling with the parameter characterization presently available. Also, the relative importance of the single parameters is evaluated. Moreover, a sensitivity analysis is carried out based on simulations of a fuel rod irradiation experiment, pointing out a significant impact of the considered uncertainties on the calculated fission gas release and cladding diametral strain. The results of the study indicate that the commonly accepted deviation between calculated and measured fission gas release by a factor of 2 approximately corresponds to the inherent modeling uncertainty at high fission gas release. Nevertheless, higher deviations may be expected for values around 10% and lower. Implications are discussed in terms of directions of research for the improved modeling of fission gas behavior for engineering purposes.
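The sampling-based workflow described above can be sketched generically as below; the surrogate response function, parameter names, and ranges are hypothetical stand-ins for the BISON/DAKOTA setup, intended only to show how parameters varied within uncertainty ranges can be ranked by their influence on fission gas release.

```python
# Generic sensitivity-study sketch: Latin hypercube sampling of assumed
# parameter ranges, evaluation of a surrogate response, and rank correlation.
import numpy as np
from scipy.stats import qmc, spearmanr

names = ["diffusion_coeff", "resolution_rate", "surface_tension"]
bounds = np.array([[0.5, 2.0], [0.5, 2.0], [0.8, 1.2]])     # multiplicative factors (assumed)

sampler = qmc.LatinHypercube(d=len(names), seed=0)
samples = qmc.scale(sampler.random(n=64), bounds[:, 0], bounds[:, 1])

def fission_gas_release(factors):
    """Hypothetical surrogate for the code response (fractional release)."""
    d, r, s = factors
    return 0.05 * d**0.8 / (r**0.3 * s)

fgr = np.array([fission_gas_release(x) for x in samples])
for i, name in enumerate(names):
    rho, _ = spearmanr(samples[:, i], fgr)
    print(f"{name}: Spearman rho = {rho:+.2f}")
```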
The Cloud Feedback Model Intercomparison Project (CFMIP) contribution to CMIP6
NASA Astrophysics Data System (ADS)
Webb, Mark J.; Andrews, Timothy; Bodas-Salcedo, Alejandro; Bony, Sandrine; Bretherton, Christopher S.; Chadwick, Robin; Chepfer, Hélène; Douville, Hervé; Good, Peter; Kay, Jennifer E.; Klein, Stephen A.; Marchand, Roger; Medeiros, Brian; Pier Siebesma, A.; Skinner, Christopher B.; Stevens, Bjorn; Tselioudis, George; Tsushima, Yoko; Watanabe, Masahiro
2017-01-01
The primary objective of CFMIP is to inform future assessments of cloud feedbacks through improved understanding of cloud-climate feedback mechanisms and better evaluation of cloud processes and cloud feedbacks in climate models. However, the CFMIP approach is also increasingly being used to understand other aspects of climate change, and so a second objective has now been introduced, to improve understanding of circulation, regional-scale precipitation, and non-linear changes. CFMIP is supporting ongoing model inter-comparison activities by coordinating a hierarchy of targeted experiments for CMIP6, along with a set of cloud-related output diagnostics. CFMIP contributes primarily to addressing the CMIP6 questions "How does the Earth system respond to forcing?" and "What are the origins and consequences of systematic model biases?", and supports the activities of the WCRP Grand Challenge on Clouds, Circulation and Climate Sensitivity. A compact set of Tier 1 experiments is proposed for CMIP6 to address this question: (1) what are the physical mechanisms underlying the range of cloud feedbacks and cloud adjustments predicted by climate models, and which models have the most credible cloud feedbacks? Additional Tier 2 experiments are proposed to address the following questions. (2) Are cloud feedbacks consistent for climate cooling and warming, and if not, why? (3) How do cloud-radiative effects impact the structure, the strength and the variability of the general atmospheric circulation in present and future climates? (4) How do responses in the climate system due to changes in solar forcing differ from changes due to CO2, and is the response sensitive to the sign of the forcing? (5) To what extent is regional climate change per CO2 doubling state-dependent (non-linear), and why? (6) Are climate feedbacks during the 20th century different to those acting on long-term climate change and climate sensitivity? (7) How do regional climate responses (e.g. in precipitation) and their uncertainties in coupled models arise from the combination of different aspects of CO2 forcing and sea surface warming? CFMIP also proposes a number of additional model outputs in the CMIP DECK, CMIP6 Historical and CMIP6 CFMIP experiments, including COSP simulator outputs and process diagnostics to address the following questions.
How well do clouds and other relevant variables simulated by models agree with observations?
What physical processes and mechanisms are important for a credible simulation of clouds, cloud feedbacks and cloud adjustments in climate models?
Which models have the most credible representations of processes relevant to the simulation of clouds?
How do clouds and their changes interact with other elements of the climate system?
Test of the stress sensitization model in adolescents following the pipeline explosion.
Shao, Di; Gao, Qing-Ling; Li, Jie; Xue, Jiao-Mei; Guo, Wei; Long, Zhou-Ting; Cao, Feng-Lin
2015-10-01
The stress sensitization model states that early traumatic experiences increase vulnerability to the adverse effects of subsequent stressful life events. This study examined the effect of stress sensitization on development of posttraumatic stress disorder (PTSD) symptoms in Chinese adolescents who experienced the pipeline explosion. A total of 670 participants completed self-administered questionnaires on demographic characteristics and degree of explosion exposure, the Childhood Trauma Questionnaire (CTQ), and the Posttraumatic Stress Disorder Checklist-Civilian Version (PCL-C). Associations among the variables were explored using MANOVA, and main effects and interactions were analyzed. Overall MANOVA tests with the PCL-C indicated significant differences for gender (F=6.86, p=.000), emotional abuse (F=6.79, p=.000), and explosion exposure (F=22.40, p=.000). There were significant interactions between emotional abuse and explosion exposure (F=3.98, p=.008) and gender and explosion exposure (F=2.93, p=.033). Being female, childhood emotional abuse, and a high explosion exposure were associated with high PTSD symptom levels. Childhood emotional abuse moderated the effect of explosion exposure on PTSD symptoms. Thus, stress sensitization influenced the development of PTSD symptoms in Chinese adolescents who experienced the pipeline explosion as predicted by the model. Copyright © 2015 Elsevier Inc. All rights reserved.
Effects of Incidental Emotions on Moral Dilemma Judgments: An Analysis Using the CNI Model.
Gawronski, Bertram; Conway, Paul; Armstrong, Joel; Friesdorf, Rebecca; Hütter, Mandy
2018-02-01
Effects of incidental emotions on moral dilemma judgments have garnered interest because they demonstrate the context-dependent nature of moral decision-making. Six experiments (N = 727) investigated the effects of incidental happiness, sadness, and anger on responses in moral dilemmas that pit the consequences of a given action for the greater good (i.e., utilitarianism) against the consistency of that action with moral norms (i.e., deontology). Using the CNI model of moral decision-making, we further tested whether the three kinds of emotions shape moral dilemma judgments by influencing (a) sensitivity to consequences, (b) sensitivity to moral norms, or (c) general preference for inaction versus action regardless of consequences and moral norms (or some combination of the three). Incidental happiness reduced sensitivity to moral norms without affecting sensitivity to consequences or general preference for inaction versus action. Incidental sadness and incidental anger did not show any significant effects on moral dilemma judgments. The findings suggest a central role of moral norms in the contribution of emotional responses to moral dilemma judgments, requiring refinements of dominant theoretical accounts and supporting the value of formal modeling approaches in providing more nuanced insights into the determinants of moral dilemma judgments. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Lee, ChaBum; Lee, Sun-Kyu; Tarbutton, Joshua A
2014-09-01
This paper presents a novel design and sensitivity analysis of a knife-edge-based optical displacement sensor that can be embedded in nanopositioning stages. The measurement system consists of a laser, two knife-edge locations, two photodetectors, and auxiliary optical components in a simple configuration. The knife edge is installed on the stage parallel to its moving direction, and two separated laser beams are incident on the knife edges. While the stage is in motion, the directly transmitted and diffracted light at each knife edge is superposed, producing interference at the detector. The interference is measured with two photodetectors in a differential amplification configuration. The performance of the proposed sensor was mathematically modeled, and the effect of the optical and mechanical parameters (wavelength, beam diameter, distances from laser to knife edge to photodetector, and knife-edge topography) on the sensor outputs was investigated to obtain a novel analytical method for predicting linearity and sensitivity. From the model, all parameters except the beam diameter have a significant influence on the measurement range and sensitivity of the proposed sensing system. To validate the model, two types of knife edges with different edge topography were used for the experiment. By utilizing a shorter wavelength, a smaller sensor distance, and higher edge quality, increased measurement sensitivity can be obtained. The model was experimentally validated, and the results showed good agreement with the theoretically estimated results. This sensor is expected to be easily implemented into nanopositioning stage applications at low cost, and the mathematical model introduced here can be used for design and performance estimation of the knife-edge-based sensor.
Robust optimal design of diffusion-weighted magnetic resonance experiments for skin microcirculation
NASA Astrophysics Data System (ADS)
Choi, J.; Raguin, L. G.
2010-10-01
Skin microcirculation plays an important role in several diseases including chronic venous insufficiency and diabetes. Magnetic resonance (MR) has the potential to provide quantitative information and a better penetration depth compared with other non-invasive methods such as laser Doppler flowmetry or optical coherence tomography. The continuous progress in hardware resulting in higher sensitivity must be coupled with advances in data acquisition schemes. In this article, we first introduce a physical model for quantifying skin microcirculation using diffusion-weighted MR (DWMR) based on an effective dispersion model for skin leading to a q-space model of the DWMR complex signal, and then design the corresponding robust optimal experiments. The resulting robust optimal DWMR protocols improve the worst-case quality of parameter estimates using nonlinear least squares optimization by exploiting available a priori knowledge of model parameters. Hence, our approach optimizes the gradient strengths and directions used in DWMR experiments to robustly minimize the size of the parameter estimation error with respect to model parameter uncertainty. Numerical evaluations are presented to demonstrate the effectiveness of our approach as compared to conventional DWMR protocols.
Improvements in Modeling Thruster Plume Erosion Damage to Spacecraft Surfaces
NASA Technical Reports Server (NTRS)
Soares, Carlos; Olsen, Randy; Steagall, Courtney; Huang, Alvin; Mikatarian, Ron; Myers, Brandon; Koontz, Steven; Worthy, Erica
2015-01-01
Spacecraft bipropellant thrusters impact spacecraft surfaces with high-speed droplets of unburned and partially burned propellant. These impacts can produce erosion damage to optically sensitive hardware and systems (e.g., windows, camera lenses, solar cells, and protective coatings). On the International Space Station (ISS), operational constraints are levied on the position and orientation of the solar arrays to mitigate erosion effects during thruster operations. In 2007, the ISS Program requested evaluation of erosion constraint relief to alleviate operational impacts due to an impaired Solar Alpha Rotary Joint (SARJ). Boeing Space Environments initiated an activity to identify and remove sources of conservatism in the plume-induced erosion model to support an expanded range of acceptable solar array positions. The original plume erosion model over-predicted plume erosion and was adjusted to better correlate with flight experiment results. This paper discusses findings from flight experiments and the methodology employed in modifying the original plume erosion model for better correlation of predictions with flight experiment data. The updated model has been successfully employed in reducing conservatism and allowing for enhanced flexibility in ISS solar array operations.
FireStem2D – A Two-Dimensional Heat Transfer Model for Simulating Tree Stem Injury in Fires
Chatziefstratiou, Efthalia K.; Bohrer, Gil; Bova, Anthony S.; Subramanian, Ravishankar; Frasson, Renato P. M.; Scherzer, Amy; Butler, Bret W.; Dickinson, Matthew B.
2013-01-01
FireStem2D, a software tool for predicting tree stem heating and injury in forest fires, is a physically-based, two-dimensional model of stem thermodynamics that results from heating at the bark surface. It builds on an earlier one-dimensional model (FireStem) and provides improved capabilities for predicting fire-induced mortality and injury before a fire occurs by resolving stem moisture loss, temperatures through the stem, degree of bark charring, and necrotic depth around the stem. We present the results of numerical parameterization and model evaluation experiments for FireStem2D that simulate laboratory stem-heating experiments of 52 tree sections from 25 trees. We also conducted a set of virtual sensitivity analysis experiments to test the effects of unevenness of heating around the stem and with aboveground height using data from two studies: a low-intensity surface fire and a more intense crown fire. The model allows for improved understanding and prediction of the effects of wildland fire on injury and mortality of trees of different species and sizes. PMID:23894599
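To make the thermal core concrete, here is a heavily simplified conduction-only sketch (constant properties, Cartesian grid, crude fixed-temperature boundary); the actual FireStem2D model additionally resolves moisture loss, bark charring, and necrotic depth, none of which is represented here.

```python
# Highly simplified 2-D explicit heat-conduction sketch, not the FireStem2D code.
import numpy as np

def step_heat_2d(T, alpha, dx, dt):
    """One explicit finite-difference step of dT/dt = alpha * laplacian(T)."""
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4.0 * T) / dx**2
    return T + alpha * dt * lap

T = np.full((50, 50), 20.0)          # stem cross-section initially at 20 degC
T[0, :] = 400.0                      # heated bark surface along one edge (crude boundary)
alpha, dx = 1.5e-7, 1e-3             # thermal diffusivity (m^2/s) and grid spacing (m), assumed
dt = 0.2 * dx**2 / alpha             # respect the explicit stability limit
for _ in range(1000):
    T = step_heat_2d(T, alpha, dx, dt)
    T[0, :] = 400.0                  # hold the heated boundary temperature fixed
print(T.max(), T.mean())
```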
Amato, Davide; Heinsbroek, Jasper; Kalivas, Peter W
2018-01-01
Background: Nearly half of all individuals diagnosed with schizophrenia abuse addictive substances such as cocaine. Currently, the neurobiological mechanisms in patients with schizophrenia that lead to cocaine abuse are unknown. A possible explanation for the comorbidity between schizophrenia and addiction is that the rewarding properties of cocaine reverse the diminished motivational drive caused by a chronic antipsychotic regimen. Moreover, chronic antipsychotic treatment can sensitize and amplify cocaine's rewarding effects and exacerbate psychoses. Methods: The rewarding properties of cocaine are attributed to the differential effects of dopamine on D1- and D2-receptor-expressing medium spiny neurons (MSNs) in the nucleus accumbens (NAc). Using in vivo Ca2+ miniature microscope imaging, we characterize the role of D1- and D2-MSNs in mono- and cross-sensitization paradigms. D1- and D2-Cre mice were injected with a Cre-dependent calcium indicator (GCaMP6f) and implanted with a gradient index (GRIN) lens above the nucleus accumbens, and calcium activity was recorded using a head-mounted miniature microscope. Cocaine sensitization was measured after a classic repeated-cocaine regimen, and antipsychotic-psychostimulant cross-sensitization was measured by a single cocaine injection after chronic pre-treatment with haloperidol. Results: We found that both D1-MSN and D2-MSN populations are modulated by initial cocaine experience and further modulated during the expression of cocaine sensitization. A subpopulation of D1-MSNs displayed initial activation but reduced activity during the expression of sensitization. By contrast, the majority of D2-MSNs were suppressed by initial cocaine experience but became active during the expression of sensitization. Furthermore, the activity of D1- and D2-MSNs related bidirectionally to the observed behavioral responses to cocaine. Cross-sensitization following haloperidol treatment led to increased behavioral responses to psychostimulants. Current experiments are set up to investigate the neuronal responses of D1- and D2-MSNs during cross-sensitization between haloperidol and cocaine. Discussion: Cocaine sensitization leads to differential neuronal responses in D1- and D2-MSNs, and these responses are differentially correlated with the magnitude of the sensitized behavioral response. These results reveal important new insights into the neurobiological processes in the nucleus accumbens that underlie psychostimulant sensitization and provide an important new model for studying the pharmacology of antipsychotic effects on striatal function and its potential role in increasing the susceptibility of schizophrenic patients to developing drug addiction.
NASA Astrophysics Data System (ADS)
Koch, Axelle; Schröder, Natalie; Pohlmeier, Andreas; Garré, Sarah; Vanderborght, Jan; Javaux, Mathieu
2017-04-01
Measuring water extraction by plants would allow us to better understand root water uptake processes and how soil and plant properties affect them. Yet direct measurement of root water uptake is still challenging, and determining its distribution requires coupling experimentation and modelling. In this study, we investigated how 3D monitoring of tracer movement in a sand container with a lupine plant could inform us about the root water uptake process. A sand column (10 cm height, 5 cm inner diameter) planted with an 18-day-old white lupine was subjected to a tracer experiment with a chemically inert tracer (1 mmol/L Gd-DTPA2-) applied for 6 days. Then the tracer and water fluxes were stopped. The plume was monitored in 3-D for 7 days by Magnetic Resonance Imaging (Haber-Pohlmeier et al, unp). In addition, the breakthrough curve at the outlet was also measured. We used a biophysical 3-D soil-plant model, R-SWMS (Javaux et al, 2008), to extract information from this experiment. First, we ran a virtual experiment to check the assumption that the Gd concentration increase around roots is proportional to the soil water extracted during the same period. We also investigated whether this type of experiment helps discriminate between different root hydraulic properties with a sensitivity analysis. Then, we compared the experimental and simulated Gd concentration patterns. A preliminary (qualitative) assessment showed that measured Gd distribution patterns were better represented by the model at day 7, when the main driver of the concentration distribution was root rather than soil heterogeneity (which is not taken into account in the model). The main spatial and temporal features of the transport were adequately reproduced by the model, in particular during the last day. The distribution of the tracer was shown to be sensitive to the root hydraulic properties. To conclude, information about root water uptake distributions, and hence about root hydraulic properties, could be deduced from Gd concentration maps. Keywords: R-SWMS; Modelling; MRI; Root Water Uptake; Gadolinium
Visual detection following retinal damage: predictions of an inhomogeneous retino-cortical model
NASA Astrophysics Data System (ADS)
Arnow, Thomas L.; Geisler, Wilson S.
1996-04-01
A model of human visual detection performance has been developed, based on available anatomical and physiological data for the primate visual system. The inhomogeneous retino-cortical (IRC) model computes detection thresholds by comparing simulated neural responses to target patterns with responses to a uniform background of the same luminance. The model incorporates human ganglion cell sampling distributions, macaque monkey ganglion cell receptive field properties, macaque cortical cell contrast nonlinearities, and an optimal decision rule based on ideal observer theory. Spatial receptive field properties of cortical neurons were not included. Two parameters were allowed to vary while minimizing the squared error between predicted and observed thresholds. One parameter was decision efficiency; the other was the relative strength of the ganglion-cell center and surround. The latter was only allowed to vary within a small range consistent with known physiology. Contrast sensitivity was measured for sine-wave gratings as a function of spatial frequency, target size, and eccentricity. Contrast sensitivity was also measured for an airplane target as a function of target size, with and without artificial scotomas. The results of these experiments, as well as contrast sensitivity data from the literature, were compared to predictions of the IRC model. Predictions were reasonably good for grating and airplane targets.
A Model for Pain Behavior in Individuals with Intellectual and Developmental Disabilities
ERIC Educational Resources Information Center
Meir, Lotan; Strand, Liv Inger; Alice, Kvale
2012-01-01
The dearth of information on the pain experience of individuals with intellectual and developmental disabilities (IDD) calls for a more comprehensive understanding of pain in this population. The Non-Communicating Adults Pain Checklist (NCAPC) is an 18-item behavioral scale that was recently found to be reliable, valid, sensitive and clinically…
2013-01-01
the internal variability, such as the storm track or rainfall pattern (8). Arguments have emerged for the use of small domains in certain cases as... Sensitivity experiments were performed with the WRF-ARW over Meiningen, Germany, for two strong wintertime extratropical cyclones. These cases were chosen
Seidl, Rupert; Rammer, Werner
2017-07-01
Growing evidence suggests that climate change could substantially alter forest disturbances. Interactions between individual disturbance agents are a major component of disturbance regimes, yet how interactions contribute to their climate sensitivity remains largely unknown. Here, our aim was to assess the climate sensitivity of disturbance interactions, focusing on wind and bark beetle disturbances. We developed a process-based model of bark beetle disturbance, integrated into the dynamic forest landscape model iLand (already including a detailed model of wind disturbance). We evaluated the integrated model against observations from three wind events and a subsequent bark beetle outbreak, affecting 530.2 ha (3.8 %) of a mountain forest landscape in Austria between 2007 and 2014. Subsequently, we conducted a factorial experiment determining the effect of changes in climate variables on the area disturbed by wind and bark beetles separately and in combination. iLand was well able to reproduce observations with regard to area, temporal sequence, and spatial pattern of disturbance. The observed disturbance dynamics was strongly driven by interactions, with 64.3 % of the area disturbed attributed to interaction effects. A +4 °C warming increased the disturbed area by +264.7 % and the area-weighted mean patch size by +1794.3 %. Interactions were found to have a ten times higher sensitivity to temperature changes than main effects, considerably amplifying the climate sensitivity of the disturbance regime. Disturbance interactions are a key component of the forest disturbance regime. Neglecting interaction effects can lead to a substantial underestimation of the climate change sensitivity of disturbance regimes.
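The attribution of disturbed area to interactions can be illustrated with a toy factorial calculation like the one below, where the numbers are placeholders rather than the study's results.

```python
# Toy 2x2 factorial attribution: the excess of the combined run over the sum
# of the single-agent runs is attributed to the wind-beetle interaction.
wind_only = 120.0        # ha disturbed with only wind enabled (hypothetical)
beetle_only = 60.0       # ha disturbed with only bark beetles enabled (hypothetical)
combined = 500.0         # ha disturbed with both agents enabled (hypothetical)

interaction = combined - (wind_only + beetle_only)
share = interaction / combined
print(f"interaction effect: {interaction:.1f} ha ({share:.1%} of total)")
```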
Sensitivity of Pacific Cold Tongue and Double-ITCZ Bias to Convective Parameterization
NASA Astrophysics Data System (ADS)
Woelfle, M.; Bretherton, C. S.; Pritchard, M. S.; Yu, S.
2016-12-01
Many global climate models struggle to accurately simulate annual mean precipitation and sea surface temperature (SST) fields in the tropical Pacific basin. Precipitation biases are dominated by the double intertropical convergence zone (ITCZ) bias, whereby models exhibit precipitation maxima straddling the equator while only a single Northern Hemispheric maximum exists in observations. The major SST bias is the enhancement of the equatorial cold tongue. A series of coupled model simulations is used to investigate the sensitivity of the bias development to convective parameterization. Model components are initialized independently prior to coupling to allow analysis of the transient response of the system directly following coupling. These experiments show precipitation and SST patterns to be highly sensitive to convective parameterization. Simulations in which the deep convective parameterization is disabled, forcing all convection to be resolved by the shallow convection parameterization, showed a degradation in both the cold tongue and double-ITCZ biases as precipitation became focused into off-equatorial regions of local SST maxima. Simulations using superparameterization in place of traditional cloud parameterizations showed a reduced cold tongue bias at the expense of additional precipitation biases. The equatorial SST responses to changes in convective parameterization are driven by changes in near-equatorial zonal wind stress. The sensitivity of convection to SST is important in determining the precipitation and wind stress fields. However, differences in convective momentum transport also play a role. While no significant improvement is seen in these simulations of the double ITCZ, the system's sensitivity to these changes reaffirms that improved convective parameterizations may provide an avenue for improving simulations of tropical Pacific precipitation and SST.
Macro-spin modeling and experimental study of spin-orbit torque biased magnetic sensors
NASA Astrophysics Data System (ADS)
Xu, Yanjun; Yang, Yumeng; Luo, Ziyan; Xu, Baoxi; Wu, Yihong
2017-11-01
We report a systematic study of spin-orbit torque biased magnetic sensors based on NiFe/Pt bilayers through both macro-spin modeling and experiments. The simulation results show that it is possible to achieve a linear sensor with a dynamic range of 0.1-10 Oe, power consumption of 1 μW-1 mW, and sensitivity of 0.1-0.5 Ω/Oe. These characteristics can be controlled by varying the sensor dimensions and the current density in the Pt layer; the latter is in the range of 1 × 10^5-10^7 A/cm^2. Experimental results for fabricated sensors with selected sizes agree well with the simulation results. For a Wheatstone bridge sensor comprising four sensing elements, a sensitivity up to 0.548 Ω/Oe, linearity error below 6%, and detectivity of about 2.8 nT/√Hz were obtained. The simple structure and ultrathin thickness greatly facilitate the integration of these sensors for on-chip applications. As a proof-of-concept experiment, we demonstrate their application in detecting current flowing in an on-chip Cu wire.
Associative (not Hebbian) learning and the mirror neuron system.
Cooper, Richard P; Cook, Richard; Dickinson, Anthony; Heyes, Cecilia M
2013-04-12
The associative sequence learning (ASL) hypothesis suggests that sensorimotor experience plays an inductive role in the development of the mirror neuron system, and that it can play this crucial role because its effects are mediated by learning that is sensitive to both contingency and contiguity. The Hebbian hypothesis proposes that sensorimotor experience plays a facilitative role, and that its effects are mediated by learning that is sensitive only to contiguity. We tested the associative and Hebbian accounts by computational modelling of automatic imitation data indicating that MNS responsivity is reduced more by contingent and signalled than by non-contingent sensorimotor training (Cook et al. [7]). Supporting the associative account, we found that the reduction in automatic imitation could be reproduced by an existing interactive activation model of imitative compatibility when augmented with Rescorla-Wagner learning, but not with Hebbian or quasi-Hebbian learning. The work argues for an associative, but against a Hebbian, account of the effect of sensorimotor training on automatic imitation. We argue, by extension, that associative learning is potentially sufficient for MNS development. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
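For reference, a minimal sketch of the Rescorla-Wagner delta rule that was grafted onto the interactive activation model is given below; the trial structure and parameter values are illustrative, and the full imitation model from the paper is not reproduced.

```python
# Minimal Rescorla-Wagner sketch: associative strength is updated in proportion
# to the prediction error, so learning tracks contingency, not just contiguity
# (unlike a plain Hebbian rule).
def rescorla_wagner(trials, alpha=0.3, beta=1.0, lam=1.0):
    """trials: list of (cues_present, outcome_present) pairs."""
    V = {}                                           # associative strength per cue
    for cues, outcome in trials:
        total = sum(V.get(c, 0.0) for c in cues)     # summed prediction from all present cues
        error = (lam if outcome else 0.0) - total    # prediction error
        for c in cues:
            V[c] = V.get(c, 0.0) + alpha * beta * error
    return V

# Contingent training: the observed action ("obs") reliably predicts execution.
print(rescorla_wagner([({"obs"}, True)] * 20))
```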
Scaglione, John M.; Mueller, Don E.; Wagner, John C.
2014-12-01
One of the most important remaining challenges associated with expanded implementation of burnup credit in the United States is the validation of the depletion and criticality calculations used in the safety evaluation, in particular the availability and use of applicable measured data to support validation, especially for fission products (FPs). Applicants and regulatory reviewers have been constrained by both a scarcity of data and a lack of a clear technical basis or approach for use of the data. This paper describes a validation approach for commercial spent nuclear fuel (SNF) criticality safety (keff) evaluations based on best-available data and methods and applies the approach to representative SNF storage and transport configurations/conditions to demonstrate its usage and applicability, as well as to provide reference bias results. The criticality validation approach utilizes not only available laboratory critical experiment (LCE) data from the International Handbook of Evaluated Criticality Safety Benchmark Experiments and the French Haut Taux de Combustion program to support validation of the principal actinides, but also calculated sensitivities, nuclear data uncertainties, and the limited available FP LCE data to predict and verify individual biases for relevant minor actinides and FPs. The results demonstrate that (a) sufficient critical experiment data exist to adequately validate keff calculations via conventional validation approaches for the primary actinides, (b) sensitivity-based critical experiment selection is more appropriate for generating accurate application model bias and uncertainty, and (c) calculated sensitivities and nuclear data uncertainties can be used to generate conservative estimates of bias for minor actinides and FPs. Results based on SCALE 6.1 and the ENDF/B-VII.0 cross-section libraries indicate that a conservative estimate of the bias for the minor actinides and FPs is 1.5% of their worth within the application model. Finally, this paper provides a detailed description of the approach and its technical bases, describes the application of the approach for representative pressurized water reactor and boiling water reactor safety analysis models, and provides reference bias results based on the prerelease SCALE 6.1 code package and ENDF/B-VII nuclear cross-section data.
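The minor-actinide and fission-product bias treatment reduces to simple arithmetic once their reactivity worth in the application model is known; the sketch below uses a hypothetical worth value purely to show the calculation.

```python
# Tiny arithmetic sketch: the minor-actinide/FP bias is taken as 1.5% of their
# combined reactivity worth in the application model (worth value hypothetical).
fp_ma_worth = 0.025            # delta-k worth of minor actinides + FPs (hypothetical)
bias = 0.015 * fp_ma_worth     # conservative bias estimate per the approach above
print(f"additional bias: {bias:.5f} delta-k ({bias * 1e5:.0f} pcm)")
```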
FY17 Status Report on the Initial Development of a Constitutive Model for Grade 91 Steel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Messner, M. C.; Phan, V. -T.; Sham, T. -L.
Grade 91 is a candidate structural material for high-temperature advanced reactor applications. Existing ASME Section III, Subsection HB, Subpart B simplified design rules based on elastic analysis are set up as conservative screening tools, with the intent to supplement these screening rules with full inelastic analysis when required. The Code provides general guidelines for suitable inelastic models but does not provide constitutive model implementations. This report describes the development of an inelastic constitutive model for Gr. 91 steel aimed at fulfilling the ASME Code requirements and being included in a new Section III Code appendix, HBB-Z. A large database of over 300 experiments on Gr. 91 was collected and converted to a standard XML form. Five families of Gr. 91 material models were identified in the literature. Of these five, two are potentially suitable for use in the ASME Code. These two models were implemented and evaluated against the experimental database. Both models have deficiencies, so the report develops a framework for developing and calibrating an improved model. This required creating a new modeling method for representing changes in material rate sensitivity across the full ASME allowable temperature range for Gr. 91 structural components: room temperature to 650 °C. On top of this framework for rate sensitivity, the report describes calibrating a model for work hardening and softening in the material using genetic algorithm optimization. Future work will focus on improving this trial model by including the tension/compression asymmetry observed in experiments and necessary to capture material ratcheting under zero mean stress, and by improving the optimization and analysis framework.
Holland, Troy; Bhat, Sham; Marcy, Peter; ...
2017-08-25
Oxy-fired coal combustion is a promising potential carbon capture technology. Predictive computational fluid dynamics (CFD) simulations are valuable tools in evaluating and deploying oxyfuel and other carbon capture technologies, either as retrofit technologies or for new construction. However, accurate predictive combustor simulations require physically realistic submodels with low computational requirements. A recent sensitivity analysis of a detailed char conversion model (Char Conversion Kinetics, CCK) found thermal annealing to be an extremely sensitive submodel. In the present work, further analysis of the previous annealing model revealed significant disagreement with numerous datasets from experiments performed after that annealing model was developed. The annealing model was accordingly extended to reflect the experimentally observed reactivity loss due to the thermal annealing of a variety of coals under diverse char preparation conditions. The model extension was informed by a Bayesian calibration analysis. In addition, since oxyfuel conditions include extraordinarily high levels of CO2, the development of a first-ever CO2 reactivity loss model due to annealing is presented.
NASA Astrophysics Data System (ADS)
Kim, Go-Un; Seo, Kyong-Hwan
2018-01-01
A key physical factor in regulating the performance of Madden-Julian oscillation (MJO) simulation is examined using 26 climate model simulations from the World Meteorological Organization's Working Group for Numerical Experimentation/Global Energy and Water Cycle Experiment Atmospheric System Study (WGNE and MJO-Task Force/GASS) global model comparison project. For this, the intraseasonal moisture budget equation is analyzed and a simple, efficient physical quantity is developed. The results show that MJO skill is most sensitive to the vertically integrated intraseasonal zonal wind convergence (ZC). In particular, a specific threshold value of the strength of the ZC can be used to distinguish between good and poor models. An additional finding is that good models exhibit the correct simultaneous phase relationship between convection and the large-scale circulation. In poor models, however, the peak circulation response appears 3 days after peak rainfall, suggesting unfavorable coupling between convection and circulation. To improve the simulation of the MJO in climate models, we propose that this delay of the circulation response to convection needs to be corrected in the cumulus parameterization scheme.
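A rough sketch of the diagnostic, the vertically integrated intraseasonal zonal wind convergence, is given below with synthetic arrays; the 20-100-day band-pass filtering that defines the intraseasonal anomaly is omitted, and the grid spacing and pressure levels are assumptions.

```python
# Sketch of a vertically integrated zonal wind convergence diagnostic,
# computed from a synthetic (level, longitude) anomaly field.
import numpy as np

g = 9.81
plev = np.array([200., 300., 500., 700., 850., 1000.]) * 100.0   # pressure levels (Pa), top to surface
dx = 2.5 * 111.0e3                                               # ~2.5 deg longitude near the equator (m)

rng = np.random.default_rng(0)
u_anom = rng.normal(size=(plev.size, 144))                       # intraseasonal zonal wind anomaly u' (m/s)

# Mass-weighted vertical integral (1/g) * integral of u' dp, via the trapezoid rule.
layer = 0.5 * (u_anom[1:, :] + u_anom[:-1, :]) * np.diff(plev)[:, None]
u_int = layer.sum(axis=0) / g

zc = -np.gradient(u_int, dx)                                     # vertically integrated zonal wind convergence
print(zc.shape, float(zc.std()))
```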
The effect of menthol vapor on nasal sensitivity to chemical irritation.
Wise, Paul M; Preti, George; Eades, Jason; Wysocki, Charles J
2011-10-01
Among other effects, menthol added to cigarettes may modulate sensory response to cigarette smoke either by masking "harshness" or contributing to a desirable "impact." However, harshness and impact have been imprecisely defined and assessed using subjective measures. Thus, the current experiments used an objective measure of sensitivity to chemical irritation in the nose to test the hypothesis that menthol vapor modulates sensitivity to chemical irritation in the airways. Nasal irritation thresholds were measured for 2 model compounds (acetic acid and allyl isothiocyanate) using nasal lateralization. In this technique, participants simultaneously sniff clean air in one nostril and chemical vapor in the other and attempt to identify the stimulated nostril. People cannot lateralize based on smell alone but can do so when chemicals are strong enough to feel. In one condition, participants were pretreated by sniffing menthol vapor. In a control condition, participants were pretreated by sniffing an odorless blank (within-subjects design). Pretreatment with menthol vapor decreased sensitivity to nasal irritation from acetic acid (participants required higher concentrations to lateralize) but increased sensitivity to allyl isothiocyanate (lower concentrations were required). The current experiments provide objective evidence that menthol vapor can modulate sensitivity to chemical irritation in the upper airways in humans. Cigarette smoke is a complex mixture of chemicals and particulates, and further work will be needed to determine exactly how menthol modulates smoking sensation. A better understanding could lead to treatments tailored to help menthol smokers quit by replacing the sensation of mentholated cigarettes.
Maternal peanut exposure during pregnancy and lactation reduces peanut allergy risk in offspring.
López-Expósito, Iván; Song, Ying; Järvinen, Kirsi M; Srivastava, Kamal; Li, Xiu-Min
2009-11-01
Maternal allergy is believed to be a risk factor for peanut allergy (PNA) in children. However, there is no direct evidence of maternal transmission of PNA susceptibility, and it is unknown whether maternal peanut exposure affects the development of PNA in offspring. To investigate the influence of maternal PNA on offspring reactions to the first peanut exposure, and whether maternal low-dose peanut exposure during pregnancy and lactation influences these reactions and peanut sensitization in a murine model. Five-week-old offspring of PNA C3H/HeJ mothers (PNA-Ms) were challenged intragastrically with peanut (first exposure), and reactions were determined. In a subset of the experiment, PNA-Ms were fed a low dose of peanut (PNA-M/PN) or not fed peanut (PNA-M/none) during pregnancy and lactation. Their 5-week-old offspring were challenged intragastrically with peanut, and reactions were determined. In another subset of the experiment, offspring of PNA-M/PN or PNA-M/none were sensitized with peanut intragastrically for 6 weeks, and serum peanut-specific antibodies were determined. PNA-M offspring exhibited anaphylactic reactions at first exposure to peanut that were associated with peanut-specific IgG(1) levels and prevented by a platelet activation factor antagonist. In a subset experiment, PNA-M/PN offspring showed significantly reduced first-exposure peanut reactions, increased IgG(2a), and reduced mitogen-stimulated splenocyte cytokine production compared with PNA-M/none offspring. In an additional experiment, PNA-M/PN offspring showed reduction of peanut-specific IgE to active peanut sensitization. We show for the first time maternal transmission of susceptibility to first-exposure peanut reactions and active peanut sensitization. Low-dose peanut exposure during pregnancy and lactation reduced this risk.
NASA Astrophysics Data System (ADS)
Hertzog, David
2013-04-01
The worldwide, vibrant experimental program involving precision measurements with muons will be presented. Recent achievements in this field have greatly improved our knowledge of fundamental parameters: the Fermi constant (lifetime), the weak nucleon pseudoscalar coupling (μp capture), the Michel decay parameters, and the proton charge radius (Lamb shift). The charged-lepton-flavor-violating decay μ->eγ sets new physics limits. Updated Standard Model evaluations of the muon anomalous magnetic moment have increased the significance of the deviation with respect to experiment beyond 3σ. Next-generation experiments are mounting, with ambitious sensitivity goals approaching 10^-17 for the muon-to-electron search and a 0.14 ppm determination of g-2. The broad physics reach of these efforts involves the atomic, nuclear and particle physics communities. I will select from recent work and outline the most important efforts that are in preparation.
Shock wave treatment improves nerve regeneration in the rat.
Mense, Siegfried; Hoheisel, Ulrich
2013-05-01
The aims of the experiments were to: (1) determine whether low-energy shock wave treatment accelerates the recovery of muscle sensitivity and functionality after a nerve lesion; and (2) assess the effect of shock waves on the regeneration of injured nerve fibers. After compression of a muscle nerve in rats the effects of shock wave treatment on the sequelae of the lesion were tested. In non-anesthetized animals, pressure pain thresholds and exploratory activity were determined. The influence of the treatment on the distance of nerve regeneration was studied in immunohistochemical experiments. Both behavioral and immunohistochemical data show that shock wave treatment accelerates the recovery of muscle sensitivity and functionality and promotes regeneration of injured nerve fibers. Treatment with focused shock waves induces an improvement of nerve regeneration in a rodent model of nerve compression. Copyright © 2012 Wiley Periodicals, Inc.
Wang, Ping; Zhou, Ye; MacLaren, Stephan A.; ...
2015-11-06
Three- and two-dimensional numerical studies have been carried out to simulate recent counter-propagating shear flow experiments on the National Ignition Facility. A multi-physics, three-dimensional, time-dependent radiation hydrodynamics simulation code is used. Using a Reynolds-averaged Navier-Stokes (RANS) model, we show that the evolution of the mixing layer width obtained from the simulations agrees well with that measured in the experiments. A sensitivity study is conducted to illustrate a 3D geometrical effect that could confuse the measurement at late times if the energy drives from the two ends of the shock tube are asymmetric. Implications for future experiments are discussed.
An integrative formal model of motivation and decision making: The MGPM*.
Ballard, Timothy; Yeo, Gillian; Loft, Shayne; Vancouver, Jeffrey B; Neal, Andrew
2016-09-01
We develop and test an integrative formal model of motivation and decision making. The model, referred to as the extended multiple-goal pursuit model (MGPM*), is an integration of the multiple-goal pursuit model (Vancouver, Weinhardt, & Schmidt, 2010) and decision field theory (Busemeyer & Townsend, 1993). Simulations of the model generated predictions regarding the effects of goal type (approach vs. avoidance), risk, and time sensitivity on prioritization. We tested these predictions in an experiment in which participants pursued different combinations of approach and avoidance goals under different levels of risk. The empirical results were consistent with the predictions of the MGPM*. Specifically, participants pursuing 1 approach and 1 avoidance goal shifted priority from the approach to the avoidance goal over time. Among participants pursuing 2 approach goals, those with low time sensitivity prioritized the goal with the larger discrepancy, whereas those with high time sensitivity prioritized the goal with the smaller discrepancy. Participants pursuing 2 avoidance goals generally prioritized the goal with the smaller discrepancy. Finally, all of these effects became weaker as the level of risk increased. We used quantitative model comparison to show that the MGPM* explained the data better than the original multiple-goal pursuit model, and that the major extensions from the original model were justified. The MGPM* represents a step forward in the development of a general theory of decision making during multiple-goal pursuit. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Johns, Heather Marie; Lanier, Nicholas Edward; Kline, John L.; ...
2016-09-07
Here, we present synthetic transmission spectra generated with PrismSPECT utilizing both the ATBASE model and the Los Alamos opacity library (OPLIB) to evaluate whether an alternative choice in atomic data will impact modeling of experimental data from radiation transport experiments using Sc-doped aerogel foams (ScSi6O12 at 75 mg/cm^3 density). We have determined that in the 50-200 eV Te range there is a significant difference in the 1s-3p spectra, especially below 100 eV, and for Te = 200 eV above 5000 eV in photon energy. Examining synthetic spectra generated using OPLIB with 300 resolving power reveals spectral sensitivity to Te changes of ~3 eV.
NASA Astrophysics Data System (ADS)
Feldman, D.; Collins, W. D.; Wielicki, B. A.; Shea, Y.; Mlynczak, M. G.; Kuo, C.; Nguyen, N.
2017-12-01
Shortwave feedbacks are a persistent source of uncertainty for climate models and a large contributor to the diagnosed range of equilibrium climate sensitivity (ECS) for the international multi-model ensemble. The processes that contribute to these feedbacks affect top-of-atmosphere energetics and produce spectral signatures that may be time-evolving. We explore the value of such spectral signatures for providing an observational constraint on model ECS by simulating top-of-atmosphere shortwave reflectance spectra across much of the energetically relevant shortwave bandpass (300 to 2500 nm). We present centennial-length shortwave hyperspectral simulations from low, medium and high ECS models that reported to the CMIP5 archive as part of an Observing System Simulation Experiment (OSSE) in support of the CLimate Absolute Radiance and Refractivity Observatory (CLARREO). Our framework interfaces with CMIP5 archive results and is agnostic to the choice of model. We simulated spectra from the INM-CM4 model (ECS of 2.08 K/2xCO2), the MIROC5 model (ECS of 2.70 K/2xCO2), and the CSIRO Mk3-6-0 (ECS of 4.08 K/2xCO2) based on those models' integrations of the RCP8.5 scenario for the 21st century. This approach allows us to explore how perfect data records can exclude models of lower or higher climate sensitivity. We find that spectral channels covering visible and near-infrared water-vapor overtone bands can potentially exclude a low or high sensitivity model with under 15 years of absolutely calibrated data. These different spectral channels are sensitive to model cloud radiative effect and cloud height changes, respectively. These unprecedented calculations lay the groundwork for spectral simulations of perturbed-physics ensembles in order to identify those shortwave observations that can help narrow the range in shortwave model feedbacks and ultimately help reduce the stubbornly large range in model ECS.
Muniandy, Kalaivani; Sankar, Prabu Siva; Xiang, Benedict Lian Shi; Soo-Beng, Alan Khoo; Balakrishnan, Venugopal; Mohana-Kumaran, Nethia
2016-11-01
Spheroids have been shown to recapitulate the tumour in vivo with properties such as the tumour microenvironment, concentration gradients, and tumour phenotype. As such, they can serve as a platform for determining the growth and invasion behaviour of cancer cells as well as for drug sensitivity assays, capable of yielding results that are closer to what is observed in vivo than two-dimensional (2D) cell culture assays. This study focused on establishing a three-dimensional (3D) cell culture model using the Nasopharyngeal Carcinoma (NPC) cell line HK1 and analysing its growth and invasion phenotypes. The spheroids also serve as a model to elucidate their sensitivity to the chemotherapeutic drug Flavopiridol. The liquid overlay method was employed to generate the spheroids, which were embedded in a bovine collagen I matrix for observation of growth and invasion phenotypes. The HK1 cells formed compact spheroids within 72 hours. Our observations from the 3-day experiments revealed that the spheroids gradually grew and invaded into the collagen matrix, showing that the HK1 spheroids are capable of growth and invasion. Progressing from these experiments, the HK1 spheroids were employed to perform a drug sensitivity assay using the chemotherapeutic drug Flavopiridol. The drug had a dose-dependent inhibitory effect on spheroid growth and invasion.
Cao, D.; Boehly, T. R.; Gregor, M. C.; ...
2018-05-16
Using temporally shaped laser pulses, multiple shocks can be launched in direct-drive inertial confinement fusion implosion experiments to set the shell on a desired isentrope or adiabat. The velocity of the first shock and the times at which subsequent shocks catch up to it are measured through the VISAR diagnostic on OMEGA. Simulations reproduce these velocity and shock-merger time measurements when using laser pulses designed for setting mid-adiabat (α ~ 3) implosions, but agreement degrades for lower-adiabat (α ~ 1) designs. Several possibilities for this difference are studied: errors in placing the target at the center of irradiation (target offset), variations in energy between the different incident beams (power imbalance), and errors in modeling the laser energy coupled into the capsule. Simulation results indicate that shock timing is most sensitive to details of the density and temperature profiles in the coronal plasma, which influences the laser energy coupled into the target, and only marginally sensitive to target offset and beam power imbalance. A new technique under development to infer coronal profiles using x-ray self-emission imaging can be applied to the pulse shapes used in shock-timing experiments. In conclusion, this will help identify improved physics models to implement in codes and consequently enhance shock-timing predictive capability for low-adiabat pulses.
The influence of model resolution on ozone in industrial volatile organic compound plumes.
Henderson, Barron H; Jeffries, Harvey E; Kim, Byeong-Uk; Vizuete, William G
2010-09-01
Regions with concentrated petrochemical industrial activity (e.g., Houston or Baton Rouge) frequently experience large, localized releases of volatile organic compounds (VOCs). Aircraft measurements suggest these released VOCs create plumes with ozone (O3) production rates 2-5 times higher than typical urban conditions. Modeling studies found that simulating high O3 production requires a superfine (1-km) horizontal grid cell size. Compared with fine modeling (4-km), the superfine resolution increases the peak O3 concentration by as much as 46%. To understand this drastic O3 change, this study quantifies model processes for O3 and "odd oxygen" (Ox) in both resolutions. For the entire plume, the superfine resolution increases the maximum O3 concentration 3% but only decreases the maximum Ox concentration 0.2%. The two grid sizes produce approximately equal Ox mass but by different reaction pathways. Derived sensitivity to oxides of nitrogen (NOx) and VOC emissions suggests resolution-specific sensitivity to NOx and VOC emissions. Different sensitivity to emissions will result in different O3 responses to subsequently encountered emissions (within the city or downwind). Sensitivity of O3 to emission changes also results in different simulated O3 responses to the same control strategies. Sensitivity of O3 to NOx and VOC emission changes is attributed to the finer resolved Eulerian grid and finer resolved NOx emissions. Urban NOx concentration gradients are often caused by roadway mobile sources that would not typically be addressed with Plume-in-Grid models. This study shows that grid cell size (an artifact of modeling) influences simulated control strategies and could bias regulatory decisions. Understanding the dynamics of VOC plume dependence on grid size is the first step toward providing more detailed guidance for resolution. These results underscore VOC and NOx resolution interdependencies best addressed by finer resolution. On the basis of these results, the authors suggest a need for quantitative metrics for horizontal grid resolution in future model guidance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mueller, Don; Rearden, Bradley T; Reed, Davis Allan
2010-01-01
One of the challenges associated with implementation of burnup credit is the validation of criticality calculations used in the safety evaluation, in particular the availability and use of applicable critical experiment data. The purpose of the validation is to quantify the relationship between reality and calculated results. Validation and determination of bias and bias uncertainty require the identification of sets of critical experiments that are similar to the criticality safety models. A principal challenge for crediting fission products (FP) in a burnup credit safety evaluation is the limited availability of relevant FP critical experiments for bias and bias uncertainty determination. This paper provides an evaluation of the available critical experiments that include FPs, along with bounding, burnup-dependent estimates of FP biases generated by combining energy-dependent sensitivity data for a typical burnup credit application with the nuclear data uncertainty information distributed with SCALE 6. A method for determining separate bias and bias uncertainty values for individual FPs and illustrative results are presented. Finally, an FP bias calculation method based on data adjustment techniques and reactivity sensitivity coefficients calculated with the SCALE sensitivity/uncertainty tools, along with some typical results, is presented. Using the methods described in this paper, the cross-section bias for a representative high-capacity spent fuel cask associated with the ENDF/B-VII nuclear data for the 16 most important stable or near-stable FPs is predicted to be no greater than 2% of the total worth of the 16 FPs, or less than 0.13% Δk/k.
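The bounding bias estimates described above rest on first-order propagation of nuclear data uncertainty through energy-dependent sensitivity coefficients, commonly called the sandwich rule. The sketch below illustrates that combination in Python; the group structure, sensitivity profile, and covariance matrix are hypothetical placeholders, not the SCALE 6 data used in the paper.

```python
# Minimal sketch of the sandwich rule for propagating cross-section
# uncertainty through energy-dependent k-eff sensitivities.
# All numbers below are hypothetical placeholders, not SCALE 6 data.
import numpy as np

n_groups = 44                                    # e.g. a 44-group energy structure
rng = np.random.default_rng(0)

# Hypothetical k-eff sensitivity profile for one fission product, (dk/k)/(dsigma/sigma)
sensitivity = -1e-4 * rng.random(n_groups)

# Hypothetical relative covariance matrix for that nuclide's cross sections
rel_std = 0.05 * rng.random(n_groups)            # 0-5% relative standard deviations
corr = np.exp(-np.abs(np.subtract.outer(np.arange(n_groups),
                                        np.arange(n_groups))) / 10.0)
rel_cov = np.outer(rel_std, rel_std) * corr

# Sandwich rule: var(k)/k^2 = S * C * S^T
var_k = sensitivity @ rel_cov @ sensitivity
print(f"relative k uncertainty from this nuclide: {np.sqrt(var_k):.2e}")
```

Summing such contributions over the fission products of interest (including any cross-nuclide covariance) gives an aggregate bound of the kind quoted in the abstract.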
The Microminipig as an Animal Model for Influenza A Virus Infection.
Iwatsuki-Horimoto, Kiyoko; Nakajima, Noriko; Shibata, Masatoshi; Takahashi, Kenta; Sato, Yuko; Kiso, Maki; Yamayoshi, Seiya; Ito, Mutsumi; Enya, Satoko; Otake, Masayoshi; Kangawa, Akihisa; da Silva Lopes, Tiago Jose; Ito, Hirotaka; Hasegawa, Hideki; Kawaoka, Yoshihiro
2017-01-15
Pigs are considered a mixing vessel for the generation of novel pandemic influenza A viruses through reassortment because of their susceptibility to both avian and human influenza viruses. However, experiments to understand reassortment in pigs in detail have been limited because experiments with regular-sized pigs are difficult to perform. Miniature pigs have been used as an experimental animal model, but they are still large and require relatively large cages for housing. The microminipig is one of the smallest miniature pigs used for experiments. Introduced in 2010, microminipigs weigh around 10 kg at an early stage of maturity (6 to 7 months old) and are easy to handle. To evaluate the microminipig as an animal model for influenza A virus infection, we compared the receptor distribution of 10-week-old male pigs (Yorkshire Large White) and microminipigs. We found that both animals have SAα2,3Gal and SAα2,6Gal in their respiratory tracts, with similar distributions of both receptor types. We further found that the sensitivity of microminipigs to influenza A viruses was the same as that of larger miniature pigs. Our findings indicate that the microminipig could serve as a novel model animal for influenza A virus infection. The microminipig is one of the smallest miniature pigs in the world and is used as an experimental animal model for life science research. In this study, we evaluated the microminipig as a novel animal model for influenza A virus infection. The distribution of influenza virus receptors in the respiratory tract of the microminipig was similar to that of the pig, and the sensitivity of microminipigs to influenza A viruses was the same as that of miniature pigs. Our findings suggest that microminipigs represent a novel animal model for influenza A virus infection. Copyright © 2017 American Society for Microbiology.
NASA Astrophysics Data System (ADS)
Xue, L.; Newman, A. J.; Ikeda, K.; Rasmussen, R.; Clark, M. P.; Monaghan, A. J.
2016-12-01
A high-resolution (a 1.5 km grid spacing domain nested within a 4.5 km grid spacing domain) 10-year regional climate simulation over the entire Hawaiian archipelago is being conducted at the National Center for Atmospheric Research (NCAR) using the Weather Research and Forecasting (WRF) model version 3.7.1. Numerical sensitivity simulations of the Hawaiian Rainband Project (HaRP, a field experiment conducted from July to August 1990) showed that the simulated precipitation properties are sensitive to initial and lateral boundary conditions, sea surface temperature (SST), land surface models, vertical resolution and cloud droplet concentration. Validations of model-simulated statistics of the trade wind inversion, temperature, wind field, cloud cover, and precipitation over the islands against various observations from soundings, satellites, weather stations and rain gauges during the period from 2003 to 2012 will be presented at the meeting.
Sensitivity of geographic information system outputs to errors in remotely sensed data
NASA Technical Reports Server (NTRS)
Ramapriyan, H. K.; Boyd, R. K.; Gunther, F. J.; Lu, Y. C.
1981-01-01
The sensitivity of the outputs of a geographic information system (GIS) to errors in inputs derived from remotely sensed data (RSD) is investigated using a suitability model with per-cell decisions and a gridded geographic data base whose cells are larger than the RSD pixels. The process of preparing RSD as input to a GIS is analyzed, and the errors associated with classification and registration are examined. In the case of the model considered, it is found that the errors caused during classification and registration are partially compensated by the aggregation of pixels. The compensation is quantified by means of an analytical model, a Monte Carlo simulation, and experiments with Landsat data. The results show that error reductions of the order of 50% occur because of aggregation when 25 pixels of RSD are used per cell in the geographic data base.
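The compensation effect can be illustrated with a toy Monte Carlo in the spirit of the analysis above. The sketch below assumes homogeneous cells and independent per-pixel classification errors, so it overstates the reduction relative to the roughly 50% reported for the actual model; every number is illustrative.

```python
# Toy Monte Carlo: how aggregating 25 remotely sensed pixels into one
# geographic cell dampens per-pixel classification error when the per-cell
# decision is taken by majority class. Illustrative only; it ignores
# registration error and mixed cells, which the actual study includes.
import numpy as np

rng = np.random.default_rng(42)
n_cells = 100_000
pixels_per_cell = 25
p_misclassify = 0.15                  # hypothetical per-pixel classification error

# Each simulated cell is truly of class 1; pixels are misclassified independently
pixel_correct = rng.random((n_cells, pixels_per_cell)) > p_misclassify

cell_correct = pixel_correct.sum(axis=1) > pixels_per_cell / 2   # majority vote
cell_error = 1.0 - cell_correct.mean()

print(f"per-pixel error: {p_misclassify:.3f}")
print(f"per-cell error:  {cell_error:.5f}")
```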
NASA Technical Reports Server (NTRS)
Yung, C. S.; Lansing, F. L.
1983-01-01
A 37.85 cu m (10,000 gallons) per year (nominal) passive solar powered water distillation system was installed and is operational at the Venus Deep Space Station. The system replaced an old, electrically powered water distiller. The distilled water produced, with its high electrical resistivity, is used to cool the sensitive microwave equipment. A detailed thermal model was developed to simulate the performance of the distiller and study its sensitivity under varying environment and load conditions. The quasi-steady-state portion of the model is presented together with the formulas for the heat and mass transfer coefficients used. Initial results indicated that a daily water evaporation efficiency of 30% can be achieved. A comparison between a full-day performance simulation and actual field measurements showed good agreement between theory and experiment, verifying the model.
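As a rough plausibility check on the figures above, the nominal annual throughput combined with the quoted 30% daily evaporation efficiency implies a solar input of a few hundred megajoules per day. The short calculation below works through that arithmetic; the latent heat of vaporization is a standard textbook value, not a number from the report.

```python
# Back-of-envelope check using the nominal figures quoted in the abstract.
# The latent heat of vaporization (~2.26 MJ/kg) is an assumed textbook value.
ANNUAL_VOLUME_M3 = 37.85           # nominal yearly distilled-water production
DENSITY_KG_PER_M3 = 1000.0
LATENT_HEAT_MJ_PER_KG = 2.26       # assumed latent heat of vaporization of water
EFFICIENCY = 0.30                  # daily evaporation efficiency from the abstract

daily_mass_kg = ANNUAL_VOLUME_M3 * DENSITY_KG_PER_M3 / 365.0
evaporation_energy_mj = daily_mass_kg * LATENT_HEAT_MJ_PER_KG
solar_input_mj = evaporation_energy_mj / EFFICIENCY

print(f"daily output: {daily_mass_kg:.0f} kg")
print(f"evaporation energy: {evaporation_energy_mj:.0f} MJ/day")
print(f"implied solar input: {solar_input_mj:.0f} MJ/day (~{solar_input_mj / 3.6:.0f} kWh/day)")
```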
Modeling biogeochemical reactive transport in a fracture zone
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molinero, Jorge; Samper, Javier; Yang, Chan Bing; Zhang, Guoxiang
2005-01-14
A coupled model of groundwater flow, reactive solute transport and microbial processes for a fracture zone of the Äspö site in Sweden is presented. This is the model of the so-called Redox Zone Experiment, aimed at evaluating the effects of tunnel construction on the geochemical conditions prevailing in a fractured granite. It is found that a model accounting for microbially mediated geochemical processes is able to reproduce the unexpected measured increasing trends of dissolved sulfate and bicarbonate. The model is also useful for testing hypotheses regarding the role of microbial processes and evaluating the sensitivity of model results to changes in biochemical parameters.
Experimental constraint on quark electric dipole moments
NASA Astrophysics Data System (ADS)
Liu, Tianbo; Zhao, Zhiwen; Gao, Haiyan
2018-04-01
The electric dipole moments (EDMs) of nucleons are sensitive probes of additional CP violation sources beyond the standard model to account for the baryon number asymmetry of the universe. As a fundamental quantity of the nucleon structure, tensor charge is also a bridge that relates nucleon EDMs to quark EDMs. With a combination of nucleon EDM measurements and tensor charge extractions, we investigate the experimental constraint on quark EDMs, and its sensitivity to CP violation sources from new physics beyond the electroweak scale. We obtain the current limits on quark EDMs as 1.27 × 10⁻²⁴ e·cm for the up quark and 1.17 × 10⁻²⁴ e·cm for the down quark at the scale of 4 GeV². We also study the impact of future nucleon EDM and tensor charge measurements, and show that upcoming new experiments will improve the constraint on quark EDMs by about 3 orders of magnitude leading to a much more sensitive probe of new physics models.
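The role of the tensor charge as a bridge can be made explicit with the leading-order decomposition of the nucleon EDMs in terms of quark EDMs. The relation below is schematic (our notation; strange-quark and CP-odd gluonic contributions are neglected for illustration):

```latex
% Schematic leading-order decomposition of nucleon EDMs in terms of quark EDMs,
% weighted by the nucleon tensor charges g_T^q; strange-quark and gluonic
% CP-odd contributions are neglected in this illustration.
\begin{aligned}
  d_p &\simeq g_T^{u}\, d_u + g_T^{d}\, d_d, \\
  d_n &\simeq g_T^{d}\, d_u + g_T^{u}\, d_d .
\end{aligned}
```

Inverting these two relations with experimental bounds on d_p and d_n and extracted values of g_T^u and g_T^d yields individual quark-EDM limits of the kind quoted above.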
Impact of Albedo Contrast Between Cirrus and Boundary-Layer Clouds on Climate Sensitivity
NASA Technical Reports Server (NTRS)
Chou, Ming-Dah; Lindzen, R. S.; Hou, A. Y.; Lau, William K. M. (Technical Monitor)
2001-01-01
In assessing the iris effect suggested by Lindzen et al. (2001; hereafter LCH), Fu et al. (2001; hereafter FBH) found that the response of high-level clouds to the sea surface temperature had an effect of reducing the climate sensitivity to external radiative forcing, but the effect was not as strong as LCH found. This weaker reduction in climate sensitivity was due to the smaller contrasts in albedos and effective emitting temperatures between cirrus clouds and the neighboring regions. FBH specified the albedos and the outgoing longwave radiation (OLR) in the LCH 3.5-box radiative-convective model by requiring that the model radiation budgets at the top of the atmosphere be consistent with those inferred from the Earth Radiation Budget Experiment (ERBE). In point of fact, the constraint by radiation budgets alone is not sufficient for deriving the correct contrast in radiation properties between cirrus clouds and the neighboring regions, and the approach of FBH to specifying those properties is, we feel, inappropriate for assessing the iris effect.
Sensitivity of Asian Summer Monsoon precipitation to tropical sea surface temperature anomalies
NASA Astrophysics Data System (ADS)
Fan, Lei; Shin, Sang-Ik; Liu, Zhengyu; Liu, Qinyu
2016-10-01
Sensitivity of Asian Summer Monsoon (ASM) precipitation to tropical sea surface temperature (SST) anomalies was estimated from ensemble simulations of two atmospheric general circulation models (GCMs) with an array of idealized SST anomaly patch prescriptions. Consistent sensitivity patterns were obtained in both models. Sensitivity of Indian Summer Monsoon (ISM) precipitation to cooling in the East Pacific was much weaker than to that of the same magnitude in the local Indian-western Pacific, over which a meridional pattern of warm north and cold south was most instrumental in increasing ISM precipitation. This indicates that the strength of the ENSO-ISM relationship is due to the large-amplitude East Pacific SST anomaly rather than its sensitivity value. Sensitivity of the East Asian Summer Monsoon (EASM), represented by the Yangtze-Huai River Valley (YHRV, also known as the meiyu-baiu front) precipitation, is non-uniform across the Indian Ocean basin. YHRV precipitation was most sensitive to warm SST anomalies over the northern Indian Ocean and the South China Sea, whereas the southern Indian Ocean had the opposite effect. This implies that the strengthened EASM in the post-Niño year is attributable mainly to warming of the northern Indian Ocean. The corresponding physical links between these SST anomaly patterns and ASM precipitation were also discussed. The relevance of sensitivity maps was justified by the high correlation between sensitivity-map-based reconstructed time series using observed SST anomaly patterns and actual precipitation series derived from ensemble-mean atmospheric GCM runs with time-varying global SST prescriptions during the same period. The correlation results indicated that sensitivity maps derived from patch experiments were far superior to those based on regression methods.
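The reconstruction test described above amounts to projecting observed SST anomaly fields onto the patch-derived sensitivity map and area-weighting the sum. A minimal sketch of that projection is given below; the grid dimensions, field values, and variable names are placeholders rather than actual model output.

```python
# Minimal sketch of a sensitivity-map reconstruction: project SST anomaly
# fields onto a patch-derived sensitivity map to predict a monsoon
# precipitation index. All arrays are random placeholders.
import numpy as np

rng = np.random.default_rng(1)
nlat, nlon, ntime = 36, 72, 120          # hypothetical 5-degree grid, 120 months

sensitivity_map = rng.standard_normal((nlat, nlon))      # dP/dSST from patch runs
sst_anomaly = rng.standard_normal((ntime, nlat, nlon))   # observed SST anomalies

# Area weights (cosine of latitude) so each grid box counts by its true area
lats = np.deg2rad(np.linspace(-87.5, 87.5, nlat))
weights = np.cos(lats)[:, None] * np.ones((1, nlon))

# Linear reconstruction: precip'(t) = sum over grid of S(x) * w(x) * SST'(x, t)
reconstructed = np.einsum('yx,tyx->t', sensitivity_map * weights, sst_anomaly)
print(reconstructed.shape)                # (120,) reconstructed time series
```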
Search for dark photons using data from CRESST-II Phase 2
NASA Astrophysics Data System (ADS)
Gütlein, A.; Angloher, G.; Bento, A.; Bucci, C.; Canonica, L.; Defay, X.; Erb, A.; Feilitzsch, F. v.; Ferreiro Iachellini, N.; Gorla, P.; Hauff, D.; Jochum, J.; Kiefer, M.; Kluck, H.; Kraus, H.; Lanfranchi, J.-C.; Loebell, J.; Mancuso, M.; Münster, A.; Pagliarone, C.; Petricca, F.; Potzel, W.; Pröbst, F.; Puig, R.; Reindl, F.; Schäffner, K.; Schieck, J.; Schönert, S.; Seidel, W.; Stahlberg, M.; Stodolsky, L.; Strandhagen, C.; Strauss, R.; Tanzke, A.; Trinh Thi, H. H.; Türkoǧlu, C.; Uffinger, M.; Ulrich, A.; Usherov, I.; Wawoczny, S.; Willers, M.; Wüstrich, M.; Zöller, A.
2017-09-01
Understanding the nature and origin of dark matter is one of the most important challenges for modern particle physics. During the previous decade the sensitivities of direct dark matter searches have improved by several orders of magnitude. These experiments focus their work mainly on the search for dark-matter particles interacting with nuclei (e.g. Weakly Interacting Massive Particles, WIMPs). However, there exists a large variety of different candidates for dark-matter particles. One of these candidates, the so-called dark photon, is a long-lived vector boson with a kinetic mixing to the standard-model photon. In this work we present the preliminary results of our search for dark photons. Using data from the direct dark matter search CRESST-II Phase 2 we can improve the existing constraints for the kinetic mixing for dark-photon masses between 0.3 and 0.5 keV/c². In addition, we also present projected sensitivities for the next phases of the CRESST-III experiment showing great potential to improve the sensitivity for dark-photon masses below 1 keV.
DAEδALUS and dark matter detection
Kahn, Yonatan; Krnjaic, Gordan; Thaler, Jesse; ...
2015-03-05
Among laboratory probes of dark matter, fixed-target neutrino experiments are particularly well suited to search for light weakly coupled dark sectors. In this paper, we show that the DAEδALUS source setup (an 800 MeV proton beam impinging on a target of graphite and copper) can improve the present LSND bound on dark photon models by an order of magnitude over much of the accessible parameter space for light dark matter when paired with a suitable neutrino detector such as LENA. Interestingly, both DAEδALUS and LSND are sensitive to dark matter produced from off-shell dark photons. We show for the first time that LSND can be competitive with searches for visible dark photon decays and that fixed-target experiments have sensitivity to a much larger range of heavy dark photon masses than previously thought. We review the mechanism for dark matter production and detection through a dark photon mediator, discuss the beam-off and beam-on backgrounds, and present the sensitivity in dark photon kinetic mixing for both the DAEδALUS/LENA setup and LSND in both the on- and off-shell regimes.
Research on fiber Bragg grating heart sound sensing and wavelength demodulation method
NASA Astrophysics Data System (ADS)
Zhang, Cheng; Miao, Chang-Yun; Gao, Hua; Gan, Jing-Meng; Li, Hong-Qiang
2010-11-01
Heart sound carries a wealth of physiological and pathological information about the heart and blood vessels. Heart sound detection is an important method for assessing heart status and has important significance for the early diagnosis of cardiopathy. In order to improve sensitivity and reduce noise, a heart sound measurement method based on a fiber Bragg grating was investigated. Based on the vibration principle of a plane circular diaphragm, a fiber Bragg grating heart sound sensor structure was designed and a heart sound sensing mathematical model was established. A formula for the heart sound sensitivity was derived, and the theoretical sensitivity of the designed sensor is 957.11 pm/kPa. Based on the matched grating method, an experimental system was built, by which the shift of the reflected wavelength of the sensing grating was detected and the heart sound information was obtained. Experiments show that the designed sensor can detect heart sound and that the reflected wavelength varies over a range of about 70 pm. With a sampling frequency of 1 kHz, the heart sound waveform extracted using the db4 wavelet has the same characteristics as that recorded by a standard heart sound sensor.
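Under the linear small-deflection behaviour implied by the quoted sensitivity, the acoustic pressure corresponding to the observed wavelength excursion follows directly from Δλ = S·ΔP. The sketch below assumes that linear model with the theoretical sensitivity; the helper names are illustrative.

```python
# Sketch of the linear sensor model delta_lambda = S * delta_P, assuming the
# theoretical sensitivity quoted above. Helper names are illustrative.
S_PM_PER_KPA = 957.11            # theoretical sensitivity from the abstract (pm/kPa)

def wavelength_shift_pm(pressure_kpa: float) -> float:
    """Bragg wavelength shift (pm) for a given acoustic pressure (kPa)."""
    return S_PM_PER_KPA * pressure_kpa

def pressure_kpa(shift_pm: float) -> float:
    """Invert the linear model: acoustic pressure (kPa) from a measured shift (pm)."""
    return shift_pm / S_PM_PER_KPA

# The experiment reports roughly a 70 pm excursion of the reflected wavelength,
# which under this linear model corresponds to ~0.07 kPa of sound pressure.
print(f"{pressure_kpa(70.0):.3f} kPa")
```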
NASA Astrophysics Data System (ADS)
Fajber, R. A.; Kushner, P. J.; Laliberte, F. B.
2017-12-01
In the midlatitude atmosphere, baroclinic eddies are able to raise warm, moist air from the surface into the midtroposphere where it condenses and warms the atmosphere through latent heating. This coupling between dynamics and moist thermodynamics motivates using a conserved moist thermodynamic variable, such as the equivalent potential temperature, to study the midlatitude circulation and associated heat transport since it implicitly accounts for latent heating. When the equivalent potential temperature is used to zonally average the circulation, the moist isentropic circulation takes the form of a single cell in each hemisphere. By utilising the statistical transformed Eulerian mean (STEM) circulation we are able to parametrize the moist isentropic circulation in terms of second order dynamic and moist thermodynamic statistics. The functional dependence of the STEM allows us to analytically calculate functional derivatives that reveal the spatially varying sensitivity of the moist isentropic circulation to perturbations in different statistics. Using the STEM functional derivatives as sensitivity kernels we interpret changes in the moist isentropic circulation from two experiments: surface heating in an idealised moist model, and a climate change scenario in a comprehensive atmospheric general circulation model. In both cases we find that the changes in the moist isentropic circulation are well predicted by the functional sensitivities, and that the total heat transport is more sensitive to changes in dynamical processes driving local changes in poleward heat transport than it is to thermodynamic and/or radiative processes driving changes to the distribution of equivalent potential temperature.
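The prediction step described above is a first-order functional expansion: the change in the moist isentropic circulation (or heat transport) Ψ is approximated by integrating the functional derivatives against the perturbations of the STEM input statistics. Schematically (our notation, not necessarily the authors'):

```latex
% First-order functional expansion of a STEM-derived quantity Psi;
% the s_i are the second-order dynamic and moist thermodynamic input statistics.
\Delta \Psi \;\approx\; \sum_i \int \frac{\delta \Psi}{\delta s_i(\phi, p)}\,
  \Delta s_i(\phi, p)\; \mathrm{d}\phi\, \mathrm{d}p
```

The kernels δΨ/δs_i then act as the spatially varying sensitivity maps used to attribute the circulation changes in the two experiments.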
The Alcohol Sensitivity Questionnaire: Evidence for Construct Validity
Fleming, Kimberly A.; Bartholow, Bruce D.; Hilgard, Joseph B.; McCarthy, Denis M.; O’Neill, Susan E.; Steinley, Douglas; Sher, Kenneth J.
2016-01-01
Background: Variability in sensitivity to the acute effects of alcohol is an important risk factor for the development of alcohol use disorder (AUD). The most commonly used retrospective self-report measure of sensitivity, the Self-Rating of the Effects of Alcohol form (SRE), queries a limited number of alcohol effects and relies on respondents' ability to recall experiences that might have occurred in the distant past. Here, we investigated the construct validity of an alternative measure that queries a larger number of alcohol effects, the Alcohol Sensitivity Questionnaire (ASQ), and compared it to the SRE in predicting momentary subjective responses to an acute dose of alcohol. Method: Healthy young adults (N = 423) completed the SRE and the ASQ and then were randomly assigned to consume either alcohol or a placebo beverage (between-subjects manipulation). Stimulation and sedation (Biphasic Alcohol Effects Scale) and subjective intoxication were measured multiple times after drinking. Results: Hierarchical linear models showed that the ASQ reliably predicted each of these outcomes following alcohol but not placebo consumption, provided unique prediction beyond that associated with differences in recent alcohol involvement, and was preferred over the SRE (in terms of model fit) in direct model comparisons of stimulation and sedation. Conclusions: The ASQ compared favorably with the better-known SRE in predicting increased stimulation and reduced sedation following an acute alcohol challenge. The ASQ appears to be a valid self-report measure of alcohol sensitivity and therefore holds promise for identifying individuals at risk for AUD and related problems. PMID:27012527
Can tonne-scale direct detection experiments discover nuclear dark matter?
NASA Astrophysics Data System (ADS)
Butcher, Alistair; Kirk, Russell; Monroe, Jocelyn; West, Stephen M.
2017-10-01
Models of nuclear dark matter propose that the dark sector contains large composite states consisting of dark nucleons in analogy to Standard Model nuclei. We examine the direct detection phenomenology of a particular class of nuclear dark matter model at the current generation of tonne-scale liquid noble experiments, in particular DEAP-3600 and XENON1T. In our chosen nuclear dark matter scenario distinctive features arise in the recoil energy spectra due to the non-point-like nature of the composite dark matter state. We calculate the number of events required to distinguish these spectra from those of a standard point-like WIMP state with a decaying exponential recoil spectrum. In the most favourable regions of nuclear dark matter parameter space, we find that a few tens of events are needed to distinguish nuclear dark matter from WIMPs at the 3 σ level in a single experiment. Given the total exposure time of DEAP-3600 and XENON1T we find that at best a 2 σ distinction is possible by these experiments individually, while 3 σ sensitivity is reached for a range of parameters by the combination of the two experiments. We show that future upgrades of these experiments have potential to distinguish a large range of nuclear dark matter models from that of a WIMP at greater than 3 σ.
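One way to see where an event count of a few tens can come from is to compute the expected log-likelihood ratio per event between two normalized recoil spectra and translate it into a Gaussian significance. The sketch below does this with purely illustrative spectral shapes; it is not the authors' statistical procedure.

```python
# Toy estimate of the number of recoil events needed to separate two
# normalized recoil spectra at ~3 sigma, via the expected log-likelihood
# ratio (KL divergence) per event. Spectral shapes are illustrative only.
import numpy as np

E = np.linspace(5.0, 100.0, 500)                 # recoil energy grid (keV)
dE = E[1] - E[0]

wimp = np.exp(-E / 20.0)                         # smooth, exponential-like spectrum
ndm = np.exp(-E / 20.0) * (1.0 + 0.5 * np.cos(E / 8.0))   # spectrum with composite-state features

wimp /= wimp.sum() * dE                          # normalize to unit probability
ndm /= ndm.sum() * dE

# Expected log-likelihood ratio per event if nature follows the structured spectrum
kl_per_event = np.sum(ndm * np.log(ndm / wimp)) * dE

# Rough Gaussian translation: significance ~ sqrt(2 * N * KL)  =>  N for 3 sigma
n_events_3sigma = 9.0 / (2.0 * kl_per_event)
print(f"~{n_events_3sigma:.0f} events for a 3-sigma separation (toy spectra)")
```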
Development of a superconducting position sensor for the Satellite Test of the Equivalence Principle
NASA Astrophysics Data System (ADS)
Clavier, Odile Helene
The Satellite Test of the Equivalence Principle (STEP) is a joint NASA/ESA mission that proposes to measure the differential acceleration of two cylindrical test masses orbiting the earth in a drag-free satellite to a precision of 10⁻¹⁸ g. Such an experiment would conceptually reproduce Galileo's tower of Pisa experiment with a much longer time of fall and greatly reduced disturbances. The superconducting test masses are constrained in all degrees of freedom except their axial direction (the sensitive axis) using superconducting bearings. The STEP accelerometer measures the differential position of the masses in their sensitive direction using superconducting inductive pickup coils coupled to an extremely sensitive magnetometer called a DC-SQUID (Superconducting Quantum Interference Device). Position sensor development involves the design, manufacture and calibration of pickup coils that will meet the acceleration sensitivity requirement. Acceleration sensitivity depends on both the displacement sensitivity and the stiffness of the position sensor. The stiffness must be kept small while maintaining stability of the accelerometer. Using a model for the inductance of the pickup coils versus displacement of the test masses, a computer simulation calculates the sensitivity and stiffness of the accelerometer in its axial direction. This simulation produced a design of pickup coils for the four STEP accelerometers. Manufacture of the pickup coils involves standard photolithography techniques modified for superconducting thin films. A single-turn pickup coil was manufactured from thin-film niobium and shown to be superconducting. A low-temperature apparatus was developed with a precision position sensor to measure the displacement of a superconducting plate (acting as a mock test mass) facing the coil. The position sensor was designed to detect five degrees of freedom so that coupling could be taken into account when measuring the translation of the plate relative to the coil. The inductance was measured using a DC-SQUID coupled to the pickup coil. The experimental results agree with the model used in the simulation, thereby validating the concept used for the design. The STEP program now has the confidence necessary to design and manufacture a position sensor for the flight accelerometer.
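The axial sensitivity and stiffness referred to above follow from any assumed inductance-versus-gap model by flux conservation: with trapped flux Φ in the superconducting circuit, the magnetic energy is Φ²/(2L(x)), its second derivative with respect to the gap is the magnetic stiffness, and dL/dx sets the displacement sensitivity. The sketch below uses a placeholder L(x) and placeholder numbers, not the STEP coil geometry.

```python
# Sketch: derive displacement sensitivity dL/dx and magnetic stiffness d2E/dx2
# for a pickup coil facing a superconducting plate, assuming flux conservation.
# The inductance model L(x) and all numbers are hypothetical placeholders.
PHI = 1e-6             # assumed trapped flux in the superconducting loop (Wb)
L0 = 1e-6              # assumed coil self-inductance far from the plate (H)
X0 = 100e-6            # assumed screening length scale (m)

def inductance(x: float) -> float:
    """Assumed pickup-coil inductance versus gap x to the superconducting plate."""
    return L0 * x / (x + X0)

def energy(x: float) -> float:
    """Magnetic energy of a flux-conserving loop: E = Phi^2 / (2 L(x))."""
    return PHI**2 / (2.0 * inductance(x))

x = 50e-6                                    # nominal gap (m)
dx = 1e-9                                    # step for numerical derivatives

dL_dx = (inductance(x + dx) - inductance(x - dx)) / (2 * dx)            # sensitivity
stiffness = (energy(x + dx) - 2 * energy(x) + energy(x - dx)) / dx**2   # magnetic stiffness

print(f"dL/dx = {dL_dx:.3e} H/m, magnetic stiffness = {stiffness:.3e} N/m")
```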
Climate Modeling and Causal Identification for Sea Ice Predictability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunke, Elizabeth Clare; Urrego Blanco, Jorge Rolando; Urban, Nathan Mark
This project aims to better understand causes of ongoing changes in the Arctic climate system, particularly as decreasing sea ice trends have been observed in recent decades and are expected to continue in the future. As part of the Sea Ice Prediction Network, a multi-agency effort to improve sea ice prediction products on seasonal-to-interannual time scales, our team is studying the sensitivity of sea ice to a collection of physical processes and feedback mechanisms in the coupled climate system. During 2017 we completed a set of climate model simulations using the fully coupled ACME-HiLAT model. The simulations consisted of experiments in which cloud, sea ice, and air-ocean turbulent exchange parameters previously identified as important for driving output uncertainty in climate models were perturbed to account for parameter uncertainty in simulated climate variables. We conducted a sensitivity study to these parameters, which built upon a previous study we made for standalone simulations (Urrego-Blanco et al., 2016, 2017). Using the results from the ensemble of coupled simulations, we are examining robust relationships between climate variables that emerge across the experiments. We are also using causal discovery techniques to identify interaction pathways among climate variables which can help identify physical mechanisms and provide guidance in predictability studies. This work further builds on and leverages the large ensemble of standalone sea ice simulations produced in our previous w14_seaice project.