NASA Astrophysics Data System (ADS)
Shiklomanov, A. N.; Cowdery, E.; Dietze, M.
2016-12-01
Recent syntheses of global trait databases have revealed that although the functional diversity among plant species is immense, this diversity is constrained by trade-offs between plant strategies. However, the use of among-trait and trait-environment correlations at the global scale for both qualitative ecological inference and land surface modeling has several important caveats. An alternative approach is to preserve the existing PFT-based model structure while using statistical analyses to account for uncertainty and variability in model parameters. In this study, we used a hierarchical Bayesian model of foliar traits in the TRY database to test the following hypotheses: (1) Leveraging the covariance between foliar traits will significantly constrain our uncertainty in their distributions; and (2) Among-trait covariance patterns are significantly different among and within PFTs, reflecting differences in trade-offs associated with biome-level evolution, site-level community assembly, and individual-level ecophysiological acclimation. We found that among-trait covariance significantly constrained estimates of trait means, and the additional information provided by across-PFT covariance led to more constraint still, especially for traits and PFTs with low sample sizes. We also found that among-trait correlations were highly variable among PFTs, and were generally inconsistent with correlations within PFTs. The hierarchical multivariate framework developed in our study can readily be enhanced with additional levels of hierarchy to account for geographic, species, and individual-level variability.
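The covariance-based constraint described above can be illustrated with a minimal numerical sketch: conditioning a multivariate normal on observed traits shrinks the uncertainty of an unobserved trait relative to its marginal spread. The trait set, means, and covariance values below are hypothetical stand-ins, not numbers from the TRY analysis.

```python
# Minimal sketch: among-trait covariance constrains a sparsely sampled trait.
# All means and (co)variances are hypothetical illustrative values.
import numpy as np

# Hypothetical mean and covariance for three foliar traits within one PFT.
mu = np.array([15.0, 2.0, 0.15])
Sigma = np.array([[9.00, 1.080, 0.150],
                  [1.08, 0.360, 0.024],
                  [0.15, 0.024, 0.010]])

# Partition: trait 0 unobserved, traits 1-2 observed.
obs = np.array([2.4, 0.18])
S11, S12 = Sigma[0, 0], Sigma[0, 1:]
S22 = Sigma[1:, 1:]

w = np.linalg.solve(S22, S12)            # regression weights on observed traits
cond_mean = mu[0] + w @ (obs - mu[1:])   # conditional (constrained) mean
cond_var = S11 - w @ S12                 # conditional variance < marginal variance

print(f"marginal sd: {np.sqrt(S11):.2f}, conditional sd: {np.sqrt(cond_var):.2f}")
```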
Astrophysical Model Selection in Gravitational Wave Astronomy
NASA Technical Reports Server (NTRS)
Adams, Matthew R.; Cornish, Neil J.; Littenberg, Tyson B.
2012-01-01
Theoretical studies in gravitational wave astronomy have mostly focused on the information that can be extracted from individual detections, such as the mass of a binary system and its location in space. Here we consider how the information from multiple detections can be used to constrain astrophysical population models. This seemingly simple problem is made challenging by the high dimensionality and high degree of correlation in the parameter spaces that describe the signals, and by the complexity of the astrophysical models, which can also depend on a large number of parameters, some of which might not be directly constrained by the observations. We present a method for constraining population models using a hierarchical Bayesian modeling approach which simultaneously infers the source parameters and population model and provides the joint probability distributions for both. We illustrate this approach by considering the constraints that can be placed on population models for galactic white dwarf binaries using a future space-based gravitational wave detector. We find that a mission that is able to resolve approximately 5000 of the shortest period binaries will be able to constrain the population model parameters, including the chirp mass distribution and a characteristic galaxy disk radius to within a few percent. This compares favorably to existing bounds, where electromagnetic observations of stars in the galaxy constrain disk radii to within 20%.
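A minimal sketch of the hierarchical idea, under strong simplifying assumptions (Gaussian toy population, Gaussian measurement noise, so per-source parameters marginalize analytically): roughly 5000 resolved events then pin down a population-mean hyperparameter to well under a percent. All numbers are illustrative, not the paper's galactic white-dwarf-binary model.

```python
# Hierarchical toy: constrain a population hyperparameter (the mean) from
# many noisy per-event estimates. mu_true, pop_sd, meas_sd are invented.
import numpy as np

rng = np.random.default_rng(0)
mu_true, pop_sd, meas_sd = 0.45, 0.10, 0.05
n_events = 5000
true_vals = rng.normal(mu_true, pop_sd, n_events)
obs_vals = true_vals + rng.normal(0.0, meas_sd, n_events)

# Gaussian population x Gaussian noise: each event's likelihood for the
# hyperparameter mu is N(obs; mu, pop_sd^2 + meas_sd^2).
mu_grid = np.linspace(0.40, 0.50, 401)
var = pop_sd**2 + meas_sd**2
loglike = -0.5 * ((obs_vals[:, None] - mu_grid)**2 / var).sum(axis=0)
post = np.exp(loglike - loglike.max())
post /= np.trapz(post, mu_grid)

mean = np.trapz(mu_grid * post, mu_grid)
sd = np.sqrt(np.trapz((mu_grid - mean)**2 * post, mu_grid))
print(f"mu = {mean:.4f} +/- {sd:.4f}  ({100 * sd / mean:.2f}% constraint)")
```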
Minimal models from W-constrained hierarchies via the Kontsevich-Miwa transform
NASA Astrophysics Data System (ADS)
Gato-Rivera, B.; Semikhatov, A. M.
1992-08-01
A direct relation between the conformal formalism for 2D quantum gravity and the W-constrained KP hierarchy is found, without the need to invoke intermediate matrix model technology. The Kontsevich-Miwa transform of the KP hierarchy is used to establish an identification between W constraints on the KP tau function and decoupling equations corresponding to Virasoro null vectors. The Kontsevich-Miwa transform maps the W(l)-constrained KP hierarchy to the (p′, p) minimal model, with the tau function being given by the correlator of a product of (dressed) (l, 1) [or (1, l)] operators, provided the Miwa parameter n_i and the free parameter (an abstract bc spin) present in the constraint are expressed through the ratio p′/p and the level l.
Stimulus Dependence of Correlated Variability across Cortical Areas
Cohen, Marlene R.
2016-01-01
The way that correlated trial-to-trial variability between pairs of neurons in the same brain area (termed spike count or noise correlation, rSC) depends on stimulus or task conditions can constrain models of cortical circuits and of the computations performed by networks of neurons (Cohen and Kohn, 2011). In visual cortex, rSC tends not to depend on stimulus properties (Kohn and Smith, 2005; Huang and Lisberger, 2009) but does depend on cognitive factors like visual attention (Cohen and Maunsell, 2009; Mitchell et al., 2009). However, any visual stimulus or perceptual decision engages neurons across multiple visual areas, and the way that information from multiple areas is combined to guide perception is unknown. To gain insight into these issues, we recorded simultaneously from neurons in two areas of visual cortex (primary visual cortex, V1, and the middle temporal area, MT) while rhesus monkeys viewed different visual stimuli in different attention conditions. We found that correlations between neurons in different areas depend on stimulus and attention conditions in very different ways than do correlations within an area. Correlations across, but not within, areas depend on stimulus direction and the presence of a second stimulus, and attention has opposite effects on correlations within and across areas. This observed pattern of cross-area correlations is predicted by a normalization model where MT units sum V1 inputs that are passed through a divisive nonlinearity. Together, our results provide insight into how neurons in different areas interact and constrain models of the neural computations performed across cortical areas. SIGNIFICANCE STATEMENT Correlations in the responses of pairs of neurons within the same cortical area have been a subject of growing interest in systems neuroscience. However, correlated variability between different cortical areas is likely just as important. We recorded simultaneously from neurons in primary visual cortex and the middle temporal area while rhesus monkeys viewed different visual stimuli in different attention conditions. We found that correlations between neurons in different areas depend on stimulus and attention conditions in very different ways than do correlations within an area. The observed pattern of cross-area correlations was predicted by a simple normalization model. Our results provide insight into how neurons in different areas interact and constrain models of the neural computations performed across cortical areas. PMID:27413163
NASA Astrophysics Data System (ADS)
Guenthner, William R.; Reiners, Peter W.; Drake, Henrik; Tillberg, Mikael
2017-07-01
Craton cores far from plate boundaries have traditionally been viewed as stable features that experience minimal vertical motion over 100-1000 Ma time scales. Here we show that the Fennoscandian Shield in southeastern Sweden experienced several episodes of burial and exhumation from 1800 Ma to the present. Apatite, titanite, and zircon (U-Th)/He ages from surface samples and drill cores constrain the long-term, low-temperature history of the Laxemar region. Single grain titanite and zircon (U-Th)/He ages are negatively correlated (104-838 Ma for zircon and 160-945 Ma for titanite) with effective uranium (eU = U + 0.235 × Th), a measurement proportional to radiation damage. Apatite ages are 102-258 Ma and are positively correlated with eU. These correlations are interpreted with damage-diffusivity models, and the modeled zircon He age-eU correlations constrain multiple episodes of heating and cooling from 1800 Ma to the present, which we interpret in the context of foreland basin systems related to the Neoproterozoic Sveconorwegian and Paleozoic Caledonian orogens. Inverse time-temperature models constrain an average burial temperature of 217°C during the Sveconorwegian, achieved between 944 Ma and 851 Ma, and 154°C during the Caledonian, achieved between 366 Ma and 224 Ma. Subsequent cooling to near-surface temperatures in both cases could be related to long-term exhumation caused by either postorogenic collapse or mantle dynamics related to the final assembly of Rodinia and Pangaea. Our titanite He age-eU correlations cannot currently be interpreted in the same fashion; however, this study represents one of the first examples of a damage-diffusivity relationship in this system, which deserves further research attention.
Random versus maximum entropy models of neural population activity
NASA Astrophysics Data System (ADS)
Ferrari, Ulisse; Obuchi, Tomoyuki; Mora, Thierry
2017-04-01
The principle of maximum entropy provides a useful method for inferring statistical mechanics models from observations in correlated systems, and is widely used in a variety of fields where accurate data are available. While the assumptions underlying maximum entropy are intuitive and appealing, its adequacy for describing complex empirical data has been little studied in comparison to alternative approaches. Here, data from the collective spiking activity of retinal neurons is reanalyzed. The accuracy of the maximum entropy distribution constrained by mean firing rates and pairwise correlations is compared to a random ensemble of distributions constrained by the same observables. For most of the tested networks, maximum entropy approximates the true distribution better than the typical or mean distribution from that ensemble. This advantage improves with population size, with groups as small as eight being almost always better described by maximum entropy. Failure of maximum entropy to outperform random models is found to be associated with strong correlations in the population.
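A sketch of the pairwise maximum-entropy (Ising) fit on which the comparison is built, assuming a population small enough (n = 8, matching the group size the abstract highlights) that all 2^n states can be enumerated exactly; the synthetic spike data, learning rate, and iteration count are illustrative.

```python
# Fit a pairwise maximum-entropy model to binary spike words by gradient
# ascent on the log-likelihood, matching mean rates and pairwise moments.
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 8
data = (rng.random((5000, n)) < 0.2).astype(float)   # synthetic spike words

iu = np.triu_indices(n, 1)
f_data = np.concatenate([data.mean(0),
                         (data.T @ data / len(data))[iu]])

states = np.array(list(itertools.product([0, 1], repeat=n)), float)
pairs = states[:, iu[0]] * states[:, iu[1]]

def model_moments(theta):
    h, J = theta[:n], theta[n:]
    E = states @ h + pairs @ J
    p = np.exp(E - E.max())
    p /= p.sum()
    return np.concatenate([p @ states, p @ pairs])

theta = np.zeros(f_data.size)           # fields h, then couplings J
for _ in range(5000):                   # gradient of avg log-likelihood
    theta += 0.2 * (f_data - model_moments(theta))

print("max moment mismatch:", np.abs(f_data - model_moments(theta)).max())
```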
Unbiased estimates of galaxy scaling relations from photometric redshift surveys
NASA Astrophysics Data System (ADS)
Rossi, Graziano; Sheth, Ravi K.
2008-06-01
Many physical properties of galaxies correlate with one another, and these correlations are often used to constrain galaxy formation models. Such correlations include the colour-magnitude relation, the luminosity-size relation, the fundamental plane, etc. However, the transformation from observable (e.g. angular size, apparent brightness) to physical quantity (physical size, luminosity) is often distance dependent. Noise in the distance estimate will lead to biased estimates of these correlations, thus compromising the ability of photometric redshift surveys to constrain galaxy formation models. We describe two methods which can remove this bias. One is a generalization of the Vmax method, and the other is a maximum-likelihood approach. We illustrate their effectiveness by studying the size-luminosity relation in a mock catalogue, although both methods can be applied to other scaling relations as well. We show that if one simply uses photometric redshifts one obtains a biased relation; our methods correct for this bias and recover the true relation.
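For orientation, here is a toy version of the classic 1/Vmax estimator that the paper generalizes, assuming Euclidean geometry, a sharp flux limit, and error-free distances; the luminosity range, flux limit, and survey depth are invented.

```python
# Classic 1/Vmax sketch: each detected galaxy contributes the inverse of
# the maximum volume in which it would still pass the flux limit.
import numpy as np

rng = np.random.default_rng(2)
L = 10**rng.uniform(8, 11, 20000)        # toy luminosities (arbitrary units)
d = 1000 * rng.random(20000)**(1 / 3)    # uniform in volume, d < 1000
flux = L / (4 * np.pi * d**2)
f_lim = 0.03
sel = flux > f_lim                        # flux-limited sample

d_max = np.minimum(np.sqrt(L[sel] / (4 * np.pi * f_lim)), 1000.0)
v_max = (4 / 3) * np.pi * d_max**3

bins = np.logspace(8, 11, 13)
phi, _ = np.histogram(L[sel], bins=bins, weights=1.0 / v_max)
print(phi / np.diff(np.log10(bins)))      # number density per dex
```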
The Holometer: An instrument to probe Planckian quantum geometry
Chou, Aaron; Glass, Henry; Gustafson, H. Richard; ...
2017-02-28
This paper describes the Fermilab Holometer, an instrument for measuring correlations of position variations over a four-dimensional volume of space-time. The apparatus consists of two co-located, but independent and isolated, 40 m power-recycled Michelson interferometers, whose outputs are cross-correlated to 25 MHz. The data are sensitive to correlations of differential position across the apparatus over a broad band of frequencies up to and exceeding the inverse light crossing time, 7.6 MHz. A noise model constrained by diagnostic and environmental data distinguishes among physical origins of measured correlations, and is used to verify shot-noise-limited performance. These features allow searches for exotic quantum correlations that depart from classical trajectories at spacelike separations, with a strain noise power spectral density sensitivity smaller than the Planck time. As a result, the Holometer in current and future configurations is projected to provide precision tests of a wide class of models of quantum geometry at the Planck scale, beyond those already constrained by currently operating gravitational wave observatories.
CPMC-Lab: A MATLAB package for Constrained Path Monte Carlo calculations
NASA Astrophysics Data System (ADS)
Nguyen, Huy; Shi, Hao; Xu, Jie; Zhang, Shiwei
2014-12-01
We describe CPMC-Lab, a MATLAB program for the constrained-path and phaseless auxiliary-field Monte Carlo methods. These methods have allowed applications ranging from the study of strongly correlated models, such as the Hubbard model, to ab initio calculations in molecules and solids. The present package implements the full ground-state constrained-path Monte Carlo (CPMC) method in MATLAB with a graphical interface, using the Hubbard model as an example. The package can perform calculations in finite supercells in any dimension, under periodic or twist boundary conditions. Importance sampling and all other algorithmic details of a total energy calculation are included and illustrated. This open-source tool allows users to experiment with various model and run parameters and visualize the results. It provides a direct and interactive environment to learn the method and study the code with minimal overhead for setup. Furthermore, the package can be easily generalized for auxiliary-field quantum Monte Carlo (AFQMC) calculations in many other models for correlated electron systems, and can serve as a template for developing a production code for AFQMC total energy calculations in real materials. Several illustrative studies are carried out in one- and two-dimensional lattices on total energy, kinetic energy, potential energy, and charge and spin gaps.
Effect of resource constraints on intersimilar coupled networks.
Shai, S; Dobson, S
2012-12-01
Most real-world networks do not live in isolation but are often coupled together within a larger system. Recent studies have shown that intersimilarity between coupled networks increases the connectivity of the overall system. However, unlike connected nodes in a single network, coupled nodes often share resources, like time, energy, and memory, which can impede flow processes through contention when intersimilarly coupled. We study a model of a constrained susceptible-infected-recovered (SIR) process on a system consisting of two random networks sharing the same set of nodes, where nodes are limited to interact with (and therefore infect) a maximum number of neighbors at each epidemic time step. We obtain that, in agreement with previous studies, when no limit exists (regular SIR model), positively correlated (intersimilar) coupling results in a lower epidemic threshold than negatively correlated (interdissimilar) coupling. However, in the case of the constrained SIR model, the obtained epidemic threshold is lower with negatively correlated coupling. The latter finding differentiates our work from previous studies and provides another step towards revealing the qualitative differences between single and coupled networks.
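A sketch of the constrained SIR process on two coupled Erdős-Rényi networks sharing one node set: each infected node contacts at most `cap` neighbors drawn from the union of its two neighborhoods per time step. Network size, edge probability, infection probability, and the cap are illustrative, and no intersimilar/interdissimilar degree correlation is imposed in this toy.

```python
# Constrained SIR on two coupled random networks sharing the same nodes.
import random

random.seed(3)
N, p_edge, p_inf, cap = 2000, 0.002, 0.3, 2

def random_graph(n, p):
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

net_a, net_b = random_graph(N, p_edge), random_graph(N, p_edge)
status = ["S"] * N
for seed_node in random.sample(range(N), 5):
    status[seed_node] = "I"

while any(s == "I" for s in status):
    newly_infected = []
    for i in range(N):
        if status[i] != "I":
            continue
        nbrs = list(net_a[i] | net_b[i])     # union of coupled neighborhoods
        random.shuffle(nbrs)
        for j in nbrs[:cap]:                 # resource constraint: <= cap contacts
            if status[j] == "S" and random.random() < p_inf:
                newly_infected.append(j)
        status[i] = "R"                      # unit infectious period
    for j in newly_infected:
        if status[j] == "S":
            status[j] = "I"

print("final epidemic size:", sum(s == "R" for s in status) / N)
```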
Constrained Total Energy Expenditure and Metabolic Adaptation to Physical Activity in Adult Humans.
Pontzer, Herman; Durazo-Arvizu, Ramon; Dugas, Lara R; Plange-Rhule, Jacob; Bovet, Pascal; Forrester, Terrence E; Lambert, Estelle V; Cooper, Richard S; Schoeller, Dale A; Luke, Amy
2016-02-08
Current obesity prevention strategies recommend increasing daily physical activity, assuming that increased activity will lead to corresponding increases in total energy expenditure and prevent or reverse energy imbalance and weight gain [1-3]. Such Additive total energy expenditure models are supported by exercise intervention and accelerometry studies reporting positive correlations between physical activity and total energy expenditure [4] but are challenged by ecological studies in humans and other species showing that more active populations do not have higher total energy expenditure [5-8]. Here we tested a Constrained total energy expenditure model, in which total energy expenditure increases with physical activity at low activity levels but plateaus at higher activity levels as the body adapts to maintain total energy expenditure within a narrow range. We compared total energy expenditure, measured using doubly labeled water, against physical activity, measured using accelerometry, for a large (n = 332) sample of adults living in five populations [9]. After adjusting for body size and composition, total energy expenditure was positively correlated with physical activity, but the relationship was markedly stronger over the lower range of physical activity. For subjects in the upper range of physical activity, total energy expenditure plateaued, supporting a Constrained total energy expenditure model. Body fat percentage and activity intensity appear to modulate the metabolic response to physical activity. Models of energy balance employed in public health [1-3] should be revised to better reflect the constrained nature of total energy expenditure and the complex effects of physical activity on metabolic physiology.
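The Additive-versus-Constrained comparison amounts to fitting a line against a line-with-plateau. A sketch on synthetic data, with the breakpoint found by grid search; all numbers are invented, not the study's doubly-labeled-water measurements.

```python
# Fit Additive (linear) vs Constrained (linear-then-plateau) TEE models.
import numpy as np

rng = np.random.default_rng(4)
pa = rng.uniform(0, 100, 332)                        # activity (toy units)
tee = 2500 + 8.0 * np.minimum(pa, 60) + rng.normal(0, 80, 332)

def sse_constrained(bp):
    x = np.minimum(pa, bp)                           # plateau above breakpoint
    A = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(A, tee, rcond=None)[1][0]

bps = np.linspace(20, 90, 71)
best = bps[np.argmin([sse_constrained(b) for b in bps])]

A_lin = np.column_stack([np.ones_like(pa), pa])
sse_additive = np.linalg.lstsq(A_lin, tee, rcond=None)[1][0]
print(f"plateau at PA ~ {best:.0f}; SSE constrained {sse_constrained(best):.0f} "
      f"vs additive {sse_additive:.0f}")
```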
The added value of remote sensing products in constraining hydrological models
NASA Astrophysics Data System (ADS)
Nijzink, Remko C.; Almeida, Susana; Pechlivanidis, Ilias; Capell, René; Gustafsson, David; Arheimer, Berit; Freer, Jim; Han, Dawei; Wagener, Thorsten; Sleziak, Patrik; Parajka, Juraj; Savenije, Hubert; Hrachowitz, Markus
2017-04-01
The calibration of a hydrological model still depends on the availability of streamflow data, even though additional sources of information (e.g. remotely sensed data products) have become more widely available. In this research, the model parameters of four different conceptual hydrological models (HYPE, HYMOD, TUW, FLEX) were constrained with remotely sensed products. The models were applied over 27 catchments across Europe to cover a wide range of climates, vegetation and landscapes. The fluxes and states of the models were correlated with the relevant products (e.g. MOD10A snow with modelled snow states), after which new a posteriori parameter distributions were determined based on a weighting procedure using conditional probabilities. Briefly, each parameter was weighted with the coefficient of determination of the relevant regression between modelled states/fluxes and products. In this way, final feasible parameter sets were derived without the use of discharge time series. Initial results show that improvements in model performance, with regard to streamflow simulations, are obtained when the models are constrained with a set of remotely sensed products simultaneously. In addition, we present a more extensive analysis to assess a model's ability to reproduce a set of hydrological signatures, such as rising limb density or peak distribution. Eventually, this research will enhance our understanding and recommendations in the use of remotely sensed products for constraining conceptual hydrological models and improving predictive capability, especially for data-sparse regions.
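A sketch of the weighting procedure on a toy linear-reservoir model: Monte Carlo parameter sets are weighted by the coefficient of determination between a modelled state and a synthetic "remotely sensed" proxy, yielding an a posteriori distribution without discharge data. The model and product below are placeholders, not HYPE/HYMOD/TUW/FLEX or MOD10A.

```python
# Weight parameter sets by r^2 between modelled storage and a noisy proxy.
import numpy as np

rng = np.random.default_rng(5)
rain = rng.gamma(0.5, 4.0, 365)
product = np.convolve(rain, np.exp(-np.arange(30) / 7.0), "full")[:365]
product += rng.normal(0, 0.5, 365)             # "observed" storage proxy

def bucket_storage(k):
    s, out = 0.0, np.empty(365)
    for t in range(365):
        s = s + rain[t] - k * s                # linear reservoir
        out[t] = s
    return out

params = rng.uniform(0.01, 0.5, 500)           # Monte Carlo recession constants
weights = np.array([np.corrcoef(bucket_storage(k), product)[0, 1]**2
                    for k in params])
weights /= weights.sum()
print("posterior mean k:", params @ weights)
```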
NASA Astrophysics Data System (ADS)
Sherrington, David; Davison, Lexie; Buhot, Arnaud; Garrahan, Juan P.
2002-02-01
We report a study of a series of simple model systems with only non-interacting Hamiltonians, and hence simple equilibrium thermodynamics, but with constrained dynamics of a type initially suggested by foams and idealized covalent glasses. We demonstrate that macroscopic dynamical features characteristic of real and more complex model glasses, such as two-time decays in energy and auto-correlation functions, arise from the dynamics, and we explain them qualitatively and quantitatively in terms of annihilation-diffusion concepts and theory. The comparison is with strong glasses. We also consider fluctuation-dissipation relations and demonstrate subtleties of interpretation. We find no breakdown of the fluctuation-dissipation theorem (FDT) when the correct normalization is chosen.
Empirical models of Jupiter's interior from Juno data. Moment of inertia and tidal Love number k2
NASA Astrophysics Data System (ADS)
Ni, Dongdong
2018-05-01
Context. The Juno spacecraft has significantly improved the accuracy of gravitational harmonic coefficients J4, J6 and J8 during its first two perijoves. However, there are still differences in the interior model predictions of core mass and envelope metallicity because of the uncertainties in the hydrogen-helium equations of state. New theoretical approaches or observational data are hence required in order to further constrain the interior models of Jupiter. A well constrained interior model of Jupiter is helpful for understanding not only the dynamic flows in the interior, but also the formation history of giant planets. Aims: We present the radial density profiles of Jupiter fitted to the Juno gravity field observations. Also, we aim to investigate our ability to constrain the core properties of Jupiter using its moment of inertia and tidal Love number k2 which could be accessible by the Juno spacecraft. Methods: In this work, the radial density profile was constrained by the Juno gravity field data within the empirical two-layer model in which the equations of state are not needed as an input model parameter. Different two-layer models are constructed in terms of core properties. The dependence of the calculated moment of inertia and tidal Love number k2 on the core properties was investigated in order to discern their abilities to further constrain the internal structure of Jupiter. Results: The calculated normalized moment of inertia (NMOI) ranges from 0.2749 to 0.2762, in reasonable agreement with the other predictions. There is a good correlation between the NMOI value and the core properties including masses and radii. Therefore, measurements of NMOI by Juno can be used to constrain both the core mass and size of Jupiter's two-layer interior models. For the tidal Love number k2, the degeneracy of k2 is found and analyzed within the two-layer interior model. In spite of this, measurements of k2 can still be used to further constrain the core mass and size of Jupiter's two-layer interior models.
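For intuition about the two-layer empirical model, here is a sketch computing the normalized moment of inertia C/(MR²) for a constant-density core plus constant-density envelope, with the envelope density set by mass conservation. The core radius and density are arbitrary toy inputs, not fitted values from the paper.

```python
# NMOI of a two-layer planet: constant-density core + envelope.
import numpy as np

R = 6.9911e7                       # Jupiter mean radius, m
M = 1.898e27                       # Jupiter mass, kg

def two_layer_nmoi(r_core, rho_core):
    v_core = (4 / 3) * np.pi * r_core**3
    v_env = (4 / 3) * np.pi * (R**3 - r_core**3)
    rho_env = (M - rho_core * v_core) / v_env   # mass conservation
    # I = (8*pi/15) * sum over shells of rho * (r_out^5 - r_in^5)
    I = (8 * np.pi / 15) * (rho_core * r_core**5
                            + rho_env * (R**5 - r_core**5))
    return I / (M * R**2)

print(two_layer_nmoi(0.5 * R, 5000.0))   # ~0.28 for these toy inputs
```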
Constrained range expansion and climate change assessments
Yohay Carmel; Curtis H. Flather
2006-01-01
Modeling the future distribution of keystone species has proved to be an important approach to assessing the potential ecological consequences of climate change (Loehle and LeBlanc 1996; Hansen et al. 2001). Predictions of range shifts are typically based on empirical models derived from simple correlative relationships between climatic characteristics of occupied and...
2dFLenS and KiDS: determining source redshift distributions with cross-correlations
NASA Astrophysics Data System (ADS)
Johnson, Andrew; Blake, Chris; Amon, Alexandra; Erben, Thomas; Glazebrook, Karl; Harnois-Deraps, Joachim; Heymans, Catherine; Hildebrandt, Hendrik; Joudaki, Shahab; Klaes, Dominik; Kuijken, Konrad; Lidman, Chris; Marin, Felipe A.; McFarland, John; Morrison, Christopher B.; Parkinson, David; Poole, Gregory B.; Radovich, Mario; Wolf, Christian
2017-03-01
We develop a statistical estimator to infer the redshift probability distribution of a photometric sample of galaxies from its angular cross-correlation in redshift bins with an overlapping spectroscopic sample. This estimator is a minimum-variance weighted quadratic function of the data: a quadratic estimator. This extends and modifies the methodology presented by McQuinn & White. The derived source redshift distribution is degenerate with the source galaxy bias, which must be constrained via additional assumptions. We apply this estimator to constrain source galaxy redshift distributions in the Kilo-Degree imaging survey through cross-correlation with the spectroscopic 2-degree Field Lensing Survey, presenting results first as a binned step-wise distribution in the range z < 0.8, and then building a continuous distribution using a Gaussian process model. We demonstrate the robustness of our methodology using mock catalogues constructed from N-body simulations, and comparisons with other techniques for inferring the redshift distribution.
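A sketch of the clustering-redshift logic underlying the estimator (far simpler than the quadratic estimator itself): in each spectroscopic bin the cross-correlation amplitude is proportional to the photometric n(z) times bias and growth factors, lumped here into an assumed kernel. As the abstract notes, the bias term must be fixed by assumption; all numbers are synthetic.

```python
# Recover a binned n(z) from cross-correlation amplitudes, toy version.
import numpy as np

rng = np.random.default_rng(6)
z = np.linspace(0.05, 0.75, 8)                # spec-z bin centres
n_true = np.exp(-0.5 * ((z - 0.4) / 0.12)**2)
n_true /= n_true.sum()

kernel = 1.0 / (1 + z)                         # assumed bias*growth kernel
w_cross = n_true * kernel + rng.normal(0, 0.004, z.size)   # "measured"

n_hat = np.clip(w_cross / kernel, 0, None)     # invert, enforce n(z) >= 0
n_hat /= n_hat.sum()
print(np.round(n_hat, 3))
print(np.round(n_true, 3))
```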
NASA Astrophysics Data System (ADS)
Jensen, Daniel; Wasserman, Adam; Baczewski, Andrew
The construction of approximations to the exchange-correlation potential for warm dense matter (WDM) is a topic of significant recent interest. In this work, we study the inverse problem of Kohn-Sham (KS) DFT as a means of guiding functional design at zero temperature and in WDM. Whereas the forward problem solves the KS equations to produce a density from a specified exchange-correlation potential, the inverse problem seeks to construct the exchange-correlation potential from specified densities. These two problems require different computational methods and convergence criteria despite sharing the same mathematical equations. We present two new inversion methods based on constrained variational and PDE-constrained optimization methods. We adapt these methods to finite temperature calculations to reveal the exchange-correlation potential's temperature dependence in WDM-relevant conditions. The different inversion methods presented are applied to both non-interacting and interacting model systems for comparison. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94.
NASA Astrophysics Data System (ADS)
Li, Duo; Liu, Yajing
2017-04-01
Along-strike segmentation of slow-slip events (SSEs) and nonvolcanic tremors in Cascadia may reflect heterogeneities of the subducting slab or overlying continental lithosphere. However, the nature behind this segmentation is not fully understood. We develop a 3-D model for episodic SSEs in northern and central Cascadia, incorporating both seismological and gravitational observations to constrain the heterogeneities in the megathrust fault properties. Six years of automatically detected tremors are used to constrain the rate-state friction parameters. The effective normal stress at SSE depths is constrained by along-margin free-air and Bouguer gravity anomalies. The along-strike variation in the long-term plate convergence rate is also taken into consideration. Simulation results show five segments of ~Mw 6.0 SSEs spontaneously appear along strike, correlated with the distribution of tremor epicenters. Modeled SSE recurrence intervals are equally comparable to GPS observations using both types of gravity anomaly constraints. However, the model constrained by the free-air anomaly does a better job in reproducing the cumulative slip as well as more consistent surface displacements with GPS observations. The modeled along-strike segmentation represents the averaged slip release over many SSE cycles, rather than permanent barriers. Individual slow-slip events can still propagate across the boundaries, which may cause interactions between adjacent SSEs, as observed in time-dependent GPS inversions. In addition, the moment-duration scaling is sensitive to the selection of velocity criteria for determining when SSEs occur. Hence, the detection ability of the current GPS network should be considered in the interpretation of slow earthquake source parameter scaling relations.
NASA Astrophysics Data System (ADS)
Nerney, E. G.; Bagenal, F.; Yoshioka, K.; Schmidt, C.
2017-12-01
Io emits volcanic gases into space at a rate of about a ton per second. The gases become ionized and trapped in Jupiter's strong magnetic field, forming a torus of plasma that emits 2 terawatts of UV emissions. In recent work re-analyzing UV emissions observed by Voyager, Galileo, & Cassini, we found plasma conditions consistent with a physical chemistry model with a neutral source of dissociated sulfur dioxide from Io (Nerney et al., 2017). In further analysis of UV observations from JAXA's Hisaki mission (using our spectral emission model), we constrain the torus composition with ground-based observations. The physical chemistry model (adapted from Delamere et al., 2005) is then used to match derived plasma conditions. We correlate the oxygen to sulfur ratio of the neutral source with volcanic eruptions to understand the change in magnetospheric plasma conditions. Our goal is to better understand and constrain both the temporal and spatial variability of the flow of mass and energy from Io's volcanic atmosphere to Jupiter's dynamic magnetosphere.
Three-Point Correlations in the COBE DMR 2 Year Anisotropy Maps
NASA Technical Reports Server (NTRS)
Hinshaw, G.; Banday, A. J.; Bennett, C. L.; Gorski, K. M.; Kogut, A.
1995-01-01
We compute the three-point temperature correlation function of the COBE Differential Microwave Radiometer (DMR) 2 year sky maps to search for evidence of non-Gaussian temperature fluctuations. We detect three-point correlations in our sky with a substantially higher signal-to-noise ratio than from the first-year data. However, the magnitude of the signal is consistent with the level of cosmic variance expected from Gaussian fluctuations, even when the low-order multipole moments, up to l = 9, are filtered from the data. These results do not strongly constrain most existing models of structure formation, but the absence of intrinsic three-point correlations on large angular scales is an important consistency test for such models.
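A sketch of a pseudo-collapsed three-point estimator on a toy one-dimensional "sky": C3(θ) = ⟨T_i² T_j⟩ averaged over pairs at fixed separation. For a Gaussian map this statistic is consistent with zero within sample variance, which is the null test the paper performs (real spherical maps, the DMR beam, and the galactic cut are not modeled here).

```python
# Pseudo-collapsed three-point function on a toy ring of pixels.
import numpy as np

rng = np.random.default_rng(7)
npix = 512
T = rng.normal(0, 1, npix)            # toy Gaussian map (ring topology)

def c3(T, lag):
    # <T_i^2 T_{i+lag}> averaged around the ring
    return np.mean(T**2 * np.roll(T, lag))

lags = np.arange(1, 64)
vals = np.array([c3(T, lag) for lag in lags])
print("mean |C3| over lags:", np.abs(vals).mean())
```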
NASA Astrophysics Data System (ADS)
Zhu, H.
2017-12-01
Recently, seismologists observed increasing seismicity in North Texas and Oklahoma. Based on seismic observations and other geophysical measurements, some studies suggested possible links between the increasing seismicity and wastewater injection during unconventional oil and gas exploration. To better monitor seismic events and investigate their mechanisms, we need an accurate 3D crustal wavespeed model for North Texas and Oklahoma. Considering the uneven distribution of earthquakes in this region, seismic tomography with local earthquake records has difficulty achieving good illumination. To overcome this limitation, in this study, ambient noise cross-correlation functions are used to constrain subsurface variations in wavespeeds. I use adjoint tomography to iteratively fit frequency-dependent phase differences between observed and predicted band-limited Green's functions. The spectral-element method is used to numerically calculate the band-limited Green's functions and the adjoint method is used to calculate misfit gradients with respect to wavespeeds. 25 preconditioned conjugate gradient iterations are used to update model parameters and minimize data misfits. Features in the new crustal model M25 correlate with geological units in the study region, including the Llano uplift, the Anadarko basin and the Ouachita orogenic front. In addition, these seismic anomalies correlate with gravity and magnetic observations. This new model can be used to better constrain earthquake source parameters in North Texas and Oklahoma, such as epicenter location and moment tensor solutions, which are important for investigating potential relations between seismicity and unconventional oil and gas exploration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saide, Pablo E.; Peterson, David A.; de Silva, Arlindo
We couple airborne, ground-based, and satellite observations; conduct regional simulations; and develop and apply an inversion technique to constrain hourly smoke emissions from the Rim Fire, the third largest observed in California, USA. Emissions constrained with multiplatform data show notable nocturnal enhancements (sometimes over a factor of 20), correlate better with daily burned area data, and are a factor of 2–4 higher than a priori estimates, highlighting the need for improved characterization of diurnal profiles and day-to-day variability when modeling extreme fires. Constraining only with satellite data results in smaller enhancements mainly due to missing retrievals near the emissions source, suggesting that top-down emission estimates for these events could be underestimated and a multiplatform approach is required to resolve them. Predictions driven by emissions constrained with multiplatform data present significant variations in downwind air quality and in aerosol feedback on meteorology, emphasizing the need for improved emissions estimates during exceptional events.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Tianxing; Lin, Hai-Qing; Gubernatis, James E.
2015-09-01
By using the constrained-phase quantum Monte Carlo method, we performed a systematic study of the pairing correlations in the ground state of the doped Kane-Mele-Hubbard model on a honeycomb lattice. We find that pairing correlations with d + id symmetry dominate close to half filling, but pairing correlations with p + ip symmetry dominate as hole doping moves the system below three-quarters filling. We correlate these behaviors of the pairing correlations with the topology of the Fermi surfaces of the non-interacting problem. We also find that the effective pairing correlation is enhanced greatly as the interaction increases, and these superconducting correlations are robust against varying the spin-orbit coupling strength. Finally, our numerical results suggest a possible way to realize spin triplet superconductivity in doped honeycomb-like materials or ultracold atoms in optical traps.
Cosmological constraints on extended Galileon models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Felice, Antonio De; Tsujikawa, Shinji, E-mail: antoniod@nu.ac.th, E-mail: shinji@rs.kagu.tus.ac.jp
2012-03-01
The extended Galileon models possess tracker solutions with de Sitter attractors along which the dark energy equation of state is constant during the matter-dominated epoch, i.e. w_DE = −1 − s, where s is a positive constant. Even with this phantom equation of state there are viable parameter spaces in which the ghosts and Laplacian instabilities are absent. Using the observational data of the supernovae type Ia, the cosmic microwave background (CMB), and baryon acoustic oscillations, we place constraints on the tracker solutions at the background level and find that the parameter s is constrained to be s = 0.034^{+0.327}_{−0.034} (95% CL) in the flat Universe. In order to break the degeneracy between the models we also study the evolution of cosmological density perturbations relevant to the large-scale structure (LSS) and the Integrated-Sachs-Wolfe (ISW) effect in CMB. We show that, depending on the model parameters, the LSS and the ISW effect is either positively or negatively correlated. It is then possible to constrain viable parameter spaces further from the observational data of the ISW-LSS cross-correlation as well as from the matter power spectrum.
An infinite set of Ward identities for adiabatic modes in cosmology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hinterbichler, Kurt; Hui, Lam; Khoury, Justin, E-mail: khinterbichler@perimeterinstitute.ca, E-mail: lh399@columbia.edu, E-mail: jkhoury@sas.upenn.edu
2014-01-01
We show that the correlation functions of any single-field cosmological model with constant growing-modes are constrained by an infinite number of novel consistency relations, which relate (N + 1)-point correlation functions with a soft-momentum scalar or tensor mode to a symmetry transformation on N-point correlation functions of hard-momentum modes. We derive these consistency relations from Ward identities for an infinite tower of non-linearly realized global symmetries governing scalar and tensor perturbations. These symmetries can be labeled by an integer n. At each order n, the consistency relations constrain — completely for n = 0, 1, and partially for n ≥ 2 — the q^n behavior of the soft limits. The identities at n = 0 recover Maldacena's original consistency relations for a soft scalar and tensor mode, n = 1 gives the recently-discovered conformal consistency relations, and the identities for n ≥ 2 are new. As a check, we verify directly that the n = 2 identity is satisfied by known correlation functions in slow-roll inflation.
NASA Astrophysics Data System (ADS)
Zhu, Hejun
2018-04-01
Recently, seismologists observed increasing seismicity in North Texas and Oklahoma. Based on seismic observations and other geophysical measurements, numerous studies suggested links between the increasing seismicity and wastewater injection during unconventional oil and gas exploration. To better monitor seismic events and investigate their triggering mechanisms, we need an accurate 3D crustal wavespeed model for the study region. Considering the uneven distribution of earthquakes in this area, seismic tomography with local earthquake records has difficulty achieving even illumination. To overcome this limitation, in this study, ambient noise cross-correlation functions are used to constrain subsurface variations in wavespeeds. I use adjoint tomography to iteratively fit frequency-dependent phase differences between observed and predicted band-limited Green's functions. The spectral-element method is used to numerically calculate the band-limited Green's functions and the adjoint method is used to calculate misfit gradients with respect to wavespeeds. Twenty-five preconditioned conjugate gradient iterations are used to update model parameters and minimize data misfits. Features in the new crustal model TO25 correlate well with geological provinces in the study region, including the Llano uplift, the Anadarko basin and the Ouachita orogenic front. In addition, the seismic results correlate relatively well with gravity and magnetic observations. This new crustal model can be used to better constrain earthquake source parameters in North Texas and Oklahoma, such as epicenter locations as well as moment tensor solutions, which are important for investigating triggering mechanisms between these induced earthquakes and unconventional oil and gas exploration activities.
Tsuchimochi, Takashi; Henderson, Thomas M; Scuseria, Gustavo E; Savin, Andreas
2010-10-07
Our previously developed constrained-pairing mean-field theory (CPMFT) is shown to map onto an unrestricted Hartree-Fock (UHF) type method if one imposes a corresponding pair constraint to the correlation problem that forces occupation numbers to occur in pairs adding to one. In this new version, CPMFT has all the advantages of standard independent particle models (orbitals and orbital energies, to mention a few), yet unlike UHF, it can dissociate polyatomic molecules to the correct ground-state restricted open-shell Hartree-Fock atoms or fragments.
Correlations in the (Sub)Millimeter Background from ACT x BLAST
NASA Technical Reports Server (NTRS)
Hajian, Amir; Battaglia,Nick; Bock, James J.; Bond, J. Richard; Nolta, Michael R.; Sievers, Jon; Wollack, Ed
2011-01-01
We present measurements of the auto- and cross-frequency correlation power spectra of the cosmic (sub)millimeter background at 250, 350, and 500 microns (1200, 860, and 600 GHz) from observations made with the Balloon-borne Large Aperture Submillimeter Telescope, BLAST; and at 1380 and 2030 microns (218 and 148 GHz) from observations made with the Atacama Cosmology Telescope, ACT. The overlapping observations cover 8.6 deg² in an area relatively free of Galactic dust near the south ecliptic pole (SEP). The ACT bands are sensitive to radiation from the CMB, the Sunyaev-Zel'dovich (SZ) effect from galaxy clusters, and to emission by radio and dusty star-forming galaxies (DSFGs), while the dominant contribution to the BLAST bands is from DSFGs. We confirm and extend the BLAST analysis of clustering with an independent pipeline, and also detect correlations between the ACT and BLAST maps at over 25σ significance, which we interpret as a detection of the DSFGs in the ACT maps. In addition to a Poisson component in the cross-frequency power spectra, we detect a clustered signal at 4σ, and using a model for the DSFG evolution and number counts, we successfully fit all our spectra with a linear clustering model and a bias that depends only on redshift and not on scale. Finally, the data are compared to, and generally agree with, phenomenological models for the DSFG population. This study represents a first of its kind, and demonstrates the constraining power of the cross-frequency correlation technique to constrain models for the DSFGs. Similar analyses with more data will impose tight constraints on future models.
Sensitivity Analysis Tailored to Constrain 21st Century Terrestrial Carbon-Uptake
NASA Astrophysics Data System (ADS)
Muller, S. J.; Gerber, S.
2013-12-01
The long-term fate of terrestrial carbon (C) in response to climate change remains a dominant source of uncertainty in Earth-system model projections. Increasing atmospheric CO2 could be mitigated by long-term net uptake of C, through processes such as increased plant productivity due to "CO2-fertilization". Conversely, atmospheric conditions could be exacerbated by long-term net release of C, through processes such as increased decomposition due to higher temperatures. This balance is an important area of study, and a major source of uncertainty in long-term (>year 2050) projections of planetary response to climate change. We present results from an innovative application of sensitivity analysis to LM3V, a dynamic global vegetation model (DGVM), intended to identify observed/observable variables that are useful for constraining long-term projections of C-uptake. We analyzed the sensitivity of cumulative C-uptake by 2100, as modeled by LM3V in response to IPCC AR4 scenario climate data (1860-2100), to perturbations in over 50 model parameters. We concurrently analyzed the sensitivity of over 100 observable model variables, during the extant record period (1970-2010), to the same parameter changes. By correlating the sensitivities of observable variables with the sensitivity of long-term C-uptake, we identified model calibration variables that would also constrain long-term C-uptake projections. LM3V employs a coupled carbon-nitrogen cycle to account for N-limitation, and we find that N-related variables have an important role to play in constraining long-term C-uptake. This work has implications for prioritizing field campaigns to collect global data that can help reduce uncertainties in the long-term land-atmosphere C-balance. Though results of this study are specific to LM3V, the processes that characterize this model are not completely divorced from other DGVMs (or reality), and our approach provides valuable insights into how data can be leveraged to better constrain projections for the land carbon sink.
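A sketch of the screening idea: perturb each parameter, record the sensitivities of candidate observables and of the long-term target, then rank observables by the correlation between sensitivity patterns. The linear toy below replaces LM3V entirely; it only demonstrates that observables whose sensitivities drive the target rank at the top.

```python
# Rank observables by correlating their parameter sensitivities with the
# sensitivity pattern of a long-term target quantity. Purely synthetic.
import numpy as np

rng = np.random.default_rng(8)
n_par, n_obs = 50, 100
S_obs = rng.normal(size=(n_obs, n_par))       # d(observable)/d(parameter)
noise = rng.normal(size=n_par)
S_target = S_obs[:5].sum(0) + 0.1 * noise     # target driven by observables 0-4

r = np.array([np.corrcoef(S_obs[i], S_target)[0, 1] for i in range(n_obs)])
best = np.argsort(-np.abs(r))[:5]
print("most constraining observables:", best, np.round(r[best], 2))
```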
Attentional modulation of neuronal variability in circuit models of cortex
Kanashiro, Tatjana; Ocker, Gabriel Koch; Cohen, Marlene R; Doiron, Brent
2017-01-01
The circuit mechanisms behind shared neural variability (noise correlation) and its dependence on neural state are poorly understood. Visual attention is well-suited to constrain cortical models of response variability because attention both increases firing rates and their stimulus sensitivity, as well as decreases noise correlations. We provide a novel analysis of population recordings in rhesus primate visual area V4 showing that a single biophysical mechanism may underlie these diverse neural correlates of attention. We explore model cortical networks where top-down mediated increases in excitability, distributed across excitatory and inhibitory targets, capture the key neuronal correlates of attention. Our models predict that top-down signals primarily affect inhibitory neurons, whereas excitatory neurons are more sensitive to stimulus specific bottom-up inputs. Accounting for trial variability in models of state dependent modulation of neuronal activity is a critical step in building a mechanistic theory of neuronal cognition. DOI: http://dx.doi.org/10.7554/eLife.23978.001 PMID:28590902
Modal Test/Analysis Correlation of Space Station Structures Using Nonlinear Sensitivity
NASA Technical Reports Server (NTRS)
Gupta, Viney K.; Newell, James F.; Berke, Laszlo; Armand, Sasan
1992-01-01
The modal correlation problem is formulated as a constrained optimization problem for validation of finite element models (FEM's). For large-scale structural applications, a pragmatic procedure for substructuring, model verification, and system integration is described to achieve effective modal correlation. The space station substructure FEM's are reduced using Lanczos vectors and integrated into a system FEM using Craig-Bampton component modal synthesis. The optimization code is interfaced with MSC/NASTRAN to solve the problem of modal test/analysis correlation; that is, the problem of validating FEM's for launch and on-orbit coupled loads analysis against experimentally observed frequencies and mode shapes. An iterative perturbation algorithm is derived and implemented to update nonlinear sensitivity (derivatives of eigenvalues and eigenvectors) during optimizer iterations, which reduced the number of finite element analyses.
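A sketch of the core computation, assuming the standard first-order eigenvalue sensitivity dλ/dp = xᵀ(∂K/∂p)x for mass-normalized modes: a Gauss-Newton loop updates the stiffness parameters of a 2-DOF model until its eigenvalues match "measured" ones. Real applications add eigenvector sensitivities and the iterative perturbation update described above; all numbers here are toys.

```python
# Test/analysis correlation via eigenvalue sensitivities on a 2-DOF model.
import numpy as np

def K(p):
    k1, k2 = p
    return np.array([[k1 + k2, -k2], [-k2, k2]])

# Partial derivatives of K with respect to k1 and k2 (constant here).
dK = [np.array([[1.0, 0.0], [0.0, 0.0]]),
      np.array([[1.0, -1.0], [-1.0, 1.0]])]

target = np.array([1.0, 9.0])      # "measured" eigenvalues (ascending)
p = np.array([5.0, 2.0])           # initial stiffness estimates

for _ in range(20):
    lam, X = np.linalg.eigh(K(p))  # unit mass matrix -> mass-normalized modes
    J = np.array([[X[:, i] @ dK[j] @ X[:, i] for j in range(2)]
                  for i in range(2)])            # eigenvalue sensitivities
    p = p + np.linalg.solve(J, target - lam)     # Gauss-Newton update

print("updated stiffnesses:", p)
print("eigenvalues:", np.linalg.eigh(K(p))[0])
```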
Satellite Gravity Drilling the Earth
NASA Technical Reports Server (NTRS)
vonFrese, R. R. B.; Potts, L. V.; Leftwich, T. E.; Kim, H. R.; Han, S.-H.; Taylor, P. T.; Ashgharzadeh, M. F.
2005-01-01
Analysis of satellite-measured gravity and topography can provide crust-to-core mass variation models for new insight into the geologic evolution of the Earth. The internal structure of the Earth is mostly constrained by seismic observations and geochemical considerations. We suggest that these constraints may be augmented by gravity drilling that interprets satellite-altitude free-air gravity observations for boundary undulations of the internal density layers related to mass flow. The approach involves separating the free-air anomalies into terrain-correlated and -decorrelated components based on the correlation spectrum between the anomalies and the gravity effects of the terrain. The terrain-decorrelated gravity anomalies are largely devoid of the long wavelength interfering effects of the terrain gravity and thus provide enhanced constraints for modeling mass variations of the mantle and core. For the Earth, subcrustal interpretations of the terrain-decorrelated anomalies are constrained by radially stratified densities inferred from seismic observations. These anomalies, with frequencies that clearly decrease as the density contrasts deepen, facilitate mapping mass flow patterns related to the thermodynamic state and evolution of the Earth's interior.
NASA Astrophysics Data System (ADS)
Ma, Yin-Zhe; Gong, Guo-Dong; Sui, Ning; He, Ping
2018-03-01
We calculate the cross-correlation function ⟨(ΔT/T)(v·n̂/σ_v)⟩ between the kinetic Sunyaev-Zeldovich (kSZ) effect and the reconstructed peculiar velocity field using linear perturbation theory, with the aim of constraining the optical depth τ and peculiar velocity bias of central galaxies with Planck data. We vary the optical depth τ and the velocity bias function b_v(k) = 1 + b(k/k_0)^n, and fit the model to the data, with and without varying the calibration parameter y_0 that controls the vertical shift of the correlation function. By constructing a likelihood function and constraining the τ, b and n parameters, we find that the quadratic power-law model of velocity bias, b_v(k) = 1 + b(k/k_0)^2, provides the best fit to the data. The best-fit values are τ = (1.18 ± 0.24) × 10^{−4}, b = −0.84^{+0.16}_{−0.20} and y_0 = (12.39^{+3.65}_{−3.66}) × 10^{−9} (68 per cent confidence level). The probability of b > 0 is only 3.12 × 10^{−8} for the parameter b, which clearly suggests a detection of scale-dependent velocity bias. The fitting results indicate that the large-scale (k ≤ 0.1 h Mpc^{−1}) velocity bias is unity, while on small scales the bias tends to become negative. The value of τ is consistent with the stellar mass-halo mass and optical depth relationship proposed in the literature, and the negative velocity bias on small scales is consistent with the peak background split theory. Our method provides a direct tool for studying the gaseous and kinematic properties of galaxies.
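A sketch of the parameter fit, with a crude real-space stand-in for the paper's Fourier-space velocity bias b_v(k): a template correlation is scaled by τ and distorted at small separations by a bias term, and (τ, b) are recovered by a χ² grid search on synthetic data. Template shape, scales, and noise level are all invented.

```python
# Grid chi-square fit of (tau, b) to a synthetic kSZ-velocity correlation.
import numpy as np

rng = np.random.default_rng(9)
r = np.linspace(5, 100, 20)                  # toy separations
template = np.exp(-r / 40.0)                 # assumed linear-theory shape

def model(tau, b, n=2, scale=30.0):
    # heuristic real-space analogue of b_v(k) = 1 + b*(k/k0)^n
    return tau * template * (1.0 + b * (scale / r)**n)

sigma = 2e-6
data = model(1.2e-4, -0.8) + rng.normal(0, sigma, r.size)

taus = np.linspace(0.5e-4, 2.0e-4, 151)
bs = np.linspace(-1.5, 0.5, 201)
chi2 = np.array([[np.sum((data - model(t, b))**2) / sigma**2
                  for b in bs] for t in taus])
i, j = np.unravel_index(chi2.argmin(), chi2.shape)
print(f"tau = {taus[i]:.2e}, b = {bs[j]:.2f}")
```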
NASA Astrophysics Data System (ADS)
Xia, Jun-Qing; Yu, Hai; Wang, Guo-Jian; Tian, Shu-Xun; Li, Zheng-Xiang; Cao, Shuo; Zhu, Zong-Hong
2017-01-01
In this paper, we use a recently compiled data set, which comprises 118 galactic-scale strong gravitational lensing (SGL) systems, to constrain the statistical property of the SGL system as well as the curvature of the universe without assuming any fiducial cosmological model. Based on the singular isothermal ellipsoid (SIE) model of the SGL system, we obtain that the constrained curvature parameter Ω_k is close to zero from the SGL data, which is consistent with the latest result of Planck measurement. More interestingly, we find that the parameter f in the SIE model is strongly correlated with the curvature Ω_k. Neglecting this correlation in the analysis will significantly overestimate the constraining power of SGL data on the curvature. Furthermore, the obtained constraint on f is different from previous results: f = 1.105 ± 0.030 (68% confidence level [C.L.]), which means that the standard singular isothermal sphere (SIS) model (f = 1) is disfavored by the current SGL data at more than a 3σ C.L. We also divide all of the SGL data into two parts according to the centric stellar velocity dispersion σ_c and find that the larger the value of σ_c for the subsample, the more favored the standard SIS model is. Finally, we extend the SIE model by assuming power-law density profiles for the total mass density, ρ = ρ_0 (r/r_0)^{−α}, and luminosity density, ν = ν_0 (r/r_0)^{−δ}, and obtain the constraints on the power-law indices: α = 1.95 ± 0.04 and δ = 2.40 ± 0.13 at a 68% C.L. When assuming the power-law index α = δ = γ, this scenario is totally disfavored by the current SGL data, χ²_{min,γ} − χ²_{min,SIE} ≃ 53.
The Constrained Vapor Bubble Experiment - Interfacial Flow Region
NASA Technical Reports Server (NTRS)
Kundan, Akshay; Wayner, Peter C., Jr.; Plawsky, Joel L.
2015-01-01
The internal heat transfer coefficient of the CVB correlated with the presence of the interfacial flow region. Competition between capillary and Marangoni flow caused flooding rather than a dry-out region. Growth of the interfacial flow region is arrested at higher power inputs. A 1D heat model confirms both the presence of the interfacial flow region and this arresting phenomenon. Visual observations are essential to understanding these behaviors.
NASA Astrophysics Data System (ADS)
Reichardt, C. L.; Shaw, L.; Zahn, O.; Aird, K. A.; Benson, B. A.; Bleem, L. E.; Carlstrom, J. E.; Chang, C. L.; Cho, H. M.; Crawford, T. M.; Crites, A. T.; de Haan, T.; Dobbs, M. A.; Dudley, J.; George, E. M.; Halverson, N. W.; Holder, G. P.; Holzapfel, W. L.; Hoover, S.; Hou, Z.; Hrubes, J. D.; Joy, M.; Keisler, R.; Knox, L.; Lee, A. T.; Leitch, E. M.; Lueker, M.; Luong-Van, D.; McMahon, J. J.; Mehl, J.; Meyer, S. S.; Millea, M.; Mohr, J. J.; Montroy, T. E.; Natoli, T.; Padin, S.; Plagge, T.; Pryke, C.; Ruhl, J. E.; Schaffer, K. K.; Shirokoff, E.; Spieler, H. G.; Staniszewski, Z.; Stark, A. A.; Story, K.; van Engelen, A.; Vanderlinde, K.; Vieira, J. D.; Williamson, R.
2012-08-01
We present the first three-frequency South Pole Telescope (SPT) cosmic microwave background (CMB) power spectra. The band powers presented here cover angular scales 2000 < l < 9400 in frequency bands centered at 95, 150, and 220 GHz. At these frequencies and angular scales, a combination of the primary CMB anisotropy, thermal and kinetic Sunyaev-Zel'dovich (SZ) effects, radio galaxies, and cosmic infrared background (CIB) contributes to the signal. We combine Planck/HFI and SPT data at 220 GHz to constrain the amplitude and shape of the CIB power spectrum and find strong evidence for nonlinear clustering. We explore the SZ results using a variety of cosmological models for the CMB and CIB anisotropies and find them to be robust with one exception: allowing for spatial correlations between the thermal SZ effect and CIB significantly degrades the SZ constraints. Neglecting this potential correlation, we find the thermal SZ power at 150 GHz and l = 3000 to be 3.65 ± 0.69 μK², and set an upper limit on the kinetic SZ power to be less than 2.8 μK² at 95% confidence. When a correlation between the thermal SZ and CIB is allowed, we constrain a linear combination of thermal and kinetic SZ power: D^{tSZ}_{3000} + 0.5 D^{kSZ}_{3000} = 4.60 ± 0.63 μK², consistent with earlier measurements. We use the measured thermal SZ power and an analytic, thermal SZ model calibrated with simulations to determine σ_8 = 0.807 ± 0.016. Modeling uncertainties involving the astrophysics of the intracluster medium rather than the statistical uncertainty in the measured band powers are the dominant source of uncertainty on σ_8. We also place an upper limit on the kinetic SZ power produced by patchy reionization; a companion paper uses these limits to constrain the reionization history of the universe.
Elastic Model Transitions Using Quadratic Inequality Constrained Least Squares
NASA Technical Reports Server (NTRS)
Orr, Jeb S.
2012-01-01
A technique is presented for initializing multiple discrete finite element model (FEM) mode sets for certain types of flight dynamics formulations that rely on superposition of orthogonal modes for modeling the elastic response. Such approaches are commonly used for modeling launch vehicle dynamics, and challenges arise due to the rapidly time-varying nature of the rigid-body and elastic characteristics. By way of an energy argument, a quadratic inequality constrained least squares (LSQI) algorithm is employed to effect a smooth transition from one set of FEM eigenvectors to another with no requirement that the models be of similar dimension or that the eigenvectors be correlated in any particular way. The physically unrealistic and controversial method of eigenvector interpolation is completely avoided, and the discrete solution approximates that of the continuously varying system. The real-time computational burden is shown to be negligible due to convenient features of the solution method. Simulation results are presented, and applications to staging and other discontinuous mass changes are discussed.
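As a sketch of the underlying optimization, an LSQI problem can be posed as minimizing ||Ax - b||² subject to a quadratic inequality x'Qx ≤ γ², here solved with a generic SLSQP call. All matrices below are random placeholders, and the paper's real-time solution method exploits structure that this generic sketch does not.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative LSQI problem: min ||A x - b||^2  s.t.  x' Q x <= gamma2.
rng = np.random.default_rng(0)
A = rng.standard_normal((12, 6))      # stand-in modal projection matrix
b = rng.standard_normal(12)           # stand-in state to be matched
Q = np.eye(6)                         # energy metric (identity here)
gamma2 = 1.0                          # energy bound

res = minimize(lambda x: np.sum((A @ x - b) ** 2),
               x0=np.zeros(6),
               method="SLSQP",
               constraints=[{"type": "ineq",
                             "fun": lambda x: gamma2 - x @ Q @ x}])
print("residual norm:", np.sqrt(res.fun))
print("constraint value x'Qx:", res.x @ Q @ res.x)   # should be <= gamma2
```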
NASA Astrophysics Data System (ADS)
Kyrke-Smith, Teresa M.; Gudmundsson, G. Hilmar; Farrell, Patrick E.
2017-11-01
We investigate correlations between seismically derived estimates of basal acoustic impedance and basal slipperiness values obtained from a surface-to-bed inversion using a Stokes ice flow model. Using high-resolution measurements along several seismic profiles on Pine Island Glacier (PIG), we find no significant correlation at kilometer scale between acoustic impedance and either retrieved basal slipperiness or basal drag. However, there is a stronger correlation when comparing average values along the individual profiles. We hypothesize that the correlation appears at the length scales over which basal variations are important to large-scale ice sheet flow. Although the seismic technique is sensitive to the material properties of the bed, at present there is no clear way of incorporating high-resolution seismic measurements of bed properties on ice streams into ice flow models. We conclude that more theoretical work needs to be done before constraints on mechanical conditions at the ice-bed interface from acoustic impedance measurements can be of direct use to ice sheet models.
Shu, Bao; Liu, Hui; Xu, Longwei; Qian, Chuang; Gong, Xiaopeng; An, Xiangdong
2018-04-14
For GPS medium-long baseline real-time kinematic (RTK) positioning, the troposphere parameter is introduced along with coordinates, and the model is ill-conditioned due to its strong correlation with the height parameter. For BeiDou Navigation Satellite System (BDS), additional difficulties occur due to its special satellite constellation. In fact, relative zenith troposphere delay (RZTD) derived from high-precision empirical zenith troposphere models can be introduced. Thus, the model strength can be improved, which is also called the RZTD-constrained RTK model. In this contribution, we first analyze the factors affecting the precision of BDS medium-long baseline RTK; thereafter, 15 baselines ranging from 38 km to 167 km in different troposphere conditions are processed to assess the performance of RZTD-constrained RTK. Results show that the troposphere parameter is difficult to distinguish from the height component, even with long time filtering for BDS-only RTK. Due to the lack of variation in geometry for the BDS geostationary Earth orbit satellite, the long convergence time of ambiguity parameters may reduce the height precision of GPS/BDS-combined RTK in the initial period. When the RZTD-constrained model was used in BDS and GPS/BDS-combined situations compared with the traditional RTK, the standard deviation of the height component for the fixed solution was reduced by 52.4% and 34.0%, respectively.
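One common way to realize such a constraint is to append the RZTD as a pseudo-observation row in the least-squares system, weighted by the stated accuracy of the empirical troposphere model. The toy sketch below (all matrices and numbers are illustrative, not the authors' processing chain) also mimics the near-parallel mapping of the height and troposphere parameters that makes the unconstrained model ill-conditioned.

```python
import numpy as np

# Toy float solution: unknowns x = [dE, dN, dU, RZTD]; ambiguities omitted.
rng = np.random.default_rng(2)
H = rng.standard_normal((8, 4))
H[:, 2] = 1.0 + 0.15 * rng.standard_normal(8)  # up-component mapping
H[:, 3] = 1.0 + 0.10 * rng.standard_normal(8)  # RZTD mapping, nearly parallel to 'up'
x_true = np.array([0.02, -0.01, 0.05, 0.10])   # metres
y = H @ x_true + 0.005 * rng.standard_normal(8)
W = np.eye(8) / 0.005**2

def solve(H, y, W):
    return np.linalg.solve(H.T @ W @ H, H.T @ W @ y)

x_free = solve(H, y, W)

# RZTD pseudo-observation from an empirical model, with its sigma.
rztd_model, sigma_rztd = 0.10, 0.01            # metres; illustrative
Hc = np.vstack([H, [0, 0, 0, 1]])
yc = np.append(y, rztd_model)
Wc = np.block([[W, np.zeros((8, 1))],
               [np.zeros((1, 8)), [[1 / sigma_rztd**2]]]])
x_con = solve(Hc, yc, Wc)
print("free       :", np.round(x_free, 4))
print("constrained:", np.round(x_con, 4))      # height/RZTD better separated
```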
Liu, Hui; Xu, Longwei; Qian, Chuang; Gong, Xiaopeng; An, Xiangdong
2018-01-01
For GPS medium-long baseline real-time kinematic (RTK) positioning, the troposphere parameter is introduced along with coordinates, and the model is ill-conditioned due to its strong correlation with the height parameter. For BeiDou Navigation Satellite System (BDS), additional difficulties occur due to its special satellite constellation. In fact, relative zenith troposphere delay (RZTD) derived from high-precision empirical zenith troposphere models can be introduced. Thus, the model strength can be improved, which is also called the RZTD-constrained RTK model. In this contribution, we first analyze the factors affecting the precision of BDS medium-long baseline RTK; thereafter, 15 baselines ranging from 38 km to 167 km in different troposphere conditions are processed to assess the performance of RZTD-constrained RTK. Results show that the troposphere parameter is difficult to distinguish from the height component, even with long time filtering for BDS-only RTK. Due to the lack of variation in geometry for the BDS geostationary Earth orbit satellite, the long convergence time of ambiguity parameters may reduce the height precision of GPS/BDS-combined RTK in the initial period. When the RZTD-constrained model was used in BDS and GPS/BDS-combined situations compared with the traditional RTK, the standard deviation of the height component for the fixed solution was reduced by 52.4% and 34.0%, respectively. PMID:29661999
NASA Astrophysics Data System (ADS)
Crida, Aurélien; Ligi, Roxanne; Dorn, Caroline; Lebreton, Yveline
2018-06-01
The characterization of exoplanets relies on that of their host star. However, stellar evolution models cannot always be used to derive the mass and radius of individual stars, because many stellar internal parameters are poorly constrained. Here, we use the probability density functions (PDFs) of directly measured parameters to derive the joint PDF of the stellar and planetary mass and radius. Because combining the density and radius of the star is our most reliable way of determining its mass, we find that the stellar (respectively planetary) mass and radius are strongly (respectively moderately) correlated. We then use a generalized Bayesian inference analysis to characterize the possible interiors of 55 Cnc e. We quantify how our ability to constrain the interior improves by accounting for correlation. The information content of the mass–radius correlation is also compared with refractory element abundance constraints. We provide posterior distributions for all interior parameters of interest. Given all available data, we find that the radius of the gaseous envelope is 0.08 ± 0.05 R_p. A stronger correlation between the planetary mass and radius (potentially provided by a better estimate of the transit depth) would significantly improve interior characterization and reduce drastically the uncertainty on the gas envelope properties.
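The propagation step can be illustrated with simple Monte Carlo sampling: draw the directly measured quantities from their PDFs, push them through the deterministic relations, and read the induced correlations off the joint samples. All distributions and relations below are schematic placeholders (solar-unit density-radius mass relation, transit-depth radius ratio, and an RV-style mass term), not the paper's actual inputs for 55 Cnc e.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

# Directly measured quantities (illustrative Gaussians, solar/relative units):
rho_star = rng.normal(1.08, 0.04, n)     # stellar density (solar units)
R_star   = rng.normal(0.94, 0.02, n)     # radius from interferometry + parallax
depth    = rng.normal(3.6e-4, 2e-5, n)   # transit depth (Rp/Rs)^2
K_fac    = rng.normal(8.1, 0.4, n)       # RV amplitude term, placeholder scale

M_star = rho_star * R_star**3            # M = rho R^3 in solar units
R_p = np.sqrt(depth) * R_star            # planet radius tracks stellar radius
M_p = K_fac * M_star**(2/3)              # M_p ~ K M_star^(2/3) for small planets

print(f"stellar M-R corr: {np.corrcoef(M_star, R_star)[0, 1]:.2f}")   # strong
print(f"planet  M-R corr: {np.corrcoef(M_p, R_p)[0, 1]:.2f}")         # moderate
```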
Crotty, Patrick; García-Bellido, Juan; Lesgourgues, Julien; Riazuelo, Alain
2003-10-24
We obtain very stringent bounds on the possible cold dark matter, baryon, and neutrino isocurvature contributions to the primordial fluctuations in the Universe, using recent cosmic microwave background and large scale structure data. Neglecting the possible effects of spatial curvature, tensor perturbations, and reionization, we perform a Bayesian likelihood analysis with nine free parameters, and find that the amplitude of the isocurvature component cannot be larger than about 31% for the cold dark matter mode, 91% for the baryon mode, 76% for the neutrino density mode, and 60% for the neutrino velocity mode, at 2σ, for uncorrelated models. For correlated adiabatic and isocurvature components, the fraction could be slightly larger. However, the cross-correlation coefficient is strongly constrained, and maximally correlated/anticorrelated models are disfavored. This puts strong bounds on the curvaton model.
NASA Astrophysics Data System (ADS)
Gilliot, Mickaël; Hadjadj, Aomar; Stchakovsky, Michel
2017-11-01
An original method of ellipsometric data inversion is proposed based on the use of constrained splines. The imaginary part of the dielectric function is represented by a series of splines, constructed with particular constraints on slopes at the node boundaries to avoid the well-known oscillations of natural splines. The nodes are used as fit parameters. The real part is calculated using Kramers-Kronig relations. The inversion can be performed in successive steps with increasing resolution. This method is used to characterize thin zinc oxide layers obtained by a sol-gel and spin-coating process, with a particular recipe yielding very thin layers presenting nano-porosity. Such layers have particular optical properties correlated with thickness, morphological and structural properties. The constrained spline method is particularly efficient for such materials, which may not be easily represented by standard dielectric function models.
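A minimal sketch of the two ingredients, assuming a shape-preserving PCHIP spline in place of the authors' particular slope-constrained construction, and a crude principal-value quadrature for the Kramers-Kronig transform; the node positions and values are placeholders.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator
from scipy.integrate import trapezoid

# Node energies (eV) and eps2 values at the nodes (fit parameters; placeholders).
nodes_E  = np.array([1.0, 2.0, 3.0, 3.3, 3.6, 4.5, 6.0])
nodes_e2 = np.array([0.0, 0.1, 0.8, 2.0, 1.2, 0.6, 0.3])
eps2 = PchipInterpolator(nodes_E, nodes_e2)   # slope-limited: no spurious wiggles

def eps1(E, Emin=0.5, Emax=8.0, n=4001):
    """eps1(E) = 1 + (2/pi) P.V. int E' eps2(E') / (E'^2 - E^2) dE'."""
    Ep = np.linspace(Emin, Emax, n)
    f = Ep * eps2(np.clip(Ep, nodes_E[0], nodes_E[-1]))
    denom = Ep**2 - E**2
    good = np.abs(denom) > 1e-3               # crude principal-value exclusion
    return 1.0 + (2 / np.pi) * trapezoid(f[good] / denom[good], Ep[good])

for E in (1.5, 3.0, 4.0):
    print(E, round(eps1(E), 3))
```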
B Physics, Hg EDM, and Lepton Flavor Violation in SUSY Models
NASA Astrophysics Data System (ADS)
Shimizu, Yasuhiro
2005-06-01
We consider the correlation between the CP asymmetry in B → ϕKs (S
Evolution of Cygnus X-3 through its Radio and X-ray States
NASA Astrophysics Data System (ADS)
Szostek, A.; Zdziarski, A. A.; McCollough, M. L.
2009-05-01
Based on X-ray spectra and studies of the long-term correlated behavior between radio and soft X-ray emission, we present a detailed evolution of Cyg X-3 through its radio and X-ray states. We comment on the nature of the hard X-ray tail and the possible Simbol-X contribution to constraining the models.
Technical Note: On the use of nudging for aerosol–climate model intercomparison studies
Zhang, K.; Wan, H.; Liu, X.; ...
2014-08-26
Nudging as an assimilation technique has seen increased use in recent years in the development and evaluation of climate models. Constraining the simulated wind and temperature fields using global weather reanalysis facilitates more straightforward comparison between simulation and observation, and reduces uncertainties associated with natural variabilities of the large-scale circulation. On the other hand, the forcing introduced by nudging can be strong enough to change the basic characteristics of the model climate. In the paper we show that for the Community Atmosphere Model version 5 (CAM5), due to the systematic temperature bias in the standard model and the sensitivity of simulated ice formation to anthropogenic aerosol concentration, nudging towards reanalysis results in substantial reductions in the ice cloud amount and the impact of anthropogenic aerosols on long-wave cloud forcing. In order to reduce discrepancies between the nudged and unconstrained simulations, and meanwhile take the advantages of nudging, two alternative experimentation methods are evaluated. The first one constrains only the horizontal winds. The second method nudges both winds and temperature, but replaces the long-term climatology of the reanalysis by that of the model. Results show that both methods lead to substantially improved agreement with the free-running model in terms of the top-of-atmosphere radiation budget and cloud ice amount. The wind-only nudging is more convenient to apply, and provides higher correlations of the wind fields, geopotential height and specific humidity between simulation and reanalysis. Results from both CAM5 and a second aerosol–climate model ECHAM6-HAM2 also indicate that compared to the wind-and-temperature nudging, constraining only winds leads to better agreement with the free-running model in terms of the estimated shortwave cloud forcing and the simulated convective activities. This suggests nudging the horizontal winds but not temperature is a good strategy for the investigation of aerosol indirect effects since it provides well-constrained meteorology without strongly perturbing the model's mean climate.
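Schematically, nudging adds a linear relaxation of selected prognostic fields toward the reference at each time step, x := x + (x_ref - x) dt/τ; wind-only nudging simply omits temperature from the nudged set. A toy sketch follows (not CAM5 or ECHAM code; the 6 h timescale is one common choice, and the second method described above would replace ref by reanalysis anomalies added to the model's own climatology).

```python
import numpy as np

TAU = 6 * 3600.0          # relaxation timescale: 6 h, a typical choice
DT = 1800.0               # model time step (s)

def nudge(state, ref, fields=("u", "v"), tau=TAU, dt=DT):
    """Relax selected fields toward the reference: x += (x_ref - x) * dt/tau.
    fields=("u", "v") is wind-only nudging; add "T" to also nudge temperature."""
    out = dict(state)
    for k in fields:
        out[k] = state[k] + (ref[k] - state[k]) * dt / tau
    return out

state = {"u": np.full(4, 10.0), "v": np.zeros(4), "T": np.full(4, 285.0)}
ref   = {"u": np.full(4, 14.0), "v": np.full(4, 2.0), "T": np.full(4, 287.0)}
print(nudge(state, ref))          # winds drift toward reanalysis, T untouched
```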
Technical Note: On the use of nudging for aerosol-climate model intercomparison studies
NASA Astrophysics Data System (ADS)
Zhang, K.; Wan, H.; Liu, X.; Ghan, S. J.; Kooperman, G. J.; Ma, P.-L.; Rasch, P. J.; Neubauer, D.; Lohmann, U.
2014-08-01
Nudging as an assimilation technique has seen increased use in recent years in the development and evaluation of climate models. Constraining the simulated wind and temperature fields using global weather reanalysis facilitates more straightforward comparison between simulation and observation, and reduces uncertainties associated with natural variabilities of the large-scale circulation. On the other hand, the forcing introduced by nudging can be strong enough to change the basic characteristics of the model climate. In the paper we show that for the Community Atmosphere Model version 5 (CAM5), due to the systematic temperature bias in the standard model and the sensitivity of simulated ice formation to anthropogenic aerosol concentration, nudging towards reanalysis results in substantial reductions in the ice cloud amount and the impact of anthropogenic aerosols on long-wave cloud forcing. In order to reduce discrepancies between the nudged and unconstrained simulations, and meanwhile take the advantages of nudging, two alternative experimentation methods are evaluated. The first one constrains only the horizontal winds. The second method nudges both winds and temperature, but replaces the long-term climatology of the reanalysis by that of the model. Results show that both methods lead to substantially improved agreement with the free-running model in terms of the top-of-atmosphere radiation budget and cloud ice amount. The wind-only nudging is more convenient to apply, and provides higher correlations of the wind fields, geopotential height and specific humidity between simulation and reanalysis. Results from both CAM5 and a second aerosol-climate model ECHAM6-HAM2 also indicate that compared to the wind-and-temperature nudging, constraining only winds leads to better agreement with the free-running model in terms of the estimated shortwave cloud forcing and the simulated convective activities. This suggests nudging the horizontal winds but not temperature is a good strategy for the investigation of aerosol indirect effects since it provides well-constrained meteorology without strongly perturbing the model's mean climate.
Dynamics of interacting quintessence models: Observational constraints
NASA Astrophysics Data System (ADS)
Olivares, Germán; Atrio-Barandela, Fernando; Pavón, Diego
2008-03-01
Interacting quintessence models have been proposed to explain or, at least, alleviate the coincidence problem of late cosmic acceleration. In this paper we are concerned with two aspects of these kinds of models: (i) the dynamical evolution of the model of Chimento et al. [L. P. Chimento, A. S. Jakubi, D. Pavón, and W. Zimdahl, Phys. Rev. D 67, 083513 (2003)], i.e., whether its cosmological evolution gives rise to the right sequence of radiation, dark matter, and dark energy dominated eras, and (ii) whether the dark matter to dark energy ratio asymptotically evolves towards a nonzero constant. After showing that the model correctly reproduces these eras, we correlate three data sets that constrain the interaction at three redshift epochs: z ≤ 10⁴, z = 10³, and z = 1. We discuss the model selection and argue that even if the model under consideration fulfills both requirements, it is heavily constrained by observation. The prospects that the coincidence problem can be explained by the coupling of dark matter to dark energy are not clearly favored by the data.
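The background evolution of an interacting model can be sketched by integrating the coupled continuity equations in e-folds N = ln a, with an interaction term Q transferring energy from dark energy to dark matter. The coupling form Q/H = 3c(ρ_dm + ρ_de) and all numbers below are illustrative, not the specific Chimento et al. model.

```python
import numpy as np
from scipy.integrate import solve_ivp

w, c = -0.95, 0.01    # DE equation of state and coupling strength (illustrative)

def rhs(N, rho):
    r, dm, de = rho
    q = 3 * c * (dm + de)           # interaction Q/H: energy flows DE -> DM
    return [-4 * r,                  # radiation
            -3 * dm + q,             # dark matter gains
            -3 * (1 + w) * de - q]   # dark energy loses

# Initial densities at N = -15 (deep radiation era), chosen so that today
# Omega_dm ~ 0.26 and Omega_de ~ 0.7 in the uncoupled limit.
sol = solve_ivp(rhs, [-15, 0], [9e21, 9e18, 6.6], dense_output=True, rtol=1e-8)
for N in (-15, -8, -4, 0):
    r, dm, de = sol.sol(N)
    tot = r + dm + de
    print(f"N={N:+3d}  Om_r={r/tot:.2f}  Om_dm={dm/tot:.2f}  Om_de={de/tot:.2f}")
```

The printed fractions trace out the required sequence of radiation, matter, and dark energy domination; varying c shows how strongly the data can bound the interaction.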
NASA Astrophysics Data System (ADS)
Souri, Amir H.; Choi, Yunsoo; Pan, Shuai; Curci, Gabriele; Nowlan, Caroline R.; Janz, Scott J.; Kowalewski, Matthew G.; Liu, Junjie; Herman, Jay R.; Weinheimer, Andrew J.
2018-03-01
A number of satellite-based instruments have become an essential part of monitoring emissions. Despite sound theoretical inversion techniques, the insufficient sampling and footprint size of current observations make it difficult to narrow the inversion window for regional models. These key limitations can be partially resolved by a set of modest high-quality measurements from airborne remote sensing. This study illustrates the feasibility of using nitrogen dioxide (NO2) columns from the Geostationary Coastal and Air Pollution Events Airborne Simulator (GCAS) to constrain anthropogenic NOx emissions in the Houston-Galveston-Brazoria area. We convert slant column densities to vertical columns using a radiative transfer model with (i) NO2 profiles from a high-resolution regional model (1 × 1 km²) constrained by P-3B aircraft measurements, (ii) the consideration of aerosol optical thickness impacts on radiance at the NO2 absorption line, and (iii) high-resolution surface albedo constrained by ground-based spectrometers. We characterize errors in the GCAS NO2 columns by comparing them to Pandora measurements and find a striking correlation (r > 0.74) with an uncertainty of 3.5 × 10¹⁵ molecules cm⁻². On 9 of the 10 days, the anthropogenic emissions constrained by a Kalman filter yield an overall 2-50% reduction in polluted areas, partly counterbalancing the well-documented positive bias of the model. The inversion, however, boosts emissions by 94% in the same areas on a day when an unprecedented local emissions event potentially occurred, significantly mitigating the bias of the model. The capability of GCAS at detecting such an event ensures the significance of forthcoming geostationary satellites for timely estimates of top-down emissions.
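The emission-adjustment step can be illustrated with a scalar Kalman filter update of an emission scaling factor driven by the observed-minus-modeled column; H, R and the column values below are placeholders, not the study's actual configuration.

```python
import numpy as np

def kf_update(x, P, y_obs, y_mod, H, R):
    """One Kalman update of the emission scale x given a column observation.
    H = d(column)/d(emission scale) from the forward model; R = obs variance."""
    innov = y_obs - y_mod               # observed-minus-modeled column
    S = H * P * H + R                   # innovation variance
    K = P * H / S                       # Kalman gain
    return x + K * innov, (1 - K * H) * P

x, P = 1.0, 0.3**2                      # prior: scale 1.0 +/- 0.3
H = 4.0e15                              # molecules/cm^2 per unit scale (placeholder)
R = (3.5e15)**2                         # column uncertainty from the Pandora comparison
x, P = kf_update(x, P, y_obs=6.2e15, y_mod=7.8e15, H=H, R=R)
print(f"posterior scale: {x:.2f} +/- {np.sqrt(P):.2f}")   # emissions reduced
```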
NASA Technical Reports Server (NTRS)
Downie, John D.
1995-01-01
Images with signal-dependent noise present challenges beyond those of images with additive white or colored signal-independent noise in terms of designing the optimal 4-f correlation filter that maximizes correlation-peak signal-to-noise ratio, or combinations of correlation-peak metrics. Determining the proper design becomes more difficult when the filter is to be implemented on a constrained-modulation spatial light modulator device. The design issues involved for updatable optical filters for images with signal-dependent film-grain noise and speckle noise are examined. It is shown that although design of the optimal linear filter in the Fourier domain is impossible for images with signal-dependent noise, proper nonlinear preprocessing of the images allows the application of previously developed design rules for optimal filters to be implemented on constrained-modulation devices. Thus the nonlinear preprocessing becomes necessary for correlation in optical systems with current spatial light modulator technology. These results are illustrated with computer simulations of images with signal-dependent noise correlated with binary-phase-only filters and ternary-phase-amplitude filters.
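The central point, that a nonlinearity applied before correlation converts multiplicative speckle into approximately additive noise so that Fourier-domain matched filtering applies, can be sketched as below. The sketch uses a full complex filter; the binary-phase-only and ternary-phase-amplitude quantization steps for constrained SLMs are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)
ref = np.zeros((64, 64)); ref[24:40, 24:40] = 1.0       # reference target
scene = np.roll(np.roll(ref, 7, 0), -5, 1)              # target shifted by (7, -5)
speckle = rng.exponential(1.0, scene.shape)             # multiplicative speckle
img = (scene + 0.1) * speckle                           # signal-dependent noise

pre = np.log(img + 1e-6)                                # nonlinear preprocessing
pre -= pre.mean()
tmpl = np.log(ref + 0.1); tmpl -= tmpl.mean()

# Classical 4-f style correlation via FFTs (full complex filter).
corr = np.fft.ifft2(np.fft.fft2(pre) * np.conj(np.fft.fft2(tmpl))).real
peak = np.unravel_index(corr.argmax(), corr.shape)
print("detected shift:", peak)        # expect near (7, 59), i.e. -5 mod 64
```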
Gu, Yong; Angelaki, Dora E; DeAngelis, Gregory C
2014-07-01
Trial-by-trial covariations between neural activity and perceptual decisions (quantified by choice probability, CP) have been used to probe the contribution of sensory neurons to perceptual decisions. CPs are thought to be determined both by selective decoding of neural activity and by the structure of correlated noise among neurons, but the respective roles of these factors in creating CPs have been controversial. We used biologically constrained simulations to explore this issue, taking advantage of a peculiar pattern of CPs exhibited by multisensory neurons in area MSTd that represent self-motion. Although models that relied on correlated noise or selective decoding could both account for the peculiar pattern of CPs, predictions of the selective decoding model were substantially more consistent with various features of the neural and behavioral data. While correlated noise is essential to observe CPs, our findings suggest that selective decoding of neuronal signals also plays important roles.
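The standard CP computation, the area under the ROC curve comparing a neuron's responses sorted by the subject's choice, together with the two ingredients discussed (correlated noise and selective decoding), can be sketched as follows; pool size, correlation level and readout are illustrative, not the paper's MSTd model.

```python
import numpy as np

rng = np.random.default_rng(5)
n_neurons, n_trials = 50, 5000

# Correlated noise: uniform pairwise correlation r among neurons.
r = 0.1
C = np.full((n_neurons, n_neurons), r) + (1 - r) * np.eye(n_neurons)
resp = rng.multivariate_normal(np.zeros(n_neurons), C, size=n_trials)

# Selective decoding: the decision reads out only the first half of the pool.
w = np.zeros(n_neurons); w[: n_neurons // 2] = 1.0
choice = (resp @ w > 0).astype(int)

def choice_probability(x, choice):
    a, b = x[choice == 1], x[choice == 0]
    return (a[:, None] > b[None, :]).mean()   # area under ROC = P(a > b)

cp_read = choice_probability(resp[:, 0], choice)      # neuron in the decoded pool
cp_unread = choice_probability(resp[:, -1], choice)   # neuron ignored by decoder
print(f"CP (decoded): {cp_read:.2f}   CP (ignored): {cp_unread:.2f}")
```

The ignored neuron still shows CP above 0.5 purely through noise correlations, while the decoded neuron's CP is larger, the qualitative signature the simulations exploit.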
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fattoyev, F. J.; Piekarewicz, J.
The sensitivity of the stellar moment of inertia to the neutron-star matter equation of state is examined using accurately calibrated relativistic mean-field models. We probe this sensitivity by tuning both the density dependence of the symmetry energy and the high-density component of the equation of state, properties that are at present poorly constrained by existing laboratory data. Particularly attractive is the study of the fraction of the moment of inertia contained in the solid crust. Analytic treatments of the crustal moment of inertia reveal a high sensitivity to the transition pressure at the core-crust interface. This may suggest the existence of a strong correlation between the density dependence of the symmetry energy and the crustal moment of inertia. However, no correlation was found. We conclude that constraining the density dependence of the symmetry energy - through, for example, the measurement of the neutron skin thickness in ²⁰⁸Pb - will place no significant bound on either the transition pressure or the crustal moment of inertia.
The African and Pacific Superplume Structures Constrained by Assembly and Breakup of Pangea
NASA Astrophysics Data System (ADS)
Zhang, N.; Zhong, S.; Leng, W.; Li, Z.
2009-12-01
Seismic tomography studies indicate that the Earth's mantle structure is characterized by African and Pacific seismically slow velocity anomalies (i.e., superplumes) and circum-Pacific seismically fast anomalies (i.e., a globally spherical harmonic degree-2 structure). McNamara and Zhong (2005) have demonstrated that the African and Pacific superplume structures result from dynamic interaction between mantle convection and surface plate motion history in the last 120 Ma. However, their models produce slightly stronger degree-3 than degree-2 structure near the CMB. Here, we construct a proxy model of plate motions for the African hemisphere for the last 450 Ma since the Early Paleozoic using the paleogeographic reconstruction of continents constrained by paleomagnetic and geological observations. Using this proxy model for plate motion history as the time-dependent surface boundary condition for a 3-dimensional spherical model of thermochemical mantle convection, we calculate the present-day mantle structure and explore the evolution of mantle structures since the Early Paleozoic. Our model calculations reproduce well the present-day mantle structure, including the African and Pacific superplumes. The power spectra of our calculated present-day temperature field show that the strongest power occurs at degree 2 in the lower mantle, while in the upper mantle the strongest power is at degree 3. The degree correlation between the tomography model S20RTS and our calculated temperature field is high at degrees 1 and 2 in the lower mantle, while the upper mantle and the short-wavelength structures do not correlate well. The summed degree correlation for the lower mantle shows a relatively good correlation for the bottom 300 km of the mantle, but the correlation is significantly reduced at depths 600 km above the CMB. For the evolution of mantle structures, we focus on the evolution of the African superplume. Our results suggest that before the assembly of Pangea the mantle in the African hemisphere is dominated by cold downwelling structures resulting from plate convergence between Gondwana and Laurussia, and that after Pangea forms the cold African hemisphere becomes hot due to return flow from the circum-Pangea subduction. Based on these results, we suggest that the African superplume structure may have formed no earlier than ~230 Ma ago (i.e., ~100 Ma after the assembly of Pangea).
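The degree-by-degree comparison quoted above is typically computed as r_l = Σ_m A_lm B_lm / sqrt(Σ_m A_lm² Σ_m B_lm²) over the spherical harmonic coefficients of the two fields at each degree l. A sketch with placeholder coefficients (not S20RTS or the convection model output):

```python
import numpy as np

def degree_correlation(A, B, l):
    """Correlation at harmonic degree l between coefficient sets A and B.
    A[l] holds the 2l+1 real coefficients for orders m = -l..l."""
    a, b = np.asarray(A[l]), np.asarray(B[l])
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

rng = np.random.default_rng(6)
lmax = 8
model = {l: rng.standard_normal(2 * l + 1) for l in range(1, lmax + 1)}
# 'Tomography' = model plus noise, so correlation degrades with noise level.
tomo = {l: model[l] + 0.7 * rng.standard_normal(2 * l + 1) for l in model}
for l in (1, 2, 3):
    print(f"degree {l}: r = {degree_correlation(model, tomo, l):+.2f}")
```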
Approximate Bayesian computation in large-scale structure: constraining the galaxy-halo connection
NASA Astrophysics Data System (ADS)
Hahn, ChangHoon; Vakili, Mohammadjavad; Walsh, Kilian; Hearin, Andrew P.; Hogg, David W.; Campbell, Duncan
2017-08-01
Standard approaches to Bayesian parameter inference in large-scale structure assume a Gaussian functional form (chi-squared form) for the likelihood. This assumption, in detail, cannot be correct. Likelihood-free inference methods such as approximate Bayesian computation (ABC) relax these restrictions and make inference possible without making any assumptions about the likelihood. Instead, ABC relies on a forward generative model of the data and a metric for measuring the distance between the model and the data. In this work, we demonstrate that ABC is feasible for LSS parameter inference by using it to constrain parameters of the halo occupation distribution (HOD) model for populating dark matter haloes with galaxies. Using a specific implementation of ABC supplemented with population Monte Carlo importance sampling, a generative forward model based on the HOD, and a distance metric based on the galaxy number density, two-point correlation function, and galaxy group multiplicity function, we constrain the HOD parameters of mock observations generated from selected 'true' HOD parameters. The parameter constraints we obtain from ABC are consistent with the 'true' HOD parameters, demonstrating that ABC can be reliably used for parameter inference in LSS. Furthermore, we compare our ABC constraints to constraints we obtain using a pseudo-likelihood function of Gaussian form with MCMC and find consistent HOD parameter constraints. Ultimately, our results suggest that ABC can and should be applied in parameter inference for LSS analyses.
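The core ABC loop, drawing from the prior, forward-simulating, and keeping draws whose summaries land within ε of the observed summaries, can be sketched with a toy generative model standing in for the HOD forward model (plain rejection sampling here, rather than the population Monte Carlo refinement used in the paper):

```python
import numpy as np

rng = np.random.default_rng(7)

def forward(theta, n=500):
    """Toy stand-in for the HOD forward model; summaries = (mean, std)."""
    mu, sigma = theta
    x = rng.normal(mu, sigma, n)
    return np.array([x.mean(), x.std()])

obs = forward((1.0, 2.0))                      # 'mock observation', known truth

def abc_rejection(obs, n_draws=100_000, eps=0.15):
    kept = []
    for _ in range(n_draws):
        theta = (rng.uniform(-2, 4), rng.uniform(0.5, 4.0))   # flat priors
        d = np.linalg.norm(forward(theta) - obs)              # distance metric
        if d < eps:
            kept.append(theta)
    return np.array(kept)

post = abc_rejection(obs)
print("accepted draws:", len(post))
print("posterior mean:", post.mean(axis=0))    # should be near (1.0, 2.0)
```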
Constrained variation in Jastrow method at high density
DOE Office of Scientific and Technical Information (OSTI.GOV)
Owen, J.C.; Bishop, R.F.; Irvine, J.M.
1976-11-01
A method is derived for constraining the correlation function in a Jastrow variational calculation which permits the truncation of the cluster expansion after two-body terms, and which permits exact minimization of the two-body cluster by functional variation. This method is compared with one previously proposed by Pandharipande and is found to be superior both theoretically and practically. The method is tested both on liquid ³He, by using the Lennard-Jones potential, and on the model system of neutrons treated as Boltzmann particles ("homework" problem). Good agreement is found both with experiment and with other calculations involving the explicit evaluation of higher-order terms in the cluster expansion. The method is then applied to a more realistic model of a neutron gas up to a density of 4 neutrons per F³, and is found to give ground-state energies considerably lower than those of Pandharipande.
Constraining the top-Higgs sector of the standard model effective field theory
NASA Astrophysics Data System (ADS)
Cirigliano, V.; Dekens, W.; de Vries, J.; Mereghetti, E.
2016-08-01
Working in the framework of the Standard Model effective field theory, we study chirality-flipping couplings of the top quark to Higgs and gauge bosons. We discuss in detail the renormalization-group evolution to lower energies and investigate direct and indirect contributions to high- and low-energy CP-conserving and CP-violating observables. Our analysis includes constraints from collider observables, precision electroweak tests, flavor physics, and electric dipole moments. We find that indirect probes are competitive or dominant for both CP-even and CP-odd observables, even after accounting for uncertainties associated with hadronic and nuclear matrix elements, illustrating the importance of including operator mixing in constraining the Standard Model effective field theory. We also study scenarios where multiple anomalous top couplings are generated at the high scale, showing that while the bounds on individual couplings relax, strong correlations among couplings survive. Finally, we find that enforcing minimal flavor violation does not significantly affect the bounds on the top couplings.
Constrained structural dynamic model verification using free vehicle suspension testing methods
NASA Technical Reports Server (NTRS)
Blair, Mark A.; Vadlamudi, Nagarjuna
1988-01-01
Verification of the validity of a spacecraft's structural dynamic math model used in computing ascent (or in the case of the STS, ascent and landing) loads is mandatory. This verification process requires that tests be carried out on both the payload and the math model such that the ensuing correlation may validate the flight loads calculations. To properly achieve this goal, the tests should be performed with the payload in the launch constraint (i.e., held fixed at only the payload-booster interface DOFs). The practical achievement of this set of boundary conditions is quite difficult, especially with larger payloads, such as the 12-ton Hubble Space Telescope. The development of equations in the paper will show that by exciting the payload at its booster interface while it is suspended in the 'free-free' state, a set of transfer functions can be produced that will have minima that are directly related to the fundamental modes of the payload when it is constrained in its launch configuration.
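The identity this test method exploits is that the minima (antiresonances) of the drive-point transfer function of the free-free payload, excited at the interface DOF, fall at the natural frequencies of the payload constrained at that interface. A numerical check on a toy three-mass chain (illustrative values, not the paper's formulation):

```python
import numpy as np

# Free-free 3-mass chain; DOF 0 plays the role of the booster interface.
m = np.diag([2.0, 1.0, 1.5])
k = 1.0e4
K = k * np.array([[ 1, -1,  0],
                  [-1,  2, -1],
                  [ 0, -1,  1]], float)

# Constrained (interface-fixed) natural frequencies from the reduced system.
Kc, Mc = K[1:, 1:], m[1:, 1:]
w_con = np.sqrt(np.sort(np.linalg.eigvals(np.linalg.solve(Mc, Kc)).real))

# Drive-point FRF at DOF 0 of the free-free system; its dips should sit at w_con.
w = np.linspace(1.0, 250.0, 20000)
frf = np.array([abs(np.linalg.solve(K - wi**2 * m, [1, 0, 0])[0]) for wi in w])
dips = w[1:-1][(frf[1:-1] < frf[:-2]) & (frf[1:-1] < frf[2:])]
print("constrained modes (rad/s):", np.round(w_con, 2))
print("free-free FRF dips       :", np.round(dips, 2))
```

The two printed lists agree to grid resolution, which is exactly why interface-driven free-suspension testing can recover the launch-constrained modes.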
Sharp Boundary Inversion of 2D Magnetotelluric Data using Bayesian Method.
NASA Astrophysics Data System (ADS)
Zhou, S.; Huang, Q.
2017-12-01
Conventional magnetotelluric (MT) inversion methods rarely recover the distribution of subsurface resistivity with clear boundaries, even when the true model contains distinctly different blocks. To address this problem, we develop a Bayesian framework for inverting 2D MT data for sharp boundaries, using the boundary locations and interior resistivities as the random variables. First, we use other MT inversion results, such as those from ModEM, to analyze the resistivity distribution roughly. Then, we select suitable random variables and map them onto traditional staggered-grid parameters used by the finite-difference forward solver. Finally, we construct the posterior probability density (PPD), which contains all of the prior information and the model-data correlation, by Markov chain Monte Carlo (MCMC) sampling from the prior distribution. The depths, resistivities, and their uncertainties can be estimated, and the approach also supports sensitivity estimation. We applied the method to a synthetic case comprising two large anomalous blocks in a uniform background. When we impose boundary-smoothness and near-true-model constraints that mimic joint or constrained inversion, the model yields a more precise and focused depth distribution. Without constraints, the boundary can still be recovered, though not as well. Both inversions estimate resistivity well, and the constrained result has a lower root-mean-square misfit than the ModEM inversion result. The parameter sensitivities obtained from the PPD show that resistivity is resolved best, the center depth comes second, and the block sides are resolved worst.
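The MCMC step can be sketched as a random-walk Metropolis-Hastings loop over the boundary and resistivity parameters; the forward function below is a placeholder for the staggered-grid finite-difference MT solver, and all priors and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)

def forward(theta):
    """Placeholder for the finite-difference MT forward model.
    theta = (boundary_depth_km, log10_resistivity); returns fake 'data'."""
    d, r = theta
    return np.array([0.3 * d + r, 0.1 * d - 0.5 * r, r**2 / 10])

d_obs = forward((5.0, 2.0)) + rng.normal(0, 0.05, 3)   # synthetic observed data
sigma = 0.05

def log_post(theta):
    d, r = theta
    if not (0 < d < 20 and 0 < r < 4):                 # uniform prior box
        return -np.inf
    resid = forward(theta) - d_obs
    return -0.5 * np.sum(resid**2) / sigma**2

theta = np.array([10.0, 1.0])
lp = log_post(theta)
chain = []
for _ in range(20000):
    prop = theta + rng.normal(0, [0.3, 0.05])          # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:            # M-H acceptance rule
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
burn = np.array(chain[5000:])
print("posterior mean:", burn.mean(0), " std:", burn.std(0))
```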
NASA Astrophysics Data System (ADS)
Parviainen, Hannu
2015-10-01
PyLDTk automates the calculation of custom stellar limb darkening (LD) profiles and model-specific limb darkening coefficients (LDC) using the library of PHOENIX-generated specific intensity spectra by Husser et al. (2013). It facilitates exoplanet transit light curve modeling, especially transmission spectroscopy, where the modeling is carried out for custom narrow passbands. PyLDTk constructs model-specific priors on the limb darkening coefficients prior to the transit light curve modeling. It can also be directly integrated into the log posterior computation of any pre-existing transit modeling code, with minimal modifications, to constrain the LD model parameter space directly by the LD profile, allowing marginalization over the whole parameter space that can explain the profile without the need to approximate this constraint by a prior distribution. This is useful when using a high-order limb darkening model, where the coefficients are often correlated and priors estimated from tabulated values usually fail to include these correlations.
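A minimal usage sketch following the interface described in the PyLDTk documentation; treat the exact class and method names as assumptions to be checked against the installed version.

```python
# Sketch of typical PyLDTk usage; names follow the package's documented
# interface and should be verified against the installed release.
from ldtk import LDPSetCreator, BoxcarFilter

filters = [BoxcarFilter('a', 450, 550),      # three custom narrow passbands (nm)
           BoxcarFilter('b', 650, 750),
           BoxcarFilter('c', 850, 950)]

sc = LDPSetCreator(teff=(6400, 50),          # stellar parameters as (value, error)
                   logg=(4.5, 0.2),
                   z=(0.25, 0.05),
                   filters=filters)

ps = sc.create_profiles()                    # builds profiles from PHOENIX spectra
qc, qe = ps.coeffs_qd(do_mc=True)            # quadratic-law coefficients + errors
print(qc, qe)
```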
Charting the parameter space of the global 21-cm signal
NASA Astrophysics Data System (ADS)
Cohen, Aviad; Fialkov, Anastasia; Barkana, Rennan; Lotem, Matan
2017-12-01
The early star-forming Universe is still poorly constrained, with the properties of high-redshift stars, the first heating sources and reionization highly uncertain. This leaves observers planning 21-cm experiments with little theoretical guidance. In this work, we explore the possible range of high-redshift parameters including the star formation efficiency and the minimal mass of star-forming haloes; the efficiency, spectral energy distribution and redshift evolution of the first X-ray sources; and the history of reionization. These parameters are only weakly constrained by available observations, mainly the optical depth to the cosmic microwave background. We use realistic semi-numerical simulations to produce the global 21-cm signal over the redshift range z = 6-40 for each of 193 different combinations of the astrophysical parameters spanning the allowed range. We show that the expected signal fills a large parameter space, but with a fixed general shape for the global 21-cm curve. Even with our wide selection of models, we still find clear correlations between the key features of the global 21-cm signal and underlying astrophysical properties of the high-redshift Universe, namely the Ly α intensity, the X-ray heating rate and the production rate of ionizing photons. These correlations can be used to directly link future measurements of the global 21-cm signal to astrophysical quantities in a mostly model-independent way. We identify additional correlations that can be used as consistency checks.
The Supernovae Analysis Application (SNAP)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bayless, Amanda J.; Fryer, Christopher Lee; Wollaeger, Ryan Thomas
The SuperNovae Analysis aPplication (SNAP) is a new tool for the analysis of SN observations and validation of SN models. SNAP consists of a publicly available relational database with observational light curve, theoretical light curve, and correlation table sets with statistical comparison software, and a web interface available to the community. The theoretical models are intended to span a gridded range of parameter space. The goal is to have users upload new SN models or new SN observations and run the comparison software to determine correlations via the website. There are problems looming on the horizon that SNAP is beginning to solve. For example, large surveys will discover thousands of SNe annually. Frequently, the parameter space of a new SN event is unbounded. SNAP will be a resource to constrain parameters and determine if an event needs follow-up without spending resources to create new light curve models from scratch. Second, there is no rapidly available, systematic way to determine degeneracies between parameters, or even what physics is needed to model a realistic SN. The correlations made within the SNAP system are beginning to solve these problems.
The Supernovae Analysis Application (SNAP)
Bayless, Amanda J.; Fryer, Christopher Lee; Wollaeger, Ryan Thomas; ...
2017-09-06
The SuperNovae Analysis aPplication (SNAP) is a new tool for the analysis of SN observations and validation of SN models. SNAP consists of a publicly available relational database with observational light curve, theoretical light curve, and correlation table sets with statistical comparison software, and a web interface available to the community. The theoretical models are intended to span a gridded range of parameter space. The goal is to have users upload new SN models or new SN observations and run the comparison software to determine correlations via the website. There are problems looming on the horizon that SNAP is beginning to solve. For example, large surveys will discover thousands of SNe annually. Frequently, the parameter space of a new SN event is unbounded. SNAP will be a resource to constrain parameters and determine if an event needs follow-up without spending resources to create new light curve models from scratch. Second, there is no rapidly available, systematic way to determine degeneracies between parameters, or even what physics is needed to model a realistic SN. The correlations made within the SNAP system are beginning to solve these problems.
The Supernovae Analysis Application (SNAP)
NASA Astrophysics Data System (ADS)
Bayless, Amanda J.; Fryer, Chris L.; Wollaeger, Ryan; Wiggins, Brandon; Even, Wesley; de la Rosa, Janie; Roming, Peter W. A.; Frey, Lucy; Young, Patrick A.; Thorpe, Rob; Powell, Luke; Landers, Rachel; Persson, Heather D.; Hay, Rebecca
2017-09-01
The SuperNovae Analysis aPplication (SNAP) is a new tool for the analysis of SN observations and validation of SN models. SNAP consists of a publicly available relational database with observational light curve, theoretical light curve, and correlation table sets with statistical comparison software, and a web interface available to the community. The theoretical models are intended to span a gridded range of parameter space. The goal is to have users upload new SN models or new SN observations and run the comparison software to determine correlations via the website. There are problems looming on the horizon that SNAP is beginning to solve. For example, large surveys will discover thousands of SNe annually. Frequently, the parameter space of a new SN event is unbounded. SNAP will be a resource to constrain parameters and determine if an event needs follow-up without spending resources to create new light curve models from scratch. Second, there is no rapidly available, systematic way to determine degeneracies between parameters, or even what physics is needed to model a realistic SN. The correlations made within the SNAP system are beginning to solve these problems.
Technical Note: On the use of nudging for aerosol-climate model intercomparison studies
Zhang, K.; Wan, H.; Liu, X.; ...
2014-04-24
Nudging is an assimilation technique widely used in the development and evaluation of climate models. Constraining the simulated wind and temperature fields using global weather reanalysis facilitates more straightforward comparison between simulation and observation, and reduces uncertainties associated with natural variabilities of the large-scale circulation. On the other hand, the forcing introduced by nudging can be strong enough to change the basic characteristics of the model climate. In the paper we show that for the Community Atmosphere Model version 5, due to the systematic temperature bias in the standard model and the sensitivity of simulated ice formation to anthropogenic aerosol concentration, nudging towards reanalysis results in substantial reductions in the ice cloud amount and the impact of anthropogenic aerosols on longwave cloud forcing. In order to reduce discrepancies between the nudged and unconstrained simulations and meanwhile take the advantages of nudging, two alternative experimentation methods are evaluated. The first one constrains only the horizontal winds. The second method nudges both winds and temperature, but replaces the long-term climatology of the reanalysis by that of the model. Results show that both methods lead to substantially improved agreement with the free-running model in terms of the top-of-atmosphere radiation budget and cloud ice amount. The wind-only nudging is more convenient to apply, and provides higher correlations of the wind fields, geopotential height and specific humidity between simulation and reanalysis. This suggests nudging the horizontal winds but not temperature is a good strategy for the investigation of aerosol indirect effects through ice clouds, since it provides well-constrained meteorology without strongly perturbing the model's mean climate.
Technical Note: On the use of nudging for aerosol-climate model intercomparison studies
NASA Astrophysics Data System (ADS)
Zhang, K.; Wan, H.; Liu, X.; Ghan, S. J.; Kooperman, G. J.; Ma, P.-L.; Rasch, P. J.
2014-04-01
Nudging is an assimilation technique widely used in the development and evaluation of climate models. Constraining the simulated wind and temperature fields using global weather reanalysis facilitates more straightforward comparison between simulation and observation, and reduces uncertainties associated with natural variabilities of the large-scale circulation. On the other hand, the forcing introduced by nudging can be strong enough to change the basic characteristics of the model climate. In the paper we show that for the Community Atmosphere Model version 5, due to the systematic temperature bias in the standard model and the sensitivity of simulated ice formation to anthropogenic aerosol concentration, nudging towards reanalysis results in substantial reductions in the ice cloud amount and the impact of anthropogenic aerosols on longwave cloud forcing. In order to reduce discrepancies between the nudged and unconstrained simulations and meanwhile take the advantages of nudging, two alternative experimentation methods are evaluated. The first one constrains only the horizontal winds. The second method nudges both winds and temperature, but replaces the long-term climatology of the reanalysis by that of the model. Results show that both methods lead to substantially improved agreement with the free-running model in terms of the top-of-atmosphere radiation budget and cloud ice amount. The wind-only nudging is more convenient to apply, and provides higher correlations of the wind fields, geopotential height and specific humidity between simulation and reanalysis. This suggests nudging the horizontal winds but not temperature is a good strategy for the investigation of aerosol indirect effects through ice clouds, since it provides well-constrained meteorology without strongly perturbing the model's mean climate.
Gu, Yong; Angelaki, Dora E; DeAngelis, Gregory C
2014-01-01
Trial-by-trial covariations between neural activity and perceptual decisions (quantified by choice probability, CP) have been used to probe the contribution of sensory neurons to perceptual decisions. CPs are thought to be determined both by selective decoding of neural activity and by the structure of correlated noise among neurons, but the respective roles of these factors in creating CPs have been controversial. We used biologically constrained simulations to explore this issue, taking advantage of a peculiar pattern of CPs exhibited by multisensory neurons in area MSTd that represent self-motion. Although models that relied on correlated noise or selective decoding could both account for the peculiar pattern of CPs, predictions of the selective decoding model were substantially more consistent with various features of the neural and behavioral data. While correlated noise is essential to observe CPs, our findings suggest that selective decoding of neuronal signals also plays important roles. DOI: http://dx.doi.org/10.7554/eLife.02670.001 PMID:24986734
Zhang, Yongsheng; Wei, Heng; Zheng, Kangning
2017-01-01
Considering that metro network expansion brings more alternative routes, it is attractive to integrate the impacts of the route set and the interdependency among alternative routes on route choice probability into route choice modeling. Therefore, the formulation, estimation and application of a constrained multinomial probit (CMNP) route choice model in the metro network are carried out in this paper. The utility function is formulated with three components: the compensatory component is a function of influencing factors; the non-compensatory component measures the impacts of the route set on utility; and, following a multivariate normal distribution, the covariance of the error component is structured into three parts, representing the correlation among routes, the transfer variance of a route, and the unobserved variance, respectively. Because the model involves multidimensional integrals of the multivariate normal probability density function, the CMNP model is rewritten in hierarchical Bayes form and a Metropolis-Hastings sampling based Markov chain Monte Carlo approach is constructed to estimate all parameters. Based on Guangzhou Metro data, reliable estimation results are obtained. Furthermore, the proposed CMNP model shows a good forecasting performance for the calculation of route choice probabilities and a good application performance for transfer flow volume prediction. PMID:28591188
Caste load and the evolution of reproductive skew.
Holman, Luke
2014-01-01
Reproductive skew theory seeks to explain how reproduction is divided among group members in animal societies. Existing theory is framed almost entirely in terms of selection, though nonadaptive processes must also play some role in the evolution of reproductive skew. Here I propose that a genetic correlation between helper fecundity and breeder fecundity may frequently constrain the evolution of reproductive skew. This constraint is part of a wider phenomenon that I term "caste load," which is defined as the decline in mean fitness caused by caste-specific selection pressures, that is, differential selection on breeding and nonbreeding individuals. I elaborate the caste load hypothesis using quantitative and population genetic arguments and individual-based simulations. Although selection can sometimes erode genetic correlations and resolve caste load, this may be constrained when mutations have similar pleiotropic effects on breeder and helper traits. I document evidence for caste load, identify putative genomic adaptations to it, and suggest future research directions. The models highlight the value of considering adaptation within the boundaries imposed by genetic architecture and incidentally reaffirm that monogamy promotes the evolutionary transition to eusociality.
NASA Astrophysics Data System (ADS)
Wang, Yaping
One of the primary goals of the spin physics program at STAR is to constrain the polarized gluon distribution function, Δg(x), by measuring the longitudinal double-spin asymmetry (ALL) of various final-state channels. Using a jet in the mid-rapidity region |η| < 0.9 correlated with an azimuthally back-to-back π0 in the forward rapidity region 0.8 < η < 2.0 provides a new possibility to access the Δg(x) distribution at Bjorken-x down to 0.01. Compared to inclusive jet or inclusive π0 measurements, this channel also makes it possible to constrain the initial parton kinematics. In these proceedings, we present the status of the analysis of the π0-jet ALL in longitudinally polarized proton+proton collisions at √s = 510 GeV with 80 pb⁻¹ of data taken during the 2012 RHIC run. We also compare the projected ALL uncertainties to theoretical predictions of the ALL from next-to-leading order (NLO) model calculations with different polarized parton distribution functions.
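Schematically, the asymmetry is formed from helicity-sorted yields corrected by the beam polarizations and the relative luminosity; all yields and polarization values below are placeholders, not STAR data.

```python
import numpy as np

def a_ll(n_same, n_opp, P1, P2, R=1.0):
    """A_LL from helicity-sorted yields; R is the relative luminosity of
    same- vs opposite-helicity bunch crossings."""
    raw = (n_same - R * n_opp) / (n_same + R * n_opp)
    stat = 1.0 / (P1 * P2 * np.sqrt(n_same + n_opp))   # approximate stat. error
    return raw / (P1 * P2), stat

# Placeholder yields for one pT bin; P ~ 0.55 per beam is typical for RHIC.
val, err = a_ll(n_same=51200, n_opp=50800, P1=0.55, P2=0.55)
print(f"A_LL = {val:.4f} +/- {err:.4f}")
```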
Mantle viscosity structure constrained by joint inversions of seismic velocities and density
NASA Astrophysics Data System (ADS)
Rudolph, M. L.; Moulik, P.; Lekic, V.
2017-12-01
The viscosity structure of Earth's deep mantle affects the thermal evolution of Earth, the ascent of mantle upwellings, sinking of subducted oceanic lithosphere, and the mixing of compositional heterogeneities in the mantle. Modeling the long-wavelength dynamic geoid allows us to constrain the radial viscosity profile of the mantle. Typically, in inversions for the mantle viscosity structure, wavespeed variations are mapped into density variations using a constant- or depth-dependent scaling factor. Here, we use a newly developed joint model of anisotropic Vs, Vp, density and transition zone topographies to generate a suite of solutions for the mantle viscosity structure directly from the seismologically constrained density structure. The density structure used to drive our forward models includes contributions from both thermal and compositional variations, including important contributions from compositionally dense material in the Large Low Velocity Provinces at the base of the mantle. These compositional variations have been neglected in the forward models used in most previous inversions and have the potential to significantly affect large-scale flow and thus the inferred viscosity structure. We use a transdimensional, hierarchical, Bayesian approach to solve the inverse problem, and our solutions for viscosity structure include an increase in viscosity below the base of the transition zone, in the shallow lower mantle. Using geoid dynamic response functions and an analysis of the correlation between the observed geoid and mantle structure, we demonstrate the underlying reason for this inference. Finally, we present a new family of solutions in which the data uncertainty is accounted for using covariance matrices associated with the mantle structure models.
NASA Astrophysics Data System (ADS)
Ballantyne, David R.
2016-04-01
Deep X-ray surveys have provided a comprehensive and largely unbiased view of AGN evolution stretching back to z ~ 5. However, it has been challenging to use the survey results to connect this evolution to the cosmological environment that AGNs inhabit. Exploring this connection will be crucial to understanding the triggering mechanisms of AGNs and how these processes manifest in observations at all wavelengths. In anticipation of upcoming wide-field X-ray surveys that will allow quantitative analysis of AGN environments, we present a method to observationally constrain the Conditional Luminosity Function (CLF) of AGNs at a specific redshift. Once measured, the CLF allows the calculation of the AGN bias, mean dark matter halo mass, AGN lifetime, halo occupation number, and AGN correlation function - all as a function of luminosity. The CLF can be constrained using a measurement of the X-ray luminosity function and the correlation length at different luminosities. The method is demonstrated at z ≈ 0 and 0.9, and clear luminosity dependence in the AGN bias and mean halo mass is predicted at both redshifts. The results support the idea that there are at least two different modes of AGN triggering: one, at high luminosity, that only occurs in high-mass, highly biased haloes, and one that can occur over a wide range of halo masses and leads to luminosities that are correlated with halo mass. This latter mode dominates at z < 0.9. The CLFs for Type 2 and Type 1 AGNs are also constrained at z ≈ 0, and we find evidence that unobscured quasars are more likely to be found in higher-mass halos than obscured quasars. Thus, the AGN unification model seems to fail at quasar luminosities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dainotti, Maria Giovanna; Petrosian, Vahe'; Singal, Jack
2013-09-10
Gamma-ray bursts (GRBs), which have been observed up to redshifts z ≈ 9.5, can be good probes of the early universe and have the potential to test cosmological models. Dainotti's analysis of GRB Swift afterglow light curves with known redshifts and a definite X-ray plateau shows an anti-correlation between the rest-frame time when the plateau ends (the plateau end time) and the calculated luminosity at that time (or approximately an anti-correlation between plateau duration and luminosity). Here, we present an update of this correlation with a larger data sample of 101 GRBs with good light curves. Since some of this correlation could result from the redshift dependences of these intrinsic parameters, namely, their cosmological evolution, we use the Efron-Petrosian method to reveal the intrinsic nature of this correlation. We find that a substantial part of the correlation is intrinsic and describe how we recover it and how this can be used to constrain physical models of the plateau emission, the origin of which is still unknown. The present result could help to clarify the debated nature of the plateau emission.
Using Neural Networks to Describe Tracer Correlations
NASA Technical Reports Server (NTRS)
Lary, D. J.; Mueller, M. D.; Mussa, H. Y.
2003-01-01
Neural networks are ideally suited to describe the spatial and temporal dependence of tracer-tracer correlations. The neural network performs well even in regions where the correlations are less compact and normally a family of correlation curves would be required. For example, the CH4-N2O correlation can be well described using a neural network trained with the latitude, pressure, time of year, and CH4 volume mixing ratio (v.m.r.). In this study a neural network using Quickprop learning and one hidden layer with eight nodes was able to reproduce the CH4-N2O correlation with a correlation coefficient of 0.9995. Such an accurate representation of tracer-tracer correlations allows more use to be made of long-term datasets to constrain chemical models, such as the dataset from the Halogen Occultation Experiment (HALOE), which has continuously observed CH4 (but not N2O) from 1991 to the present. The neural network Fortran code used is available for download.
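The approach can be sketched with a small feed-forward network; one hidden layer of eight nodes mirrors the abstract, though scikit-learn's MLPRegressor is used here in place of Quickprop, and the CH4-N2O relationship below is a synthetic placeholder rather than HALOE data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(9)
n = 5000
lat = rng.uniform(-90, 90, n)
prs = rng.uniform(1, 300, n)                   # pressure (hPa)
doy = rng.uniform(0, 365, n)                   # day of year
ch4 = rng.uniform(0.2, 1.8, n)                 # CH4 v.m.r. (ppmv)

# Synthetic stand-in for the real CH4-N2O relationship (placeholder formula).
n2o = 320 * (ch4 / 1.8) ** 1.3 + 0.05 * np.abs(lat) + rng.normal(0, 3, n)

X = np.column_stack([lat, prs,
                     np.sin(2 * np.pi * doy / 365),   # encode seasonality
                     np.cos(2 * np.pi * doy / 365), ch4])
X = StandardScaler().fit_transform(X)

net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000, random_state=0)
net.fit(X[:4000], n2o[:4000])
r = np.corrcoef(net.predict(X[4000:]), n2o[4000:])[0, 1]
print(f"held-out correlation: {r:.4f}")
```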
Could geoengineering research help answer one of the biggest questions in climate science?
NASA Astrophysics Data System (ADS)
Wood, Robert; Ackerman, Thomas; Rasch, Philip; Wanser, Kelly
2017-07-01
Anthropogenic aerosol impacts on clouds constitute the largest source of uncertainty in quantifying the radiative forcing of climate, and hinder our ability to determine Earth's climate sensitivity to greenhouse gas increases. Representation of aerosol-cloud interactions in global models is particularly challenging because these interactions occur on typically unresolved scales. Observational studies show influences of aerosol on clouds, but correlations between aerosol and clouds are insufficient to constrain aerosol forcing because of the difficulty in separating aerosol and meteorological impacts. In this commentary, we argue that this current impasse may be overcome with the development of approaches to conduct control experiments whereby aerosol particle perturbations can be introduced into patches of marine low clouds in a systematic manner. Such cloud perturbation experiments constitute a fresh approach to climate science and would provide unprecedented data to untangle the effects of aerosol particles on cloud microphysics and the resulting reflection of solar radiation by clouds. The control experiments would provide a critical test of high-resolution models that are used to develop an improved representation of aerosol-cloud interactions needed to better constrain aerosol forcing in global climate models.
Constrained variability of modeled T:ET ratio across biomes
NASA Astrophysics Data System (ADS)
Fatichi, Simone; Pappas, Christoforos
2017-07-01
A large variability (35-90%) in the ratio of transpiration to total evapotranspiration (referred to here as T:ET) across biomes or even at the global scale has been documented by a number of studies carried out with different methodologies. Previous empirical results also suggest that T:ET does not covary with mean precipitation and has a positive dependence on leaf area index (LAI). Here we use a mechanistic ecohydrological model, with a refined process-based description of evaporation from the soil surface, to investigate the variability of T:ET across biomes. Numerical results reveal a more constrained range and higher mean of T:ET (70 ± 9%, mean ± standard deviation) when compared to observation-based estimates. T:ET is confirmed to be independent of mean precipitation, while it is found to be correlated with LAI seasonally but uncorrelated across multiple sites. Larger LAI increases evaporation from interception but diminishes ground evaporation with the two effects largely compensating each other. These results offer mechanistic model-based evidence to the ongoing research about the patterns of T:ET and the factors influencing its magnitude across biomes.
Statistical Issues in Galaxy Cluster Cosmology
NASA Technical Reports Server (NTRS)
Mantz, Adam
2013-01-01
The number and growth of massive galaxy clusters are sensitive probes of cosmological structure formation. Surveys at various wavelengths can detect clusters to high redshift, but the fact that cluster mass is not directly observable complicates matters, requiring us to simultaneously constrain scaling relations of observable signals with mass. The problem can be cast as one of regression, in which the data set is truncated, the (cosmology-dependent) underlying population must be modeled, and strong, complex correlations between measurements often exist. Simulations of cosmological structure formation provide a robust prediction for the number of clusters in the Universe as a function of mass and redshift (the mass function), but they cannot reliably predict the observables used to detect clusters in sky surveys (e.g. X-ray luminosity). Consequently, observers must constrain observable-mass scaling relations using additional data, and use the scaling relation model in conjunction with the mass function to predict the number of clusters as a function of redshift and luminosity.
Kaye, T.N.; Pyke, David A.
2003-01-01
Population viability analysis is an important tool for conservation biologists, and matrix models that incorporate stochasticity are commonly used for this purpose. However, stochastic simulations may require assumptions about the distribution of matrix parameters, and modelers often select a statistical distribution that seems reasonable without sufficient data to test its fit. We used data from long-term (5-10 year) studies with 27 populations of five perennial plant species to compare seven methods of incorporating environmental stochasticity. We estimated stochastic population growth rate (a measure of viability) using a matrix-selection method, in which whole observed matrices were selected at random at each time step of the model. In addition, we drew matrix elements (transition probabilities) at random using various statistical distributions: beta, truncated-gamma, truncated-normal, triangular, uniform, or discontinuous/observed. Recruitment rates were held constant at their observed mean values. Two methods of constraining stage-specific survival to ≤100% were also compared. Different methods of incorporating stochasticity and constraining matrix column sums interacted in their effects and resulted in different estimates of stochastic growth rate (differing by up to 16%). Modelers should be aware that when constraining stage-specific survival to 100%, different methods may introduce different levels of bias in transition element means, and when this happens, different distributions for generating random transition elements may result in different viability estimates. There was no species effect on the results and the growth rates derived from all methods were highly correlated with one another. We conclude that the absolute value of population viability estimates is sensitive to model assumptions, but the relative ranking of populations (and management treatments) is robust. Furthermore, these results are applicable to a range of perennial plants and possibly other life histories.
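The matrix-selection method described above is compact enough to sketch directly. The following Python fragment (with hypothetical stage-structured matrices standing in for the study's observed transitions) estimates the stochastic growth rate log λs by drawing a whole observed matrix at random at each time step:

import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 3-stage transition matrices observed in different years
# (stages: seedling, juvenile, adult); recruitment is held at its mean,
# as in the study, by keeping the top-row fecundities fixed.
A_years = [
    np.array([[0.10, 0.00, 2.00],
              [0.30, 0.40, 0.00],
              [0.00, 0.35, 0.85]]),
    np.array([[0.05, 0.00, 2.00],
              [0.20, 0.50, 0.00],
              [0.00, 0.25, 0.90]]),
    np.array([[0.15, 0.00, 2.00],
              [0.35, 0.30, 0.00],
              [0.00, 0.40, 0.80]]),
]

def stochastic_growth_rate(matrices, t_max=20000):
    """Matrix-selection method: a whole observed matrix is drawn at
    random at each time step; log lambda_s is the average log growth."""
    n = np.ones(matrices[0].shape[0])
    total = 0.0
    for _ in range(t_max):
        A = matrices[rng.integers(len(matrices))]
        n = A @ n
        s = n.sum()
        total += np.log(s)
        n /= s  # renormalize to avoid overflow
    return total / t_max

print(f"log lambda_s ~ {stochastic_growth_rate(A_years):.4f}")

Drawing whole matrices preserves the observed within-year correlations among transition elements, which is exactly what the element-wise random draws compared in the study do not.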
NASA Astrophysics Data System (ADS)
Garcia-Appadoo, D. A.; West, A. A.; Dalcanton, J. J.; Cortese, L.; Disney, M. J.
2009-03-01
We have used the Parkes Multibeam system and the Sloan Digital Sky Survey to assemble a sample of 195 galaxies selected originally from their HI signature to avoid biases against unevolved or low surface brightness objects. For each source nine intrinsic properties are measured homogeneously, as well as inclination and an optical spectrum. The sample, which should be almost entirely free of either misidentification or confusion, includes a wide diversity of galaxies ranging from inchoate, low surface brightness dwarfs to giant spirals. Despite this diversity there are five clear correlations among their properties. They include a common dynamical mass-to-light ratio within their optical radii, a correlation between surface brightness and luminosity and a common HI surface density. Such correlations should provide strong constraints on models of galaxy formation and evolution.
Reflected stochastic differential equation models for constrained animal movement
Hanks, Ephraim M.; Johnson, Devin S.; Hooten, Mevin B.
2017-01-01
Movement for many animal species is constrained in space by barriers such as rivers, shorelines, or impassable cliffs. We develop an approach for modeling animal movement constrained in space by considering a class of constrained stochastic processes, reflected stochastic differential equations. Our approach generalizes existing methods for modeling unconstrained animal movement. We present methods for simulation and inference based on augmenting the constrained movement path with a latent unconstrained path and illustrate this augmentation with a simulation example and an analysis of telemetry data from a Steller sea lion (Eumetopias jubatus) in southeast Alaska.
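A minimal numerical sketch of a reflected diffusion (not the authors' implementation, which uses the latent-path augmentation for inference) makes the construction concrete: simulate the unconstrained Euler-Maruyama step, then reflect any excursion across the barrier back into the domain.

import numpy as np

rng = np.random.default_rng(1)

def simulate_reflected(n_steps=1000, dt=0.1, sigma=1.0):
    """Euler-Maruyama simulation of 2-D Brownian-like movement whose
    y-coordinate is reflected at a shoreline at y = 0 (domain y >= 0)."""
    path = np.zeros((n_steps, 2))
    path[0] = (0.0, 5.0)
    for t in range(1, n_steps):
        # latent unconstrained proposal ...
        x, y = path[t - 1] + sigma * np.sqrt(dt) * rng.standard_normal(2)
        # ... mapped to the constrained path by reflection at the barrier
        path[t] = (x, abs(y))
    return path

path = simulate_reflected()
print(bool((path[:, 1] >= 0).all()))  # True: movement confined to y >= 0

The reflection map from the latent unconstrained position to the constrained one is the same device the paper exploits for inference, where the latent path is treated as an augmented variable.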
Conserved charge fluctuations at vanishing and non-vanishing chemical potential
NASA Astrophysics Data System (ADS)
Karsch, Frithjof
2017-11-01
Up to 6th order cumulants of fluctuations of net baryon-number, net electric charge and net strangeness as well as correlations among these conserved charge fluctuations are now being calculated in lattice QCD. These cumulants provide a wealth of information on the properties of strong-interaction matter in the transition region from the low temperature hadronic phase to the quark-gluon plasma phase. They can be used to quantify deviations from hadron resonance gas (HRG) model calculations which frequently are used to determine thermal conditions realized in heavy ion collision experiments. Already some second order cumulants like the correlations between net baryon-number and net strangeness or net electric charge differ significantly at temperatures above 155 MeV in QCD and HRG model calculations. We show that these differences increase at non-zero baryon chemical potential constraining the applicability range of HRG model calculations to even smaller values of the temperature.
Epstein, Scott A; Riipinen, Ilona; Donahue, Neil M
2010-01-15
To model the temperature-induced partitioning of semivolatile organics in laboratory experiments or atmospheric models, one must know the appropriate heats of vaporization. Current treatments typically assume a constant value of the heat of vaporization or else use specific values from a small set of surrogate compounds. With published experimental vapor-pressure data from over 800 organic compounds, we have developed a semiempirical correlation between the saturation concentration (C*, μg m⁻³) and the heat of vaporization (ΔHVAP, kJ mol⁻¹) for organics in the volatility basis set. Near room temperature, ΔHVAP = -11 log10 C*(300 K) + 129. Knowledge of the relationship between C* and ΔHVAP constrains a free parameter in thermodenuder data analysis. A thermodenuder model using our ΔHVAP values agrees well with thermal behavior observed in laboratory experiments.
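The reported fit is simple to apply in practice. A short sketch (using the abstract's correlation, plus a standard Clausius-Clapeyron temperature scaling of C* commonly used with the volatility basis set; the scaling is an assumption here, not part of the abstract):

import numpy as np

R = 8.314e-3  # gas constant, kJ mol^-1 K^-1

def dh_vap(c_star_300):
    """Heat of vaporization (kJ/mol) from the semiempirical fit
    dHvap = -11 log10(C*) + 129, with C* in ug m^-3 at 300 K."""
    return -11.0 * np.log10(c_star_300) + 129.0

def c_star(c_star_300, T):
    """Clausius-Clapeyron shift of the saturation concentration with
    temperature (assumed form; not from the abstract)."""
    dH = dh_vap(c_star_300)
    return c_star_300 * (300.0 / T) * np.exp(-(dH / R) * (1.0 / T - 1.0 / 300.0))

print(dh_vap(1.0))         # 129.0 kJ/mol for C* = 1 ug m^-3
print(c_star(1.0, 280.0))  # volatility drops sharply on cooling to 280 K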
A marked correlation function for constraining modified gravity models
NASA Astrophysics Data System (ADS)
White, Martin
2016-11-01
Future large scale structure surveys will provide increasingly tight constraints on our cosmological model. These surveys will report results on the distance scale and growth rate of perturbations through measurements of Baryon Acoustic Oscillations and Redshift-Space Distortions. It is interesting to ask: what further analyses should become routine, so as to test as-yet-unknown models of cosmic acceleration? Models which aim to explain the accelerated expansion rate of the Universe by modifications to General Relativity often invoke screening mechanisms which can imprint a non-standard density dependence on their predictions. This suggests density-dependent clustering as a `generic' constraint. This paper argues that a density-marked correlation function provides a density-dependent statistic which is easy to compute and report and requires minimal additional infrastructure beyond what is routinely available to such survey analyses. We give one realization of this idea and study it using low order perturbation theory. We encourage groups developing modified gravity theories to see whether such statistics provide discriminatory power for their models.
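A density-marked correlation function is indeed cheap to compute. The sketch below uses a naive pair count on a toy point set, with a hypothetical density-derived mark (the paper's specific mark transformation is not reproduced here); the statistic is the ratio of mark-weighted to unweighted pair counts, normalized by the mean mark squared.

import numpy as np
from scipy.spatial import cKDTree
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 100.0, size=(2000, 3))  # toy 3-D point set

# Mark each point by a function of local density; here overdense regions
# are down-weighted, a screening-motivated (hypothetical) choice.
tree = cKDTree(pos)
counts = tree.query_ball_point(pos, r=5.0, return_length=True)
marks = 1.0 / (1.0 + np.asarray(counts))

bins = np.linspace(1.0, 20.0, 11)
d = pdist(pos)                                # condensed pair distances
i, j = np.triu_indices(len(pos), k=1)         # same pair ordering as pdist
ww = marks[i] * marks[j]                      # product of marks per pair

dd, _ = np.histogram(d, bins=bins)
wdd, _ = np.histogram(d, bins=bins, weights=ww)
M = (wdd / dd) / marks.mean() ** 2            # marked correlation M(r)
print(np.round(M, 3))

For an unclustered point set with density-independent marks, M(r) ≈ 1 on all scales; a screened modified-gravity model would imprint a scale-dependent departure.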
Constraining external reverse shock physics of gamma-ray bursts from ROTSE-III limits
NASA Astrophysics Data System (ADS)
Cui, Xiao-Hong; Zou, Yuan-Chuan; Wei, Jun-Jie; Zheng, Wei-Kang; Wu, Xue-Feng
2018-02-01
Assuming that early optical emission is dominated by the external reverse shock (RS) in the standard model of gamma-ray bursts (GRBs), we constrain RS models and the initial Lorentz factor Γ0 of the outflows using the ROTSE-III observations. We consider two cases of RS behaviour: relativistic shock and non-relativistic shock. For a homogeneous interstellar medium (ISM) and the wind circum-burst environment, constraints can be achieved from the fact that the peak flux Fν at the RS crossing time should be lower than the observed upper limit Fν, limit. We consider the different spectral regimes in which the observed optical frequency νopt may lie, which are set by the ordering of the minimum synchrotron frequency νm and the cooling frequency νc. Considering the homogeneous and wind environments around GRBs, we find that the relativistic RS case can be constrained, with (upper and lower) limits on Γ0 spanning a large range from a few hundred to a few thousand for 36 GRBs reported by ROTSE-III. Constraints on the non-relativistic RS case are achieved with limits of Γ0 ranging from ∼30 to ∼350 for 26 bursts. The lower limits of Γ0 achieved for the relativistic RS model are disfavored based on the previously discovered correlation between the initial Lorentz factor Γ0 and the isotropic gamma-ray energy Eγ, iso released in the prompt phase.
Multi-Species Inversion and IAGOS Airborne Data for a Better Constraint of Continental Scale Fluxes
NASA Astrophysics Data System (ADS)
Boschetti, F.; Gerbig, C.; Janssens-Maenhout, G. G. A.; Thouret, V.; Totsche, K. U.; Nedelec, P.; Marshall, J.
2016-12-01
Airborne measurements of CO2, CO, and CH4 in the context of IAGOS (In-service Aircraft for a Global Observing System) will provide profiles from take-off and landing of airliners. These observations are useful for constraining sources and sinks in the vicinity of major metropolitan areas. A proposed improvement of the top-down method to constrain sources and sinks is the use of a multi-species inversion. Different species such as CO2 and CO have partially overlapping emission patterns for given fuel-combustion related sectors, and thus share part of the uncertainties, both related to the a priori knowledge of emissions and to model-data mismatch error. Our approach employs a regional modeling framework that combines the Lagrangian particle dispersion model STILT with the high resolution (10 km x 10 km) EDGARv4.3 emission inventory, differentiated by emission sector and fuel type for CO2, CO, and CH4, and combined with VPRM for biospheric fluxes of CO2. We validated the modeling framework with observations of CO profiles available through IAGOS. Using synthetic IAGOS profile observations, we evaluate the benefit of exploiting correlations between different species' uncertainties for the performance of the atmospheric inversion. With this approach we were able to reproduce CO observations with an average correlation of 0.56. Yet, simulated mixing ratios were lower by a factor of 2.3, reflecting a low bias in the emission inventory. The mean uncertainty reduction achieved for CO2 fossil fuel emissions amounts to 41%; for the photosynthesis and respiration fluxes it is 41% and 45%, respectively. For CO and CH4 the uncertainty reduction is roughly 62% and 66%, respectively. Considering correlations between different species, the posterior uncertainty can be reduced by up to 23%; this reduction depends on the assumed error structure of the prior and on the considered timeframe. The study suggests a significant constraint on regional emissions using multi-species inversions of IAGOS in-situ observations.
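The benefit of cross-species error correlations can be seen in a toy linear Gaussian inversion (all numbers hypothetical): when prior flux errors of CO2 and CO from a shared combustion sector are correlated, the better-observed CO constrains the CO2 posterior as well.

import numpy as np

H = np.eye(2)                      # toy observation operator
R = np.diag([1.0, 0.2])            # obs-error covariance: CO better observed

def posterior_cov(prior_corr):
    """Posterior covariance A = (B^-1 + H^T R^-1 H)^-1 of a linear
    Gaussian inversion, for a given prior error correlation."""
    B = np.array([[1.0, prior_corr],
                  [prior_corr, 1.0]])
    return np.linalg.inv(np.linalg.inv(B) + H.T @ np.linalg.inv(R) @ H)

for rho in (0.0, 0.8):
    A = posterior_cov(rho)
    reduction = 1.0 - np.sqrt(A[0, 0])  # prior sd of the CO2 flux error is 1
    print(f"prior corr {rho}: CO2 uncertainty reduction = {reduction:.0%}")

With ρ = 0 the CO2 uncertainty reduction is about 29%; with ρ = 0.8 it rises to roughly 44%, the same mechanism by which a multi-species setup tightens the posterior.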
Rise in central west Greenland surface melt unprecedented over the last three centuries
NASA Astrophysics Data System (ADS)
Trusel, Luke; Das, Sarah; Osman, Matthew; Evans, Matthew; Smith, Ben; McConnell, Joe; Noël, Brice; van den Broeke, Michiel
2017-04-01
Greenland Ice Sheet surface melting has intensified and expanded over the last several decades and is now a leading component of ice sheet mass loss. Here, we constrain the multi-century temporal evolution of surface melt across central west Greenland by quantifying layers of refrozen melt within well-dated firn and ice cores collected in 2014 and 2015, as well as from a core collected in 2004. We find significant agreement among ice core, satellite, and regional climate model melt datasets over recent decades, confirming the fidelity of the ice core melt stratigraphy as a reliable record of past variability in the magnitude of surface melt. We also find a significant correlation between the melt records derived from our new 100-m GC-2015 core (2436 m.a.s.l.) and the older (2004) 150-m D5 core (2472 m.a.s.l.) located 50 km to the southeast. This agreement demonstrates the robustness of the ice core-derived melt histories and the potential for reconstructing regional melt evolution from a single site, despite local variability in melt percolation and refreeze processes. Our array of upper percolation zone cores reveals that although the overall frequency of melt at these sites has not increased, the intensification of melt over the last three decades is unprecedented within at least the last 365 years. Utilizing the regional climate model RACMO 2.3, we show that this melt intensification is a nonlinear response to warming summer air temperatures, thus underscoring the heightened sensitivity of this sector of Greenland to further climate warming. Finally, we examine spatial correlations between the ice core melt records and modeled melt fields across the ice sheet to assess the broader representation of each ice core record. This analysis reveals wide-ranging significant correlations, including to modeled meltwater runoff. As such, our ice core melt records may furthermore offer unique, observationally-constrained insights into past variability in ice sheet mass loss.
Constraints on the dark matter and dark energy interactions from weak lensing bispectrum tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
An, Rui; Feng, Chang; Wang, Bin, E-mail: an_rui@sjtu.edu.cn, E-mail: chang.feng@uci.edu, E-mail: wang_b@sjtu.edu.cn
We estimate uncertainties of cosmological parameters for phenomenological interacting dark energy models using the weak lensing convergence power spectrum and bispectrum. We focus on bispectrum tomography and examine how well the weak lensing bispectrum with tomography can constrain the interactions between dark sectors, as well as other cosmological parameters. Employing the Fisher matrix analysis, we forecast parameter uncertainties derived from weak lensing bispectra with a two-bin tomography and place upper bounds on the strength of the interactions between the dark sectors. The cosmic shear will be measured from upcoming weak lensing surveys with high sensitivity, thus enabling us to use the higher order correlation functions of weak lensing to constrain the interaction between dark sectors and potentially provide more stringent results when combined with other observations.
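A Fisher-matrix forecast of this kind takes only a few lines; the sketch below uses a toy binned data vector with hypothetical derivatives (standing in for the tomographic bispectrum and the dark-sector interaction strength), not the paper's actual observables.

import numpy as np

# Toy data vector: four bins; derivatives with respect to two parameters,
# e.g. an interaction strength xi and sigma_8 (values are hypothetical).
dB_dxi = np.array([0.8, 1.1, 1.5, 2.0])
dB_ds8 = np.array([2.0, 2.4, 2.9, 3.5])
cov = np.diag([0.5, 0.6, 0.8, 1.2])          # Gaussian covariance of the bins

derivs = np.vstack([dB_dxi, dB_ds8])         # shape (n_params, n_bins)
F = derivs @ np.linalg.inv(cov) @ derivs.T   # Fisher matrix F_ij

sigma = np.sqrt(np.diag(np.linalg.inv(F)))   # marginalized 1-sigma forecasts
print(dict(zip(["xi", "sigma_8"], np.round(sigma, 3))))

The marginalized errors come from the inverse Fisher matrix, so strongly degenerate derivative vectors inflate both forecasts; adding bispectrum information to the power spectrum helps break such degeneracies.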
Constrained optimization via simulation models for new product innovation
NASA Astrophysics Data System (ADS)
Pujowidianto, Nugroho A.
2017-11-01
We consider the problem of constrained optimization where the decision makers aim to optimize the primary performance measure while constraining the secondary performance measures. This paper provides a brief overview of stochastically constrained optimization via discrete event simulation. Most review papers tend to be methodology-based. This review attempts to be problem-based, as decision makers may have already decided on the problem formulation. We consider constrained optimization models as there are usually constraints on secondary performance measures as trade-offs in new product development. The paper starts by laying out different possible methods and the reasons for using constrained optimization via simulation models. It is then followed by a review of different simulation optimization approaches to address constrained optimization, depending on the number of decision variables, the type of constraints, and the risk preferences of the decision makers in handling uncertainties.
INTERFERENCE AS AN ORIGIN OF THE PEAKED NOISE IN ACCRETING X-RAY BINARIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veledina, Alexandra, E-mail: alexandra.veledina@gmail.com
2016-12-01
We propose a physical model for the peaked noise in the X-ray power density spectra of accreting X-ray binaries. We interpret its appearance as an interference of two Comptonization continua: one coming from the upscattering of seed photons from the cold thin disk and the other fed by the synchrotron emission of the hot flow. Variations of both X-ray components are caused by fluctuations in mass accretion rate, but there is a delay between them corresponding to the propagation timescale from the disk Comptonization radius to the region of synchrotron Comptonization. If the disk and synchrotron Comptonization are correlated, the humps in the power spectra are harmonically related and the dips between them appear at frequencies related as odd numbers 1:3:5. If they are anti-correlated, the humps are related as 1:3:5, but the dips are harmonically related. Similar structures are expected to be observed in accreting neutron star binaries and supermassive black holes. The delay can be easily recovered from the frequency of the peaked noise and further used to constrain the combination of the viscosity parameter and disk height-to-radius ratio α(H/R)² of the accretion flow. We model multi-peak power spectra of the black hole X-ray binaries GX 339–4 and XTE J1748–288 to constrain these parameters.
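The 1:3:5 structure follows from a transfer-function argument: if the second Comptonization continuum is a scaled, delayed copy of the first, x(t) = s(t) + a s(t - τ), then the observed power is P(f) = |1 + a exp(-2πifτ)|² Ps(f), with dips at odd multiples of 1/(2τ) when the components are correlated (a > 0). A short sketch (hypothetical a and τ):

import numpy as np

def transfer(f, a, tau):
    """Interference filter |1 + a exp(-2 pi i f tau)|^2 imprinted on the
    intrinsic power spectrum by a delayed, scaled second component."""
    return np.abs(1.0 + a * np.exp(-2j * np.pi * f * tau)) ** 2

tau = 0.1                              # propagation delay, s (hypothetical)
f = np.linspace(0.1, 60.0, 100000)

for a in (+0.5, -0.5):                 # correlated vs. anti-correlated
    T = transfer(f, a, tau)
    is_dip = np.r_[False, (T[1:-1] < T[:-2]) & (T[1:-1] < T[2:]), False]
    print(f"a = {a:+.1f}: first dips at {np.round(f[is_dip][:3], 1)} Hz")

For a = +0.5 the dips fall at 5, 15, 25 Hz (ratios 1:3:5); for a = -0.5 they fall at 10, 20, 30 Hz, i.e. harmonically, with the humps at the odd ratios, matching the behaviour described in the abstract.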
NASA Astrophysics Data System (ADS)
Jilinski, Pavel; Meju, Max A.; Fontes, Sergio L.
2013-10-01
The commonest technique for determination of the continental-oceanic crustal boundary or transition (COB) zone is based on locating and visually correlating bathymetric and potential field anomalies and constructing crustal models constrained by seismic data. In this paper, we present a simple method for spatial correlation of bathymetric and potential field geophysical anomalies. Angular differences between gradient directions are used to determine different types of correlation between gravity and bathymetric or magnetic data. It is found that the relationship between bathymetry and gravity anomalies can be correctly identified using this method. It is demonstrated, by comparison with previously published models for the southwest African margin, that this method enables the demarcation of the zone of transition from oceanic to continental crust, assuming that it is associated with geophysical anomalies, which can be correlated using gradient directions rather than magnitudes. We also applied this method, supported by 2-D gravity modelling, to the more complex Liberia and Cote d'Ivoire-Ghana sectors of the West African transform margin and obtained results that are in remarkable agreement with past predictions of the COB in that region. We suggest the use of this method for a first-pass interpretation as a prelude to rigorous modelling of the COB in frontier areas.
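The gradient-direction comparison is easy to reproduce on co-registered grids. A minimal sketch (toy fields; the paper's classification of correlation types and any thresholds are not reproduced):

import numpy as np

def gradient_direction(grid):
    """Direction (radians) of the spatial gradient of a 2-D field."""
    gy, gx = np.gradient(grid)   # derivatives along rows (y) and columns (x)
    return np.arctan2(gy, gx)

def angular_difference(field_a, field_b):
    """Smallest angle between gradient directions of two grids: near 0 where
    anomalies correlate, near pi where they anti-correlate."""
    d = gradient_direction(field_a) - gradient_direction(field_b)
    return np.abs(np.angle(np.exp(1j * d)))  # wrap into [0, pi]

# Toy example: a gravity anomaly tracking bathymetry plus a regional trend.
x, y = np.meshgrid(np.linspace(0, 10, 200), np.linspace(0, 10, 200))
bathymetry = np.sin(x) * np.cos(y)
gravity = 0.8 * bathymetry + 0.05 * x

theta = angular_difference(gravity, bathymetry)
print(f"median angular difference: {np.degrees(np.median(theta)):.1f} deg")

Because only directions enter, the comparison is insensitive to the very different amplitudes and units of the two fields, which is the stated advantage over magnitude-based correlation.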
A Model-Data Fusion Approach for Constraining Modeled GPP at Global Scales Using GOME2 SIF Data
NASA Astrophysics Data System (ADS)
MacBean, N.; Maignan, F.; Lewis, P.; Guanter, L.; Koehler, P.; Bacour, C.; Peylin, P.; Gomez-Dans, J.; Disney, M.; Chevallier, F.
2015-12-01
Predicting the fate of ecosystem carbon (C) stocks and their sensitivity to climate change relies heavily on our ability to accurately model the gross carbon fluxes, i.e. photosynthesis and respiration. However, there are large differences in the Gross Primary Productivity (GPP) simulated by different land surface models (LSMs), not only in terms of mean value, but also in terms of phase and amplitude when compared to independent data-based estimates. This strongly limits our ability to provide accurate predictions of carbon-climate feedbacks. One possible source of this uncertainty is inaccurate parameter values resulting from incomplete model calibration. Solar Induced Fluorescence (SIF) has been shown to have a linear relationship with GPP at the typical spatio-temporal scales used in LSMs (Guanter et al., 2011). New satellite-derived SIF datasets have the potential to constrain LSM parameters related to C uptake at global scales due to their coverage. Here we use SIF data derived from the GOME2 instrument (Köhler et al., 2014) to optimize parameters related to photosynthesis and leaf phenology of the ORCHIDEE LSM, as well as the linear relationship between SIF and GPP. We use a multi-site approach that combines many model grid cells covering a wide spatial distribution within the same optimization (e.g. Kuppel et al., 2014). The parameters are constrained per Plant Functional Type, as the linear relationship described above varies depending on vegetation structural properties. The relative skill of the optimization is compared to a case where only satellite-derived vegetation index data are used to constrain the model, and to a case where both data streams are used. We evaluate the results using an independent data-driven estimate derived from FLUXNET data (Jung et al., 2011) and with a new atmospheric tracer, carbonyl sulphide (OCS), following the approach of Launois et al. (ACPD, in review). We show that the optimization reduces the strong positive bias of the ORCHIDEE model and increases the correlation compared to independent estimates. Differences in spatial patterns and gradients between simulated GPP and observed SIF remain largely unchanged, however, suggesting that the underlying representation of vegetation type and/or structure and functioning in the model requires further investigation.
Quantum Common Causes and Quantum Causal Models
NASA Astrophysics Data System (ADS)
Allen, John-Mark A.; Barrett, Jonathan; Horsman, Dominic C.; Lee, Ciarán M.; Spekkens, Robert W.
2017-07-01
Reichenbach's principle asserts that if two observed variables are found to be correlated, then there should be a causal explanation of these correlations. Furthermore, if the explanation is in terms of a common cause, then the conditional probability distribution over the variables given the complete common cause should factorize. The principle is generalized by the formalism of causal models, in which the causal relationships among variables constrain the form of their joint probability distribution. In the quantum case, however, the observed correlations in Bell experiments cannot be explained in the manner Reichenbach's principle would seem to demand. Motivated by this, we introduce a quantum counterpart to the principle. We demonstrate that under the assumption that quantum dynamics is fundamentally unitary, if a quantum channel with input A and outputs B and C is compatible with A being a complete common cause of B and C , then it must factorize in a particular way. Finally, we show how to generalize our quantum version of Reichenbach's principle to a formalism for quantum causal models and provide examples of how the formalism works.
NASA Astrophysics Data System (ADS)
Chatterjee, D.; Gulminelli, F.; Raduta, Ad. R.; Margueron, J.
2017-12-01
The question of correlations among empirical equation of state (EoS) parameters constrained by nuclear observables is addressed in a Thomas-Fermi meta-modeling approach. A recently proposed meta-modeling for the nuclear EoS in nuclear matter is augmented with a single finite size term to produce a minimal unified EoS functional able to describe the smooth part of the nuclear ground state properties. This meta-model can reproduce the predictions of a large variety of models, and interpolate continuously between them. An analytical approximation to the full Thomas-Fermi integrals is further proposed giving a fully analytical meta-model for nuclear masses. The parameter space is sampled and filtered through the constraint of nuclear mass reproduction with Bayesian statistical tools. We show that this simple analytical meta-modeling has a predictive power on masses, radii, and skins comparable to full Hartree-Fock or extended Thomas-Fermi calculations with realistic energy functionals. The covariance analysis on the posterior distribution shows that no physical correlation is present between the different EoS parameters. Concerning nuclear observables, a strong correlation between the slope of the symmetry energy and the neutron skin is observed, in agreement with previous studies.
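The filtering step can be caricatured as rejection sampling followed by a covariance analysis of the accepted samples; everything below (priors, the stand-in "mass residual") is hypothetical and only illustrates the workflow, not the paper's meta-model.

import numpy as np

rng = np.random.default_rng(3)

# Hypothetical independent priors on two EoS parameters:
# symmetry energy J (MeV) and its slope L (MeV).
J = rng.uniform(28.0, 36.0, 100000)
L = rng.uniform(20.0, 120.0, 100000)

def mass_residual(J, L):
    """Stand-in for the analytical mass model: a pseudo-observable that
    depends on a combination of the parameters plus model error."""
    return 0.05 * (J - 32.0) + 0.01 * (L - 60.0) + rng.normal(0.0, 0.1, J.shape)

keep = np.abs(mass_residual(J, L)) < 0.1     # filter on mass reproduction
corr = np.corrcoef(J[keep], L[keep])[0, 1]   # posterior correlation check

print(f"accepted fraction: {keep.mean():.2f}")
print(f"posterior corr(J, L) = {corr:+.2f}")

In this toy the constraint induces a J-L anticorrelation because both enter the residual with the same sign; the paper's finding is that the real mass filter induces no such correlation among the EoS parameters.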
Constraints on genes shape long-term conservation of macro-synteny in metazoan genomes.
Lv, Jie; Havlak, Paul; Putnam, Nicholas H
2011-10-05
Many metazoan genomes conserve chromosome-scale gene linkage relationships ("macro-synteny") from the common ancestor of multicellular animal life [1-4], but the biological explanation for this conservation is still unknown. Double cut and join (DCJ) is a simple, well-studied model of neutral genome evolution amenable to both simulation and mathematical analysis [5], but as we show here, it is not sufficient to explain long-term macro-synteny conservation. We examine a family of simple (one-parameter) extensions of DCJ to identify models and choices of parameters consistent with the levels of macro- and micro-synteny conservation observed among animal genomes. Our software implements a flexible strategy for incorporating various types of genomic context into the DCJ model ("DCJ-[C]"), and is available as open source software from http://github.com/putnamlab/dcj-c. A simple model of genome evolution, in which DCJ moves are allowed only if they maintain chromosomal linkage among a set of constrained genes, can simultaneously account for the level of macro-synteny conservation and for correlated conservation among multiple pairs of species. Simulations under this model indicate that a constraint on approximately 7% of metazoan genes is sufficient to constrain genome rearrangement to an average rate of 25 inversions and 1.7 translocations per million years.
2017-01-01
The evolution of wing pattern in Lepidoptera is a popular area of inquiry but few studies have examined microlepidoptera, with fewer still focusing on intraspecific variation. The tineid genus Moerarchis Durrant, 1914 includes two species with high intraspecific variation of wing pattern. A subset of the specimens examined here provide, to my knowledge, the first examples of wing patterns that follow both the ‘alternating wing-margin’ and ‘uniform wing-margin’ models in different regions along the costa. These models can also be evaluated along the dorsum of Moerarchis, where a similar transition between the two models can be seen. Fusion of veins is shown not to affect wing pattern, in agreement with previous inferences that the plesiomorphic location of wing veins constrains the development of colour pattern. The significant correlation between wing length and number of wing pattern elements in Moerarchis australasiella shows that wing size can act as a major determinant of wing pattern complexity. Lastly, some M. australasiella specimens have wing patterns that conform entirely to the ‘uniform wing-margin’ model and contain more than six bands, providing new empirical insight into the century-old question of how wing venation constrains wing patterns with seven or more bands. PMID:28405390
Speaker-independent phoneme recognition with a binaural auditory image model
NASA Astrophysics Data System (ADS)
Francis, Keith Ivan
1997-09-01
This dissertation presents phoneme recognition techniques based on a binaural fusion of outputs of the auditory image model and subsequent azimuth-selective phoneme recognition in a noisy environment. Background information concerning speech variations, phoneme recognition, current binaural fusion techniques and auditory modeling issues is explained. The research is constrained to sources in the frontal azimuthal plane of a simulated listener. A new method based on coincidence detection of neural activity patterns from the auditory image model of Patterson is used for azimuth-selective phoneme recognition. The method is tested in various levels of noise and the results are reported in contrast to binaural fusion methods based on various forms of correlation to demonstrate the potential of coincidence- based binaural phoneme recognition. This method overcomes smearing of fine speech detail typical of correlation based methods. Nevertheless, coincidence is able to measure similarity of left and right inputs and fuse them into useful feature vectors for phoneme recognition in noise.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vaeliviita, Jussi; Savelainen, Matti; Talvitie, Marianne
2012-07-10
We constrain cosmological models where the primordial perturbations have an adiabatic and a (possibly correlated) cold dark matter (CDM) or baryon isocurvature component. We use both a phenomenological approach, where the power spectra of primordial perturbations are parameterized with amplitudes and spectral indices, and a slow-roll two-field inflation approach where slow-roll parameters are used as primary parameters, determining the spectral indices and the tensor-to-scalar ratio. In the phenomenological case, with CMB data, the upper limit to the CDM isocurvature fraction is α < 6.4% at k = 0.002 Mpc⁻¹ and 15.4% at k = 0.01 Mpc⁻¹. The non-adiabatic contribution to the CMB temperature variance is -0.030 < αT < 0.049 at the 95% confidence level. Including the supernova (SN) (or large-scale structure) data, these limits become α < 7.0%, 13.7%, and -0.048 < αT < 0.042 (or α < 10.2%, 16.0%, and -0.071 < αT < 0.024). The CMB constraint on the tensor-to-scalar ratio, r < 0.26 at k = 0.01 Mpc⁻¹, is not affected by the non-adiabatic modes. In the slow-roll two-field inflation approach, the spectral indices are constrained close to 1. This leads to tighter limits on the isocurvature fraction; with the CMB data α < 2.6% at k = 0.01 Mpc⁻¹, but the constraint on αT is not much affected, -0.058 < αT < 0.045. Including SN (or LSS) data, these limits become α < 3.2% and -0.056 < αT < 0.030 (or α < 3.4% and -0.063 < αT < -0.008). In addition to the generally correlated models, we also study special cases where the adiabatic and isocurvature modes are uncorrelated or fully (anti)correlated. We calculate Bayesian evidences (model probabilities) in 21 different non-adiabatic cases and compare them to the corresponding adiabatic models, and find that in all cases the data support the pure adiabatic model.
NASA Astrophysics Data System (ADS)
Ke, Weiyao; Moreland, J. Scott; Bernhard, Jonah E.; Bass, Steffen A.
2017-10-01
We study the initial three-dimensional spatial configuration of the quark-gluon plasma (QGP) produced in relativistic heavy-ion collisions using centrality and pseudorapidity-dependent measurements of the medium's charged particle density and two-particle correlations. A cumulant-generating function is first used to parametrize the rapidity dependence of local entropy deposition and extend arbitrary boost-invariant initial conditions to nonzero beam rapidities. The model is then compared to p+Pb and Pb+Pb charged-particle pseudorapidity densities and two-particle pseudorapidity correlations and systematically optimized using Bayesian parameter estimation to extract high-probability initial condition parameters. The optimized initial conditions are then compared to a number of experimental observables including the pseudorapidity-dependent anisotropic flows, event-plane decorrelations, and flow correlations. We find that the form of the initial local longitudinal entropy profile is well constrained by these experimental measurements.
Ab initio Studies of Magnetism in the Iron Chalcogenides FeTe and FeSe
NASA Astrophysics Data System (ADS)
Hirayama, Motoaki; Misawa, Takahiro; Miyake, Takashi; Imada, Masatoshi
2015-09-01
The iron chalcogenides FeTe and FeSe belong to the family of iron-based superconductors. We study the magnetism in these compounds in the normal state using the ab initio downfolding scheme developed for strongly correlated electron systems. In deriving ab initio low-energy effective models, we employ the constrained GW method to eliminate the double counting of electron correlations originating from the exchange correlations already taken into account in the density functional theory. By solving the derived ab initio effective models, we reveal that the elimination of the double counting is important in reproducing the bicollinear antiferromagnetic order in FeTe, as is observed in experiments. We also show that the elimination of the double counting induces a unique degeneracy of several magnetic orders in FeSe, which may explain the absence of the magnetic ordering. We discuss the relationship between the degeneracy and the recently found puzzling phenomena in FeSe as well as the magnetic ordering found under pressure.
NASA Astrophysics Data System (ADS)
Lee, Haksu; Seo, Dong-Jun; Noh, Seong Jin
2016-11-01
This paper presents a simple yet effective weakly-constrained (WC) data assimilation (DA) approach for hydrologic models which accounts for model structural inadequacies associated with rainfall-runoff transformation processes. Compared to the strongly-constrained (SC) DA, WC DA adjusts the control variables less while producing similarly or more accurate analysis. Hence the adjusted model states are dynamically more consistent with those of the base model. The inadequacy of a rainfall-runoff model was modeled as an additive error to runoff components prior to routing and penalized in the objective function. Two example modeling applications, distributed and lumped, were carried out to investigate the effects of the WC DA approach on DA results. For distributed modeling, the distributed Sacramento Soil Moisture Accounting (SAC-SMA) model was applied to the TIFM7 Basin in Missouri, USA. For lumped modeling, the lumped SAC-SMA model was applied to nineteen basins in Texas. In both cases, the variational DA (VAR) technique was used to assimilate discharge data at the basin outlet. For distributed SAC-SMA, spatially homogeneous error modeling yielded updated states that are spatially much more similar to the a priori states, as quantified by Earth Mover's Distance (EMD), than spatially heterogeneous error modeling by up to ∼10 times. DA experiments using both lumped and distributed SAC-SMA modeling indicated that assimilating outlet flow using the WC approach generally produce smaller mean absolute difference as well as higher correlation between the a priori and the updated states than the SC approach, while producing similar or smaller root mean square error of streamflow analysis and prediction. Large differences were found in both lumped and distributed modeling cases between the updated and the a priori lower zone tension and primary free water contents for both WC and SC approaches, indicating possible model structural deficiency in describing low flows or evapotranspiration processes for the catchments studied. Also presented are the findings from this study and key issues relevant to WC DA approaches using hydrologic models.
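The weak-constraint idea can be written down compactly: besides the control variable, an additive structural-error term q on the runoff components enters the objective and is penalized, so the data no longer have to be fit by state adjustments alone. A toy variational sketch (scalar model, hypothetical weights; not the SAC-SMA implementation):

import numpy as np
from scipy.optimize import minimize

obs = np.array([2.1, 2.9, 4.2, 5.0])            # observed outlet discharge

def model_runoff(x0):
    """Toy rainfall-runoff map from an initial storage x0 to discharge."""
    return x0 * np.array([1.0, 1.4, 2.0, 2.4])

def cost(params, lam=10.0, sigma=0.2):
    """Weakly constrained objective: data misfit plus a penalty on the
    additive structural-error terms q (one per time step)."""
    x0, q = params[0], params[1:]
    resid = obs - (model_runoff(x0) + q)
    return np.sum(resid**2) / sigma**2 + lam * np.sum(q**2)

wc = minimize(cost, np.r_[1.0, np.zeros(4)])                  # WC: x0 and q
sc = minimize(lambda p: cost(np.r_[p, np.zeros(4)]), [1.0])   # SC: x0 only

print(f"WC x0 = {wc.x[0]:.3f}, q = {np.round(wc.x[1:], 3)}")
print(f"SC x0 = {sc.x[0]:.3f}  (all mismatch absorbed by the control variable)")

Because part of the misfit is carried by q, the weak-constraint analysis perturbs the control variable less than the strong-constraint one, which is the behaviour the paper quantifies with EMD and state correlations.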
Abril Hernández, José-María
2016-01-01
Unsupported 210Pb (210Pbexc) vs. mass depth profiles do not contain enough information to extract a unique chronology when both 210Pbexc fluxes and mass sediment accumulation rates (SAR) independently vary with time. Restrictive assumptions are needed to develop a suitable dating tool. A statistical correlation between fluxes and SAR seems to be a quite general rule. This paper builds a new 210Pb-based dating tool around such a statistical correlation. It operates with SAR and initial activities that closely follow normal distributions, which leads to the expected correlation between fluxes and SAR. An intelligent algorithm solves their best arrangement downcore to fit the experimental 210Pbexc vs. mass depth profile, generating solutions for the chronological line and for the histories of SAR and fluxes. Parametric maps of a χ-function serve to identify the solution and to support error estimates. Optionally, the model's answers can be further constrained through the use of time markers. The performance of the model is illustrated with a synthetic core, and with real cases using published data for varved sediment cores.
Constrained reduced-order models based on proper orthogonal decomposition
Reddy, Sohail R.; Freno, Brian Andrew; Cizmas, Paul G. A.; ...
2017-04-09
A novel approach is presented to constrain reduced-order models (ROM) based on proper orthogonal decomposition (POD). The Karush–Kuhn–Tucker (KKT) conditions were applied to the traditional reduced-order model to constrain the solution to user-defined bounds. The constrained reduced-order model (C-ROM) was applied and validated against the analytical solution to the first-order wave equation. C-ROM was also applied to the analysis of fluidized beds. Lastly, it was shown that the ROM and C-ROM produced accurate results and that C-ROM was less sensitive to error propagation through time than the ROM.
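Generically, a POD-ROM solves a least-squares problem for the modal coefficients; the constrained variant adds user-defined bounds on the reconstructed field. The sketch below uses SciPy's SLSQP solver (which enforces the KKT conditions numerically) as a stand-in for the paper's formulation; data and bounds are toy values.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

X = rng.normal(size=(50, 20))          # snapshot matrix (space x snapshots)
U, s, _ = np.linalg.svd(X, full_matrices=False)
Phi = U[:, :3]                         # first 3 POD modes
target = X[:, 0]                       # field to represent in the basis

def objective(a):
    return np.sum((Phi @ a - target) ** 2)

# User-defined bound on the *reconstructed solution* (a deliberately tight
# toy bound so the constraint can become active): reconstruction >= -0.5.
cons = {"type": "ineq", "fun": lambda a: Phi @ a + 0.5}

res = minimize(objective, np.zeros(3), constraints=[cons], method="SLSQP")
print(f"constrained reconstruction min: {(Phi @ res.x).min():.3f} (>= -0.5)")

In the paper's setting such bounds keep ROM fields like volume fractions physical, consistent with the reduced sensitivity to error propagation reported above.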
Feasibility of shutter-speed DCE-MRI for improved prostate cancer detection.
Li, Xin; Priest, Ryan A; Woodward, William J; Tagge, Ian J; Siddiqui, Faisal; Huang, Wei; Rooney, William D; Beer, Tomasz M; Garzotto, Mark G; Springer, Charles S
2013-01-01
The feasibility of shutter-speed model dynamic-contrast-enhanced MRI pharmacokinetic analyses for prostate cancer detection was investigated in a prebiopsy patient cohort. Differences between results from the fast-exchange-regime-allowed (FXR-a) shutter-speed model version and the fast-exchange-limit-constrained (FXL-c) standard model are demonstrated. Although the spatial information is more limited, postdynamic-contrast-enhanced MRI biopsy specimens were also examined. The MRI results were correlated with the biopsy pathology findings. Of all the model parameters, the region-of-interest-averaged Ktrans difference [ΔKtrans ≡ Ktrans(FXR-a) - Ktrans(FXL-c)] or two-dimensional Ktrans(FXR-a) vs. kep(FXR-a) values were found to provide the most useful biomarkers for malignant/benign prostate tissue discrimination (at 100% sensitivity for a population of 13, the specificity is 88%) and disease burden determination. (The best specificity for the FXL-c analysis is 63%, with the two-dimensional plot.) Ktrans and kep are each measures of passive transcapillary contrast reagent transfer rate constants. Parameter value increases with shutter-speed model (relative to standard model) analysis are larger in malignant foci than in normal-appearing glandular tissue. Pathology analyses verify the shutter-speed model (FXR-a) promise for prostate cancer detection. Parametric mapping may further improve pharmacokinetic biomarker performance.
1974-01-01
DIGITAL IMAGE RESTORATION UNDER A REGRESSION MODEL: THE UNCONSTRAINED, LINEAR EQUALITY AND INEQUALITY CONSTRAINED APPROACHES. Nelson Delfino d'Avila Mascarenhas, Image Processing Institute Report 520, January 1974. ... a two-dimensional form adequately describes the linear model. A discretization is performed by using quadrature methods. By trans...
Clues on the Milky Way disc formation from population synthesis simulations
NASA Astrophysics Data System (ADS)
Robin, A. C.; Reylé, C.; Bienaymé, O.; Fernandez-Trincado, J. G.; Amôres, E. B.
2016-09-01
In recent years the stellar populations of the Milky Way have been investigated from large scale surveys in different ways, from pure star count analysis to detailed studies based on spectroscopic surveys. While in the former case the data can constrain the scale height and scale length thanks to completeness, they suffer from high correlation between these two values. On the other hand, spectroscopic surveys suffer from complex selection functions which make it difficult to derive accurate density distributions. The scale length in particular has been difficult to constrain, resulting in discrepant values in the literature. Here, we investigate the thick disc characteristics by comparing model simulations with large scale data sets. The simulations are done with the Besançon population synthesis model. We explore the parameters of the thick disc (shape, local density, age, metallicity) using a Markov chain Monte Carlo method to constrain the model free parameters (Robin et al. 2014). Correlations between parameters are limited due to the vast spatial coverage of the surveys used (SDSS + 2MASS). We show that the thick disc was created during a long phase of formation, starting about 12 Gyr ago and finishing about 10 Gyr ago, during which gravitational contraction occurred, both vertically and radially. Moreover, in its early phase the thick disc was flaring in the outskirts. We conclude that the thick disc was created prior to the thin disc during a gravitational collapse phase, slowed down by turbulence related to a high star formation rate, as explained for example in Bournaud et al. (2009) or Lehnert et al. (2009). Our result does not favor a formation from an initial thin disc thickened later by merger events or by secular evolution of the thin disc. We then study the in-plane distribution of stars in the thin disc from 2MASS and show that the thin disc scale length varies as a function of age, indicating an inside-out formation. Moreover, we investigate the warp and flare and demonstrate that the warp amplitude is changing with time and the node angle is slightly precessing. Finally, we show comparisons between the new model and spectroscopic surveys. The new model allows us to correctly simulate the kinematics, the metallicity, and α-abundance distributions in the solar neighbourhood as well as in the bulge region.
Understanding recent eastern Horn of Africa rainfall variability and change
Liebmann, Brant; Hoerling, Martin P.; Funk, Christopher C.; Blade, Ileana; Dole, Randall M.; Allured, Dave; Quan, Xiaowei; Eischeid, Jon K.
2014-01-01
The recent upward trend in the October–December wet season is rather weak, however, and its statistical significance is compromised by strong year-to-year fluctuations. October–December eastern Horn rain variability is strongly associated with El Niño–Southern Oscillation and Indian Ocean dipole phenomena on interannual scales, in both model and observations. The interannual October–December correlation between the ensemble-average and observed Horn rainfall is 0.87. By comparison, interannual March–May Horn precipitation is only weakly constrained by SST anomalies.
A simple model for the dependence on local detonation speed of the product entropy
NASA Astrophysics Data System (ADS)
Hetherington, David C.; Whitworth, Nicholas J.
2012-03-01
The generation of a burn time field as a pre-processing step ahead of a hydrocode calculation has been mostly upgraded in the explosives modelling community from the historical model of single-speed programmed burn to DSD/WBL (Detonation Shock Dynamics / Whitham-Bdzil-Lambourn). The problem with this advance is that the previously conventional approach to the hydrodynamic stage of the model results in the entropy of the detonation products (s) having the wrong correlation with detonation speed (D). Instead of being higher where D is lower, the conventional method leads to s being lower where D is lower, resulting in a completely fictitious enhancement of available energy where the burn is degraded! A technique is described which removes this deficiency of the historical model when used with a DSD-generated burn time field. By treating the conventional JWL equation as a semi-empirical expression for the local expansion isentrope, and constraining the local parameter set for consistency with D, it is possible to obtain the two desirable outcomes that the model of the detonation wave is internally consistent, and s is realistically correlated with D.
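One concrete way to realize "constraining the local parameter set for consistency with D" is sketched below: estimate the local CJ state from D with a gamma-law (polytropic) approximation, then rescale one JWL coefficient so the isentrope passes through that state. All coefficients are representative and the construction is illustrative, not the authors' calibration.

import numpy as np

# JWL isentrope Ps(V) = A exp(-R1 V) + B exp(-R2 V) + C V**-(1 + w),
# with V the relative volume; representative (uncalibrated) coefficients.
A, B = 540.0, 9.0            # GPa
R1, R2, w = 4.6, 1.35, 0.30
rho0 = 1800.0                # initial density, kg m^-3
gamma = 3.0                  # gamma-law approximation at the CJ state

def cj_state(D):
    """CJ pressure (GPa) and relative volume from the gamma-law relations
    P_CJ = rho0 D^2 / (gamma + 1), V_CJ = gamma / (gamma + 1)."""
    return rho0 * D**2 / (gamma + 1.0) * 1e-9, gamma / (gamma + 1.0)

def constrained_C(D):
    """Rescale C so the isentrope passes through the local CJ state."""
    p_cj, v_cj = cj_state(D)
    return (p_cj - A * np.exp(-R1 * v_cj) - B * np.exp(-R2 * v_cj)) * v_cj**(1 + w)

for D in (8800.0, 8000.0):   # nominal vs. degraded local detonation speed
    p_cj, _ = cj_state(D)
    print(f"D = {D:.0f} m/s: P_CJ = {p_cj:.1f} GPa, C = {constrained_C(D):.2f} GPa")

Lower local D gives a lower CJ pressure and hence a lower isentrope, so the available energy is degraded together with the burn, removing the fictitious energy enhancement of the conventional treatment.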
A Simple Model for the Dependence on Local Detonation Speed (D) of the Product Entropy (S)
NASA Astrophysics Data System (ADS)
Hetherington, David
2011-06-01
The generation of a burn time field as a pre-processing step ahead of a hydrocode calculation has been mostly upgraded in the explosives modelling community from the historical model of single-speed programmed burn to DSD. However, with this advance has come the problem that the previously conventional approach to the hydrodynamic stage of the model results in S having the wrong correlation with D. Instead of being higher where the detonation speed is lower, i.e. where reaction occurs at lower compression, the conventional method leads to S being lower where D is lower, resulting in a completely fictitious enhancement of available energy where the burn is degraded! A technique is described which removes this deficiency of the historical model when used with a DSD-generated burn time field. By treating the conventional JWL equation as a semi-empirical expression for the local expansion isentrope, and constraining the local parameter set for consistency with D, it is possible to obtain the two desirable outcomes that the model of the detonation wave is internally consistent, and S is realistically correlated with D.
2013-01-01
Introduction Sociality has evolved independently multiple times across the spider phylogeny, and despite wide taxonomic and geographical breadth the social species are characterized by a common geographical constraint to tropical and subtropical areas. Here we investigate the environmental factors that drive macro-ecological patterns in social and solitary species in a genus that shows a Mediterranean–Afro-Oriental distribution (Stegodyphus). Both selected drivers (productivity and seasonality) may affect the abundance of potential prey insects, but seasonality may further directly affect survival due to mortality caused by extreme climatic events. Based on a comprehensive dataset including information about the distribution of three independently derived social species and 13 solitary congeners we tested the hypotheses that the distribution of social Stegodyphus species relative to solitary congeners is: (1) restricted to habitats of high vegetation productivity and (2) constrained to areas with a stable climate (low precipitation seasonality). Results Using spatial logistic regression modelling and information-theoretic model selection, we show that social species occur at higher vegetation productivity than solitary species, while precipitation seasonality received limited support as a predictor of social spider occurrence. An analysis of insect biomass data across the Stegodyphus distribution range confirmed that vegetation productivity is positively correlated with potential insect prey biomass. Conclusions Habitat productivity constrains the distribution of social spiders across continents compared to their solitary congeners, with group-living in spiders being restricted to areas with relatively high vegetation productivity and insect prey biomass. As known for other taxa, permanent sociality likely evolves in response to high predation pressure and imposes within-group competition for resources. Our results suggest that group living is contingent upon productive environmental conditions where elevated prey abundance meets the increased demand for food of social groups. PMID:23433065
NASA Astrophysics Data System (ADS)
Rubin, Adam; PTF
2018-01-01
I will discuss our results studying light curves of hydrogen-rich supernovae during the first few days after explosion. The first days of emission encode important information about the physical system, and it is possible to relate the early-time light curve to the radius of the progenitor star by using shock-cooling models. I will show the first systematic application of these models to data from the Palomar Transient Factory (PTF). We found that R-band data alone at PTF cadence cannot constrain the radius but can constrain the energy per unit mass of the explosion, uncovering new correlations with other supernova observables. We constrained the radii for events with multi-wavelength observations, and for two events observed with the Kepler mission at 30 min cadence. I will discuss improved observing strategies to obtain more constraining results in the future. Some tensions have arisen between our results and the expected radii from identified progenitors of hydrogen-rich supernovae. The resolution of these tensions may be related to the effect of circumstellar material on the light curves, motivating future systematic spectroscopic sequencing of these events. To this end, we have designed a new medium resolution UV-VIS spectrograph. The Multi-Imaging Transient Spectrograph (MITS) is the R~4500 UV-VIS arm of the Son Of X-Shooter (SOXS) spectrograph proposed for ESO’s 3.6 m New Technology Telescope. Our design divides the spectrum into several sub-bands, allowing optimization for each narrow part of the spectrum. We estimate a 50-100% improvement in throughput relative to a classical 4-C echelle design. Our design has passed a preliminary design review and is expected on the telescope in early 2021.
Wood, Robert; Ackerman, Thomas; Rasch, Philip J.; ...
2017-06-22
Anthropogenic aerosol impacts on clouds constitute the largest source of uncertainty in quantifying the radiative forcing of climate, and hinder our ability to determine Earth's climate sensitivity to greenhouse gas increases. Representation of aerosol–cloud interactions in global models is particularly challenging because these interactions occur on typically unresolved scales. Observational studies show influences of aerosol on clouds, but correlations between aerosol and clouds are insufficient to constrain aerosol forcing because of the difficulty in separating aerosol and meteorological impacts. In this commentary, we argue that this current impasse may be overcome with the development of approaches to conduct control experiments whereby aerosol particle perturbations can be introduced into patches of marine low clouds in a systematic manner. Such cloud perturbation experiments constitute a fresh approach to climate science and would provide unprecedented data to untangle the effects of aerosol particles on cloud microphysics and the resulting reflection of solar radiation by clouds. Here, the control experiments would provide a critical test of high-resolution models that are used to develop an improved representation of aerosol–cloud interactions needed to better constrain aerosol forcing in global climate models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, Robert; Ackerman, Thomas; Rasch, Philip J.
Anthropogenic aerosol impacts on clouds constitute the largest source of uncertainty in quantifying the radiative forcing of climate, and hinder our ability to determine Earth's climate sensitivity to greenhouse gas increases. Representation of aerosol–cloud interactions in global models is particularly challenging because these interactions occur on typically unresolved scales. Observational studies show influences of aerosol on clouds, but correlations between aerosol and clouds are insufficient to constrain aerosol forcing because of the difficulty in separating aerosol and meteorological impacts. In this commentary, we argue that this current impasse may be overcome with the development of approaches to conduct control experiments whereby aerosol particle perturbations can be introduced into patches of marine low clouds in a systematic manner. Such cloud perturbation experiments constitute a fresh approach to climate science and would provide unprecedented data to untangle the effects of aerosol particles on cloud microphysics and the resulting reflection of solar radiation by clouds. Here, the control experiments would provide a critical test of high-resolution models that are used to develop an improved representation of aerosol–cloud interactions needed to better constrain aerosol forcing in global climate models.
Constrained vertebrate evolution by pleiotropic genes.
Hu, Haiyang; Uesaka, Masahiro; Guo, Song; Shimai, Kotaro; Lu, Tsai-Ming; Li, Fang; Fujimoto, Satoko; Ishikawa, Masato; Liu, Shiping; Sasagawa, Yohei; Zhang, Guojie; Kuratani, Shigeru; Yu, Jr-Kai; Kusakabe, Takehiro G; Khaitovich, Philipp; Irie, Naoki
2017-11-01
Despite morphological diversification of chordates over 550 million years of evolution, their shared basic anatomical pattern (or 'bodyplan') remains conserved by unknown mechanisms. The developmental hourglass model attributes this to phylum-wide conserved, constrained organogenesis stages that pattern the bodyplan (the phylotype hypothesis); however, there has been no quantitative testing of this idea with a phylum-wide comparison of species. Here, based on data from early-to-late embryonic transcriptomes collected from eight chordates, we suggest that the phylotype hypothesis would be better applied to vertebrates than chordates. Furthermore, we found that vertebrates' conserved mid-embryonic developmental programmes are intensively recruited to other developmental processes, and the degree of recruitment positively correlates with their evolutionary conservation and essentiality for normal development. Thus, we propose that the genetic system intensively recruited during the vertebrate organogenesis period imposed pleiotropic constraints on diversification, which ultimately led to the common anatomical pattern observed in vertebrates.
Reconstructing Star Formation Histories to Reveal the Origin and Evolution of the SFR-M* Correlation
NASA Astrophysics Data System (ADS)
Gawiser, Eric
2016-10-01
Correlations have played an important role in advancing our knowledge of astrophysics, from the Schmidt-Kennicutt law to the black hole-bulge mass relation. A surprisingly tight correlation between galaxy star formation rates (SFR) and stellar masses (M*) was discovered in 2007, and models of galaxy formation and evolution can be constrained by studying the evolution of this SFR-M* correlation and its intrinsic scatter. At present, such investigations are weakened by the need to assume a simple parametric form for the star formation history, typically constant or exponentially declining. We propose to use our new dense basis method to reconstruct star-formation histories (SFHs) through SED fitting using multi-band photometry of >10,000 galaxies in the 3D-HST and CANDELS catalogs. Armed with these reconstructed SFHs, we will then: 1. Better measure the SFR-M* correlation (aka star-forming sequence) in several redshift bins at 0.5
The Atacama Cosmology Telescope: cosmological parameters from three seasons of data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sievers, Jonathan L.; Appel, John William; Hlozek, Renée A.
2013-10-01
We present constraints on cosmological and astrophysical parameters from high-resolution microwave background maps at 148 GHz and 218 GHz made by the Atacama Cosmology Telescope (ACT) in three seasons of observations from 2008 to 2010. A model of primary cosmological and secondary foreground parameters is fit to the map power spectra and lensing deflection power spectrum, including contributions from both the thermal Sunyaev-Zeldovich (tSZ) effect and the kinematic Sunyaev-Zeldovich (kSZ) effect, Poisson and correlated anisotropy from unresolved infrared sources, radio sources, and the correlation between the tSZ effect and infrared sources. The power ℓ²Cℓ/2π of the thermal SZ power spectrum at 148 GHz is measured to be 3.4 ± 1.4 μK² at ℓ = 3000, while the corresponding amplitude of the kinematic SZ power spectrum has a 95% confidence level upper limit of 8.6 μK². Combining ACT power spectra with the WMAP 7-year temperature and polarization power spectra, we find excellent consistency with the LCDM model. We constrain the number of effective relativistic degrees of freedom in the early universe to be Neff = 2.79 ± 0.56, in agreement with the canonical value of Neff = 3.046 for three massless neutrinos. We constrain the sum of the neutrino masses to be Σmν < 0.39 eV at 95% confidence when combining ACT and WMAP 7-year data with BAO and Hubble constant measurements. We constrain the amount of primordial helium to be Yp = 0.225 ± 0.034, and measure no variation in the fine structure constant α since recombination, with α/α0 = 1.004 ± 0.005. We also find no evidence for any running of the scalar spectral index, dns/d ln k = -0.004 ± 0.012.
The Atacama Cosmology Telescope: Cosmological Parameters from Three Seasons of Data
NASA Technical Reports Server (NTRS)
Seivers, Jonathan L.; Hlozek, Renee A.; Nolta, Michael R.; Acquaviva, Viviana; Addison, Graeme E.; Ade, Peter A. R.; Aguirre, Paula; Amiri, Mandana; Appel, John W.; Barrientos, L. Felipe;
2013-01-01
We present constraints on cosmological and astrophysical parameters from high-resolution microwave background maps at 148 GHz and 218 GHz made by the Atacama Cosmology Telescope (ACT) in three seasons of observations from 2008 to 2010. A model of primary cosmological and secondary foreground parameters is fit to the map power spectra and lensing deflection power spectrum, including contributions from both the thermal Sunyaev-Zeldovich (tSZ) effect and the kinematic Sunyaev-Zeldovich (kSZ) effect, Poisson and correlated anisotropy from unresolved infrared sources, radio sources, and the correlation between the tSZ effect and infrared sources. The power ℓ²Cℓ/2π of the thermal SZ power spectrum at 148 GHz is measured to be 3.4 ± 1.4 μK² at ℓ = 3000, while the corresponding amplitude of the kinematic SZ power spectrum has a 95% confidence level upper limit of 8.6 μK². Combining ACT power spectra with the WMAP 7-year temperature and polarization power spectra, we find excellent consistency with the LCDM model. We constrain the number of effective relativistic degrees of freedom in the early universe to be Neff = 2.79 ± 0.56, in agreement with the canonical value of Neff = 3.046 for three massless neutrinos. We constrain the sum of the neutrino masses to be Σmν < 0.39 eV at 95% confidence when combining ACT and WMAP 7-year data with BAO and Hubble constant measurements. We constrain the amount of primordial helium to be Yp = 0.225 ± 0.034, and measure no variation in the fine structure constant α since recombination, with α/α0 = 1.004 ± 0.005. We also find no evidence for any running of the scalar spectral index, dns/d ln k = -0.004 ± 0.012.
NASA Astrophysics Data System (ADS)
Jose, L.; Bennett, R. A.; Harig, C.
2017-12-01
Currently, cGPS data are well suited to track vertical changes in the Earth's surface. However, there are annual, semi-annual, and interannual signals within cGPS time series that are not well constrained. We hypothesize that these signals are primarily due to water loading. If this is the case, the conventional method of modeling cGPS data as an annual or semiannual sinusoid falls short, as such models cannot accurately capture all variations in surface displacement, especially those due to extreme hydrologic events. We believe that we can better correct the cGPS time series with another method we are developing, wherein we use a time series of surface displacement derived from the GRACE geopotential field, instead of a sinusoidal model, to correct the data. Currently, our analysis is confined to the Amazon Basin, where the signal due to water loading is large enough to appear in both the GRACE and cGPS measurements. The vertical signals from cGPS stations across the Amazon Basin show an apparent spatial correlation, which further supports our idea that these signals are due to a regional water loading signal. In our preliminary research, we used tsview for Matlab and found that the WRMS of the corrected cGPS time series can be reduced by as much as 30% in going from the model-corrected data to the GRACE-corrected data. The Amazon, like many places around the world, has experienced extreme drought, in 2005, 2010, and most recently in 2015. In addition to making the cGPS vertical signal more robust, the method we are developing has the potential to help us understand the effects of these weather events and track trends in water loading.
Limiting the effective mass and new physics parameters from 0νββ
NASA Astrophysics Data System (ADS)
Awasthi, Ram Lal; Dasgupta, Arnab; Mitra, Manimala
2016-10-01
In the light of the recent results from KamLAND-Zen (KLZ) and GERDA Phase-II, we update the bounds on the effective mass and the new physics parameters relevant for neutrinoless double beta decay (0νββ). In addition to the light Majorana neutrino exchange, we analyze beyond-standard-model contributions that arise in left-right symmetry and R-parity violating supersymmetry. The improved limit from KLZ constrains the effective mass of light neutrino exchange down to the sub-eV regime, 0.06 eV. Using the correlation between the 136Xe and 76Ge half-lives, we show that the KLZ limit individually rules out the positive claim of observation of 0νββ for all nuclear matrix element compilations. For left-right symmetry and R-parity violating supersymmetry, the KLZ bound implies a factor of 2 improvement in the limits on the effective mass and the new physics parameters. Future ton-scale experiments such as nEXO will further constrain these models and, in particular, will rule out the standard as well as the Type-II dominated LRSM inverted hierarchy scenario.
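For orientation, the quantities being bounded enter through the standard light-neutrino-exchange relations (textbook expressions, not results specific to this paper):

\[
m_{\beta\beta} = \Bigl|\sum_{i=1}^{3} U_{ei}^{2}\, m_i\Bigr|,
\qquad
\bigl[T_{1/2}^{0\nu}\bigr]^{-1} = G^{0\nu}\,\bigl|M^{0\nu}\bigr|^{2}\,\frac{m_{\beta\beta}^{2}}{m_e^{2}},
\]

where U_{ei} are PMNS mixing-matrix elements, m_i the light neutrino masses, G^{0ν} the phase-space factor, and M^{0ν} the nuclear matrix element whose compilation-to-compilation spread drives much of the quoted uncertainty.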
DOE Office of Scientific and Technical Information (OSTI.GOV)
Das, Sudeep; Louis, Thibaut; Calabrese, Erminia
2014-04-01
We present the temperature power spectra of the cosmic microwave background (CMB) derived from the three seasons of data from the Atacama Cosmology Telescope (ACT) at 148 GHz and 218 GHz, as well as the cross-frequency spectrum between the two channels. We detect and correct for contamination due to the Galactic cirrus in our equatorial maps. We present the results of a number of tests for possible systematic error and conclude that any effects are not significant compared to the statistical errors we quote. Where they overlap, we cross-correlate the ACT and the South Pole Telescope (SPT) maps and show they are consistent. The measurements of higher-order peaks in the CMB power spectrum provide an additional test of the ΛCDM cosmological model, and help constrain extensions beyond the standard model. The small angular scale power spectrum also provides constraining power on the Sunyaev-Zel'dovich effects and extragalactic foregrounds. We also present a measurement of the CMB gravitational lensing convergence power spectrum at 4.6σ detection significance.
NASA Technical Reports Server (NTRS)
Das, Sudeep; Louis, Thibaut; Nolta, Michael R.; Addison, Graeme E.; Battistelli, Elia S.; Bond, J. Richard; Calabrese, Erminia; Crichton, Devin; Devlin, Mark J.; Dicker, Simon;
2014-01-01
We present the temperature power spectra of the cosmic microwave background (CMB) derived from the three seasons of data from the Atacama Cosmology Telescope (ACT) at 148 GHz and 218 GHz, as well as the cross-frequency spectrum between the two channels. We detect and correct for contamination due to the Galactic cirrus in our equatorial maps. We present the results of a number of tests for possible systematic error and conclude that any effects are not significant compared to the statistical errors we quote. Where they overlap, we cross-correlate the ACT and the South Pole Telescope (SPT) maps and show they are consistent. The measurements of higher-order peaks in the CMB power spectrum provide an additional test of the ΛCDM cosmological model, and help constrain extensions beyond the standard model. The small angular scale power spectrum also provides constraining power on the Sunyaev-Zel'dovich effects and extragalactic foregrounds. We also present a measurement of the CMB gravitational lensing convergence power spectrum at 4.6σ detection significance.
Effective theory of flavor for Minimal Mirror Twin Higgs
NASA Astrophysics Data System (ADS)
Barbieri, Riccardo; Hall, Lawrence J.; Harigaya, Keisuke
2017-10-01
We consider two copies of the Standard Model, interchanged by an exact parity symmetry, P. The observed fermion mass hierarchy is described by suppression factors ε^{n_i} for charged fermion i, as can arise in Froggatt-Nielsen and extra-dimensional theories of flavor. The corresponding flavor factors in the mirror sector are ε'^{n_i}, so that spontaneous breaking of the parity P arises from a single parameter ε'/ε, yielding a tightly constrained version of Minimal Mirror Twin Higgs, introduced in our previous paper. Models are studied for simple values of n_i, including in particular one with SU(5)-compatibility, that describe the observed fermion mass hierarchy. The entire mirror quark and charged lepton spectrum is broadly predicted in terms of ε'/ε, as are the mirror QCD scale and the decoupling temperature between the two sectors. Helium-, hydrogen- and neutron-like mirror dark matter candidates are constrained by self-scattering and relic ionization. In each case, the allowed parameter space can be fully probed by proposed direct detection experiments. Correlated predictions are made as well for the Higgs signal strength and the amount of dark radiation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dimastrogiovanni, Emanuela; Emami, Razieh
Probing correlations among short- and long-wavelength cosmological fluctuations is known to be decisive for deepening the current understanding of inflation at the microphysical level. Spectral distortions of the CMB can be caused by dissipation of cosmological perturbations when they re-enter the Hubble radius after inflation. Correlating spectral distortions with temperature anisotropies will thus provide the opportunity to greatly enlarge the range of scales over which squeezed limits can be tested, opening up a new window on inflation complementing the ones currently probed with CMB and LSS. In this paper we discuss a variety of inflationary mechanisms that can be efficiently constrained with distortion-temperature correlations. For some of these realizations (representative of large classes of models) we derive quantitative predictions for the squeezed limit bispectra, finding that their amplitudes are above the sensitivity limits of an experiment such as the proposed PIXIE.
NASA Astrophysics Data System (ADS)
Timmons, Nicholas; Cooray, Asantha; Feng, Chang; Keating, Brian
2017-11-01
We measure the cosmic microwave background (CMB) skewness power spectrum in Planck, using frequency maps of the HFI instrument and the Sunyaev-Zel’dovich (SZ) component map. The two-to-one skewness power spectrum measures the cross-correlation between CMB lensing and the thermal SZ effect. We also directly measure the same cross-correlation using the Planck CMB lensing map and the SZ map and compare it to the cross-correlation derived from the skewness power spectrum. We model fit the SZ power spectrum and CMB lensing-SZ cross-power spectrum via the skewness power spectrum to constrain the gas pressure profile of dark matter halos. The gas pressure profile is compared to existing measurements in the literature including a direct estimate based on the stacking of SZ clusters in Planck.
NASA Astrophysics Data System (ADS)
Miller, D. J.; Liu, Z.; Sun, K.; Tao, L.; Nowak, J. B.; Bambha, R.; Michelsen, H. A.; Zondlo, M. A.
2014-12-01
Agricultural ammonia (NH3) emissions are highly uncertain in current bottom-up inventories. Ammonium nitrate is a dominant component of fine aerosols in agricultural regions such as the Central Valley of California, especially during winter. Recent high-resolution regional modeling efforts in this region have found significant ammonium nitrate and gas-phase NH3 biases during summer. We compare spatially resolved surface and boundary layer gas-phase NH3 observations during NASA DISCOVER-AQ California with Community Multi-Scale Air Quality (CMAQ) regional model simulations driven by the EPA NEI 2008 inventory to constrain wintertime NH3 model biases. We evaluate model performance with respect to aerosol partitioning, mixing, and deposition to constrain contributions to modeled NH3 concentration biases in the Central Valley Tulare dairy region. Ammonia measurements performed with an open-path instrument on a mobile vehicle platform are gridded to hourly background concentrations at 4 km resolution. A peak detection algorithm is applied to remove local feedlot emission peaks. Aircraft NH3, NH4+ and NO3- observations are also compared with simulations extracted along the flight tracks. We find that NH3 background concentrations in the dairy region are underestimated by three to five times during winter and that NH3 simulations are moderately correlated with observations (r = 0.36). Although model simulations capture NH3 enhancements in the dairy region, these simulations are biased low by 30-60 ppbv NH3. Aerosol NH4+ and NO3- are also biased low in CMAQ, by factors of three and four, respectively. Unlike gas-phase NH3, CMAQ simulations do not capture the typical NH4+ or NO3- enhancements observed in the dairy region. In contrast, boundary layer height simulations agree with observations to within 13%. We also address observational constraints on simulated NH3 deposition fluxes. These comparisons suggest that NEI 2008 wintertime dairy emissions are underestimated by a factor of three to five. We test sensitivity to emissions by increasing the NEI 2008 NH3 emissions uniformly across the dairy region and evaluating the impact on modeled concentrations. These results are applicable to improving predictions of ammoniated aerosol loading and highlight the value of mobile-platform spatial NH3 measurements for constraining emission inventories.
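A minimal sketch of the kind of baseline/peak separation the abstract describes, using a rolling-quantile filter; the window, threshold, and all names here are hypothetical illustrations, not the authors' algorithm or values:

import numpy as np
import pandas as pd

def remove_local_peaks(nh3_ppb, window=121, threshold=3.0):
    # A rolling low quantile approximates the regional background;
    # points far above it are flagged as local feedlot plumes.
    s = pd.Series(nh3_ppb)
    baseline = s.rolling(window, center=True, min_periods=1).quantile(0.1)
    resid = s - baseline
    mad = resid.abs().rolling(window, center=True, min_periods=1).median()
    is_peak = resid > threshold * mad
    return s.where(~is_peak, baseline), is_peak

# Example: a 1 Hz mobile transect with one plume on a ~40 ppbv background
rng = np.random.default_rng(0)
obs = 40 + 2 * rng.standard_normal(3600)
obs[500:520] += 200.0                      # synthetic local feedlot plume
background, peaks = remove_local_peaks(obs)
print(int(peaks.sum()), "points flagged as local peaks")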
NASA Astrophysics Data System (ADS)
Kumar, Rohit; Puri, Rajeev K.
2018-03-01
Employing the quantum molecular dynamics (QMD) approach for nucleus-nucleus collisions, we test the predictive power of the energy-based clusterization algorithm, i.e., the simulated annealing clusterization algorithm (SACA), to describe the experimental data of charge distribution and various event-by-event correlations among fragments. The calculations are constrained to the Fermi-energy domain and/or mildly excited nuclear matter. Our detailed study spans different system masses and system-mass asymmetries of the colliding partners, and shows the importance of the energy-based clusterization algorithm for understanding multifragmentation. The present calculations are also compared with the other available calculations, which use one-body models, statistical models, and/or hybrid models.
Inflationary tensor fossils in large-scale structure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dimastrogiovanni, Emanuela; Fasiello, Matteo; Jeong, Donghui
Inflation models make specific predictions for a tensor-scalar-scalar three-point correlation, or bispectrum, between one gravitational-wave (tensor) mode and two density-perturbation (scalar) modes. This tensor-scalar-scalar correlation leads to a local power quadrupole, an apparent departure from statistical isotropy in our Universe, as well as characteristic four-point correlations in the current mass distribution in the Universe. So far, the predictions for these observables have been worked out only for single-clock models in which certain consistency conditions between the tensor-scalar-scalar correlation and tensor and scalar power spectra are satisfied. Here we review the requirements on inflation models for these consistency conditions to be satisfied. We then consider several examples of inflation models, such as non-attractor and solid-inflation models, in which these conditions are put to the test. In solid inflation the simplest consistency conditions are already violated whilst in the non-attractor model we find that, contrary to the standard scenario, the tensor-scalar-scalar correlator probes directly relevant model-dependent information. We work out the predictions for observables in these models. For non-attractor inflation we find an apparent local quadrupolar departure from statistical isotropy in large-scale structure but that this power quadrupole decreases very rapidly at smaller scales. The consistency of the CMB quadrupole with statistical isotropy then constrains the distance scale that corresponds to the transition from the non-attractor to attractor phase of inflation to be larger than the currently observable horizon. Solid inflation predicts clustering fossils signatures in the current galaxy distribution that may be large enough to be detectable with forthcoming, and possibly even current, galaxy surveys.
Using Paleo-climate Comparisons to Constrain Future Projections in CMIP5
NASA Technical Reports Server (NTRS)
Schmidt, G. A.; Annan, J. D.; Bartlein, P. J.; Cook, B. I.; Guilyardi, E.; Hargreaves, J. C.; Harrison, S. P.; Kageyama, M.; LeGrande, A. N.; Konecky, B.;
2013-01-01
We present a description of the theoretical framework and best practice for using the paleo-climate model component of the Coupled Model Intercomparison Project (Phase 5) (CMIP5) to constrain future projections of climate using the same models. The constraints arise from measures of skill in hindcasting paleo-climate changes from the present over three periods: the Last Glacial Maximum (LGM) (21 thousand years before present, ka), the mid-Holocene (MH) (6 ka) and the Last Millennium (LM) (850-1850 CE). The skill measures may be used to validate robust patterns of climate change across scenarios or to distinguish between models that have differing outcomes in future scenarios. We find that the multi-model ensemble of paleo-simulations is adequate for addressing at least some of these issues. For example, selected benchmarks for the LGM and MH are correlated to the rank of future projections of precipitation-temperature or sea ice extent, indicating that models that produce the best agreement with paleo-climate information give demonstrably different future results than the rest of the models. We also find that some comparisons, for instance those associated with model variability, are strongly dependent on uncertain forcing timeseries, or show time-dependent behaviour, making direct inferences for the future problematic. Overall, we demonstrate that there is a strong potential for the paleo-climate simulations to help inform the future projections, and we urge all the modeling groups to complete this subset of the CMIP5 runs.
NASA Astrophysics Data System (ADS)
Abeysekara, A. U.; Linnemann, J. T.
2015-05-01
The pulsar emission mechanism in the gamma-ray energy band is poorly understood. Currently, there are several models under discussion in the pulsar community. These models can be constrained by studying the collective properties of a sample of pulsars, which became possible with the large sample of gamma-ray pulsars discovered by the Fermi Large Area Telescope. In this paper we develop a new experimental multi-wavelength technique to determine the dependence of the beaming factor (f_Ω) on spin-down luminosity for a set of GeV pulsars. This technique requires three input parameters: pulsar spin-down luminosity, pulsar phase-averaged GeV flux, and TeV or X-ray flux from the associated pulsar wind nebula (PWN). The analysis presented in this paper uses the PWN TeV flux measurements to study the correlation between f_Ω and Ė. The measured correlation has some features that favor the Outer Gap model over the Polar Cap, Slot Gap, and One Pole Caustic models for pulsar emission in the energy range of 0.1-100 GeV, but one must keep in mind that these simulated models failed to explain many of the most important pulsar population characteristics. A tight correlation between the pulsar GeV emission and PWN TeV emission was also observed, which suggests the possibility of a linear relationship between the two emission mechanisms. In this paper we also discuss a possible mechanism to explain this correlation.
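For reference, the beaming factor converts an observed phase-averaged flux into a luminosity via the standard relation used in the gamma-ray pulsar literature:

\[
L_\gamma = 4\pi f_\Omega\, F_{\rm obs}\, d^{2},
\]

where F_obs is the phase-averaged energy flux and d the pulsar distance; f_Ω corrects the isotropic-equivalent luminosity for the solid-angle geometry of the emission beam.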
Elimination of a genetic correlation between the sexes via artificial correlational selection.
Delph, Lynda F; Steven, Janet C; Anderson, Ingrid A; Herlihy, Christopher R; Brodie, Edmund D
2011-10-01
Genetic correlations between the sexes can constrain the evolution of sexual dimorphism and be difficult to alter, because traits common to both sexes share the same genetic underpinnings. We tested whether artificial correlational selection favoring specific combinations of male and female traits within families could change the strength of a very high between-sex genetic correlation for flower size in the dioecious plant Silene latifolia. This novel selection dramatically reduced the correlation in two of three selection lines in fewer than five generations. Subsequent selection only on females in a line characterized by a lower between-sex genetic correlation led to a significantly lower correlated response in males, confirming the potential evolutionary impact of the reduced correlation. Although between-sex genetic correlations can potentially constrain the evolution of sexual dimorphism, our findings reveal that these constraints come not from a simple conflict between an inflexible genetic architecture and a pattern of selection working in opposition to it, but rather from a complex relationship between a changeable correlation and a form of selection that promotes it. In other words, the form of selection on males and females that leads to sexual dimorphism may also promote the genetic phenomenon that limits sexual dimorphism. © 2011 The Author(s). Evolution © 2011 The Society for the Study of Evolution.
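The constraint at issue can be made concrete with the textbook correlated-response expression from quantitative genetics (a general relation, not a result of this study): the response in males to selection applied only to females is

\[
CR_m = i\, h_f\, h_m\, r_g\, \sigma_{P,m},
\]

with selection intensity i, square-root heritabilities h_f and h_m, between-sex genetic correlation r_g, and male phenotypic standard deviation σ_{P,m}. With r_g near 1, males track females almost fully; lowering r_g, as achieved in these selection lines, weakens the correlated response and thereby permits sexual dimorphism to evolve.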
Vibration control of multiferroic fibrous composite plates using active constrained layer damping
NASA Astrophysics Data System (ADS)
Kattimani, S. C.; Ray, M. C.
2018-06-01
Geometrically nonlinear vibration control of fiber-reinforced magneto-electro-elastic, or multiferroic, fibrous composite plates using active constrained layer damping (ACLD) treatment has been investigated. The piezoelectric (BaTiO3) fibers are embedded in the magnetostrictive (CoFe2O4) matrix, forming a magneto-electro-elastic or multiferroic smart composite. A three-dimensional finite element model of such fiber-reinforced magneto-electro-elastic plates integrated with ACLD patches is developed. The influence of electro-elastic, magneto-elastic and electromagnetic coupled fields on the vibration has been studied. The Golla-Hughes-McTavish method in the time domain is employed for modeling the constrained viscoelastic layer of the ACLD treatment. The von Kármán type nonlinear strain-displacement relations are incorporated in the three-dimensional finite element model. The effects of fiber volume fraction, fiber orientation and boundary conditions on the control of geometrically nonlinear vibration of the fiber-reinforced magneto-electro-elastic plates are investigated. The performance of the ACLD treatment under variation of the piezoelectric fiber orientation angle in its 1-3 piezoelectric constraining layer has also been examined.
Thermoviscoplastic model with application to copper
NASA Technical Reports Server (NTRS)
Freed, Alan D.
1988-01-01
A viscoplastic model is developed which is applicable to anisothermal, cyclic, and multiaxial loading conditions. Three internal state variables are used in the model; one to account for kinematic effects, and the other two to account for isotropic effects. One of the isotropic variables is a measure of yield strength, while the other is a measure of limit strength. Each internal state variable evolves through a process of competition between strain hardening and recovery. There is no explicit coupling between dynamic and thermal recovery in any evolutionary equation, which is a useful simplification in the development of the model. The thermodynamic condition of intrinsic dissipation constrains the thermal recovery function of the model. Application of the model is made to copper, and cyclic experiments under isothermal, thermomechanical, and nonproportional loading conditions are considered. Correlations and predictions of the model are representative of observed material behavior.
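Schematically, evolution equations of the kind described (hardening competing with uncoupled dynamic and thermal recovery) take the generic form

\[
\dot{X} = h\,\dot{\varepsilon}^{p} \;-\; r_d(X)\,\bigl|\dot{\varepsilon}^{p}\bigr| \;-\; r_t(X,T),
\]

where X is an internal state variable, h a hardening modulus, \(\dot{\varepsilon}^{p}\) the plastic strain rate, and r_d and r_t the dynamic and thermal (static) recovery functions; keeping r_d and r_t additive and uncoupled is the simplification noted in the abstract. This is a schematic illustration, not the paper's exact equations.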
Design and optimization of organic rankine cycle for low temperature geothermal power plant
NASA Astrophysics Data System (ADS)
Barse, Kirtipal A.
Rising oil prices and environmental concerns have increased attention to renewable energy. Geothermal energy is a very attractive source of renewable energy. Although low temperature resources (90°C to 150°C) are the most common and most abundant source of geothermal energy, they were long not considered economical or technologically feasible for commercial power generation. Organic Rankine Cycle (ORC) technology makes it feasible to use low temperature resources to generate power by using organic working fluids with low boiling temperatures. The first hypothesis of this research is that using an ORC is technologically and economically feasible for generating electricity from low temperature geothermal resources. The second hypothesis is that redesigning the ORC system for the given resource conditions will improve efficiency along with economics. An ORC model was developed using a process simulator and validated with data obtained from Chena Hot Springs, Alaska. A correlation was observed between the critical temperature of the working fluid and the efficiency of the cycle. Exergy analysis of the cycle revealed that the highest exergy destruction occurs in the evaporator, followed by the condenser, turbine and working fluid pump for the base case scenarios. The performance of the ORC was studied using twelve working fluids in base, internal heat exchanger (IHX) and turbine bleeding configurations, each in constrained and non-constrained variants. R601a, R245ca and R600 showed the highest first and second law efficiencies in the non-constrained IHX configuration. The highest net power was observed for the R245ca, R601a and R601 working fluids in the non-constrained base configuration. The combined heat exchanger area and the size parameter of the turbine showed an increasing trend as the critical temperature of the working fluid decreased. The lowest levelized cost of electricity (LCOE) was observed for R245ca, followed by R601a and R236ea, in the non-constrained base configuration. The next best candidates in terms of LCOE were R601a, R245ca and R600 in the non-constrained IHX configuration. LCOE depends on net power, and higher net power helps to lower the cost of electricity. Overall, R245ca, R601, R601a, R600 and R236ea show the best performance among the fluids studied. Non-constrained configurations display better performance than the constrained configurations. The base non-constrained configuration offered the highest net power and the lowest LCOE.
Vanderick, S; Troch, T; Gillon, A; Glorieux, G; Gengler, N
2014-12-01
Calving ease scores from Holstein dairy cattle in the Walloon Region of Belgium were analysed using univariate linear and threshold animal models. Variance components and derived genetic parameters were estimated from a data set including 33,155 calving records. Included in the models were season, herd and sex of calf × age of dam classes × group of calvings interaction as fixed effects, herd × year of calving, maternal permanent environment and animal direct and maternal additive genetic as random effects. Models were fitted with the genetic correlation between direct and maternal additive genetic effects either estimated or constrained to zero. Direct heritability for calving ease was approximately 8% with linear models and approximately 12% with threshold models. Maternal heritabilities were approximately 2 and 4%, respectively. Genetic correlation between direct and maternal additive effects was found to be not significantly different from zero. Models were compared in terms of goodness of fit and predictive ability. Criteria of comparison such as mean squared error, correlation between observed and predicted calving ease scores as well as between estimated breeding values were estimated from 85,118 calving records. The results provided few differences between linear and threshold models even though correlations between estimated breeding values from subsets of data for sires with progeny from linear model were 17 and 23% greater for direct and maternal genetic effects, respectively, than from threshold model. For the purpose of genetic evaluation for calving ease in Walloon Holstein dairy cattle, the linear animal model without covariance between direct and maternal additive effects was found to be the best choice. © 2014 Blackwell Verlag GmbH.
NASA Astrophysics Data System (ADS)
Pohle, Ina; Niebisch, Michael; Müller, Hannes; Schümberg, Sabine; Zha, Tingting; Maurer, Thomas; Hinz, Christoph
2018-07-01
To simulate the impacts of within-storm rainfall variabilities on fast hydrological processes, long precipitation time series with high temporal resolution are required. Due to limited availability of observed data such time series are typically obtained from stochastic models. However, most existing rainfall models are limited in their ability to conserve rainfall event statistics which are relevant for hydrological processes. Poisson rectangular pulse models are widely applied to generate long time series of alternating precipitation events durations and mean intensities as well as interstorm period durations. Multiplicative microcanonical random cascade (MRC) models are used to disaggregate precipitation time series from coarse to fine temporal resolution. To overcome the inconsistencies between the temporal structure of the Poisson rectangular pulse model and the MRC model, we developed a new coupling approach by introducing two modifications to the MRC model. These modifications comprise (a) a modified cascade model ("constrained cascade") which preserves the event durations generated by the Poisson rectangular model by constraining the first and last interval of a precipitation event to contain precipitation and (b) continuous sigmoid functions of the multiplicative weights to consider the scale-dependency in the disaggregation of precipitation events of different durations. The constrained cascade model was evaluated in its ability to disaggregate observed precipitation events in comparison to existing MRC models. For that, we used a 20-year record of hourly precipitation at six stations across Germany. The constrained cascade model showed a pronounced better agreement with the observed data in terms of both the temporal pattern of the precipitation time series (e.g. the dry and wet spell durations and autocorrelations) and event characteristics (e.g. intra-event intermittency and intensity fluctuation within events). The constrained cascade model also slightly outperformed the other MRC models with respect to the intensity-frequency relationship. To assess the performance of the coupled Poisson rectangular pulse and constrained cascade model, precipitation events were stochastically generated by the Poisson rectangular pulse model and then disaggregated by the constrained cascade model. We found that the coupled model performs satisfactorily in terms of the temporal pattern of the precipitation time series, event characteristics and the intensity-frequency relationship.
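A minimal sketch of one level of the "constrained cascade" idea described above. The 0/1-split probability and the uniform weight law below are hypothetical stand-ins for the calibrated, scale-dependent sigmoid weight functions of the actual model:

import numpy as np

rng = np.random.default_rng(1)

def split(depth, force_left_wet=False, force_right_wet=False, p01=0.4):
    # Microcanonical split: with probability p01 all mass goes to one
    # half (creating intermittency), otherwise it is divided by a random
    # weight so depth is conserved exactly. The forced-wet flags
    # implement the constraint that the first and last interval of an
    # event must contain precipitation.
    if rng.uniform() < p01:
        if force_left_wet and force_right_wet:
            pass                              # fall through to wet/wet split
        elif force_left_wet:
            return depth, 0.0
        elif force_right_wet:
            return 0.0, depth
        else:
            return (depth, 0.0) if rng.uniform() < 0.5 else (0.0, depth)
    w = rng.uniform(0.05, 0.95)               # wet/wet split, mass conserved
    return w * depth, (1.0 - w) * depth

event = [12.0]                                # event depth (mm) at coarse step
for _ in range(3):                            # 3 levels: 1 -> 8 sub-intervals
    new = []
    for k, d in enumerate(event):
        L, R = split(d, force_left_wet=(k == 0),
                     force_right_wet=(k == len(event) - 1))
        new += [L, R]
    event = new
print(np.round(event, 2), "total:", round(sum(event), 2))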
NASA Technical Reports Server (NTRS)
Tinker, Michael L.
1998-01-01
Application of the free-suspension residual flexibility modal test method to the International Space Station Pathfinder structure is described. The Pathfinder, a large structure of the general size and weight of Space Station module elements, was also tested in a large fixed-base fixture to simulate Shuttle Orbiter payload constraints. After correlation of the Pathfinder finite element model to residual flexibility test data, the model was coupled to a fixture model, and constrained modes and frequencies were compared to fixed-base test modes. The residual flexibility model compared very favorably to results of the fixed-base test. This is the first known direct comparison of free-suspension residual flexibility and fixed-base test results for a large structure. The model correlation approach used by the author for residual flexibility data is presented. Frequency response functions (FRF) for the regions of the structure that interface with the environment (a test fixture or another structure) are shown to be the primary tools for model correlation that distinguish or characterize the residual flexibility approach. A number of critical issues related to use of the structure interface FRF for correlating the model are then identified and discussed, including (1) the requirement of prominent stiffness lines, (2) overcoming problems with measurement noise which makes the antiresonances or minima in the functions difficult to identify, and (3) the use of interface stiffness and lumped mass perturbations to bring the analytical responses into agreement with test data. It is shown that good comparison of analytical-to-experimental FRF is the key to obtaining good agreement of the residual flexibility values.
Evolutionary response when selection and genetic variation covary across environments.
Wood, Corlett W; Brodie, Edmund D
2016-10-01
Although models of evolution usually assume that the strength of selection on a trait and the expression of genetic variation in that trait are independent, whenever the same ecological factor impacts both parameters, a correlation between the two may arise that accelerates trait evolution in some environments and slows it in others. Here, we address the evolutionary consequences and ecological causes of a correlation between selection and expressed genetic variation. Using a simple analytical model, we show that the correlation has a modest effect on the mean evolutionary response and a large effect on its variance, increasing among-population or among-generation variation in the response when positive, and diminishing variation when negative. We performed a literature review to identify the ecological factors that influence selection and expressed genetic variation across traits. We found that some factors - temperature and competition - are unlikely to generate the correlation because they affected one parameter more than the other, and identified others - most notably, environmental novelty - that merit further investigation because little is known about their impact on one of the two parameters. We argue that the correlation between selection and genetic variation deserves attention alongside other factors that promote or constrain evolution in heterogeneous landscapes. © 2016 John Wiley & Sons Ltd/CNRS.
Fuller, Rebecca C
2009-07-01
The sensory bias model for the evolution of mating preferences states that mating preferences evolve as correlated responses to selection on nonmating behaviors sharing a common sensory system. The critical assumption is that pleiotropy creates genetic correlations that affect the response to selection. I simulated selection on populations of neural networks to test this. First, I selected for various combinations of foraging and mating preferences. Sensory bias predicts that populations with preferences for like-colored objects (red food and red mates) should evolve more readily than preferences for differently colored objects (red food and blue mates). Here, I found no evidence for sensory bias. The responses to selection on foraging and mating preferences were independent of one another. Second, I selected on foraging preferences alone and asked whether there were correlated responses for increased mating preferences for like-colored mates. Here, I found modest evidence for sensory bias. Selection for a particular foraging preference resulted in increased mating preference for similarly colored mates. However, the correlated responses were small and inconsistent. Selection on foraging preferences alone may affect initial levels of mating preferences, but these correlations did not constrain the joint evolution of foraging and mating preferences in these simulations.
NASA Astrophysics Data System (ADS)
Shojaeefard, Mohammad Hasan; Khalkhali, Abolfazl; Yarmohammadisatri, Sadegh
2017-06-01
The main purpose of this paper is to propose a new method for designing the Macpherson suspension, based on Sobol indices in terms of Pearson correlation, which determines the importance of each member for the behaviour of the vehicle suspension. The formulation of the dynamic analysis of the Macpherson suspension system is developed using the suspension members as modified links in order to achieve the desired kinematic behaviour. The mechanical system is replaced with an equivalent set of constrained links, and kinematic laws are then utilised to obtain a new modified geometry of the Macpherson suspension. The equivalent mechanism of the Macpherson suspension increases the speed of analysis and reduces its complexity. The ADAMS/CAR software is utilised to simulate a full vehicle, a Renault Logan car, in order to analyse the accuracy of the modified geometry model. An experimental 4-poster test rig is used to validate both the ADAMS/CAR simulation and the analytical geometry model. The Pearson correlation coefficient is applied to analyse the sensitivity of each suspension member with respect to vehicle objective functions such as sprung mass acceleration. The estimation of the Pearson correlation coefficient between variables is also analysed within this method. The Pearson correlation coefficient proves an efficient tool for analysing the vehicle suspension, leading to a better design of the Macpherson suspension system.
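A minimal sketch of the correlation-based sensitivity ranking the abstract describes, on hypothetical Monte Carlo design samples (all variable names and coefficients below are illustrative, not the paper's model):

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical design: sampled suspension parameters (columns) and a
# simulated objective, e.g. RMS sprung-mass acceleration.
names = ["spring_rate", "damper_rate", "arm_length", "bush_stiffness"]
X = rng.normal(size=(500, 4))
y = 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.1 * X[:, 2] + 0.05 * rng.normal(size=500)

# Rank members by the absolute Pearson correlation with the objective
r = [np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])]
for name, rj in sorted(zip(names, r), key=lambda t: -abs(t[1])):
    print(f"{name:>15s}  r = {rj:+.2f}")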
DOE Office of Scientific and Technical Information (OSTI.GOV)
Timmons, Nicholas; Cooray, Asantha; Feng, Chang
2017-11-01
We measure the cosmic microwave background (CMB) skewness power spectrum in Planck, using frequency maps of the HFI instrument and the Sunyaev-Zel'dovich (SZ) component map. The two-to-one skewness power spectrum measures the cross-correlation between CMB lensing and the thermal SZ effect. We also directly measure the same cross-correlation using the Planck CMB lensing map and the SZ map and compare it to the cross-correlation derived from the skewness power spectrum. We model fit the SZ power spectrum and CMB lensing-SZ cross-power spectrum via the skewness power spectrum to constrain the gas pressure profile of dark matter halos. The gas pressure profile is compared to existing measurements in the literature including a direct estimate based on the stacking of SZ clusters in Planck.
Reflectance of micron-sized dust particles retrieved with the Umov law
NASA Astrophysics Data System (ADS)
Zubko, Evgenij; Videen, Gorden; Zubko, Nataliya; Shkuratov, Yuriy
2017-03-01
The maximum positive polarization Pmax that initially unpolarized light acquires when scattered from a particulate surface inversely correlates with its geometric albedo A. In the literature, this phenomenon is known as the Umov law. We investigate the Umov law in application to single-scattering submicron and micron-sized agglomerated debris particles, model particles that have highly irregular morphology. We find that if the complex refractive index m is constrained to Re(m)=1.4-1.7 and Im(m)=0-0.15, model particles of a given size distribution have a linear inverse correlation between log(Pmax) and log(A). This correlation resembles what is measured in particulate surfaces, suggesting a similar mechanism governing the Umov law in both systems. We parameterize the dependence of log(A) on log(Pmax) of single-scattering particles and analyze the airborne polarimetric measurements of atmospheric aerosols reported by Dolgos & Martins in [1]. We conclude that Pmax ≈ 50% measured by Dolgos & Martins corresponds to very dark aerosols having geometric albedo A=0.019 ± 0.005.
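A minimal sketch of how such a parameterization is applied in practice; the fit coefficients below are hypothetical placeholders, not the values derived in the paper:

import numpy as np

# Hypothetical linear Umov fit: log10(A) = a + b * log10(Pmax[%]),
# valid only within the refractive-index and size ranges of the study.
a, b = -0.3, -1.0          # placeholder coefficients, not the paper's fit

def geometric_albedo(p_max):
    # p_max as a fraction, e.g. 0.5 for 50% maximum positive polarization
    return 10.0 ** (a + b * np.log10(p_max * 100.0))

print(f"A = {geometric_albedo(0.50):.3f}")   # dark-aerosol example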
Pure state consciousness and its local reduction to neuronal space
NASA Astrophysics Data System (ADS)
Duggins, A. J.
2013-01-01
The single neuronal state can be represented as a vector in a complex space, spanned by an orthonormal basis of integer spike counts. In this model a scalar element of experience is associated with the instantaneous firing rate of a single sensory neuron over repeated stimulus presentations. Here the model is extended to composite neural systems that are tensor products of single neuronal vector spaces. Depiction of the mental state as a vector on this tensor product space is intended to capture the unity of consciousness. The density operator is introduced as its local reduction to the single neuron level, from which the firing rate can again be derived as the objective correlate of a subjective element. However, the relational structure of perceptual experience only emerges when the non-local mental state is considered. A metric of phenomenal proximity between neuronal elements of experience is proposed, based on the cross-correlation function of neurophysiology, but constrained by the association of theoretical extremes of correlation/anticorrelation in inseparable 2-neuron states with identical and opponent elements respectively.
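The "local reduction" step, i.e. the density operator of one neuron obtained from a joint two-neuron state, is the standard partial trace; a minimal numerical sketch with a hypothetical anticorrelated two-neuron state truncated to spike counts {0, 1}:

import numpy as np

d = 2                                     # truncated spike-count basis {0, 1}
psi = np.zeros(d * d, dtype=complex)
psi[0 * d + 1] = 1 / np.sqrt(2)           # |0,1>
psi[1 * d + 0] = 1 / np.sqrt(2)           # |1,0>  (anticorrelated firing)

rho = np.outer(psi, psi.conj()).reshape(d, d, d, d)
rho_1 = np.trace(rho, axis1=1, axis2=3)   # partial trace over neuron 2

print(rho_1.real)                         # mixed single-neuron state
print("mean spike count:", np.real(np.trace(rho_1 @ np.diag([0, 1]))))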
Dodd, C.K.; Dorazio, R.M.
2004-01-01
A critical variable in both ecological and conservation field studies is determining how many individuals of a species are present within a defined sampling area. Labor-intensive techniques such as capture-mark-recapture and removal sampling may provide estimates of abundance, but there are many logistical constraints to their widespread application. Many studies on terrestrial and aquatic salamanders use counts as an index of abundance, assuming that detection remains constant while sampling. If this constancy is violated, determination of detection probabilities is critical to the accurate estimation of abundance. Recently, a model was developed that provides a statistical approach that allows abundance and detection to be estimated simultaneously from spatially and temporally replicated counts. We adapted this model to estimate these parameters for salamanders sampled over a six year period in area-constrained plots in Great Smoky Mountains National Park. Estimates of salamander abundance varied among years, but annual changes in abundance did not vary uniformly among species. Except for one species, abundance estimates were not correlated with site covariates (elevation, soil and water pH, conductivity, air and water temperature). The uncertainty in the estimates was so large as to make correlations ineffectual in predicting which covariates might influence abundance. Detection probabilities also varied among species and sometimes among years for the six species examined. We found such a high degree of variation in our counts and in estimates of detection among species, sites, and years as to cast doubt upon the appropriateness of using count data to monitor population trends using a small number of area-constrained survey plots. Still, the model provided reasonable estimates of abundance that could make it useful in estimating population size from count surveys.
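The model the abstract adapts (a binomial-Poisson mixture for replicated counts, in the spirit of Royle 2004) has a likelihood that is easy to sketch; this is a generic illustration with toy data, not the authors' code:

import numpy as np
from scipy import stats
from scipy.optimize import minimize

def neg_log_lik(params, counts, n_max=200):
    # Negative log-likelihood of a binomial-Poisson mixture.
    # counts: (sites, visits) array; params: [log lambda, logit p].
    lam = np.exp(params[0])
    p = 1.0 / (1.0 + np.exp(-params[1]))
    ll = 0.0
    for y in counts:                        # sum out latent abundance N per site
        N = np.arange(y.max(), n_max + 1)
        log_terms = stats.poisson.logpmf(N, lam)
        for yj in y:
            log_terms = log_terms + stats.binom.logpmf(yj, N, p)
        ll += np.logaddexp.reduce(log_terms)
    return -ll

counts = np.array([[3, 2, 4], [0, 1, 0], [5, 3, 6]])    # toy plot-visit counts
fit = minimize(neg_log_lik, x0=[1.0, 0.0], args=(counts,), method="Nelder-Mead")
print("lambda-hat:", np.exp(fit.x[0]), "p-hat:", 1 / (1 + np.exp(-fit.x[1])))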
Statistical measures of Planck scale signal correlations in interferometers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogan, Craig J.; Kwon, Ohkyung
2015-06-22
A model-independent statistical framework is presented to interpret data from systems where the mean time derivative of positional cross correlation between world lines, a measure of spreading in a quantum geometrical wave function, is measured with a precision smaller than the Planck time. The framework provides a general way to constrain possible departures from perfect independence of classical world lines, associated with Planck scale bounds on positional information. A parametrized candidate set of possible correlation functions is shown to be consistent with the known causal structure of the classical geometry measured by an apparatus, and the holographic scaling of information suggested by gravity. Frequency-domain power spectra are derived that can be compared with interferometer data. As a result, simple projections of sensitivity for specific experimental set-ups suggest that measurements will directly yield constraints on a universal time derivative of the correlation function, and thereby confirm or rule out a class of Planck scale departures from classical geometry.
NASA Astrophysics Data System (ADS)
Gao, C.; Lekic, V.
2016-12-01
When constraining the structure of the Earth's continental lithosphere, multiple seismic observables are often combined due to their complementary sensitivities. The transdimensional Bayesian (TB) approach in seismic inversion allows model parameter uncertainties and trade-offs to be quantified with few assumptions. TB sampling yields an adaptive parameterization that enables simultaneous inversion for different model parameters (Vp, Vs, density, radial anisotropy), without the need for strong prior information or regularization. We use a reversible jump Markov chain Monte Carlo (rjMcMC) algorithm to incorporate different seismic observables - surface wave dispersion (SWD), Rayleigh wave ellipticity (ZH ratio), and receiver functions - into the inversion for the profiles of shear velocity (Vs), compressional velocity (Vp), density (ρ), and radial anisotropy (ξ) beneath a seismic station. By analyzing all three data types individually and together, we show that TB sampling can eliminate the need for a fixed parameterization based on prior information, and reduce trade-offs in model estimates. We then explore the effect of different types of misfit functions for receiver function inversion, which is a highly non-unique problem. We compare the synthetic inversion results using the L2 norm, cross-correlation-type and integral-type misfit functions by their convergence rates and retrieved seismic structures. In inversions in which only one type of model parameter is inverted (Vs in the case of SWD), assumed scaling relationships are often applied to account for sensitivity to other model parameters (e.g. Vp, ρ, ξ). Here we show that under a TB framework, we can eliminate scaling assumptions, while simultaneously constraining multiple model parameters to varying degrees. Furthermore, we compare the performance of TB inversion when different types of model parameters either share the same or use independent parameterizations. We show that different parameterizations can lead to differences in retrieved model parameters, consistent with limited data constraints. We then quantitatively examine the model parameter trade-offs and find that trade-offs between Vp and radial anisotropy might limit our ability to constrain shallow-layer radial anisotropy using current seismic observables.
Constraining geostatistical models with hydrological data to improve prediction realism
NASA Astrophysics Data System (ADS)
Demyanov, V.; Rojas, T.; Christie, M.; Arnold, D.
2012-04-01
Geostatistical models reproduce spatial correlation based on the available on-site data and on more general concepts about the modelled patterns, e.g. training images. One of the problems of modelling natural systems with geostatistics is maintaining realistic spatial features that agree with the physical processes in nature. Tuning the model parameters to the data may lead to geostatistical realisations with unrealistic spatial patterns, which would still honour the data. Such a model would result in poor predictions, even though it fits the available data well. Conditioning the model to a wider range of relevant data provides a remedy that avoids producing unrealistic features in spatial models. For instance, there are vast amounts of information about the geometries of river channels that can be used in describing fluvial environments. Relations between the geometrical channel characteristics (width, depth, wavelength, amplitude, etc.) are complex and non-parametric and exhibit a great deal of uncertainty, which is important to propagate rigorously into the predictive model. These relations can be described within a Bayesian approach as multi-dimensional prior probability distributions. We propose a way to constrain multi-point statistics models with intelligent priors obtained from analysing a vast collection of contemporary river patterns based on previously published works. We applied machine learning techniques, namely neural networks and support vector machines, to extract multivariate non-parametric relations between the geometrical characteristics of fluvial channels from the available data. An example demonstrates how ensuring geological realism helps to deliver more reliable predictions of a subsurface oil reservoir in a fluvial depositional environment.
ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.
Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J
2014-07-01
Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstructed static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity.
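A minimal sketch of the modelling idea (two subpopulations sharing an ODE structure but differing in a kinetic rate, with a mixture likelihood over single-cell measurements); the toy ODE, rates, weights, and noise level are all hypothetical, not the paper's NGF/Erk model:

import numpy as np
from scipy.integrate import solve_ivp
from scipy.stats import norm

def response(k, t):
    # Toy one-state signalling ODE: dx/dt = k * (1 - x), x(0) = 0
    sol = solve_ivp(lambda _, x: k * (1.0 - x), (0.0, t[-1]), [0.0], t_eval=t)
    return sol.y[0]

t = np.linspace(0.0, 10.0, 6)
k_slow, k_fast, w, sigma = 0.2, 1.5, 0.3, 0.05   # hypothetical parameters

def cell_loglik(y):
    # Mixture log-likelihood: the cell belongs to the slow subpopulation
    # with weight w, to the fast one with weight 1 - w
    l_slow = norm.logpdf(y, response(k_slow, t), sigma).sum()
    l_fast = norm.logpdf(y, response(k_fast, t), sigma).sum()
    return np.logaddexp(np.log(w) + l_slow, np.log(1.0 - w) + l_fast)

rng = np.random.default_rng(0)
y_obs = response(k_fast, t) + sigma * rng.standard_normal(t.size)
print(cell_loglik(y_obs))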
Nayhouse, Michael; Kwon, Joseph Sang-Il; Orkoulas, G
2012-05-28
In simulation studies of fluid-solid transitions, the solid phase is usually modeled as a constrained system in which each particle is confined to move in a single Wigner-Seitz cell. The constrained cell model has been used in the determination of fluid-solid coexistence via thermodynamic integration and other techniques. In the present work, the phase diagram of such a constrained system of Lennard-Jones particles is determined from constant-pressure simulations. The pressure-density isotherms exhibit inflection points which are interpreted as the mechanical stability limit of the solid phase. The phase diagram of the constrained system contains a critical and a triple point. The temperature and pressure at the critical and the triple point are both higher than those of the unconstrained system due to the reduction in the entropy caused by the single occupancy constraint.
NASA Astrophysics Data System (ADS)
Yudin, V. A.; England, S.; Liu, H.; Solomon, S. C.; Immel, T. J.; Maute, A. I.; Burns, A. G.; Foster, B.; Wu, Q.; Goncharenko, L. P.
2013-12-01
We examine the capability of novel configurations of community models, WACCM-X and TIME-GCM, to support current and forthcoming space-borne missions to monitor the dynamics and composition of the Mesosphere-Thermosphere-Ionosphere (MTI) system. In these configurations the lower atmosphere of WACCM-X is constrained by operational analyses and/or short-term forecasts provided by the Goddard Earth Observing System (GEOS-5) of the Global Modeling and Assimilation Office at NASA/GSFC. With the terrestrial weather of GEOS-5 and updated model physics, the simulations in the MTI are capable of reproducing observed signatures of the perturbed wave dynamics and ion-neutral coupling during recent stratospheric warming events, as well as short-term, annual and year-to-year variability of prevailing flows, planetary waves, tides, and composition. These 'terrestrial-weather'-driven simulations with day-to-day variable solar and geomagnetic inputs can provide the background state (first guess) and error statistics for the inverse algorithms of the new NASA missions ICON and GOLD at the locations and times of observations in the MTI region. With the two different viewing geometries (sun-synchronous and geostationary) of their instruments, ICON and GOLD will provide complementary global observations of temperature, winds and constituents to constrain first-principle forecast models. This paper will discuss the initial design of Observing Simulation Experiments (OSE) in WACCM-X/GEOS-5 and TIME-GCM. As recognized, OSE represent an excellent learning tool for designing and evaluating the observing capabilities of novel sensors; they can guide how to integrate and combine information from different instruments. The choice of assimilation schemes and of forecast and observational errors will be discussed, along with challenges and perspectives for constraining fast-varying tidal dynamics and their effects in models by a combination of sun-synchronous and geostationary observations from ICON and GOLD. We will also discuss how correlative space-borne and ground-based observations can verify OSE results in the observable and non-observable regions of the MTI.
A physically constrained classical description of the homogeneous nucleation of ice in water.
Koop, Thomas; Murray, Benjamin J
2016-12-07
Liquid water can persist in a supercooled state to below 238 K in the Earth's atmosphere, a temperature range where homogeneous nucleation becomes increasingly probable. However, the rate of homogeneous ice nucleation in supercooled water is poorly constrained, in part, because supercooled water eludes experimental scrutiny in the region of the homogeneous nucleation regime where it can exist only fleetingly. Here we present a new parameterization of the rate of homogeneous ice nucleation based on classical nucleation theory. In our approach, we constrain the key terms in classical theory, i.e., the diffusion activation energy and the ice-liquid interfacial energy, with physically consistent parameterizations of the pertinent quantities. The diffusion activation energy is related to the translational self-diffusion coefficient of water for which we assess a range of descriptions and conclude that the most physically consistent fit is provided by a power law. The other key term is the interfacial energy between the ice embryo and supercooled water whose temperature dependence we constrain using the Turnbull correlation, which relates the interfacial energy to the difference in enthalpy between the solid and liquid phases. The only adjustable parameter in our model is the absolute value of the interfacial energy at one reference temperature. That value is determined by fitting this classical model to a selection of laboratory homogeneous ice nucleation data sets between 233.6 K and 238.5 K. On extrapolation to temperatures below 233 K, into a range not accessible to standard techniques, we predict that the homogeneous nucleation rate peaks between about 227 and 231 K at a maximum nucleation rate many orders of magnitude lower than previous parameterizations suggest. This extrapolation to temperatures below 233 K is consistent with the most recent measurement of the ice nucleation rate in micrometer-sized droplets at temperatures of 227-232 K on very short time scales using an X-ray laser technique. In summary, we present a new physically constrained parameterization for homogeneous ice nucleation which is consistent with the latest literature nucleation data and our physical understanding of the properties of supercooled water.
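In the classical-theory form used here, the two constrained quantities enter the homogeneous nucleation rate as (schematic textbook expression; the paper's specific parameterizations of each term are omitted):

\[
J(T) = A \exp\!\left[-\frac{\Delta G_{\rm diff}(T)}{kT}\right] \exp\!\left[-\frac{16\pi\,\sigma_{\rm sl}^{3}(T)\,v_{\rm ice}^{2}}{3\,kT\,\Delta\mu(T)^{2}}\right],
\]

where ΔG_diff is the diffusion activation energy (tied to the self-diffusion coefficient of water), σ_sl the ice-liquid interfacial energy (constrained through the Turnbull correlation), v_ice the volume per molecule in ice, and Δμ the liquid-ice chemical potential difference per molecule.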
Updated Reference Model for Heat Generation in the Lithosphere
NASA Astrophysics Data System (ADS)
Wipperfurth, S. A.; Sramek, O.; Roskovec, B.; Mantovani, F.; McDonough, W. F.
2017-12-01
Models integrating geophysics and geochemistry allow for characterization of the Earth's heat budget and geochemical evolution. Global lithospheric geophysical models are now constrained by surface and body wave data and are classified into several unique tectonic types. Global lithospheric geochemical models have evolved from petrological characterization of layers to a combination of petrologic and seismic constraints. Because of these advances regarding our knowledge of the lithosphere, it is necessary to create an updated chemical and physical reference model. We are developing a global lithospheric reference model based on LITHO1.0 (segmented into 1°lon x 1°lat x 9-layers) and seismological-geochemical relationships. Uncertainty assignments and correlations are assessed for its physical attributes, including layer thickness, Vp and Vs, and density. This approach yields uncertainties for the masses of the crust and lithospheric mantle. Heat producing element abundances (HPE: U, Th, and K) are ascribed to each volume element. These chemical attributes are based upon the composition of subducting sediment (sediment layers), composition of surface rocks (upper crust), a combination of petrologic and seismic correlations (middle and lower crust), and a compilation of xenolith data (lithospheric mantle). The HPE abundances are correlated within each voxel, but not vertically between layers. Efforts to provide correlation of abundances horizontally between each voxel are discussed. These models are used further to critically evaluate the bulk lithosphere heat production in the continents and the oceans. Cross-checks between our model and results from: 1) heat flux (Artemieva, 2006; Davies, 2013; Cammarano and Guerri, 2017), 2) gravity (Reguzzoni and Sampietro, 2015), and 3) geochemical and petrological models (Rudnick and Gao, 2014; Hacker et al. 2015) are performed.
Unveiling Quasar Fueling through a Public Snapshot Survey of Quasar Host Environments
NASA Astrophysics Data System (ADS)
Johnson, Sean
2017-08-01
Feedback from quasars is thought to play a vital role in galaxy evolution, but the relationship between quasars and the halo gas that fuels star formation on long timescales is not well constrained. Recent observations of the content of quasar host halos have found unusually high covering fractions of cool gas observed in absorption in background quasar spectra. The cool halo gas is strongly correlated with quasar luminosity and exceeds what is observed around non-AGN galaxies by a factor of two. Together, these observations provide compelling evidence for a connection between AGN activity and halo gas on 20-200 kpc scales. The high covering fraction and correlation with quasar luminosity may be the result of debris from the galaxy mergers thought to trigger luminous quasars, or of the halo gas of satellites in gas-rich groups amenable to quasar feeding. If this is the case, then the cool gas observed in absorption will be correlated with signatures of recent galaxy interactions in the quasar host or with satellites close to the background sightline. Here, we propose a snapshot imaging survey of z<1 quasars with available constraints on halo gas content to examine a possible correlation between cool halo gas and galaxy interaction signatures. Galaxy morphologies and faint tidal features at z ≲ 1 can only be observed with the high-resolution imaging capabilities of HST, due to the substantial flux in the extended wings of AO point-spread functions. The images will be of significant archival value for studying the galaxy environments of quasars and for constraining gas flow models with multi-sightline halo gas studies of galaxies at lower redshift than the foreground and background quasars.
NASA Astrophysics Data System (ADS)
Dainotti, Maria G.; Petrosian, Vahe'; Ostrowski, Michal
2015-01-01
Gamma-ray bursts (GRBs), which have been observed up to redshifts z ≈ 9.5, can be good probes of the early universe and have the potential of testing cosmological models. The analysis by Dainotti of GRB Swift afterglow lightcurves with known redshifts and definite X-ray plateau shows an anti-correlation between the rest-frame duration of the plateau phase and its X-ray luminosity (the so-called Dainotti relation).
NASA Astrophysics Data System (ADS)
Cao, Guangxi; Zhang, Minjia; Li, Qingchen
2017-04-01
This study focuses on multifractal detrended cross-correlation analysis of the different volatility intervals of Mainland China, US, and Hong Kong stock markets. A volatility-constrained multifractal detrended cross-correlation analysis (VC-MF-DCCA) method is proposed to study the volatility conductivity of Mainland China, US, and Hong Kong stock markets. Empirical results indicate that fluctuation may be related to important activities in real markets. The Hang Seng Index (HSI) stock market is more influential than the Shanghai Composite Index (SCI) stock market. Furthermore, the SCI stock market is more influential than the Dow Jones Industrial Average stock market. The conductivity between the HSI and SCI stock markets is the strongest. HSI was the most influential market in the large fluctuation interval of 1991 to 2014. The autoregressive fractionally integrated moving average method is used to verify the validity of VC-MF-DCCA. Results show that VC-MF-DCCA is effective.
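To make the core of this family of methods concrete, here is a minimal sketch of the detrended cross-correlation fluctuation function that MF-DCCA-type analyses build on; the volatility-interval constraint and the q-order moments of the published VC-MF-DCCA are omitted, and all names and values are illustrative:

```python
import numpy as np

def dcca_fluctuation(x, y, s):
    """Detrended cross-covariance fluctuation F(s) for window size s."""
    X = np.cumsum(x - x.mean())              # integrated profiles
    Y = np.cumsum(y - y.mean())
    t = np.arange(s)
    f2 = []
    for w in range(len(X) // s):
        seg = slice(w * s, (w + 1) * s)
        rx = X[seg] - np.polyval(np.polyfit(t, X[seg], 1), t)  # linear detrend
        ry = Y[seg] - np.polyval(np.polyfit(t, Y[seg], 1), t)
        f2.append(np.mean(rx * ry))          # detrended cross-covariance (signed)
    return np.sqrt(np.abs(np.mean(f2)))

rng = np.random.default_rng(0)
x, y = rng.standard_normal(4096), rng.standard_normal(4096)
F = {s: dcca_fluctuation(x, y, s) for s in (16, 32, 64, 128, 256)}
# MF-DCCA generalizes this with q-th order moments of the window covariances;
# a power law F(s) ~ s^h signals long-range cross-correlation.
print(F)
```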
NASA Technical Reports Server (NTRS)
Erickson, D. J., III; Hernandez, J.; Ginoux, P.; Gregg, W.; Kawa, R.; Behrenfeld, M.; Esaias, W.; Einaudi, Franco (Technical Monitor)
2000-01-01
Since the atmospheric deposition of iron has been linked to primary productivity in various oceanic regions, we have conducted an objective study of the correlation between dust deposition and satellite remotely sensed surface ocean chlorophyll concentrations. We present a global analysis of the correlation between atmospheric dust deposition derived from a satellite-based 3-D atmospheric transport model and SeaWiFS estimates of ocean color. We use the monthly mean dust deposition fields of Ginoux et al., which are based on a global model of dust generation and transport. This model is driven by atmospheric circulation from the Data Assimilation Office (DAO) for the period 1995-1998. This global dust model is constrained by several satellite estimates of standard circulation characteristics. We then perform an analysis of the correlation between the dust deposition and the 1998 SeaWiFS ocean color data for each 2.0 deg x 2.5 deg lat/long grid point, for each month of the year. The results are surprisingly robust. The region between 40°S and 60°S has correlation coefficients from 0.6 to 0.95, statistically significant at the 0.05 level. There are swaths of high correlation at the edges of some major ocean current systems. We interpret these correlations as reflecting areas where shear-related turbulence brings nitrogen and phosphorus from depth into the surface ocean, so that the atmospheric supply of iron provides the limiting nutrient and the correlation between iron deposition and surface ocean chlorophyll is high. There is a region in the western North Pacific with high correlation, reflecting the input of Asian dust to that region. The southern hemisphere has an average correlation coefficient of 0.72, compared with 0.42 in the northern hemisphere, consistent with present conceptual models of where atmospheric iron deposition may play a role in surface ocean biogeochemical cycles. The spatial structure of the correlation fields will be discussed within the context of guiding the design of field programs.
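A minimal sketch of this kind of gridpoint-by-gridpoint correlation analysis, using synthetic stand-ins for the monthly dust-deposition and chlorophyll fields; the array shapes and the per-cell t-test are assumptions for illustration, not the authors' exact procedure:

```python
import numpy as np
from scipy import stats

# Hypothetical (12, nlat, nlon) monthly fields on a common grid
rng = np.random.default_rng(1)
dust = rng.lognormal(size=(12, 72, 144))
chl = rng.lognormal(size=(12, 72, 144))

def gridwise_pearson(a, b):
    """Pearson correlation along the time axis for every grid cell."""
    aa = a - a.mean(axis=0)
    bb = b - b.mean(axis=0)
    return (aa * bb).sum(axis=0) / np.sqrt((aa**2).sum(axis=0) * (bb**2).sum(axis=0))

r = gridwise_pearson(dust, chl)                  # shape (nlat, nlon)
n = dust.shape[0]
t = r * np.sqrt((n - 2) / (1 - r**2))            # two-sided t-test per cell
p = 2 * stats.t.sf(np.abs(t), df=n - 2)
print((p < 0.05).mean())                         # fraction of 'significant' cells
```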
Modeling PSInSAR time series without phase unwrapping
Zhang, L.; Ding, X.; Lu, Z.
2011-01-01
In this paper, we propose a least-squares-based method for multitemporal synthetic aperture radar interferometry that allows one to estimate deformations without the need of phase unwrapping. The method utilizes a series of multimaster wrapped differential interferograms with short baselines and focuses on arcs at which there are no phase ambiguities. An outlier detector is used to identify and remove the arcs with phase ambiguities, and a pseudoinverse of the variance-covariance matrix is used as the weight matrix of the correlated observations. The deformation rates at coherent points are estimated with a least squares model constrained by reference points. The proposed approach is verified with a set of simulated data.
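A minimal numpy sketch of the weighted least-squares step described here, with the pseudoinverse of the observation variance-covariance matrix used as the weight matrix; the design matrix and noise model are placeholders, and the arc-screening step is only noted in a comment:

```python
import numpy as np

rng = np.random.default_rng(2)

# A: placeholder design matrix linking deformation-rate parameters at
# coherent points to double-difference observations on ambiguity-free arcs
# (outlier screening of arcs is assumed to have been done already).
A = rng.standard_normal((50, 8))
x_true = rng.standard_normal(8)
Q = 0.1 * np.eye(50) + 0.02              # correlated observation covariance
d = A @ x_true + rng.multivariate_normal(np.zeros(50), Q)

W = np.linalg.pinv(Q)                    # pseudoinverse as the weight matrix
x_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ d)
print(np.abs(x_hat - x_true).max())
```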
NASA Astrophysics Data System (ADS)
Yuan, Sihan; Eisenstein, Daniel J.; Garrison, Lehman H.
2018-04-01
We present the GeneRalized ANd Differentiable Halo Occupation Distribution (GRAND-HOD) routine that generalizes the standard 5 parameter halo occupation distribution model (HOD) with various halo-scale physics and assembly bias. We describe the methodology of 4 different generalizations: satellite distribution generalization, velocity bias, closest approach distance generalization, and assembly bias. We showcase the signatures of these generalizations in the 2-point correlation function (2PCF) and the squeezed 3-point correlation function (squeezed 3PCF). We identify generalized HOD prescriptions that are nearly degenerate in the projected 2PCF and demonstrate that these degeneracies are broken in the redshift-space anisotropic 2PCF and the squeezed 3PCF. We also discuss the possibility of identifying degeneracies in the anisotropic 2PCF and further demonstrate the extra constraining power of the squeezed 3PCF on galaxy-halo connection models. We find that within our current HOD framework, the anisotropic 2PCF can predict the squeezed 3PCF better than its statistical error. This implies that a discordant squeezed 3PCF measurement could falsify the particular HOD model space. Alternatively, it is possible that further generalizations of the HOD model would open opportunities for the squeezed 3PCF to provide novel parameter measurements. The GRAND-HOD Python package is publicly available at https://github.com/SandyYuan/GRAND-HOD.
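For context, a sketch of the standard 5-parameter HOD that GRAND-HOD generalizes, in the commonly used Zheng et al. (2007) form; this is not the package's API (see the repository for that), and the parameter values are illustrative:

```python
import numpy as np
from scipy.special import erf

def n_cen(M, logMmin, sigma_logM):
    """Mean central occupation of the standard 5-parameter HOD."""
    return 0.5 * (1.0 + erf((np.log10(M) - logMmin) / sigma_logM))

def n_sat(M, logMmin, sigma_logM, logM0, logM1, alpha):
    """Mean satellite occupation; zero below the cutoff mass M0."""
    M0, M1 = 10.0 ** logM0, 10.0 ** logM1
    return n_cen(M, logMmin, sigma_logM) * (np.clip(M - M0, 0.0, None) / M1) ** alpha

M = np.logspace(11, 15, 5)               # halo masses (illustrative, Msun/h)
print(n_cen(M, 13.0, 0.4))
print(n_sat(M, 13.0, 0.4, 12.5, 13.8, 1.0))
```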
Sparsity-promoting inversion for modeling of irregular volcanic deformation source
NASA Astrophysics Data System (ADS)
Zhai, G.; Shirzaei, M.
2016-12-01
Kīlauea volcano, Hawai'i Island, has a complex magmatic system. Nonetheless, kinematic models of the summit reservoir have so far been limited to first-order analytical solutions with pre-determined geometry. To investigate the complex geometry and kinematics of the summit reservoir, we apply a multitrack, multitemporal wavelet-based InSAR (Interferometric Synthetic Aperture Radar) algorithm and a geometry-free, time-dependent modeling scheme considering a superposition of point centers of dilatation (PCDs). Applying Principal Component Analysis (PCA) to the time-dependent source model, six spatially independent deformation zones (i.e., reservoirs) are identified, whose locations are consistent with previous studies. The time dependence of the model also allows identifying periods of correlated or anti-correlated behavior between reservoirs. Hence, we suggest that the reservoirs are likely connected and form a complex magmatic reservoir [Zhai and Shirzaei, 2016]. To obtain a physically meaningful representation of the complex reservoir, we devise a new sparsity-promoting modeling scheme assuming active magma bodies are well-localized melt accumulations (i.e., outliers in the background crust). The major steps include inverting surface deformation data using a hybrid L1- and L2-norm regularization approach to solve for a sparse volume-change distribution, and then implementing a BEM-based method to solve for the opening distribution on a triangular mesh representing the complex reservoir. Using this approach, we are able to constrain the internal excess pressure of a magma body with irregular geometry, satisfying a uniformly pressurized boundary condition on the surface of the magma chamber. The inversion method with sparsity constraint is tested using five synthetic source geometries, including a torus, a prolate ellipsoid, and a sphere, as well as horizontal and vertical L-shaped bodies. The results show that source dimension, depth, and shape are well recovered. Afterward, we apply this modeling scheme to deformation observed at the Kīlauea summit to constrain the magmatic source geometry and revise the kinematics of Kīlauea's shallow plumbing system. Such a model is valuable for understanding the physical processes in a magmatic reservoir, and the method can readily be applied to other volcanic settings.
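A minimal sketch of the sparsity-promoting idea: an iterative soft-thresholding (ISTA) solver for an L1-regularized linear inversion of volume changes. The paper's hybrid L1/L2 scheme and BEM step are not reproduced, and the Green's function matrix here is a random placeholder:

```python
import numpy as np

def ista(G, d, lam, n_iter=500):
    """Iterative soft-thresholding for min ||G m - d||^2 + lam ||m||_1,
    which promotes a sparse (well-localized) volume-change distribution."""
    step = 1.0 / (2.0 * np.linalg.norm(G, 2) ** 2)      # 1 / Lipschitz constant
    m = np.zeros(G.shape[1])
    for _ in range(n_iter):
        z = m - step * 2.0 * G.T @ (G @ m - d)          # gradient step on misfit
        m = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return m

rng = np.random.default_rng(3)
G = rng.standard_normal((100, 300))      # placeholder Green's functions: PCDs -> surface
m_true = np.zeros(300)
m_true[[40, 41, 200]] = [1.0, 0.8, -0.5] # a few active volume-change elements
d = G @ m_true + 0.01 * rng.standard_normal(100)
m_hat = ista(G, d, lam=0.5)
```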
The relationship between the spatial scaling of biodiversity and ecosystem stability
Delsol, Robin; Loreau, Michel; Haegeman, Bart
2018-01-01
Aim Ecosystem stability and its link with biodiversity have mainly been studied at the local scale. Here we present a simple theoretical model to address the joint dependence of diversity and stability on spatial scale, from local to continental. Methods The notion of stability we use is based on the temporal variability of an ecosystem-level property, such as primary productivity. In this way, our model integrates the well-known species–area relationship (SAR) with a recent proposal to quantify the spatial scaling of stability, called the invariability–area relationship (IAR). Results We show that the link between the two relationships strongly depends on whether the temporal fluctuations of the ecosystem property of interest are more correlated within than between species. If fluctuations are correlated within species but not between them, then the IAR is strongly constrained by the SAR. If instead individual fluctuations are only correlated by spatial proximity, then the IAR is unrelated to the SAR. We apply these two correlation assumptions to explore the effects of species loss and habitat destruction on stability, and find a rich variety of multi-scale spatial dependencies, with marked differences between the two assumptions. Main conclusions The dependence of ecosystem stability on biodiversity across spatial scales is governed by the spatial decay of correlations within and between species. Our work provides a point of reference for mechanistic models and data analyses. More generally, it illustrates the relevance of macroecology for ecosystem functioning and stability. PMID:29651225
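A toy numerical illustration of the key point: the variance (and hence invariability) of an aggregated ecosystem property depends on whether fluctuations are correlated within species or only by spatial proximity. The correlation structures below are assumptions for illustration, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(4)
n_sp, n_ind = 10, 100                        # species x individuals
n = n_sp * n_ind
species = np.repeat(np.arange(n_sp), n_ind)

# (a) fluctuations perfectly correlated within species, independent between
C_within = (species[:, None] == species[None, :]).astype(float)

# (b) fluctuations correlated only by spatial proximity (exponential decay)
pos = rng.uniform(0, 100, size=(n, 2))
dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
C_space = np.exp(-dist / 10.0)

# Variance of the ecosystem-level sum of n unit-variance, unit-mean
# contributions is the sum of the correlation matrix entries; invariability
# is the inverse squared coefficient of variation, n^2 / variance.
for name, C in (("within-species", C_within), ("spatial", C_space)):
    print(name, "invariability =", n ** 2 / C.sum())
```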
Effective theory of flavor for Minimal Mirror Twin Higgs
Barbieri, Riccardo; Hall, Lawrence J.; Harigaya, Keisuke
2017-10-03
We consider two copies of the Standard Model, interchanged by an exact parity symmetry, P. The observed fermion mass hierarchy is described by suppression factors ϵ^{n_i} for charged fermion i, as can arise in Froggatt-Nielsen and extra-dimensional theories of flavor. The corresponding flavor factors in the mirror sector are ϵ'^{n_i}, so that spontaneous breaking of the parity P arises from a single parameter ϵ'/ϵ, yielding a tightly constrained version of Minimal Mirror Twin Higgs, introduced in our previous paper. Models are studied for simple values of n_i, including in particular one with SU(5)-compatibility, that describe the observed fermion mass hierarchy. The entire mirror quark and charged lepton spectrum is broadly predicted in terms of ϵ'/ϵ, as are the mirror QCD scale and the decoupling temperature between the two sectors. Helium-, hydrogen- and neutron-like mirror dark matter candidates are constrained by self-scattering and relic ionization. Lastly, in each case, the allowed parameter space can be fully probed by proposed direct detection experiments. Correlated predictions are made as well for the Higgs signal strength and the amount of dark radiation.
Exploring the Relationship Between Planet Mass and Atmospheric Metallicity for Cool Giant Planets
NASA Astrophysics Data System (ADS)
Thomas, Nancy H.; Wong, Ian; Knutson, Heather; Deming, Drake; Desert, Jean-Michel; Fortney, Jonathan J.; Morley, Caroline; Kammer, Joshua A.; Line, Michael R.
2016-10-01
Measurements of the average densities of exoplanets have begun to help constrain their bulk compositions and to provide insight into their formation locations and accretionary histories. Current mass and radius measurements suggest an inverse relationship between a planet's bulk metallicity and its mass, a relationship also seen in the gas and ice giant planets of our own solar system. We expect atmospheric metallicity to similarly increase with decreasing planet mass, but there are currently few constraints on the atmospheric metallicities of extrasolar giant planets. For hydrogen-dominated atmospheres, equilibrium chemistry models predict a transition from CO to CH4 below ~1200 K. However, with increased atmospheric metallicity the relative abundance of CH4 is depleted and CO is enhanced. In this study we present new secondary eclipse observations of a set of cool (<1200 K) giant exoplanets at 3.6 and 4.5 microns using the Spitzer Space Telescope, which allow us to constrain their relative abundances of CH4 and CO and corresponding atmospheric metallicities. We discuss the implications of our results for the proposed correlation between planet mass and atmospheric metallicity as predicted by the core accretion models and observed in our solar system.
Characteristics and habitat of deep vs. shallow slow slip events
NASA Astrophysics Data System (ADS)
Wipperfurth, S. A.; Sramek, O.; Roskovec, B.; Mantovani, F.; McDonough, W. F.
2016-12-01
Models integrating geophysics and geochemistry allow for characterization of the Earth's heat budget and geochemical evolution. Global lithospheric geophysical models are now constrained by surface and body wave data and are classified into several unique tectonic types. Global lithospheric geochemical models have evolved from petrological characterization of layers to a combination of petrologic and seismic constraints. Because of these advances regarding our knowledge of the lithosphere, it is necessary to create an updated chemical and physical reference model. We are developing a global lithospheric reference model based on LITHO1.0 (segmented into 1°lon x 1°lat x 9-layers) and seismological-geochemical relationships. Uncertainty assignments and correlations are assessed for its physical attributes, including layer thickness, Vp and Vs, and density. This approach yields uncertainties for the masses of the crust and lithospheric mantle. Heat producing element abundances (HPE: U, Th, and K) are ascribed to each volume element. These chemical attributes are based upon the composition of subducting sediment (sediment layers), composition of surface rocks (upper crust), a combination of petrologic and seismic correlations (middle and lower crust), and a compilation of xenolith data (lithospheric mantle). The HPE abundances are correlated within each voxel, but not vertically between layers. Efforts to provide correlation of abundances horizontally between each voxel are discussed. These models are used further to critically evaluate the bulk lithosphere heat production in the continents and the oceans. Cross-checks between our model and results from: 1) heat flux (Artemieva, 2006; Davies, 2013; Cammarano and Guerri, 2017), 2) gravity (Reguzzoni and Sampietro, 2015), and 3) geochemical and petrological models (Rudnick and Gao, 2014; Hacker et al. 2015) are performed.
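A sketch of the kind of Monte Carlo propagation such a reference model implies for a single voxel, with correlated HPE abundances and Rybach-style heat-production constants; all numerical values and correlations are placeholders, not LITHO1.0 numbers:

```python
import numpy as np

rng = np.random.default_rng(13)
n = 100_000

# One illustrative voxel: thickness [m] and density [kg m^-3] with assumed
# Gaussian uncertainties (placeholder values, not LITHO1.0 numbers).
thick = rng.normal(35e3, 3e3, n)
rho = rng.normal(2850.0, 50.0, n)

# Correlated log-abundances of U [ppm], Th [ppm], K [wt%] within the voxel;
# no vertical correlation between layers, as described in the abstract.
mean = np.log([1.3, 5.6, 1.5])
corr = np.array([[1.0, 0.7, 0.5],
                 [0.7, 1.0, 0.5],
                 [0.5, 0.5, 1.0]])
sig = np.array([0.3, 0.3, 0.2])
u, th, k = np.exp(rng.multivariate_normal(mean, corr * np.outer(sig, sig), n).T)

# Rybach-style heat production per kg of rock [W/kg]
h = 1e-11 * (9.52 * u + 2.56 * th + 3.48 * k)
Q = h * rho * thick                      # heat production per unit area [W m^-2]
print(np.percentile(Q, [16, 50, 84]))
```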
Describing litho-constrained layout by a high-resolution model filter
NASA Astrophysics Data System (ADS)
Tsai, Min-Chun
2008-05-01
A novel high-resolution model (HRM) filtering technique is proposed to describe litho-constrained layouts, i.e., layouts that are difficult to pattern or are highly sensitive to process fluctuations under current lithography technologies. HRM applies a short-wavelength (or high-NA) model simulation directly to the pre-OPC, original design layout to filter out low spatial-frequency regions and retain the high spatial-frequency components that are litho-constrained. Since neither OPC nor mask-synthesis steps are involved, this new technique is highly efficient in run time and can be used at the design stage to detect and fix litho-constrained patterns. The method successfully captured all the hot-spots, with less than 15% overshoot, on a realistic 80 mm2 full-chip M1 layout at the 65nm technology node. A step-by-step derivation of the HRM technique is presented in this paper.
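A crude stand-in for the HRM idea: a Gaussian high-pass filter applied to a rasterized pre-OPC layout to flag high spatial-frequency regions. The actual HRM uses a short-wavelength/high-NA optical model simulation rather than this filter, and all parameters below are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(5)
layout = (rng.random((512, 512)) > 0.7).astype(float)   # toy rasterized pre-OPC layout

# A wide Gaussian low-pass stands in for what a long-wavelength (low-NA)
# system resolves; the residual flags high spatial-frequency regions.
low = gaussian_filter(layout, sigma=4.0)
highpass = np.abs(layout - low)
hotspots = highpass > 0.35                               # illustrative threshold
print("flagged fraction:", hotspots.mean())
```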
Phase retrieval using regularization method in intensity correlation imaging
NASA Astrophysics Data System (ADS)
Li, Xiyu; Gao, Xin; Tang, Jia; Lu, Changming; Wang, Jianli; Wang, Bin
2014-11-01
Intensity correlation imaging (ICI) can achieve high-resolution imaging with ground-based, low-precision mirrors; in the imaging process, a phase retrieval algorithm must be used to reconstruct the object's image. However, the algorithms currently used (such as the hybrid input-output algorithm) are sensitive to noise and prone to stagnation, while the signal-to-noise ratio of intensity interferometry is low, especially when imaging astronomical objects. In this paper, we build a mathematical model of phase retrieval and simplify it into a constrained optimization problem for a multidimensional function. A new error function is designed from the noise distribution and prior information using a regularization method. Simulation results show that the regularization method improves the performance of the phase retrieval algorithm and yields better images, especially under low-SNR conditions.
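For reference, a minimal error-reduction phase-retrieval loop of the kind this paper regularizes, alternating between the measured Fourier modulus and a support constraint (the paper's noise-adapted error function is not reproduced here):

```python
import numpy as np

def error_reduction(fourier_mag, support, n_iter=200, seed=0):
    """Basic error-reduction phase retrieval: alternate between enforcing
    the measured Fourier modulus and a known object support."""
    rng = np.random.default_rng(seed)
    phase = np.exp(1j * 2 * np.pi * rng.random(fourier_mag.shape))
    g = np.fft.ifft2(fourier_mag * phase).real
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = fourier_mag * np.exp(1j * np.angle(G))   # Fourier-modulus constraint
        g = np.fft.ifft2(G).real
        g = np.where(support & (g > 0), g, 0.0)      # support + positivity
    return g

# Tiny demo: a centred square object and its (noise-free) Fourier modulus.
obj = np.zeros((64, 64)); obj[24:40, 24:40] = 1.0
mag = np.abs(np.fft.fft2(obj))
supp = np.zeros_like(obj, dtype=bool); supp[16:48, 16:48] = True
rec = error_reduction(mag, supp)
```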
NASA Astrophysics Data System (ADS)
Nesbet, Robert K.
2018-05-01
Velocities in stable circular orbits about galaxies, a measure of centripetal gravitation, exceed the expected Kepler/Newton velocity as orbital radius increases. Standard Λ cold dark matter (ΛCDM) attributes this anomaly to galactic dark matter. McGaugh et al. have recently shown for 153 disc galaxies that observed radial acceleration is an apparently universal function of classical acceleration computed for observed galactic baryonic mass density. This is consistent with the empirical modified Newtonian dynamics (MOND) model, not requiring dark matter. It is shown here that suitably constrained ΛCDM and conformal gravity (CG) also produce such a universal correlation function. ΛCDM requires a very specific dark matter distribution, while the implied CG non-classical acceleration must be independent of galactic mass. All three constrained radial acceleration functions agree with the empirical baryonic v4 Tully-Fisher relation. Accurate rotation data in the nominally flat velocity range could distinguish between MOND, ΛCDM, and CG.
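For reference, the universal radial-acceleration relation of McGaugh et al. is commonly written with the fitting function

```latex
g_{\mathrm{obs}}(g_{\mathrm{bar}}) =
  \frac{g_{\mathrm{bar}}}{1 - e^{-\sqrt{g_{\mathrm{bar}}/g_{\dagger}}}},
\qquad g_{\dagger} \approx 1.2 \times 10^{-10}\ \mathrm{m\,s^{-2}},
```

which reduces to g_obs ≈ g_bar at high accelerations and to g_obs ≈ √(g_bar g†) in the low-acceleration regime, reproducing the baryonic v^4 Tully-Fisher scaling v_f^4 ≈ G M_b g†.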
NASA Astrophysics Data System (ADS)
Shi, Z.; Crowell, S.; Luo, Y.; Rayner, P. J.; Moore, B., III
2015-12-01
Uncertainty in the predicted carbon-climate feedback largely stems from poor parameterization of global land models. However, calibration of global land models with observations has been extremely challenging for at least two reasons. First, we lack global data products from systematic measurements of land surface processes. Second, the computational demand of estimating model parameters is insurmountable due to the complexity of global land models. In this project, we will use OCO-2 retrievals of the dry-air mole fraction XCO2 and solar-induced fluorescence (SIF) to independently constrain estimates of net ecosystem exchange (NEE) and gross primary production (GPP). The constrained NEE and GPP will be combined with data products of global standing biomass, soil organic carbon, and soil respiration to improve the Community Land Model version 4.5 (CLM4.5). Specifically, we will first develop a high-fidelity emulator of CLM4.5 based on the matrix representation of the terrestrial carbon cycle. It has been shown that the emulator fully represents the original model and can be used effectively for data assimilation to constrain parameter estimation. We will focus on calibrating the key model parameters for the carbon cycle (e.g., maximum carboxylation rate, turnover times and transfer coefficients of soil carbon pools, and temperature sensitivity of respiration). The Bayesian Markov chain Monte Carlo (MCMC) method will be used to assimilate the global databases into the high-fidelity emulator to constrain the model parameters, which will then be incorporated back into the original CLM4.5. The calibrated CLM4.5 will be used to make scenario-based projections. In addition, we will conduct observing system simulation experiments (OSSEs) to evaluate how sampling frequency and record length affect model constraint and prediction.
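A minimal sketch of the MCMC calibration step on a toy one-pool respiration model (not CLM4.5 or its matrix emulator); the model form, priors, and proposal scales are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)

def model(turnover, q10, temps):
    """Toy respiration flux from a single soil-carbon pool (illustrative)."""
    return (100.0 / turnover) * q10 ** ((temps - 15.0) / 10.0)

temps = np.linspace(0, 30, 40)
obs = model(12.0, 2.0, temps) + rng.normal(0, 0.5, temps.size)

def log_post(theta):
    turnover, q10 = theta
    if not (1 < turnover < 100 and 1 < q10 < 4):     # flat priors
        return -np.inf
    resid = obs - model(turnover, q10, temps)
    return -0.5 * np.sum((resid / 0.5) ** 2)

theta = np.array([20.0, 1.5]); lp = log_post(theta); chain = []
for _ in range(20000):                               # Metropolis sampler
    prop = theta + rng.normal(0, [0.5, 0.05])
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
posterior = np.array(chain)[5000:]                   # drop burn-in
print(posterior.mean(axis=0))
```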
Constrained and Unconstrained Partial Adjacent Category Logit Models for Ordinal Response Variables
ERIC Educational Resources Information Center
Fullerton, Andrew S.; Xu, Jun
2018-01-01
Adjacent category logit models are ordered regression models that focus on comparisons of adjacent categories. These models are particularly useful for ordinal response variables with categories that are of substantive interest. In this article, we consider unconstrained and constrained versions of the partial adjacent category logit model, which…
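For reference, the adjacent category logit compares each pair of neighbouring categories; a partial model constrains some coefficients to be equal across the J−1 cutpoints while freeing others (a textbook form, not necessarily the article's notation):

```latex
\log\frac{P(Y = j+1 \mid \mathbf{x})}{P(Y = j \mid \mathbf{x})}
 = \alpha_j + \mathbf{x}_1'\boldsymbol{\beta} + \mathbf{x}_2'\boldsymbol{\beta}_j,
\qquad j = 1, \dots, J-1,
```

where the variables in x1 have a single constrained effect β and those in x2 have cutpoint-specific effects β_j.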
Poissant, Jocelyn; Wilson, Alastair J; Coltman, David W
2010-01-01
The independent evolution of the sexes may often be constrained if male and female homologous traits share a similar genetic architecture. Thus, cross-sex genetic covariance is assumed to play a key role in the evolution of sexual dimorphism (SD) with consequent impacts on sexual selection, population dynamics, and speciation processes. We compiled cross-sex genetic correlations (r(MF)) estimates from 114 sources to assess the extent to which the evolution of SD is typically constrained and test several specific hypotheses. First, we tested if r(MF) differed among trait types and especially between fitness components and other traits. We also tested the theoretical prediction of a negative relationship between r(MF) and SD based on the expectation that increases in SD should be facilitated by sex-specific genetic variance. We show that r(MF) is usually large and positive but that it is typically smaller for fitness components. This demonstrates that the evolution of SD is typically genetically constrained and that sex-specific selection coefficients may often be opposite in sign due to sub-optimal levels of SD. Most importantly, we confirm that sex-specific genetic variance is an important contributor to the evolution of SD by validating the prediction of a negative correlation between r(MF) and SD.
ODE Constrained Mixture Modelling: A Method for Unraveling Subpopulation Structures and Dynamics
Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J.
2014-01-01
Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstructed static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity. PMID:24992156
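A minimal sketch of the idea: each subpopulation follows its own ODE, and single-cell measurements are fitted with a mixture likelihood over the subpopulation means. The one-state activation ODE, two-component mixture, and Gaussian noise are simplifying assumptions, far smaller than the paper's pathway model:

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import minimize
from scipy.stats import norm

def response(k, t):
    """Subpopulation mean response: simple activation ODE dx/dt = k(1 - x)."""
    return odeint(lambda x, tt: k * (1.0 - x), 0.0, t)[:, 0]

t = np.linspace(0, 10, 6)
k_fast, k_slow, w, sigma = 1.0, 0.2, 0.6, 0.05
rng = np.random.default_rng(7)

# Simulate single-cell data from two subpopulations at the final time point
n_cells = 500
labels = rng.random(n_cells) < w
mu_fast, mu_slow = response(k_fast, t)[-1], response(k_slow, t)[-1]
data = np.where(labels, mu_fast, mu_slow) + rng.normal(0, sigma, n_cells)

def neg_log_lik(params):
    kf, ks, w_, s = params
    m1, m2 = response(kf, t)[-1], response(ks, t)[-1]
    lik = w_ * norm.pdf(data, m1, s) + (1 - w_) * norm.pdf(data, m2, s)
    return -np.sum(np.log(lik + 1e-300))

fit = minimize(neg_log_lik, x0=[0.8, 0.3, 0.5, 0.1],
               bounds=[(0.01, 5), (0.01, 5), (0.01, 0.99), (0.01, 1)])
print(fit.x)   # recovered rates, mixture weight, noise level
```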
Fission barriers from multidimensionally-constrained covariant density functional theories
NASA Astrophysics Data System (ADS)
Lu, Bing-Nan; Zhao, Jie; Zhao, En-Guang; Zhou, Shan-Gui
2017-11-01
In recent years, we have developed the multidimensionally-constrained covariant density functional theories (MDC-CDFTs) in which both axial and spatial reflection symmetries are broken and all shape degrees of freedom described by βλμ with even μ, such as β20, β22, β30, β32, β40, etc., are included self-consistently. The MDC-CDFTs have been applied to the investigation of potential energy surfaces and fission barriers of actinide nuclei, third minima in potential energy surfaces of light actinides, shapes and potential energy surfaces of superheavy nuclei, octupole correlations between multiple chiral doublet bands in 78Br, octupole correlations in Ba isotopes, the Y32 correlations in N = 150 isotones and Zr isotopes, the spontaneous fission of Fm isotopes, and shapes of hypernuclei. In this contribution we present the formalism of MDC-CDFTs and the application of these theories to the study of fission barriers and potential energy surfaces of actinide nuclei.
NASA Astrophysics Data System (ADS)
Peel, Austin; Lin, Chieh-An; Lanusse, François; Leonard, Adrienne; Starck, Jean-Luc; Kilbinger, Martin
2017-03-01
Peak statistics in weak-lensing maps access the non-Gaussian information contained in the large-scale distribution of matter in the Universe. They are therefore a promising complementary probe to two-point and higher-order statistics to constrain our cosmological models. Next-generation galaxy surveys, with their advanced optics and large areas, will measure the cosmic weak-lensing signal with unprecedented precision. To prepare for these anticipated data sets, we assess the constraining power of peak counts in a simulated Euclid-like survey on the cosmological parameters Ωm, σ8, and w_0^de. In particular, we study how Camelus, a fast stochastic model for predicting peaks, can be applied to such large surveys. The algorithm avoids the need for time-costly N-body simulations, and its stochastic approach provides full PDF information of observables. Considering peaks with a signal-to-noise ratio ≥ 1, we measure the abundance histogram in a mock shear catalogue of approximately 5000 deg2 using a multiscale mass-map filtering technique. We constrain the parameters of the mock survey using Camelus combined with approximate Bayesian computation (ABC), a robust likelihood-free inference algorithm. Peak statistics yield a tight but significantly biased constraint in the σ8-Ωm plane, as measured by the width ΔΣ8 of the 1σ contour. We find Σ8 = σ8(Ωm/0.27)^α = 0.77^{+0.06}_{-0.05} with α = 0.75 for a flat ΛCDM model. The strong bias indicates the need to better understand and control the model systematics before applying it to a real survey of this size or larger. We perform a calibration of the model and compare results to those from the two-point correlation functions ξ± measured on the same field. We calibrate the ξ± result as well, since its contours are also biased, although not as severely as for peaks. In this case, we find for peaks Σ8 = 0.76^{+0.02}_{-0.03} with α = 0.65, while for the combined ξ+ and ξ- statistics the values are Σ8 = 0.76^{+0.02}_{-0.01} and α = 0.70. We conclude that the constraining power can therefore be comparable between the two weak-lensing observables in large-field surveys. Furthermore, the tilt in the σ8-Ωm degeneracy direction for peaks with respect to that of ξ± suggests that a combined analysis would yield tighter constraints than either measure alone. As expected, w_0^de cannot be well constrained without a tomographic analysis, but its degeneracy directions with the other two varied parameters are still clear for both peaks and ξ±.
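A generic rejection-sampling sketch of the approximate Bayesian computation used with Camelus, on a toy forward model; the real analysis compares peak-abundance histograms rather than the simple summary statistics used here:

```python
import numpy as np

rng = np.random.default_rng(8)

def simulate(theta, n=200):
    """Toy forward model standing in for a stochastic peak-count predictor."""
    return rng.normal(theta, 1.0, n)

obs = simulate(0.8)                       # 'observed' data
obs_stat = np.array([obs.mean(), obs.std()])

accepted = []
for _ in range(20000):                    # ABC rejection sampling
    theta = rng.uniform(-2, 2)            # draw from the prior
    sim = simulate(theta)
    stat = np.array([sim.mean(), sim.std()])
    if np.linalg.norm(stat - obs_stat) < 0.1:   # tolerance epsilon
        accepted.append(theta)
posterior = np.array(accepted)
print(posterior.mean(), posterior.std())
```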
Chartier, Sylvain; Giguère, Gyslain; Langlois, Dominic
2009-01-01
In this paper, we present a new recurrent bidirectional model that encompasses correlational, competitive, and topological model properties. The simultaneous use of many classes of network behaviors allows for the unsupervised learning/categorization of perceptual patterns (through input compression) and the concurrent encoding of proximities in a multidimensional space. All of these operations are achieved within a common learning operation, using a single set of defining properties. It is shown that the model can learn categories by developing prototype representations strictly from exposure to specific exemplars. Moreover, because the model is recurrent, it can reconstruct perfect outputs from incomplete and noisy patterns. Empirical exploration of the model's properties and performance shows that its ability to cluster adequately stems from: (1) properly distributing connection weights, and (2) producing a weight space with a low dispersion level (or higher density). In addition, since the model uses a sparse representation (k-winners), the size of the topological neighborhood can be fixed and no longer requires a decrease over time, as was the case with classic self-organizing feature maps. Since the model's learning and transmission parameters are independent of learning trials, the model can develop stable fixed points in a constrained topological architecture, while being flexible enough to learn novel patterns.
Optimal vibration control of a rotating plate with self-sensing active constrained layer damping
NASA Astrophysics Data System (ADS)
Xie, Zhengchao; Wong, Pak Kin; Lo, Kin Heng
2012-04-01
This paper proposes a finite element model for an optimally controlled, constrained-layer-damped (CLD) rotating plate with a self-sensing technique and frequency-dependent material properties, valid in both the time and frequency domains. Constrained layer damping with viscoelastic material can effectively reduce vibration in rotating structures. However, most existing research models use the complex-modulus approach to model the viscoelastic material, and an additional iterative approach, available only in the frequency domain, has to be used to include the material's frequency dependency. It is therefore meaningful to model the viscoelastic damping layer in the rotating part using anelastic displacement fields (ADF), which capture the frequency dependency in both the time and frequency domains. Also, unlike previous ones, this finite element model treats all three layers as having both shear and extension strains, so all types of damping are taken into account. Thus, in this work, a single-layer finite element is adopted to model a three-layer, active constrained-layer-damped rotating plate in which the constraining layer is made of piezoelectric material, acting as both self-sensing sensor and actuator under a linear quadratic regulator (LQR) controller. After comparison with verified data, this newly proposed finite element model is validated and can be used for future research.
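For the control step, a minimal sketch of computing an LQR gain for a generic state-space plant via the continuous algebraic Riccati equation; in the paper the system matrices would come from the finite element model of the rotating CLD plate, whereas A and B below are toy placeholders:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Generic plant x' = Ax + Bu (placeholder dynamics, not the FE model)
A = np.array([[0.0, 1.0], [-4.0, -0.2]])
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])                 # state weighting
R = np.array([[0.1]])                    # control-effort weighting

P = solve_continuous_are(A, B, Q, R)     # algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)          # optimal feedback u = -Kx
print(K)
```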
NASA Astrophysics Data System (ADS)
Milliner, C. W. D.; Dolan, J. F.; Hollingsworth, J.; Leprince, S.; Ayoub, F.
2014-12-01
Coseismic surface deformation is typically measured in the field by geologists and with a range of geophysical methods such as InSAR, LiDAR and GPS. Current methods, however, either fail to capture the near-field coseismic surface deformation pattern where vital information is needed, or lack pre-event data. We develop a standardized and reproducible methodology to fully constrain the surface, near-field, coseismic deformation pattern in high resolution using aerial photography. We apply our methodology using the program COSI-corr to successfully cross-correlate pairs of aerial, optical imagery before and after the 1992 Mw 7.3 Landers and 1999 Mw 7.1 Hector Mine earthquakes. This technique allows measurement of the coseismic slip distribution and the magnitude and width of off-fault deformation with sub-pixel precision. The technique can be applied in a cost-effective manner to recent and historic earthquakes using archive aerial imagery. We also use synthetic tests to constrain and correct for the bias imposed on the result by the use of a sliding window during correlation. Correcting for artificial smearing of the tectonic signal allows us to robustly measure the fault zone width along a surface rupture. Furthermore, the synthetic tests have constrained for the first time the measurement precision and accuracy of estimated fault displacements and fault-zone width. Our methodology provides the unique ability to robustly understand the kinematics of surface faulting while at the same time accounting for both off-fault deformation and the measurement biases that typically complicate such data. For both earthquakes we find that our displacement measurements derived from cross-correlation are systematically larger than the field displacement measurements, indicating the presence of off-fault deformation. We show that the Landers and Hector Mine earthquakes accommodated 46% and 38% of displacement away from the primary rupture as off-fault deformation, over mean deformation widths of 183 m and 133 m, respectively. We envisage that correlation results derived from our methodology will provide vital data on near-field deformation patterns and will be of significant use in constraining inversion solutions for fault slip at depth.
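The principle of sub-pixel image correlation can be sketched with scikit-image's phase correlation (COSI-corr itself is separate, dedicated software; the synthetic patch below stands in for pre- and post-event aerial images):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift as nd_shift
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(9)
pre = gaussian_filter(rng.standard_normal((256, 256)), sigma=3)
post = nd_shift(pre, (1.3, -0.7), order=3)      # known sub-pixel offset

# Sliding a window over patch pairs would map the displacement field;
# a single window suffices to illustrate the sub-pixel measurement.
shift, error, _ = phase_cross_correlation(pre, post, upsample_factor=100)
print(shift)    # ~(-1.3, 0.7): shift registering `post` back onto `pre`
```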
Impact of isoprene and HONO chemistry on ozone and OVOC formation in a semirural South Korean forest
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Saewung; Kim, So-Young; Lee, Meehye
Rapid urbanization and economic development in East Asia in past decades have led to photochemical air pollution problems such as excess photochemical ozone and aerosol formation. Asian megacities such as Seoul, Tokyo, Shanghai, Guangzhou, and Beijing are surrounded by densely forested areas, and recent research has consistently demonstrated the importance of biogenic volatile organic compounds from vegetation in determining oxidation capacity in suburban Asian megacity regions. Uncertainties in constraining tropospheric oxidation capacity, dominated by hydroxyl radical concentrations, undermine our ability to assess regional photochemical air pollution problems. We present an observational dataset of CO, NOx, SO2, ozone, HONO, and VOCs (anthropogenic and biogenic) from Taehwa Research Forest (TRF) near the Seoul Metropolitan Area (SMA) in early June 2012. The data show that TRF is influenced both by aged pollution and fresh BVOC emissions. With the dataset, we diagnose HOx (OH, HO2, and RO2) distributions calculated with the University of Washington Chemical Box Model (UWCM v2.1). Uncertainty from unconstrained HONO sources and radical recycling processes highlighted in recent studies is examined using multiple model simulations with different model constraints. The results suggest that 1) different model simulation scenarios cause systematic differences in HOx distributions, especially OH levels (up to 2.5 times), and 2) radical destruction (HO2+HO2 or HO2+RO2) could be more efficient than radical recycling (HO2+NO), especially in the afternoon. Implications of the uncertainties in radical chemistry are discussed with respect to ozone-VOC-NOx sensitivity and oxidation product formation rates. Overall, a VOC-limited regime in ozone photochemistry is predicted, but the degree of sensitivity can vary significantly depending on the model scenario. The model results also suggest that RO2 levels are positively correlated with production of OVOCs that are not routinely constrained by observations. These unconstrained OVOCs can cause higher than expected OH loss rates (missing OH reactivity) and secondary organic aerosol formation. The series of modeling experiments constrained by observations strongly urge observational constraint of the radical pool to enable precise understanding of regional photochemical pollution problems in the East Asian megacity region.
NASA Astrophysics Data System (ADS)
Bozek, Brandon
This dissertation describes three research projects on the topic of dark energy. The first project is an analysis of a scalar field model of dark energy with an exponential potential using the Dark Energy Task Force (DETF) simulated data models. Using Markov Chain Monte Carlo sampling techniques we examine the ability of each simulated data set to constrain the parameter space of the exponential potential for data sets based on a cosmological constant and a specific exponential scalar field model. We compare our results with the constraining power calculated by the DETF using their "w0-wa" parameterization of the dark energy. We find that the respective increases in constraining power from one stage to the next produced by our analysis give results consistent with DETF results. To further investigate the potential impact of future experiments, we also generate simulated data for an exponential model background cosmology which cannot be distinguished from a cosmological constant at DETF Stage 2, and show that for this cosmology good DETF Stage 4 data would exclude a cosmological constant by better than 3σ. The second project details this analysis on an Inverse Power Law (IPL) or "Ratra-Peebles" (RP) model. This model is a member of a popular subset of scalar field quintessence models that exhibit "tracking" behavior, which makes this model particularly theoretically interesting. We find that the relative increase in constraining power on the parameter space of this model is consistent with what was found in the first project and the DETF report. We also show, using a background cosmology based on an IPL scalar field model that is consistent with a cosmological constant with Stage 2 data, that good DETF Stage 4 data would exclude a cosmological constant by better than 3σ. The third project extends the Causal Entropic Principle to predict the preferred curvature within the "multiverse". The Causal Entropic Principle (Bousso et al.) provides an alternative approach to anthropic attempts to predict our observed value of the cosmological constant by calculating the entropy created within a causal diamond. We have found that values larger than ρ_k = 40ρ_m are disfavored by more than 99.99%, with a peak value at ρ_Λ = 7.9 × 10^-123 and ρ_k = 4.3ρ_m for open universes. For universes that allow only positive curvature or both positive and negative curvature, we find a correlation between curvature and dark energy that leads to an extended region of preferred values. Our universe is found to be disfavored to an extent depending on the priors on curvature. We also provide a comparison to previous anthropic constraints on open universes and discuss future directions for this work.
NASA Astrophysics Data System (ADS)
Johnston, Sarah Ellen; Shorina, Natalia; Bulygina, Ekaterina; Vorobjeva, Taisya; Chupakova, Anna; Klimov, Sergey I.; Kellerman, Anne M.; Guillemette, Francois; Shiklomanov, Alexander; Podgorski, David C.; Spencer, Robert G. M.
2018-03-01
Pan-Arctic riverine dissolved organic carbon (DOC) fluxes represent a major transfer of carbon from land to ocean, and past scaling estimates have been predominantly derived from the six major Arctic rivers. However, smaller watersheds are constrained to northern high-latitude regions and, particularly with respect to the Eurasian Arctic, have received little attention. In this study, we evaluated the concentration of DOC and composition of dissolved organic matter (DOM) via optical parameters, biomarkers (lignin phenols), and ultrahigh resolution mass spectrometry in the Northern Dvina River (a midsized high-latitude constrained river). Elevated DOC, lignin concentrations, and aromatic DOM indicators were observed throughout the year in comparison to the major Arctic rivers, with seasonality exhibiting a clear spring freshet and, in some years, a secondary pulse in the autumn concurrent with the onset of freezing. Chromophoric DOM absorbance at a350 was strongly correlated to DOC and lignin across the hydrograph; however, the relationships did not fit previous models derived from the six major Arctic rivers. Updated DOC and lignin fluxes were derived for the pan-Arctic watershed by scaling from the Northern Dvina, resulting in increased DOC and lignin fluxes (50 Tg yr^-1 and 216 Gg yr^-1, respectively) compared to past estimates. This leads to a reduction in the residence time for terrestrial carbon in the Arctic Ocean (0.5 to 1.8 years). These findings suggest that constrained northern high-latitude rivers are underrepresented in flux models based on the six largest Arctic rivers, with important ramifications for the export and fate of terrestrial carbon in the Arctic Ocean.
Deep 3 GHz number counts from a P(D) fluctuation analysis
NASA Astrophysics Data System (ADS)
Vernstrom, T.; Scott, Douglas; Wall, J. V.; Condon, J. J.; Cotton, W. D.; Fomalont, E. B.; Kellermann, K. I.; Miller, N.; Perley, R. A.
2014-05-01
Radio source counts constrain galaxy populations and evolution, as well as the global star formation history. However, there is considerable disagreement among the published 1.4-GHz source counts below 100 μJy. Here, we present a statistical method for estimating the μJy and even sub-μJy source count using new deep wide-band 3-GHz data in the Lockman Hole from the Karl G. Jansky Very Large Array. We analysed the confusion amplitude distribution P(D), which provides a fresh approach in the form of a more robust model, with a comprehensive error analysis. We tested this method on a large-scale simulation, incorporating clustering and finite source sizes. We discuss in detail our statistical methods for fitting using Markov chain Monte Carlo, handling correlations, and systematic errors from the use of wide-band radio interferometric data. We demonstrated that the source count can be constrained down to 50 nJy, a factor of 20 below the rms confusion. We found the differential source count near 10 μJy to have a slope of -1.7, decreasing to about -1.4 at fainter flux densities. At 3 GHz, the rms confusion in an 8-arcsec full width at half-maximum beam is ~1.2 μJy beam^-1, and the radio background temperature is ~14 mK. Our counts are broadly consistent with published evolutionary models. With these results, we were also able to constrain the peak of the Euclidean normalized differential source count of any possible new radio populations that would contribute to the cosmic radio background down to 50 nJy.
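A toy illustration of the P(D) idea: simulate a confusion-limited map by injecting sources drawn from an assumed power-law count and histogram the pixel values; fitting such histograms against counts models is the essence of the analysis (the real work includes the beam shape, wide-band effects, and an MCMC fit):

```python
import numpy as np

rng = np.random.default_rng(14)

npix = 512 * 512
gamma, s_min, s_max = 1.7, 5e-8, 1e-4           # dN/dS ~ S^-gamma, fluxes in Jy
n_src = 2_000_000

# Inverse-CDF sampling of fluxes from the truncated power law
a = 1.0 - gamma
u = rng.random(n_src)
S = (s_min**a + u * (s_max**a - s_min**a)) ** (1.0 / a)

mapv = np.zeros(npix)
pix = rng.integers(0, npix, n_src)
np.add.at(mapv, pix, S)                          # delta-function beam for brevity
mapv += rng.normal(0, 1e-6, npix)                # instrument noise
pd_hist, edges = np.histogram(mapv, bins=200)    # the P(D) distribution
```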
Order-constrained linear optimization.
Tidwell, Joe W; Dougherty, Michael R; Chrabaszcz, Jeffrey S; Thomas, Rick P
2017-11-01
Despite the fact that data and theories in the social, behavioural, and health sciences are often represented on an ordinal scale, there has been relatively little emphasis on modelling ordinal properties. The most common analytic framework used in psychological science is the general linear model, whose variants include ANOVA, MANOVA, and ordinary linear regression. While these methods are designed to provide the best fit to the metric properties of the data, they are not designed to maximally model ordinal properties. In this paper, we develop an order-constrained linear least-squares (OCLO) optimization algorithm that maximizes the linear least-squares fit to the data conditional on maximizing the ordinal fit based on Kendall's τ. The algorithm builds on the maximum rank correlation estimator (Han, 1987, Journal of Econometrics, 35, 303) and the general monotone model (Dougherty & Thomas, 2012, Psychological Review, 119, 321). Analyses of simulated data indicate that when modelling data that adhere to the assumptions of ordinary least squares, OCLO shows minimal bias, little increase in variance, and almost no loss in out-of-sample predictive accuracy. In contrast, under conditions in which data include a small number of extreme scores (fat-tailed distributions), OCLO shows less bias and variance, and substantially better out-of-sample predictive accuracy, even when the outliers are removed. We show that the advantages of OCLO over ordinary least squares in predicting new observations hold across a variety of scenarios in which researchers must decide to retain or eliminate extreme scores when fitting data.
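A simplified two-stage stand-in for the idea (not the published OCLO algorithm): pick the coefficient direction that maximizes Kendall's τ between the linear predictor and the response, then fit slope and intercept by least squares conditional on that direction:

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(10)
n = 200
X = rng.standard_normal((n, 2))
y = X @ np.array([2.0, -1.0]) + rng.standard_t(df=2, size=n)   # fat-tailed noise

# Stage 1: coefficient direction maximizing the ordinal fit (Kendall's tau
# is invariant to positive rescaling, so only the direction matters).
angles = np.linspace(-np.pi, np.pi, 721)
taus = [kendalltau(X @ np.array([np.cos(a), np.sin(a)]), y).correlation
        for a in angles]
a_best = angles[int(np.argmax(taus))]
direction = np.array([np.cos(a_best), np.sin(a_best)])

# Stage 2: least-squares slope and intercept conditional on that direction.
z = X @ direction
slope, intercept = np.polyfit(z, y, 1)
print(direction, slope, intercept)
```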
Killeen, Peter R.; Sitomer, Matthew T.
2008-01-01
Mathematical Principles of Reinforcement (MPR) is a theory of reinforcement schedules. This paper reviews the origin of the principles constituting MPR: arousal, association and constraint. Incentives invigorate responses, in particular those preceding and predicting the incentive. The process that generates an associative bond between stimuli, responses and incentives is called coupling. The combination of arousal and coupling constitutes reinforcement. Models of coupling play a central role in the evolution of the theory. The time required to respond constrains the maximum response rates, and generates a hyperbolic relation between rate of responding and rate of reinforcement. Models of control by ratio schedules are developed to illustrate the interaction of the principles. Correlations among parameters are incorporated into the structure of the models, and assumptions that were made in the original theory are refined in light of current data. PMID:12729968
Phi Index: A New Metric to Test the Flush Early and Avoid the Rush Hypothesis
Samia, Diogo S. M.; Blumstein, Daniel T.
2014-01-01
Optimal escape theory states that animals should counterbalance the costs and benefits of flight when escaping from a potential predator. However, in apparent contradiction with this well-established optimality model, birds and mammals generally initiate escape soon after beginning to monitor an approaching threat, a phenomena codified as the “Flush Early and Avoid the Rush” (FEAR) hypothesis. Typically, the FEAR hypothesis is tested using correlational statistics and is supported when there is a strong relationship between the distance at which an individual first responds behaviorally to an approaching predator (alert distance, AD), and its flight initiation distance (the distance at which it flees the approaching predator, FID). However, such correlational statistics are both inadequate to analyze relationships constrained by an envelope (such as that in the AD-FID relationship) and are sensitive to outliers with high leverage, which can lead one to erroneous conclusions. To overcome these statistical concerns we develop the phi index (Φ), a distribution-free metric to evaluate the goodness of fit of a 1∶1 relationship in a constraint envelope (the prediction of the FEAR hypothesis). Using both simulation and empirical data, we conclude that Φ is superior to traditional correlational analyses because it explicitly tests the FEAR prediction, is robust to outliers, and it controls for the disproportionate influence of observations from large predictor values (caused by the constrained envelope in AD-FID relationship). Importantly, by analyzing the empirical data we corroborate the strong effect that alertness has on flight as stated by the FEAR hypothesis. PMID:25405872
Species climate range influences hydraulic and stomatal traits in Eucalyptus species.
Bourne, Aimee E; Creek, Danielle; Peters, Jennifer M R; Ellsworth, David S; Choat, Brendan
2017-07-01
Plant hydraulic traits influence the capacity of species to grow and survive in water-limited environments, but their comparative study at a common site has been limited. The primary aim of this study was to determine whether selective pressures on species originating in drought-prone environments constrain hydraulic traits among related species grown under common conditions. Leaf tissue water relations, xylem anatomy, stomatal behaviour and vulnerability to drought-induced embolism were measured on six Eucalyptus species growing in a common garden to determine whether these traits were related to current species climate range and to understand linkages between the traits. Hydraulically weighted xylem vessel diameter, leaf turgor loss point, the water potential at stomatal closure and vulnerability to drought-induced embolism were significantly (P < 0.05) correlated with climate parameters from the species range. There was a co-ordination between stem and leaf parameters, with the water potential at turgor loss, 12 % loss of conductivity and the point of stomatal closure significantly correlated. The correlation of hydraulic, stomatal and anatomical traits with climate variables from the species' original ranges suggests that these traits are genetically constrained. The conservative nature of xylem traits in Eucalyptus trees has important implications for the limits of species responses to changing environmental conditions and thus for species survival and distribution into the future, and yields new information for physiological models.
NASA Astrophysics Data System (ADS)
Hincks, Adam D.; Hajian, Amir; Addison, Graeme E.
2013-05-01
We cross-correlate the 100 μm Improved Reprocessing of the IRAS Survey (IRIS) map and galaxy clusters at 0.1 < z < 0.3 in the maxBCG catalogue taken from the Sloan Digital Sky Survey, measuring an angular cross-power spectrum over multipole moments 150 < l < 3000 at a total significance of over 40σ. The cross-spectrum, which arises from the spatial correlation between unresolved dusty galaxies that make up the cosmic infrared background (CIB) in the IRIS map and the galaxy clusters, is well-fit by a single power law with an index of -1.28±0.12, similar to the clustering of unresolved galaxies from cross-correlating far-infrared and submillimetre maps at longer wavelengths. Using a recent, phenomenological model for the spectral and clustering properties of the IRIS galaxies, we constrain the large-scale bias of the maxBCG clusters to be 2.6±1.4, consistent with existing analyses of the real-space cluster correlation function. The success of our method suggests that future CIB-optical cross-correlations using Planck and Herschel data will significantly improve our understanding of the clustering and redshift distribution of the faint CIB sources.
ERIC Educational Resources Information Center
Hoijtink, Herbert; Molenaar, Ivo W.
1997-01-01
This paper shows that a certain class of constrained latent class models may be interpreted as a special case of nonparametric multidimensional item response models. Parameters of this latent class model are estimated using an application of the Gibbs sampler, and model fit is investigated using posterior predictive checks. (SLD)
Configuration-constrained cranking Hartree-Fock pairing calculations for sidebands of nuclei
NASA Astrophysics Data System (ADS)
Liang, W. Y.; Jiao, C. F.; Wu, Q.; Fu, X. M.; Xu, F. R.
2015-12-01
Background: Nuclear collective rotations have been successfully described by the cranking Hartree-Fock-Bogoliubov (HFB) model. However, for rotational sidebands, which are built on intrinsic excited configurations, it may not be easy to find converged cranking HFB solutions. The nonconservation of particle number in the BCS pairing is another shortcoming. To improve the pairing treatment, a particle-number-conserving (PNC) pairing method was suggested. But existing PNC calculations were performed within a phenomenological one-body potential (e.g., Nilsson or Woods-Saxon) in which one has to deal with the double-counting problem. Purpose: The present work aims at an improved description of nuclear rotations, particularly for the rotations of excited configurations, i.e., sidebands. Methods: We developed a configuration-constrained cranking Skyrme Hartree-Fock (SHF) calculation with the pairing correlation treated by the PNC method. The PNC pairing takes the philosophy of the shell model, which diagonalizes the Hamiltonian in a truncated model space. The cranked deformed SHF basis provides a small but efficient model space for the PNC diagonalization. Results: We have applied the present method to calculations of collective rotations of hafnium isotopes for both ground-state bands and sidebands, reproducing well the experimental observations. The first up-bendings observed in the yrast bands of the hafnium isotopes are reproduced, and second up-bendings are predicted. Calculations for rotational bands built on broken-pair excited configurations agree well with experimental data. The band-mixing between the two Kπ=6+ bands observed in 176Hf and the K purity of the 178Hf rotational state built on the famous 31-yr Kπ=16+ isomer are discussed. Conclusions: The developed configuration-constrained cranking calculation has proved to be a powerful tool for describing both the yrast bands and sidebands of deformed nuclei. Analyses of rotational moments of inertia help in understanding the structure of nuclei, including rotational alignments, configurations, and the competition between collective and single-particle excitations.
Ground-state properties of rare-earth metals: an evaluation of density-functional theory.
Söderlind, Per; Turchi, P E A; Landa, A; Lordi, V
2014-10-15
The rare-earth metals have important technological applications due to their magnetic properties, but are scarce and expensive. Development of high-performance magnetic materials with less rare-earth content is therefore desired, but theoretical modeling is hampered by the complexities of the rare-earth electronic structure. The existence of correlated (atomic-like) 4f electrons in the vicinity of the valence band makes any first-principles theory challenging. Here, we apply and evaluate the efficacy of density-functional theory for the series of lanthanides (rare earths), investigating the influence of the electron exchange and correlation functional, spin-orbit interaction, and orbital polarization. As a reference, the results are compared with those of the so-called 'standard model' of the lanthanides, in which electrons are constrained to occupy 4f core states with no hybridization with the valence electrons. Some comparisons are also made with models designed for strong electron correlations. Our results suggest that spin-orbit coupling and orbital polarization are important, particularly for the magnitude of the magnetic moments, and that calculated equilibrium volumes, bulk moduli, and magnetic moments show correct trends overall. However, the precision of the calculated properties is not at the level found for simpler metals in the Periodic Table of Elements, and the electronic structures do not accurately reproduce x-ray photoemission spectra.
Comparison of Ab initio Low-Energy Models for LaFePO, LaFeAsO, BaFe2As2, LiFeAs, FeSe, and FeTe
NASA Astrophysics Data System (ADS)
Nakamura, Kazuma; Miyake, Takashi; Arita, Ryotaro; Imada, Masatoshi
2010-03-01
We present effective low-energy models for LaFePO and LaFeAsO (1111 family), BaFe2As2 (122), LiFeAs (111), and FeSe and FeTe (11) [1], based on an ab initio downfolding scheme: a constrained random-phase-approximation method combined with maximally localized Wannier functions. Comparison among the effective models, derived for the 5 Fe-3d bands, provides a basis for interpreting the physics and chemistry: the material dependence of electron correlations, a multiband character entangled by the 3d orbitals, and the geometrical frustration depending on hybridizations between iron and pnictogen/chalcogen orbitals. We find that LaFePO in the 1111 family resides in the weak-correlation regime, while LaFeAsO and the 111/122 compounds lie in the intermediate region and FeSe and FeTe in the 11 family are located in the strong-correlation regime. A principal parameter relevant to the physics is identified as the pnictogen/chalcogen height from the iron layer. Implications for low-energy properties including magnetism and superconductivity are discussed. [1] T. Miyake, K. Nakamura, R. Arita, and M. Imada, arXiv:0911.3705.
The data-driven null models for information dissemination tree in social networks
NASA Astrophysics Data System (ADS)
Zhang, Zhiwei; Wang, Zhenyu
2017-10-01
To detect relatedness and co-occurrence between users, as well as the distribution features of nodes along spreading paths in a social network, this paper explores topological characteristics of information dissemination trees (IDT), which can be employed indirectly to probe the laws of information dissemination within social networks. Three different null models of IDT are presented in this article: the statistical-constrained 0-order IDT null model, the random-rewire-broken-edge 0-order IDT null model and the random-rewire-broken-edge 2-order IDT null model. These null models first generate the corresponding randomized copy of an actual IDT; then the extended significance profile, developed by adding the cascade ratio of the information dissemination path, is exploited not only to evaluate the degree correlation of the two nodes associated with an edge, but also to assess the cascade ratio of information dissemination paths of different lengths. The experimental results of the empirical analysis for several SinaWeibo IDTs and Twitter IDTs indicate that the IDT null models presented in this paper perform well in terms of node degree correlation and dissemination path cascade ratio, and can better reveal the features of information dissemination and fit the behavior of real social networks.
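A minimal sketch of the "random-rewire-broken-edge" step: degree-preserving double-edge swaps generate the randomized copy of an actual IDT. This pure-Python version treats edges as directed parent-to-child links and is an illustration of the general rewiring idea, not the paper's exact null-model construction.

```python
# Degree-preserving rewiring: pick edges (a,b),(c,d), replace with (a,d),(c,b).
import random

def rewire_null_model(edges, n_swaps, seed=0):
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    edge_set = set(edges)
    done = 0
    while done < n_swaps:
        (a, b), (c, d) = rng.sample(edges, 2)
        if len({a, b, c, d}) < 4:
            continue                              # avoid self-loops
        if (a, d) in edge_set or (c, b) in edge_set:
            continue                              # avoid duplicate edges
        i, j = edges.index((a, b)), edges.index((c, d))
        edge_set -= {(a, b), (c, d)}
        edges[i], edges[j] = (a, d), (c, b)       # swap endpoints
        edge_set |= {(a, d), (c, b)}
        done += 1
    return edges

tree = [(0, 1), (0, 2), (1, 3), (1, 4), (2, 5)]   # toy dissemination tree
print(rewire_null_model(tree, 10))
```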
The colour-magnitude relation as a constraint on the formation of rich cluster galaxies
NASA Astrophysics Data System (ADS)
Bower, Richard G.; Kodama, Tadayuki; Terlevich, Ale
1998-10-01
The colours and magnitudes of early-type galaxies in galaxy clusters are strongly correlated. The existence of such a correlation has been used to infer that early-type galaxies must be old passively evolving systems. Given the dominance of early-type galaxies in the cores of rich clusters, this view sits uncomfortably with the increasing fraction of blue galaxies found in clusters at intermediate redshifts, and with the late formation of galaxies favoured by cold dark matter type cosmologies. In this paper, we make a detailed investigation of these issues and examine the role that the colour-magnitude relation can play in constraining the formation history of galaxies currently found in the cores of rich clusters. We start by considering the colour evolution of galaxies after star formation ceases. We show that the scatter of the colour-magnitude relation places a strong constraint on the spread in age that is allowed for the bulk of the stellar population. In the extreme case that the stars are formed in a single event, the spread in age cannot be more than 4 Gyr. Although the bulk of stars must be formed in a short period, continuing formation of stars in a fraction of the galaxies is not so strongly constrained. We examine a model in which star formation occurs over an extended period of time in most galaxies with star formation being truncated randomly. This model is consistent with the formation of stars in a few systems until look-back times of ~5 Gyr. An extension of this type of star formation history allows us to reconcile the small present-day scatter of the colour-magnitude relation with the observed blue galaxy fractions of intermediate redshift galaxy clusters. In addition to setting a limit on the variations in luminosity-weighted age between the stellar populations of cluster galaxies, the colour-magnitude relation can also be used to constrain the degree of merging between pre-existing stellar systems. This test relies on the slope of the colour-magnitude relation: mergers between galaxies of unequal mass tend to reduce the slope of the relation and to increase its scatter. We show that random mergers between galaxies very rapidly remove any well-defined colour-magnitude correlation. This model is not physically motivated, however, and we prefer to examine the merger process using a self-consistent merger tree. In such a model there are two effects. First, massive galaxies preferentially merge with systems of similar mass. Second, the rate of mass growth is considerably smaller than for the random merger case. As a result of both of these effects, the colour-magnitude correlation persists through a larger number of merger steps. The passive evolution of galaxy colours and their averaging in dissipationless mergers provide opposing constraints on the formation of cluster galaxies in a hierarchical model. At the level of current constraints, a compromise solution appears possible. The bulk of the stellar population must have formed before z=1, but cannot have formed in mass units much less than about half the mass of a present-day L_* galaxy. In this case, the galaxies are on average old enough that stellar population evolution is weak, yet formed recently enough that mass growth resulting from mergers is small.
POLARBEAR constraints on cosmic birefringence and primordial magnetic fields
Ade, Peter A. R.; Arnold, Kam; Atlas, Matt; ...
2015-12-08
Here, we constrain anisotropic cosmic birefringence using four-point correlations of even-parity E-mode and odd-parity B-mode polarization in the cosmic microwave background measurements made by the POLARization of the Background Radiation (POLARBEAR) experiment in its first season of observations. We find that the anisotropic cosmic birefringence signal from any parity-violating processes is consistent with zero. The Faraday rotation from anisotropic cosmic birefringence can be compared with the equivalent quantity generated by primordial magnetic fields if they existed. The POLARBEAR nondetection translates into a 95% confidence level (C.L.) upper limit of 93 nanogauss (nG) on the amplitude of an equivalent primordial magnetic field inclusive of systematic uncertainties. This four-point correlation constraint on Faraday rotation is about 15 times tighter than the upper limit of 1380 nG inferred from constraining the contribution of Faraday rotation to two-point correlations of B-modes measured by Planck in 2015. Metric perturbations sourced by primordial magnetic fields would also contribute to the B-mode power spectrum. Using the POLARBEAR measurements of the B-mode power spectrum (two-point correlation), we set a 95% C.L. upper limit of 3.9 nG on primordial magnetic fields assuming a flat prior on the field amplitude. This limit is comparable to what was found in the Planck 2015 two-point correlation analysis with both temperature and polarization. Finally, we perform a set of systematic error tests and find no evidence for contamination. This work marks the first time that anisotropic cosmic birefringence or primordial magnetic fields have been constrained from the ground at subdegree scales.
Natural selection and inheritance of breeding time and clutch size in the collared flycatcher.
Sheldon, B C; Kruuk, L E B; Merilä, J
2003-02-01
Many characteristics of organisms in free-living populations appear to be under directional selection, possess additive genetic variance, and yet show no evolutionary response to selection. Avian breeding time and clutch size are often-cited examples of such characters. We report analyses of inheritance of, and selection on, these traits in a long-term study of a wild population of the collared flycatcher Ficedula albicollis. We used mixed model analysis with REML estimation ("animal models") to make full use of the information in complex multigenerational pedigrees. Heritability of laying date, but not clutch size, was lower than that estimated previously using parent-offspring regressions, although for both traits there was evidence of substantial additive genetic variance (h2 = 0.19 and 0.29, respectively). Laying date and clutch size were negatively genetically correlated (rA = -0.41 +/- 0.09), implying that selection on one of the traits would cause a correlated response in the other, but there was little evidence to suggest that evolution of either trait would be constrained by correlations with other phenotypic characters. Analysis of selection on these traits in females revealed consistent strong directional fecundity selection for earlier breeding at the level of the phenotype (beta = -0.28 +/- 0.03), but little evidence for stabilising selection on breeding time. We found no evidence that clutch size was independently under selection. Analysis of fecundity selection on breeding values for laying date, estimated from an animal model, indicated that selection acts directly on additive genetic variance underlying breeding time (beta = -0.20 +/- 0.04), but not on clutch size (beta = 0.03 +/- 0.05). In contrast, selection on laying date via adult female survival fluctuated in sign between years, and was opposite in sign for selection on phenotypes (negative) and breeding values (positive). Our data thus suggest that any evolutionary response to selection on laying date is partially constrained by underlying life-history trade-offs, and illustrate the difficulties in using purely phenotypic measures and incomplete fitness estimates to assess evolution of life-history trade-offs. We discuss some of the difficulties associated with understanding the evolution of laying date and clutch size in natural populations.
NASA Astrophysics Data System (ADS)
de Wit, Ralph W. L.; Valentine, Andrew P.; Trampert, Jeannot
2013-10-01
How do body-wave traveltimes constrain the Earth's radial (1-D) seismic structure? Existing 1-D seismological models underpin 3-D seismic tomography and earthquake location algorithms. It is therefore crucial to assess the quality of such 1-D models, yet quantifying uncertainties in seismological models is challenging and thus often ignored. Ideally, quality assessment should be an integral part of the inverse method. Our aim in this study is twofold: (i) we show how to solve a general Bayesian non-linear inverse problem and quantify model uncertainties, and (ii) we investigate the constraint on spherically symmetric P-wave velocity (VP) structure provided by body-wave traveltimes from the EHB bulletin (phases Pn, P, PP and PKP). Our approach is based on artificial neural networks, which are very common in pattern recognition problems and can be used to approximate an arbitrary function. We use a Mixture Density Network to obtain 1-D marginal posterior probability density functions (pdfs), which provide a quantitative description of our knowledge on the individual Earth parameters. No linearization or model damping is required, which allows us to infer a model which is constrained purely by the data. We present 1-D marginal posterior pdfs for the 22 VP parameters and seven discontinuity depths in our model. P-wave velocities in the inner core, outer core and lower mantle are resolved well, with standard deviations of ~0.2 to 1 per cent with respect to the mean of the posterior pdfs. The maximum likelihoods of VP are in general similar to the corresponding ak135 values, which lie within one or two standard deviations from the posterior means, thus providing an independent validation of ak135 in this part of the radial model. Conversely, the data contain little or no information on P-wave velocity in the D'' layer, the upper mantle and the homogeneous crustal layers. Further, the data do not constrain the depth of the discontinuities in our model. Using additional phases available in the ISC bulletin, such as PcP, PKKP and the converted phases SP and ScP, may enhance the resolvability of these parameters. Finally, we show how the method can be extended to obtain a posterior pdf for a multidimensional model space. This enables us to investigate correlations between model parameters.
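The Mixture Density Network's output stage parameterizes each 1-D marginal posterior as a Gaussian mixture, so evaluating the pdf for one Earth parameter is straightforward. In the sketch below the weights, means, and widths are placeholder values standing in for trained network outputs for one VP parameter.

```python
# Evaluate a 1-D marginal posterior pdf as a Gaussian mixture (MDN output form).
import numpy as np

def mdn_marginal_pdf(x, weights, means, sigmas):
    """Evaluate sum_k w_k * N(x; mu_k, sigma_k^2) on a grid x."""
    x = np.asarray(x)[:, None]
    comps = np.exp(-0.5 * ((x - means) / sigmas) ** 2) / (sigmas * np.sqrt(2 * np.pi))
    return comps @ weights

# hypothetical mixture parameters for one V_P value (km/s)
w = np.array([0.6, 0.3, 0.1])
mu = np.array([8.04, 8.07, 8.12])
sig = np.array([0.02, 0.03, 0.05])
grid = np.linspace(7.9, 8.3, 400)
pdf = mdn_marginal_pdf(grid, w, mu, sig)
print("posterior mean:", np.trapz(grid * pdf, grid))
```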
Dark-matter decay as a complementary probe of multicomponent dark sectors.
Dienes, Keith R; Kumar, Jason; Thomas, Brooks; Yaylali, David
2015-02-06
In single-component theories of dark matter, the 2→2 amplitudes for dark-matter production, annihilation, and scattering can be related to each other through various crossing symmetries. The detection techniques based on these processes are thus complementary. However, multicomponent theories exhibit an additional direction for dark-matter complementarity: the possibility of dark-matter decay from heavier to lighter components. We discuss how this new detection channel may be correlated with the others, and demonstrate that the enhanced complementarity which emerges can be an important ingredient in probing and constraining the parameter spaces of such models.
CMB-galaxy correlation in Unified Dark Matter scalar field cosmologies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bertacca, Daniele; Bartolo, Nicola; Matarrese, Sabino
We present an analysis of the cross-correlation between the CMB and the large-scale structure (LSS) of the Universe in Unified Dark Matter (UDM) scalar field cosmologies. We work out the predicted cross-correlation function in UDM models, which depends on the speed of sound of the unified component, and compare it with observations from six galaxy catalogues (NVSS, HEAO, 2MASS, and SDSS main galaxies, luminous red galaxies, and quasars). We sample the value of the speed of sound and perform a likelihood analysis, finding that the UDM model is as likely as the ΛCDM, and is compatible with observations for a range of values of c∞ (the value of the sound speed at late times) on which structure formation depends. In particular, we obtain an upper bound of c∞² ≤ 0.009 at the 95% confidence level, meaning that the ΛCDM model, for which c∞² = 0, is a good fit to the data, while the posterior probability distribution peaks at c∞² = 10⁻⁴. Finally, we study the time dependence of the deviation from ΛCDM via a tomographic analysis using a mock redshift distribution and we find that the largest deviation is for low-redshift sources, suggesting that future low-z surveys will be best suited to constrain UDM models.
ERIC Educational Resources Information Center
Mare, Robert D.; Mason, William M.
An important class of applications of measurement error or constrained factor analytic models consists of comparing models for several populations. In such cases, it is appropriate to make explicit statistical tests of model similarity across groups and to constrain some parameters of the models to be equal across groups using a priori substantive…
Order-Constrained Bayes Inference for Dichotomous Models of Unidimensional Nonparametric IRT
ERIC Educational Resources Information Center
Karabatsos, George; Sheu, Ching-Fan
2004-01-01
This study introduces an order-constrained Bayes inference framework useful for analyzing data containing dichotomous scored item responses, under the assumptions of either the monotone homogeneity model or the double monotonicity model of nonparametric item response theory (NIRT). The framework involves the implementation of Gibbs sampling to…
Stretched hydrogen molecule from a constrained-search density-functional perspective
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valone, Steven M; Levy, Mel
2009-01-01
Constrained-search density functional theory gives valuable insights into the fundamentals of density functional theory. It provides exact results and bounds on the ground- and excited-state density functionals. An important advantage of the theory is that it gives guidance in the construction of functionals. Here we engage constrained-search theory to explore issues associated with the functional behavior of 'stretched bonds' in molecular hydrogen. A constrained search is performed with familiar valence bond wavefunctions ordinarily used to describe molecular hydrogen. The effective one-electron hamiltonian is computed and compared to the corresponding uncorrelated, Hartree-Fock effective hamiltonian. Analysis of the functional suggests the need to construct different functionals for the same density and to allow a competition among these functionals. As a result, the correlation energy functional is composed explicitly of energy gaps from the different functionals.
NASA Astrophysics Data System (ADS)
McClarty, P. A.; O'Brien, A.; Pollmann, F.
2014-05-01
We consider a classical model of charges ±q on a pyrochlore lattice in the presence of long-range Coulomb interactions. This model first appeared in the early literature on charge order in magnetite [P. W. Anderson, Phys. Rev. 102, 1008 (1956), 10.1103/PhysRev.102.1008]. In the limit where the interactions become short ranged, the model has a ground state with an extensive entropy and dipolar charge-charge correlations. When long-range interactions are introduced, the exact degeneracy is broken. We study the thermodynamics of the model and show the presence of a correlated charge liquid within a temperature window in which the physics is well described as a liquid of screened charged defects. The structure factor in this phase, which has smeared pinch points at the reciprocal lattice points, may be used to detect charge ice experimentally. In addition, the model exhibits fractionally charged excitations ±q/2 which are shown to interact via a 1/r potential. At lower temperatures, the model exhibits a transition to a long-range ordered phase. We are able to treat the Coulombic charge ice model and the dipolar spin ice model on an equal footing by mapping both to a constrained charge model on the diamond lattice. We find that states of the two ice models are related by a staggering field which is reflected in the energetics of these two models. From this perspective, we can understand the origin of the spin ice and charge ice ground states as coming from a dipolar model on a diamond lattice. We study the properties of charge ice in an external electric field, finding that the correlated liquid is robust to the presence of a field in contrast to the case of spin ice in a magnetic field. Finally, we comment on the transport properties of Coulombic charge ice in the correlated liquid phase.
NASA Astrophysics Data System (ADS)
DeGregorio, P.; Lawlor, A.; Dawson, K. A.
2006-04-01
We introduce a new method to describe systems in the vicinity of dynamical arrest. This involves a map that transforms mobile systems at one length scale to mobile systems at a longer length. This map is capable of capturing the singular behavior accrued across very large length scales, and provides a direct route to the dynamical correlation length and other related quantities. The ideas are immediately applicable in two spatial dimensions, and have been applied to a modified Kob-Andersen type model. For such systems the map may be derived in an exact form, and readily solved numerically. We obtain the asymptotic behavior across the whole physical domain of interest in dynamical arrest.
Structured Kernel Dictionary Learning with Correlation Constraint for Object Recognition.
Wang, Zhengjue; Wang, Yinghua; Liu, Hongwei; Zhang, Hao
2017-06-21
In this paper, we propose a new discriminative non-linear dictionary learning approach, called correlation constrained structured kernel KSVD, for object recognition. The objective function for dictionary learning contains a reconstructive term and a discriminative term. In the reconstructive term, signals are implicitly non-linearly mapped into a space where a structured kernel dictionary, each sub-dictionary of which lies in the span of the mapped signals from the corresponding class, is established. In the discriminative term, by analyzing the classification mechanism, the correlation constraint is proposed in kernel form, constraining the correlations between different discriminative codes and restricting the coefficient vectors to be transformed into a feature space where the features are highly correlated within classes and nearly independent between classes. The objective function is optimized by the proposed structured kernel KSVD. During the classification stage, the specific form of the discriminative feature need not be known; only its inner products, with the kernel matrix embedded, are required, and these are suitable for a linear SVM classifier. Experimental results demonstrate that the proposed approach outperforms many state-of-the-art dictionary learning approaches for face, scene and synthetic aperture radar (SAR) vehicle target recognition.
Joint reconstruction of multiview compressed images.
Thirumalai, Vijayaraghavan; Frossard, Pascal
2013-05-01
Distributed representation of correlated multiview images is an important problem that arises in vision sensor networks. This paper concentrates on the joint reconstruction problem, where the distributively compressed images are decoded together in order to benefit from the image correlation. We consider a scenario where the images captured at different viewpoints are encoded independently using common coding solutions (e.g., JPEG) with a balanced rate distribution among different cameras. A central decoder first estimates the inter-view image correlation from the independently compressed data. The joint reconstruction is then cast as a constrained convex optimization problem that reconstructs total-variation (TV) smooth images which comply with the estimated correlation model. At the same time, we add constraints that force the reconstructed images to be as close as possible to their compressed versions. We show through experiments that the proposed joint reconstruction scheme outperforms independent reconstruction in terms of image quality, for a given target bit rate. In addition, the decoding performance of our algorithm compares advantageously to state-of-the-art distributed coding schemes based on motion learning and on the DISCOVER algorithm.
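Stripped of the inter-view correlation model, the core of the reconstruction is a TV-smooth estimate constrained to stay near the compressed observation. A minimal sketch, assuming a smoothed-TV objective minimized by plain gradient descent rather than the authors' constrained convex solver:

```python
# Gradient descent on ||grad u||_eps + (lam/2)||u - u_jpeg||^2 (smoothed TV).
import numpy as np

def tv_reconstruct(u_jpeg, lam=0.1, eps=1e-3, steps=200, lr=0.2):
    u = u_jpeg.copy()
    for _ in range(steps):
        dx = np.diff(u, axis=1, append=u[:, -1:])   # forward differences
        dy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(dx**2 + dy**2 + eps)          # smoothed TV magnitude
        px, py = dx / mag, dy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= lr * (lam * (u - u_jpeg) - div)        # gradient step
    return u

noisy = np.random.rand(64, 64)                      # stand-in for a decoded JPEG
print(float(np.abs(np.diff(tv_reconstruct(noisy))).mean()))
```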
Course 4: Density Functional Theory, Methods, Techniques, and Applications
NASA Astrophysics Data System (ADS)
Chrétien, S.; Salahub, D. R.
Contents:
1 Introduction
2 Density functional theory
2.1 Hohenberg and Kohn theorems
2.2 Levy's constrained search
2.3 Kohn-Sham method
3 Density matrices and pair correlation functions
4 Adiabatic connection or coupling strength integration
5 Comparing and contrasting KS-DFT and HF-CI
6 Preparing new functionals
7 Approximate exchange and correlation functionals
7.1 The Local Spin Density Approximation (LSDA)
7.2 Gradient Expansion Approximation (GEA)
7.3 Generalized Gradient Approximation (GGA)
7.4 meta-Generalized Gradient Approximation (meta-GGA)
7.5 Hybrid functionals
7.6 The Optimized Effective Potential method (OEP)
7.7 Comparison between various approximate functionals
8 LAP correlation functional
9 Solving the Kohn-Sham equations
9.1 The Kohn-Sham orbitals
9.2 Coulomb potential
9.3 Exchange-correlation potential
9.4 Core potential
9.5 Other choices and sources of error
9.6 Functionality
10 Applications
10.1 Ab initio molecular dynamics for an alanine dipeptide model
10.2 Transition metal clusters: The ecstasy, and the agony...
10.3 The conversion of acetylene to benzene on Fe clusters
11 Conclusions
NASA Astrophysics Data System (ADS)
Kurzweil, Yair; Head-Gordon, Martin
2009-07-01
We develop a method that can constrain any local exchange-correlation potential to preserve basic exact conditions. Using the method of Lagrange multipliers, we calculate for each set of given Kohn-Sham orbitals a constraint-preserving potential which is closest to the given exchange-correlation potential. The method is applicable to both the time-dependent (TD) and time-independent cases. The exact conditions that are enforced for the time-independent case are Galilean covariance, zero net force and torque, and the Levy-Perdew virial theorem. For the time-dependent case we enforce translational covariance, zero net force, the Levy-Perdew virial theorem, and energy balance. We test our method on the exchange-only Krieger-Li-Iafrate (xKLI) approximate optimized effective potential for both cases. For the time-independent case, we calculated the ground state properties of some hydrogen chains and small sodium clusters for some constrained xKLI potentials and Hartree-Fock (HF) exchange. The results (total energy, Kohn-Sham eigenvalues, polarizability, and hyperpolarizability) indicate that enforcing the exact conditions is not important for these cases. On the other hand, in the time-dependent case, constraining both energy balance and zero net force yields improved results relative to TDHF calculations. We explored the electron dynamics in small sodium clusters driven by cw laser pulses. For each laser pulse we compared calculations from TD constrained xKLI, TD partially constrained xKLI, and TDHF. We found that electron dynamics such as electron ionization and moment of inertia dynamics for the constrained xKLI are most similar to the TDHF results. Also, energy conservation is better by at least one order of magnitude with respect to the unconstrained xKLI. We also discuss the problems that arise in satisfying constraints in the TD case with a non-cw driving force.
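The closest constraint-preserving potential under linear constraints has a closed form via Lagrange multipliers: for constraints A v = b, the projection is v' = v - A^T (A A^T)^{-1} (A v - b). The sketch below applies this to enforce the zero-net-force condition on a 1-D model potential; the grid, density, and potential are illustrative stand-ins, not the paper's xKLI setup.

```python
# Closest potential (grid 2-norm) to vxc satisfying a linear zero-net-force constraint.
import numpy as np

x = np.linspace(-5, 5, 201)
h = x[1] - x[0]
n = np.exp(-x**2)                        # model density
vxc = -np.exp(-0.5 * x**2) + 0.05 * x    # model potential violating zero net force

# zero net force: integral n(x) v'(x) dx = 0; integrating by parts gives
# -integral n'(x) v(x) dx = 0, a linear functional of v
A = -(np.gradient(n, h) * h)[None, :]
b = np.zeros(1)

lam = np.linalg.solve(A @ A.T, A @ vxc - b)   # Lagrange multipliers
v_con = vxc - A.T @ lam                       # projected, constraint-preserving potential

print("net force before:", (A @ vxc)[0], "after:", (A @ v_con)[0])
```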
NASA Astrophysics Data System (ADS)
Collatz, G. J.; Kawa, S. R.; Liu, Y.; Zeng, F.; Ivanoff, A.
2013-12-01
We evaluate our understanding of the land biospheric carbon cycle by benchmarking a model and its variants to atmospheric CO2 observations and to an atmospheric CO2 inversion. Though the seasonal cycle in CO2 observations is well simulated by the model (RMSE/standard deviation of observations <0.5 at most sites north of 15°N and <1 for Southern Hemisphere sites), different model setups suggest that the CO2 seasonal cycle provides some constraint on gross photosynthesis, respiration, and fire fluxes, revealed in the amplitude and phase at northern-latitude sites. CarbonTracker inversions (CT) and the model show similar phasing of the seasonal fluxes, but agreement in the amplitude varies by region. We also evaluate interannual variability (IAV) in the measured atmospheric CO2 which, in contrast to the seasonal cycle, is not well represented by the model. We estimate the contributions of biospheric and fire fluxes and atmospheric transport variability to explaining observed variability in measured CO2. Comparisons with CT show that modeled IAV has some correspondence to the inversion results north of 40°N, though fluxes match poorly at regional to continental scales. Regional and global fire emissions are strongly correlated with variability observed at northern flask sample sites and in the global atmospheric CO2 growth rate, though in the latter case fire emission anomalies are not large enough to account fully for the observed variability. We discuss remaining unexplained variability in CO2 observations in terms of the representation of fluxes by the model. This work also demonstrates the limitations of the current network of CO2 observations and the potential of new, denser surface measurements and space-based column measurements for constraining carbon cycle processes in models.
Option pricing, stochastic volatility, singular dynamics and constrained path integrals
NASA Astrophysics Data System (ADS)
Contreras, Mauricio; Hojman, Sergio A.
2014-01-01
Stochastic volatility models have been widely studied and used in the financial world. The Heston model (Heston, 1993) [7] is one of the best known models to deal with this issue. These stochastic volatility models are characterized by the fact that they explicitly depend on a correlation parameter ρ which relates the two Brownian motions that drive the stochastic dynamics associated with the volatility and the underlying asset. Solutions to the Heston model in the context of option pricing, using a path integral approach, are found in Lemmens et al. (2008) [21], while propagators for different stochastic volatility models are constructed in Baaquie (2007, 1997) [12,13]. In all previous cases, the propagator is not defined for the extreme cases ρ=±1. It is therefore necessary to obtain a solution for these extreme cases and also to understand the origin of the divergence of the propagator. In this paper we study in detail a general class of stochastic volatility models for the extreme values ρ=±1 and show that in these two cases, the associated classical dynamics corresponds to a system with second class constraints, which must be dealt with using Dirac's method for constrained systems (Dirac, 1958, 1967) [22,23] in order to properly obtain the propagator in the form of a Euclidean Hamiltonian path integral (Henneaux and Teitelboim, 1992) [25]. After integrating over momenta, one gets a Euclidean Lagrangian path integral without constraints, which in the case of the Heston model corresponds to a path integral of a repulsive radial harmonic oscillator. In all the cases studied, the price of the underlying asset is completely determined by one of the second class constraints in terms of volatility and plays no active role in the path integral.
Johnson, Jacqueline L; Kreidler, Sarah M; Catellier, Diane J; Murray, David M; Muller, Keith E; Glueck, Deborah H
2015-11-30
We used theoretical and simulation-based approaches to study Type I error rates for one-stage and two-stage analytic methods for cluster-randomized designs. The one-stage approach uses the observed data as outcomes and accounts for within-cluster correlation using a general linear mixed model. The two-stage model uses the cluster specific means as the outcomes in a general linear univariate model. We demonstrate analytically that both one-stage and two-stage models achieve exact Type I error rates when cluster sizes are equal. With unbalanced data, an exact size α test does not exist, and Type I error inflation may occur. Via simulation, we compare the Type I error rates for four one-stage and six two-stage hypothesis testing approaches for unbalanced data. With unbalanced data, the two-stage model, weighted by the inverse of the estimated theoretical variance of the cluster means, and with variance constrained to be positive, provided the best Type I error control for studies having at least six clusters per arm. The one-stage model with Kenward-Roger degrees of freedom and unconstrained variance performed well for studies having at least 14 clusters per arm. The popular analytic method of using a one-stage model with denominator degrees of freedom appropriate for balanced data performed poorly for small sample sizes and low intracluster correlation. Because small sample sizes and low intracluster correlation are common features of cluster-randomized trials, the Kenward-Roger method is the preferred one-stage approach.
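A sketch of the best-performing two-stage approach described above: cluster means as outcomes, weighted by the inverse of their estimated theoretical variance, with the between-cluster variance floored at zero. The variance estimators and the normal approximation below are simplifications of the paper's mixed-model machinery, not its exact procedure.

```python
# Two-stage weighted analysis of an unbalanced cluster-randomized design.
import numpy as np
from scipy import stats

def two_stage_test(y, cluster, arm):
    """Weighted comparison of two arms using cluster means as outcomes."""
    cl = np.unique(cluster)
    means = np.array([y[cluster == c].mean() for c in cl])
    sizes = np.array([(cluster == c).sum() for c in cl])
    arms = np.array([arm[cluster == c][0] for c in cl])
    s2w = np.mean([y[cluster == c].var(ddof=1) for c in cl])  # within-cluster variance
    s2b = max(means.var(ddof=1) - s2w / sizes.mean(), 0.0)    # between, constrained >= 0
    w = 1.0 / (s2b + s2w / sizes)     # inverse theoretical variance of a cluster mean
    est, var = [], []
    for a in (0, 1):
        m = arms == a
        est.append(np.sum(w[m] * means[m]) / np.sum(w[m]))
        var.append(1.0 / np.sum(w[m]))
    z = (est[1] - est[0]) / np.sqrt(var[0] + var[1])
    return est[1] - est[0], 2 * stats.norm.sf(abs(z))         # effect, p-value

rng = np.random.default_rng(1)
cluster = np.repeat(np.arange(12), rng.integers(5, 30, 12))   # unbalanced clusters
arm = cluster % 2
y = 0.2 * arm + rng.normal(0, 0.3, 12)[cluster] + rng.normal(0, 1, cluster.size)
print(two_stage_test(y, cluster, arm))
```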
Stabilizing l1-norm prediction models by supervised feature grouping.
Kamkar, Iman; Gupta, Sunil Kumar; Phung, Dinh; Venkatesh, Svetha
2016-02-01
Emerging Electronic Medical Records (EMRs) have reformed modern healthcare. These records have great potential to be used for building clinical prediction models. However, a problem in using them is their high dimensionality. Since much of the information may not be relevant for prediction, the underlying complexity of the prediction models may not be high. A popular way to deal with this problem is to employ feature selection. Lasso and l1-norm based feature selection methods have shown promising results. But, in the presence of correlated features, these methods select features that change considerably with small changes in the data. This prevents clinicians from obtaining a stable feature set, which is crucial for clinical decision making. Grouping correlated variables together can improve the stability of feature selection; however, such grouping is usually not known and needs to be estimated for optimal performance. Addressing this problem, we propose a new model that can simultaneously learn the grouping of correlated features and perform stable feature selection. We formulate the model as a constrained optimization problem and provide an efficient solution with guaranteed convergence. Our experiments with both synthetic and real-world datasets show that the proposed model is significantly more stable than Lasso and many existing state-of-the-art shrinkage and classification methods. We further show that in terms of prediction performance, the proposed method consistently outperforms Lasso and other baselines. Our model can be used for selecting stable risk factors for a variety of healthcare problems, and so can assist clinicians toward accurate decision making.
A Method to Constrain Mass and Spin of GRB Black Holes within the NDAF Model
NASA Astrophysics Data System (ADS)
Liu, Tong; Xue, Li; Zhao, Xiao-Hong; Zhang, Fu-Wen; Zhang, Bing
2016-04-01
Black holes (BHs) hide themselves behind various astronomical phenomena, and their properties, i.e., mass and spin, are usually difficult to constrain. One leading candidate for the central engine model of gamma-ray bursts (GRBs) invokes a stellar mass BH and a neutrino-dominated accretion flow (NDAF), with the relativistic jet launched due to neutrino-anti-neutrino annihilations. Such a model gives rise to a matter-dominated fireball, and is suitable to interpret GRBs with a dominant thermal component with a photospheric origin. We propose a method to constrain BH mass and spin within the framework of this model and apply the method to the thermally dominant GRB 101219B, whose initial jet launching radius, r0, is constrained from the data. Using our numerical model of NDAF jets, we estimate the following constraints on the central BH: mass MBH ˜ 5-9 M⊙, spin parameter a* ≳ 0.6, and disk mass 3 M⊙ ≲ Mdisk ≲ 4 M⊙. Our results also suggest that the NDAF model is a competitive candidate for the central engine of GRBs with a strong thermal component.
NASA Astrophysics Data System (ADS)
Elkhateeb, Esraa
2018-01-01
We consider a cosmological model based on a generalization of the equation of state proposed by Nojiri and Odintsov (2004) and Štefančić (2005, 2006). We argue that this model works as a dark fluid model which can interpolate between the dust equation of state and the dark energy equation of state. We show how the asymptotic behavior of the equation of state constrains the parameters of the model. The causality condition for the model is also studied to constrain the parameters, and the fixed points are tested to determine different solution classes. Observations of the Hubble diagram of Type Ia supernovae (SNe Ia) are used to further constrain the model. We present an exact solution of the model and calculate the luminosity distance and the energy density evolution. We also calculate the deceleration parameter to test the state of the universe's expansion.
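Once an equation of state w(z) is specified, the luminosity distance used to confront SNe Ia data follows by numerical integration. The sketch below uses a toy interpolating EoS that is dust-like at early times and Λ-like today, an illustrative stand-in for the paper's generalized EoS, in a flat universe containing only the dark fluid.

```python
# Toy dark-fluid luminosity distance: rho(z)/rho0 = exp(3 * int (1+w)/(1+z') dz').
import numpy as np
from scipy.integrate import quad

c, H0, z_t = 299792.458, 70.0, 1.5    # km/s, km/s/Mpc, assumed transition redshift

def w(z):                             # assumed interpolating EoS: 0 early, -1 today
    return -1.0 / (1.0 + (z / z_t) ** 2)

def E(z):                             # H(z)/H0 for the single dark fluid, flat space
    integ, _ = quad(lambda zp: 3 * (1 + w(zp)) / (1 + zp), 0, z)
    return np.sqrt(np.exp(integ))

def d_L(z):                           # luminosity distance in Mpc
    comoving, _ = quad(lambda zp: 1.0 / E(zp), 0, z)
    return (1 + z) * c / H0 * comoving

for z in (0.1, 0.5, 1.0):
    print(z, round(d_L(z), 1))
```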
NASA Astrophysics Data System (ADS)
Sanford, Ward E.; Niel Plummer, L.; Casile, Gerolamo; Busenberg, Ed; Nelms, David L.; Schlosser, Peter
2017-06-01
Dual-domain transport is an alternative conceptual and mathematical paradigm to advection-dispersion for describing the movement of dissolved constituents in groundwater. Here we test the use of a dual-domain algorithm combined with advective pathline tracking to help reconcile environmental tracer concentrations measured in springs within the Shenandoah Valley, USA. The approach also allows for the estimation of the three dual-domain parameters: mobile porosity, immobile porosity, and a domain exchange rate constant. Concentrations of CFC-113, SF6, 3H, and 3He were measured at 28 springs emanating from carbonate rocks. The different tracers give three different mean composite piston-flow ages for all the springs that vary from 5 to 18 years. Here we compare four algorithms that interpret the tracer concentrations in terms of groundwater age: piston flow, old-fraction mixing, advective-flow path modeling, and dual-domain modeling. Whereas the second two algorithms made slight improvements over piston flow at reconciling the disparate piston-flow age estimates, the dual-domain algorithm gave a very marked improvement. Optimal values for the three transport parameters were also obtained, although the immobile porosity value was not well constrained. Parameter correlation and sensitivities were calculated to help quantify the uncertainty. Although some correlation exists between the three parameters being estimated, a watershed simulation of a pollutant breakthrough to a local stream illustrates that the estimated transport parameters can still substantially help to constrain and predict the nature and timing of solute transport. The combined use of multiple environmental tracers with this dual-domain approach could be applicable in a wide variety of fractured-rock settings.
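A lumped two-box version of dual-domain transport captures the role of the three estimated parameters. In this sketch a flushed mobile zone exchanges tracer with an immobile zone at a first-order rate; the porosities, exchange rate, flushing time, and tracer input history are illustrative only, not the Shenandoah Valley calibration.

```python
# Lumped dual-domain sketch: mobile zone flushed at rate 1/tau, first-order
# exchange with an immobile zone at rate alpha.
import numpy as np
from scipy.integrate import solve_ivp

theta_m, theta_im, alpha, tau = 0.05, 0.10, 0.02, 10.0   # -, -, 1/yr, yr
c_in = lambda t: np.exp(-((t - 30.0) / 5.0) ** 2)        # tracer pulse input

def rhs(t, c):
    cm, cim = c
    dcm = (c_in(t) - cm) / tau + (alpha / theta_m) * (cim - cm)
    dcim = (alpha / theta_im) * (cm - cim)
    return [dcm, dcim]

sol = solve_ivp(rhs, (0, 80), [0.0, 0.0], dense_output=True)
t = np.linspace(0, 80, 9)
print(np.round(sol.sol(t)[0], 3))   # mobile-zone (spring) concentrations
```

The immobile zone delays and smears the breakthrough, which is why the dual-domain algorithm can reconcile tracers whose piston-flow ages disagree.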
Instant preheating in quintessential inflation with α -attractors
NASA Astrophysics Data System (ADS)
Dimopoulos, Konstantinos; Wood, Leonora Donaldson; Owen, Charlotte
2018-03-01
We investigate a compelling model of quintessential inflation in the context of α-attractors, which naturally result in a scalar potential featuring two flat regions: the inflationary plateau and the quintessential tail. The "asymptotic freedom" of α-attractors near the kinetic poles suppresses radiative corrections and interactions, which would otherwise threaten to lift the flatness of the quintessential tail and cause a fifth-force problem, respectively. Since this is a nonoscillatory inflation model, we reheat the Universe through instant preheating. The parameter space is constrained by both inflation and dark energy requirements. We find an excellent correlation between the inflationary observables and model predictions, in agreement with the α-attractors setup. We also obtain successful quintessence for natural values of the parameters. Our model predicts potentially sizeable tensor perturbations (at the level of 1%) and a slightly varying equation of state for dark energy, to be probed in the near future.
Effects of long-term representations on free recall of unrelated words
Katkov, Mikhail; Romani, Sandro
2015-01-01
Human memory stores vast amounts of information. Yet recalling this information is often challenging when specific cues are lacking. Here we consider an associative model of retrieval where each recalled item triggers the recall of the next item based on the similarity between their long-term neuronal representations. The model predicts that different items stored in memory have different probabilities of being recalled, depending on the size of their representation. Moreover, items with high recall probability tend to be recalled earlier and suppress other items. We performed an analysis of a large data set on free recall and found a highly specific pattern of statistical dependencies predicted by the model, in particular negative correlations between the number of words recalled and their average recall probability. Taken together, the experimental and modeling results presented here reveal complex interactions between memory items during recall that severely constrain recall capacity.
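The retrieval rule can be sketched as a deterministic walk on the similarity matrix of item representations: each recalled item triggers the most similar item other than the one just left, and recall terminates when the walk repeats a transition. Below, random sparse binary patterns stand in for the long-term neuronal representations; this is an illustration of the mechanism, not the authors' fitted model.

```python
# Deterministic similarity-driven free recall with a no-backtracking rule.
import numpy as np

rng = np.random.default_rng(0)
N, L = 16, 256                                       # list length, neurons
patterns = (rng.random((N, L)) < 0.1).astype(float)  # sparse binary representations
overlap = patterns @ patterns.T                      # pairwise similarities
np.fill_diagonal(overlap, -np.inf)

def free_recall(start=0):
    recalled, seen = [start], set()
    prev, cur = -1, start
    while True:
        sims = overlap[cur].copy()
        if prev >= 0:
            sims[prev] = -np.inf          # forbid immediate backtracking
        nxt = int(np.argmax(sims))
        if (cur, nxt) in seen:            # walk entered a cycle: recall stops
            return recalled
        seen.add((cur, nxt))
        if nxt not in recalled:
            recalled.append(nxt)
        prev, cur = cur, nxt

print(len(free_recall()), "of", N, "words recalled")
```

Because the walk settles into a cycle through the largest-overlap items, only a subset of the list is ever recalled, which is the capacity limitation the abstract describes.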
LHC signals of radiatively-induced neutrino masses and implications for the Zee-Babu model
NASA Astrophysics Data System (ADS)
Alcaide, Julien; Chala, Mikael; Santamaria, Arcadi
2018-04-01
Contrary to the see-saw models, extended Higgs sectors leading to radiatively-induced neutrino masses do require the extra particles to be at the TeV scale. However, these new states have often exotic decays, to which experimental LHC searches performed so far, focused on scalars decaying into pairs of same-sign leptons, are not sensitive. In this paper we show that their experimental signatures can start to be tested with current LHC data if dedicated multi-region analyses correlating different observables are used. We also provide high-accuracy estimations of the complicated Standard Model backgrounds involved. For the case of the Zee-Babu model, we show that regions not yet constrained by neutrino data and low-energy experiments can be already probed, while most of the parameter space could be excluded at the 95% C.L. in a high-luminosity phase of the LHC.
A probabilistic framework to infer brain functional connectivity from anatomical connections.
Deligianni, Fani; Varoquaux, Gael; Thirion, Bertrand; Robinson, Emma; Sharp, David J; Edwards, A David; Rueckert, Daniel
2011-01-01
We present a novel probabilistic framework to learn across several subjects a mapping from brain anatomical connectivity to functional connectivity, i.e. the covariance structure of brain activity. This prediction problem must be formulated as a structured-output learning task, as the predicted parameters are strongly correlated. We introduce a model selection framework based on cross-validation with a parametrization-independent loss function suitable to the manifold of covariance matrices. Our model is based on constraining the conditional independence structure of functional activity by the anatomical connectivity. Subsequently, we learn a linear predictor of a stationary multivariate autoregressive model. This natural parameterization of functional connectivity also enforces the positive-definiteness of the predicted covariance and thus matches the structure of the output space. Our results show that functional connectivity can be explained by anatomical connectivity on a rigorous statistical basis, and that a proper model of functional connectivity is essential to assess this link.
Ajello, M.; Atwood, W. B.; Baldini, L.; ...
2011-08-15
During its first year of data taking, the Large Area Telescope (LAT) onboard the Fermi Gamma-Ray Space Telescope has collected a large sample of high-energy cosmic-ray electrons and positrons (CREs). We present the results of a directional analysis of the CRE events, in which we searched for a flux excess correlated with the direction of the Sun. Two different and complementary analysis approaches were implemented, and neither yielded evidence of a significant CRE flux excess from the Sun. Here, we derive upper limits on the CRE flux from the Sun's direction, and use these bounds to constrain two classes of dark matter models which predict a solar CRE flux: (1) models in which dark matter annihilates to CREs via a light intermediate state, and (2) inelastic dark matter models in which dark matter annihilates to CREs.
Gamma-ray activity of Seyfert galaxies and constraints on hot accretion flows
NASA Astrophysics Data System (ADS)
Wojaczyński, Rafał; Niedźwiecki, Andrzej; Xie, Fu-Guo; Szanecki, Michał
2015-12-01
Aims: We check how the Fermi/LAT data constrain the physics of hot accretion flows that are most likely present in low-luminosity AGNs. Methods: Using a precise model of emission from hot flows, we studied the flow γ-ray emission resulting from proton-proton interactions. We explored the dependence of the γ-ray luminosity on the accretion rate, the black hole spin, the magnetic field strength, the electron heating efficiency, and the particle distribution. Then, we compared the hadronic γ-ray luminosities predicted by the model for several nearby Seyfert 1 galaxies with the results of our analysis of 6.4 years of Fermi/LAT observations of these AGNs. Results: In agreement with previous studies, we find a significant γ-ray detection in NGC 6814. We were only able to derive upper limits for the remaining objects, although we report marginally significant (~3σ) signals at the positions of NGC 4151 and NGC 4258. The derived upper limits for the flux above 1 GeV allow us to constrain the proton acceleration efficiency in flows with heating of electrons dominated by Coulomb interactions, a case favored by the X-ray spectral properties. In these flows, at most ~10% of the accretion power can be used for a relativistic acceleration of protons. Upper limits for the flux below 1 GeV can constrain the magnetic field strength and black hole spin value; we find these constraints for NGC 7213 and NGC 4151. We also note that the spectral component above ~4 GeV previously found in the Fermi/LAT data of Centaurus A may be due to hadronic emission from a flow within the above constraint. We rule out this origin of the γ-ray emission for NGC 6814. For models with a strong magnetohydrodynamic heating of electrons, the hadronic γ-ray fluxes are below the Fermi/LAT sensitivity even for the closest AGNs. In these models, nonthermal Compton radiation may dominate in the γ-ray range if electrons are efficiently accelerated and the acceleration index is hard; for the index ≃2, the LAT upper limits constrain the fraction of accretion power used for such an acceleration to at most ~5%. Finally, we note that the three Seyfert 2 galaxies with high starburst activity, NGC 4945, NGC 1068, and Circinus, show an interesting correlation of their γ-ray luminosities with properties of their active nuclei, and we discuss this in the context of the hot flow model.
Ward Identity and Scattering Amplitudes for Nonlinear Sigma Models
NASA Astrophysics Data System (ADS)
Low, Ian; Yin, Zhewei
2018-02-01
We present a Ward identity for nonlinear sigma models using generalized nonlinear shift symmetries, without introducing current algebra or coset space. The Ward identity constrains correlation functions of the sigma model such that the Adler zero is guaranteed for S-matrix elements, and gives rise to a subleading single soft theorem that is valid at the quantum level and to all orders in the Goldstone decay constant. For tree amplitudes, the Ward identity leads to a novel Berends-Giele recursion relation as well as an explicit form of the subleading single soft factor. Furthermore, interactions of the cubic biadjoint scalar theory associated with the single soft limit, which were previously discovered using the Cachazo-He-Yuan representation of tree amplitudes, can be seen to emerge from matrix elements of conserved currents corresponding to the generalized shift symmetry.
On meeting capital requirements with a chance-constrained optimization model.
Atta Mills, Ebenezer Fiifi Emire; Yu, Bo; Gu, Lanlan
2016-01-01
This paper deals with a capital-to-risk-asset-ratio chance-constrained optimization model in the presence of loans, treasury bills, fixed assets and non-interest-earning assets. To model the dynamics of loans, we introduce a modified CreditMetrics approach. This leads to the development of a deterministic convex counterpart of the capital-to-risk-asset-ratio chance constraint. We analyze our model under the worst-case scenario, i.e., loan default. The theoretical model is analyzed by applying numerical procedures, in order to draw valuable insights from a financial outlook. Our results suggest that our capital-to-risk-asset-ratio chance-constrained optimization model guarantees that banks meet the capital requirements of Basel III with a likelihood of 95% irrespective of changes in the future market value of assets.
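For normally distributed asset values, a chance constraint admits the standard deterministic convex counterpart mu.x - z_0.95 * sqrt(x' S x) >= threshold. The sketch below optimizes a toy three-asset balance sheet under a constraint of this shape; the returns, covariances, risk weights, equity buffer, and the specific formulation are illustrative assumptions, not the paper's CreditMetrics-based calibration.

```python
# Deterministic convex counterpart of a 95% chance constraint on the capital ratio.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

mu = np.array([0.08, 0.03, 0.01])      # expected returns: loans, T-bill, other (assumed)
S = np.diag([0.15, 0.01, 0.001]) ** 2  # return covariance, diagonal toy case (assumed)
risk_w = np.array([1.0, 0.0, 1.0])     # Basel-style risk weights (assumed)
equity, car_min = 0.20, 0.105          # assumed equity buffer, Basel III target ratio
z95 = norm.ppf(0.95)

cons = [
    {"type": "eq", "fun": lambda x: x.sum() - 1.0},
    # P(capital ratio >= car_min) >= 0.95 in its deterministic convex form
    {"type": "ineq", "fun": lambda x: equity + mu @ x
        - z95 * np.sqrt(x @ S @ x) - car_min * (risk_w @ x)},
]
res = minimize(lambda x: -(mu @ x), np.array([0.3, 0.5, 0.2]),
               bounds=[(0.0, 1.0)] * 3, constraints=cons, method="SLSQP")
print(np.round(res.x, 3), "expected return:", round(-res.fun, 4))
```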
NASA Astrophysics Data System (ADS)
Flores, A. N.; Entekhabi, D.; Bras, R. L.
2007-12-01
Soil hydraulic and thermal properties (SHTPs) affect both the rate of moisture redistribution in the soil column and the volumetric soil water capacity. Adequately constraining these properties through field and lab analysis to parameterize spatially-distributed hydrology models is often prohibitively expensive. Because SHTPs vary significantly at small spatial scales, individual soil samples are also only reliably indicative of local conditions, and these properties remain a significant source of uncertainty in soil moisture and temperature estimation. In ensemble-based soil moisture data assimilation, uncertainty in the model-produced prior estimate due to associated uncertainty in SHTPs must be taken into account to avoid under-dispersive ensembles. To treat SHTP uncertainty for purposes of supplying inputs to a distributed watershed model we use the restricted pairing (RP) algorithm, an extension of Latin Hypercube (LH) sampling. The RP algorithm generates an arbitrary number of SHTP combinations by sampling the appropriate marginal distributions of the individual soil properties using the LH approach, while imposing a target rank correlation among the properties. A previously-published meta-database of 1309 soils representing 12 textural classes is used to fit appropriate marginal distributions to the properties and compute the target rank correlation structure, conditioned on soil texture. Given categorical soil textures, our implementation of the RP algorithm generates an arbitrarily-sized ensemble of realizations of the SHTPs required as input to the TIN-based Realtime Integrated Basin Simulator with vegetation dynamics (tRIBS+VEGGIE) distributed parameter ecohydrology model. Soil moisture ensembles simulated with RP-generated SHTPs exhibit less variance than ensembles simulated with SHTPs generated by a scheme that neglects correlation among properties. Neglecting correlation among SHTPs can lead to physically unrealistic combinations of parameters that exhibit implausible hydrologic behavior when input to the tRIBS+VEGGIE model.
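Restricted pairing is essentially the Iman-Conover method: draw Latin hypercube samples from each marginal, then reorder the columns so the sample rank correlation approaches a target matrix. A sketch with placeholder marginals and a placeholder target correlation in place of the soil-database fits:

```python
# Latin hypercube sampling with an imposed target rank correlation (Iman-Conover).
import numpy as np
from scipy import stats

def restricted_pairing(marginals, target_corr, n, seed=0):
    rng = np.random.default_rng(seed)
    k = len(marginals)
    # stratified (Latin hypercube) draws of each marginal
    u = (rng.permuted(np.tile(np.arange(n), (k, 1)), axis=1).T + rng.random((n, k))) / n
    x = np.column_stack([m.ppf(u[:, j]) for j, m in enumerate(marginals)])
    # van der Waerden scores with the target correlation imposed via Cholesky
    ranks = np.argsort(np.argsort(rng.random((n, k)), axis=0), axis=0)
    scores = stats.norm.ppf((ranks + 1) / (n + 1)) @ np.linalg.cholesky(target_corr).T
    # reorder each column of x so its ranks match the correlated scores
    for j in range(k):
        x[np.argsort(scores[:, j]), j] = np.sort(x[:, j])
    return x

# hypothetical marginals standing in for three soil properties
marg = [stats.lognorm(0.5, scale=1e-6), stats.beta(2, 5), stats.norm(1.5, 0.2)]
C = np.array([[1.0, -0.6, 0.3], [-0.6, 1.0, -0.2], [0.3, -0.2, 1.0]])
ens = restricted_pairing(marg, C, 200)
print(np.round(stats.spearmanr(ens)[0], 2))   # should approximate C
```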
NASA Astrophysics Data System (ADS)
Tang, Huaxiu; Zhan, Jinyan; Deng, Xiangzheng; Ma, Jinsong
2007-11-01
Using GIS technologies, we interpolate site-based meteorological data into climatic surfaces, which are the main inputs to the CropWat model used to estimate reference evapotranspiration (ET0). Combining ET0 with the share of cultivated land derived from the Landsat TM/ETM imagery covering the entire case study area, the Huang-Huai-Hai plain, we estimate irrigation water requirements (IWRs) for the years 1991 and 2000. We then introduce the potential yield (PY) of cultivated land estimated from the Estimation Model for the Agricultural Productivity Potential (EMAPP) to explore the relationship between the IWRs and the PY. By conducting GIS-based spatial overlay analyses, we find a positive correlation between the IWRs and the PY of cultivated land. Finally, we conclude that the IWR is now a constraining factor on the PY of cultivated land in the Huang-Huai-Hai plain in areas where irrigation water is limited. The results offer a scientific basis for decision making in the exploitation and utilization of resources and energy, as well as for land use planning, protection of potential yields, and management of irrigation water at the regional level.
Cosmological implications of primordial black holes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luis Bernal, José; Bellomo, Nicola; Raccanelli, Alvise
The possibility that a relevant fraction of the dark matter might be comprised of Primordial Black Holes (PBHs) has been seriously reconsidered after LIGO's detection of a ∼30 M⊙ binary black hole merger. Despite the strong interest in the model, there is a lack of studies on possible cosmological implications and effects on cosmological parameter inference. We investigate correlations with the other standard cosmological parameters using cosmic microwave background observations, finding significant degeneracies, especially with the tilt of the primordial power spectrum and the sound horizon at radiation drag. However, these degeneracies can be greatly reduced with the inclusion of small-scale polarization data. We also explore whether PBHs as dark matter in simple extensions of the standard ΛCDM cosmological model induce extra degeneracies, especially between the additional parameters and those of the PBHs. Finally, we present cosmic microwave background constraints on the fraction of dark matter in PBHs, not only for monochromatic PBH mass distributions but also for popular extended mass distributions. Our results show that constraints for extended mass distributions are tighter, but also that a considerable amount of constraining power comes from the high-ℓ polarization data. Moreover, we constrain the shape of such mass distributions in terms of the corresponding constraints on the PBH mass fraction.
Constraining new physics models with isotope shift spectroscopy
NASA Astrophysics Data System (ADS)
Frugiuele, Claudia; Fuchs, Elina; Perez, Gilad; Schlaffer, Matthias
2017-07-01
Isotope shifts of transition frequencies in atoms constrain generic long- and intermediate-range interactions. We focus on new physics scenarios that can be most strongly constrained by King linearity violation such as models with B-L vector bosons, the Higgs portal, and chameleon models. With the anticipated precision, King linearity violation has the potential to set the strongest laboratory bounds on these models in some regions of parameter space. Furthermore, we show that this method can probe the couplings relevant for the protophobic interpretation of the recently reported Be anomaly. We extend the formalism to include an arbitrary number of transitions and isotope pairs and fit the new physics coupling to the currently available isotope shift measurements.
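King linearity can be checked directly: mass-normalized ("modified") isotope shifts of two transitions should fall on a straight line across isotope pairs, and residuals beyond experimental error would signal a new interaction. A sketch with synthetic shift values, not real isotope data:

```python
# King plot: mnu_i = nu_i^{AA'} / mu^{AA'}, with mu^{AA'} = 1/m_A - 1/m_A'.
import numpy as np

m = np.array([39.96, 41.96, 43.96, 45.95, 47.95])           # isotope masses (u)
nu1 = np.array([0.0, 2.350e9, 4.540e9, 6.630e9, 8.640e9])   # shifts vs lightest (Hz)
nu2 = 1.31 * nu1 + np.array([0, 3e3, -2e3, 4e3, -1e3])      # second transition (toy)

mu = 1.0 / m[0] - 1.0 / m[1:]          # inverse-mass differences per isotope pair
x, y = nu1[1:] / mu, nu2[1:] / mu      # modified isotope shifts
slope, intercept = np.polyfit(x, y, 1) # King line
residual = y - (slope * x + intercept) # nonlinearity: the new-physics observable
print("F21 =", round(slope, 4), " nonlinearity (Hz*u):", np.round(residual, 1))
```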
Goicoechea, Héctor C; Olivieri, Alejandro C; Tauler, Romà
2010-03-01
Correlation constrained multivariate curve resolution-alternating least-squares is shown to be a feasible method for processing first-order instrumental data and achieving analyte quantitation in the presence of unexpected interferences. For both simulated and experimental data sets, the proposed method could correctly retrieve the analyte and interference spectral profiles and perform accurate estimations of analyte concentrations in test samples. Since no information concerning the interferences was present in the calibration samples, the proposed multivariate calibration approach including the correlation constraint facilitates achieving the so-called second-order advantage for the analyte of interest, which is known to hold for more complex, information-richer higher-order instrumental data. The proposed method is tested using a simulated data set and two experimental data systems, one for the determination of ascorbic acid in powder juices using UV-visible absorption spectral data, and another for the determination of tetracycline in serum samples using fluorescence emission spectroscopy.
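A toy sketch of the correlation constraint inside an alternating least-squares loop, under assumed non-negativity constraints; the function and the calibration step are illustrative only, not the authors' implementation.

```python
import numpy as np

def mcr_als_corr(D, C0, known_conc, analyte=0, n_iter=50):
    """Toy correlation-constrained MCR-ALS.

    D          (samples x channels) first-order data matrix
    C0         (samples x components) initial concentration estimates
    known_conc reference analyte concentrations (np.nan for test samples)
    analyte    column of C holding the analyte of interest
    """
    C = C0.copy()
    for _ in range(n_iter):
        # spectra from least squares, then non-negativity
        S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0], 0, None)
        # concentrations from least squares, then non-negativity
        C = np.clip(np.linalg.lstsq(S.T, D.T, rcond=None)[0].T, 0, None)
        # correlation constraint: regress the analyte profile onto the
        # reference concentrations of the calibration samples
        cal = ~np.isnan(known_conc)
        slope, intercept = np.polyfit(C[cal, analyte], known_conc[cal], 1)
        C[:, analyte] = np.clip(slope * C[:, analyte] + intercept, 0, None)
    return C, S
```

The constraint step forces the analyte's resolved concentrations in calibration samples to track the known reference values, which is what breaks the rotational ambiguity for that component.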
Röntsch, Raoul; Schulze, Markus
2015-09-21
We study top quark pair production in association with a Z boson at the Large Hadron Collider (LHC) and investigate the prospects of measuring the couplings of top quarks to the Z boson. To date these couplings have not been constrained in direct measurements. Such a determination will be possible for the first time at the LHC. Our calculation improves previous coupling studies through the inclusion of next-to-leading order (NLO) QCD corrections in production and decays of all unstable particles. We treat top quarks in the narrow-width approximation and retain all NLO spin correlations. To determine the sensitivity of a coupling measurement, we perform a binned log-likelihood ratio test based on normalization and shape information of the angle between the leptons from the Z boson decay. The obtained limits account for statistical uncertainties as well as leading theoretical systematics from residual scale dependence and parton distribution functions. We use current CMS data to place the first direct constraints on the ttbZ couplings. We also consider the upcoming high-energy LHC run and find that with 300 inverse fb of data at an energy of 13 TeV the vector and axial ttbZ couplings can be constrained at the 95% confidence level to C_V=0.24^{+0.39}_{-0.85} and C_A=-0.60^{+0.14}_{-0.18}, where the central values are the Standard Model predictions. This is a reduction of uncertainties by 25% and 42%, respectively, compared to an analysis based on leading-order predictions. We also translate these results into limits on dimension-six operators contributing to the ttbZ interactions beyond the Standard Model.
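A hedged sketch of a binned log-likelihood ratio comparison of the kind described, with invented Poisson templates for the dilepton-angle distribution standing in for the NLO predictions.

```python
import numpy as np

def binned_llr(data, exp_sm, exp_alt):
    """Binned log-likelihood ratio between two Poisson templates.

    data     observed counts per bin of the lepton-lepton angle
    exp_sm   expected counts under Standard Model couplings
    exp_alt  expected counts under alternative ttbZ couplings
    """
    def loglike(mu):
        # Poisson log-likelihood up to data-only constants
        return np.sum(data * np.log(mu) - mu)
    return 2.0 * (loglike(exp_sm) - loglike(exp_alt))

rng = np.random.default_rng(1)
exp_sm = np.array([120.0, 100.0, 90.0, 85.0, 95.0, 110.0])
exp_alt = exp_sm * np.array([1.08, 1.02, 0.97, 0.95, 1.00, 1.05])  # shape shift
data = rng.poisson(exp_sm)
print(binned_llr(data, exp_sm, exp_alt))
```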
Constraints on Biogenic Emplacement of Crystalline Calcium Carbonate and Dolomite
NASA Astrophysics Data System (ADS)
Colas, B.; Clark, S. M.; Jacob, D. E.
2015-12-01
Amorphous calcium carbonate (ACC) is a biogenic precursor of the calcium carbonates forming the shells and skeletons of marine organisms, which are key components of the marine environment. Understanding carbonate formation is an essential prerequisite for quantifying the effects of climate change and pollution on marine populations. Water is a critical component of the structure of ACC and the key component controlling the stability of the amorphous state. Addition of small amounts of magnesium (1-5% of the calcium content) is known to promote the stability of ACC, presumably through stabilization of the hydrogen bonding network. Understanding the hydrogen bonding network in ACC is therefore fundamental to understanding its stability. Our approach is to use Monte-Carlo simulations constrained by X-ray and neutron scattering data to determine hydrogen bonding networks in ACC as a function of magnesium doping. We have developed a synthesis protocol to make ACC, collected X-ray data suitable for determining Ca, Mg and O correlations, and collected neutron data, which give information on hydrogen/deuterium positions (the interaction of X-rays with hydrogen is too weak to constrain hydrogen atom positions with X-rays alone). The X-ray and neutron data are used to constrain reverse Monte-Carlo modelling of the ACC structure using the Empirical Potential Structure Refinement program, in order to yield a complete structural model for ACC including water molecule positions. We will present details of our sample synthesis and characterization methods, X-ray and neutron scattering data, and reverse Monte-Carlo simulation results, together with a discussion of the role of hydrogen bonding in ACC stability.
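The sketch below shows a single toy reverse Monte-Carlo acceptance step against a target pair-distribution histogram; the normalization and acceptance weighting are simplified placeholders, not the EPSR algorithm.

```python
import numpy as np

def rmc_step(positions, g_target, r_bins, box, chi2_old, sigma=0.02, rng=None):
    """One toy reverse Monte-Carlo move for a periodic box of atoms."""
    rng = rng or np.random.default_rng()
    trial = positions.copy()
    i = rng.integers(len(trial))
    trial[i] = (trial[i] + rng.normal(0.0, sigma, 3)) % box  # random move

    # pair distances under the minimum-image convention
    d = trial[:, None, :] - trial[None, :, :]
    d -= box * np.round(d / box)
    r = np.sqrt((d ** 2).sum(-1))[np.triu_indices(len(trial), 1)]

    hist, _ = np.histogram(r, bins=r_bins)
    g_trial = hist / hist.sum()
    chi2_new = np.sum((g_trial - g_target) ** 2)

    # accept moves that improve the fit (or occasionally worsen it)
    if chi2_new <= chi2_old or rng.random() < np.exp(chi2_old - chi2_new):
        return trial, chi2_new
    return positions, chi2_old
```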
On the evaluation of global sea-salt aerosol models at coastal/orographic sites
NASA Astrophysics Data System (ADS)
Spada, M.; Jorba, O.; Pérez García-Pando, C.; Janjic, Z.; Baldasano, J. M.
2015-01-01
Sea-salt aerosol global models are typically evaluated against concentration observations at coastal stations that are unaffected by local surf conditions and thus considered representative of open ocean conditions. Despite recent improvements in sea-salt source functions, studies still show significant model errors in specific regions. Using a multiscale model, we investigated the effect of high model resolution (0.1° × 0.1° vs. 1° × 1.4°) upon sea-salt patterns at four stations from the University of Miami Network: Baring Head, Chatham Island, and Invercargill in New Zealand, and Marion Island in the sub-antarctic Indian Ocean. Normalized biases improved from +63.7% to +3.3% and correlation increased from 0.52 to 0.84. The representation of sea/land interfaces, mesoscale circulations, and precipitation in the higher-resolution model played a major role in the simulation of annual concentration trends. Our results recommend caution when comparing or constraining global models using surface concentration observations from coastal stations.
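For reference, the two evaluation metrics quoted above can be computed as in this sketch with invented station data:

```python
import numpy as np

def normalized_mean_bias(model, obs):
    """Normalized mean bias in percent (e.g. +63.7% means overestimation)."""
    return 100.0 * (model - obs).sum() / obs.sum()

def pearson_r(model, obs):
    return np.corrcoef(model, obs)[0, 1]

# Hypothetical monthly sea-salt concentrations (ug/m3) at one station
obs   = np.array([4.1, 3.8, 5.0, 6.2, 7.1, 6.5, 5.9, 5.2, 4.7, 4.3, 4.0, 4.4])
model = np.array([6.5, 6.1, 7.9, 9.8, 11.5, 10.4, 9.7, 8.6, 7.8, 7.2, 6.4, 7.0])
print(normalized_mean_bias(model, obs), pearson_r(model, obs))
```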
High-redshift post-reionization cosmology with 21cm intensity mapping
NASA Astrophysics Data System (ADS)
Obuljen, Andrej; Castorina, Emanuele; Villaescusa-Navarro, Francisco; Viel, Matteo
2018-05-01
We investigate the possibility of performing cosmological studies in the redshift range 2.5 < z < 5.
NASA Astrophysics Data System (ADS)
Rodrigues, João Fabrício Mota; Coelho, Marco Túlio Pacheco; Ribeiro, Bruno R.
2018-04-01
Species distribution models (SDM) have been broadly used in ecology to address theoretical and practical problems. Currently, there are two main approaches to generate SDMs: (i) correlative models, which are based on species occurrences and environmental predictor layers, and (ii) process-based models, which are constructed from species' functional traits and physiological tolerances. The distributions estimated by each approach are based on different components of the species niche. Predictions of correlative models approach species' realized niches, while predictions of process-based models are more akin to species' fundamental niches. Here, we integrated the predictions of fundamental and realized distributions of the freshwater turtle Trachemys dorbigni. The fundamental distribution was estimated using data on T. dorbigni's egg incubation temperature, and the realized distribution was estimated using species occurrence records. Both types of distributions were estimated using the same regression approaches (logistic regression and support vector machines), each considering macroclimatic and microclimatic temperatures. The realized distribution of T. dorbigni was generally nested in its fundamental distribution, reinforcing the theoretical assumption that a species' realized niche is a subset of its fundamental niche. Both modelling algorithms produced similar results, but microtemperature generated better results than macrotemperature for the incubation model. Finally, our results reinforce the conclusion that species' realized distributions are constrained by factors other than thermal tolerances alone.
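A toy correlative-SDM sketch using the two regression approaches named above (logistic regression and a support vector machine); the presence/absence data and the thermal envelope are fabricated for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical training sites: columns are macro- and micro-temperature (C);
# presence = 1 inside a toy thermal envelope on micro-temperature.
X = rng.uniform([10, 8], [35, 30], size=(200, 2))
presence = ((X[:, 1] > 14) & (X[:, 1] < 26)).astype(int)

logit = LogisticRegression().fit(X, presence)
svm = SVC(probability=True).fit(X, presence)

site = np.array([[22.0, 18.5]])   # a new site to score
print(logit.predict_proba(site)[0, 1], svm.predict_proba(site)[0, 1])
```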
Ashworth, Danielle C.; Fuller, Gary W.; Toledano, Mireille B.; Font, Anna; Elliott, Paul; Hansell, Anna L.; de Hoogh, Kees
2013-01-01
Background. Research to date on health effects associated with incineration has found limited evidence of health risks, but many previous studies have been constrained by poor exposure assessment. This paper provides a comparative assessment of atmospheric dispersion modelling and distance from source (a commonly used proxy for exposure) as exposure assessment methods for pollutants released from incinerators. Methods. Distance from source and the atmospheric dispersion model ADMS-Urban were used to characterise ambient exposures to particulates from two municipal solid waste incinerators (MSWIs) in the UK. Additionally an exploration of the sensitivity of the dispersion model simulations to input parameters was performed. Results. The model output indicated extremely low ground level concentrations of PM10, with maximum concentrations of <0.01 μg/m3. Proximity and modelled PM10 concentrations for both MSWIs at postcode level were highly correlated when using continuous measures (Spearman correlation coefficients ~ 0.7) but showed poor agreement for categorical measures (deciles or quintiles, Cohen's kappa coefficients ≤ 0.5). Conclusion. To provide the most appropriate estimate of ambient exposure from MSWIs, it is essential that incinerator characteristics, magnitude of emissions, and surrounding meteorological and topographical conditions are considered. Reducing exposure misclassification is particularly important in environmental epidemiology to aid detection of low-level risks. PMID:23935644
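The contrast between continuous and categorical agreement reported above can be reproduced in miniature as follows, with invented exposure data:

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(3)
distance = rng.uniform(0, 10, 500)                     # km from the stack
modelled = 0.008 * np.exp(-distance / 3) + rng.normal(0, 0.001, 500)
modelled = np.clip(modelled, 0, None)                  # modelled PM10 (ug/m3)

rho, _ = spearmanr(-distance, modelled)                # continuous agreement
edges = np.linspace(0.2, 0.8, 4)                       # quintile boundaries
q_prox = np.digitize(-distance, np.quantile(-distance, edges))
q_mod = np.digitize(modelled, np.quantile(modelled, edges))
kappa = cohen_kappa_score(q_prox, q_mod)               # categorical agreement
print(f"Spearman rho = {rho:.2f}, Cohen's kappa = {kappa:.2f}")
```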
Predicting Lg Coda Using Synthetic Seismograms and Media With Stochastic Heterogeneity
NASA Astrophysics Data System (ADS)
Tibuleac, I. M.; Stroujkova, A.; Bonner, J. L.; Mayeda, K.
2005-12-01
Recent examinations of the characteristics of coda-derived Sn and Lg spectra for yield estimation have shown that the spectral peak of Nevada Test Site (NTS) explosion spectra is depth-of-burial dependent, and that this peak is shifted to higher frequencies for Lop Nor explosions at the same depths. To confidently use coda-based yield formulas, we need to understand and predict coda spectral shape variations with depth, source media, velocity structure, topography, and geological heterogeneity. We present results of a coda modeling study to predict Lg coda. During the initial stages of this research, we have acquired and parameterized a deterministic 6 deg. x 6 deg. velocity and attenuation model centered on the Nevada Test Site. Near-source data are used to constrain density and attenuation profiles for the upper five km. The upper crust velocity profiles are quilted into a background velocity profile at depths greater than five km. The model is parameterized for use in a modified version of the Generalized Fourier Method in two dimensions (GFM2D). We modify this model to include stochastic heterogeneities of varying correlation lengths within the crust. Correlation length, Hurst number and fractional velocity perturbation of the heterogeneities are used to construct different realizations of the random media. We use nuclear explosion and earthquake cluster waveform analysis, as well as well log and geological information to constrain the stochastic parameters for a path between the NTS and the seismic stations near Mina, Nevada. Using multiple runs, we quantify the effects of variations in the stochastic parameters, of heterogeneity location in the crust and attenuation on coda amplitude and spectral characteristics. We calibrate these parameters by matching synthetic earthquake Lg coda envelopes to coda envelopes of local earthquakes with well-defined moments and mechanisms. We generate explosion synthetics for these calibrated deterministic and stochastic models. Secondary effects, including a compensated linear vector dipole source, are superposed on the synthetics in order to adequately characterize the Lg generation. We use this technique to characterize the effects of depth of burial on the coda spectral shapes.
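A 2-D sketch of the wavenumber-domain construction of von Karman random media (after Frankel and Clayton, 1986), with invented grid and medium parameters:

```python
import numpy as np

def random_media_2d(n, dx, corr_len, sigma, hurst=0.5, seed=0):
    """Filter white noise in the wavenumber domain to impose a von Karman
    autocorrelation (a Gaussian ACF would use a Gaussian-shaped filter).

    n         grid points per side
    dx        grid spacing (km)
    corr_len  correlation length a (km)
    sigma     fractional RMS velocity perturbation
    hurst     Hurst exponent
    """
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((n, n))
    k = 2.0 * np.pi * np.fft.fftfreq(n, dx)
    kx, ky = np.meshgrid(k, k)
    k2 = kx ** 2 + ky ** 2
    # 2-D von Karman power spectrum ~ a^2 / (1 + k^2 a^2)^(H + 1)
    psd = corr_len ** 2 / (1.0 + k2 * corr_len ** 2) ** (hurst + 1.0)
    field = np.fft.ifft2(np.fft.fft2(noise) * np.sqrt(psd)).real
    return sigma * field / field.std()   # scale to the requested RMS

dv = random_media_2d(n=256, dx=0.5, corr_len=2.0, sigma=0.05)
print(dv.std())   # ~0.05 fractional velocity perturbation
```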
NASA Astrophysics Data System (ADS)
Glaze, L. S.; Baloga, S. M.; Garvin, J. B.; Quick, L. C.
2014-05-01
Lava flows and flow fields on Venus lack sufficient topographic data for any type of quantitative modeling to estimate eruption rates and durations. Such modeling can constrain rates of resurfacing and provide insights into magma plumbing systems.
NASA Astrophysics Data System (ADS)
Miller, Meghan S.; Sun, Daoyuan; O'Driscoll, Leland; Becker, Thorsten W.; Holt, Adam; Diaz, Jordi; Thomas, Christine
2015-04-01
Detailed mantle and lithospheric structure from the Canary Islands to Iberia has been imaged with data from recent temporary deployments and select permanent stations from over 300 broadband seismometers. The stations extended across Morocco and Spain as part of the PICASSO, IberArray, and Morocco-Münster experiments. We present results from S receiver functions (SRF), shear wave splitting, waveform modeling, and geodynamic models that help constrain the tectonic evolution of the westernmost Mediterranean, including orogenesis of the Atlas Mountains and occurrence of localized alkaline volcanism. Our receiver function images, in agreement with previous geophysical modeling, show that the lithosphere is thin (~65 km) beneath the Atlas, but thickens (~100 km) over a very short length scale at the flanks of the mountains. We find that these dramatic changes in lithospheric thickness also correspond to dramatic decreases in delay times inferred from S and SKS splitting observations of seismic anisotropy. Pockets and conduits of low seismic velocity material below the lithosphere extend along much of the Atlas to Southern Spain and correlate with the locations of Pliocene-Quaternary magmatism. Waveform analysis from the USC linear seismic array across the Atlas Mountains constrains the position, shape, and physical characteristics of one localized, low velocity conduit that extends from the uppermost mantle (~200 km depth) up to the volcanoes in the Middle Atlas. The shape, position and temperature of these seismically imaged low velocity anomalies, topography of the base of the lithosphere, morphology of the subducted slab beneath the Alboran Sea, position of the West African Craton and correlation with mantle flow inferred from shear wave splitting suggest that the unusually high topography of the Atlas Mountains and isolated recent volcanics are due to active mantle support that may be from material channeled from the Canary Island plume.
NASA Astrophysics Data System (ADS)
Hirakawa, E. T.; Pitarka, A.; Mellors, R. J.
2015-12-01
One challenging task in explosion seismology is the development of physical models for explaining the generation of S-waves during underground explosions. Pitarka et al. (2015) used finite difference simulations of SPE-3 (part of the Source Physics Experiment, SPE, an ongoing series of underground chemical explosions at the Nevada National Security Site) and found that while a large component of shear motion was generated directly at the source, additional scattering from heterogeneous velocity structure and topography is necessary to better match the data. Large-scale features in the velocity model used in the SPE simulations are well constrained; small-scale heterogeneity, however, is poorly constrained. In our study we used a stochastic representation of small-scale variability in order to produce additional high-frequency scattering. Two methods for generating the distributions of random scatterers are tested. The first works in the spatial domain, essentially smoothing a set of random numbers over an ellipsoidal volume using a Gaussian weighting function. The second consists of filtering a set of random numbers in the wavenumber domain to obtain a set of heterogeneities with a desired statistical distribution (Frankel and Clayton, 1986). This method is capable of generating distributions with either Gaussian or von Karman autocorrelation functions. The key parameters that affect scattering are the correlation length, the standard deviation of velocity for the heterogeneities, and the Hurst exponent, which is only present in the von Karman media. Overall, we find that shorter correlation lengths as well as higher standard deviations result in increased tangential motion in the frequency band of interest (0-10 Hz). This occurs partially through S-wave refraction, but mostly through P-S and Rg-S wave conversions. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
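The first, spatial-domain method can be sketched as smoothing white noise, as below; the kernel width standing in for the correlation length is an assumption, and anisotropic (ellipsoidal) smoothing would pass per-axis widths instead of one scalar.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smoothed_heterogeneities(n, corr_len_cells, sigma_v, seed=0):
    """Spatial-domain variant: smooth 3-D white noise with a Gaussian kernel
    so the field acquires roughly the requested correlation length (in grid
    cells), then rescale to the requested RMS velocity perturbation."""
    rng = np.random.default_rng(seed)
    field = gaussian_filter(rng.standard_normal((n, n, n)), corr_len_cells)
    return sigma_v * field / field.std()

dv = smoothed_heterogeneities(64, corr_len_cells=4, sigma_v=0.05)
print(dv.shape, round(float(dv.std()), 3))
```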
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alioli, Simone; Farina, Marco; Pappadopulo, Duccio
New physics that is too heavy to be produced directly can leave measurable imprints on the tails of kinematic distributions at the LHC. We use energetic QCD processes to perform novel measurements of the Standard Model (SM) Effective Field Theory. We show that the dijet invariant mass spectrum, and the inclusive jet transverse momentum spectrum, are sensitive to a dimension 6 operator that modifies the gluon propagator at high energies. The dominant effect is constructive or destructive interference with SM jet production. Here, we compare differential next-to-leading order predictions from POWHEG to public 7 TeV jet data, including scale, PDF, and experimental uncertainties and their respective correlations. Furthermore, we constrain a New Physics (NP) scale of 3.5 TeV with current data. We project the reach of future 13 and 100 TeV measurements, which we estimate to be sensitive to NP scales of 8 and 60 TeV, respectively. As an application, we apply our bounds to constrain heavy vector octet colorons that couple to the QCD current. We conclude that effective operators will surpass bump hunts, in terms of coloron mass reach, even for sequential couplings.
Scalar, Axial, and Tensor Interactions of Light Nuclei from Lattice QCD
NASA Astrophysics Data System (ADS)
Chang, Emmanuel; Davoudi, Zohreh; Detmold, William; Gambhir, Arjun S.; Orginos, Kostas; Savage, Martin J.; Shanahan, Phiala E.; Wagman, Michael L.; Winter, Frank; Nplqcd Collaboration
2018-04-01
Complete flavor decompositions of the matrix elements of the scalar, axial, and tensor currents in the proton, deuteron, diproton, and 3He at SU(3)-symmetric values of the quark masses corresponding to a pion mass mπ ∼ 806 MeV are determined using lattice quantum chromodynamics. At the physical quark masses, the scalar interactions constrain mean-field models of nuclei and the low-energy interactions of nuclei with potential dark matter candidates. The axial and tensor interactions of nuclei constrain their spin content, integrated transversity, and the quark contributions to their electric dipole moments. External fields are used to directly access the quark-line connected matrix elements of quark bilinear operators, and a combination of stochastic estimation techniques is used to determine the disconnected sea-quark contributions. The calculated matrix elements differ from, and are typically smaller than, naive single-nucleon estimates. Given the particularly large, O(10%), size of nuclear effects in the scalar matrix elements, contributions from correlated multinucleon effects should be quantified in the analysis of dark matter direct-detection experiments using nuclear targets.
CCTOP: a Consensus Constrained TOPology prediction web server.
Dobson, László; Reményi, István; Tusnády, Gábor E
2015-07-01
The Consensus Constrained TOPology prediction (CCTOP; http://cctop.enzim.ttk.mta.hu) server is a web-based application providing transmembrane topology prediction. In addition to utilizing 10 different state-of-the-art topology prediction methods, the CCTOP server incorporates topology information from existing experimental and computational sources available in the PDBTM, TOPDB and TOPDOM databases using the probabilistic framework of a hidden Markov model. The server provides the option to precede the topology prediction with signal peptide prediction and transmembrane-globular protein discrimination. The initial result can be recalculated by (de)selecting any of the prediction methods or mapped experiments or by adding user-specified constraints. CCTOP showed superior performance to existing approaches. A reliability score is also calculated for each prediction, and it correlates with the accuracy of the per-protein topology prediction. The prediction results and the collected experimental information are visualized on the CCTOP home page and can be downloaded in XML format. Programmatic access to the CCTOP server is also available, and an example of a client-side script is provided.
Precision probes of QCD at high energies
NASA Astrophysics Data System (ADS)
Alioli, Simone; Farina, Marco; Pappadopulo, Duccio; Ruderman, Joshua T.
2017-07-01
New physics that is too heavy to be produced directly can leave measurable imprints on the tails of kinematic distributions at the LHC. We use energetic QCD processes to perform novel measurements of the Standard Model (SM) Effective Field Theory. We show that the dijet invariant mass spectrum, and the inclusive jet transverse momentum spectrum, are sensitive to a dimension 6 operator that modifies the gluon propagator at high energies. The dominant effect is constructive or destructive interference with SM jet production. We compare differential next-to-leading order predictions from POWHEG to public 7 TeV jet data, including scale, PDF, and experimental uncertainties and their respective correlations. We constrain a New Physics (NP) scale of 3.5 TeV with current data. We project the reach of future 13 and 100 TeV measurements, which we estimate to be sensitive to NP scales of 8 and 60 TeV, respectively. As an application, we apply our bounds to constrain heavy vector octet colorons that couple to the QCD current. We project that effective operators will surpass bump hunts, in terms of coloron mass reach, even for sequential couplings.
Constraining particle dark matter using local galaxy distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ando, Shin’ichiro; Ishiwata, Koji
It has long been discussed that cosmic rays may contain signals of dark matter. In the last couple of years an anomaly in cosmic-ray positrons has drawn a lot of attention, and recently an excess in cosmic-ray antiprotons has been reported by the AMS-02 collaboration. Both excesses may point towards decaying or annihilating dark matter with a mass of around 1–10 TeV. In this article we study the gamma rays from dark matter and the constraints from cross correlations with the distribution of galaxies, particularly in a local volume. We find that gamma rays due to the inverse-Compton process have large intensity, and hence give stringent constraints on dark matter scenarios in the TeV scale mass regime. Taking into account recent developments in modeling astrophysical gamma-ray sources, as well as a comprehensive set of possible final state products of dark matter decay or annihilation, we show that the parameter regions of decaying dark matter that are suggested to explain the excesses are excluded. We also discuss the constraints on annihilation scenarios.
NASA Astrophysics Data System (ADS)
Volk, Brent L.; Lagoudas, Dimitris C.; Maitland, Duncan J.
2011-09-01
In this work, tensile tests and one-dimensional constitutive modeling were performed on a high recovery force polyurethane shape memory polymer that is being considered for biomedical applications. The tensile tests investigated the free recovery (zero load) response as well as the constrained displacement recovery (stress recovery) response at extension values up to 25%, and two consecutive cycles were performed during each test. The material was observed to recover 100% of the applied deformation when heated at zero load in the second thermomechanical cycle, and a stress recovery of 1.5-4.2 MPa was observed for the constrained displacement recovery experiments. After the experiments were performed, the Chen and Lagoudas model was used to simulate and predict the experimental results. The material properties used in the constitutive model—namely the coefficients of thermal expansion, shear moduli, and frozen volume fraction—were calibrated from a single 10% extension free recovery experiment. The model was then used to predict the material response for the remaining free recovery and constrained displacement recovery experiments. The model predictions match well with the experimental data.
Method and system to estimate variables in an integrated gasification combined cycle (IGCC) plant
Kumar, Aditya; Shi, Ruijie; Dokucu, Mustafa
2013-09-17
System and method to estimate variables in an integrated gasification combined cycle (IGCC) plant are provided. The system includes a sensor suite to measure respective plant input and output variables. An extended Kalman filter (EKF) receives sensed plant input variables and includes a dynamic model to generate a plurality of plant state estimates and a covariance matrix for the state estimates. A preemptive-constraining processor is configured to preemptively constrain the state estimates and covariance matrix to be free of constraint violations. A measurement-correction processor may be configured to correct constrained state estimates and a constrained covariance matrix based on processing of sensed plant output variables. The measurement-correction processor is coupled to update the dynamic model with corrected state estimates and a corrected covariance matrix. The updated dynamic model may be configured to estimate values for at least one plant variable not originally sensed by the sensor suite.
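A minimal sketch of the preemptive-constraining idea, assuming simple box bounds on the state; this illustrates clipping a state estimate and its covariance, not the patented IGCC implementation.

```python
import numpy as np

def constrain_state(x, P, lower, upper):
    """Project a Kalman state estimate onto [lower, upper] and zero the
    covariance rows/columns of the clipped components, keeping a small
    diagonal variance so the filter can later move off the bound."""
    x_c = np.clip(x, lower, upper)
    clipped = x_c != x
    P_c = P.copy()
    P_c[clipped, :] = 0.0
    P_c[:, clipped] = 0.0
    P_c[clipped, clipped] = 1e-6
    return x_c, P_c

x = np.array([1.05, -0.20, 3.40])        # e.g. a mass fraction out of range
P = np.diag([0.01, 0.05, 0.20])
x_c, P_c = constrain_state(x, P,
                           lower=np.array([0.0, 0.0, 0.0]),
                           upper=np.array([1.0, 1.0, 10.0]))
print(x_c)
```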
Computational substrates of norms and their violations during social exchange.
Xiang, Ting; Lohrenz, Terry; Montague, P Read
2013-01-16
Social norms in humans constrain individual behaviors to establish shared expectations within a social group. Previous work has probed social norm violations and the feelings that such violations engender; however, a computational rendering of the underlying neural and emotional responses has been lacking. We probed norm violations using a two-party, repeated fairness game (ultimatum game) where proposers offer a split of a monetary resource to a responder who either accepts or rejects the offer. Using a norm-training paradigm where subject groups are preadapted to either high or low offers, we demonstrate that unpredictable shifts in expected offers create a difference in rejection rates exhibited by the two responder groups for otherwise identical offers. We constructed an ideal observer model that identified neural correlates of norm prediction errors in the ventral striatum and anterior insula, regions that also showed strong responses to variance-prediction errors generated by the same model. Subjective feelings about offers correlated with these norm prediction errors, and the two signals displayed overlapping, but not identical, neural correlates in striatum, insula, and medial orbitofrontal cortex. These results provide evidence for the hypothesis that responses in anterior insula can encode information about social norm violations that correlate with changes in overt behavior (changes in rejection rates). Together, these results demonstrate that the brain regions involved in reward prediction and risk prediction are also recruited in signaling social norm violations.
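In the spirit of the ideal observer described above (though not the authors' exact model), a delta-rule tracker of the offer norm yields norm and variance prediction errors:

```python
import numpy as np

def norm_prediction_errors(offers, mu0=5.0, var0=4.0, lr=0.2):
    """Track a running norm (expected offer) and its variance; return the
    norm and variance prediction errors for each observed offer."""
    mu, var = mu0, var0
    npe, vpe = [], []
    for o in offers:
        npe.append(o - mu)                  # norm prediction error
        vpe.append((o - mu) ** 2 - var)     # variance prediction error
        mu += lr * npe[-1]
        var += lr * vpe[-1]
    return np.array(npe), np.array(vpe)

offers = np.array([8, 7, 8, 9, 3, 2, 3, 2])  # high-offer norm, then a shift
npe, vpe = norm_prediction_errors(offers)
print(npe.round(2))
```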
Percolation of spatially constrained Erdős-Rényi networks with degree correlations.
Schmeltzer, C; Soriano, J; Sokolov, I M; Rüdiger, S
2014-01-01
Motivated by experiments on activity in neuronal cultures [J. Soriano, M. Rodríguez Martínez, T. Tlusty, and E. Moses, Proc. Natl. Acad. Sci. USA 105, 13758 (2008)], we investigate the percolation transition and critical exponents of spatially embedded Erdős-Rényi networks with degree correlations. In our model networks, nodes are randomly distributed in a two-dimensional spatial domain, and the connection probability depends on the Euclidean link length by a power law as well as on the degrees of the linked nodes. Generally, spatial constraints lead to higher percolation thresholds in the sense that more links are needed to achieve global connectivity. However, degree correlations favor or do not favor percolation depending on the connectivity rules. We employ two construction methods to introduce degree correlations. In the first, nodes stay homogeneously distributed and are connected via a distance- and degree-dependent probability. We observe that assortativity in the resulting network leads to a decrease of the percolation threshold. In the second construction method, nodes are first spatially segregated depending on their degree and afterwards connected with a distance-dependent probability. In this segregated model, we find a threshold increase that accompanies the rising assortativity. Additionally, when the network is constructed in a disassortative way, we observe that this property has little effect on the percolation transition.
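A toy version of the first construction method, with a distance-dependent power-law connection probability and a giant-component measurement; the degree-dependent modulation of the link probability that the paper adds is omitted here.

```python
import numpy as np

def spatial_er_giant_component(n=2000, alpha=3.0, c=0.02, seed=0):
    """Nodes uniform in the unit square; link probability ~ c * r^(-alpha)
    in Euclidean distance r (capped at 1). Returns the fraction of nodes
    in the largest connected component."""
    rng = np.random.default_rng(seed)
    pos = rng.random((n, 2))
    d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    p = np.minimum(c * d ** (-alpha), 1.0)
    adj = np.triu(rng.random((n, n)) < p, 1)
    adj = adj | adj.T
    # giant component by depth-first search
    seen = np.zeros(n, bool)
    best = 0
    for s in range(n):
        if seen[s]:
            continue
        stack, comp = [s], 0
        seen[s] = True
        while stack:
            u = stack.pop()
            comp += 1
            nbrs = np.where(adj[u] & ~seen)[0]
            seen[nbrs] = True
            stack.extend(nbrs.tolist())
        best = max(best, comp)
    return best / n

print(spatial_er_giant_component())
```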
Yeast 5 – an expanded reconstruction of the Saccharomyces cerevisiae metabolic network
2012-01-01
Background. Efforts to improve the computational reconstruction of the Saccharomyces cerevisiae biochemical reaction network and to refine the stoichiometrically constrained metabolic models that can be derived from such a reconstruction have continued since the first stoichiometrically constrained yeast genome scale metabolic model was published in 2003. Continuing this ongoing process, we have constructed an update to the Yeast Consensus Reconstruction, Yeast 5. The Yeast Consensus Reconstruction is a product of efforts to forge a community-based reconstruction emphasizing standards compliance and biochemical accuracy via evidence-based selection of reactions. It draws upon models published by a variety of independent research groups as well as information obtained from biochemical databases and primary literature. Results. Yeast 5 refines the biochemical reactions included in the reconstruction, particularly reactions involved in sphingolipid metabolism; updates gene-reaction annotations; and emphasizes the distinction between reconstruction and stoichiometrically constrained model. Although it was not a primary goal, this update also improves the accuracy of model prediction of viability and auxotrophy phenotypes and increases the number of epistatic interactions. This update maintains an emphasis on standards compliance, unambiguous metabolite naming, and computer-readable annotations available through a structured document format. Additionally, we have developed MATLAB scripts to evaluate the model’s predictive accuracy and to demonstrate basic model applications such as simulating aerobic and anaerobic growth. These scripts, which provide an independent tool for evaluating the performance of various stoichiometrically constrained yeast metabolic models using flux balance analysis, are included as Additional files 1, 2 and 3. Conclusions. Yeast 5 expands and refines the computational reconstruction of yeast metabolism and improves the predictive accuracy of a stoichiometrically constrained yeast metabolic model. It differs from previous reconstructions and models by emphasizing the distinction between the yeast metabolic reconstruction and the stoichiometrically constrained model, and makes both available as Additional file 4 and Additional file 5 and at http://yeast.sf.net/ as separate systems biology markup language (SBML) files. Through this separation, we intend to make the modeling process more accessible, explicit, transparent, and reproducible. PMID:22663945
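The core computation such a stoichiometrically constrained model enables, flux balance analysis, reduces to a linear program; a toy network is sketched below (the Yeast 5 evaluation scripts themselves are MATLAB and are not reproduced).

```python
import numpy as np
from scipy.optimize import linprog

# Flux balance analysis on a toy 2-metabolite, 4-reaction network.
S = np.array([[1, -1, -1,  0],    # metabolite A: made by v0, used by v1, v2
              [0,  1,  0, -1]])   # metabolite B: made by v1, used by v3
c = np.array([0, 0, 0, -1.0])     # maximize v3 (growth) -> minimize -v3
bounds = [(0, 10)] * 4            # flux capacity constraints

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x)   # optimal steady-state flux distribution
```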
Constrained Analysis of Fluorescence Anisotropy Decay: Application to Experimental Protein Dynamics
Feinstein, Efraim; Deikus, Gintaras; Rusinova, Elena; Rachofsky, Edward L.; Ross, J. B. Alexander; Laws, William R.
2003-01-01
Hydrodynamic properties as well as structural dynamics of proteins can be investigated by the well-established experimental method of fluorescence anisotropy decay. Successful use of this method depends on determination of the correct kinetic model, the extent of cross-correlation between parameters in the fitting function, and differences between the timescales of the depolarizing motions and the fluorophore's fluorescence lifetime. We have tested the utility of an independently measured steady-state anisotropy value as a constraint during data analysis to reduce parameter cross correlation and to increase the timescales over which anisotropy decay parameters can be recovered accurately for two calcium-binding proteins. Mutant rat F102W parvalbumin was used as a model system because its single tryptophan residue exhibits monoexponential fluorescence intensity and anisotropy decay kinetics. Cod parvalbumin, a protein with a single tryptophan residue that exhibits multiexponential fluorescence decay kinetics, was also examined as a more complex model. Anisotropy decays were measured for both proteins as a function of solution viscosity to vary hydrodynamic parameters. The use of the steady-state anisotropy as a constraint significantly improved the precision and accuracy of recovered parameters for both proteins, particularly for viscosities at which the protein's rotational correlation time was much longer than the fluorescence lifetime. Thus, basic hydrodynamic properties of larger biomolecules can now be determined with more precision and accuracy by fluorescence anisotropy decay. PMID:12524313
Do gamma-ray burst sources repeat?
NASA Technical Reports Server (NTRS)
Meegan, C. A.; Hartmann, D. H.; Brainerd, J. J.; Briggs, M.; Paciesas, W. S.; Pendleton, G.; Kouveliotou, C.; Fishman, G.; Blumenthal, G.; Brock, M.
1994-01-01
The demonstration of repeated gamma-ray bursts from an individual source would severely constrain burst source models. Recent reports of evidence for repetition in the first BATSE burst catalog have generated renewed interest in this issue. Here, we analyze the angular distribution of 585 bursts of the second BATSE catalog (Meegan et al. 1994). We search for evidence of burst recurrence using the nearest and farthest neighbor statistic and the two-point angular correlation function. We find the data to be consistent with the hypothesis that burst sources do not repeat; however, a repeater fraction of up to about 20% of the bursts cannot be excluded.
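A sketch of the two-point angular correlation estimator used in such repetition tests, with an isotropic random catalogue as the reference; the estimator and binning here are simplified assumptions.

```python
import numpy as np

def pair_separations(ra, dec):
    """Great-circle separations (radians) for all unique pairs."""
    v = np.stack([np.cos(dec) * np.cos(ra),
                  np.cos(dec) * np.sin(ra),
                  np.sin(dec)], axis=1)
    cosang = np.clip(v @ v.T, -1.0, 1.0)
    return np.arccos(cosang)[np.triu_indices(len(ra), 1)]

def two_point_angular(ra, dec, bins, n_rand=2000, seed=0):
    """w(theta) = DD/RR - 1; excess at small theta would hint at repeaters."""
    rng = np.random.default_rng(seed)
    rra = rng.uniform(0.0, 2.0 * np.pi, n_rand)       # isotropic catalogue
    rdec = np.arcsin(rng.uniform(-1.0, 1.0, n_rand))
    dd, _ = np.histogram(pair_separations(ra, dec), bins=bins, density=True)
    rr, _ = np.histogram(pair_separations(rra, rdec), bins=bins, density=True)
    return dd / rr - 1.0

# 585 isotropic "bursts" should give w(theta) consistent with zero
rng = np.random.default_rng(1)
ra = rng.uniform(0.0, 2.0 * np.pi, 585)
dec = np.arcsin(rng.uniform(-1.0, 1.0, 585))
print(two_point_angular(ra, dec, bins=np.linspace(0.05, np.pi, 15)).round(2))
```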
A general analysis of Wtb anomalous couplings
NASA Astrophysics Data System (ADS)
Cao, Qing-Hong; Yan, Bin; Yu, Jiang-Hao; Zhang, Chen
2017-06-01
We investigate new physics effects on the Wtb effective couplings in a model-independent framework. The new physics effects can be parametrized by four independent couplings. We further introduce a set of parameters x0, xm, xp and x5 which exhibit a linear relation to the single top production cross sections. Using recent data for the t-channel single top production cross section σt, the tW associated production cross section σtW, the s-channel single top production cross section σs, and the W-helicity fractions F0, FL and FR collected at the 8 TeV LHC and the Tevatron, we perform a global fit to impose constraints on the top quark effective couplings. Our global fitting results show that the top quark effective couplings are strongly correlated. We show that (i) improving the measurements of σt and σtW is important in constraining the coupling correlations; (ii) one pair of couplings is anti-correlated and sensitive to all four experiments; (iii) another pair is anti-correlated and sensitive to the F0 and FL measurements; (iv) the correlation in a further pair is sensitive to the precision of the σt, σtW and F0 measurements. The effective Wtb couplings are studied in three kinds of new physics models: the G(221) = SU(2)1 ⊗ SU(2)2 ⊗ U(1)X models, the vector-like quark models, and the Littlest Higgs model with and without T-parity. We show that the Wtb couplings in the left-right model and the un-unified model are sensitive to the ratio of gauge couplings when the new heavy gauge boson's mass (MW′) is less than several hundred GeV, but the constraint is loose if MW′ > 1 TeV. Furthermore, the Wtb couplings in vector-like quark models and the Littlest Higgs models are sensitive to the mixing angles between the new heavy particles and the SM particles. Supported by the National Science Foundation of China (11275009, 11675002, 11635001), the National Science Foundation (PHY-1315983, PHY-1316033) and DOE (DE-SC0011095)
Bogart, Eli; Myers, Christopher R.
2016-01-01
C4 plants, such as maize, concentrate carbon dioxide in a specialized compartment surrounding the veins of their leaves to improve the efficiency of carbon dioxide assimilation. Nonlinear relationships between carbon dioxide and oxygen levels and reaction rates are key to their physiology but cannot be handled with standard techniques of constraint-based metabolic modeling. We demonstrate that incorporating these relationships as constraints on reaction rates and solving the resulting nonlinear optimization problem yields realistic predictions of the response of C4 systems to environmental and biochemical perturbations. Using a new genome-scale reconstruction of maize metabolism, we build an 18000-reaction, nonlinearly constrained model describing mesophyll and bundle sheath cells in 15 segments of the developing maize leaf, interacting via metabolite exchange, and use RNA-seq and enzyme activity measurements to predict spatial variation in metabolic state by a novel method that optimizes correlation between fluxes and expression data. Though such correlations are known to be weak in general, we suggest that developmental gradients may be particularly suited to the inference of metabolic fluxes from expression data, and we demonstrate that our method predicts fluxes that achieve high correlation with the data, successfully capture the experimentally observed base-to-tip transition between carbon-importing tissue and carbon-exporting tissue, and include a nonzero growth rate, in contrast to prior results from similar methods in other systems. PMID:26990967
Molina-Romero, Miguel; Gómez, Pedro A; Sperl, Jonathan I; Czisch, Michael; Sämann, Philipp G; Jones, Derek K; Menzel, Marion I; Menze, Bjoern H
2018-03-23
The compartmental nature of brain tissue microstructure is typically studied by diffusion MRI, MR relaxometry or their correlation. Diffusion MRI relies on signal representations or biophysical models, while MR relaxometry and correlation studies are based on regularized inverse Laplace transforms (ILTs). Here we introduce a general framework for characterizing microstructure that does not depend on diffusion modeling and replaces ill-posed ILTs with blind source separation (BSS). This framework yields proton density, relaxation times, volume fractions, and signal disentanglement, allowing for separation of the free-water component. Diffusion experiments repeated for several different echo times contain entangled diffusion and relaxation compartmental information. These can be disentangled by BSS using a physically constrained nonnegative matrix factorization. Computer simulations and phantom studies, together with repeatability and reproducibility experiments, demonstrated that BSS is capable of estimating proton density, compartmental volume fractions and transversal relaxations. In vivo results proved its potential to correct for free-water contamination and to estimate tissue parameters. Formulation of the diffusion-relaxation dependence as a BSS problem introduces a new framework for studying microstructure compartmentalization, and a novel tool for free-water elimination.
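A toy of the source-separation idea using an off-the-shelf non-negative matrix factorization on synthetic two-compartment decays; the paper's physically constrained factorization across echo times is not reproduced.

```python
import numpy as np
from sklearn.decomposition import NMF

# Synthetic diffusion signals: two compartments (tissue-like, free-water-like)
# mixed per voxel with random volume fractions.
rng = np.random.default_rng(0)
b = np.linspace(0, 3000, 30)                    # b-values (s/mm2)
sources = np.stack([np.exp(-b * 0.7e-3),        # tissue-like decay
                    np.exp(-b * 3.0e-3)])       # free-water-like decay
fractions = rng.dirichlet([2, 1], size=100)     # per-voxel volume fractions
X = fractions @ sources + 0.005 * rng.random((100, 30))

model = NMF(n_components=2, init='nndsvda', max_iter=500)
W = model.fit_transform(X)                      # ~ volume fractions (unscaled)
H = model.components_                           # ~ compartment signal decays
print(W.shape, H.shape)
```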
A physical model of the infrared-to-radio correlation in galaxies
NASA Technical Reports Server (NTRS)
Helou, G.; Bicay, M. D.
1993-01-01
We explore the implications of the IR-radio correlation in star-forming galaxies, using a simple physical model constrained by the constant global ratio q of IR to radio emission and by the radial falloff of this ratio in disks of galaxies. The modeling takes into account the diffusion, radiative decay, and escape of cosmic-ray electrons responsible for the synchrotron emission, and the full range of optical depths to dust-heating photons. We introduce two assumptions: that dust-heating photons and radio-emitting cosmic-ray electrons are created in constant proportion to each other as part of the star formation activity, and that gas and magnetic field are well coupled locally, expressed as B ∝ n^β, with β between 1/3 and 2/3. We conclude that disk galaxies would maintain the observed constant ratio q under these assumptions if the disk scale height h_0 and the escape scale length l_esc for cosmic-ray electrons followed a relation of the form l_esc ∝ h_0^(1/2); the IR-to-radio ratio will then depend very weakly on interstellar density, and, therefore, on magnetic field strength or mean optical depth.
Dynamic response of composite beams with induced-strain actuators
NASA Astrophysics Data System (ADS)
Chandra, Ramesh
1994-05-01
This paper presents an analytical-experimental study on the dynamic response of open-section composite beams actuated by piezoelectric devices. The analysis includes the essential features of open-section composite beam modeling, such as constrained warping and transverse shear deformation. A general plate segment of the beam, with and without a piezoelectric ply, is modeled using laminated plate theory, and the forces and displacement relations of this plate segment are then reduced to the force and displacement of the one-dimensional beam. The dynamic response of bending-torsion coupled composite beams excited by piezoelectric devices is predicted. In order to validate the analysis, Kevlar-epoxy and graphite-epoxy beams with surface-mounted piezoceramic actuators are tested for their dynamic response. The response was measured using an accelerometer. Good correlation between analysis and experiment is achieved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rutqvist, Jonny; Majer, Ernie; Oldenburg, Curt
2006-06-07
In this paper, we present progress made in a study aimed at increasing the understanding of the relative contributions of different mechanisms that may be causing the seismicity occurring at The Geysers geothermal field, California. The approach we take is to integrate: (1) coupled reservoir geomechanical numerical modeling, (2) data from recently upgraded and expanded NCPA/Calpine/LBNL seismic arrays, and (3) tens of years of archival InSAR data from monthly satellite passes. We have conducted a coupled reservoir geomechanical analysis to study potential mechanisms induced by steam production. Our simulation results corroborate field observations of the locations of induced seismicity and their correlation with steam production as reported in the literature. Seismic and InSAR data are being collected and processed for use in constraining the coupled reservoir geomechanical model.
Glassy Behavior due to Kinetic Constraints: from Topological Foam to Backgammon
NASA Astrophysics Data System (ADS)
Sherrington, David
A study is reported of a series of simple model systems with only non-interacting Hamiltonians, and hence simple equilibrium thermodynamics, but with constrained kinetics of a type initially suggested by topological considerations of foams and two-dimensional covalent glasses. It is demonstrated that macroscopic dynamical features characteristic of real glasses, such as two-time decays in energy and auto-correlation functions, arise and may be understood in terms of annihilation-diffusion concepts and theory. This recognition leads to a sequence of further models which (i) encapsulate the essence but are more readily simulated and open to easier analytic study, and (ii) allow generalization and extension to higher dimensions. Fluctuation-dissipation relations are also considered and show novel aspects. The comparison is with strong glasses.
NASA Technical Reports Server (NTRS)
Battersby, Bryn D.
2003-01-01
This paper argues that a consumer's decision on ticket class takes into account the expected likelihood of obtaining a seat in a particular class which, in turn, partially depends on an optimum "transaction cost". Taking into account the preferences of the consumer and the information that the consumer is endowed with, the consumer will select a ticket that includes its own optimal transaction cost. This motivates the inclusion of the capacity constraint as a proxy independent variable for these consumer expectations. This then forms the basis of a model of air-travel demand with specific reference to Australia. A censored likelihood function allowing for correlation in the disturbance term across k classes is introduced. The correlation in the disturbances arises as a result of the interdependence of the capacity constraints in k different ticket classes on each flight.
Constraining compensated isocurvature perturbations using the CMB
NASA Astrophysics Data System (ADS)
Smith, Tristan L.; Smith, Rhiannon; Yee, Kyle; Munoz, Julian; Grin, Daniel
2017-01-01
Compensated isocurvature perturbations (CIPs) are variations in the cosmic baryon fraction which leave the total non-relativistic matter (and radiation) density unchanged. They are predicted by models of inflation which involve more than one scalar field, such as the curvaton scenario. At linear order, they leave the CMB two-point correlation function nearly unchanged: this is why existing constraints to CIPs are so much more permissive than constraints to typical isocurvature perturbations. Recent work articulated an efficient way to calculate the second order CIP effects on the CMB two-point correlation. We have implemented this method in order to explore constraints to the CIP amplitude using current Planck temperature and polarization data. In addition, we have computed the contribution of CIPs to the CMB lensing estimator which provides us with a novel method to use CMB data to place constraints on CIPs. We find that Planck data places a constraint to the CIP amplitude which is competitive with other methods.
Y. He; Q. Zhuang; A.D. McGuire; Y. Liu; M. Chen
2013-01-01
Model-data fusion is a process in which field observations are used to constrain model parameters. How observations are used to constrain parameters has a direct impact on the carbon cycle dynamics simulated by ecosystem models. In this study, we present an evaluation of several options for the use of observations in modeling regional carbon dynamics and explore the...
NASA Astrophysics Data System (ADS)
Suparman, Yusep; Folmer, Henk; Oud, Johan H. L.
2014-01-01
Omitted variables and measurement errors in explanatory variables frequently occur in hedonic price models. Ignoring these problems leads to biased estimators. In this paper, we develop a constrained autoregression-structural equation model (ASEM) to handle both types of problems. Standard panel data models to handle omitted variables bias are based on the assumption that the omitted variables are time-invariant. ASEM allows handling of both time-varying and time-invariant omitted variables by constrained autoregression. In the case of measurement error, standard approaches require additional external information which is usually difficult to obtain. ASEM exploits the fact that panel data are repeatedly measured which allows decomposing the variance of a variable into the true variance and the variance due to measurement error. We apply ASEM to estimate a hedonic housing model for urban Indonesia. To get insight into the consequences of measurement error and omitted variables, we compare the ASEM estimates with the outcomes of (1) a standard SEM, which does not account for omitted variables, (2) a constrained autoregression model, which does not account for measurement error, and (3) a fixed effects hedonic model, which ignores measurement error and time-varying omitted variables. The differences between the ASEM estimates and the outcomes of the three alternative approaches are substantial.
Yu Wei; Michael Bevers; Erin Belval; Benjamin Bird
2015-01-01
This research developed a chance-constrained two-stage stochastic programming model to support wildfire initial attack resource acquisition and location on a planning unit for a fire season. Fire growth constraints account for the interaction between fire perimeter growth and construction to prevent overestimation of resource requirements. We used this model to examine...
Volk, Brent L; Lagoudas, Dimitris C; Maitland, Duncan J
2011-01-01
In this work, tensile tests and one-dimensional constitutive modeling are performed on a high recovery force polyurethane shape memory polymer that is being considered for biomedical applications. The tensile tests investigate the free recovery (zero load) response as well as the constrained displacement recovery (stress recovery) response at extension values up to 25%, and two consecutive cycles are performed during each test. The material is observed to recover 100% of the applied deformation when heated at zero load in the second thermomechanical cycle, and a stress recovery of 1.5 MPa to 4.2 MPa is observed for the constrained displacement recovery experiments. After performing the experiments, the Chen and Lagoudas model is used to simulate and predict the experimental results. The material properties used in the constitutive model – namely the coefficients of thermal expansion, shear moduli, and frozen volume fraction – are calibrated from a single 10% extension free recovery experiment. The model is then used to predict the material response for the remaining free recovery and constrained displacement recovery experiments. The model predictions match well with the experimental data. PMID:22003272
3D basin structure of the Santa Clara Valley constrained by ambient noise tomography
NASA Astrophysics Data System (ADS)
Cho, H.; Lee, S. J.; Rhie, J.; Kim, S.
2017-12-01
The basin structure is an important factor controlling the intensity and duration of ground shaking due to earthquakes. It is therefore important to study basin structure, both for better understanding seismic hazard and for improving earthquake preparedness. An active source seismic survey is the most appropriate method to determine basin structure in detail, but its applicability, especially in urban areas, is limited. In this study, we tested the potential of ambient noise tomography, which can be a cheaper and more easily applicable method than a traditional active source survey, to construct the velocity model of a basin. Our test region is the Santa Clara Valley, one of the major urban sedimentary basins in the United States, selected because continuous seismic recordings and well-defined velocity models are available. Continuous seismic recordings of 6 months from the short-period array of the Santa Clara Valley Seismic Experiment are cross-correlated with a 1-hour time window, and the fast marching method and the subspace method are jointly applied to construct 2-D group velocity maps between 0.2 and 4.0 Hz. A shear wave velocity model of the Santa Clara Valley is then calculated down to 5 km depth using a Bayesian inversion technique. Although our model cannot depict the detailed structures, it is roughly comparable with the velocity model of the US Geological Survey, which is constrained by active seismic surveys and field research. These results indicate that ambient noise tomography can be a replacement, at least in part, for an active seismic survey in constructing basin velocity models.
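The first processing step, stacking windowed noise cross-correlations between station pairs, can be sketched as follows; the one-bit normalization and window length are common choices assumed here, not necessarily those of this study.

```python
import numpy as np
from scipy.signal import fftconvolve

def noise_cross_correlation(tr1, tr2, fs, win_s=3600, max_lag_s=120):
    """Stack windowed one-bit noise cross-correlations for a station pair.

    The stack approximates the inter-station Green's function; the surface
    wave arrivals in it give the group velocities used in the tomography.
    """
    win, lag = int(win_s * fs), int(max_lag_s * fs)
    n_win = min(len(tr1), len(tr2)) // win
    stack = np.zeros(2 * lag + 1)
    for i in range(n_win):
        a = np.sign(tr1[i * win:(i + 1) * win])       # one-bit normalization
        b = np.sign(tr2[i * win:(i + 1) * win])
        full = fftconvolve(a, b[::-1], mode='full')   # cross-correlation
        mid = len(full) // 2                          # zero-lag index
        stack += full[mid - lag:mid + lag + 1]
    return stack / n_win

# Synthetic test: identical noise shifted by 2 s between the two stations
fs = 20.0
noise = np.random.default_rng(0).standard_normal(int(6 * 3600 * fs))
ccf = noise_cross_correlation(noise[:-40], noise[40:], fs)
print(np.argmax(ccf) - len(ccf) // 2)   # peak offset in samples
```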
ERIC Educational Resources Information Center
Merikangas, Kathleen Ries; He, Jian-ping; Burstein, Marcy; Swendsen, Joel; Avenevoli, Shelli; Case, Brady; Georgiades, Katholiki; Heaton, Leanne; Swanson, Sonja; Olfson, Mark
2011-01-01
Objective: Mental health policy for youth has been constrained by a paucity of nationally representative data concerning patterns and correlates of mental health service utilization in this segment of the population. The objectives of this investigation were to examine the rates and sociodemographic correlates of lifetime mental health service use…
NASA Astrophysics Data System (ADS)
Choi, Hyun-Deok; Liu, Hongyu; Crawford, James H.; Considine, David B.; Allen, Dale J.; Duncan, Bryan N.; Horowitz, Larry W.; Rodriguez, Jose M.; Strahan, Susan E.; Zhang, Lin; Liu, Xiong; Damon, Megan R.; Steenrod, Stephen D.
2017-07-01
We examine the capability of the Global Modeling Initiative (GMI) chemistry and transport model to reproduce global mid-tropospheric (618 hPa) ozone-carbon monoxide (O3-CO) correlations determined by the measurements from the Tropospheric Emission Spectrometer (TES) aboard NASA's Aura satellite during boreal summer (July-August). The model is driven by three meteorological data sets (finite-volume General Circulation Model (fvGCM) with sea surface temperature for 1995, Goddard Earth Observing System Data Assimilation System Version 4 (GEOS-4 DAS) for 2005, and Modern-Era Retrospective Analysis for Research and Applications (MERRA) for 2005), allowing us to examine the sensitivity of model O3-CO correlations to input meteorological data. Model simulations of radionuclide tracers (222Rn, 210Pb, and 7Be) are used to illustrate the differences in transport-related processes among the meteorological data sets. Simulated O3 values are evaluated with climatological profiles from ozonesonde measurements and satellite tropospheric O3 columns. Despite the fact that the three simulations show significantly different global and regional distributions of O3 and CO concentrations, they show similar patterns of O3-CO correlations on a global scale. All model simulations sampled along the TES orbit track capture the observed positive O3-CO correlations in the Northern Hemisphere midlatitude continental outflow and the Southern Hemisphere subtropics. While all simulations show strong negative correlations over the Tibetan Plateau, northern Africa, the subtropical eastern North Pacific, and the Caribbean, TES O3 and CO concentrations at 618 hPa only show weak negative correlations over much narrower areas (i.e., the Tibetan Plateau and northern Africa). Discrepancies in regional O3-CO correlation patterns in the three simulations may be attributed to differences in convective transport, stratospheric influence, and subsidence, among other processes. To understand how various emissions drive global O3-CO correlation patterns, we examine the sensitivity of GMI/MERRA model-calculated O3 and CO concentrations and their correlations to emission types (fossil fuel, biomass burning, biogenic, and lightning NOx emissions). Fossil fuel and biomass burning emissions are mainly responsible for the strong positive O3-CO correlations over continental outflow regions in both hemispheres. Biogenic emissions have a relatively smaller impact on O3-CO correlations than other emissions but are largely responsible for the negative correlations over the tropical eastern Pacific, reflecting the fact that O3 is consumed and CO generated during the atmospheric oxidation process of isoprene under low-NOx conditions. We find that lightning NOx emissions degrade both positive correlations at mid- and high latitudes and negative correlations in the tropics because ozone production downwind of lightning NOx emissions is not directly related to the emission and transport of CO. Our study concludes that O3-CO correlations may be used effectively to constrain the sources of regional tropospheric O3 in global 3-D models, especially for those regions where convective transport of pollution plays an important role.
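The diagnostic itself is simple to state: a Pearson correlation between O3 and CO time series at every grid cell. The sketch below applies it to a toy 618-hPa field; the grid, coupling coefficient, and noise are synthetic, not TES or GMI output.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 618-hPa fields: time series of O3 and CO on a lat-lon grid.
nt, nlat, nlon = 62, 18, 36              # ~two months of daily fields
co = rng.normal(80, 10, size=(nt, nlat, nlon))
o3 = 0.4 * co + rng.normal(0, 8, size=(nt, nlat, nlon))   # positive coupling

# Pearson r at every grid cell, computed from anomalies in one pass.
o3a = o3 - o3.mean(axis=0)
coa = co - co.mean(axis=0)
r = (o3a * coa).sum(axis=0) / np.sqrt((o3a**2).sum(axis=0) *
                                      (coa**2).sum(axis=0))
print(f"correlation map shape {r.shape}, mean r = {r.mean():.2f}")  # ~0.45
```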
Atomic displacements in the charge ice pyrochlore Bi2Ti2O6O' studied by neutron total scattering
NASA Astrophysics Data System (ADS)
Shoemaker, Daniel P.; Seshadri, Ram; Hector, Andrew L.; Llobet, Anna; Proffen, Thomas; Fennie, Craig J.
2010-04-01
The oxide pyrochlore Bi2Ti2O6O' is known to be associated with large displacements of Bi and O' atoms from their ideal crystallographic positions. Neutron total scattering, analyzed in both reciprocal and real space, is employed here to understand the nature of these displacements. Rietveld analysis and maximum entropy methods are used to produce an average picture of the structural nonideality. Local structure is modeled via large-box reverse Monte Carlo simulations constrained simultaneously by the Bragg profile and real-space pair distribution function. Direct visualization and statistical analyses of these models show the precise nature of the static Bi and O' displacements. Correlations between neighboring Bi displacements are analyzed using coordinates from the large-box simulations. The framework of continuous symmetry measures has been applied to distributions of O'Bi4 tetrahedra to examine deviations from ideality. Bi displacements from ideal positions appear correlated over local length scales. The results are consistent with the idea that these nonmagnetic lone-pair containing pyrochlore compounds can be regarded as highly structurally frustrated systems.
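The large-box modeling loop can be caricatured as follows: propose a small atomic move, recompute the model pair-distance histogram, and accept or reject on the data misfit. This 1-D toy is constrained only by a pair-distance histogram with an invented tolerance, not by the simultaneous Bragg-profile and PDF constraints used in the study.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy reverse Monte Carlo: move atoms in a periodic 1-D box so the model
# pair distances match a "target" histogram (stand-in for experimental data).
n, box, bins = 64, 100.0, np.linspace(0.0, 50.0, 51)

def pdf_hist(pos):
    d = np.abs(pos[:, None] - pos[None, :])[np.triu_indices(n, 1)]
    d = np.minimum(d, box - d)                  # periodic boundary
    return np.histogram(d, bins=bins)[0].astype(float)

target = pdf_hist(rng.uniform(0, box, n))       # pretend this is the data
pos = rng.uniform(0, box, n)
chi2 = ((pdf_hist(pos) - target) ** 2).sum()

for step in range(20_000):
    trial = pos.copy()
    trial[rng.integers(n)] += rng.normal(0, 1.0)
    trial %= box
    chi2_new = ((pdf_hist(trial) - target) ** 2).sum()
    # Metropolis acceptance on the data misfit (the 50.0 sets the tolerance)
    if chi2_new < chi2 or rng.random() < np.exp((chi2 - chi2_new) / 50.0):
        pos, chi2 = trial, chi2_new

print(f"final chi^2 against target histogram: {chi2:.1f}")
```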
NASA Astrophysics Data System (ADS)
Lauterbach, S.; Fina, M.; Wagner, W.
2018-04-01
Since structural engineering requires highly developed and optimized structures, the thickness dependency is one of the most controversially debated topics. This paper deals with the stability analysis of lightweight thin structures combined with arbitrary geometrical imperfections. Generally known design guidelines only consider imperfections for simple shapes and loading, whereas for complex structures the lower-bound design philosophy still holds. Herein, uncertainties are considered with an empirical knockdown factor representing a lower bound of existing measurements. To fully understand and predict expected bearable loads, numerical investigations that include geometrical imperfections are essential. These are implemented in a stand-alone program code with a stochastic approach to compute random fields, which are applied as geometric imperfections to the nodes of the finite element mesh of selected structural examples. The stochastic approach uses the Karhunen-Loève expansion for the random field discretization. In this approach, the so-called correlation length l_c controls the random field in a powerful way: this parameter has a major influence on the buckling shape and also on the stability load. First, the impact of the correlation length is studied for simple structures. Second, since most structures in engineering devices are more complex and combined structures, these are discussed intensively with a focus on constrained random fields, e.g. at flange-web intersections. Specific constraints for those random fields are pointed out with regard to the finite element model. Further, geometrical imperfections vanish where the structure is supported.
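A minimal sketch of the Karhunen-Loève construction on a 1-D nodal grid follows, assuming a squared-exponential covariance C(x, x') = sigma^2 exp(-(x - x')^2 / l_c^2); the grid, sigma, and l_c are placeholders, and the real implementation acts on the FE mesh nodes with the paper's covariance model.

```python
import numpy as np

rng = np.random.default_rng(4)

# 1-D random field of geometric imperfections along a meshed edge, via a
# truncated Karhunen-Loeve expansion of a squared-exponential covariance.
x = np.linspace(0.0, 1.0, 200)          # normalized nodal coordinates
l_c = 0.2                               # correlation length (the key knob)
sigma = 1.0                             # imperfection amplitude (std dev)

C = sigma**2 * np.exp(-((x[:, None] - x[None, :]) / l_c) ** 2)
evals, evecs = np.linalg.eigh(C)        # spectral decomposition of C
idx = np.argsort(evals)[::-1]           # largest eigenvalues first
evals, evecs = np.clip(evals[idx], 0, None), evecs[:, idx]

m = np.searchsorted(np.cumsum(evals) / evals.sum(), 0.99) + 1  # 99% energy
xi = rng.standard_normal(m)             # independent standard normals
field = evecs[:, :m] @ (np.sqrt(evals[:m]) * xi)

print(f"{m} KL modes capture 99% of the variance at l_c = {l_c}")
# `field` would be added to the FE nodal coordinates as an imperfection.
```

Shrinking l_c increases the number of modes needed to reach a given variance fraction, which is one way the correlation length controls both the imperfection shape and, downstream, the buckling load.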
Eye Gaze Correlates of Motor Impairment in VR Observation of Motor Actions.
Alves, J; Vourvopoulos, A; Bernardino, A; Bermúdez I Badia, S
2016-01-01
This article is part of the Focus Theme of Methods of Information in Medicine on "Methodologies, Models and Algorithms for Patients Rehabilitation". The objective was to identify eye gaze correlates of motor impairment in a virtual reality motor observation task in a study with healthy participants and stroke patients. Participants consisted of a group of healthy subjects (N = 20) and a group of stroke survivors (N = 10). Both groups were required to observe a simple reach-and-grab and place-and-release task in a virtual environment. Additionally, healthy subjects were required to observe the task in a normal condition and in a constrained movement condition. Eye movements were recorded during the observation task for later analysis. For healthy participants, results showed differences in gaze metrics when comparing the normal and arm-constrained conditions; differences in gaze metrics were also found when comparing the dominant and non-dominant arm for saccades and smooth pursuit events. For stroke patients, results showed longer smooth pursuit segments in action observation when observing the paretic arm, providing evidence that the affected circuitry may be activated for eye gaze control during observation of the simulated motor action. This study suggests that neural motor circuits are involved, at multiple levels, in the observation of motor actions displayed in a virtual reality environment. Thus, eye tracking combined with action observation tasks in a virtual reality display can be used to monitor motor deficits derived from stroke, and consequently can also be used for rehabilitation of stroke patients.
A survey of volcano deformation in the central Andes using InSAR: Evidence for deep, slow inflation
NASA Astrophysics Data System (ADS)
Pritchard, M. E.; Simons, M.
2001-12-01
We use interferometric synthetic aperture radar (InSAR) to survey about 50 volcanoes of the central Andes (15-27°S) for deformation during the 1992-2000 time interval. Because of the remote location of these volcanoes, the activity of most is poorly constrained. Using the ERS-1/2 C-band radars (5.6 cm), we observe good interferometric correlation south of about 21°S, but poor correlation north of that latitude, especially in southern Peru. This variation is presumably related to regional climate variations. Our survey reveals broad (tens of km), roughly axisymmetric deformation at two volcanic centers with no previously documented deformation. At Uturuncu volcano, in southwestern Bolivia, the deformation rate can be constrained with radar data from several satellite tracks and is about 1 cm/year between 1992 and 2000. We find a second source of volcanic deformation located between Lastarria and Cordon del Azufre volcanoes near the Chile/Argentina border. There is less radar data to constrain the deformation in this area, but the rate is also about 1 cm/yr between 1996 and 2000. While the spatial character of the deformation field appears to be affected by atmosphere at both locations, we do not think that the entire signal is atmospheric, because the signal is observed in several interferograms and nearby edifices do not show similar patterns. The deformation signal appears to be time-variable, although it is difficult to determine whether this is due to real variations in the deformation source or atmospheric effects. We model the deformation with both a uniform point source of inflation and a tri-axial point-source ellipsoid, and compare both elastic half-space and layered half-space models. We also explore the effects of local topography upon the deformation field using the method of Williams and Wadge (1998). We invert for source parameters using the global search Neighborhood Algorithm of Sambridge (1998). Preliminary results indicate that the sources at both Uturuncu and Lastarria/Cordon del Azufre volcanoes are model-dependent, but are generally greater than 10 km deep. This depth suggests a potential relationship between the deformation source at Uturuncu and the large Altiplano-Puna Magmatic Complex that has been imaged seismically (e.g. Chmielowski et al., 1999), although the deformation at Lastarria/Cordon del Azufre lies outside the region of lowest seismic velocities (Yuan et al., 2000).
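For intuition about why a broad, roughly axisymmetric pattern implies a deep source, the simplest member of the model family above is a point pressure (Mogi) source in an elastic half-space: the half-width of the uplift bowl scales with source depth. The depth and volume change below are illustrative placeholders, not the inverted values.

```python
import numpy as np

def mogi_uz(x, y, depth, dV, nu=0.25):
    """Vertical surface displacement of a point pressure (Mogi) source in
    an elastic half-space; dV is the source volume change."""
    r2 = x**2 + y**2
    return (1 - nu) / np.pi * dV * depth / (r2 + depth**2) ** 1.5

x = np.linspace(-40e3, 40e3, 401)            # profile through the source [m]
uz = mogi_uz(x, 0.0, depth=20e3, dV=2e7)     # 20 km deep, 0.02 km^3/yr
half = np.ptp(x[uz > uz.max() / 2]) / 1e3    # full width at half maximum
print(f"peak uplift: {uz.max() * 100:.2f} cm/yr, half-max width ~ {half:.0f} km")
```

A ~1 cm/yr peak spread over a bowl tens of kilometers wide, as printed here, is the kind of signature that pushes the inferred source below 10 km depth.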
Type II Supernova Energetics and Comparison of Light Curves to Shock-cooling Models
NASA Astrophysics Data System (ADS)
Rubin, Adam; Gal-Yam, Avishay; De Cia, Annalisa; Horesh, Assaf; Khazov, Danny; Ofek, Eran O.; Kulkarni, S. R.; Arcavi, Iair; Manulis, Ilan; Yaron, Ofer; Vreeswijk, Paul; Kasliwal, Mansi M.; Ben-Ami, Sagi; Perley, Daniel A.; Cao, Yi; Cenko, S. Bradley; Rebbapragada, Umaa D.; Woźniak, P. R.; Filippenko, Alexei V.; Clubb, K. I.; Nugent, Peter E.; Pan, Y.-C.; Badenes, C.; Howell, D. Andrew; Valenti, Stefano; Sand, David; Sollerman, J.; Johansson, Joel; Leonard, Douglas C.; Horst, J. Chuck; Armen, Stephen F.; Fedrow, Joseph M.; Quimby, Robert M.; Mazzali, Paulo; Pian, Elena; Sternberg, Assaf; Matheson, Thomas; Sullivan, M.; Maguire, K.; Lazarevic, Sanja
2016-03-01
During the first few days after explosion, Type II supernovae (SNe) are dominated by relatively simple physics. Theoretical predictions regarding early-time SN light curves in the ultraviolet (UV) and optical bands are thus quite robust. We present, for the first time, a sample of 57 R-band SN II light curves that are well-monitored during their rise, with >5 detections during the first 10 days after discovery, and a well-constrained time of explosion to within 1-3 days. We show that the energy per unit mass (E/M) can be deduced to roughly a factor of five by comparing early-time optical data to the 2011 model of Rabinak & Waxman, while the progenitor radius cannot be determined based on R-band data alone. We find that SN II explosion energies span a range of E/M = (0.2-20) × 10^51 erg/(10 M⊙), and have a mean energy per unit mass of ⟨E/M⟩ = 0.85 × 10^51 erg/(10 M⊙), corrected for Malmquist bias. Assuming a small spread in progenitor masses, this indicates a large intrinsic diversity in explosion energy. Moreover, E/M is positively correlated with the amount of 56Ni produced in the explosion, as predicted by some recent models of core-collapse SNe. We further present several empirical correlations. The peak magnitude is correlated with the decline rate (Δm15), the decline rate is weakly correlated with the rise time, and the rise time is not significantly correlated with the peak magnitude. Faster declining SNe are more luminous and have longer rise times. This limits the possible power sources for such events.
Type II supernova energetics and comparison of light curves to shock-cooling models
Rubin, Adam; Gal-Yam, Avishay; De Cia, Annalisa; ...
2016-03-16
During the first few days after explosion, Type II supernovae (SNe) are dominated by relatively simple physics. Theoretical predictions regarding early-time SN light curves in the ultraviolet (UV) and optical bands are thus quite robust. We present, for the first time, a sample of 57 R-band SN II light curves that are well-monitored during their rise, with >5 detections during the first 10 days after discovery, and a well-constrained time of explosion to within 1-3 days. We show that the energy per unit mass (E/M) can be deduced to roughly a factor of five by comparing early-time optical data to the 2011 model of Rabinak & Waxman, while the progenitor radius cannot be determined based on R-band data alone. We find that SN II explosion energies span a range of E/M = (0.2-20) × 10^51 erg/(10 M⊙), and have a mean energy per unit mass of ⟨E/M⟩ = 0.85 × 10^51 erg/(10 M⊙), corrected for Malmquist bias. Assuming a small spread in progenitor masses, this indicates a large intrinsic diversity in explosion energy. Moreover, E/M is positively correlated with the amount of 56Ni produced in the explosion, as predicted by some recent models of core-collapse SNe. We further present several empirical correlations. The peak magnitude is correlated with the decline rate (Δm15), the decline rate is weakly correlated with the rise time, and the rise time is not significantly correlated with the peak magnitude. Faster declining SNe are more luminous and have longer rise times. Lastly, this limits the possible power sources for such events.
Type II Supernova Energetics and Comparison of Light Curves to Shock-Cooling Models
NASA Technical Reports Server (NTRS)
Rubin, Adam; Gal-Yam, Avishay; Cia, Annalisa De; Horesh, Assaf; Khazov, Danny; Ofek, Eran O.; Kulkarni, S. R.; Arcavi, Iair; Manulis, Ilan; Cenko, S. Bradley
2016-01-01
During the first few days after explosion, Type II supernovae (SNe) are dominated by relatively simple physics. Theoretical predictions regarding early-time SN light curves in the ultraviolet (UV) and optical bands are thus quite robust. We present, for the first time, a sample of 57 R-band SN II light curves that are well-monitored during their rise, with greater than 5 detections during the first 10 days after discovery, and a well-constrained time of explosion to within 1-3 days. We show that the energy per unit mass (E/M) can be deduced to roughly a factor of five by comparing early-time optical data to the 2011 model of Rabinak & Waxman, while the progenitor radius cannot be determined based on R-band data alone. We find that SN II explosion energies span a range of E/M = (0.2-20) x 10(exp 51) erg/(10 solar masses), and have a mean energy per unit mass of E/M = 0.85 x 10(exp 51) erg/(10 solar masses), corrected for Malmquist bias. Assuming a small spread in progenitor masses, this indicates a large intrinsic diversity in explosion energy. Moreover, E/M is positively correlated with the amount of Ni-56 produced in the explosion, as predicted by some recent models of core-collapse SNe. We further present several empirical correlations. The peak magnitude is correlated with the decline rate (Delta m(sub 15)), the decline rate is weakly correlated with the rise time, and the rise time is not significantly correlated with the peak magnitude. Faster declining SNe are more luminous and have longer rise times. This limits the possible power sources for such events.
Nitrogen Species in the Post-Pinatubo Stratosphere: Model Analysis Utilizing UARS Measurements
NASA Technical Reports Server (NTRS)
Danilin, M. Y.; Rodriguez, J. M.; Hu, W.; Ko, M. K. W.; Weisenstein, D. K.; Kumer, J. B.; Mergenthaler, J. L.; Russell, J. M., III; Koike, M.; Yue, G. K.
1998-01-01
We present an analysis of the impact of heterogeneous chemistry on the partitioning of nitrogen species measured by the Upper Atmosphere Research Satellite (UARS) instruments. The UARS measurements utilized include: N2O, HNO3 and ClONO2 (Cryogenic Limb Array Etalon Spectrometer (CLAES), version 7), temperature, methane, ozone, H2O, HCl, NO and NO2 (HALogen Occultation Experiment (HALOE), version 18). The analysis is carried out for the data from January 1992 to September 1994 in the 100-1 mbar (approximately 17-47 km) altitude range and over 10 degree latitude bins from 70 deg S to 70 deg N. Temporal-spatial evolution of aerosol surface area density (SAD) is adopted according to the Stratospheric Aerosol and Gas Experiment (SAGE) II data. A diurnal steady-state photochemical box model, constrained by the temperature, ozone, H2O, CH4, aerosol SAD and columns of O2 and O3 above the point of interest, has been used as the main tool to analyze these data. Total inorganic nitrogen (NOy) is obtained by three different methods: (1) as a sum of the UARS measured NO, NO2, HNO3, and ClONO2; (2) from the N2O-NOy correlation; and (3) from the CH4-NOy correlation. To validate our current understanding of stratospheric heterogeneous chemistry for post-Pinatubo conditions, the model-calculated NOx/NOy ratios and the NO, NO2, and HNO3 profiles are compared to the UARS-derived data. In general, the UARS-constrained box model captures the main features of nitrogen species partitioning in the post-Pinatubo years. However, the model underestimates the NO2 content, particularly in the 30-7 mbar (approximately 23-32 km) range. Comparisons of the calculated temporal behavior of the partial columns of NO2 and HNO3 and ground based measurements at 45 deg S and 45 deg N are also presented. Our analysis indicates that ground-based and HALOE v.18 measurements of the NO2 vertical columns are consistent within the range of their uncertainties and are systematically higher (up to 50%) than the model results at mid-latitudes in both hemispheres. Reasonable agreement is obtained for HNO3 columns at 45 deg S suggesting some problems with nitrogen species partitioning in the model. Outstanding uncertainties are discussed.
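Method (2) can be sketched in a few lines: fit the tracer correlation in air where both species are available, then apply it where only N2O was measured. The linear form and the coefficients below are invented for illustration; the real N2O-NOy relation is curved and latitude-dependent.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy mid-stratospheric samples: NOy and N2O are compactly correlated, so
# a fit to air with both tracers gives NOy where only N2O was measured.
n2o = rng.uniform(50, 300, 400)                     # ppbv
noy = 20.0 - 0.06 * n2o + rng.normal(0, 0.4, 400)   # ppbv, toy linearity

slope, intercept = np.polyfit(n2o, noy, 1)          # the tracer correlation

n2o_obs = np.array([120.0, 200.0, 280.0])           # CLAES-type values only
noy_est = intercept + slope * n2o_obs               # method (2) in the text
print("estimated NOy [ppbv]:", noy_est.round(2))
```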
Nitrogen Species in the Post-Pinatubo Stratosphere: Model Analysis Utilizing UARS Measurements
NASA Technical Reports Server (NTRS)
Danilin, M. Y.; Rodriquez, J. M.; Hu, W.; Ko, M. K. W.; Weisenstein, D. K.; Mergenthaler, J. L.; Russell, J. M., III; Koike, M.; Yue, G. K.
1998-01-01
We present an analysis of the impact of heterogeneous chemistry on the partitioning of nitrogen species measured by the Upper Atmosphere Research Satellite (UARS) instruments. The UARS measurements utilized include: N2O, HNO3 and ClONO2 (Cryogen Limb Array Etalon Spectrometer (CLAES), version 7), temperature, methane, ozone, H2O, HCl, NO and NO2 (HALogen Occultation Experiment (HALOE), version 18). The analysis is carried out for the data from January 1992 to September 1994 in the 100-1 mbar (approx. 17-47 km) altitude range and over 10 degree latitude bins from 70degS to 70degN. Temporal-spatial evolution of aerosol surface area density (SAD) is adopted according to the Stratospheric Aerosol and Gas Experiment (SAGE) II data. A diurnal steady-state photochemical box model, constrained by the temperature, ozone, H2O, CH4, aerosol SAD and columns of O2 and O3 above the point of interest, has been used as the main tool to analyze these data. Total inorganic nitrogen (NO(y)) is obtained by three different methods: (1) as a sum of the UARS measured NO, NO2, HNO3, and ClONO2; (2) from the N2O-NO(y) correlation, and (3) from the CH4-NO(y) correlation. To validate our current understanding of stratospheric heterogeneous chemistry for post-Pinatubo conditions, the model-calculated NO(x)/NO(y) ratios and the NO, NO2, and HNO3 profiles are compared to the UARS-derived data. In general, the UARS-constrained box model captures the main features of nitrogen species partitioning in the post-Pinatubo years. However, the model underestimates the NO2 content, particularly, in the 30-7 mbar (approx. 23-32 km) range. Comparisons of the calculated temporal behavior of the partial columns of NO2 and HNO3 and ground based measurements at 45degS and 45degN are also presented. Our analysis indicates that ground-based and HALOE v. 18 measurements of the NO2 vertical columns are consistent within the range of their uncertainties and are systematically higher (up to 50%) than the model results at mid-latitudes in both hemispheres. Reasonable agreement is obtained for HNO3 columns at 45degS suggesting some problems with nitrogen species partitioning in the model. Outstanding uncertainties are discussed.
Comorbidity of Alcohol and Gambling Problems in Emerging Adults: A Bifactor Model Conceptualization.
Tackett, Jennifer L; Krieger, Heather; Neighbors, Clayton; Rinker, Dipali; Rodriguez, Lindsey; Edward, Gottheil
2017-03-01
Addictive disorders, such as pathological gambling and alcohol use disorders, frequently co-occur at greater than chance levels. Substantive questions stem from this comorbidity regarding the extent to which shared variance between gambling and alcohol use reflects a psychological core of addictive tendencies, and whether this differs as a function of gender. The aims of this study were to differentiate both common and unique variance in alcohol and gambling problems in a bifactor model, examine measurement invariance of this model by gender, and identify substantive correlates of the final bifactor model. Undergraduates (N = 4475) from a large northwestern university completed an online screening questionnaire which included demographics, quantity of money lost and won when gambling, the South Oaks Gambling Screen, the AUDIT, gambling motives, drinking motives, personality, and the Brief Symptom Inventory. Results suggest that the bifactor model fit the data well in the full sample. Although the data suggest configural invariance across gender, factor loadings could not be constrained to be equal between men and women. As such, general and specific factors were examined separately by gender with a more intensive subsample of females and males (n = 264). Correlations with motivational tendencies, personality traits, and mental health symptoms indicated support for the validity of the bifactor model, as well as gender-specific patterns of association. Results suggest informative distinctions between shared and unique attributes related to problematic drinking and gambling.
Comorbidity of Alcohol and Gambling Problems in Emerging Adults: A Bifactor Model Conceptualization
Krieger, Heather; Neighbors, Clayton; Rinker, Dipali; Rodriguez, Lindsey; Edward, Gottheil
2017-01-01
Addictive disorders, such as pathological gambling and alcohol use disorders, frequently co-occur at greater than chance levels. Substantive questions stem from this comorbidity regarding the extent to which shared variance between gambling and alcohol use reflects a psychological core of addictive tendencies, and whether this differs as a function of gender. The aims of this study were to differentiate both common and unique variance in alcohol and gambling problems in a bifactor model, examine measurement invariance of this model by gender, and identify substantive correlates of the final bifactor model. Undergraduates (N = 4475) from a large northwestern university completed an online screening questionnaire which included demographics, quantity of money lost and won when gambling, the South Oaks Gambling Screen, the AUDIT, gambling motives, drinking motives, personality, and the Brief Symptom Inventory. Results suggest that the bifactor model fit the data well in the full sample. Although the data suggest configural invariance across gender, factor loadings could not be constrained to be equal between men and women. As such, general and specific factors were examined separately by gender with a more intensive subsample of females and males (n = 264). Correlations with motivational tendencies, personality traits, and mental health symptoms indicated support for the validity of the bifactor model, as well as gender-specific patterns of association. Results suggest informative distinctions between shared and unique attributes related to problematic drinking and gambling. PMID:27260007
Assimilation of Terrestrial Water Storage from GRACE in a Snow-Dominated Basin
NASA Technical Reports Server (NTRS)
Forman, Barton A.; Reichle, R. H.; Rodell, M.
2011-01-01
Terrestrial water storage (TWS) information derived from Gravity Recovery and Climate Experiment (GRACE) measurements is assimilated into a land surface model over the Mackenzie River basin located in northwest Canada. Assimilation is conducted using an ensemble Kalman smoother (EnKS). Model estimates with and without assimilation are compared against independent observational data sets of snow water equivalent (SWE) and runoff. For SWE, modest improvements in mean difference (MD) and root mean squared difference (RMSD) are achieved as a result of the assimilation. No significant differences in temporal correlations of SWE resulted. Runoff statistics of MD remain relatively unchanged while RMSD statistics, in general, are improved in most of the sub-basins. Temporal correlations are degraded within the most upstream sub-basin, but are, in general, improved at the downstream locations, which are more representative of an integrated basin response. GRACE assimilation using an EnKS offers improvements in hydrologic state/flux estimation, though comparisons with observed runoff would be enhanced by the use of river routing and lake storage routines within the prognostic land surface model. Further, GRACE hydrology products would benefit from the inclusion of better constrained models of post-glacial rebound, which significantly affects GRACE estimates of interannual hydrologic variability in the Mackenzie River basin.
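The building block of the assimilation is a single ensemble Kalman update; an EnKS applies the same algebra to states at earlier times within the smoothing window as well. The sketch below assumes a two-component state (SWE plus soil water) for one basin, with a GRACE-like TWS observation and an invented observation error.

```python
import numpy as np

rng = np.random.default_rng(6)

# One ensemble Kalman update of a model state with a GRACE-like TWS
# observation; values are illustrative, not Mackenzie basin numbers.
n_ens = 32
state = np.stack([rng.normal(120, 25, n_ens),      # SWE   [mm]
                  rng.normal(300, 40, n_ens)])     # soil  [mm]
H = np.array([[1.0, 1.0]])                         # TWS = SWE + soil
obs, obs_var = 450.0, 30.0**2                      # GRACE TWS and its error

A = state - state.mean(axis=1, keepdims=True)      # ensemble anomalies
HA = H @ state                                     # predicted observations
Phh = HA.var(ddof=1)                               # predicted-obs variance
Pxh = A @ (HA - HA.mean()).T / (n_ens - 1)         # state-obs covariance
K = Pxh / (Phh + obs_var)                          # Kalman gain (2x1)

perturbed = obs + rng.normal(0, np.sqrt(obs_var), n_ens)
state += K * (perturbed - HA)                      # update each member

print("posterior mean [SWE, soil]:", state.mean(axis=1).round(1))
```

Because TWS observes only the sum, the update splits the innovation between SWE and soil water according to their ensemble covariances, which is exactly why the assimilation can nudge SWE without observing it directly.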
Duan, Qianqian; Yang, Genke; Xu, Guanglin; Pan, Changchun
2014-01-01
This paper is devoted to developing an approximation method for scheduling refinery crude oil operations under demand uncertainty. In the stochastic model, the demand uncertainty is modeled as random variables that follow a joint multivariate distribution with a specific correlation structure. Compared to the deterministic models in existing work, the stochastic model can be more practical for optimizing crude oil operations. Using joint chance constraints, the demand uncertainty is treated by specifying a proximity level on the satisfaction of product demands. However, the joint chance constraints are usually strongly nonlinear and consequently hard to handle directly. In this paper, an approximation method combining a relax-and-tight technique is used to approximately transform the joint chance constraints into a series of parameterized linear constraints, so that the complicated problem can be attacked iteratively. The basic idea behind this approach is to approximate, as far as possible, the nonlinear constraints by a set of easily handled linear constraints, leading to a good balance between problem complexity and tractability. Case studies are conducted to demonstrate the proposed method. Results show that the operation cost can be reduced effectively compared with the case that ignores the demand correlation. PMID:24757433
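The flavor of a joint chance constraint can be shown with a scenario-based (sample-average) stand-in for the paper's relax-and-tight linearization: require that correlated random demands be covered jointly with probability at least 1 - eps. The demands, correlation structure, and the bisection on a common safety margin below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Joint chance constraint: production x (2 products) must cover random
# demand d in BOTH components with probability >= 1 - eps.
mean = np.array([100.0, 80.0])
cov = np.array([[64.0, 32.0],
                [32.0, 49.0]])            # correlated demands
scenarios = rng.multivariate_normal(mean, cov, size=5000)
eps = 0.05

def feasible(x):
    return np.mean((scenarios <= x).all(axis=1)) >= 1 - eps

# Bisect a common safety margin t so that x = mean + t*sigma is just feasible.
sigma = np.sqrt(np.diag(cov))
lo, hi = 0.0, 5.0
for _ in range(40):
    t = 0.5 * (lo + hi)
    lo, hi = (lo, t) if feasible(mean + t * sigma) else (t, hi)
x = mean + hi * sigma
print(f"production plan {x.round(1)} meets joint demand w.p. >= {1 - eps}")
```

Note how the correlation matters: positively correlated demands make the joint constraint easier to satisfy than two independent marginal constraints at the same confidence level, which is one reason ignoring the correlation inflates cost.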
Patchy screening of the cosmic microwave background by inhomogeneous reionization
NASA Astrophysics Data System (ADS)
Gluscevic, Vera; Kamionkowski, Marc; Hanson, Duncan
2013-02-01
We derive a constraint on patchy screening of the cosmic microwave background from inhomogeneous reionization using off-diagonal TB and TT correlations in WMAP-7 temperature/polarization data. We interpret this as a constraint on the rms optical-depth fluctuation Δτ as a function of a coherence multipole LC. We relate these parameters to a comoving coherence scale, or bubble size, RC in a phenomenological model where reionization is instantaneous but occurs on a crinkly surface, and also to the bubble size in a model of "Swiss cheese" reionization where bubbles of fixed size are spread over some range of redshifts. The current WMAP data are still too weak, by several orders of magnitude, to constrain reasonable models, but forthcoming Planck and future EPIC data should begin to approach interesting regimes of parameter space. We also present constraints on the parameter space imposed by the recent results from the EDGES experiment.
The impact of Faraday effects on polarized black hole images of Sagittarius A*.
NASA Astrophysics Data System (ADS)
Jiménez-Rosales, Alejandra; Dexter, Jason
2018-05-01
We study model images and polarization maps of Sagittarius A* at 230 GHz. We post-process GRMHD simulations and perform a fully relativistic radiative transfer calculation of the emitted synchrotron radiation to obtain polarized images for a range of mass accretion rates and electron temperatures. At low accretion rates, the polarization map traces the underlying toroidal magnetic field geometry. At high accretion rates, we find that Faraday rotation internal to the emission region can depolarize and scramble the map. We measure the net linear polarization fraction and find that high accretion rate "jet-disc" models are heavily depolarized and are therefore disfavoured. We show how Event Horizon Telescope measurements of the polarized "correlation length" over the image provide a model-independent upper limit on the strength of these Faraday effects, and constrain plasma properties like the electron temperature and magnetic field strength.
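A toy calculation shows the depolarization mechanism: internal Faraday rotation adds a random phase to the complex polarization P = Q + iU in each pixel, and the image-integrated fraction collapses as exp(-2 sigma^2). The pixel count, intrinsic angle, local polarization fraction, and rms rotations below are assumptions, not fitted model values.

```python
import numpy as np

rng = np.random.default_rng(8)

# Net linear polarization of an image as Faraday rotation scrambles the
# electric-vector position angle (EVPA) across the map.
npix = 64
chi0 = np.full((npix, npix), 0.3)                 # coherent EVPA [rad]
p_frac = 0.4                                      # local polarization fraction

for sigma_rm in [0.0, 0.5, 1.0, 2.0]:             # rms Faraday rotation [rad]
    chi = chi0 + rng.normal(0.0, sigma_rm, (npix, npix))
    P = p_frac * np.exp(2j * chi)                 # per-pixel Q + iU
    net = np.abs(P.mean())                        # image-integrated fraction
    # expected: p_frac * exp(-2 * sigma_rm**2) for Gaussian scrambling
    print(f"rms rotation {sigma_rm:.1f} rad -> net polarization {net:.3f}")
```

The same scrambling that suppresses the net fraction also shortens the polarized correlation length across the image, which is why that observable bounds the strength of the Faraday effects.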
Complete Hamiltonian analysis of cosmological perturbations at all orders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nandi, Debottam; Shankaranarayanan, S., E-mail: debottam@iisertvm.ac.in, E-mail: shanki@iisertvm.ac.in
2016-06-01
In this work, we present a consistent Hamiltonian analysis of cosmological perturbations at all orders. To make the procedure transparent, we consider a simple model, resolve the 'gauge-fixing' issues, and extend the analysis to scalar field models, showing that our approach can be applied to any order of perturbation for any first-order derivative fields. In the case of Galilean scalar fields, our procedure can extract constrained relations at all orders in perturbations, leading to the fact that there are no extra degrees of freedom due to the presence of higher time derivatives of the field in the Lagrangian. We compare and contrast our approach to the Lagrangian approach (Chen et al. [2006]) for extracting higher order correlations and show that our approach is efficient and robust and can be applied to any model of gravity and matter fields without invoking the slow-roll approximation.
Estimating free-body modal parameters from tests of a constrained structure
NASA Technical Reports Server (NTRS)
Cooley, Victor M.
1993-01-01
Hardware advances in suspension technology for ground tests of large space structures provide near on-orbit boundary conditions for modal testing. Further advances in determining free-body modal properties of constrained large space structures have been made, on the analysis side, by using time domain parameter estimation and perturbing the stiffness of the constraints over multiple sub-tests. In this manner, passive suspension constraint forces, which are fully correlated and therefore not usable for spectral averaging techniques, are made effectively uncorrelated. The technique is demonstrated with simulated test data.
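The stiffness-perturbation idea can be illustrated on a two-mass toy "structure": run sub-tests at several suspension stiffnesses and extrapolate the elastic eigenvalue to zero constraint stiffness. The paper's time-domain parameter estimation is replaced here by direct eigenanalysis, and all stiffness values are invented.

```python
import numpy as np

# Free-free mode of a two-mass structure estimated from "tests" run with
# several suspension stiffnesses k_s, extrapolating omega^2 to k_s -> 0.
m, k = 1.0, 100.0                                  # mass and coupling spring

def elastic_freq2(k_s):
    M = np.diag([m, m])
    K = np.array([[k + k_s, -k], [-k, k]])         # suspension on mass 1
    w2 = np.sort(np.linalg.eigvalsh(np.linalg.solve(M, K)))
    return w2[1]                                    # elastic (non-rigid) mode

k_suspension = np.array([1.0, 2.0, 4.0, 8.0])      # perturbed sub-tests
w2 = np.array([elastic_freq2(ks) for ks in k_suspension])
coef = np.polyfit(k_suspension, w2, 1)             # roughly linear in k_s
w2_free = np.polyval(coef, 0.0)

print(f"extrapolated free-free omega^2 = {w2_free:.2f} (exact: {2 * k / m})")
```

Varying the constraint stiffness across sub-tests is also what makes the suspension forces effectively uncorrelated between sub-tests, recovering the averaging benefit that fully correlated passive constraint forces would otherwise deny.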
NASA Astrophysics Data System (ADS)
Kilb, Debi
2003-01-01
The 1992 M7.3 Landers earthquake may have played a role in triggering the 1999 M7.1 Hector Mine earthquake, as suggested by their close spatial (~20 km) proximity. Current investigations of triggering by static stress changes produce differing conclusions when small variations in parameter values are employed. Here I test the hypothesis that large-amplitude dynamic stress changes, induced by the Landers rupture, acted to promote the Hector Mine earthquake. I use a flat-layer reflectivity method to model the Landers earthquake displacement seismograms. By requiring agreement between the model seismograms and data, I can constrain the Landers main shock parameters and velocity model. A similar reflectivity method is used to compute the evolution of stress changes. I find a strong positive correlation between the Hector Mine hypocenter and regions of large (>4 MPa) dynamic Coulomb stress changes (peak Δσf(t)) induced by the Landers main shock. A positive correlation is also found with large dynamic normal and shear stress changes. Uncertainties in peak Δσf(t) (1.3 MPa) are only 28% of the median value (4.6 MPa) determined from an extensive set (160) of model parameters. Therefore the correlation with dynamic stresses is robust to a range of Hector Mine main shock parameters, as well as to variations in the friction and Skempton's coefficients used in the calculations. These results imply dynamic stress changes may be an important part of earthquake triggering, such that large-amplitude stress changes alter the properties of an existing fault in a way that promotes fault failure.
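For reference, one common form of the Coulomb failure stress change uses an apparent friction coefficient that folds in Skempton's coefficient, which is how the two coefficients varied in the study enter the calculation. The sign convention (tension-positive normal stress) and the example amplitudes below are assumptions.

```python
# Coulomb failure stress change on a receiver fault, apparent-friction form:
# d_sigma_f = d_tau + mu * (1 - B) * d_sigma_n,
# with mu the friction coefficient and B Skempton's coefficient.
def coulomb_stress_change(d_tau, d_sigma_n, mu=0.6, B=0.5):
    """Inputs in MPa; d_sigma_n > 0 means unclamping (tension positive)."""
    return d_tau + mu * (1.0 - B) * d_sigma_n

# e.g. a peak 4 MPa shear load plus 1.5 MPa of unclamping
print(f"peak d_sigma_f = {coulomb_stress_change(4.0, 1.5):.2f} MPa")
```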
A chance-constrained stochastic approach to intermodal container routing problems.
Zhao, Yi; Liu, Ronghui; Zhang, Xi; Whiteing, Anthony
2018-01-01
We consider a container routing problem with stochastic time variables in a sea-rail intermodal transportation system. The problem is formulated as a binary integer chance-constrained programming model including stochastic travel times and stochastic transfer time, with the objective of minimising the expected total cost. Two chance constraints are proposed to ensure that the container service satisfies ship fulfilment and cargo on-time delivery with pre-specified probabilities. A hybrid heuristic algorithm is employed to solve the binary integer chance-constrained programming model. Two case studies are conducted to demonstrate the feasibility of the proposed model and to analyse the impact of stochastic variables and chance-constraints on the optimal solution and total cost.
A chance-constrained stochastic approach to intermodal container routing problems
Zhao, Yi; Zhang, Xi; Whiteing, Anthony
2018-01-01
We consider a container routing problem with stochastic time variables in a sea-rail intermodal transportation system. The problem is formulated as a binary integer chance-constrained programming model including stochastic travel times and stochastic transfer time, with the objective of minimising the expected total cost. Two chance constraints are proposed to ensure that the container service satisfies ship fulfilment and cargo on-time delivery with pre-specified probabilities. A hybrid heuristic algorithm is employed to solve the binary integer chance-constrained programming model. Two case studies are conducted to demonstrate the feasibility of the proposed model and to analyse the impact of stochastic variables and chance-constraints on the optimal solution and total cost. PMID:29438389
Hydrologic and hydraulic flood forecasting constrained by remote sensing data
NASA Astrophysics Data System (ADS)
Li, Y.; Grimaldi, S.; Pauwels, V. R. N.; Walker, J. P.; Wright, A. J.
2017-12-01
Flooding is one of the most destructive natural disasters, resulting in many deaths and billions of dollars of damages each year. An indispensable tool to mitigate the effect of floods is to provide accurate and timely forecasts. An operational flood forecasting system typically consists of a hydrologic model, converting rainfall data into flood volumes entering the river system, and a hydraulic model, converting these flood volumes into water levels and flood extents. Such a system is prone to various sources of uncertainties from the initial conditions, meteorological forcing, topographic data, model parameters and model structure. To reduce those uncertainties, current forecasting systems are typically calibrated and/or updated using ground-based streamflow measurements, and such applications are limited to well-gauged areas. The recent increasing availability of spatially distributed remote sensing (RS) data offers new opportunities to improve flood forecasting skill. Based on an Australian case study, this presentation will discuss the use of (1) RS soil moisture to constrain a hydrologic model, and (2) RS flood extent and level to constrain a hydraulic model. The GRKAL hydrological model is calibrated through a joint calibration scheme using both ground-based streamflow and RS soil moisture observations. A lag-aware data assimilation approach is tested through a set of synthetic experiments to integrate RS soil moisture to constrain the streamflow forecasting in real-time. The hydraulic model is LISFLOOD-FP, which solves the 2-dimensional inertial approximation of the Shallow Water Equations. Gauged water level time series and RS-derived flood extent and levels are used to apply a multi-objective calibration protocol. The effectiveness with which each data source or combination of data sources constrains the parameter space will be discussed.
NASA Astrophysics Data System (ADS)
Burrage, Clare; Sakstein, Jeremy
2018-03-01
Theories of modified gravity, where light scalars with non-trivial self-interactions and non-minimal couplings to matter—chameleon and symmetron theories—dynamically suppress deviations from general relativity in the solar system. On other scales, the environmental nature of the screening means that such scalars may be relevant. The highly-nonlinear nature of screening mechanisms means that they evade classical fifth-force searches, and there has been an intense effort towards designing new and novel tests to probe them, both in the laboratory and using astrophysical objects, and by reinterpreting existing datasets. The results of these searches are often presented using different parametrizations, which can make it difficult to compare constraints coming from different probes. The purpose of this review is to summarize the present state-of-the-art searches for screened scalars coupled to matter, and to translate the current bounds into a single parametrization to survey the state of the models. Presently, commonly studied chameleon models are well-constrained but less commonly studied models have large regions of parameter space that are still viable. Symmetron models are constrained well by astrophysical and laboratory tests, but there is a desert separating the two scales where the model is unconstrained. The coupling of chameleons to photons is tightly constrained but the symmetron coupling has yet to be explored. We also summarize the current bounds on f(R) models that exhibit the chameleon mechanism (Hu and Sawicki models). The simplest of these are well constrained by astrophysical probes, but there are currently few reported bounds for theories with higher powers of R. The review ends by discussing the future prospects for constraining screened modified gravity models further using upcoming and planned experiments.
Deco, Gustavo; Mantini, Dante; Romani, Gian Luca; Hagmann, Patric; Corbetta, Maurizio
2013-01-01
Brain fluctuations at rest are not random but are structured in spatial patterns of correlated activity across different brain areas. The question of how resting-state functional connectivity (FC) emerges from the brain's anatomical connections has motivated several experimental and computational studies to understand structure–function relationships. However, the mechanistic origin of resting state is obscured by large-scale models' complexity, and a close structure–function relation is still an open problem. Thus, a realistic but simple enough description of relevant brain dynamics is needed. Here, we derived a dynamic mean field model that consistently summarizes the realistic dynamics of a detailed spiking and conductance-based synaptic large-scale network, in which connectivity is constrained by diffusion imaging data from human subjects. The dynamic mean field approximates the ensemble dynamics, whose temporal evolution is dominated by the longest time scale of the system. With this reduction, we demonstrated that FC emerges as structured linear fluctuations around a stable low firing activity state close to destabilization. Moreover, the model can be further and crucially simplified into a set of motion equations for statistical moments, providing a direct analytical link between anatomical structure, neural network dynamics, and FC. Our study suggests that FC arises from noise propagation and dynamical slowing down of fluctuations in an anatomically constrained dynamical system. Altogether, the reduction from spiking models to statistical moments presented here provides a new framework to explicitly understand the building up of FC through neuronal dynamics underpinned by anatomical connections and to drive hypotheses in task-evoked studies and for clinical applications. PMID:23825427
Deco, Gustavo; Ponce-Alvarez, Adrián; Mantini, Dante; Romani, Gian Luca; Hagmann, Patric; Corbetta, Maurizio
2013-07-03
Brain fluctuations at rest are not random but are structured in spatial patterns of correlated activity across different brain areas. The question of how resting-state functional connectivity (FC) emerges from the brain's anatomical connections has motivated several experimental and computational studies to understand structure-function relationships. However, the mechanistic origin of resting state is obscured by large-scale models' complexity, and a close structure-function relation is still an open problem. Thus, a realistic but simple enough description of relevant brain dynamics is needed. Here, we derived a dynamic mean field model that consistently summarizes the realistic dynamics of a detailed spiking and conductance-based synaptic large-scale network, in which connectivity is constrained by diffusion imaging data from human subjects. The dynamic mean field approximates the ensemble dynamics, whose temporal evolution is dominated by the longest time scale of the system. With this reduction, we demonstrated that FC emerges as structured linear fluctuations around a stable low firing activity state close to destabilization. Moreover, the model can be further and crucially simplified into a set of motion equations for statistical moments, providing a direct analytical link between anatomical structure, neural network dynamics, and FC. Our study suggests that FC arises from noise propagation and dynamical slowing down of fluctuations in an anatomically constrained dynamical system. Altogether, the reduction from spiking models to statistical moments presented here provides a new framework to explicitly understand the building up of FC through neuronal dynamics underpinned by anatomical connections and to drive hypotheses in task-evoked studies and for clinical applications.
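The final reduction described above has a compact numerical form: linear fluctuations dx = A x dt + dW around a stable fixed point give the model covariance through a Lyapunov equation, from which the FC matrix follows. The toy connectivity, coupling gain, and noise level below stand in for the DTI-constrained network and the full mean field derivation.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(9)

# Toy structural connectivity (stand-in for a DTI-derived matrix).
n = 8
C = rng.random((n, n)) * (rng.random((n, n)) < 0.3)
C = (C + C.T) / 2
np.fill_diagonal(C, 0.0)

g, leak = 0.8, 1.0
A = -leak * np.eye(n) + g * C / C.sum(axis=1, keepdims=True).clip(1e-9)
Q = 0.01 * np.eye(n)                       # input noise covariance

# Stationary covariance P solves A P + P A^T + Q = 0 (Lyapunov equation).
P = solve_continuous_lyapunov(A, -Q)
d = np.sqrt(np.diag(P))
FC = P / np.outer(d, d)                    # model functional connectivity
print("model FC (correlations):\n", FC.round(2))
```

With g below the instability threshold the linearization holds, and FC inherits structure from the anatomical matrix through the propagator, which is the analytical structure-function link the abstract describes.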
Rodriguez, Brian D.; Sawyer, David A.; Hudson, Mark R.; Grauch, V.J.S.
2013-01-01
Two- and three-dimensional electrical resistivity models derived from the magnetotelluric method were interpreted to provide more accurate hydrogeologic parameters for the Albuquerque and Española Basins. Analysis and interpretation of the resistivity models are aided by regional borehole resistivity data. Examination of the magnetotelluric response of hypothetical stratigraphic cases using resistivity characterizations from the borehole data elucidates two scenarios where the magnetotelluric method provides the strongest constraints. In the first scenario, the magnetotelluric method constrains the thickness of extensive volcanic cover, the underlying thickness of coarser-grained facies of buried Santa Fe Group sediments, and the depth to Precambrian basement or overlying Pennsylvanian limestones. In the second scenario, in the absence of volcanic cover, the magnetotelluric method constrains the thickness of coarser-grained facies of buried Santa Fe Group sediments and the depth to Precambrian basement or overlying Pennsylvanian limestones. Magnetotelluric surveys provide additional constraints on the relative positions of basement rocks and the thicknesses of Paleozoic, Mesozoic, and Tertiary sedimentary rocks in the region of the Albuquerque and Española Basins. The northern extent of a basement high beneath the Cerros del Rio volcanic field is delineated. Our results also reveal that the largest offset of the Hubbell Spring fault zone is located 5 km west of the exposed scarp. By correlating our resistivity models with surface geology and the deeper stratigraphic horizons using deep well log data, we are able to identify which of the resistivity variations in the upper 2 km belong to the upper Santa Fe Group sediments.
Dark matter, constrained minimal supersymmetric standard model, and lattice QCD.
Giedt, Joel; Thomas, Anthony W; Young, Ross D
2009-11-13
Recent lattice measurements have given accurate estimates of the quark condensates in the proton. We use these results to significantly improve the dark matter predictions in benchmark models within the constrained minimal supersymmetric standard model. The predicted spin-independent cross sections are at least an order of magnitude smaller than previously suggested and our results have significant consequences for dark matter searches.
NASA Technical Reports Server (NTRS)
Abercromby, Kira J.; Rapp, Jason; Bedard, Donald; Seitzer, Patrick; Cardona, Tommaso; Cowardin, Heather; Barker, Ed; Lederer, Susan
2013-01-01
The constrained linear least squares model is generally more accurate than the "human-in-the-loop" approach; however, a "human-in-the-loop" can remove materials that make no sense. The speed with which the model determines a "first cut" at the material identification makes it a viable option for spectral unmixing of debris objects.
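A hedged sketch of the kind of constrained unmixing described: non-negative abundances that sum to one, solved with non-negative least squares by appending a weighted row of ones. The spectral library, measurement, and weight are synthetic stand-ins, not the actual debris spectra or the authors' implementation.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(10)

# Spectral unmixing: measured reflectance = library spectra @ abundances,
# with abundances >= 0 and summing to 1 (the "constrained" part).
wavelengths = 100
library = np.abs(rng.normal(0.5, 0.2, size=(wavelengths, 4)))  # 4 materials
true_ab = np.array([0.6, 0.3, 0.1, 0.0])
measured = library @ true_ab + rng.normal(0, 0.005, wavelengths)

w = 100.0                                     # weight on the sum-to-one row
A = np.vstack([library, w * np.ones((1, 4))])
b = np.append(measured, w * 1.0)
abundances, resid = nnls(A, b)

print("recovered abundances:", abundances.round(3))  # ~ [0.6, 0.3, 0.1, 0]
```

Because the solve is a single small least-squares problem, it runs in milliseconds per spectrum, which is the speed advantage the abstract credits for making an automated "first cut" practical.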
NASA Astrophysics Data System (ADS)
Quesada-Montano, Beatriz; Westerberg, Ida K.; Fuentes-Andino, Diana; Hidalgo-Leon, Hugo; Halldin, Sven
2017-04-01
Long-term hydrological data are key to understanding catchment behaviour and to decision making within water management and planning. Given the lack of observed data in many regions worldwide, hydrological models are an alternative for reproducing historical streamflow series. Additional types of information, beyond locally observed discharge, can be used to constrain model parameter uncertainty for ungauged catchments. Climate variability exerts a strong influence on streamflow variability on long and short time scales, in particular in the Central American region. We therefore explored the use of climate variability knowledge to constrain the simulated discharge uncertainty of a conceptual hydrological model applied to a Costa Rican catchment, treated as ungauged. To reduce model uncertainty we first rejected parameter relationships that disagreed with our understanding of the system. We then assessed how well climate-based constraints applied at long-term, inter-annual and intra-annual time scales could constrain model uncertainty. Finally, we compared the climate-based constraints to a constraint on low-flow statistics based on information obtained from global maps. We evaluated our method in terms of the ability of the model to reproduce the observed hydrograph and the active catchment processes, using two efficiency measures, a statistical consistency measure, a spread measure and 17 hydrological signatures. We found that climate variability knowledge was useful for reducing model uncertainty, in particular by rejecting unrealistic representations of deep groundwater processes. The constraints based on global maps of low-flow statistics provided more constraining information than those based on climate variability, but the latter rejected slow rainfall-runoff representations that the low-flow statistics did not. The use of such knowledge, together with information on low-flow statistics and constraints on parameter relationships, proved useful for constraining model uncertainty for a basin treated as ungauged. This shows that our method is promising for reconstructing long-term flow data for ungauged catchments on the Pacific side of Central America, and that similar methods can be developed for ungauged basins in other regions where climate variability exerts a strong control on streamflow variability.
Statistical model of a flexible inextensible polymer chain: The effect of kinetic energy.
Pergamenshchik, V M; Vozniak, A B
2017-01-01
Because of the holonomic constraints, the kinetic energy contribution in the partition function of an inextensible polymer chain is difficult to find, and it has been systematically ignored. We present the first thermodynamic calculation incorporating the kinetic energy of an inextensible polymer chain together with the bending energy. To explore the effect of the translation-rotation degrees of freedom, we propose and solve a statistical model of a fully flexible chain of N+1 linked beads which, in the limit of smooth bending, is equivalent to the well-known wormlike chain model. The partition function with the kinetic and bending energies and correlations between orientations of any pair of links and velocities of any pair of beads are found. This solution is precise in the limits of small and large rigidity-to-temperature ratio b/T. The latter exact solution is essential, as even a very "harmless" approximation results in the loss of important effects when the chain is very rigid. For very high b/T, the orientations of different links become fully correlated. Nevertheless, the chain does not go over into a hard rod even in the limit b/T → ∞: while the velocity correlation length diverges, the correlations themselves remain weak and tend to the value ∝ T/(N+1). The N dependence of the partition function is essentially determined by the kinetic energy contribution. We demonstrate that to obtain the correct energy and entropy in a constrained system, the T derivative of the partition function has to be applied before integration over the constraint-setting variable.
Statistical model of a flexible inextensible polymer chain: The effect of kinetic energy
NASA Astrophysics Data System (ADS)
Pergamenshchik, V. M.; Vozniak, A. B.
2017-01-01
Because of the holonomic constraints, the kinetic energy contribution in the partition function of an inextensible polymer chain is difficult to find, and it has been systematically ignored. We present the first thermodynamic calculation incorporating the kinetic energy of an inextensible polymer chain together with the bending energy. To explore the effect of the translation-rotation degrees of freedom, we propose and solve a statistical model of a fully flexible chain of N+1 linked beads which, in the limit of smooth bending, is equivalent to the well-known wormlike chain model. The partition function with the kinetic and bending energies and correlations between orientations of any pair of links and velocities of any pair of beads are found. This solution is precise in the limits of small and large rigidity-to-temperature ratio b/T. The latter exact solution is essential, as even a very "harmless" approximation results in the loss of important effects when the chain is very rigid. For very high b/T, the orientations of different links become fully correlated. Nevertheless, the chain does not go over into a hard rod even in the limit b/T → ∞: while the velocity correlation length diverges, the correlations themselves remain weak and tend to the value ∝ T/(N+1). The N dependence of the partition function is essentially determined by the kinetic energy contribution. We demonstrate that to obtain the correct energy and entropy in a constrained system, the T derivative of the partition function has to be applied before integration over the constraint-setting variable.
NASA Astrophysics Data System (ADS)
Yudin, V. A.; England, S.; Matsuo, T.; Wang, H.; Immel, T. J.; Eastes, R.; Akmaev, R. A.; Goncharenko, L. P.; Fuller-Rowell, T. J.; Liu, H.; Solomon, S. C.; Wu, Q.
2014-12-01
We review and discuss the capability of novel configurations of global community (WACCM-X and TIME-GCM) and planned operational (WAM) models to support current and forthcoming space-borne missions to monitor the dynamics and composition of the Ionosphere-Thermosphere-Mesosphere (ITM) system. In the specified-meteorology configuration of WACCM-X, the lower atmosphere is constrained by operational analyses and/or short-term forecasts provided by the Goddard Earth Observing System (GEOS-5) of GMAO/NASA/GSFC. With the terrestrial weather of GEOS-5 and updated model physics, WACCM-X simulations are capable of reproducing the observed signatures of the perturbed wave dynamics and ion-neutral coupling during recent (2006-2013) stratospheric warming events, as well as the short-term, annual and year-to-year variability of prevailing flows, planetary waves, tides, and composition. With assimilation of NWP data in the troposphere and stratosphere, the planned operational configuration of WAM can also recreate the observed features of ITM day-to-day variability. These "terrestrial-weather" driven whole atmosphere simulations, with day-to-day variable solar and geomagnetic inputs, can provide specification of the background state (first guess) and errors for the inverse algorithms of forthcoming NASA ITM missions, such as ICON and GOLD. With two different viewing geometries (sun-synchronous for ICON, geostationary for GOLD), these missions promise to perform complementary global observations of temperature, winds and constituents to constrain first-principles space weather forecast models. The paper will discuss initial designs of Observing System Simulation Experiments (OSSEs) in the coupled simulations of TIME-GCM/WACCM-X/GEOS5 and WAM/GIP. OSSEs are recognized as an excellent learning tool for designing and evaluating the observing capabilities of novel sensors. The choice of assimilation schemes and of forecast and observational errors will be discussed, along with challenges and perspectives in constraining the fast-varying dynamics of tides and planetary waves with observations made from sun-synchronous and geostationary space-borne platforms. We will also discuss how correlative space-borne and ground-based observations can evaluate OSSE results.
Wu, Sheng; Jin, Qibing; Zhang, Ridong; Zhang, Junfeng; Gao, Furong
2017-07-01
In this paper, an improved constrained tracking control design is proposed for batch processes under uncertainties. A new process model that facilitates process state and tracking error augmentation with further additional tuning is first proposed. A subsequent controller design is then formulated using robust stable constrained MPC optimization. Unlike conventional robust model predictive control (MPC), the proposed method gives the controller design more degrees of tuning freedom so that improved tracking control can be achieved, which is important since uncertainties inevitably exist in practice and cause model/plant mismatches. An injection molding process is introduced to illustrate the effectiveness of the proposed MPC approach in comparison with conventional robust MPC. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
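As a generic illustration of the optimization underlying constrained tracking MPC (not the augmented robust formulation proposed in the paper), a minimal receding-horizon sketch; the model matrices, horizon, setpoint, and input bound are hypothetical.

    import cvxpy as cp
    import numpy as np

    # Hypothetical linear batch-process model x[k+1] = A x[k] + B u[k]
    A = np.array([[1.0, 0.1], [0.0, 0.9]])
    B = np.array([[0.0], [0.1]])
    n, m, T = 2, 1, 20
    x0 = np.array([0.0, 0.0])
    r = np.array([1.0, 0.0])          # setpoint to track

    x = cp.Variable((n, T + 1))
    u = cp.Variable((m, T))
    cost, constr = 0, [x[:, 0] == x0]
    for k in range(T):
        cost += cp.sum_squares(x[:, k + 1] - r) + 0.01 * cp.sum_squares(u[:, k])
        constr += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                   cp.abs(u[:, k]) <= 2.0]    # input constraint
    cp.Problem(cp.Minimize(cost), constr).solve()
    print("first input of the optimal sequence:", u.value[:, 0])

In a receding-horizon implementation, only the first input is applied and the problem is re-solved at the next sampling instant.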
Spreading of correlations in the XXZ chain at finite temperatures
NASA Astrophysics Data System (ADS)
Bonnes, Lars; Läuchli, Andreas
2014-03-01
In a quantum quench, for instance by abruptly changing the interaction parameter in a spin chain, correlations can spread across the system but have to obey a speed limit set by the Lieb-Robinson bound. This results in a causal structure where the propagation front resembles a light-cone. One can ask how fast a correlation front actually propagates and how its velocity depends on the nature of the quench. This question is addressed by performing global quenches in the XXZ chain initially prepared in a finite-temperature state using minimally entangled typical thermal states (METTS). We provide numerical evidence that the spreading velocity of the spin correlation functions for the quench into the gapless phase is solely determined by the value of the final interaction and the amount of excess energy in the system. This is quite surprising, as the XXZ model is integrable and its dynamics is constrained by a large number of conserved quantities. In particular, the spreading velocity seems to interpolate linearly from a universal value at T = ∞ to the spin wave velocity of the final Hamiltonian in the limit of zero excess energy for Δfinal > 0.
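A generic way to estimate such a spreading velocity from simulated correlation data C(x, t) is to locate the front arrival time at each distance and fit a line; the synthetic data and threshold below are illustrative, not the METTS output.

    import numpy as np

    # Synthetic stand-in for |C(x, t)|: a front moving at velocity v = 2 (illustrative)
    ts = np.linspace(0.5, 5.0, 40)
    xs = np.arange(1, 60)
    C = np.array([[np.exp(-max(x - 2.0 * t, 0.0)) for x in xs] for t in ts])

    # Arrival time of the front at each distance: first t where |C| exceeds a threshold
    threshold = 0.5
    arrival = np.array([ts[np.argmax(C[:, i] >= threshold)] for i in range(len(xs))])
    valid = C.max(axis=0) >= threshold       # keep distances the front actually reached

    # Velocity = inverse slope of the arrival-time vs. distance line
    slope, _ = np.polyfit(xs[valid], arrival[valid], 1)
    print("estimated front velocity:", 1.0 / slope)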
Accurate Modeling of Galaxy Clustering on Small Scales: Testing the Standard ΛCDM + Halo Model
NASA Astrophysics Data System (ADS)
Sinha, Manodeep; Berlind, Andreas A.; McBride, Cameron; Scoccimarro, Roman
2015-01-01
The large-scale distribution of galaxies can be explained fairly simply by assuming (i) a cosmological model, which determines the dark matter halo distribution, and (ii) a simple connection between galaxies and the halos they inhabit. This conceptually simple framework, called the halo model, has been remarkably successful at reproducing the clustering of galaxies on all scales, as observed in various galaxy redshift surveys. However, none of these previous studies have carefully modeled the systematics and thus truly tested the halo model in a statistically rigorous sense. We present a new accurate and fully numerical halo model framework and test it against clustering measurements from two luminosity samples of galaxies drawn from the SDSS DR7. We show that the simple ΛCDM cosmology + halo model is not able to simultaneously reproduce the galaxy projected correlation function and the group multiplicity function. In particular, the more luminous sample shows significant tension with theory. We discuss the implications of our findings and how this work paves the way for constraining galaxy formation by accurate simultaneous modeling of multiple galaxy clustering statistics.
NASA Astrophysics Data System (ADS)
Moorkamp, M.; Fishwick, S.; Jones, A. G.
2015-12-01
Typical surface wave tomography can recover the velocity structure of the upper mantle well in the depth range between 70 and 200 km. For a successful inversion, we have to constrain the crustal structure and assess its impact on the resulting models. In addition, we often observe potentially interesting features in the uppermost lithosphere which are poorly resolved, and thus their interpretation has to be approached with great care. We are currently developing a seismically constrained magnetotelluric (MT) inversion approach with the aim of better recovering the lithospheric properties (and thus seismic velocities) in these problematic areas. We perform a 3D MT inversion constrained by a fixed seismic velocity model from surface wave tomography. In order to avoid strong bias, we only utilize information on structural boundaries to combine these two methods. Within the region that is well resolved by both methods, we can then extract a velocity-conductivity relationship. By translating the conductivities retrieved from MT into velocities in areas where the velocity model is poorly resolved, we can generate an updated velocity model and test what impact the updated velocities have on the predicted data. We test this new approach using an MT dataset acquired in central Botswana over the Okwa terrane and the adjacent Kaapvaal and Zimbabwe Cratons, together with tomographic models for the region. Here, both datasets have previously been used to constrain lithospheric structure and show some similarities. We carefully assess the validity of our results by comparing with observations and petrophysical predictions for the conductivity-velocity relationship.
Periodic Forced Response of Structures Having Three-Dimensional Frictional Constraints
NASA Astrophysics Data System (ADS)
CHEN, J. J.; YANG, B. D.; MENQ, C. H.
2000-01-01
Many mechanical systems have moving components that are mutually constrained through frictional contacts. When subjected to cyclic excitations, a contact interface may undergo constant changes among stick, slip, and separation, which leads to very complex contact kinematics. In this paper, a 3-D friction contact model is employed to predict the periodic forced response of structures having 3-D frictional constraints. Analytical criteria based on this friction contact model are used to determine the transitions among stick, slip, and separation of the friction contact, and subsequently the constrained force, which consists of the induced stick-slip friction force on the contact plane and the contact normal load. The resulting constrained force is often a periodic function and can be considered as a feedback force that influences the response of the constrained structures. By using the Multi-Harmonic Balance Method along with the Fast Fourier Transform, the constrained force can be integrated with the receptance of the structures so as to calculate the forced response of the constrained structures. This results in a set of non-linear algebraic equations that can be solved iteratively to yield the relative motion as well as the constrained force at the friction contact. This method is used to predict the periodic response of a frictionally constrained 3-d.o.f. oscillator. The predicted results are compared with those of the direct time integration method so as to validate the proposed method. In addition, the effect of super-harmonic components on the resonant response and jump phenomenon is examined.
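The alternating frequency/time step at the heart of such multi-harmonic balance schemes can be sketched as follows: evaluate the nonlinear friction force on a time grid, then FFT it back to harmonics. The one-dimensional Jenkins (elastic stick-slip) element, stiffness, and friction limit below are illustrative stand-ins, not the paper's 3-D contact model.

    import numpy as np

    def jenkins_force(x, kt=1.0, f_slip=0.5):
        """Stick-slip friction force over one period of displacement x (Jenkins element)."""
        f = np.zeros_like(x)
        for _ in range(3):                    # cycle a few periods to reach steady state
            for i in range(len(x)):
                trial = f[i - 1] + kt * (x[i] - x[i - 1])   # elastic (stick) predictor
                f[i] = np.clip(trial, -f_slip, f_slip)      # saturate when slipping
        return f

    # Displacement reconstructed from a few harmonics (illustrative amplitudes)
    Npts, H = 256, 5
    t = np.linspace(0.0, 2 * np.pi, Npts, endpoint=False)
    x = 1.0 * np.cos(t) + 0.2 * np.cos(3 * t)

    f_time = jenkins_force(x)
    F = np.fft.rfft(f_time) / Npts            # back to the frequency domain
    print("harmonic amplitudes of the friction force:", np.abs(F[:H + 1]).round(3))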
New probes of Cosmic Microwave Background large-scale anomalies
NASA Astrophysics Data System (ADS)
Aiola, Simone
Fifty years of Cosmic Microwave Background (CMB) data played a crucial role in constraining the parameters of the ΛCDM model, where Dark Energy, Dark Matter, and Inflation are the three most important pillars not yet understood. Inflation prescribes an isotropic universe on large scales, and it generates spatially-correlated density fluctuations over the whole Hubble volume. CMB temperature fluctuations on scales bigger than a degree in the sky, affected by modes on super-horizon scale at the time of recombination, are a clean snapshot of the universe after inflation. In addition, the accelerated expansion of the universe, driven by Dark Energy, leaves a hardly detectable imprint in the large-scale temperature sky at late times. Such fundamental predictions have been tested with current CMB data and found to be in tension with what we expect from our simple ΛCDM model. Is this tension just a random fluke or a fundamental issue with the present model? In this thesis, we present a new framework to probe the lack of large-scale correlations in the temperature sky using CMB polarization data. Our analysis shows that if a suppression in the CMB polarization correlations is detected, it will provide compelling evidence for new physics on super-horizon scale. To further analyze the statistical properties of the CMB temperature sky, we constrain the degree of statistical anisotropy of the CMB in the context of the observed large-scale dipole power asymmetry. We find evidence for a scale-dependent dipolar modulation at 2.5σ. To isolate late-time signals from the primordial ones, we test the anomalously high Integrated Sachs-Wolfe effect signal generated by superstructures in the universe. We find that the detected signal is in tension with the expectations from ΛCDM at the 2.5σ level, which is somewhat smaller than what has been previously argued. To conclude, we describe the current status of CMB observations on small scales, highlighting the tensions between Planck, WMAP, and SPT temperature data and how the upcoming data release of the ACTpol experiment will contribute to this matter. We provide a description of the current status of the data-analysis pipeline and discuss its ability to recover large-scale modes.
NASA Astrophysics Data System (ADS)
Pan, M.; Wood, E. F.
2004-05-01
This study explores a method to estimate various components of the water cycle (ET, runoff, land storage, etc.) from a number of different information sources, including both observations and observation-enhanced model simulations. Unlike existing data assimilation approaches, this constrained Kalman filtering approach keeps the water budget perfectly closed while optimally updating the states of the underlying model (the VIC model) using observations. Assimilating different data sources in this way has several advantages: (1) a physical model is included, making the estimated time series smooth, gap-free, and more physically consistent; (2) uncertainties in the model and observations are properly addressed; (3) the model is constrained by observations, which reduces model biases; (4) the water balance is preserved throughout the assimilation. Experiments are carried out in the Southern Great Plains region, where the necessary observations have been collected. The method may also be implemented in other applications with physical constraints (e.g., energy cycles) and at different scales.
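One standard way to impose an exact budget constraint in a Kalman update is to project the unconstrained analysis onto the constraint surface. A minimal sketch for a linear constraint D x = d, with a hypothetical three-component water balance (this is the generic projection method, not the specific VIC implementation):

    import numpy as np

    def constrain_state(x, P, D, d):
        """Project a Kalman analysis x (covariance P) onto the constraint D x = d."""
        S = D @ P @ D.T
        K = P @ D.T @ np.linalg.inv(S)
        return x - K @ (D @ x - d)

    # Hypothetical state: [precipitation, evapotranspiration + runoff, storage change]
    x = np.array([10.2, 6.1, 3.4])          # unconstrained analysis (mm/day)
    P = np.diag([0.5, 0.8, 0.6])            # analysis error covariance
    D = np.array([[1.0, -1.0, -1.0]])       # water balance: P - (ET+Q) - dS = 0
    d = np.array([0.0])

    x_c = constrain_state(x, P, D, d)
    print("constrained state:", x_c, "budget residual:", float(D @ x_c - d))

The projection distributes the budget residual across the components in proportion to their error covariance, so better-observed components are adjusted less.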
Guenole, Nigel
2018-01-01
The test for item level cluster bias examines the improvement in model fit that results from freeing an item's between level residual variance from a baseline model with equal within and between level factor loadings and between level residual variances fixed at zero. A potential problem is that this approach may include a misspecified unrestricted model if any non-invariance is present, but the log-likelihood difference test requires that the unrestricted model is correctly specified. A free baseline approach where the unrestricted model includes only the restrictions needed for model identification should lead to better decision accuracy, but no studies have examined this yet. We ran a Monte Carlo study to investigate this issue. When the referent item is unbiased, compared to the free baseline approach, the constrained baseline approach led to similar true positive (power) rates but much higher false positive (Type I error) rates. The free baseline approach should be preferred when the referent indicator is unbiased. When the referent assumption is violated, the false positive rate was unacceptably high for both free and constrained baseline approaches, and the true positive rate was poor regardless of whether the free or constrained baseline approach was used. Neither the free nor the constrained baseline approach can be recommended when the referent indicator is biased. We recommend paying close attention to ensuring the referent indicator is unbiased in tests of cluster bias. All Mplus input and output files, and the R and short Python scripts used to execute this simulation study, are uploaded to an open-access repository.
Kuzawa, Christopher W; Eisenberg, Dan T A
2012-01-01
Birth weight (BW) predicts many health outcomes, but the relative contributions of genes and environmental factors to BW remain uncertain. Some studies report stronger mother-offspring than father-offspring BW correlations, with attenuated father-offspring BW correlations when the mother is stunted. These findings have been interpreted as evidence that maternal genetic or environmental factors play an important role in determining birth size, with small maternal size constraining paternal genetic contributions to offspring BW. Here we evaluate mother-offspring and father-offspring BW associations and test whether maternal stunting constrains genetic contributions to offspring birth size. Data include BW of offspring (n = 1,101) born to female members (n = 382) and spouses of male members (n = 275) of a birth cohort (born 1983-84) in Metropolitan Cebu, Philippines. Regression was used to relate parental and offspring BW, adjusting for confounders. Resampling testing was used to evaluate whether false paternity could explain any evidence for excess matrilineal inheritance. In a pooled model adjusting for maternal height and confounders, parental BW was a borderline-significantly stronger predictor of offspring BW in mothers compared to fathers (sex of parent interaction p = 0.068). In separate multivariate models, each kg in mother's and father's BW predicted a 271±53 g (p<0.00001) and 132±55 g (p = 0.017) increase in offspring BW, respectively. Resampling statistics suggested that false paternity rates of >25% and likely 50% would be needed to explain these differences. There was no interaction between maternal stature and maternal BW (interaction p = 0.520) or paternal BW (p = 0.545). Each kg change in mother's BW predicted twice the change in offspring BW as predicted by a change in father's BW, consistent with an intergenerational maternal effect on offspring BW. Evidence for excess matrilineal BW heritability at all levels of maternal stature points to indirect genetic, mitochondrial, or epigenetic maternal contributions to offspring fetal growth.
Redshift-space distortions with the halo occupation distribution - II. Analytic model
NASA Astrophysics Data System (ADS)
Tinker, Jeremy L.
2007-01-01
We present an analytic model for the galaxy two-point correlation function in redshift space. The cosmological parameters of the model are the matter density Ωm, power spectrum normalization σ8, and velocity bias of galaxies αv, circumventing the linear theory distortion parameter β and eliminating nuisance parameters for non-linearities. The model is constructed within the framework of the halo occupation distribution (HOD), which quantifies galaxy bias on linear and non-linear scales. We model one-halo pairwise velocities by assuming that satellite galaxy velocities follow a Gaussian distribution with dispersion proportional to the virial dispersion of the host halo. Two-halo velocity statistics are a combination of virial motions and host halo motions. The velocity distribution function (DF) of halo pairs is a complex function with skewness and kurtosis that vary substantially with scale. Using a series of collisionless N-body simulations, we demonstrate that the shape of the velocity DF is determined primarily by the distribution of local densities around a halo pair, and at fixed density the velocity DF is close to Gaussian and nearly independent of halo mass. We calibrate a model for the conditional probability function of densities around halo pairs on these simulations. With this model, the full shape of the halo velocity DF can be accurately calculated as a function of halo mass, radial separation, angle and cosmology. The HOD approach to redshift-space distortions utilizes clustering data from linear to non-linear scales to break the standard degeneracies inherent in previous models of redshift-space clustering. The parameters of the occupation function are well constrained by real-space clustering alone, separating constraints on bias and cosmology. We demonstrate the ability of the model to separately constrain Ωm,σ8 and αv in models that are constructed to have the same value of β at large scales as well as the same finger-of-god distortions at small scales.
Trajectory optimization and guidance law development for national aerospace plane applications
NASA Technical Reports Server (NTRS)
Calise, A. J.; Flandro, G. A.; Corban, J. E.
1988-01-01
The work completed to date comprises the following: a simple vehicle model representative of the aerospace plane concept in the hypersonic flight regime; fuel-optimal climb profiles for the unconstrained and dynamic-pressure-constrained cases, generated using a reduced-order dynamic model; an analytic switching condition for the transition to rocket-powered flight as orbital velocity is approached; simple feedback guidance laws for both the unconstrained and dynamic-pressure-constrained cases, derived via singular perturbation theory and a nonlinear transformation technique; and numerical simulation results for ascent to orbit in the dynamic-pressure-constrained case.
NASA Astrophysics Data System (ADS)
Ermakov, A. I.; Fu, R. R.; Castillo-Rogez, J. C.; Raymond, C. A.; Park, R. S.; Preusker, F.; Russell, C. T.; Smith, D. E.; Zuber, M. T.
2017-11-01
Ceres is the largest body in the asteroid belt, with a radius of approximately 470 km. In part due to its large mass, Ceres more closely approaches hydrostatic equilibrium than major asteroids. Pre-Dawn shape observations of Ceres revealed a shape consistent with a hydrostatic ellipsoid of revolution. The Dawn spacecraft Framing Camera has been imaging Ceres since March 2015, which has led to high-resolution shape models of the dwarf planet, while the gravity field has been determined globally to spherical harmonic degree 14 (equivalent to a spatial wavelength of 211 km) and locally to degree 18 (a wavelength of 164 km). We use these shape and gravity models to constrain Ceres' internal structure. We find a negative correlation and admittance between topography and gravity at degree 2 and order 2. Low admittances between spherical harmonic degrees 3 and 16 are well explained by an Airy isostatic compensation mechanism. Different models of isostasy give crustal densities between 1,200 and 1,400 kg/m3, with our preferred model giving a crustal density of 1,287 (+70/−87) kg/m3. The mantle density is constrained to be 2,434 (+5/−8) kg/m3. We compute the isostatic gravity anomaly and find evidence for mascon-like structures in the two biggest basins. The topographic power spectrum of Ceres and its latitude dependence suggest that viscous relaxation occurred at long wavelengths (>246 km). Our density constraints, combined with finite element modeling of viscous relaxation, suggest that the rheology and density of the shallow surface are most consistent with a rock, ice, salt, and clathrate mixture.
Constraints on Dark Energy from Baryon Acoustic Peak and Galaxy Cluster Gas Mass Measurements
NASA Astrophysics Data System (ADS)
Samushia, Lado; Ratra, Bharat
2009-10-01
We use baryon acoustic peak measurements by Eisenstein et al. and Percival et al., together with the Wilkinson Microwave Anisotropy Probe (WMAP) measurement of the apparent acoustic horizon angle, and galaxy cluster gas mass fraction measurements of Allen et al., to constrain a slowly rolling scalar field dark energy model, phiCDM, in which dark energy's energy density changes in time. We also compare our phiCDM results with those derived for two more common dark energy models: the time-independent cosmological constant model, ΛCDM, and the XCDM parameterization of dark energy's equation of state. For time-independent dark energy, the Percival et al. measurements effectively constrain spatial curvature and favor a model close to spatially flat, mostly due to the WMAP cosmic microwave background prior used in the analysis. In a spatially flat model, the Percival et al. data less effectively constrain time-varying dark energy. The joint baryon acoustic peak and galaxy cluster gas mass constraints on the phiCDM model are consistent with but tighter than those derived from other data. A time-independent cosmological constant in a spatially flat model provides a good fit to the joint data, while the α parameter in the inverse power-law potential phiCDM model is constrained to be less than about 4 at the 3σ confidence level.
Sharma, Varun; Suryawanshi, Dipak; Saggurti, Niranjan; Bharat, Shalini
2017-11-01
Accessibility and frequency of use of health care services among female sex workers (FSWs) are constrained by various factors. In this analysis, we examined the correlates of the frequency of using health care services under targeted interventions among FSWs. A sample of FSWs (N = 1,973) was obtained from the second round (2012) of the Behavioral Tracking Survey, conducted in five districts of Andhra Pradesh, a high-HIV-prevalence state in southern India. We used negative binomial regression models to analyze the frequency of utilization of health care services among FSWs. Our analysis indicates that various predisposing and enabling factors were significantly associated with visits to NGO clinics for treatment of any health problem or any sexually transmitted infection symptom, and with the number of condoms received from a peer worker or condom depot. Further research on the correlates of the frequency of health care use among FSWs is needed to develop effective intervention strategies in countries with high HIV prevalence among FSWs, and targeted interventions need more diligent implementation to reach the unreached.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, C.; et al.
We present the calibration of the Dark Energy Survey Year 1 (DES Y1) weak lensing source galaxy redshift distributions from clustering measurements. By cross-correlating the positions of source galaxies with luminous red galaxies selected by the redMaGiC algorithm, we measure the redshift distributions of the source galaxies as placed into different tomographic bins. These measurements constrain shifts in these distributions to an accuracy of ~0.02 and can be computed even when the clustering measurements do not span the full redshift range. The highest-redshift source bin is not constrained by the clustering measurements because of the minimal redshift overlap with the redMaGiC galaxies. We compare our constraints with those obtained from COSMOS 30-band photometry and find that our two very different methods produce consistent constraints.
Critical Robotic Lunar Missions
NASA Astrophysics Data System (ADS)
Plescia, J. B.
2018-04-01
Perhaps the most critical missions to understanding lunar history are in situ dating and network missions. These would constrain the volcanic and thermal history and interior structure. These data would better constrain lunar evolution models.
Universally Sloppy Parameter Sensitivities in Systems Biology Models
Gutenkunst, Ryan N; Waterfall, Joshua J; Casey, Fergal P; Brown, Kevin S; Myers, Christopher R; Sethna, James P
2007-01-01
Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a “sloppy” spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters. PMID:17922568
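To illustrate what a "sloppy" eigenvalue spectrum looks like in practice, here is a toy computation of the Fisher information (J^T J) spectrum for a sum-of-exponentials model, a classic sloppy example; the model, parameters, and time grid are illustrative and not taken from the paper.

    import numpy as np

    def model(theta, t):
        """Toy model: sum of decaying exponentials with rates exp(theta)."""
        return sum(np.exp(-np.exp(th) * t) for th in theta)

    theta0 = np.log([0.3, 1.0, 3.0])          # log-rates (illustrative)
    t = np.linspace(0.1, 5.0, 50)

    # Finite-difference Jacobian of the model outputs w.r.t. the parameters
    eps = 1e-6
    J = np.column_stack([
        (model(theta0 + eps * np.eye(3)[i], t) - model(theta0, t)) / eps
        for i in range(3)
    ])

    eigvals = np.linalg.eigvalsh(J.T @ J)     # Fisher information for unit noise
    print("sensitivity eigenvalues:", eigvals)
    print("decades spanned:", np.log10(eigvals.max() / eigvals.min()).round(1))

The roughly even spacing of the eigenvalues on a log scale, rather than any single small eigenvalue, is the signature of sloppiness described above.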
Ou, Guoliang; Tan, Shukui; Zhou, Min; Lu, Shasha; Tao, Yinghui; Zhang, Zuo; Zhang, Lu; Yan, Danping; Guan, Xingliang; Wu, Gang
2017-12-15
An interval chance-constrained fuzzy land-use allocation (ICCF-LUA) model is proposed in this study to support solving land resource management problems associated with various environmental and ecological constraints at the watershed level. The ICCF-LUA model is based on the ICCF (interval chance-constrained fuzzy) model, which couples an interval mathematical model, a chance-constrained programming model, and a fuzzy linear programming model, and can be used to deal with uncertainties expressed as intervals, probabilities, and fuzzy sets. The ICCF-LUA model can therefore reflect the tradeoff between decision makers and land stakeholders, and the tradeoff between economic benefits and eco-environmental demands. The ICCF-LUA model has been applied to land-use allocation in the Wujiang watershed, Guizhou Province, China. The results indicate that, under highly suitable land conditions, the optimized areas of cultivated land, forest land, grass land, construction land, water land, unused land, and landfill in the Wujiang watershed will be [5015, 5648] hm², [7841, 7965] hm², [1980, 2056] hm², [914, 1423] hm², [70, 90] hm², [50, 70] hm², and [3.2, 4.3] hm², respectively, and the corresponding system economic benefit will be between 6831 and 7219 billion yuan. Consequently, the ICCF-LUA model can effectively support optimized land-use allocation problems under various complicated conditions involving uncertainties, risks, economic objectives, and eco-environmental constraints. Copyright © 2017 Elsevier Ltd. All rights reserved.
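The interval part of such a model can be illustrated by solving the allocation program twice, at the pessimistic and optimistic bounds of the interval coefficients, to bracket the system benefit; the two-activity coefficients below are hypothetical, and the chance and fuzzy constraints of the full ICCF-LUA model are omitted.

    import numpy as np
    from scipy.optimize import linprog

    # Two land uses, benefit per hm^2 given as intervals [lo, hi] (hypothetical)
    benefit = np.array([[3.0, 3.6],     # cultivated land
                        [1.5, 2.0]])    # forest land
    total_area = 100.0                  # hm^2 available

    bounds = [(20.0, 80.0), (10.0, 70.0)]      # per-use area limits (hm^2)
    A_ub, b_ub = [[1.0, 1.0]], [total_area]    # areas cannot exceed the total

    results = {}
    for label, col in (("pessimistic", 0), ("optimistic", 1)):
        # linprog minimizes, so negate the benefit coefficients to maximize
        res = linprog(-benefit[:, col], A_ub=A_ub, b_ub=b_ub, bounds=bounds)
        results[label] = -res.fun

    print("benefit interval:", results["pessimistic"], "to", results["optimistic"])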
NASA Astrophysics Data System (ADS)
Wong, T. E.; Noone, D. C.; Kleiber, W.
2014-12-01
The single largest uncertainty in climate model energy balance is the surface latent heating over tropical land. Furthermore, the partitioning of the total latent heat flux into contributions from surface evaporation and plant transpiration is of great importance but notoriously poorly constrained. Resolving these issues will require better exploiting information which lies at the interface between observations and advanced modeling tools, both of which are imperfect. There are remarkably few observations which can constrain these fluxes, placing strict requirements on developing statistical methods that maximize the use of limited information to best improve models. Previous work has demonstrated the power of incorporating stable water isotopes into land surface models to further constrain ecosystem processes. We present results from a stable-water-isotope-enabled land surface model (iCLM4), including model experiments partitioning the latent heat flux into contributions from plant transpiration and surface evaporation. It is shown that the partitioning results are sensitive to the parameterization of kinetic fractionation used. We discuss and demonstrate an approach to calibrating select model parameters to observational data in a Bayesian estimation framework, requiring Markov chain Monte Carlo sampling of the posterior distribution, which is shown to constrain uncertain parameters and to inform relevant values for operational use. Finally, we discuss the application of the estimation scheme to iCLM4, including entropy as a measure of information content and the specific challenges which arise in calibrating models with a large number of parameters.
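A minimal Metropolis-Hastings sketch of the kind of Bayesian calibration described here, using a hypothetical one-parameter decay model and a Gaussian likelihood (the actual iCLM4 calibration involves far more parameters):

    import numpy as np

    rng = np.random.default_rng(0)

    def log_posterior(k, t, y_obs, sigma=0.2):
        """Gaussian log-likelihood for y = exp(-k t) plus a flat prior on k > 0."""
        if k <= 0:
            return -np.inf
        resid = y_obs - np.exp(-k * t)
        return -0.5 * np.sum((resid / sigma) ** 2)

    # Synthetic observations from a true k = 0.7 (illustrative)
    t = np.linspace(0, 5, 25)
    y_obs = np.exp(-0.7 * t) + rng.normal(0, 0.2, t.size)

    k, lp = 1.0, log_posterior(1.0, t, y_obs)
    chain = []
    for _ in range(5000):
        k_prop = k + rng.normal(0, 0.1)             # random-walk proposal
        lp_prop = log_posterior(k_prop, t, y_obs)
        if np.log(rng.uniform()) < lp_prop - lp:    # accept/reject step
            k, lp = k_prop, lp_prop
        chain.append(k)

    post = np.array(chain[1000:])                   # drop burn-in
    print("posterior mean and std of k:", post.mean().round(3), post.std().round(3))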
Spectral characteristics of background error covariance and multiscale data assimilation
Li, Zhijin; Cheng, Xiaoping; Gustafson, Jr., William I.; ...
2016-05-17
The spatial resolutions of numerical atmospheric and oceanic circulation models have increased steadily over the past decades. Horizontal grid spacing down to the order of 1 km is now often used to resolve cloud systems in the atmosphere and sub-mesoscale circulation systems in the ocean. These fine-resolution models encompass a wide range of temporal and spatial scales, across which dynamical and statistical properties vary. In particular, dynamic flow systems at small scales can be spatially localized and temporally intermittent. Difficulties of current data assimilation algorithms for such fine-resolution models are examined numerically and theoretically. Our analysis shows that the background error correlation length scale is larger than 75 km for streamfunctions and larger than 25 km for water vapor mixing ratios, even for a 2-km resolution model. A theoretical analysis suggests that such correlation length scales prevent the currently used data assimilation schemes from constraining spatial scales smaller than 150 km for streamfunctions and 50 km for water vapor mixing ratios. Moreover, our results highlight the need to fundamentally modify currently used data assimilation algorithms for assimilating high-resolution observations into the aforementioned fine-resolution models. Lastly, within the framework of four-dimensional variational data assimilation, a multiscale methodology based on scale decomposition is suggested and challenges are discussed.
NASA Astrophysics Data System (ADS)
Reading, A. M.; Staal, T.; Halpin, J.; Whittaker, J. M.; Morse, P. E.
2017-12-01
The lithosphere of East Antarctica is one of the least explored regions of the planet, yet it is gaining in importance in global scientific research. Continental heat flux density and 3D glacial isostatic adjustment studies, for example, rely on a good knowledge of the deep structure in constraining model inputs. In this contribution, we use a multidisciplinary approach to constrain lithospheric domains. To seismic tomography models, we add constraints from magnetic studies and also new geological constraints. Geological knowledge exists around the periphery of East Antarctica and is reinforced by knowledge of plate tectonic reconstructions. The subglacial geology of the Antarctic hinterland is largely unknown, but the plate reconstructions allow the well-posed extrapolation of major terranes into the interior of the continent, guided by the seismic tomography and magnetic images. We find that the northern boundary of the lithospheric domain centred on the Gamburtsev Subglacial Mountains has a possible trend that runs south of the Lambert Glacier region, turning coastward through Wilkes Land. Other periphery-to-interior connections are less well constrained, and the possibility of lithospheric domains that are entirely sub-glacial is high. We develop this framework to include a probabilistic method of handling alternate models and quantifiable uncertainties. We also show first results in using a Bayesian approach to predicting lithospheric boundaries from multivariate data. Within the newly constrained domains, we constrain heat flux density as the sum of the basal heat flux and the upper crustal heat flux. The basal heat flux is constrained by geophysical methods, while the upper crustal heat flux is constrained by geology or predicted geology. In addition to heat flux constraints, we also consider the variations in friction experienced by moving ice sheets due to varying geology.
Structural Brain Connectivity Constrains within-a-Day Variability of Direct Functional Connectivity
Park, Bumhee; Eo, Jinseok; Park, Hae-Jeong
2017-01-01
The idea that structural white matter connectivity constrains functional connectivity (interactions among brain regions) has widely been explored in studies of brain networks; studies have mostly focused on the “average” strength of functional connectivity. The question of how structural connectivity constrains the “variability” of functional connectivity remains unresolved. In this study, we investigated the variability of resting state functional connectivity that was acquired every 3 h within a single day from 12 participants (eight time sessions within a 24-h period, 165 scans per session). Three different types of functional connectivity (functional connectivity based on Pearson correlation, direct functional connectivity based on partial correlation, and the pseudo functional connectivity produced by their difference) were estimated from resting state functional magnetic resonance imaging data along with structural connectivity defined using fiber tractography of diffusion tensor imaging. Those types of functional connectivity were evaluated with regard to properties of structural connectivity (fiber streamline counts and lengths) and types of structural connectivity such as intra-/inter-hemispheric edges and topological edge types in the rich club organization. We observed that the structural connectivity constrained the variability of direct functional connectivity more than pseudo-functional connectivity and that the constraints depended strongly on structural connectivity types. The structural constraints were greater for intra-hemispheric and heterologous inter-hemispheric edges than homologous inter-hemispheric edges, and feeder and local edges than rich club edges in the rich club architecture. While each edge was highly variable, the multivariate patterns of edge involvement, especially the direct functional connectivity patterns among the rich club brain regions, showed low variability over time. This study suggests that structural connectivity not only constrains the strength of functional connectivity, but also the within-a-day variability of functional connectivity and connectivity patterns, particularly the direct functional connectivity among brain regions. PMID:28848416
NASA Astrophysics Data System (ADS)
Quan, Lulin; Yang, Zhixin
2010-05-01
To address issues in the area of design customization, this paper presents the specification and application of constrained surface deformation and reports an experimental performance comparison of three prevailing similarity assessment algorithms in the constrained surface deformation domain. Constrained surface deformation is a promising method that supports various downstream applications of customized design. Similarity assessment is regarded as the key technology for inspecting the success of a new design: it measures the difference between the deformed new design and the initial sample model and indicates whether that difference is within acceptable limits. According to our theoretical analysis and pre-experiments, three similarity assessment algorithms are suitable for this domain: the shape histogram based method, the skeleton based method, and the U-system moment based method. We analyze their basic functions and implementation methodologies in detail and conduct a series of experiments in various situations to test their accuracy and efficiency using precision-recall diagrams. A shoe model is chosen as an industrial example for the experiments. The results show that the shape histogram based method achieved the best performance in the comparison. Based on this result, we propose a novel approach that integrates surface constraints and the shape histogram description with an adaptive weighting method, which emphasizes the role of constraints during the assessment. Limited initial experimental results demonstrate that our algorithm outperforms the other three algorithms. A clear direction for future development is also drawn at the end of the paper.
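The shape histogram idea in its simplest form, the D2 shape distribution, can be sketched as follows: sample pairwise distances on each surface's point cloud, histogram them, and compare. The point clouds and bin settings are illustrative, and the adaptive constraint weighting proposed in the paper is not reproduced.

    import numpy as np

    rng = np.random.default_rng(1)

    def d2_histogram(points, n_pairs=20000, bins=32, r_max=2.5):
        """Histogram of distances between random point pairs (D2 shape descriptor)."""
        i = rng.integers(0, len(points), n_pairs)
        j = rng.integers(0, len(points), n_pairs)
        d = np.linalg.norm(points[i] - points[j], axis=1)
        h, _ = np.histogram(d, bins=bins, range=(0, r_max), density=True)
        return h

    # Illustrative point clouds: a unit sphere and a slightly deformed copy
    sphere = rng.normal(size=(2000, 3))
    sphere /= np.linalg.norm(sphere, axis=1, keepdims=True)
    deformed = sphere * np.array([1.0, 1.0, 1.2])      # stretched along z

    h1, h2 = d2_histogram(sphere), d2_histogram(deformed)
    print("L1 dissimilarity between shapes:", np.abs(h1 - h2).sum().round(3))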
NASA Astrophysics Data System (ADS)
Ren, Xia; Yang, Yuanxi; Zhu, Jun; Xu, Tianhe
2017-11-01
Intersatellite Link (ISL) technology helps to realize the autonomous updating of broadcast ephemeris and clock error parameters for a Global Navigation Satellite System (GNSS). ISLs constitute an important approach with which to both improve the observation geometry and extend the tracking coverage of China's Beidou Navigation Satellite System (BDS). However, ISL-only orbit determination might lead to constellation drift and rotation, and even to divergence of the orbit determination. Fortunately, predicted orbits of good precision can be used as a priori information with which to constrain the estimated satellite orbit parameters. Therefore, the precision of autonomous satellite orbit determination can be improved by considering a priori orbit information, and vice versa. However, errors of rotation and translation in the a priori orbit will remain in the final result. This paper proposes a constrained precise orbit determination (POD) method for a sub-constellation of the new Beidou satellite constellation with only a few ISLs. The observation model of dual one-way measurements eliminating satellite clock errors is presented, and the orbit determination precision is analyzed under different data processing settings. The conclusions are as follows. (1) With ISLs, the estimated parameters are strongly correlated, especially the positions and velocities of satellites. (2) The performance of the determined BDS orbits is improved by constraints from more precise a priori orbits. The POD precision is better than 45 m with an a priori orbit constraint of 100 m precision (e.g., orbits predicted by the telemetry, tracking, and control system), and better than 6 m with a priori orbit constraints of 10 m precision (e.g., orbits predicted by the international GNSS Monitoring & Assessment System (iGMAS)). (3) The POD precision is improved by additional ISLs. Constrained by a priori iGMAS orbits, the POD precision with two, three, and four ISLs is better than 6, 3, and 2 m, respectively. (4) The in-plane and out-of-plane links make different contributions to the observation configuration and system observability. POD with a weak observation configuration (e.g., one in-plane link and one out-of-plane link) should be tightly constrained with a priori orbits.
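The clock-elimination property of dual one-way measurements can be sketched as follows (a standard combination, with noise, biases, and light-time corrections omitted). For pseudoranges between satellites A and B with clock offsets δt_A, δt_B and geometric range ρ:

    P_{AB} = \rho + c\,(\delta t_B - \delta t_A), \qquad
    P_{BA} = \rho + c\,(\delta t_A - \delta t_B)

    \tfrac{1}{2}(P_{AB} + P_{BA}) = \rho
        \quad \text{(clock-free: constrains the orbit geometry)}
    \tfrac{1}{2}(P_{AB} - P_{BA}) = c\,(\delta t_B - \delta t_A)
        \quad \text{(geometry-free: constrains the relative clock offset)}

The sum combination is what enters the orbit determination, which is why satellite clock errors drop out of the observation model described above.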
Evaluation of tropical Pacific observing systems using NCEP and GFDL ocean data assimilation systems
NASA Astrophysics Data System (ADS)
Xue, Yan; Wen, Caihong; Yang, Xiaosong; Behringer, David; Kumar, Arun; Vecchi, Gabriel; Rosati, Anthony; Gudgel, Rich
2017-08-01
The TAO/TRITON array is the cornerstone of the tropical Pacific and ENSO observing system. Motivated by the recent rapid decline of the TAO/TRITON array, the potential utility of TAO/TRITON was assessed for ENSO monitoring and prediction. The analysis focused on the period when observations from Argo floats were also available. We coordinated observing system experiments (OSEs) using the global ocean data assimilation system (GODAS) from the National Centers for Environmental Prediction and the ensemble coupled data assimilation (ECDA) from the Geophysical Fluid Dynamics Laboratory for the period 2004-2011. Four OSE simulations were conducted with inclusion of different subsets of in situ profiles: all profiles (XBT, moorings, Argo), all except the moorings, all except the Argo and no profiles. For evaluation of the OSE simulations, we examined the mean bias, standard deviation difference, root-mean-square difference (RMSD) and anomaly correlation against observations and objective analyses. Without assimilation of in situ observations, both GODAS and ECDA had large mean biases and RMSD in all variables. Assimilation of all in situ data significantly reduced mean biases and RMSD in all variables except zonal current at the equator. For GODAS, the mooring data is critical in constraining temperature in the eastern and northwestern tropical Pacific, while for ECDA both the mooring and Argo data is needed in constraining temperature in the western tropical Pacific. The Argo data is critical in constraining temperature in off-equatorial regions for both GODAS and ECDA. For constraining salinity, sea surface height and surface current analysis, the influence of Argo data was more pronounced. In addition, the salinity data from the TRITON buoys played an important role in constraining salinity in the western Pacific. GODAS was more sensitive to withholding Argo data in off-equatorial regions than ECDA because it relied on local observations to correct model biases and there were few XBT profiles in those regions. The results suggest that multiple ocean data assimilation systems should be used to assess sensitivity of ocean analyses to changes in the distribution of ocean observations to get more robust results that can guide the design of future tropical Pacific observing systems.
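The evaluation metrics named above are standard; as a compact sketch, RMSD and anomaly correlation can be computed as follows, with random placeholder fields standing in for an OSE analysis, a reference, and a climatology.

    import numpy as np

    rng = np.random.default_rng(2)

    def rmsd(sim, ref):
        return float(np.sqrt(np.mean((sim - ref) ** 2)))

    def anomaly_correlation(sim, ref, clim):
        """Correlation of anomalies (departures from climatology) over all points."""
        a, b = (sim - clim).ravel(), (ref - clim).ravel()
        return float(np.corrcoef(a, b)[0, 1])

    # Placeholder fields standing in for an OSE temperature analysis vs. observations
    clim = 20.0 + rng.normal(0, 1.0, (50, 50))     # climatology
    ref = clim + rng.normal(0, 0.5, (50, 50))      # "observed" state
    sim = ref + rng.normal(0, 0.3, (50, 50))       # OSE run: reference plus error

    print("RMSD:", round(rmsd(sim, ref), 3))
    print("anomaly correlation:", round(anomaly_correlation(sim, ref, clim), 3))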
Cobalt adatoms on graphene: Effects of anisotropies on the correlated electronic structure
NASA Astrophysics Data System (ADS)
Mozara, R.; Valentyuk, M.; Krivenko, I.; Şaşıoǧlu, E.; Kolorenč, J.; Lichtenstein, A. I.
2018-02-01
Impurities on surfaces experience a geometric symmetry breaking induced not only by the on-site crystal-field splitting and the orbital-dependent hybridization, but also by different screening of the Coulomb interaction in different directions. We present a many-body study of the Anderson impurity model representing a Co adatom on graphene, taking into account all anisotropies of the effective Coulomb interaction, which we obtained by the constrained random-phase approximation. The most pronounced differences are naturally displayed by the many-body self-energy projected onto the single-particle states. For the solution of the Anderson impurity model and analytical continuation of the Matsubara data, we employed new implementations of the continuous-time hybridization expansion quantum Monte Carlo and the stochastic optimization method, and we verified the results in parallel with the exact diagonalization method.
Simulating flight boundary conditions for orbiter payload modal survey
NASA Technical Reports Server (NTRS)
Chung, Y. T.; Sernaker, M. L.; Peebles, J. H.
1993-01-01
An approach to simulate the characteristics of the payload/orbiter interfaces for the payload modal survey was developed. The flexure designed for this approach is required to provide adequate stiffness separation between the free and constrained interface degrees of freedom to closely resemble the flight boundary condition. Payloads will behave linearly and demonstrate a modal effective mass distribution and load path similar to flight if the flexure fixture is used for the payload modal survey. The potential non-linearities caused by trunnion slippage during a conventional fixed-base modal survey may be eliminated. Consequently, the effort to correlate the test and analysis models can be significantly reduced. An example is given to illustrate the selection and the sensitivity of the flexure stiffness. The advantages of using flexure fixtures for the modal survey and for analytical model verification are also demonstrated.
Interstellar Travel and Galactic Colonization: Insights from Percolation Theory and the Yule Process
NASA Astrophysics Data System (ADS)
Lingam, Manasvi
2016-06-01
In this paper, percolation theory is employed to place tentative bounds on the probability p of interstellar travel and the emergence of a civilization (or panspermia) that colonizes the entire Galaxy. The ensuing ramifications with regard to the Fermi paradox are also explored. In particular, it is suggested that the correlation function of inhabited exoplanets can be used to observationally constrain p in the near future. It is shown, using a mathematical evolution model known as the Yule process, that the probability distribution for civilizations with a given number of colonized worlds is likely to exhibit a power-law tail. Some of the dynamical aspects of this issue, including the question of timescales and generalizations of percolation theory, are also studied. The limitations of these models, and other avenues for future inquiry, are also outlined.
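A quick simulation in the spirit of the Yule process invoked here shows the heavy tail in the number of worlds per civilization; the innovation probability and event count below are illustrative.

    import numpy as np

    rng = np.random.default_rng(3)

    # Yule-Simon flavour of the process: at each event, with probability alpha a new
    # single-world civilization appears; otherwise an existing civilization colonizes
    # one more world, chosen with probability proportional to its current size.
    alpha, events = 0.3, 20000
    sizes = [1]

    for _ in range(events):
        if rng.uniform() < alpha:
            sizes.append(1)
        else:
            probs = np.array(sizes, dtype=float)
            probs /= probs.sum()
            sizes[rng.choice(len(sizes), p=probs)] += 1

    sizes = np.array(sizes)
    # A power-law tail shows up as slowly decaying fractions across doubling sizes
    for n in (1, 2, 4, 8, 16, 32):
        print(f"fraction of civilizations with > {n:2d} worlds: {(sizes > n).mean():.4f}")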
Small-kernel, constrained least-squares restoration of sampled image data
NASA Technical Reports Server (NTRS)
Hazra, Rajeeb; Park, Stephen K.
1992-01-01
Following the work of Park (1989), who extended a derivation of the Wiener filter based on the incomplete discrete/discrete model to a more comprehensive end-to-end continuous/discrete/continuous model, it is shown that a derivation of the constrained least-squares (CLS) filter based on the discrete/discrete model can also be extended to this more comprehensive continuous/discrete/continuous model. This results in an improved CLS restoration filter, which can be efficiently implemented as a small-kernel convolution in the spatial domain.
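For reference, the classical discrete frequency-domain CLS restoration filter (the starting point that the paper extends to the continuous/discrete/continuous model) can be sketched as follows; the blur kernel, image, and regularization weight are illustrative.

    import numpy as np

    def cls_restore(g, h, lam=0.01):
        """Constrained least-squares restoration of g = h * f, penalizing the
        Laplacian of the estimate (classical frequency-domain CLS filter)."""
        rows, cols = g.shape
        H = np.fft.fft2(h, s=(rows, cols))
        # Discrete Laplacian as the smoothness-constraint operator
        p = np.zeros((rows, cols))
        p[:3, :3] = [[0, -1, 0], [-1, 4, -1], [0, -1, 0]]
        P = np.fft.fft2(p)
        G = np.fft.fft2(g)
        F = np.conj(H) * G / (np.abs(H) ** 2 + lam * np.abs(P) ** 2)
        return np.real(np.fft.ifft2(F))

    # Illustrative use: blur a random "image" with a 5x5 box kernel, then restore
    rng = np.random.default_rng(4)
    f = rng.uniform(size=(64, 64))
    h = np.ones((5, 5)) / 25.0
    g = np.real(np.fft.ifft2(np.fft.fft2(h, s=f.shape) * np.fft.fft2(f)))
    print("mean restoration error:", round(float(np.abs(cls_restore(g, h) - f).mean()), 4))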
Do gamma-ray burst sources repeat?
NASA Technical Reports Server (NTRS)
Meegan, Charles A.; Hartmann, Dieter H.; Brainerd, J. J.; Briggs, Michael S.; Paciesas, William S.; Pendleton, Geoffrey; Kouveliotou, Chryssa; Fishman, Gerald; Blumenthal, George; Brock, Martin
1995-01-01
The demonstration of repeated gamma-ray bursts from an individual source would severely constrain burst source models. Recent reports (Quashnock and Lamb, 1993; Wang and Lingenfelter, 1993) of evidence for repetition in the first BATSE burst catalog have generated renewed interest in this issue. Here, we analyze the angular distribution of 585 bursts of the second BATSE catalog (Meegan et al., 1994). We search for evidence of burst recurrence using the nearest and farthest neighbor statistic and the two-point angular correlation function. We find the data to be consistent with the hypothesis that burst sources do not repeat; however, a repeater fraction of up to about 20% of the observed bursts cannot be excluded.
Xu, Xiangtao; Medvigy, David; Wright, Stuart Joseph; ...
2017-07-04
Leaf longevity (LL) varies more than 20-fold in tropical evergreen forests, but it remains unclear how to capture these variations using predictive models. Current theories of LL that are based on carbon optimisation principles are challenging to assess quantitatively because of uncertainty across species in the ‘ageing rate’: the rate at which leaf photosynthetic capacity declines with age. Here we present a meta-analysis of 49 species across temperate and tropical biomes, demonstrating that the ageing rate of photosynthetic capacity is positively correlated with the mass-based carboxylation rate of mature leaves. We assess an improved trait-driven carbon optimality model with in situ LL data for 105 species in two Panamanian forests. Additionally, we show that our model explains over 40% of the cross-species variation in LL under contrasting light environments. Collectively, our results reveal how variation in LL emerges from carbon optimisation constrained by both leaf structural traits and the abiotic environment.
Identification of different geologic units using fuzzy constrained resistivity tomography
NASA Astrophysics Data System (ADS)
Singh, Anand; Sharma, S. P.
2018-01-01
Different geophysical inversion strategies are utilized as a component of an interpretation process that tries to separate geologic units based on the resistivity distribution. In the present study, we present the results of separating different geologic units using fuzzy constrained resistivity tomography. This was accomplished using fuzzy c-means, a clustering procedure, to improve the 2D resistivity image and the geologic separation within the iterative minimization of the inversion. First, we developed a Matlab-based inversion technique to obtain a reliable resistivity image using different geophysical data sets (electrical resistivity and electromagnetic data). Following this, the recovered resistivity model was converted into a fuzzy constrained resistivity model by assigning each model cell to the cluster with the highest membership value, using the fuzzy c-means clustering procedure during the iterative process. The efficacy of the algorithm is demonstrated using three synthetic plane-wave electromagnetic data sets and one electrical resistivity field dataset. The presented approach improves on the conventional inversion approach in differentiating geologic units, provided the correct number of geologic units is identified. Further, fuzzy constrained resistivity tomography was performed to examine the extent of uranium mineralization in the Beldih open cast mine as a case study. We also compared the geologic units identified by fuzzy constrained resistivity tomography with the geologic units interpreted from borehole information.
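A minimal fuzzy c-means iteration of the kind used to cluster model cells can be sketched as follows, on illustrative one-dimensional log-resistivity values; the coupling of the cluster memberships back into the inversion objective is not shown.

    import numpy as np

    rng = np.random.default_rng(5)

    def fuzzy_c_means(x, c=2, m=2.0, iters=50):
        """Basic fuzzy c-means on feature vectors x, returning centers and memberships."""
        n = len(x)
        u = rng.dirichlet(np.ones(c), size=n)             # random initial memberships
        for _ in range(iters):
            w = u ** m
            centers = (w.T @ x) / w.sum(axis=0)[:, None]  # fuzzily weighted centers
            d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2) + 1e-12
            u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
        return centers, u

    # Illustrative log10-resistivity of model cells from two geologic units
    x = np.concatenate([rng.normal(1.0, 0.2, 100),        # conductive unit
                        rng.normal(3.0, 0.2, 100)])[:, None]
    centers, u = fuzzy_c_means(x)
    labels = u.argmax(axis=1)                             # hard assignment per cell
    print("cluster centers (log10 ohm-m):", centers.ravel().round(2))
    print("cells per cluster:", np.bincount(labels))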
How well can future CMB missions constrain cosmic inflation?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, Jérôme; Vennin, Vincent; Ringeval, Christophe, E-mail: jmartin@iap.fr, E-mail: christophe.ringeval@uclouvain.be, E-mail: vennin@iap.fr
2014-10-01
We study how the next generation of Cosmic Microwave Background (CMB) measurement missions (such as EPIC, LiteBIRD, PRISM and COrE) will be able to constrain the inflationary landscape in the hardest to disambiguate situation in which inflation is simply described by single-field slow-roll scenarios. Considering the proposed PRISM and LiteBIRD satellite designs, we simulate mock data corresponding to five different fiducial models having values of the tensor-to-scalar ratio ranging from 10^-1 down to 10^-7. We then compute the Bayesian evidences and complexities of all Encyclopædia Inflationaris models in order to assess the constraining power of PRISM alone and LiteBIRD complemented with the Planck 2013 data. Within slow-roll inflation, both designs have comparable constraining power and can rule out about three quarters of the inflationary scenarios, compared to one third for Planck 2013 data alone. However, we also show that PRISM can constrain the scalar running and has the capability to detect a violation of slow roll at second order. Finally, our results suggest that describing an inflationary model by its potential shape only, without specifying a reheating temperature, will no longer be possible given the accuracy level reached by the future CMB missions.
Aftershock triggering by complete Coulomb stress changes
Kilb, Debi; Gomberg, J.; Bodin, P.
2002-01-01
We examine the correlation between seismicity rate change following the 1992, M7.3, Landers, California, earthquake and characteristics of the complete Coulomb failure stress (CFS) changes (ΔCFS(t)) that this earthquake generated. At close distances the time-varying "dynamic" portion of the stress change depends on how the rupture develops temporally and spatially and arises from radiated seismic waves and from permanent coseismic fault displacement. The permanent "static" portion (ΔCFS) depends only on the final coseismic displacement. ΔCFS diminishes much more rapidly with distance than the transient, dynamic stress changes. A common interpretation of the strong correlation between ΔCFS and aftershocks is that load changes can advance or delay failure. Stress changes may also promote failure by physically altering properties of the fault or its environs. Because it is transient, ΔCFS(t) can alter the failure rate only by the latter means. We calculate both ΔCFS and the maximum positive value of ΔCFS(t) (peak ΔCFS(t)) using a reflectivity program. Input parameters are constrained by modeling Landers displacement seismograms. We quantify the correlation between maps of seismicity rate changes and maps of modeled ΔCFS and peak ΔCFS(t) and find agreement for both models. However, rupture directivity, which does not affect ΔCFS, creates larger peak ΔCFS(t) values northwest of the main shock. This asymmetry is also observed in seismicity rate changes but not in ΔCFS. This result implies that dynamic stress changes are as effective as static stress changes in triggering aftershocks and may trigger earthquakes long after the waves have passed.
THE CLUSTERING CHARACTERISTICS OF H I-SELECTED GALAXIES FROM THE 40% ALFALFA SURVEY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, Ann M.; Giovanelli, Riccardo; Haynes, Martha P.
The 40% Arecibo Legacy Fast ALFA survey catalog (α.40) of ~10,150 H I-selected galaxies is used to analyze the clustering properties of gas-rich galaxies. By employing the Landy-Szalay estimator and a full covariance analysis for the two-point galaxy-galaxy correlation function, we obtain the real-space correlation function and model it as a power law, ξ(r) = (r/r₀)^(-γ), on scales <10 h⁻¹ Mpc. As the largest sample of blindly H I-selected galaxies to date, α.40 provides detailed understanding of the clustering of this population. We find γ = 1.51 ± 0.09 and r₀ = 3.3 (+0.3, -0.2) h⁻¹ Mpc, reinforcing the understanding that gas-rich galaxies represent the most weakly clustered galaxy population known; we also observe a departure from a pure power-law shape at intermediate scales, as predicted in ΛCDM halo occupation distribution models. Furthermore, we measure the bias parameter for the α.40 galaxy sample and find that H I galaxies are severely antibiased on small scales, but only weakly antibiased on large scales. The robust measurement of the correlation function for gas-rich galaxies obtained via the α.40 sample constrains models of the distribution of H I in simulated galaxies, and will be employed to better understand the role of gas in environmentally dependent galaxy evolution.
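For readers unfamiliar with the estimator, the sketch below shows the Landy-Szalay combination of normalized pair counts and a log-space fit of the quoted power-law form ξ(r) = (r/r₀)^(-γ). The counts and scatter are illustrative stand-ins, not the α.40 measurements.

```python
import numpy as np

def landy_szalay(dd, dr, rr, n_d, n_r):
    """Landy-Szalay estimator xi = (DD - 2DR + RR)/RR, with raw pair counts
    normalized by the number of data (n_d) and random (n_r) points."""
    dd_n = dd / (n_d * (n_d - 1) / 2.0)
    dr_n = dr / (n_d * n_r)
    rr_n = rr / (n_r * (n_r - 1) / 2.0)
    return dd_n / rr_n - 2.0 * dr_n / rr_n + 1.0

def fit_power_law(r, xi):
    """Fit xi(r) = (r/r0)^(-gamma) by linear regression in log space."""
    g, c = np.polyfit(np.log(r), np.log(xi), 1)   # log xi = g*log r + c
    gamma = -g
    r0 = np.exp(c / gamma)                        # intercept c = gamma*log r0
    return r0, gamma

print(landy_szalay(1200, 9800, 20000, n_d=100, n_r=400))  # toy counts

# Illustrative points scattered around the quoted best fit
r = np.logspace(-0.5, 1.0, 8)                     # h^-1 Mpc
xi = (r / 3.3) ** -1.51 * np.exp(np.random.default_rng(0).normal(0, 0.05, 8))
print(fit_power_law(r, xi))                       # ~ (3.3, 1.51)
```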
Indices of climate change based on patterns from CMIP5 models, and the range of projections
NASA Astrophysics Data System (ADS)
Watterson, I. G.
2018-05-01
Changes in temperature, precipitation, and other variables simulated by 40 current climate models for the 21st century are approximated as the product of the global mean warming and a spatial pattern of scaled changes. These fields of standardized change contain consistent features of simulated change, such as larger warming over land and increased high-latitude precipitation. However, they also differ across the ensemble, with standard deviations exceeding 0.2 for temperature over most continents, and 6% per degree for tropical precipitation. These variations are found to correlate, often strongly, with indices based on modes of interannual variability. Annular mode indices correlate, across the 40 models, with regional pressure changes and seasonal rainfall changes, particularly in South America and Europe. Equatorial ocean warming rates link to widespread anomalies, similar to ENSO. A Pacific-Indian Dipole (PID) index representing the gradient in warming across the maritime continent is correlated with Australian rainfall with a coefficient of r = -0.8. The component of equatorial warming orthogonal to this index, denoted EQN, has strong links to temperature and rainfall in Africa and the Americas. It is proposed that these indices and their associated patterns might be termed "modes of climate change". This is supported by an analysis of empirical orthogonal functions for the ensemble of standardized fields. Can such indices be used to help constrain projections? The relative similarity of the PID and EQN values of change, from models that have more skilful simulation of the present-climate tropical pressure fields, provides a basis for this.
Detection of Antiferromagnetic Correlations in the Fermi-Hubbard Model
NASA Astrophysics Data System (ADS)
Hulet, Randall
2014-05-01
The Hubbard model, consisting of a cubic lattice with on-site interactions and kinetic energy arising from tunneling to nearest neighbors, is a "standard model" of strongly correlated many-body physics, and it may also contain the essential ingredients of high-temperature superconductivity. While the Hamiltonian has only two terms, it cannot be numerically solved for an arbitrary density of spin-1/2 fermions due to exponential growth in the basis size. At a density of one spin-1/2 particle per site, however, the Hubbard model is known to exhibit antiferromagnetism at temperatures below the Néel temperature TN, a property shared by most of the undoped parent compounds of high-Tc superconductors. The realization of antiferromagnetism in a 3D optical lattice with atomic fermions has been impeded by the inability to attain sufficiently low temperatures. We have developed a method to perform evaporative cooling in a 3D cubic lattice by compensating the confinement envelope of the infrared optical lattice beams with blue-detuned laser beams. Evaporation can be controlled by the intensity of these non-retroreflected compensating beams. We observe significantly lower temperatures of a two-spin-component gas of 6Li atoms in the lattice using this method. The cooling enables us to detect the development of short-range antiferromagnetic correlations using spin-sensitive Bragg scattering of light. Comparison with quantum Monte Carlo constrains the temperature in the lattice to 2-3 TN. We will discuss the prospects of attaining even lower temperatures with this method. Supported by DARPA/ARO, ONR, and NSF.
NASA Astrophysics Data System (ADS)
SUN, D.; TONG, L.
2002-05-01
A detailed model for beams with partially debonded active constrained layer damping (ACLD) treatment is presented. In this model, the transverse displacement of the constraining layer is considered to be non-identical to that of the host structure. In the perfectly bonded region, the viscoelastic core is modelled to carry both peel and shear stresses, while in the debonded area, it is assumed that no peel or shear stresses are transferred between the host beam and the constraining layer. The adhesive layer between the piezoelectric sensor and the host beam is also considered in this model. In active control, positive position feedback control is employed to control the first mode of the beam. Based on this model, the incompatibility of the transverse displacements of the active constraining layer and the host beam is investigated. The passive and active damping behaviors of the ACLD patch with different thicknesses, locations and lengths are examined. Moreover, the effects of debonding of the damping layer on both passive and active control are examined via a simulation example. The results show that the incompatibility of the transverse displacements is remarkable in the regions near the ends of the ACLD patch, especially for the higher-order vibration modes. It is found that a thinner damping layer may lead to larger shear strain and consequently results in larger passive and active damping. In addition to the thickness of the damping layer, its length and location are also key factors in the hybrid control. The numerical results show that edge debonding can lead to a reduction of both passive and active damping, and that the hybrid damping may be more sensitive to debonding of the damping layer than the passive damping.
NASA Astrophysics Data System (ADS)
Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong
2018-05-01
In this study, an inexact log-normal-based stochastic chance-constrained programming model was developed for solving the non-point source pollution issues caused by agricultural activities. Compared to the general stochastic chance-constrained programming model, the main advantage of the proposed model is that it allows random variables to be expressed as a log-normal distribution, rather than a general normal distribution, so that deviations in the solutions caused by unrealistic distributional assumptions are avoided. The agricultural system management in the Erhai Lake watershed was used as a case study, where critical system factors, including rainfall and runoff amounts, show the characteristics of a log-normal distribution. Several interval solutions were obtained under different constraint-satisfaction levels, which were useful in evaluating the trade-off between system economy and reliability. The results show that the proposed model could help decision makers design optimal production patterns under complex uncertainties. The successful application of this model is expected to provide a good example for agricultural management in many other watersheds.
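The key step the abstract describes, replacing a normal with a log-normal random variable in a chance constraint, can be made concrete. The sketch below converts a single constraint Pr(x ≤ B) ≥ α with a log-normal right-hand side B into its deterministic equivalent; the parameter values are hypothetical, not the Erhai Lake data.

```python
import numpy as np
from scipy.stats import norm

def lognormal_rhs_equivalent(mu, sigma, alpha):
    """Deterministic equivalent of Pr(x <= B) >= alpha for B ~ LogNormal(mu, sigma).

    Since Pr(B >= x) >= alpha  <=>  ln(x) <= mu + sigma * Phi^{-1}(1 - alpha),
    the chance constraint reduces to x <= exp(mu + sigma * Phi^{-1}(1 - alpha)).
    """
    return np.exp(mu + sigma * norm.ppf(1.0 - alpha))

# Example: allowable load B with median exp(mu) = 120 units and sigma = 0.4
for alpha in (0.80, 0.90, 0.99):
    print(alpha, round(lognormal_rhs_equivalent(np.log(120.0), 0.4, alpha), 1))
# Higher reliability levels shrink the feasible region, which is exactly the
# economy-versus-reliability trade-off explored by the interval solutions.
```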
NASA Astrophysics Data System (ADS)
Gutknecht, B. D.; Götze, H.-J.; Jahr, T.; Jentzsch, G.; Mahatsente, R.; Zeumann, St.
2014-11-01
The quality of gravity modelling of the Earth's lithosphere is well known to be heavily dependent on the limited availability of terrestrial gravity data. More recently, however, interest has grown within the geoscientific community to utilise the homogeneously measured satellite gravity and gravity gradient data for lithospheric-scale modelling. Here, we present an interdisciplinary approach to determine the state of stress and rate of deformation in the Central Andean subduction system. We employed gravity data from terrestrial, satellite-based and combined sources using multiple methods to constrain stress, strain and gravitational potential energy (GPE). Well-constrained 3D density models, which were partly optimised using the combined regional gravity model IMOSAGA01C (Hosse et al. in Surv Geophys, 2014, this issue), were used as bases for the computation of stress anomalies on the top of the subducting oceanic Nazca plate and GPE relative to the base of the lithosphere. The geometries and physical parameters of the 3D density models were used for the computation of stresses and uplift rates in the dynamic modelling. The stress distributions, as derived from the static and dynamic modelling, reveal distinct positive anomalies of up to 80 MPa along the coastal Jurassic batholith belt. The anomalies correlate well with major seismicity in the shallow parts of the subduction system. Moreover, the pattern of stress distributions in the Andean convergent zone varies both along the north-south and west-east directions, suggesting that the continental fore-arc is highly segmented. Estimates of GPE show that the high Central Andes might be in a state of horizontal deviatoric tension. Models of gravity gradients from the Gravity field and steady-state Ocean Circulation Explorer (GOCE) satellite mission were used to compute Bouguer-like gradient anomalies at 8 km above sea level. The analysis suggests that data from GOCE add significant value to the interpretation of lithospheric structures, given that the appropriate topographic correction is applied.
Maximum entropy production: Can it be used to constrain conceptual hydrological models?
M.C. Westhoff; E. Zehe
2013-01-01
In recent years, optimality principles have been proposed to constrain hydrological models. The principle of maximum entropy production (MEP) is one of the proposed principles and is the subject of this study. It states that a steady-state system is organized in such a way that entropy production is maximized. Although successful applications have been reported in...
Constraining cosmic curvature by using age of galaxies and gravitational lenses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rana, Akshay; Mahajan, Shobhit; Mukherjee, Amitabha
We use two model-independent methods to constrain the curvature of the universe. In the first method, we study the evolution of the curvature parameter (Ω_k^0) with redshift by using the observations of the Hubble parameter and transverse comoving distances obtained from the age of galaxies. Secondly, we also use an indirect method based on the mean image separation statistics of gravitationally lensed quasars. The basis of this methodology is that the average image separation of lensed images will show a positive, negative or zero correlation with the source redshift in a closed, open or flat universe, respectively. In order to smoothen the datasets used in both methods, we use a non-parametric method, namely Gaussian Process (GP) regression. Finally, from the first method we obtain Ω_k^0 = 0.025 ± 0.57 for a presumed flat universe, while the cosmic curvature remains constant throughout the redshift region 0 < z < 1.37, which indicates that the universe may be homogeneous. Moreover, the combined result from both methods suggests that the universe is marginally closed. However, a flat universe can be accommodated at the 3σ level.
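As an illustration of the smoothing step, the sketch below applies Gaussian Process regression to synthetic H(z) points; the data values are invented stand-ins for the age-of-galaxies measurements, and the kernel choice is an assumption.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Synthetic H(z) points (km/s/Mpc), standing in for cosmic-chronometer data
z = np.array([0.07, 0.20, 0.35, 0.48, 0.60, 0.78, 0.90, 1.04, 1.30])
H = np.array([69.0, 73.0, 83.0, 97.0, 104.0, 105.0, 117.0, 154.0, 168.0])
sigma_H = np.full_like(H, 8.0)                 # assumed measurement errors

kernel = ConstantKernel(100.0 ** 2) * RBF(length_scale=0.5)
gp = GaussianProcessRegressor(kernel=kernel, alpha=sigma_H ** 2,
                              n_restarts_optimizer=5)
gp.fit(z[:, None], H)

z_grid = np.linspace(0.0, 1.37, 50)
H_smooth, H_err = gp.predict(z_grid[:, None], return_std=True)
# The smoothed H(z) and its uncertainty band would then feed the
# curvature estimator for Omega_k^0 over 0 < z < 1.37.
```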
Precision probes of QCD at high energies
Alioli, Simone; Farina, Marco; Pappadopulo, Duccio; ...
2017-07-20
New physics that is too heavy to be produced directly can leave measurable imprints on the tails of kinematic distributions at the LHC. We use energetic QCD processes to perform novel measurements of the Standard Model (SM) Effective Field Theory. We show that the dijet invariant mass spectrum, and the inclusive jet transverse momentum spectrum, are sensitive to a dimension-6 operator that modifies the gluon propagator at high energies. The dominant effect is constructive or destructive interference with SM jet production. Here, we compare differential next-to-leading order predictions from POWHEG to public 7 TeV jet data, including scale, PDF, and experimental uncertainties and their respective correlations. Furthermore, we constrain a New Physics (NP) scale of 3.5 TeV with current data. We project the reach of future 13 and 100 TeV measurements, which we estimate to be sensitive to NP scales of 8 and 60 TeV, respectively. As an application, we apply our bounds to constrain heavy vector octet colorons that couple to the QCD current. We conclude that effective operators will surpass bump hunts, in terms of coloron mass reach, even for sequential couplings.
The inheritance of a Mesozoic landscape in western Scandinavia
Fredin, Ola; Viola, Giulio; Zwingmann, Horst; Sørlie, Ronald; Brönner, Marco; Lie, Jan-Erik; Grandal, Else Margrethe; Müller, Axel; Margreth, Annina; Vogt, Christoph; Knies, Jochen
2017-01-01
In-situ weathered bedrock, saprolite, is locally found in Scandinavia, where it is commonly thought to represent pre-Pleistocene weathering possibly associated with landscape formation. The age of weathering, however, remains loosely constrained, which has an impact on existing geological and landscape evolution models and morphotectonic correlations. Here we provide new geochronological evidence that some of the low-altitude basement landforms on- and offshore southwestern Scandinavia are a rejuvenated geomorphological relic from Mesozoic times. K-Ar dating of authigenic, syn-weathering illite from saprolitic remnants constrains original basement exposure to the Late Triassic (221.3±7.0–206.2±4.2 Ma), through deep weathering in a warm climate and subsequent partial mobilization of the saprolitic mantle into the overlying sediment cascade system. The data support the bulk geomorphological development of west Scandinavian coastal basement rocks during the Mesozoic and later, long-lasting relative tectonic stability. Pleistocene glaciations played an additional geomorphological role, selectively stripping the Mesozoic overburden from the landscape and carving glacial landforms down to Plio–Pleistocene times. Saprolite K-Ar dating offers unprecedented possibilities to study past weathering and landscape evolution processes. PMID:28452366
Cohen, Trevor; Blatter, Brett; Patel, Vimla
2005-01-01
Certain applications require computer systems to approximate intended human meaning. This is achievable in constrained domains with a finite number of concepts. Areas such as psychiatry, however, draw on concepts from the world-at-large. A knowledge structure with broad scope is required to comprehend such domains. Latent Semantic Analysis (LSA) is an unsupervised corpus-based statistical method that derives quantitative estimates of the similarity between words and documents from their contextual usage statistics. The aim of this research was to evaluate the ability of LSA to derive meaningful associations between concepts relevant to the assessment of dangerousness in psychiatry. An expert reference model of dangerousness was used to guide the construction of a relevant corpus. Derived associations between words in the corpus were evaluated qualitatively. A similarity-based scoring function was used to assign dangerousness categories to discharge summaries. LSA was shown to derive intuitive relationships between concepts and correlated significantly better than random with human categorization of psychiatric discharge summaries according to dangerousness. The use of LSA to derive a simulated knowledge structure can extend the scope of computer systems beyond the boundaries of constrained conceptual domains. PMID:16779020
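A minimal LSA pipeline of the kind described, with a toy corpus standing in for the expert-guided dangerousness corpus, might look like the following; the documents and category exemplars are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus standing in for the expert-guided dangerousness corpus
docs = [
    "patient expressed violent thoughts toward staff",
    "history of assault and threatening behaviour on the ward",
    "no suicidal ideation reported at discharge",
    "patient calm cooperative and future oriented",
]
vec = TfidfVectorizer()
X = vec.fit_transform(docs)                      # term-document statistics
lsa = TruncatedSVD(n_components=2, random_state=0).fit(X)
doc_vecs = lsa.transform(X)                      # documents in latent space

# Similarity-based scoring: compare a new summary with category exemplars
new = lsa.transform(vec.transform(["patient made threats of assault"]))
print(cosine_similarity(new, doc_vecs))          # highest for the violent docs
```

The truncated SVD is what gives LSA its "simulated knowledge": terms that never co-occur directly can still end up close in the latent space if they share contexts.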
Instability of liquid crystal elastomers
NASA Astrophysics Data System (ADS)
An, Ning; Li, Meie; Zhou, Jinxiong
2016-01-01
Nematic liquid crystal elastomers (LCEs) contract in the director direction but expand in other directions, perpendicular to the director, when heated. If the expansion of an LCE is constrained, compressive stress builds up in the LCE, and it wrinkles or buckles to release the stored elastic energy. Although the instability of soft materials is ubiquitous, the mechanism and programmable modulation of LCE instability has not yet been fully explored. We describe a finite element method (FEM) scheme to model the inhomogeneous deformation and instability of LCEs. A constrained LCE beam working as a valve for microfluidic flow, and a piece of LCE laminated with a nanoscale poly(styrene) (PS) film are analyzed in detail. The former uses the buckling of the LCE beam to occlude the microfluidic channel, while the latter utilizes wrinkling or buckling to measure the mechanical properties of hard film or to realize self-folding. Through rigorous instability analysis, we predict the critical conditions for the onset of instability, the wavelength and amplitude evolution of instability, and the instability patterns. The FEM results are found to correlate well with analytical results and reported experiments. These efforts shed light on the understanding and exploitation of the instabilities of LCEs.
Disk mass and disk heating in the spiral galaxy NGC 3223
NASA Astrophysics Data System (ADS)
Gentile, G.; Tydtgat, C.; Baes, M.; De Geyter, G.; Koleva, M.; Angus, G. W.; de Blok, W. J. G.; Saftly, W.; Viaene, S.
2015-04-01
We present the stellar and gaseous kinematics of an Sb galaxy, NGC 3223, with the aim of determining the vertical and radial stellar velocity dispersion as a function of radius, which can help to constrain disk heating theories. Together with the observed NIR photometry, the vertical velocity dispersion is also used to determine the stellar mass-to-light (M/L) ratio, typically one of the largest uncertainties when deriving the dark matter distribution from the observed rotation curve. We find a vertical-to-radial velocity dispersion ratio of σz/σR = 1.21 ± 0.14, significantly higher than expectations from known correlations, and a weakly-constrained Ks-band stellar M/L ratio in the range 0.5-1.7, which is at the high end of (but consistent with) the predictions of stellar population synthesis models. Such a weak constraint on the stellar M/L ratio, however, does not allow us to securely determine the dark matter density distribution. To achieve this, either a statistical approach or additional data (e.g. integral-field unit) are needed. Based on observations collected at the European Southern Observatory, Chile, under proposal 68.B-0588.
A Novel Triggerless Approach for Modeling Mass Wasting Susceptibility
NASA Astrophysics Data System (ADS)
Aly, M. H.; Rowden, K. W.
2017-12-01
Common approaches for modeling mass wasting susceptibility rely on using triggers, which are catalysts for failure, as critical inputs. Frequently used triggers include removal of the toe of a slope or vegetation, and time-correlated events such as seismicity or heavy precipitation. When temporal data are unavailable, correlating triggers with a particular mass wasting event (MWE) is futile. Meanwhile, geologic structures directly influence slope stability and are typically avoided in alternative modeling approaches. Depending on the strata's dip direction, underlying geology can make a slope either stronger or weaker. To heuristically understand susceptibility and reliably infer risk, without being constrained by the previously mentioned limitations, a novel triggerless approach is conceived in this study. Core requisites include a digital elevation model and digitized geologic maps containing geologic formations delineated as polygons encompassing an adequate distribution of structural attitudes. Tolerably simple geology composed of gently deformed, relatively flat-lying Carboniferous strata with minimal faulting or monoclines, ideal for applying this new triggerless approach, is found in the Boston Mountains, NW Arkansas, where 47 MWEs are documented. Two models were then created; one model integrated Empirical Bayesian Kriging (EBK) and fuzzy logic, while the second employed a standard implementation of a weighted overlay. Statistical comparisons show that the first model identified 83% of the failure events in categories ranging from moderate to very high susceptibility, compared to only 28% for the second model. These results demonstrate that the introduced triggerless approach, by incorporating EBK and fuzzy logic, is capable of efficiently modeling mass wasting susceptibility in areas lacking temporal datasets.
The in situ transverse lamina strength of composite laminates
NASA Technical Reports Server (NTRS)
Flaggs, D. L.
1983-01-01
The objective of the work reported in this presentation is to determine the in situ transverse strength of a lamina within a composite laminate. From a fracture mechanics standpoint, in situ strength may be viewed as constrained cracking, which has been shown to be a function of both lamina thickness and the stiffness of adjacent plies that serve to constrain the cracking process. From an engineering point of view, however, constrained cracking can be perceived as an apparent increase in lamina strength. With the growing need to design more highly loaded composite structures, the concept of in situ strength may prove to be a viable means of increasing the design allowables of current and future composite material systems. A simplified one-dimensional analytical model is presented that is used to predict the strain at the onset of transverse cracking. While it is accurate only for the most constrained cases, the model is important in that the predicted failure strain is seen to be a function of a lamina's thickness d and of the extensional stiffness bEθ of the adjacent laminae that constrain crack propagation in the 90 deg laminae.
Freezing Transition Studies Through Constrained Cell Model Simulation
NASA Astrophysics Data System (ADS)
Nayhouse, Michael; Kwon, Joseph Sang-Il; Heng, Vincent R.; Amlani, Ankur M.; Orkoulas, G.
2014-10-01
In the present work, a simulation method based on cell models is used to deduce the fluid-solid transition of a system of particles that interact via a pair potential. The simulations are implemented under constant-pressure conditions on a generalized version of the constrained cell model. The constrained cell model is constructed by dividing the volume into Wigner-Seitz cells and confining each particle to a single cell. This model is a special case of a more general cell model which is formed by introducing an additional field variable that controls the number of particles per cell and, thus, the relative stability of the solid against the fluid phase. High field values force configurations with one particle per cell and thus favor the solid phase. Fluid-solid coexistence on the isotherm that corresponds to a reduced temperature of 2 is determined from constant-pressure simulations of the generalized cell model using tempering and histogram reweighting techniques. The entire fluid-solid phase boundary is determined through a thermodynamic integration technique based on histogram reweighting, using the previous coexistence point as a reference point. The vapor-liquid phase diagram is obtained from constant-pressure simulations of the unconstrained system using tempering and histogram reweighting. The phase diagram of the system is found to contain a stable critical point and a triple point. The phase diagram of the corresponding constrained cell model is also found to contain both a stable critical point and a triple point.
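A minimal sketch of the constraint itself is shown below: a Metropolis simulation in which each particle is confined to its own Wigner-Seitz cell. For brevity it assumes a Lennard-Jones potential at constant volume; the constant-pressure sampling, field variable, tempering and histogram reweighting of the actual study are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n_side, a = 4, 1.1                       # 4^3 cubic cells, cell edge a (in sigma)
N = n_side ** 3
L = n_side * a
centers = (np.indices((n_side,) * 3).reshape(3, -1).T + 0.5) * a
pos = centers.copy()                     # start each particle at its cell center

def pair_energy(p):
    """Lennard-Jones energy with minimum-image convention (full recompute; fine
    for this small illustrative system)."""
    d = p[:, None, :] - p[None, :, :]
    d -= L * np.round(d / L)
    r2 = (d ** 2).sum(-1)[np.triu_indices(N, 1)]
    inv6 = 1.0 / r2 ** 3
    return 4.0 * np.sum(inv6 * inv6 - inv6)

beta, step = 1.0 / 2.0, 0.1              # reduced temperature T* = 2
E = pair_energy(pos)
for _ in range(200):                     # Monte Carlo sweeps
    for i in rng.permutation(N):
        trial = pos[i] + rng.uniform(-step, step, 3)
        if np.any(np.abs(trial - centers[i]) > a / 2):
            continue                     # constraint: stay inside own cell
        old = pos[i].copy()
        pos[i] = trial
        E_new = pair_energy(pos)
        dE = E_new - E
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            E = E_new                    # accept move
        else:
            pos[i] = old                 # reject move
```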
Bennington, Ninfa; Thurber, Clifford; Feigl, Kurt; ,
2011-01-01
Several studies of the 2004 Parkfield earthquake have linked the spatial distribution of the event’s aftershocks to the mainshock slip distribution on the fault. Using geodetic data, we find a model of coseismic slip for the 2004 Parkfield earthquake with the constraint that the edges of coseismic slip patches align with aftershocks. The constraint is applied by encouraging the curvature of coseismic slip in each model cell to be equal to the negative of the curvature of seismicity density. The large patch of peak slip about 15 km northwest of the 2004 hypocenter found in the curvature-constrained model is in good agreement in location and amplitude with previous geodetic studies and the majority of strong motion studies. The curvature-constrained solution shows slip primarily between aftershock “streaks” with the continuation of moderate levels of slip to the southeast. These observations are in good agreement with strong motion studies, but inconsistent with the majority of published geodetic slip models. Southeast of the 2004 hypocenter, a patch of peak slip observed in strong motion studies is absent from our curvature-constrained model, but the available GPS data do not resolve slip in this region. We conclude that the geodetic slip model constrained by the aftershock distribution fits the geodetic data quite well and that inconsistencies between models derived from seismic and geodetic data can be attributed largely to resolution issues.
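The curvature constraint can be written as a regularized least-squares problem in which the discrete curvature of slip is encouraged to equal the negative curvature of the seismicity density. The toy sketch below, on a 1D line of fault cells with synthetic data, illustrates the idea; it is not the authors' inversion code, and the Green's functions are random placeholders.

```python
import numpy as np

def curvature_constrained_slip(G, d, seis_density, beta=1.0):
    """Least-squares slip inversion on a 1D line of fault cells, encouraging
    curvature(slip) = -curvature(aftershock density), a toy version of the
    constraint described above."""
    n = G.shape[1]
    Lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
           + np.diag(np.ones(n - 1), -1))          # discrete curvature operator
    target = -Lap @ seis_density                   # desired slip curvature
    A = np.vstack([G, beta * Lap])                 # stacked data + constraint
    b = np.concatenate([d, beta * target])
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m

# Toy example: 30 fault cells, 50 synthetic geodetic observations
rng = np.random.default_rng(2)
n, nobs = 30, 50
G = rng.normal(size=(nobs, n))                     # placeholder Green's functions
true_slip = np.exp(-0.5 * ((np.arange(n) - 10) / 3.0) ** 2)
d = G @ true_slip + rng.normal(0, 0.05, nobs)
density = np.exp(-0.5 * ((np.arange(n) - 10) / 3.0) ** 2)  # aftershock density
m = curvature_constrained_slip(G, d, density, beta=0.5)
```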
Hee, S.; Vázquez, J. A.; Handley, W. J.; ...
2016-12-01
Data-driven model-independent reconstructions of the dark energy equation of state w(z) are presented using Planck 2015 era CMB, BAO, SNIa and Lyman-α data. These reconstructions identify the w(z) behaviour supported by the data and show a bifurcation of the equation of state posterior in the range 1.5 < z < 3. Although the concordance ΛCDM model is consistent with the data at all redshifts in one of the bifurcated spaces, in the other a supernegative equation of state (also known as 'phantom dark energy') is identified within the 1.5σ confidence intervals of the posterior distribution. In order to identify the power of different datasets in constraining the dark energy equation of state, we use a novel formulation of the Kullback-Leibler divergence. This formalism quantifies the information the data add when moving from priors to posteriors for each possible dataset combination. The SNIa and BAO datasets are shown to provide much more constraining power in comparison to the Lyman-α datasets. Furthermore, SNIa and BAO constrain most strongly over the redshift range 0.1-0.5, whilst the Lyman-α data constrain weakly over a broader range. We do not attribute the supernegative favouring to any particular dataset, and note that the ΛCDM model was favoured at more than 2 log-units in Bayes factors over all the models tested, despite the weakly preferred w(z) structure in the data.
Constrained minimization of smooth functions using a genetic algorithm
NASA Technical Reports Server (NTRS)
Moerder, Daniel D.; Pamadi, Bandu N.
1994-01-01
The use of genetic algorithms for minimization of differentiable functions that are subject to differentiable constraints is considered. A technique is demonstrated for converting the solution of the necessary conditions for a constrained minimum into an unconstrained function minimization. This technique is extended as a global constrained optimization algorithm. The theory is applied to calculating minimum-fuel ascent control settings for an energy state model of an aerospace plane.
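The conversion described above can be illustrated on a small problem: the gradient of the Lagrangian and the constraint residual are folded into a non-negative fitness whose zero is the constrained minimum, which a simple GA then searches for. The problem, operators and tuning below are illustrative choices, not the paper's ascent-trajectory application.

```python
import numpy as np

# Problem: minimize f(x) = x1^2 + x2^2 subject to g(x) = x1 + x2 - 1 = 0.
# Necessary conditions (grad of the Lagrangian plus the constraint) become an
# unconstrained fitness whose global minimum of zero is the constrained optimum.
def fitness(v):
    x1, x2, lam = v
    grad_L = np.array([2 * x1 + lam, 2 * x2 + lam])   # grad f + lam * grad g
    g = x1 + x2 - 1.0
    return np.sum(grad_L ** 2) + g ** 2

rng = np.random.default_rng(0)
pop = rng.uniform(-2, 2, size=(60, 3))                # (x1, x2, lambda) individuals
for gen in range(200):
    f = np.array([fitness(v) for v in pop])
    parents = pop[np.argsort(f)[:20]]                 # truncation selection
    children = []
    for _ in range(40):
        a, b = parents[rng.integers(20, size=2)]
        w = rng.random(3)
        child = w * a + (1 - w) * b                   # blend crossover
        child += rng.normal(0, 0.05, 3)               # Gaussian mutation
        children.append(child)
    pop = np.vstack([parents, children])
best = pop[np.argmin([fitness(v) for v in pop])]
print(best)                                           # ~ (0.5, 0.5, -1.0)
```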
NASA Technical Reports Server (NTRS)
Pawson, Steven; Ott, Lesley E.; Zhu, Zhengxin; Bowman, Kevin; Brix, Holger; Collatz, G. James; Dutkiewicz, Stephanie; Fisher, Joshua B.; Gregg, Watson W.; Hill, Chris;
2011-01-01
Forward GEOS-5 AGCM simulations of CO2, with transport constrained by analyzed meteorology for 2009-2010, are examined. The CO2 distributions are evaluated using AIRS upper-tropospheric CO2 and ACOS-GOSAT total-column CO2 observations. Different combinations of surface CO2 fluxes are used to generate ensembles of runs that span some of the uncertainty in surface emissions and uptake. The fluxes are specified in GEOS-5 from different inventories (fossil and biofuel), different data-constrained estimates of land biological emissions, and different data-constrained ocean-biology estimates. One set of fluxes is based on the established "Transcom" database and others are constructed using contemporary satellite observations to constrain land and ocean process models. Likewise, different approximations to sub-grid transport are employed to construct an ensemble of CO2 distributions related to transport variability. This work is part of NASA's "Carbon Monitoring System Flux Pilot Project."
Disentangling interacting dark energy cosmologies with the three-point correlation function
NASA Astrophysics Data System (ADS)
Moresco, Michele; Marulli, Federico; Baldi, Marco; Moscardini, Lauro; Cimatti, Andrea
2014-10-01
We investigate the possibility of constraining coupled dark energy (cDE) cosmologies using the three-point correlation function (3PCF). Making use of the CODECS N-body simulations, we study the statistical properties of cold dark matter (CDM) haloes for a variety of models, including a fiducial ΛCDM scenario and five models in which dark energy (DE) and CDM mutually interact. We measure both the halo 3PCF, ζ(θ), and the reduced 3PCF, Q(θ), at different scales (2 < r [h⁻¹ Mpc] < 40) and redshifts (0 ≤ z ≤ 2). In all cDE models considered in this work, Q(θ) appears flat at small scales (for all redshifts) and at low redshifts (for all scales), while it builds up the characteristic V-shape anisotropy at increasing redshifts and scales. With respect to the ΛCDM predictions, cDE models show lower (higher) values of the halo 3PCF for perpendicular (elongated) configurations. The effect is also scale-dependent, with differences between ΛCDM and cDE models that increase at large scales. We make use of these measurements to estimate the halo bias, which is in fair agreement with that computed from the two-point correlation function (2PCF). The main advantage of using both the 2PCF and 3PCF is to break the bias-σ8 degeneracy. Moreover, we find that our bias estimates are approximately independent of the assumed strength of DE coupling. This study demonstrates the power of a higher-order clustering analysis in discriminating between alternative cosmological scenarios, for both present and forthcoming galaxy surveys, such as the Baryon Oscillation Spectroscopic Survey and Euclid.
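The reduced 3PCF quoted here is the standard ratio of the connected three-point function to the pairwise products of two-point functions. The sketch below evaluates Q(θ) for triangles with two fixed sides, using a toy power-law ξ and a hierarchical ζ for which Q = 1 by construction; the functional forms are illustrative, not the CODECS measurements.

```python
import numpy as np

def reduced_3pcf(zeta_fn, xi_fn, r1, r2, theta):
    """Q(theta) = zeta / [xi(r1)xi(r2) + xi(r1)xi(r3) + xi(r2)xi(r3)],
    for triangles with sides r1, r2 and opening angle theta (third side r3)."""
    r3 = np.sqrt(r1 ** 2 + r2 ** 2 - 2.0 * r1 * r2 * np.cos(theta))
    denom = (xi_fn(r1) * xi_fn(r2) + xi_fn(r1) * xi_fn(r3)
             + xi_fn(r2) * xi_fn(r3))
    return zeta_fn(r1, r2, theta) / denom

# Toy ingredients: power-law xi and a hierarchical-ansatz zeta (Q_true = 1)
xi = lambda r: (r / 5.0) ** -1.7
def zeta(r1, r2, th):
    r3 = np.sqrt(r1 ** 2 + r2 ** 2 - 2.0 * r1 * r2 * np.cos(th))
    return xi(r1) * xi(r2) + xi(r1) * xi(r3) + xi(r2) * xi(r3)

theta = np.linspace(0.1, np.pi - 0.1, 10)
print(reduced_3pcf(zeta, xi, 2.0, 4.0, theta))    # flat at 1 by construction
```

Because ξ scales as b² and ζ as b³ for a linear bias b, Q scales roughly as 1/b, which is why combining the 2PCF and 3PCF breaks the bias-σ8 degeneracy mentioned above.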
Pendall, Elise; Betancourt, Julio L.; Leavitt, Steven W.
1999-01-01
We compared two approaches to interpreting δD of cellulose nitrate in piñon pine needles (Pinus edulis) preserved in packrat middens from central New Mexico, USA. One approach was based on linear regression between modern δD values and climate parameters, and the other on a deterministic isotope model, modified from Craig and Gordon's terminal lake evaporation model that assumes steady-state conditions and constant isotope effects. One such effect, the net biochemical fractionation factor, was determined for a new species, piñon pine. Regressions showed that δD values in cellulose nitrate from annual cohorts of needles (1989–1996) were strongly correlated with growing season (May–August) precipitation amount, and δ13C values in the same samples were correlated with June relative humidity. The deterministic model reconstructed δD values of meteoric water used by plants after constraining relative humidity effects with δ13C values; growing season temperatures were estimated via modern correlations with δD values of meteoric water. Variations of this modeling approach have been applied to tree-ring cellulose before, but not to macrofossil cellulose, and comparisons to empirical relationships have not been provided. Results from fossil piñon needles spanning the last ∼40,000 years showed no significant trend in δD values of cellulose nitrate, suggesting either no change in the amount of summer precipitation (based on the transfer function) or δD values of meteoric water or temperature (based on the deterministic model). However, there were significant differences in δ13C values, and therefore relative humidity, between Pleistocene and Holocene.
Huff, W.D.; Bergstrom, Stig M.; Kolata, Dennis R.; Sun, H.
1998-01-01
The Lower Silurian Osmundsberg K-bentonite is a widespread ash bed that occurs throughout Baltoscandia and parts of northern Europe. This paper describes its characteristics at its type locality in the Province of Dalarna, Sweden. It exhibits mineralogical and chemical characteristics that permit its regional correlation in sections elsewhere in Sweden as well as in Norway, Estonia, Denmark and Great Britain. The < 2 μm clay fraction of the Osmundsberg bed contains abundant kaolinite in addition to randomly ordered (RO) illite/smectite (I/S). Modelling of the X-ray diffraction tracings showed the I/S consists of 18% illite and 82% smectite. The high smectite and kaolinite content is indicative of a history with minimal burial temperatures. Analytical data from both pristine melt inclusions in primary quartz grains and whole-rock samples can be used to constrain both the parental magma composition and the probable tectonic setting of the source volcanoes. The parental ash was dacitic to rhyolitic in composition and originated in a tectonically active collision-margin setting. Whole-rock chemical fingerprinting of coeval beds elsewhere in Baltoscandia produced a pronounced clustering of these samples in the Osmundsberg field of the discriminant analysis diagram. This, together with well-constrained biostratigraphic and lithostratigraphic data, provides the basis for regional correlation and supports the conclusion that the Osmundsberg K-bentonite is one of the most extensive fallout ash beds in the early Phanerozoic. The source volcano probably lay to the west of Baltica as part of the subduction complex associated with the closure of Iapetus.
Polarization and studies of evolved star mass loss
NASA Astrophysics Data System (ADS)
Sargent, Benjamin; Srinivasan, Sundar; Riebel, David; Meixner, Margaret
2012-05-01
Polarization studies of astronomical dust have proven very useful in constraining its properties. Such studies are used to constrain the spatial arrangement, shape, composition, and optical properties of astronomical dust grains. Here we explore possible connections between astronomical polarization observations and our studies of mass loss from evolved stars. We are studying evolved star mass loss in the Large Magellanic Cloud (LMC) using photometry from the Surveying the Agents of a Galaxy's Evolution (SAGE; PI: M. Meixner) Spitzer Space Telescope Legacy program. We use the radiative transfer program 2Dust to create our Grid of Red supergiant and Asymptotic giant branch ModelS (GRAMS), in order to model this mass loss. To model the emission of polarized light from evolved stars, however, we appeal to other radiative transfer codes. We probe how polarization observations might be used to constrain the dust shell and dust grain properties of the samples of evolved stars we are studying.
A Bayesian ensemble data assimilation to constrain model parameters and land-use carbon emissions
NASA Astrophysics Data System (ADS)
Lienert, Sebastian; Joos, Fortunat
2018-05-01
A dynamic global vegetation model (DGVM) is applied in a probabilistic framework and benchmarking system to constrain uncertain model parameters by observations and to quantify carbon emissions from land-use and land-cover change (LULCC). Processes featured in DGVMs include parameters which are prone to substantial uncertainty. To cope with these uncertainties Latin hypercube sampling (LHS) is used to create a 1000-member perturbed parameter ensemble, which is then evaluated with a diverse set of global and spatiotemporally resolved observational constraints. We discuss the performance of the constrained ensemble and use it to formulate a new best-guess version of the model (LPX-Bern v1.4). The observationally constrained ensemble is used to investigate historical emissions due to LULCC (ELUC) and their sensitivity to model parametrization. We find a global ELUC estimate of 158 (108, 211) PgC (median and 90 % confidence interval) between 1800 and 2016. We compare ELUC to other estimates both globally and regionally. Spatial patterns are investigated and estimates of ELUC of the 10 countries with the largest contribution to the flux over the historical period are reported. We consider model versions with and without additional land-use processes (shifting cultivation and wood harvest) and find that the difference in global ELUC is on the same order of magnitude as parameter-induced uncertainty and in some cases could potentially even be offset with appropriate parameter choice.
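The ensemble-generation step can be sketched with scipy's quasi-Monte Carlo module; the parameter names and ranges below are hypothetical placeholders, not LPX-Bern parameters, and the weighting function is only one simple way to fold in observational skill.

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical ranges for three uncertain model parameters (placeholders)
names = ["tau_soil_years", "q10", "alpha_ci"]
lower = np.array([5.0, 1.2, 0.60])
upper = np.array([40.0, 3.0, 0.95])

sampler = qmc.LatinHypercube(d=3, seed=42)
unit = sampler.random(n=1000)                   # 1000-member design in [0,1)^3
ensemble = qmc.scale(unit, lower, upper)        # scaled to parameter ranges

# Each row parameterizes one model run; members are then weighted (or
# screened) by their fit to the observational constraints, e.g.:
def weights(chi2):
    w = np.exp(-0.5 * np.asarray(chi2))         # Gaussian-likelihood weighting
    return w / w.sum()
```

Latin hypercube sampling stratifies every parameter axis, so even a modest ensemble covers the full hyper-rectangle far more evenly than independent random draws.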
On the robustness of the Hβ Lick index as a cosmic clock in passive early-type galaxies
NASA Astrophysics Data System (ADS)
Concas, Alice; Pozzetti, L.; Moresco, M.; Cimatti, A.
2017-06-01
We examine the Hβ Lick index in a sample of ~24,000 massive (log(M/M⊙) > 10.75) and passive early-type galaxies extracted from the Sloan Digital Sky Survey at z < 0.3, in order to assess the reliability of this index to constrain the epoch of formation and age evolution of these systems. We further investigate the possibility of exploiting this index as a `cosmic chronometer', i.e. to derive the Hubble parameter from its differential evolution with redshift, hence constraining cosmological models independently of other probes. We find that the Hβ strength increases with redshift as expected in passive evolution models, and shows at each redshift weaker values in more massive galaxies. However, a detailed comparison of the observed index with the predictions of stellar population synthesis models highlights a significant tension, with the observed index being systematically lower than expected. By analysing the stacked spectra, we find a weak [N II] λ6584 emission line (not detectable in the single spectra) that anti-correlates with mass, which can be interpreted as a hint of the presence of ionized gas. We estimated the correction to the Hβ index from the residual emission component using different approaches, but find it very uncertain and model dependent. We conclude that, while the qualitative trends of the observed Hβ-z relations are consistent with the expected passive and downsizing scenario, the possible presence of ionized gas even in the most massive and passive galaxies prevents us from using this index for a quantitative estimate of the age evolution and for cosmological applications.
Laterally constrained inversion for CSAMT data interpretation
NASA Astrophysics Data System (ADS)
Wang, Ruo; Yin, Changchun; Wang, Miaoyue; Di, Qingyun
2015-10-01
Laterally constrained inversion (LCI) has been successfully applied to the inversion of dc resistivity, TEM and airborne EM data. However, it has not yet been applied to the interpretation of controlled-source audio-frequency magnetotelluric (CSAMT) data. In this paper, we apply the LCI method to CSAMT data inversion by preconditioning the Jacobian matrix. We apply a weighting matrix to the Jacobian to balance the sensitivity of the model parameters, so that the resolution with respect to different model parameters becomes more uniform. Numerical experiments confirm that this can improve the convergence of the inversion. We first invert a synthetic dataset with and without noise to investigate the effect of applying LCI to CSAMT data. For the noise-free data, the results show that the LCI method can recover the true model better than the traditional single-station inversion; for the noisy data, the true model is recovered even with a noise level of 8%, indicating that LCI inversions are to some extent insensitive to noise. We then re-invert two CSAMT datasets collected in a watershed and in a coal mine area in Northern China, and compare our results with those from previous inversions. The comparison in the coal mine shows that the LCI method delivers smoother layer interfaces that correlate well with seismic data, while the comparison with a global search algorithm, simulated annealing (SA), in the watershed shows that both methods deliver very similar, good results, but the LCI algorithm presented in this paper runs much faster. The inversion results for the coal mine CSAMT survey show that a conductive water-bearing zone that was not revealed by the previous inversions has been identified by the LCI. This further demonstrates that the method presented in this paper works for CSAMT data inversion.
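A minimal laterally constrained Gauss-Newton update with the kind of Jacobian preconditioning described above might look like the following; the first-difference ties between neighbouring parameters are a simplification of the full LCI constraint structure, and all names are illustrative.

```python
import numpy as np

def lci_step(J, sigma_d, dd, m, alpha=1.0, damp=1e-3):
    """One laterally constrained, preconditioned Gauss-Newton update (sketch).

    J ties all stations into one inverse problem; R penalizes differences
    between neighbouring model parameters; the diagonal preconditioner W
    balances parameter sensitivities, in the spirit of the scheme above."""
    n = J.shape[1]
    Jd = J / sigma_d[:, None]                 # data-weighted Jacobian
    ddw = dd / sigma_d                        # weighted residuals
    W = np.diag(1.0 / (np.linalg.norm(Jd, axis=0) + 1e-12))
    R = (np.eye(n) - np.eye(n, k=1))[:-1]     # lateral first differences
    JW, RW = Jd @ W, R @ W
    A = JW.T @ JW + alpha * RW.T @ RW + damp * np.eye(n)
    rhs = JW.T @ ddw - alpha * RW.T @ (R @ m)
    return W @ np.linalg.solve(A, rhs)        # model update dm

# Toy call with random placeholders for the sensitivity matrix and residuals
rng = np.random.default_rng(0)
J = rng.normal(size=(80, 30))
dm = lci_step(J, sigma_d=np.full(80, 0.02),
              dd=rng.normal(0, 0.02, 80), m=np.zeros(30))
```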
CMB ISW-lensing bispectrum from cosmic strings
NASA Astrophysics Data System (ADS)
Yamauchi, Daisuke; Sendouda, Yuuiti; Takahashi, Keitaro
2014-02-01
We study the effect of weak lensing by cosmic (super-)strings on the higher-order statistics of the cosmic microwave background (CMB). A cosmic string segment is expected to cause weak lensing as well as an integrated Sachs-Wolfe (ISW) effect, the so-called Gott-Kaiser-Stebbins (GKS) effect, to the CMB temperature fluctuation, which are thus naturally cross-correlated. We point out that, in the presence of such a correlation, yet another kind of the post-recombination CMB temperature bispectra, the ISW-lensing bispectra, will arise in the form of products of the auto- and cross-power spectra. We first present an analytic method to calculate the autocorrelation of the temperature fluctuations induced by the strings, and the cross-correlation between the temperature fluctuation and the lensing potential both due to the string network. In our formulation, the evolution of the string network is assumed to be characterized by the simple analytic model, the velocity-dependent one scale model, and the intercommutation probability is properly incorporated in order to characterize the possible superstringy nature. Furthermore, the obtained power spectra are dominated by the Poisson-distributed string segments, whose correlations are assumed to satisfy the simple relations. We then estimate the signal-to-noise ratios of the string-induced ISW-lensing bispectra and discuss the detectability of such CMB signals from the cosmic string network. It is found that in the case of the smaller string tension, Gμ ≪ 10⁻⁷, the ISW-lensing bispectrum induced by a cosmic string network can constrain the string-model parameters even more tightly than the purely GKS-induced bispectrum in the ongoing and future CMB observations on small scales.
LLNL Location and Detection Research
DOE Office of Scientific and Technical Information (OSTI.GOV)
Myers, S C; Harris, D B; Anderson, M L
2003-07-16
We present two LLNL research projects in the topical areas of location and detection. The first project assesses epicenter accuracy using a multiple-event location algorithm, and the second project employs waveform subspace correlation to detect and identify events at Fennoscandian mines. Accurately located seismic events are the basis of location calibration. A well-characterized set of calibration events enables new Earth model development, empirical calibration, and validation of models. In a recent study, Bondar et al. (2003) developed network coverage criteria for assessing the accuracy of event locations that are determined using single-event, linearized inversion methods. These criteria are conservative and are meant for application to large bulletins where the emphasis is on catalog completeness and any given event location may be improved through detailed analysis or application of advanced algorithms. Relative event location techniques are touted as advancements that may improve absolute location accuracy by (1) ensuring an internally consistent dataset, (2) constraining a subset of events to known locations, and (3) taking advantage of station and event correlation structure. Here we present the preliminary phase of this work, in which we use Nevada Test Site (NTS) nuclear explosions, with known locations, to test the effect of travel-time model accuracy on relative location accuracy. Like previous studies, we find that reference velocity-model accuracy and relative-location accuracy are highly correlated. We also find that metrics based on the travel-time residuals of relocated events are not reliable for assessing either velocity-model or relative-location accuracy. In the topical area of detection, we develop specialized correlation (subspace) detectors for the principal mines surrounding the ARCES station located in the European Arctic. Our objective is to provide efficient screens for explosions occurring in the mines of the Kola Peninsula (Kovdor, Zapolyarny, Olenogorsk, Khibiny) and the major iron mines of northern Sweden (Malmberget, Kiruna). In excess of 90% of the events detected by the ARCES station are mining explosions, and a significant fraction are from these northern mining groups. The primary challenge in developing waveform correlation detectors is the degree of variation in the source time histories of the shots, which can result in poor correlation among events even in close proximity. Our approach to solving this problem is to use lagged subspace correlation detectors, which offer some prospect of compensating for variation and uncertainty in source time functions.
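The subspace detector concept is compact enough to sketch: an orthonormal basis is extracted from aligned template waveforms of a mine's previous shots, and the detection statistic is the fraction of windowed signal energy captured by that basis. The implementation below is a generic illustration with synthetic waveforms, not the LLNL code.

```python
import numpy as np

def subspace_detector(templates, data, rank=2):
    """Sliding subspace-correlation statistic (minimal sketch).

    templates : (n_events, n_samples) aligned waveforms from one mine
    data      : continuous record to scan
    Returns a statistic in [0, 1]: the fraction of windowed signal energy
    captured by the rank-d subspace spanned by the templates."""
    T = templates - templates.mean(axis=1, keepdims=True)
    U, _, _ = np.linalg.svd(T.T, full_matrices=False)
    U = U[:, :rank]                            # orthonormal basis (n_samples, d)
    n = templates.shape[1]
    stat = np.empty(len(data) - n + 1)
    for i in range(len(stat)):
        w = data[i:i + n] - data[i:i + n].mean()
        e = w @ w
        proj = U.T @ w
        stat[i] = (proj @ proj) / e if e > 0 else 0.0
    return stat

# Synthetic demo: five noisy shots from a common source, one embedded event
rng = np.random.default_rng(0)
src = rng.normal(size=200)
tmpl = src + 0.3 * rng.normal(size=(5, 200))
data = np.concatenate([rng.normal(size=1000),
                       src + 0.3 * rng.normal(size=200),
                       rng.normal(size=1000)])
stat = subspace_detector(tmpl, data)
print(stat.argmax(), stat.max())               # peaks at the embedded event
```

The rank of the retained basis is the knob that absorbs the source-time-function variability described above: more basis vectors accommodate more shot-to-shot variation at the cost of a higher false-alarm rate.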
Implementation of remote sensing data for flood forecasting
NASA Astrophysics Data System (ADS)
Grimaldi, S.; Li, Y.; Pauwels, V. R. N.; Walker, J. P.; Wright, A. J.
2016-12-01
Flooding is one of the most frequent and destructive natural disasters. A timely, accurate and reliable flood forecast can provide vital information for flood preparedness, warning delivery, and emergency response. An operational flood forecasting system typically consists of a hydrologic model, which simulates runoff generation and concentration, and a hydraulic model, which models riverine flood wave routing and floodplain inundation. However, these two types of models suffer from various sources of uncertainty, e.g., forcing data, initial conditions, model structure and parameters. To reduce those uncertainties, current forecasting systems are typically calibrated and/or updated using streamflow measurements, and such applications are limited to well-gauged areas. The recent increasing availability of spatially distributed Remote Sensing (RS) data offers new opportunities for flood event investigation and forecasting. Based on an Australian case study, this presentation will discuss the use of 1) RS soil moisture data to constrain a hydrologic model, and 2) RS-derived flood extent and levels to constrain a hydraulic model. The hydrologic model is based on a semi-distributed system coupling the two-soil-layer rainfall-runoff model GRKAL with a linear Muskingum routing model. Model calibration was performed using either 1) streamflow data only or 2) both streamflow and RS soil moisture data. The model was then further constrained through the integration of real-time soil moisture data. The hydraulic model is based on LISFLOOD-FP, which solves the 2D inertial approximation of the Shallow Water Equations. Streamflow data and RS-derived flood extent and levels were used to apply a multi-objective calibration protocol. The effectiveness with which each data source or combination of data sources constrained the parameter space was quantified and discussed.
NASA Astrophysics Data System (ADS)
Heggy, E.; Palmer, E. M.; Kofman, W. W.; Herique, A.; El Maarry, M. R.
2017-12-01
Rosetta's two-year orbital mission at comet 67P/Churyumov-Gerasimenko significantly improved our understanding of the radar properties of cometary bodies and of how they can be used to constrain the ambiguities associated with the dynamical formation of 67P, by setting an upper limit on the size of the comet's initial building blocks using the CONSERT, VIRTIS and OSIRIS observations. We present here an updated post-rendezvous three-dimensional dielectric, textural and structural model of the comet's surface and subsurface at VHF-, X- and S-band radar frequencies. We assess the radar properties of potential structural heterogeneities observed in the upper meters of the shallow subsurface as well as deeper structures across the comet head. We use CONSERT's bistatic radar sounding measurements of the nucleus 'head' interior to constrain the dielectric properties and structure of the interior; VIRTIS' multi-spectral observations to constrain the surface mineralogy and the distribution of water ice on the surface; and the implications of the above for the spatial variability of the surface and shallow-subsurface dielectric properties. Surface and shallow-subsurface structural elements are derived from the OSIRIS images of exposed outcrops and pit walls. Our dielectric analysis, showing a lack of sufficient dielectric contrast, together with the lack of signal broadening in the 90-MHz radar echoes observed by CONSERT, suggests that the apparent meter-sized inhomogeneities in the walls of deep pits, originally interpreted as cometesimals forming the comet's primordial blocks, could be localized evolutionary features of high-centered polygons caused by seasonal modifications to the near-subsurface ice formed through thermal expansion and contraction, and may not be continuous through the head. Considering the three-dimensional dielectric variability of 67P as derived from CONSERT, VIRTIS and Arecibo observations and laboratory measurements, we set an upper limit on the size of the comet's initial building blocks.
Abazov, Victor Mukhamedovich
2015-09-22
We present a simultaneous measurement of the forward-backward asymmetry and the top-quark polarization in tt¯ production in dilepton final states using 9.7 fb⁻¹ of proton-antiproton collisions at √s = 1.96 TeV with the D0 detector. To reconstruct the distributions of kinematic observables we employ a matrix element technique that calculates the likelihood of the possible tt¯ kinematic configurations. After accounting for the presence of background events and for calibration effects, we obtain a forward-backward asymmetry of A tt¯ = (15.0±6.4(stat)±4.9(syst))% and a top-quark polarization times spin analyzing power in the beam basis of κP = (7.2±10.5(stat)±4.2(syst))%, with a correlation of -56% between the measurements. If we constrain the forward-backward asymmetry to its expected standard model value, we obtain a measurement of the top polarization of κP = (11.3±9.1(stat)±1.9(syst))%. If we constrain the top polarization to its expected standard model value, we measure a forward-backward asymmetry of A tt¯ = (17.5±5.6(stat)±3.1(syst))%. A combination with the D0 A tt¯ measurement in the lepton+jets final state yields an asymmetry of A tt¯ = (11.8±2.5(stat)±1.3(syst))%. Within their respective uncertainties, all these results are consistent with the standard model expectations.
NASA Astrophysics Data System (ADS)
Blonquist, J. M.; Wingate, L.; Ogeé, J.; Bowling, D. R.
2011-12-01
The stable carbon isotope composition of atmospheric CO2 (δ13Ca) can provide useful information on water use efficiency (WUE) dynamics of terrestrial ecosystems and potentially constrain models of CO2 and water fluxes at the land surface. This is due to the leaf-level relationship between photosynthetic 13CO2 discrimination (Δ), which influences δ13Ca, and the ratio of leaf intercellular to atmospheric CO2 mole fractions (Ci / Ca), which is related to WUE and is determined by the balance between C assimilation (CO2 demand) and stomatal conductance (CO2 supply). We used branch-scale Δ derived from tunable diode laser absorption spectroscopy measurements collected in a maritime pine forest to estimate Ci / Ca variations over an entire growing season. We combined Ci / Ca estimates with rates of gross primary production (GPP) derived from eddy covariance (EC) to estimate canopy-scale stomatal conductance (Gs) and transpiration (T). Estimates of T were highly correlated with T estimates derived from sapflow data (y = 1.22x + 0.08; r2 = 0.61; slope P < 0.001) and T predictions from an ecosystem model (MuSICA) (y = 0.88x - 0.05; r2 = 0.64; slope P < 0.001). As an alternative to estimating T, Δ measurements can be used to estimate GPP by combining Ci / Ca estimates with Gs estimates from sapflow data. Estimates of GPP were determined in this fashion and were highly correlated with GPP values derived from EC (y = 0.82x + 0.07; r2 = 0.61; slope P < 0.001) and GPP predictions from MuSICA (y = 1.10x + 0.42; r2 = 0.50; slope P < 0.001). Results demonstrate that the leaf-level relationship between Δ and Ci / Ca can be extended to the canopy scale and that Δ measurements have utility for partitioning ecosystem-scale CO2 and water fluxes.
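The chain of calculations runs from Δ to Ci / Ca via the simple leaf discrimination model Δ = a + (b − a)·Ci/Ca, then to conductance and transpiration through the supply-demand balance. The sketch below uses the textbook fractionation constants and illustrative numbers, not the site's data, and omits boundary-layer and mesophyll effects.

```python
import numpy as np

A_FRAC, B_FRAC = 4.4, 27.0   # permil: diffusion and carboxylation fractionations

def ci_over_ca(delta_permil):
    """Invert the simple discrimination model Delta = a + (b - a)*Ci/Ca."""
    return (np.asarray(delta_permil) - A_FRAC) / (B_FRAC - A_FRAC)

def canopy_conductance(gpp_umol, ca_umol_mol, ci_ca):
    """Gs for CO2 (mol m-2 s-1) from supply = demand: GPP = Gs * (Ca - Ci)."""
    return gpp_umol / (ca_umol_mol * (1.0 - ci_ca))

def transpiration(gs_co2, vpd_pa, pressure_pa=101325.0):
    """T (mol m-2 s-1), using the 1.6 H2O:CO2 diffusivity ratio."""
    return 1.6 * gs_co2 * vpd_pa / pressure_pa

# Example with illustrative numbers (not the site data)
delta = 19.0                          # permil, branch-scale discrimination
gpp = 20.0                            # umol CO2 m-2 s-1 from eddy covariance
ci_ca = ci_over_ca(delta)             # ~0.65
gs = canopy_conductance(gpp, 390.0, ci_ca)
print(ci_ca, gs, transpiration(gs, vpd_pa=1200.0))
```

Run the other direction, with Gs from sapflow, the same supply-demand identity yields the GPP estimate compared against eddy covariance in the abstract.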
Dark Energy Survey Year 1 Results: Weak Lensing Mass Calibration of redMaPPer Galaxy Clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
McClintock, T.; et al.
We constrain the mass-richness scaling relation of redMaPPer galaxy clusters identified in the Dark Energy Survey Year 1 data using weak gravitational lensing. We split clusters into $4\times3$ bins of richness $\lambda$ and redshift $z$ for $\lambda\geq20$ and $0.2 \leq z \leq 0.65$ and measure the mean masses of these bins using their stacked weak lensing signal. By modeling the scaling relation as $\langle M_{\rm 200m}|\lambda,z\rangle = M_0 (\lambda/40)^F ((1+z)/1.35)^G$, we constrain the normalization of the scaling relation at the 5.0 per cent level as $M_0 = [3.081 \pm 0.075\ ({\rm stat}) \pm 0.133\ ({\rm sys})] \cdot 10^{14}\ {\rm M}_\odot$ at $\lambda=40$ and $z=0.35$. The richness scaling index is constrained to be $F=1.356 \pm 0.051\ ({\rm stat}) \pm 0.008\ ({\rm sys})$ and the redshift scaling index $G=-0.30 \pm 0.30\ ({\rm stat}) \pm 0.06\ ({\rm sys})$. These are the tightest measurements of the normalization and richness scaling index made to date. We use a semi-analytic covariance matrix to characterize the statistical errors in the recovered weak lensing profiles. Our analysis accounts for the following sources of systematic error: shear and photometric redshift errors, cluster miscentering, cluster member dilution of the source sample, systematic uncertainties in the modeling of the halo-mass correlation function, halo triaxiality, and projection effects. We discuss prospects for reducing this systematic error budget, which dominates the uncertainty on $M_0$. Our result is in excellent agreement with, but has significantly smaller uncertainties than, previous measurements in the literature, and augurs well for the power of the DES cluster survey as a tool for precision cosmology and upcoming galaxy surveys such as LSST, Euclid and WFIRST.
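For readers who want numbers out of the quoted relation, evaluating the best fit is a one-liner; the sketch below uses only the central values from the abstract, with no error propagation:

```python
# Central best-fit values quoted in the abstract (M_sun)
M0, F, G = 3.081e14, 1.356, -0.30

def m200m(richness, z):
    """<M_200m | lambda, z> from the DES Y1 mass-richness scaling relation."""
    return M0 * (richness / 40.0)**F * ((1.0 + z) / 1.35)**G

print(f"lambda=40,  z=0.35: {m200m(40, 0.35):.3e} M_sun")   # pivot: returns M0
print(f"lambda=100, z=0.50: {m200m(100, 0.50):.3e} M_sun")
```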
Protein 3D Structure Computed from Evolutionary Sequence Variation
Sheridan, Robert; Hopf, Thomas A.; Pagnani, Andrea; Zecchina, Riccardo; Sander, Chris
2011-01-01
The evolutionary trajectory of a protein through sequence space is constrained by its function. Collections of sequence homologs record the outcomes of millions of evolutionary experiments in which the protein evolves according to these constraints. Deciphering the evolutionary record held in these sequences and exploiting it for predictive and engineering purposes presents a formidable challenge. The potential benefit of solving this challenge is amplified by the advent of inexpensive high-throughput genomic sequencing. In this paper we ask whether we can infer evolutionary constraints from a set of sequence homologs of a protein. The challenge is to distinguish true co-evolution couplings from the noisy set of observed correlations. We address this challenge using a maximum entropy model of the protein sequence, constrained by the statistics of the multiple sequence alignment, to infer residue pair couplings. Surprisingly, we find that the strength of these inferred couplings is an excellent predictor of residue-residue proximity in folded structures. Indeed, the top-scoring residue couplings are sufficiently accurate and well-distributed to define the 3D protein fold with remarkable accuracy. We quantify this observation by computing, from sequence alone, all-atom 3D structures of fifteen test proteins from different fold classes, ranging in size from 50 to 260 residues, including a G-protein coupled receptor. These blinded inferences are de novo, i.e., they do not use homology modeling or sequence-similar fragments from known structures. The co-evolution signals provide sufficient information to determine accurate 3D protein structure to 2.7–4.8 Å Cα-RMSD error relative to the observed structure, over at least two-thirds of the protein (method called EVfold, details at http://EVfold.org). This discovery provides insight into essential interactions constraining protein evolution and will facilitate a comprehensive survey of the universe of protein structures, new strategies in protein and drug design, and the identification of functional genetic variants in normal and disease genomes. PMID:22163331
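The first step behind such analyses is scoring pairs of alignment columns for statistical dependence. The sketch below computes plain mutual information on a toy alignment; note that EVfold's maximum entropy treatment infers direct couplings and corrects for transitive correlations, which this naive baseline does not.

```python
from collections import Counter
from itertools import combinations
from math import log2

def mutual_information(col_i, col_j):
    """MI between two alignment columns, in bits."""
    n = len(col_i)
    pi, pj = Counter(col_i), Counter(col_j)
    pij = Counter(zip(col_i, col_j))
    return sum((c / n) * log2((c / n) / ((pi[a] / n) * (pj[b] / n)))
               for (a, b), c in pij.items())

msa = ["ACDEK", "ACDEK", "GCHEK", "GVHER", "AVDKR"]   # toy alignment
cols = list(zip(*msa))                                # sequences -> columns
scores = {(i, j): mutual_information(cols[i], cols[j])
          for i, j in combinations(range(len(cols)), 2)}
for pair, mi in sorted(scores.items(), key=lambda kv: -kv[1])[:3]:
    print(pair, round(mi, 3))                         # top-scoring column pairs
```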
Impact of isoprene and HONO chemistry on ozone and OVOC formation in a semirural South Korean forest
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, S.; Kim, S. -Y.; Lee, M.
2015-04-29
Rapid urbanization and economic development in East Asia in past decades has led to photochemical air pollution problems such as excess photochemical ozone and aerosol formation. Asian megacities such as Seoul, Tokyo, Shanghai, Guangzhou, and Beijing are surrounded by densely forested areas, and recent research has consistently demonstrated the importance of biogenic volatile organic compounds (VOCs) from vegetation in determining oxidation capacity in the suburban Asian megacity regions. Uncertainties in constraining tropospheric oxidation capacity, dominated by the hydroxyl radical, undermine our ability to assess regional photochemical air pollution problems. We present an observational data set of CO, NOx, SO2, ozone, HONO, and VOCs (anthropogenic and biogenic) from Taehwa research forest (TRF) near the Seoul metropolitan area in early June 2012. The data show that TRF is influenced both by aged pollution and fresh biogenic volatile organic compound emissions. With the data set, we diagnose HOx (OH, HO2, and RO2) distributions calculated using the University of Washington chemical box model (UWCM v2.1) with near-explicit VOC oxidation mechanisms from MCM v3.2 (Master Chemical Mechanism). Uncertainty from unconstrained HONO sources and radical recycling processes highlighted in recent studies is examined using multiple model simulations with different model constraints. The results suggest that (1) different model simulation scenarios cause systematic differences in HOx distributions, especially OH levels (up to 2.5 times), and (2) radical destruction (HO2 + HO2 or HO2 + RO2) could be more efficient than radical recycling (RO2 + NO), especially in the afternoon. Implications of the uncertainties in radical chemistry are discussed with respect to ozone–VOC–NOx sensitivity and VOC oxidation product formation rates. Overall, the NOx limited regime is assessed except for the morning hours (8 a.m. to 12 p.m. local standard time), but the degree of sensitivity can significantly vary depending on the model scenarios. The model results also suggest that RO2 levels are positively correlated with oxygenated VOC (OVOC) production that is not routinely constrained by observations. These unconstrained OVOCs can cause higher-than-expected OH loss rates (missing OH reactivity) and secondary organic aerosol formation. The series of modeling experiments constrained by observations strongly urge observational constraint of the radical pool to enable precise understanding of regional photochemical pollution problems in the East Asian megacity region.
NASA Astrophysics Data System (ADS)
Grunblatt, Samuel K.; Huber, Daniel; Gaidos, Eric; Lopez, Eric D.; Howard, Andrew W.; Isaacson, Howard T.; Sinukoff, Evan; Vanderburg, Andrew; Nofi, Larissa; Yu, Jie; North, Thomas S. H.; Chaplin, William; Foreman-Mackey, Daniel; Petigura, Erik; Ansdell, Megan; Weiss, Lauren; Fulton, Benjamin; Lin, Douglas N. C.
2017-12-01
Despite more than 20 years since the discovery of the first gas giant planet with an anomalously large radius, the mechanism for planet inflation remains unknown. Here, we report the discovery of K2-132b, an inflated gas giant planet found with the NASA K2 Mission, and a revised mass for another inflated planet, K2-97b. These planets orbit on ≈9 day orbits around host stars that recently evolved into red giants. We constrain the irradiation history of these planets using models constrained by asteroseismology and Keck/High Resolution Echelle Spectrometer spectroscopy and radial velocity measurements. We measure planet radii of 1.31 ± 0.11 R_J and 1.30 ± 0.07 R_J, respectively. These radii are typical for planets receiving the current irradiation, but not the former, zero age main-sequence irradiation of these planets. This suggests that the current sizes of these planets are directly correlated to their current irradiation. Our precise constraints of the masses and radii of the stars and planets in these systems allow us to constrain the planetary heating efficiency of both systems as 0.03^{+0.03}_{-0.02} %. These results are consistent with a planet re-inflation scenario, but suggest that the efficiency of planet re-inflation may be lower than previously theorized. Finally, we discuss the agreement within 10% of the stellar masses and radii, and the planet masses, radii, and orbital periods of both systems, and speculate that this may be due to selection bias in searching for planets around evolved stars.
A femtoscopic correlation analysis tool using the Schrödinger equation (CATS)
NASA Astrophysics Data System (ADS)
Mihaylov, D. L.; Mantovani Sarti, V.; Arnold, O. W.; Fabbietti, L.; Hohlweger, B.; Mathis, A. M.
2018-05-01
We present a new analysis framework called "Correlation Analysis Tool using the Schrödinger equation" (CATS) which computes the two-particle femtoscopy correlation function C(k), with k being the relative momentum of the particle pair. Any local interaction potential and emission source function can be used as an input and the wave function is evaluated exactly. In this paper we present a study of the sensitivity of C(k) to the interaction potential for different particle pairs: p-p, p-Λ, K^-p, K^+p, p-Ξ^- and Λ-Λ. For p-p, the Argonne v18 and Reid soft-core potentials have been tested. For the other pair systems we present results based on strong potentials obtained from effective Lagrangians, such as χEFT for p-Λ, Jülich models for K(K̄)-N and Nijmegen models for Λ-Λ. For the p-Ξ^- pairs we employ the latest lattice results from the HAL QCD collaboration. Our detailed study of different interacting particle pairs as a function of the source size and different potentials shows that femtoscopic measurements can be exploited in order to constrain the final state interactions among hadrons. In particular, small collision systems of the order of 1 fm, as produced in pp collisions at the LHC, seem to provide a suitable environment for quantitative studies of this kind.
Compromise Approach-Based Genetic Algorithm for Constrained Multiobjective Portfolio Selection Model
NASA Astrophysics Data System (ADS)
Li, Jun
In this paper, fuzzy set theory is incorporated into a multiobjective portfolio selection model for investors, taking into account three criteria: return, risk and liquidity. The cardinality constraint, the buy-in threshold constraint and the round-lot constraints are considered in the proposed model. To overcome the difficulty of evaluating a large set of efficient solutions and selecting the best one on the non-dominated surface, a compromise approach-based genetic algorithm is presented to obtain a compromised solution for the proposed constrained multiobjective portfolio selection model.
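A minimal sketch of the general recipe: a genetic algorithm whose selection step ranks individuals by a compromise criterion (normalized distance to the ideal return/risk point) and whose repair step enforces the cardinality constraint. The fuzzy objectives, buy-in thresholds, and round-lot constraints of the actual model are omitted, and all inputs below are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, POP, GENS = 10, 4, 60, 200            # assets, cardinality, GA settings
mu = rng.uniform(0.02, 0.12, N)             # toy expected returns
A = rng.normal(size=(N, N))
cov = (A @ A.T) / N * 0.01                  # toy covariance matrix

def repair(w):
    """Enforce the cardinality constraint: keep the K largest weights."""
    v = np.zeros_like(w)
    keep = np.argsort(w)[-K:]
    v[keep] = w[keep]
    return v / v.sum()

def compromise_scores(pop):
    """Distance to the ideal point (max return, min risk); lower is better."""
    rets = np.array([w @ mu for w in pop])
    risks = np.array([w @ cov @ w for w in pop])
    return ((rets.max() - rets) / (np.ptp(rets) + 1e-12)
            + (risks - risks.min()) / (np.ptp(risks) + 1e-12))

pop = [repair(rng.random(N)) for _ in range(POP)]
for _ in range(GENS):
    order = np.argsort(compromise_scores(pop))
    parents = [pop[i] for i in order[: POP // 2]]          # selection
    children = [repair(0.5 * (p + q) + 0.05 * rng.random(N))   # crossover +
                for p, q in zip(parents, parents[::-1])]       # mutation
    pop = parents + children

best = pop[int(np.argsort(compromise_scores(pop))[0])]
print("weights:", np.round(best, 3))
print("return:", best @ mu, "variance:", best @ cov @ best)
```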
Medvigy, David; Moorcroft, Paul R
2012-01-19
Terrestrial biosphere models are important tools for diagnosing both the current state of the terrestrial carbon cycle and forecasting terrestrial ecosystem responses to global change. While there are a number of ongoing assessments of the short-term predictive capabilities of terrestrial biosphere models using flux-tower measurements, to date there have been relatively few assessments of their ability to predict longer term, decadal-scale biomass dynamics. Here, we present the results of a regional-scale evaluation of the Ecosystem Demography version 2 (ED2) structured terrestrial biosphere model, evaluating the model's predictions against forest inventory measurements for the northeast USA and Quebec from 1985 to 1995. Simulations were conducted using a default parametrization, which used parameter values from the literature, and a constrained model parametrization, which had been developed by constraining the model's predictions against 2 years of measurements from a single site, Harvard Forest (42.5° N, 72.1° W). The analysis shows that the constrained model parametrization offered marked improvements over the default model formulation, capturing large-scale variation in patterns of biomass dynamics despite marked differences in climate forcing, land-use history and species-composition across the region. These results imply that data-constrained parametrizations of structured biosphere models such as ED2 can be successfully used for regional-scale ecosystem prediction and forecasting. We also assess the model's ability to capture sub-grid scale heterogeneity in the dynamics of biomass growth and mortality of different sizes and types of trees, and then discuss the implications of these analyses for further reducing the remaining biases in the model's predictions.
Processing Complex Sounds Passing through the Rostral Brainstem: The New Early Filter Model
Marsh, John E.; Campbell, Tom A.
2016-01-01
The rostral brainstem receives both “bottom-up” input from the ascending auditory system and “top-down” descending corticofugal connections. Speech information passing through the inferior colliculus of elderly listeners reflects the periodicity envelope of a speech syllable. This information arguably also reflects a composite of temporal-fine-structure (TFS) information from the higher frequency vowel harmonics of that repeated syllable. The amplitude of those higher frequency harmonics, bearing even higher frequency TFS information, correlates positively with the word recognition ability of elderly listeners under reverberatory conditions. Also relevant is that working memory capacity (WMC), which is subject to age-related decline, constrains the processing of sounds at the level of the brainstem. Turning to the effects of a visually presented sensory or memory load on auditory processes, there is a load-dependent reduction of that processing, as manifest in the auditory brainstem responses (ABR) evoked by to-be-ignored clicks. Wave V decreases in amplitude with increases in the visually presented memory load. A visually presented sensory load also produces a load-dependent reduction of a slightly different sort: The sensory load of visually presented information limits the disruptive effects of background sound upon working memory performance. A new early filter model is thus advanced whereby systems within the frontal lobe (affected by sensory or memory load) cholinergically influence top-down corticofugal connections. Those corticofugal connections constrain the processing of complex sounds such as speech at the level of the brainstem. Selective attention thereby limits the distracting effects of background sound entering the higher auditory system via the inferior colliculus. Processing TFS in the brainstem relates to perception of speech under adverse conditions. Attentional selectivity is crucial when the signal heard is degraded or masked: e.g., speech in noise, speech in reverberatory environments. The assumptions of a new early filter model are consistent with these findings: A subcortical early filter, with a predictive selectivity based on acoustical (linguistic) context and foreknowledge, is under cholinergic top-down control. A prefrontal capacity limitation constrains this top-down control as is guided by the cholinergic processing of contextual information in working memory. PMID:27242396
NASA Astrophysics Data System (ADS)
Bouligand, Claire; Coutant, Olivier; Glen, Jonathan M. G.
2016-07-01
In this study, we present the analysis and interpretation of a new ground magnetic survey acquired at the Soufrière volcano on Guadeloupe Island. Observed short-wavelength magnetic anomalies are compared to those predicted assuming a constant magnetization within the sub-surface. The good correlation between modeled and observed data over the summit of the dome indicates that the shallow sub-surface displays relatively constant and high magnetization intensity. In contrast, the poor correlation at the base of the dome suggests that the underlying material is non- to weakly-magnetic, consistent with what is expected for a talus comprised of randomly oriented and highly altered and weathered boulders. The new survey also reveals a dipole anomaly that is not accounted for by a constant magnetization in the sub-surface and suggests the existence of material with decreased magnetization beneath the Soufrière lava dome. We construct simple models to constrain its dimensions and propose that this body corresponds to hydrothermally altered material within and below the dome. The very large inferred volume for such material may have implications on the stability of the dome.
NASA Technical Reports Server (NTRS)
Prescod-Weinstein, Chanda; Afshordi, Niayesh
2011-01-01
Structure formation provides a strong test of any cosmic acceleration model because a successful dark energy model must not inhibit or overpredict the development of observed large-scale structures. Traditional approaches to studies of structure formation in the presence of dark energy or a modified gravity implement a modified Press-Schechter formalism, which relates the linear overdensities to the abundance of dark matter haloes at the same time. We critically examine the universality of the Press-Schechter formalism for different cosmologies, and show that the halo abundance is best correlated with spherical linear overdensity at 94% of collapse (or observation) time. We then extend this argument to ellipsoidal collapse (which decreases the fractional time of best correlation for small haloes), and show that our results agree with deviations from the modified Press-Schechter formalism seen in simulated mass functions. This provides a novel universal prescription to measure linear density evolution, based on current and future observations of the cluster (or dark matter) halo mass function. In particular, even observations of cluster abundance in a single epoch will constrain the entire history of the linear growth of cosmological perturbations.
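For reference, the baseline Press-Schechter multiplicity function that the abstract's universality test builds on can be written down directly; a short sketch with the usual spherical-collapse threshold δc = 1.686:

```python
import math

DELTA_C = 1.686   # standard spherical-collapse linear overdensity threshold

def f_ps(sigma):
    """Press-Schechter multiplicity function f(nu), nu = delta_c / sigma(M)."""
    nu = DELTA_C / sigma
    return math.sqrt(2.0 / math.pi) * nu * math.exp(-0.5 * nu * nu)

# small sigma (massive haloes) -> exponential suppression of abundance
for sigma in (0.5, 1.0, 2.0):
    print(sigma, round(f_ps(sigma), 4))
```

The paper's point is that the overdensity entering this formula is best evaluated at 94% of collapse time rather than at collapse itself; the code above only shows the textbook form that prescription modifies.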
Tropospheric transport differences between models using the same large-scale meteorological fields
NASA Astrophysics Data System (ADS)
Orbe, Clara; Waugh, Darryn W.; Yang, Huang; Lamarque, Jean-Francois; Tilmes, Simone; Kinnison, Douglas E.
2017-01-01
The transport of chemicals is a major uncertainty in the modeling of tropospheric composition. A common approach is to transport gases using the winds from meteorological analyses, either using them directly in a chemical transport model or by constraining the flow in a general circulation model. Here we compare the transport of idealized tracers in several different models that use the same meteorological fields taken from Modern-Era Retrospective analysis for Research and Applications (MERRA). We show that, even though the models use the same meteorological fields, there are substantial differences in their global-scale tropospheric transport related to large differences in parameterized convection between the simulations. Furthermore, we find that the transport differences between simulations constrained with the same large-scale flow are larger than differences between free-running simulations, which have differing large-scale flow but much more similar convective mass fluxes. Our results indicate that more attention needs to be paid to convective parameterizations in order to understand large-scale tropospheric transport in models, particularly in simulations constrained with analyzed winds.
NASA Astrophysics Data System (ADS)
Jungman, Gerard
1992-11-01
Yukawa-coupling-constant unification together with the known fermion masses is used to constrain SO(10) models. We consider the case of one (heavy) generation, with the tree-level relation mb=mτ, calculating the limits on the intermediate scales due to the known limits on fermion masses. This analysis extends previous analyses which addressed only the simplest symmetry-breaking schemes. In the case where the low-energy model is the standard model with one Higgs doublet, there are very strong constraints due to the known limits on the top-quark mass and the τ-neutrino mass. The two-Higgs-doublet case is less constrained, though we can make progress in constraining this model also. We identify those parameters to which the viability of the model is most sensitive. We also discuss the "triviality" bounds on mt obtained from the analysis of the Yukawa renormalization-group equations. Finally we address the role of a speculative constraint on the τ-neutrino mass, arising from the cosmological implications of anomalous B+L violation in the early Universe.
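The kind of renormalization-group analysis described here reduces, at one loop, to integrating a small coupled system. A sketch keeping only the dominant top-Yukawa and QCD terms of the standard one-loop standard-model equations (the electroweak pieces and the SO(10) threshold structure are omitted, and the initial values are illustrative):

```python
import math

def run_yt(yt, g3, t0, t1, steps=10_000):
    """Euler-integrate the one-loop system in t = ln(mu):
    16 pi^2 dy_t/dt ~ y_t (9/2 y_t^2 - 8 g3^2),  16 pi^2 dg3/dt = -7 g3^3."""
    dt = (t1 - t0) / steps
    for _ in range(steps):
        yt += dt * yt * (4.5 * yt**2 - 8.0 * g3**2) / (16 * math.pi**2)
        g3 += dt * (-7.0 * g3**3) / (16 * math.pi**2)
    return yt, g3

# run from roughly the top-quark scale up toward a 10^10 GeV intermediate
# scale; the starting values below are illustrative assumptions
yt, g3 = run_yt(yt=0.94, g3=1.16, t0=math.log(173.0), t1=math.log(1e10))
print(f"y_t(1e10 GeV) ~ {yt:.3f}, g3 ~ {g3:.3f}")
```

A "triviality" bound arises in the opposite regime: if y_t starts too large, the y_t^3 term drives the coupling to a Landau pole below the unification scale.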
NASA Astrophysics Data System (ADS)
Wang, F. Y.
2011-07-01
Gamma-ray bursts (GRBs) are brief flashes of gamma-rays occurring at cosmological distances. GRBs were discovered by the Vela satellites in 1967. The discovery of afterglows in 1997 made it possible to measure the GRBs' redshifts and confirmed the cosmological origin. GRB cosmology includes utilizing long GRBs as standard candles to constrain the dark energy and cosmological parameters, measuring the high-redshift star formation rate (SFR), probing the metal enrichment history of the universe, dust, quantum gravity, etc. The correlations between GRB observables in the prompt emission and afterglow phases were discovered, so we can use these correlations as standard candles to constrain the cosmological parameters and dark energy, especially at high redshifts. Observations show that long GRBs may be associated with supernovae. So long GRBs are promising tools to measure the high-redshift SFR. GRB afterglows have a smooth continuum, so the extraction of IGM absorption features from the spectrum is very easy. The information of metal enrichment history and reionization can be obtained from the absorption lines. In this thesis, we investigate the high-redshift cosmology using GRBs, called GRB cosmology. This is a new and fast developing field. The structure of this thesis is as follows. In the first chapter, we introduce the progress of GRB studies. First we introduce the progress of GRB studies in various satellite eras, mainly in the Swift and Fermi eras. The fireball model and standard afterglow model are also presented. In chapter 2, we introduce the standard cosmology model, astronomical observations and dark energy models. Then progress on the GRB cosmology studies is introduced. Some of my works, including those to be submitted, are also introduced in this chapter. In chapter 3, we present our studies on constraining the cosmological parameters and dark energy using the latest observations. We use SNe Ia, GRBs, CMB, BAO, the X-ray gas mass fraction in clusters and the linear growth rate of perturbations, and find that the ΛCDM is the best fitted model. The transition redshift z_{T} is from 0.40_{-0.08}^{+0.14} to 0.65_{-0.05}^{+0.10}. This is the first time that GRBs have been combined with other observations to constrain the cosmological parameters, dark energy and transition redshift. In chapter 4, we investigate the early dark energy model using GRBs, SNe Ia, CMB and BAO. The non-negligible dark energy at high redshift will influence the growth of cosmic structures and leave observable signatures that are different from the standard cosmology. We propose that GRBs are promising tools to study the early dark energy. We find that the fractional dark energy density is less than 0.03 and the linear growth index of perturbations is 0.66. In chapter 5, we use a model-independent method to constrain the dark energy equation of state (EOS) w(z). Among the parameters describing the properties of dark energy, the EOS is the most important. Whether and how it evolves with time are crucial in distinguishing different cosmological models. In our analysis, we include high-redshift GRBs. We find that w(z)<0 at z>1.7, and the EOS deviates from the cosmological constant at z>0.5 at the 95.4% confidence level. In chapter 6, we probe the cosmographic parameters to distinguish between the dark energy and modified gravity models. These two families of models can drive the universe to accelerate. We first derive the expressions of deceleration, jerk and snap parameters in the dark energy and modified gravity models.
The snap parameters in these models are different, so they can be used to distinguish between the models. In chapter 7, we measure the high-redshift SFR using long GRBs. Swift observations reveal that the number of high-redshift GRBs is larger than the prediction from the SFR. We find that an evolving initial mass function can explain this discrepancy. We study the high-redshift SFR up to z ≈ 8.2, considering Swift GRBs tracing the star formation history and the cosmic metallicity evolution in different background cosmological models. In chapter 8, we present the observational signatures of Pop III GRBs and study the pre-galactic metal enrichment with the metal absorption lines in GRB spectra from the first galaxies. We focus on the unusual circumburst environment inside the systems that hosted Pop III stars. The metals in the first galaxies produced by the first supernova explosions are likely to reside in the low-ionization states (C II, O I, Si II and Fe II). When the GRB afterglow passes through the metal-polluted region, metal absorption lines may appear. The topology of metal enrichment could be highly inhomogeneous, so along different lines of sight the metal absorption lines may show distinct signatures. A summary of the open questions in the GRB cosmology field is presented in chapter 9.
Precise measurement of the angular correlation parameter aβν in the β decay of 35Ar with LPCTrap
NASA Astrophysics Data System (ADS)
Fabian, X.; Ban, G.; Boussaïd, R.; Breitenfeldt, M.; Couratin, C.; Delahaye, P.; Durand, D.; Finlay, P.; Fléchard, X.; Guillon, B.; Lemière, Y.; Leredde, A.; Liénard, E.; Méry, A.; Naviliat-Cuncic, O.; Pierre, E.; Porobic, T.; Quéméner, G.; Rodríguez, D.; Severijns, N.; Thomas, J. C.; Van Gorp, S.
2014-03-01
Precise measurements in the β decay of the 35Ar nucleus enable searches for deviations from the Standard Model (SM) in the weak sector. These measurements make it possible either to check the unitarity of the CKM matrix or to constrain the existence of exotic currents excluded by the V-A theory of the SM. For this purpose, the β-ν angular correlation parameter, aβν, is inferred from a comparison between experimental and simulated recoil ion time-of-flight distributions following the quasi-pure Fermi transition of 35Ar1+ ions confined in the transparent Paul trap of the LPCTrap device at GANIL. During the last experiment, 1.5×106 good events were collected, which corresponds to an expected precision of less than 0.5% on the aβν value. The required simulation is divided between the use of massive GPU parallelization and the GEANT4 toolkit for the source-cloud kinematics and the tracking of the decay products.
Media multitasking and failures of attention in everyday life.
Ralph, Brandon C W; Thomson, David R; Cheyne, James Allan; Smilek, Daniel
2014-09-01
Using a series of online self-report measures, we examine media multitasking, a particularly pervasive form of multitasking, and its relations to three aspects of everyday attention: (1) failures of attention and cognitive errors, (2) mind wandering, and (3) attentional control, with an emphasis on attentional switching and distractibility. We observed a positive correlation between levels of media multitasking and self-reports of attentional failures, as well as with reports of both spontaneous and deliberate mind wandering. No correlation was observed between media multitasking and self-reported memory failures, lending credence to the hypothesis that media multitasking may be specifically related to problems of inattention, rather than cognitive errors in general. Furthermore, media multitasking was not related to self-reports of difficulties in attention switching or distractibility. We offer a plausible causal structural model assessing both direct and indirect effects among media multitasking, attentional failures, mind wandering, and cognitive errors, with the heuristic goal of constraining and motivating theories of the effects of media multitasking on inattention.
Terrestrial Sagnac delay constraining modified gravity models
NASA Astrophysics Data System (ADS)
Karimov, R. Kh.; Izmailov, R. N.; Potapov, A. A.; Nandi, K. K.
2018-04-01
Modified gravity theories include f(R)-gravity models that are usually constrained by the cosmological evolutionary scenario. However, it has been recently shown that they can also be constrained by the signatures of the accretion disk around constant Ricci curvature Kerr-f(R0) stellar sized black holes. Our aim here is to use another experimental fact, viz., the terrestrial Sagnac delay, to constrain the parameters of specific f(R)-gravity prescriptions. We shall assume that a Kerr-f(R0) solution asymptotically describes Earth's weak gravity near its surface. In this spacetime, we shall study oppositely directed light beams from a source/observer moving on non-geodesic and geodesic circular trajectories and calculate the time gap when the beams re-unite. We obtain the exact time gap, called the Sagnac delay, in both cases and expand it to show how the flat space value is corrected by the Ricci curvature, the mass and the spin of the gravitating source. Under the assumption that the magnitudes of the corrections are of the order of the residual uncertainties in the delay measurement, we derive the allowed intervals for the Ricci curvature. We conclude that the terrestrial Sagnac delay can be used to constrain the parameters of specific f(R) prescriptions. Despite using the weak field gravity near Earth's surface, it turns out that the model parameter ranges still remain the same as those obtained from the strong field accretion disk phenomenon.
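As a scale check, the flat-space part of the delay for a signal loop co-rotating with Earth follows from the standard formula Δt = 4AΩ/c². The sketch below assumes a full equatorial loop and textbook constants; it reproduces only this leading term, not the Ricci, mass, and spin corrections derived in the paper.

```python
import math

C = 299_792_458.0      # m/s, speed of light
R_EARTH = 6.378e6      # m, equatorial radius
OMEGA = 7.2921e-5      # rad/s, Earth's rotation rate

area = math.pi * R_EARTH**2          # area enclosed by an equatorial loop
delay = 4.0 * area * OMEGA / C**2    # flat-space Sagnac delay
print(f"Sagnac delay for an equatorial loop: {delay * 1e9:.0f} ns")  # ~415 ns
```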
Modeling and simulating networks of interdependent protein interactions.
Stöcker, Bianca K; Köster, Johannes; Zamir, Eli; Rahmann, Sven
2018-05-21
Protein interactions are fundamental building blocks of biochemical reaction systems underlying cellular functions. The complexity and functionality of these systems emerge not only from the protein interactions themselves but also from the dependencies between these interactions, as generated by allosteric effects or mutual exclusion due to steric hindrance. Therefore, formal models for integrating and utilizing information about interaction dependencies are of high interest. Here, we describe an approach for endowing protein networks with interaction dependencies using propositional logic, thereby obtaining constrained protein interaction networks ("constrained networks"). The construction of these networks is based on public interaction databases as well as text-mined information about interaction dependencies. We present an efficient data structure and algorithm to simulate protein complex formation in constrained networks. The efficiency of the model allows fast simulation and facilitates the analysis of many proteins in large networks. In addition, this approach enables the simulation of perturbation effects, such as knockout of single or multiple proteins and changes of protein concentrations. We illustrate how our model can be used to analyze a constrained human adhesome protein network, which is responsible for the formation of diverse and dynamic cell-matrix adhesion sites. By comparing protein complex formation under known interaction dependencies versus without dependencies, we investigate how these dependencies shape the resulting repertoire of protein complexes. Furthermore, our model enables investigating how the interplay of network topology with interaction dependencies influences the propagation of perturbation effects across a large biochemical system. Our simulation software CPINSim (for Constrained Protein Interaction Network Simulator) is available under the MIT license at http://github.com/BiancaStoecker/cpinsim and as a Bioconda package (https://bioconda.github.io).
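In spirit, simulating complex formation under interaction dependencies amounts to proposing binding events and accepting only those consistent with the constraints. A toy sketch with pairwise mutual exclusion standing in for the general propositional-logic constraints; the network and rules below are invented, not the adhesome data, and CPINSim's actual data structures are more elaborate.

```python
import random

# Toy interaction network (hypothetical proteins A-D)
interactions = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]
# Steric hindrance: A cannot bind B and C at the same time (invented rule)
mutually_exclusive = {frozenset({("A", "B"), ("A", "C")})}

def allowed(active, new):
    """Check the proposed interaction against all active ones."""
    return all(frozenset({new, a}) not in mutually_exclusive for a in active)

def simulate(steps=1000, seed=1):
    random.seed(seed)
    active = set()
    for _ in range(steps):
        edge = random.choice(interactions)
        if edge in active:
            active.remove(edge)            # unbinding event
        elif allowed(active, edge):
            active.add(edge)               # binding respects constraints
    return active

print(simulate())   # a constraint-consistent set of bound interactions
```

Knockouts or concentration changes would be modeled here by removing edges or biasing the proposal probabilities.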
Davis, Tyler; Love, Bradley C.; Preston, Alison R.
2012-01-01
Category learning is a complex phenomenon that engages multiple cognitive processes, many of which occur simultaneously and unfold dynamically over time. For example, as people encounter objects in the world, they simultaneously engage processes to determine their fit with current knowledge structures, gather new information about the objects, and adjust their representations to support behavior in future encounters. Many techniques that are available to understand the neural basis of category learning assume that the multiple processes that subserve it can be neatly separated between different trials of an experiment. Model-based functional magnetic resonance imaging offers a promising tool to separate multiple, simultaneously occurring processes and bring the analysis of neuroimaging data more in line with category learning’s dynamic and multifaceted nature. We use model-based imaging to explore the neural basis of recognition and entropy signals in the medial temporal lobe and striatum that are engaged while participants learn to categorize novel stimuli. Consistent with theories suggesting a role for the anterior hippocampus and ventral striatum in motivated learning in response to uncertainty, we find that activation in both regions correlates with a model-based measure of entropy. Simultaneously, separate subregions of the hippocampus and striatum exhibit activation correlated with a model-based recognition strength measure. Our results suggest that model-based analyses are exceptionally useful for extracting information about cognitive processes from neuroimaging data. Models provide a basis for identifying the multiple neural processes that contribute to behavior, and neuroimaging data can provide a powerful test bed for constraining and testing model predictions. PMID:22746951
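The entropy regressor used in this style of model-based imaging is just the Shannon entropy of the model's trial-by-trial category posterior; a short illustration with made-up probabilities:

```python
from math import log2

def entropy(posterior):
    """Shannon entropy (bits) of a category posterior for one trial."""
    return -sum(p * log2(p) for p in posterior if p > 0)

print(entropy([0.5, 0.5]))   # maximal uncertainty: 1.0 bit
print(entropy([0.9, 0.1]))   # confident trial: ~0.47 bits
```

Trial-by-trial values like these are convolved with the hemodynamic response and entered as parametric regressors in the imaging analysis.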
Seismic waveform inversion for core-mantle boundary topography
NASA Astrophysics Data System (ADS)
Colombi, Andrea; Nissen-Meyer, Tarje; Boschi, Lapo; Giardini, Domenico
2014-07-01
The topography of the core-mantle boundary (CMB) is directly linked to the dynamics of both the mantle and the outer core, although it is poorly constrained and understood. Recent studies have produced topography models with mutual agreement up to degree 2. A broad-band waveform inversion strategy is introduced and applied here, with relatively low computational cost and based on a first-order Born approximation. Its performance is validated using synthetic waveforms calculated in theoretical earth models that include different topography patterns with varying lateral wavelengths, from 600 to 2500 km, and magnitudes (~10 km peak-to-peak). The source-receiver geometry focuses mainly on the Pdiff, PKP, PcP and ScS phases. The results show that PKP branches, PcP and ScS generally perform well and in a similar fashion, while Pdiff yields unsatisfactory results. We investigate also how 3-D mantle correction influences the output models, and find that despite the disturbance introduced, the models recovered do not appear to be biased, provided that the 3-D model is correct. Using cross-correlated traveltimes, we derive new topography models from both P and S waves. The static corrections used to remove the mantle effect are likely to affect the inversion, compromising the agreement between models derived from P and S data. By modelling traveltime residuals starting from sensitivity kernels, we show how the simultaneous use of volumetric and boundary kernels can reduce the bias coming from mantle structures. The joint inversion approach should be the only reliable method to invert for CMB topography using absolute cross-correlation traveltimes.
Occurrence of Somma-Vesuvio fine ashes in the tephrostratigraphic record of Panarea, Aeolian Islands
NASA Astrophysics Data System (ADS)
Donatella, De Rita; Daniela, Dolfi; Corrado, Cimarelli
2008-10-01
Ash-rich tephra layers interbedded in the pyroclastic successions of Panarea island (Aeolian archipelago, Southern Italy) have been analyzed and related to their original volcanic sources. One of these tephra layers is particularly important as it can be correlated by its chemical and morphoscopic characteristics to the explosive activity of Somma-Vesuvio. Correlation with the Pomici di Base eruption, which is considered one of the largest explosive events causing the demolition of the Somma stratovolcano, seems the most probable. The occurrence on Panarea island of fine ashes related to this eruption is of great importance for several reasons: 1) it allows us to better constrain the time stratigraphy of the Panarea volcano; 2) it provides a useful tool for tephrochronological studies in southern Italy; and finally 3) it allows us to improve our knowledge of the distribution of the products of the Pomici di Base eruption, giving new insights into the dispersion trajectories of fine ashes from plinian plumes. Other exotic tephra layers interbedded in the Panarea pyroclastic successions have also been found. Chemical and sedimentological characteristics of these layers allow their correlation with local vents from the Aeolian Islands, thus constraining the late explosive activity of Panarea.
Cosmological measurements with general relativistic galaxy correlations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raccanelli, Alvise; Montanari, Francesco; Durrer, Ruth
We investigate the cosmological dependence and the constraining power of large-scale galaxy correlations, including all redshift-distortions, wide-angle, lensing and gravitational potential effects on linear scales. We analyze the cosmological information present in the lensing convergence and in the gravitational potential terms describing the so-called "relativistic effects", and we find that, while smaller than the information contained in intrinsic galaxy clustering, it is not negligible. We investigate how neglecting them biases cosmological measurements performed by future spectroscopic and photometric large-scale surveys such as SKA and Euclid. We perform a Fisher analysis using the CLASS code, modified to include scale-dependent galaxy bias and redshift-dependent magnification and evolution bias. Our results show that neglecting relativistic terms, especially lensing convergence, introduces an error in the forecasted precision in measuring cosmological parameters of the order of a few tens of percent, in particular when measuring the matter content of the Universe and primordial non-Gaussianity parameters. The analysis suggests a possible substantial systematic error in cosmological parameter constraints. Therefore, we argue that radial correlations and integrated relativistic terms need to be taken into account when forecasting the constraining power of future large-scale number counts of galaxy surveys.
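The Fisher analysis mentioned here has a simple schematic core: derivatives of the observable with respect to the parameters, weighted by the measurement errors. A toy version for a two-parameter binned observable, with a stand-in power law rather than the CLASS number-count spectra used in the paper:

```python
import numpy as np

ell = np.arange(10, 200)            # toy multipole bins
sigma = 0.05 * ell**-1.0            # toy per-bin measurement errors

def model(amp, tilt):
    """Stand-in observable; the real analysis uses CLASS spectra."""
    return amp * (ell / 100.0)**tilt

def fisher(theta, eps=1e-5):
    """F_ij = sum_l dC_l/dtheta_i dC_l/dtheta_j / sigma_l^2 (numeric derivs)."""
    derivs = []
    for i in range(len(theta)):
        up, dn = list(theta), list(theta)
        up[i] += eps
        dn[i] -= eps
        derivs.append((model(*up) - model(*dn)) / (2 * eps))
    D = np.array(derivs)
    return (D / sigma) @ (D / sigma).T

F = fisher([1.0, -0.5])
cov = np.linalg.inv(F)
print("forecast 1-sigma errors:", np.sqrt(np.diag(cov)))
```

Neglecting a term in `model` while keeping it in the data is how one would mimic the bias study described in the abstract.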
NASA Technical Reports Server (NTRS)
Rajan, P. K.; Khan, Ajmal
1993-01-01
Spatial light modulators (SLMs) are being used in correlation-based optical pattern recognition systems to implement the Fourier domain filters. Currently available SLMs have certain limitations with respect to the realizability of these filters. Therefore, it is necessary to incorporate the SLM constraints in the design of the filters. The design of a SLM-constrained minimum average correlation energy (SLM-MACE) filter using the simulated annealing-based optimization technique was investigated. The SLM-MACE filter was synthesized for three different types of constraints. The performance of the filter was evaluated in terms of its recognition (discrimination) capabilities using computer simulations. The correlation plane characteristics of the SLM-MACE filter were found to be reasonably good. The SLM-MACE filter yielded far better results than the analytical MACE filter implemented on practical SLMs using the constrained magnitude technique. Further, the filter performance was evaluated in the presence of noise in the input test images. This work demonstrated the need to include the SLM constraints in the filter design. Finally, a method is suggested to reduce the computation time required for the synthesis of the SLM-MACE filter.
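A generic simulated-annealing loop of the kind used for such filter synthesis is sketched below: each pixel is restricted to values the modulator can realize, and single-pixel moves are accepted by the Metropolis rule. The ternary value set and the smoothness cost are placeholders for an actual SLM constraint set and the MACE correlation energy.

```python
import math
import random

ALLOWED = [-1.0, 0.0, 1.0]   # e.g. a ternary-valued SLM (assumed constraint)
N = 64                       # filter pixels (1-D for brevity)

def cost(x):
    """Placeholder objective standing in for the MACE correlation energy."""
    return (sum((x[i] - x[i - 1])**2 for i in range(1, N))
            + sum(v * v for v in x))

random.seed(0)
x = [random.choice(ALLOWED) for _ in range(N)]
e, temp = cost(x), 5.0
for step in range(20_000):
    i = random.randrange(N)
    old = x[i]
    x[i] = random.choice(ALLOWED)          # propose a single-pixel move
    e_new = cost(x)
    if e_new > e and random.random() > math.exp((e - e_new) / temp):
        x[i] = old                         # reject uphill move
    else:
        e = e_new                          # accept move
    temp *= 0.9997                         # geometric cooling schedule
print("final cost:", e)
```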
Block sparsity-based joint compressed sensing recovery of multi-channel ECG signals.
Singh, Anurag; Dandapat, Samarendra
2017-04-01
In recent years, compressed sensing (CS) has emerged as an effective alternative to conventional wavelet based data compression techniques. This is due to its simple and energy-efficient data reduction procedure, which makes it suitable for resource-constrained wireless body area network (WBAN)-enabled electrocardiogram (ECG) telemonitoring applications. Both spatial and temporal correlations exist simultaneously in multi-channel ECG (MECG) signals. Exploitation of both types of correlations is very important in CS-based ECG telemonitoring systems for better performance. However, most of the existing CS-based works exploit either of the correlations, which results in a suboptimal performance. In this work, within a CS framework, the authors propose to exploit both types of correlations simultaneously using a sparse Bayesian learning-based approach. A spatiotemporal sparse model is employed for joint compression/reconstruction of MECG signals. Discrete wavelet transform domain block sparsity of MECG signals is exploited for simultaneous reconstruction of all the channels. Performance evaluations using the Physikalisch-Technische Bundesanstalt MECG diagnostic database show a significant gain in the diagnostic reconstruction quality of the MECG signals compared with the state-of-the-art techniques at a reduced number of measurements. Low measurement requirement may lead to significant savings in the energy-cost of the existing CS-based WBAN systems.
Kwicklis, Edward M.; Wolfsberg, Andrew V.; Stauffer, Philip H.; Walvoord, Michelle Ann; Sully, Michael J.
2006-01-01
Multiphase, multicomponent numerical models of long-term unsaturated-zone liquid and vapor movement were created for a thick alluvial basin at the Nevada Test Site to predict present-day liquid and vapor fluxes. The numerical models are based on recently developed conceptual models of unsaturated-zone moisture movement in thick alluvium that explain present-day water potential and tracer profiles in terms of major climate and vegetation transitions that have occurred during the past 10 000 yr or more. The numerical models were calibrated using borehole hydrologic and environmental tracer data available from a low-level radioactive waste management site located in a former nuclear weapons testing area. The environmental tracer data used in the model calibration include tracers that migrate in both the liquid and vapor phases (δD, δ18O) and tracers that migrate solely as dissolved solutes (Cl), thus enabling the estimation of some gas-phase as well as liquid-phase transport parameters. Parameter uncertainties and correlations identified during model calibration were used to generate parameter combinations for a set of Monte Carlo simulations to more fully characterize the uncertainty in liquid and vapor fluxes. The calculated background liquid and vapor fluxes decrease as the estimated time since the transition to the present-day arid climate increases. However, on the whole, the estimated fluxes display relatively little variability because correlations among parameters tend to create parameter sets for which changes in some parameters offset the effects of others in the set. Independent estimates of the timing of the climate transition established from packrat midden data were essential for constraining the model calibration results. The study demonstrates the utility of environmental tracer data in developing numerical models of liquid- and gas-phase moisture movement and the importance of considering parameter correlations when using Monte Carlo analysis to characterize the uncertainty in moisture fluxes. © Soil Science Society of America.
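The step this study leans on, propagating calibrated parameter correlations into the Monte Carlo analysis, is commonly done by drawing from a multivariate normal via a Cholesky factor of the parameter covariance. A sketch with invented parameter names, means, and covariances (not the calibrated values from the study):

```python
import numpy as np

rng = np.random.default_rng(42)
names = ["log_permeability", "porosity", "climate_transition_kyr"]  # hypothetical
mean = np.array([-12.0, 0.30, 10.0])
cov = np.array([[ 0.25,   0.010, -0.30 ],     # illustrative covariance;
                [ 0.010,  0.0016, -0.01],     # off-diagonals encode the
                [-0.30,  -0.01,   4.00 ]])    # calibration correlations

L = np.linalg.cholesky(cov)                   # fails if cov is not pos.-definite
samples = mean + rng.standard_normal((1000, 3)) @ L.T

for i, n in enumerate(names):
    print(n, round(float(samples[:, i].mean()), 3),
             round(float(samples[:, i].std()), 3))
```

Each row of `samples` is one correlation-consistent parameter set to feed through the flow model; ignoring the off-diagonal terms would overstate the spread of predicted fluxes, which is the paper's point about offsetting parameter changes.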
The thermodynamic properties of normal liquid helium 3
NASA Astrophysics Data System (ADS)
Modarres, M.; Moshfegh, H. R.
2009-09-01
The thermodynamic properties of normal liquid helium 3 are calculated by using the lowest order constrained variational (LOCV) method. The Landau Fermi liquid model and Fermi-Dirac distribution function are considered as our statistical model for the uncorrelated quantum fluid picture and the Lennard-Jones and Aziz potentials are used in our truncated cluster expansion (LOCV) to calculate the correlated energy. The single particle energy is treated variationally through an effective mass. The free energy, pressure, entropy, chemical potential and liquid phase diagram as well as the helium 3 specific heat are evaluated, discussed and compared with the corresponding available experimental data. It is found that the critical temperature for the existence of the pure gas phase is about 4.90 K (4.45 K), which is higher than the experimental prediction of 3.3 K, and the helium 3 flashing temperature is around 0.61 K (0.50 K) for the Lennard-Jones (Aziz) potential.
Lingam, Manasvi
2016-06-01
In this paper, percolation theory is employed to place tentative bounds on the probability p of interstellar travel and the emergence of a civilization (or panspermia) that colonizes the entire Galaxy. The ensuing ramifications with regard to the Fermi paradox are also explored. In particular, it is suggested that the correlation function of inhabited exoplanets can be used to observationally constrain p in the near future. It is shown, by using a mathematical evolution model known as the Yule process, that the probability distribution for civilizations with a given number of colonized worlds is likely to exhibit a power-law tail. Some of the dynamical aspects of this issue, including the question of timescales and generalizing percolation theory, are also studied. The limitations of these models, and other avenues for future inquiry, are also outlined. Key words: complex life, extraterrestrial life, panspermia, life detection, SETI. Astrobiology 16, 418-426.
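The Yule-process argument can be checked with a few lines of simulation: new worlds attach to existing civilizations in proportion to their current size, with occasional new single-world civilizations, and the size distribution develops a heavy tail. The rates and event counts below are arbitrary illustrative choices.

```python
import random
from collections import Counter

def yule(n_events=200_000, p_new=0.2, seed=7):
    random.seed(seed)
    sizes = [1]       # number of worlds held by each civilization
    worlds = [0]      # one entry per colonized world: its civilization id
    for _ in range(n_events):
        if random.random() < p_new:
            sizes.append(1)                  # a new single-world civilization
            worlds.append(len(sizes) - 1)
        else:
            civ = random.choice(worlds)      # size-proportional attachment
            sizes[civ] += 1
            worlds.append(civ)
    return sizes

counts = Counter(yule())
for s in (1, 2, 4, 8, 16, 32, 64):           # roughly log-spaced sizes
    print(f"civilizations with {s:>2} worlds: {counts.get(s, 0)}")
```

The printed counts fall off roughly as a power law in size, which is the qualitative behavior the paper derives analytically.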
NASA Technical Reports Server (NTRS)
Thomson, J. A. L.; Meng, J. C. S.
1975-01-01
A possible measurement program designed to obtain the information requisite to determining the feasibility of airborne and/or satellite-borne LDV (Laser Doppler Velocimeter) systems is discussed. For the purpose of determining feasibility, ground-based measurements are favored over airborne ones. The expected signal strengths for scattering at various altitudes and elevation angles are examined; it appears that both molecular absorption and ambient turbulence degrade the signal at low elevation angles and effectively constrain ground-based measurements to elevation angles exceeding a critical value. The nature of the wind shear and turbulence to be expected is treated with a linear hydrodynamic model, a mountain lee wave model. The spatial and temporal correlation distances establish requirements on the range resolution, the maximum detectable range, and the allowable integration time.
Geoid, topography, and convection-driven crustal deformation on Venus
NASA Technical Reports Server (NTRS)
Simons, Mark; Hager, Bradford H.; Solomon, Sean C.
1993-01-01
High-resolution Magellan images and altimetry of Venus reveal a wide range of styles and scales of surface deformation that cannot readily be explained within the classical terrestrial plate tectonic paradigm. The high correlation of long-wavelength topography and gravity and the large apparent depths of compensation suggest that Venus lacks an upper-mantle low-viscosity zone. A key difference between Earth and Venus may be the degree of coupling between the convecting mantle and the overlying lithosphere. Mantle flow should then have recognizable signatures in the relationships between the observed surface topography, crustal deformation, and the gravity field. Therefore, comparison of model results with observational data can help to constrain such parameters as crustal and thermal boundary layer thicknesses as well as the character of mantle flow below different Venusian features. We explore in this paper the effects of this coupling by means of a finite element modelling technique.
NASA Technical Reports Server (NTRS)
Mehr, Ali Farhang; Tumer, Irem
2005-01-01
In this paper, we will present a new methodology that measures the "worth" of deploying an additional testing instrument (sensor) in terms of the amount of information that can be retrieved from such measurement. This quantity is obtained using a probabilistic model of RLV's that has been partially developed in the NASA Ames Research Center. A number of correlated attributes are identified and used to obtain the worth of deploying a sensor in a given test point from an information-theoretic viewpoint. Once the information-theoretic worth of sensors is formulated and incorporated into our general model for IHM performance, the problem can be formulated as a constrained optimization problem where reliability and operational safety of the system as a whole is considered. Although this research is conducted specifically for RLV's, the proposed methodology in its generic form can be easily extended to other domains of systems health monitoring.
Nitrogen Species in the Post-Pinatubo Stratosphere: Model Analysis Utilizing UARS Measurements
NASA Technical Reports Server (NTRS)
Danilin, Michael Y.; Rodriguez, Jose M.; Hu, Wen-Jie; Ko, Malcolm K. W.; Weisenstein, Debra K.; Kumer, John B.; Mergenthaler, John L.; Russell, James M., III; Koike, Makoto; Yue, Glenn K.
1999-01-01
We present an analysis of the impact of heterogeneous chemistry on the partitioning of nitrogen species measured by the Upper Atmosphere Research Satellite (UARS) instruments. The UARS measurements utilized include N2O, HNO3, and ClONO2 from the cryogenic limb array etalon spectrometer (CLAES), version 7 (v.7), and temperature, methane, ozone, H2O, HCl, NO and NO2 from the halogen occultation experiment (HALOE), version 18. The analysis is carried out for the UARS data obtained between January 1992 and September 1994 in the 100-to 1-mbar (approx. 17-47 km) altitude range and over 10 degrees latitude bins from 70 S to 70 N. The spatiotemporal evolution of aerosol surface area density (SAD) is adopted from analysis of the Stratospheric Aerosol and Gas Experiment (SAGE) II data. A diurnal steady state photochemical box model, constrained by the temperature, ozone, H2O, CH4, aerosol SAD, and columns of O2 and O3 above the point of interest, has been used as the main tool to analyze these data. Total inorganic nitrogen (NOy) is obtained by three different methods: (1) as a sum of the UARS-measured NO, NO2, HNO3, and ClONO2; (2) from the N2O-NOy correlation, and (3) from the CH4-NOy correlation. To validate our current understanding of stratospheric heterogeneous chemistry for post-Pinatubo conditions, the model-calculated monthly averaged NOx/NOy ratios and the NO, NO2, and HNO3 profiles are compared with the UARS-derived data. In general, the UARS-constrained box model captures the main features of nitrogen species partitioning in the post-Pinatubo years, such as recovery of NOx after the eruption, their seasonal variability and vertical profiles. However, the model underestimates the NO2 content, particularly in the 30- to 7-mbar (approx.23-32 km) range. Comparisons of the calculated temporal behavior of the partial columns of NO2 and HNO3 and ground-based measurements at 45 S and 45 N are also presented. Our analysis indicates that ground-based and HALOE v.18 measurements of the NO2 vertical columns are consistent within the range of their uncertainties and are systematically higher (up to 50%) than the model results at midlatitudes in both hemispheres. Reasonable agreement is obtained for HNO3 columns at 45 S, suggesting some problems with nitrogen species partitioning in the model. Outstanding uncertainties are discussed.
NASA Technical Reports Server (NTRS)
Danilin, Michael Y.; Rodriguez, Jose M.; Hu, Wenjie; Ko, Malcolm K. W.; Weisenstein, Debra K.; Kumer, John B.; Mergenthaler, John L.; Russell, James M., III; Koike, Makoto; Yue, Glenn K.
1999-01-01
We present an analysis of the impact of heterogeneous chemistry on the partitioning of nitrogen species measured by the Upper Atmosphere Research Satellite (UARS) instruments. The UARS measurements utilized include N2O, HNO3, and ClONO2 from the cryogenic limb array etalon spectrometer (CLAES), version 7 (v.7), and temperature, methane, ozone, H2O, HCl, NO and NO2 from the halogen occultation experiment (HALOE), version 18. The analysis is carried out for the UARS data obtained between January 1992 and September 1994 in the 100- to 1-mbar (approx. 17-47 km) altitude range and over 10 deg latitude bins from 70 deg S to 70 deg N. The spatiotemporal evolution of aerosol surface area density (SAD) is adopted from analysis of the Stratospheric Aerosol and Gas Experiment (SAGE) II data. A diurnal steady state photochemical box model, constrained by the temperature, ozone, H2O, CH4, aerosol SAD, and columns of O2 and O3 above the point of interest, has been used as the main tool to analyze these data. Total inorganic nitrogen (NOY) is obtained by three different methods: (1) as a sum of the UARS-measured NO, NO2, HNO3, and ClONO2; (2) from the N2O-NOY correlation; and (3) from the CH4-NOY correlation. To validate our current understanding of stratospheric heterogeneous chemistry for post-Pinatubo conditions, the model-calculated monthly averaged NO(x)/NO(y) ratios and the NO, NO2, and HNO3 profiles are compared with the UARS-derived data. In general, the UARS-constrained box model captures the main features of nitrogen species partitioning in the post-Pinatubo years, such as recovery of NO(x) after the eruption, their seasonal variability and vertical profiles. However, the model underestimates the NO2 content, particularly in the 30- to 7-mbar (approx. 23-32 km) range. Comparisons of the calculated temporal behavior of the partial columns of NO2 and HNO3 and ground-based measurements at 45 deg S and 45 deg N are also presented. Our analysis indicates that ground-based and HALOE v.18 measurements of the NO2 vertical columns are consistent within the range of their uncertainties and are systematically higher (up to 50%) than the model results at midlatitudes in both hemispheres. Reasonable agreement is obtained for HNO3 columns at 45 deg S, suggesting some problems with nitrogen species partitioning in the model. Outstanding uncertainties are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kasanda, Simon Muya; Moodley, Kavilan, E-mail: simon.muya.kasanda@gmail.com, E-mail: moodleyk41@ukzn.ac.za
2014-12-01
We forecast how current (PLANCK) and future (PRISM) cosmic microwave background (CMB) experiments constrain the adiabatic mode and its admixtures with primordial isocurvature modes. The forecasts are based on measurements of the reconstructed CMB lensing potential and lensing-induced CMB B-mode polarization anisotropies in combination with the CMB temperature and E-mode polarization anisotropies. We first study the characteristic features of the CMB temperature, polarization and lensing spectra for adiabatic and isocurvature modes. We then consider how information from the CMB lensing potential and B-mode polarization induced by lensing can improve constraints on an admixture of adiabatic and three correlated isocurvature modes. We find that the CMB lensing spectrum improves constraints on isocurvature modes by at most 10% for the PLANCK and PRISM experiments. The limited improvement is a result of the low amplitude of isocurvature lensing spectra and cancellations between these spectra that render them only slightly detectable. There is a larger gain from using the lensing-induced B-mode polarization spectrum measured by PRISM. In this case constraints on isocurvature mode amplitudes improve by as much as 40% relative to the CMB temperature and E-mode polarization constraints. The addition of both lensing and lensing-induced B-mode polarization information constrains isocurvature mode amplitudes at the few percent level or better. In the case of admixtures of the adiabatic mode with one or two correlated isocurvature modes we find that constraints at the percent level or better are possible. We investigate the dependence of our results on various assumptions in our analysis, such as the inclusion of dark energy parameters, the CMB temperature-lensing correlation, and the presence of primordial tensor modes, and find that these assumptions do not significantly change our main results.
Measurement of psychological disorders using cognitive diagnosis models.
Templin, Jonathan L; Henson, Robert A
2006-09-01
Cognitive diagnosis models are constrained (multiple classification) latent class models that characterize the relationship of questionnaire responses to a set of dichotomous latent variables. Having emanated from educational measurement, several aspects of such models seem well suited to use in psychological assessment and diagnosis. This article presents the development of a new cognitive diagnosis model for use in psychological assessment--the DINO (deterministic input; noisy "or" gate) model--which, as an illustrative example, is applied to evaluate and diagnose pathological gamblers. As part of this example, a demonstration of the estimates obtained by cognitive diagnosis models is provided. Such estimates include the probability an individual meets each of a set of dichotomous Diagnostic and Statistical Manual of Mental Disorders (text revision [DSM-IV-TR]; American Psychiatric Association, 2000) criteria, resulting in an estimate of the probability an individual meets the DSM-IV-TR definition for being a pathological gambler. Furthermore, a demonstration of how the hypothesized underlying factors contributing to pathological gambling can be measured with the DINO model is presented, through use of a covariance structure model for the tetrachoric correlation matrix of the dichotomous latent variables representing DSM-IV-TR criteria. Copyright 2006 APA
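As a rough illustration of the DINO response function, the sketch below computes an endorsement probability from an attribute profile and a Q-matrix row; the gate is 1 when the respondent possesses at least one attribute the item measures. The parameter names (pi_j for endorsement given the gate, r_j for the baseline rate) and all numbers are illustrative assumptions, not values from the article:

    import numpy as np

    def dino_prob(alpha, q, pi_j, r_j):
        """P(endorse item j | attribute profile alpha) under a DINO-style model.
        alpha: 0/1 attribute profile; q: 0/1 Q-matrix row for item j."""
        omega = 1 - np.prod((1 - alpha) ** q)   # 1 if any measured attribute present
        return pi_j ** omega * r_j ** (1 - omega)

    # Hypothetical item measuring two of three DSM-style criteria factors.
    q_row = np.array([1, 1, 0])
    print(dino_prob(np.array([0, 1, 0]), q_row, pi_j=0.85, r_j=0.10))  # -> 0.85
    print(dino_prob(np.array([0, 0, 1]), q_row, pi_j=0.85, r_j=0.10))  # -> 0.10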
Hacker, David E; Hoinka, Jan; Iqbal, Emil S; Przytycka, Teresa M; Hartman, Matthew C T
2017-03-17
Highly constrained peptides such as the knotted peptide natural products are promising medicinal agents because of their impressive biostability and potent activity. Yet, libraries of highly constrained peptides are challenging to prepare. Here, we present a method which utilizes two robust, orthogonal chemical steps to create highly constrained bicyclic peptide libraries. This technology was optimized to be compatible with in vitro selections by mRNA display. We performed side-by-side monocyclic and bicyclic selections against a model protein (streptavidin). Both selections resulted in peptides with mid-nanomolar affinity, and the bicyclic selection yielded a peptide with remarkable protease resistance.
Fuzzy multi-objective chance-constrained programming model for hazardous materials transportation
NASA Astrophysics Data System (ADS)
Du, Jiaoman; Yu, Lean; Li, Xiang
2016-04-01
Hazardous materials transportation is an important and pressing public safety issue. Based on the shortest-path model, this paper presents a fuzzy multi-objective programming model that minimizes the transportation risk to life, travel time and fuel consumption. First, we present the risk model, travel time model and fuel consumption model. Furthermore, we formulate a chance-constrained programming model within the framework of credibility theory, in which the lengths of arcs in the transportation network are assumed to be fuzzy variables. A hybrid intelligent algorithm integrating fuzzy simulation and a genetic algorithm is designed for finding a satisfactory solution. Finally, some numerical examples are given to demonstrate the efficiency of the proposed model and algorithm.
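A minimal sketch of the credibility-theoretic ingredient, assuming triangular fuzzy arc travel times (whose sum is again triangular), so that the chance constraint Cr{time <= T} >= beta reduces to evaluating a credibility distribution at T; the route data are hypothetical:

    def credibility_le(x, a, b, c):
        """Cr{xi <= x} for a triangular fuzzy variable xi = (a, b, c)."""
        if x <= a:
            return 0.0
        if x <= b:
            return (x - a) / (2 * (b - a))
        if x <= c:
            return (x + c - 2 * b) / (2 * (c - b))
        return 1.0

    # Hypothetical route of three arcs with triangular fuzzy travel times (min).
    arcs = [(10, 12, 15), (20, 24, 30), (5, 6, 8)]
    a, b, c = map(sum, zip(*arcs))   # sum of triangular fuzzy numbers is triangular
    T, beta = 55, 0.90
    print(credibility_le(T, a, b, c) >= beta)   # route feasible at this confidence?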
Constraining the Mechanism of D" Anisotropy: Diversity of Observation Types Required
NASA Astrophysics Data System (ADS)
Creasy, N.; Pisconti, A.; Long, M. D.; Thomas, C.
2017-12-01
A variety of different mechanisms have been proposed as explanations for seismic anisotropy at the base of the mantle, including crystallographic preferred orientation of various minerals (bridgmanite, post-perovskite, and ferropericlase) and shape preferred orientation of elastically distinct materials such as partial melt. Investigations of the mechanism for D" anisotropy are usually ambiguous, as seismic observations rarely (if ever) uniquely constrain a mechanism. Observations of shear wave splitting and polarities of SdS and PdP reflections off the D" discontinuity are among our best tools for probing D" anisotropy; however, typical data sets cannot constrain a unique scenario suggested by the mineral physics literature. In this work, we determine what types of body wave observations are required to uniquely constrain a mechanism for D" anisotropy. We test multiple possible models based on both single-crystal and poly-phase elastic tensors provided by mineral physics studies. We predict shear wave splitting parameters for SKS, SKKS, and ScS phases and reflection polarities off the D" interface for a range of possible propagation directions. We run a series of tests that create synthetic data sets by random selection over multiple iterations, controlling the total number of measurements, the azimuthal distribution, and the type of phases. We treat each randomly drawn synthetic dataset with the same methodology as in Ford et al. (2015) to determine the possible mechanism(s), carrying out a grid search over all possible elastic tensors and orientations to determine which are consistent with the synthetic data. We find it is difficult to uniquely constrain the starting model with a realistic number of seismic anisotropy measurements when only one measurement technique or phase type is available. However, having a mix of SKS, SKKS, and ScS measurements, or a mix of shear wave splitting and reflection polarity measurements, dramatically increases the probability of uniquely constraining the starting model. We also explore what types of datasets are needed to uniquely constrain the orientation(s) of anisotropic symmetry if the mechanism is assumed.
NASA Astrophysics Data System (ADS)
Hee, S.; Vázquez, J. A.; Handley, W. J.; Hobson, M. P.; Lasenby, A. N.
2017-04-01
Data-driven model-independent reconstructions of the dark energy equation of state w(z) are presented using Planck 2015 era cosmic microwave background, baryonic acoustic oscillations (BAO), Type Ia supernova (SNIa) and Lyman α (Lyα) data. These reconstructions identify the w(z) behaviour supported by the data and show a bifurcation of the equation of state posterior in the range 1.5 < z < 3. Although the concordance Λ cold dark matter (ΛCDM) model is consistent with the data at all redshifts in one of the bifurcated spaces, in the other, a supernegative equation of state (also known as 'phantom dark energy') is identified within the 1.5σ confidence intervals of the posterior distribution. To identify the power of different data sets in constraining the dark energy equation of state, we use a novel formulation of the Kullback-Leibler divergence. This formalism quantifies the information the data add when moving from priors to posteriors for each possible data set combination. The SNIa and BAO data sets are shown to provide much more constraining power in comparison to the Lyα data sets. Further, SNIa and BAO constrain most strongly around redshift range 0.1-0.5, whilst the Lyα data constrain weakly over a broader range. We do not attribute the supernegative favouring to any particular data set, and note that the ΛCDM model was favoured at more than 2 log-units in Bayes factors over all the models tested despite the weakly preferred w(z) structure in the data.
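The information measure used here is the Kullback-Leibler divergence D_KL = ∫ p ln(p/q) dx between posterior and prior. A toy one-parameter version, with a flat prior on w and a hypothetical Gaussian posterior, can be sketched as:

    import numpy as np

    def kl_divergence(posterior, prior, dx):
        """D_KL(P || Q) in nats for densities sampled on a common grid."""
        p = posterior / np.trapz(posterior, dx=dx)
        q = prior / np.trapz(prior, dx=dx)
        terms = np.where(p > 0, p * np.log(np.where(p > 0, p / q, 1.0)), 0.0)
        return np.trapz(terms, dx=dx)

    x = np.linspace(-3.0, 1.0, 2001)                # grid in w at one redshift node
    prior = np.ones_like(x)                         # broad flat prior on w
    post = np.exp(-0.5 * ((x + 1.0) / 0.1) ** 2)    # hypothetical constrained posterior
    print(f"information gain: {kl_divergence(post, prior, x[1] - x[0]):.2f} nats")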
Constraining ecosystem processes from tower fluxes and atmospheric profiles.
Hill, T C; Williams, M; Woodward, F I; Moncrieff, J B
2011-07-01
The planetary boundary layer (PBL) provides an important link between the scales and processes resolved by global atmospheric sampling/modeling and site-based flux measurements. The PBL is in direct contact with the land surface, both driving and responding to ecosystem processes. Measurements within the PBL (e.g., by radiosondes, aircraft profiles, and flask measurements) have a footprint, and thus an integrating scale, on the order of 1-100 km. We use the coupled atmosphere-biosphere model (CAB) and a Bayesian data assimilation framework to investigate the amount of biosphere process information that can be inferred from PBL measurements. We investigate the information content of PBL measurements in a two-stage study. First, we demonstrate consistency between the coupled model (CAB) and measurements, by comparing the model to eddy covariance flux tower measurements (i.e., water and carbon fluxes) and also PBL scalar profile measurements (i.e., water, carbon dioxide, and temperature) from a Canadian boreal forest. Second, we use the CAB model in a set of Bayesian inversion experiments using synthetic data for a single day. In the synthetic experiment, leaf area and respiration were relatively well constrained, whereas surface albedo and plant hydraulic conductance were only moderately constrained. Finally, the abilities of the PBL profiles and the eddy covariance data to constrain the parameters were largely similar and only slightly lower than the combination of both observations.
Weighting climate model projections using observational constraints.
Gillett, Nathan P
2015-11-13
Projected climate change integrates the net response to multiple climate feedbacks. Whereas existing long-term climate change projections are typically based on unweighted individual climate model simulations, as observed climate change intensifies it is increasingly becoming possible to constrain the net response to feedbacks and hence projected warming directly from observed climate change. One approach scales simulated future warming based on a fit to observations over the historical period, but this approach is only accurate for near-term projections and for scenarios of continuously increasing radiative forcing. For this reason, the recent Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR5) included such observationally constrained projections in its assessment of warming to 2035, but used raw model projections of longer term warming to 2100. Here a simple approach to weighting model projections based on an observational constraint is proposed which does not assume a linear relationship between past and future changes. This approach is used to weight model projections of warming in 2081-2100 relative to 1986-2005 under the Representative Concentration Pathway 4.5 forcing scenario, based on an observationally constrained estimate of the Transient Climate Response derived from a detection and attribution analysis. The resulting observationally constrained 5-95% warming range of 0.8-2.5 K is somewhat lower than the unweighted range of 1.1-2.6 K reported in the IPCC AR5. © 2015 The Authors.
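A sketch of the weighting idea under stated assumptions: each model is weighted by the likelihood of its TCR under a Gaussian observational constraint, and weighted quantiles replace the raw ensemble range. The ensemble values, constraint parameters, and the Gaussian form are all hypothetical:

    import numpy as np
    from scipy.stats import norm

    # Hypothetical ensemble: (TCR in K, warming 2081-2100 vs 1986-2005 in K).
    tcr = np.array([1.3, 1.6, 1.8, 2.0, 2.2, 2.5])
    dT  = np.array([1.1, 1.4, 1.7, 1.9, 2.2, 2.6])
    w = norm.pdf(tcr, loc=1.7, scale=0.3)   # assumed observational TCR constraint
    w /= w.sum()

    def weighted_quantile(x, q, w):
        order = np.argsort(x)
        cdf = np.cumsum(w[order])
        return np.interp(q, cdf, x[order])

    print(weighted_quantile(dT, [0.05, 0.95], w))   # constrained 5-95% warming range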
Numerical methods for the inverse problem of density functional theory
Jensen, Daniel S.; Wasserman, Adam
2017-07-17
Here, the inverse problem of Kohn–Sham density functional theory (DFT) is often solved in an effort to benchmark and design approximate exchange-correlation potentials. The forward and inverse problems of DFT rely on the same equations but the numerical methods for solving each problem are substantially different. We examine both problems in this tutorial with a special emphasis on the algorithms and error analysis needed for solving the inverse problem. Two inversion methods based on partial differential equation constrained optimization and constrained variational ideas are introduced. We compare and contrast several different inversion methods applied to one-dimensional finite and periodic model systems.
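For intuition about what inverting a density involves, here is a one-particle 1D sketch (distinct from the PDE-constrained and variational methods of the tutorial): with a single occupied orbital the inversion is exact, v(x) = (1/2)(√n)''/√n up to a constant. The grid, the harmonic target, and the finite-difference treatment are assumptions of this sketch:

    import numpy as np

    N, L = 401, 12.0
    x = np.linspace(-L / 2, L / 2, N)
    h = x[1] - x[0]

    # Target density: ground state of a harmonic well, n(x) = exp(-x^2)/sqrt(pi).
    n = np.exp(-x**2) / np.sqrt(np.pi)

    phi = np.sqrt(n)
    lap = (np.roll(phi, -1) - 2 * phi + np.roll(phi, 1)) / h**2  # second derivative
    v = 0.5 * lap / phi               # recovered potential, up to a constant
    v -= v[N // 2]                    # fix the constant so v(0) = 0

    core = slice(N // 4, 3 * N // 4)  # edges are noisy (tiny densities, periodic roll)
    print("max |v - x^2/2| in the core:", np.abs(v[core] - 0.5 * x[core]**2).max())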
Solid-like features in dense vapors near the fluid critical point
NASA Astrophysics Data System (ADS)
Ruppeiner, George; Dyjack, Nathan; McAloon, Abigail; Stoops, Jerry
2017-06-01
The phase diagram (pressure versus temperature) of the pure fluid is typically envisioned as being featureless apart from the presence of the liquid-vapor coexistence curve terminating at the critical point. However, a number of recent authors have proposed that this simple picture misses important features, such as the Widom line, the Fisher-Widom line, and the Frenkel line. In our paper, we discuss another way of augmenting the pure fluid phase diagram: lines of zero thermodynamic curvature (R = 0) separating regimes of solid-like fluid behavior (R > 0) from gas-like or liquid-like behavior (R < 0). We systematically evaluate R for the 121 pure fluids in the NIST/REFPROP (version 9.1) fluid database near the saturated vapor line from the triple point to the critical point. Our specific goal was to identify regions of positive R abutting the saturated vapor line ("feature D"). We found the following: (i) 97/121 of the NIST/REFPROP fluids have feature D. (ii) The presence and character of feature D correlates with molecular complexity, taken to be the number of atoms Q per molecule. (iii) The solid-like properties of feature D might be attributable to a mesoscopic model based on correlations among coordinated spinning molecules, a model that might be testable with computer simulations. (iv) There are a number of correlations between thermodynamic quantities, including the acentric factor ω, but we found little explicit correlation between ω and the shape of a molecule. (v) Feature D seriously constrains the size of the asymptotic fluid critical point regime, possibly resolving a long-standing mystery about why these regimes are so small. (vi) Feature D correlates roughly with regimes of anomalous sound propagation.
Using galaxy pairs to investigate the three-point correlation function in the squeezed limit
NASA Astrophysics Data System (ADS)
Yuan, Sihan; Eisenstein, Daniel J.; Garrison, Lehman H.
2017-11-01
We investigate the three-point correlation function (3PCF) in the squeezed limit by considering galaxy pairs as discrete objects and cross-correlating them with the galaxy field. We develop an efficient algorithm using fast Fourier transforms to compute such cross-correlations and their associated pair-galaxy bias bp, g and the squeezed 3PCF coefficient Qeff. We implement our method using N-body cosmological simulations and a fiducial halo occupation distribution (HOD) and present the results in both real space and redshift space. In real space, we observe a peak in bp, g and Qeff at a pair separation of ∼2 Mpc, attributed to the fact that galaxy pairs at 2 Mpc separation trace the most massive dark matter haloes. We also see strong anisotropy in the bp, g and Qeff signals that track the large-scale filamentary structure. In redshift space, both the 2 Mpc peak and the anisotropy are significantly smeared out along the line of sight due to the finger-of-God effect. In both real space and redshift space, the squeezed 3PCF shows a factor of 2 variation, contradicting the hierarchical ansatz, but offering rich information on the galaxy-halo connection. Thus, we explore the possibility of using the squeezed 3PCF to constrain the HOD. When we compare two simple HOD models that are closely matched in their projected two-point correlation function (2PCF), we do not yet see a strong variation in the 3PCF that is clearly disentangled from variations in the projected 2PCF. Nevertheless, we propose that more complicated HOD models, e.g. those incorporating assembly bias, can break degeneracies in the 2PCF and show a distinguishable squeezed 3PCF signal.
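The FFT-based cross-correlation at the heart of the method can be sketched as below, assuming the galaxies and the ∼2 Mpc pair centres have already been deposited onto overdensity grids; the random toy fields and the minimal isotropic estimator stand in for the authors' actual pipeline:

    import numpy as np

    def cross_correlation(delta_a, delta_b, boxsize, nbins=20):
        """Isotropically binned cross-correlation of two overdensity grids."""
        n = delta_a.shape[0]
        raw = np.fft.irfftn(np.fft.rfftn(delta_a) * np.conj(np.fft.rfftn(delta_b)),
                            s=delta_a.shape) / delta_a.size   # <a(x+r) b(x)>
        r1d = np.fft.fftfreq(n) * boxsize                     # signed separations
        rx, ry, rz = np.meshgrid(r1d, r1d, r1d, indexing="ij")
        r = np.sqrt(rx**2 + ry**2 + rz**2).ravel()
        edges = np.linspace(0.0, boxsize / 2, nbins + 1)
        idx = np.digitize(r, edges)
        xi = np.array([raw.ravel()[idx == i + 1].mean() for i in range(nbins)])
        return 0.5 * (edges[1:] + edges[:-1]), xi

    # Toy inputs: delta_g plays the galaxy overdensity, delta_p the overdensity
    # of ~2 Mpc pair centres in the real analysis.
    rng = np.random.default_rng(1)
    delta_g = rng.normal(size=(64, 64, 64))
    delta_p = delta_g + 0.5 * rng.normal(size=(64, 64, 64))
    r, xi_pg = cross_correlation(delta_p, delta_g, boxsize=250.0)
    print(r[:3], xi_pg[:3])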
NASA Astrophysics Data System (ADS)
Wallace, K. L.; Kaufman, D. S.; Schiff, C. J.; Kathan, K.; Werner, A.; Hancock, J.; Hagel, L. A.
2010-12-01
Sediment cores recovered from three kettle lakes, all within 10 km of Anchorage, Alaska, contain a record of tephra fall from major eruptive events of Cook Inlet volcanoes during the past 11250 yr. Prominent tephra layers from multiple cores within each lake were first correlated within each basin using physical properties and major-oxide glass geochemistry, constrained by bracketing radiocarbon ages. Distinct tephras from each lake were then correlated among all three lakes using the same criteria to develop a composite tephrostratigraphic framework for the Anchorage area. Lorraine Lake, the northernmost lake, contains 17 distinct tephra layers; Goose Lake, the easternmost lake, contains 10 distinct tephra layers; and Little Campbell Lake, to the west, contains 7 distinct tephra layers. Thinner, less-prominent tephra layers, reflecting smaller or more distant eruptions, also occur but are not included as part of this study. Of the 33 tephra layers, only two could be confidently correlated among all three lakes, and four other correlative deposits were recognized in two of the three lakes. The minimum number of unique major tephra-fall events in the Anchorage area is 22 in the past 11200 years, or about 1 event every 500 years. This number underestimates the actual number of eruptions because no attempt was made to locate crypto-tephra. All but perhaps one tephra deposit originated from Cook Inlet volcanoes, with the most prolific source being Mount Spurr/Crater Peak, which accounts for at least 8 deposits. Work to combine the radiocarbon ages into an independent age model for each lake is in progress and will aid in confirming correlations and in assigning a detailed modeled age and uncertainty to each tephra layer.
Configurational entropy as a tool to select a physical thick brane model
NASA Astrophysics Data System (ADS)
Chinaglia, M.; Cruz, W. T.; Correa, R. A. C.; de Paula, W.; Moraes, P. H. R. S.
2018-04-01
We analyze braneworld scenarios via a configurational entropy (CE) formalism. Braneworld scenarios have drawn attention mainly due to the fact that they can explain the hierarchy problem and unify the fundamental forces through a symmetry breaking procedure. Those scenarios localize matter on a (3 + 1)-dimensional hypersurface, the brane, which is embedded in a higher-dimensional space, the bulk. Novel analytical braneworld models, in which the warp factor depends on a free parameter n, were recently released in the literature. In this article we provide a way to constrain this parameter through the relation between the information and the dynamics of a system described by the CE. We demonstrate that in some cases the CE is an important tool for identifying the most probable physical system among all the possibilities. In addition, we show that the highest CE is correlated with a tachyonic sector of the configuration, where the solutions for the corresponding model are dynamically unstable.
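The CE itself is simple to compute for a localized profile: take the power spectrum of the configuration, normalize to its peak to form a modal fraction, and integrate -f ln f over the modes (a Gleiser-Stamatopoulos-style definition). The sech-like profiles and the role of n below are illustrative assumptions, not the article's warp factors:

    import numpy as np

    def configurational_entropy(profile, dx):
        """CE of a localized 1D profile: -sum f ln f over modal fractions."""
        F = np.fft.rfft(profile) * dx                 # continuum-normalized transform
        f = np.abs(F) ** 2
        f = f / f.max()                               # modal fraction relative to peak
        dk = 2 * np.pi / (len(profile) * dx)
        terms = np.where(f > 0, f * np.log(np.clip(f, 1e-300, None)), 0.0)
        return -terms.sum() * dk

    x = np.linspace(-20, 20, 4001)
    dx = x[1] - x[0]
    for n in (1, 2, 3):
        rho = np.cosh(x) ** (-n)                      # hypothetical localized profile
        print(f"n = {n}: CE = {configurational_entropy(rho, dx):.4f}")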
NASA Astrophysics Data System (ADS)
Reinert, K. A.
The use of linear decision rules (LDR) and chance-constrained programming (CCP) to optimize the performance of wind energy conversion clusters coupled to storage systems is described. Storage is modelled by LDR and output by CCP. The linear allocation rule and linear release rule prescribe the sizing and operation of a storage facility with a bypass. Chance constraints are introduced to treat reliability explicitly in terms of an appropriate value of an inverse cumulative distribution function. Details of the deterministic programming structure and a sample problem involving a 500 kW and a 1.5 MW WECS are provided, considering an installed cost of $1/kW. Four demand patterns and three levels of reliability are analyzed to optimize the generator choice and the storage configuration for base-load and peak operating conditions. Deficiencies in the ability to predict reliability and to account for serial correlations are noted in the model, which is nevertheless concluded to be useful for narrowing WECS design options.
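The inverse-CDF device mentioned here has a compact deterministic equivalent. A sketch, assuming Gaussian hourly WECS output (the distribution family, numbers, and reliability level are hypothetical):

    from scipy.stats import norm

    # Deterministic equivalent of a chance constraint: require
    #   Pr{ W + r >= d } >= alpha   =>   r >= d - F_W^{-1}(1 - alpha)
    # where W is the (assumed Gaussian) WECS output and r the storage release.
    mu_w, sd_w = 600.0, 180.0        # hypothetical hourly wind output stats, kW
    demand, alpha = 750.0, 0.95
    release_req = demand - norm.ppf(1 - alpha, loc=mu_w, scale=sd_w)
    print(f"storage release needed: {release_req:.0f} kW at {alpha:.0%} reliability")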
Vortex Formation and Acceleration of a Fish-Inspired Robot Performing Starts from Rest
NASA Astrophysics Data System (ADS)
Devoria, Adam; Bapst, Jonathan; Ringuette, Matthew
2009-11-01
We investigate the unsteady flow of a fish-inspired robot executing starts from rest, with the objective of understanding the connection among the kinematics, vortex formation, and acceleration performance. Several fish perform "fast starts," where the body bends into a "C" or "S" shape while turning (phase I), followed by a straightening of the body and caudal fin and a linear acceleration (phase II). The resulting highly 3-D, unsteady vortex formation and its relationship to the acceleration are not well understood. The self-propelled robotic model contains motor-driven joints with programmable motion to emulate phase II of a simplified C-start. The experiments are conducted in a water tank, and the model is constrained to 1 direction along rails. The velocity is measured using digital particle image velocimetry (DPIV) in multiple planes. Vortex boundaries are identified using the finite-time Lyapunov exponent, then the unsteady vortex circulation is computed. The thrust is estimated from the identified vortices, and correlated with the circulation and model acceleration for different kinematics.
NASA Astrophysics Data System (ADS)
Özer, Ahmet Özkan
2016-04-01
An infinite-dimensional model for a three-layer active constrained layer (ACL) beam, consisting of a piezoelectric elastic layer at the top and an elastic host layer at the bottom constraining a viscoelastic layer in the middle, is obtained for clamped-free boundary conditions by using a thorough variational approach. The Rao-Nakra thin compliant layer approximation is adopted to model the sandwich structure, and the electrostatic approach (magnetic effects are ignored) is assumed for the piezoelectric layer. Instead of voltage actuation, the piezoelectric layer is proposed to be activated by a charge (or current) source. We show that the closed-loop system with all-mechanical feedback is uniformly exponentially stable. Our result is the outcome of a compact perturbation argument and a unique continuation result for the spectral problem, which relies on the multipliers method. Finally, the modeling methodology of the paper is generalized to multilayer ACL beams, and the uniform exponential stabilizability result is established analogously.
A method for fitting regression splines with varying polynomial order in the linear mixed model.
Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W
2006-02-15
The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
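A sketch of the implicit-constraint idea: a truncated power basis of order q is automatically continuous (with q-1 continuous derivatives) at its knots, so fitting it imposes the side conditions without explicit constraint equations. The one-knot toy data stand in for a longitudinal outcome, and ordinary least squares stands in for the full mixed model:

    import numpy as np

    def tp_spline_basis(t, knots, order=1):
        """Truncated power basis: continuity at the knots is built in."""
        cols = [t ** p for p in range(order + 1)]                    # polynomial part
        cols += [np.clip(t - k, 0.0, None) ** order for k in knots]  # (t - k)_+^q
        return np.column_stack(cols)

    rng = np.random.default_rng(0)
    t = np.sort(rng.uniform(0, 100, 200))              # e.g., days since infection
    y = 5 - 0.05 * t + 0.04 * np.clip(t - 30, 0, None) + rng.normal(0, 0.3, t.size)
    X = tp_spline_basis(t, knots=[30.0], order=1)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)       # OLS stand-in, fixed effects only
    print(beta)                                        # ~[5, -0.05, 0.04]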
Visual Working Memory Capacity: From Psychophysics and Neurobiology to Individual Differences
Luck, Steven J.; Vogel, Edward K.
2013-01-01
Visual working memory capacity is of great interest because it is strongly correlated with overall cognitive ability, can be understood at the level of neural circuits, and is easily measured. Recent studies have shown that capacity influences tasks ranging from saccade targeting to analogical reasoning. A debate has arisen over whether capacity is constrained by a limited number of discrete representations or by an infinitely divisible resource, but the empirical evidence and neural network models currently favor a discrete item limit. Capacity differs markedly across individuals and groups, and recent research indicates that some of these differences reflect true differences in storage capacity whereas others reflect variations in the ability to use memory capacity efficiently. PMID:23850263
Tunability of the circadian action of tetrachromatic solid-state light sources
NASA Astrophysics Data System (ADS)
Žukauskas, A.; Vaicekauskas, R.
2015-01-01
An approach to the optimization of the spectral power distribution of solid-state light sources with the tunable non-image forming photobiological effect on the human circadian rhythm is proposed. For tetrachromatic clusters of model narrow-band (direct-emission) light-emitting diodes (LEDs), the limiting tunability of the circadian action factor (CAF), which is the ratio of the circadian efficacy to luminous efficacy of radiation, was established as a function of constraining color fidelity and luminous efficacy of radiation. For constant correlated color temperatures (CCTs), the CAF of the LED clusters can be tuned above and below that of the corresponding blackbody radiators, whereas for variable CCT, the clusters can have circadian tunability covering that of a temperature-tunable blackbody radiator.
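A sketch of how a CAF-type figure of merit is evaluated from a cluster's spectral power distribution; the Gaussian LED lines and, especially, the two weighting curves below are crude stand-ins for the standardized circadian and photopic sensitivity functions, so only the structure of the calculation should be trusted:

    import numpy as np

    wl = np.arange(380.0, 781.0)                      # wavelength grid, nm
    def gauss(peak, width):
        return np.exp(-0.5 * ((wl - peak) / width) ** 2)

    drive = [0.20, 0.15, 0.40, 0.25]                  # assumed LED drive levels
    spd = sum(w * gauss(p, 11) for w, p in zip(drive, [450, 520, 590, 630]))
    V = gauss(555, 45)                                # crude stand-in for V(lambda)
    C = gauss(460, 35)                                # crude stand-in for c(lambda)
    caf = np.trapz(spd * C, wl) / np.trapz(spd * V, wl)
    print(f"circadian action factor (relative units): {caf:.2f}")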
Semileptonic B-meson decays to light pseudoscalar mesons on the HISQ ensembles
NASA Astrophysics Data System (ADS)
Gelzer, Zechariah; Bernard, C.; DeTar, C.; El-Khadra, A. X.; Gámiz, E.; Gottlieb, Steven; Kronfeld, Andreas S.; Liu, Yuzhi; Meurice, Y.; Simone, J. N.; Toussaint, D.; Van de Water, R. S.; Zhou, R.
2018-03-01
We report the status of an ongoing lattice-QCD calculation of form factors for exclusive semileptonic decays of B mesons with both charged currents (B → πℓν, Bs → Kℓν) and neutral currents (B → πℓ+ℓ-, B → Kℓ+ℓ-). The results are important for constraining or revealing physics beyond the Standard Model. This work uses MILC's (2+1+1)-flavor ensembles with the HISQ action for the sea and light valence quarks and the clover action in the Fermilab interpretation for the b quark. Simulations are carried out at three lattice spacings down to 0.088 fm, with both physical and unphysical sea-quark masses. We present preliminary results for correlation-function fits.
NASA Astrophysics Data System (ADS)
Bark, Chung W.; Cho, Kyung C.; Koo, Yang M.; Tamura, Nobumichi; Ryu, Sangwoo; Jang, Hyun M.
2007-03-01
The dramatically enhanced polarizations and saturation magnetizations observed in epitaxially constrained BiFeO3 (BFO) thin films, with their pronounced grain-orientation dependence, have attracted much attention and are attributed largely to the constrained in-plane strain. Thus, it is highly desirable to directly obtain information on the two-dimensional (2D) distribution of the in-plane strain and its correlation with the grain orientation of each corresponding microregion. Here the authors report a 2D quantitative mapping of the grain orientation and the local triaxial strain field in a 250 nm thick multiferroic BFO film using a synchrotron x-ray microdiffraction technique. This direct scanning measurement demonstrates that the deviatoric component of the in-plane strain tensor is between 5×10⁻³ and 6×10⁻³ and that the local triaxial strain is fairly well correlated with the grain orientation in that particular region.
Four-state rock-paper-scissors games in constrained Newman-Watts networks.
Zhang, Guo-Yong; Chen, Yong; Qi, Wei-Kai; Qing, Shao-Meng
2009-06-01
We study the cyclic dominance of three species in two-dimensional constrained Newman-Watts networks with a four-state variant of the rock-paper-scissors game. By limiting the maximal connection distance Rmax in Newman-Watts networks with long-range connection probability p, we depict more realistically the stochastic interactions among species within ecosystems. When we fix mobility and vary the value of p or Rmax, the Monte Carlo simulations show that the spiral waves grow in size, and the system becomes unstable and biodiversity is lost with increasing p or Rmax. These results are similar to recent results of Reichenbach et al. [Nature (London) 448, 1046 (2007)], in which they increased the mobility only, without including long-range interactions. We compared extinctions with and without long-range connections and computed spatial correlation functions and the correlation length. We conclude that long-range connections could improve the mobility of species, drastically changing their crossover to extinction and making the system more unstable.
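A minimal Monte Carlo sketch of the four-state dynamics (empty plus three cyclically dominating species) in which the interaction partner is occasionally drawn from a distance-limited long-range neighborhood; the sizes, probabilities, and simplified update rules are assumptions rather than the paper's exact protocol:

    import numpy as np

    rng = np.random.default_rng(2)
    L, p, Rmax = 64, 0.02, 10                # lattice size, long-range prob., max distance
    grid = rng.integers(0, 4, size=(L, L))   # 0 = empty, 1..3 = species

    def partner(i, j):
        if rng.random() < p:                 # constrained Newman-Watts long-range link
            di, dj = rng.integers(-Rmax, Rmax + 1, size=2)
        else:                                # nearest neighbor on the lattice
            di, dj = ((0, 1), (0, -1), (1, 0), (-1, 0))[rng.integers(4)]
        return (i + di) % L, (j + dj) % L

    for _ in range(200 * L * L):             # ~200 Monte Carlo sweeps
        i, j = rng.integers(L, size=2)
        s = grid[i, j]
        if s == 0:
            continue
        k, l = partner(i, j)
        t = grid[k, l]
        if t == 0:
            grid[k, l] = s                   # reproduction into an empty site
        elif t == s % 3 + 1:
            grid[k, l] = 0                   # cyclic predation: 1 > 2 > 3 > 1
        else:
            grid[i, j], grid[k, l] = t, s    # exchange (mobility)

    print("empty, species 1-3:", [(grid == s).sum() for s in range(4)])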
NASA Astrophysics Data System (ADS)
He, W.; Ju, W.; Chen, H.; Peters, W.; van der Velde, I.; Baker, I. T.; Andrews, A. E.; Zhang, Y.; Launois, T.; Campbell, J. E.; Suntharalingam, P.; Montzka, S. A.
2016-12-01
Carbonyl sulfide (OCS) is a promising novel atmospheric tracer for studying carbon cycle processes. OCS shares a similar uptake pathway with CO2 during photosynthesis but is not released through a respiration-like process, and thus can be used to partition Gross Primary Production (GPP) from Net Ecosystem-atmosphere CO2 Exchange (NEE). This study uses joint atmospheric observations of OCS and CO2 to constrain GPP and ecosystem respiration (Re). Flask data from tower and aircraft sites over North America are collected. We employ our recently developed CarbonTracker (CT)-Lagrange carbon assimilation system, which is based on the CT framework and the Weather Research and Forecasting - Stochastic Time-Inverted Lagrangian Transport (WRF-STILT) model, and the Simple Biosphere model with simulated OCS (SiB3-OCS), which provides prior GPP, Re and plant OCS uptake fluxes. Plant OCS fluxes derived from both a process model and a GPP-scaled model are tested in our inversion. To investigate the ability of OCS to constrain GPP and to understand the uncertainty propagated from OCS modeling errors to the constrained fluxes in a dual-tracer system including OCS and CO2, two inversion schemes are implemented and compared: (1) a two-step scheme, which first optimizes GPP using OCS observations and then simultaneously optimizes GPP and Re using CO2 observations, with the OCS-constrained GPP from the first step as prior; and (2) a joint scheme, which simultaneously optimizes GPP and Re using OCS and CO2 observations. We evaluate the result using GPP estimated from space-borne solar-induced fluorescence observations and a data-driven GPP upscaled from FLUXNET data with a statistical model (Jung et al., 2011). Preliminary results for the year 2010 show that the joint inversion makes simulated mole fractions more consistent with observations for both OCS and CO2. However, the uncertainty of the OCS simulation is larger than that of CO2. The two-step and joint schemes perform similarly in improving the consistency with observations for OCS, indicating that OCS provides an independent constraint in the joint inversion. Optimization yields lower total GPP and Re but higher NEE when tested with prior CO2 fluxes from two biosphere models. This study gives insight into the role of joint atmospheric OCS and CO2 observations in constraining CO2 fluxes.
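The leverage OCS provides on GPP is often summarized by the leaf-relative-uptake scaling GPP ≈ F_OCS / (LRU × [OCS]/[CO2]); every number in the sketch below is an assumed order-of-magnitude placeholder:

    # Hypothetical GPP estimate from a plant OCS flux via the LRU scaling.
    f_ocs = 30e-12                 # plant OCS uptake, mol OCS m-2 s-1 (assumed)
    lru = 1.6                      # leaf relative uptake, dimensionless (assumed)
    ratio = 500e-12 / 400e-6       # ambient [OCS]/[CO2] mole-fraction ratio (assumed)
    gpp = f_ocs / (lru * ratio)    # mol CO2 m-2 s-1
    print(f"GPP ~ {gpp * 1e6:.1f} umol CO2 m-2 s-1")   # ~15 umol m-2 s-1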
Connecting medieval megadroughts and surface climate in the Last Millennium Reanalysis
NASA Astrophysics Data System (ADS)
Erb, M. P.; Emile-Geay, J.; Anderson, D. M.; Hakim, G. J.; Horlick, K. A.; Noone, D.; Perkins, W. A.; Steig, E. J.; Tardif, R.
2016-12-01
The North American Drought Atlas shows severe, long-lasting droughts during the Medieval Climate Anomaly. Because drought frequency and severity over the coming century are of vital interest, a better understanding of the causes of these historic droughts is crucial. A variety of research has suggested that a La Niña state was important for producing medieval megadroughts [1], and other work has indicated the potential roles of the Atlantic Multidecadal Oscillation [2] and internal atmospheric variability [3]. Correlations between drought and large-scale climate patterns also exist in the instrumental record [4], but understanding of these relationships is far from complete. To investigate these relationships further, a data assimilation approach is employed. Proxy records - including tree rings, corals, and ice cores - are used to constrain climate states over the Common Era. By using general circulation model (GCM) output to quantify the covariances in the climate system, climate can be constrained not just at proxy sites but for all covarying locations and climate fields. Multiple GCMs will be employed to offset the limitations of imperfect model physics. This "Last Millennium Reanalysis" will be used to quantify relationships between North American medieval megadroughts and sea surface temperature patterns in the Atlantic and Pacific. 1. Cook, E. R., et al., Earth-Sci. Rev. 81, 93 (2007). 2. Oglesby, R., et al., Global Planet. Change 84-85, 56 (2012). 3. Stevenson, S., et al., J. Climate 28, 1865 (2015). 4. Cook, B. I., et al., J. Climate 27, 383 (2014).
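The covariance-spreading step at the heart of such a reanalysis can be sketched as a single ensemble Kalman update, x_a = x_b + K(y - H x_b) with K = B Hᵀ (H B Hᵀ + R)⁻¹ and B estimated from a model ensemble; the two-variable state, proxy error, and ensemble below are toy assumptions:

    import numpy as np

    rng = np.random.default_rng(4)
    # Toy two-variable climate state whose background covariance links a proxy
    # site (variable 0) to an unobserved location (variable 1).
    Xb = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.7], [0.7, 1.0]], size=100)
    H = np.array([[1.0, 0.0]])         # the proxy observes variable 0 only
    R = np.array([[0.3]])              # proxy error variance (assumed)
    y = np.array([1.2])                # proxy-derived observation
    B = np.cov(Xb.T)                   # ensemble-estimated background covariance
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    xb_mean = Xb.mean(axis=0)
    xa_mean = xb_mean + K @ (y - H @ xb_mean)
    print("analysis mean:", xa_mean)   # variable 1 moves via the covariance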
NASA Astrophysics Data System (ADS)
Hunt, Alison C.; Cook, David L.; Lichtenberg, Tim; Reger, Philip M.; Ek, Mattias; Golabek, Gregor J.; Schönbächler, Maria
2018-01-01
The short-lived 182Hf-182W decay system is a powerful chronometer for constraining the timing of metal-silicate separation and core formation in planetesimals and planets. Neutron capture effects on W isotopes, however, significantly hamper the application of this tool. In order to correct for neutron capture effects, Pt isotopes have emerged as a reliable in-situ neutron dosimeter. This study applies this method to IAB iron meteorites, in order to constrain the timing of metal segregation on the IAB parent body. The ε182W values obtained for the IAB iron meteorites range from -3.61 ± 0.10 to -2.73 ± 0.09. Correlating εiPt with ε182W data yields a pre-neutron capture ε182W of -2.90 ± 0.06. This corresponds to a metal-silicate separation age of 6.0 ± 0.8 Ma after CAI for the IAB parent body, and is interpreted to represent a body-wide melting event. Later, between 10 and 14 Ma after CAI, an impact led to a catastrophic break-up and subsequent reassembly of the parent body. Thermal models of the interior evolution that are consistent with these estimates suggest that the IAB parent body underwent metal-silicate separation as a result of internal heating by short-lived radionuclides and accreted at around 1.4 ± 0.1 Ma after CAIs with a radius of greater than 60 km.
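The model-age arithmetic behind a number like 6.0 ± 0.8 Ma after CAI can be sketched in a few lines; the decay constant and the chondritic and solar-system-initial ε182W anchors are typical literature values adopted here as assumptions:

    import numpy as np

    lam = np.log(2) / 8.9          # 182Hf decay constant, 1/Myr (t_1/2 ~ 8.9 Myr)
    eps_ssi = -3.49                # assumed solar-system initial epsilon-182W
    eps_chur = -1.9                # assumed chondritic epsilon-182W
    eps_sample = -2.90             # pre-neutron-capture IAB value from the study
    dt = -np.log((eps_sample - eps_chur) / (eps_ssi - eps_chur)) / lam
    print(f"metal-silicate separation: {dt:.1f} Myr after CAI")   # ~6 Myr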
The Green Sahara: Climate Change, Hydrologic History and Human Occupation
NASA Technical Reports Server (NTRS)
Blom, Ronald G.; Farr, Tom G.; Feynmann, Joan; Ruzmaikin, Alexander; Paillou, Philippe
2009-01-01
Archaeology can provide insight into interactions of climate change and human activities in sensitive areas such as the Sahara, to the benefit of both disciplines. Such analyses can help set bounds on climate change projections, perhaps identify elements of tipping points, and provide constraints on models. The opportunity exists to more precisely constrain the relationship of natural solar and climate interactions, improving understanding of present and future anthropogenic forcing. We are beginning to explore the relationship of human occupation of the Sahara and long-term solar irradiance variations synergetic with changes in atmospheric-ocean circulation patterns. Archaeological and climate records for the last 12 K years are gaining adequate precision to make such comparisons possible. We employ a range of climate records taken over the globe (e.g. Antarctica, Greenland, Cariaco Basin, West African Ocean cores, records from caves) to identify the timing and spatial patterns affecting Saharan climate to compare with archaeological records. We see correlation in changing ocean temperature patterns approx. contemporaneous with drying of the Sahara approx. 6K years BP. The role of radar images and other remote sensing in this work includes providing a geographically comprehensive geomorphic overview of this key area. Such coverage is becoming available from the Japanese PALSAR radar system, which can guide field work to collect archaeological and climatic data to further constrain the climate change chronology and link to models. Our initial remote sensing efforts concentrate on the Gilf Kebir area of Egypt.
NASA Astrophysics Data System (ADS)
Zhang, Chenglong; Zhang, Fan; Guo, Shanshan; Liu, Xiao; Guo, Ping
2018-01-01
An inexact nonlinear mλ-measure fuzzy chance-constrained programming (INMFCCP) model is developed for irrigation water allocation under uncertainty. Techniques of inexact quadratic programming (IQP), mλ-measure, and fuzzy chance-constrained programming (FCCP) are integrated into a general optimization framework. The INMFCCP model can deal with not only nonlinearities in the objective function, but also uncertainties presented as discrete intervals in the objective function, variables and left-hand side constraints and fuzziness in the right-hand side constraints. Moreover, this model improves upon the conventional fuzzy chance-constrained programming by introducing a linear combination of possibility measure and necessity measure with varying preference parameters. To demonstrate its applicability, the model is then applied to a case study in the middle reaches of Heihe River Basin, northwest China. An interval regression analysis method is used to obtain interval crop water production functions in the whole growth period under uncertainty. Therefore, more flexible solutions can be generated for optimal irrigation water allocation. The variation of results can be examined by giving different confidence levels and preference parameters. Besides, it can reflect interrelationships among system benefits, preference parameters, confidence levels and the corresponding risk levels. Comparison between interval crop water production functions and deterministic ones based on the developed INMFCCP model indicates that the former is capable of reflecting more complexities and uncertainties in practical application. These results can provide more reliable scientific basis for supporting irrigation water management in arid areas.
Chien, Tsair-Wei; Chou, Ming-Ting; Wang, Wen-Chung; Tsai, Li-Shu; Lin, Weir-Sen
2012-05-15
Background: Few studies discuss the indicators used to assess the effect on cost containment in healthcare across hospitals in a single-payer national healthcare system with constrained medical resources. We present the intraclass correlation coefficient (ICC) to assess how well Taiwan constrained hospital-provided medical services in such a system. Methods: A custom Excel-VBA routine to record the distances of standard deviations (SDs) from the central line (the mean over the previous 12 months) of a control chart was used to construct and scale annual medical expenditures sequentially from 2000 to 2009 for 421 hospitals in Taiwan to generate the ICC. The ICC was then used to evaluate Taiwan's year-based convergent power to remain unchanged in hospital-provided constrained medical services. A bubble chart of SDs for a specific month was generated to present the effects of using control charts in a national healthcare system. Results: ICCs were generated for Taiwan's year-based convergent power to constrain its medical services from 2000 to 2009. All hospital groups showed a gradually well-controlled supply of services that decreased from 0.772 to 0.415. The bubble chart identified outlier hospitals that required investigation of possible excessive reimbursements in a specific time period. Conclusion: We recommend using the ICC to annually assess a nation's year-based convergent power to constrain medical services across hospitals. Using sequential control charts to regularly monitor hospital reimbursements is required to achieve financial control in a single-payer nationwide healthcare system. PMID:22587736
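For reference, a one-way ICC of the kind used here follows from the between- and within-hospital mean squares as (MSB - MSW)/(MSB + (k - 1)MSW); the random panel below is a toy stand-in for the 421-hospital expenditure data:

    import numpy as np

    def icc_oneway(X):
        """ICC(1,1) from an (n hospitals x k years) measurement matrix."""
        n, k = X.shape
        grand = X.mean()
        msb = k * ((X.mean(axis=1) - grand) ** 2).sum() / (n - 1)
        msw = ((X - X.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
        return (msb - msw) / (msb + (k - 1) * msw)

    rng = np.random.default_rng(3)
    hospital_effect = rng.normal(0.0, 1.0, size=(40, 1))   # toy stand-in panel
    X = hospital_effect + rng.normal(0.0, 0.8, size=(40, 10))
    print(f"ICC = {icc_oneway(X):.3f}")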
GRBs as standard candles: There is no “circularity problem” (and there never was)
NASA Astrophysics Data System (ADS)
Graziani, Carlo
2011-02-01
Beginning with the 2002 discovery of the "Amati Relation" of GRB spectra, there has been much interest in the possibility that this and other correlations of GRB phenomenology might be used to make GRBs into standard candles. One recurring apparent difficulty with this program has been that some of the primary observational quantities to be fit as "data" - to wit, the isotropic-equivalent prompt energy Eiso and the collimation-corrected "total" prompt energy Eγ - depend for their construction on the very cosmological models that they are supposed to help constrain. This is the so-called "circularity problem" of standard candle GRBs. This paper is intended to point out that the circularity problem is not in fact a problem at all, except to the extent that it amounts to a self-inflicted wound. It arises essentially because of an unfortunate choice of data variables - "source-frame" variables such as Eiso, which are unnecessarily encumbered by cosmological considerations. If, instead, the empirical correlations of GRB phenomenology which are formulated in source-variables are mapped to the primitive observational variables (such as fluence) and compared to the observations in that space, then all taint of circularity disappears. I also indicate here a set of procedures for encoding high-dimensional empirical correlations (such as between Eiso, Epk(src),tjet(src), and T45(src)) in a "Gaussian Tube" smeared model that includes both the correlation and its intrinsic scatter, and how that source-variable model may easily be mapped to the space of primitive observables, to be convolved with the measurement errors and fashioned into a likelihood. I discuss the projections of such Gaussian tubes into sub-spaces, which may be used to incorporate data from GRB events that may lack some element of the data (for example, GRBs without ascertained jet-break times). In this way, a large set of inhomogeneously observed GRBs may be assimilated into a single analysis, so long as each possesses at least two correlated data attributes.
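The source of the apparent circularity is visible in one line of algebra: the mapping from the primitive observable (fluence S) to Eiso = 4π d_L(z)² S/(1+z) passes through the luminosity distance and hence the assumed cosmology. A sketch using astropy, with a hypothetical burst:

    import numpy as np
    import astropy.units as u
    from astropy.cosmology import FlatLambdaCDM

    # E_iso = 4 pi d_L(z)^2 S / (1 + z): the "source-frame" energy inherits
    # the assumed cosmology through d_L. Burst values are hypothetical.
    z = 1.5
    fluence = 3e-5 * u.erg / u.cm**2
    for H0, Om0 in [(67.0, 0.32), (74.0, 0.26)]:
        dl = FlatLambdaCDM(H0=H0, Om0=Om0).luminosity_distance(z).to(u.cm)
        e_iso = (4 * np.pi * dl**2 * fluence / (1 + z)).to(u.erg)
        print(f"H0 = {H0}: E_iso = {e_iso:.2e}")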
DOE Office of Scientific and Technical Information (OSTI.GOV)
Renk, Janina; Zumalacárregui, Miguel; Montanari, Francesco, E-mail: renk@thphys.uni-heidelberg.de, E-mail: miguel.zumalacarregui@nordita.org, E-mail: francesco.montanari@helsinki.fi
2016-07-01
We address the impact of consistent modifications of gravity on the largest observable scales, focusing on relativistic effects in galaxy number counts and the cross-correlation between the matter large scale structure (LSS) distribution and the cosmic microwave background (CMB). Our analysis applies to a very broad class of general scalar-tensor theories encoded in the Horndeski Lagrangian and is fully consistent on linear scales, retaining the full dynamics of the scalar field and not assuming quasi-static evolution. As particular examples we consider self-accelerating Covariant Galileons, Brans-Dicke theory and parameterizations based on the effective field theory of dark energy, using the hi_class code to address the impact of these models on relativistic corrections to LSS observables. We find that especially effects which involve integrals along the line of sight (lensing convergence, time delay and the integrated Sachs-Wolfe effect, ISW) can be considerably modified, and even lead to O(1000%) deviations from General Relativity in the case of the ISW effect for Galileon models, for which standard probes such as the growth function only vary by O(10%). These effects become dominant when correlating galaxy number counts at different redshifts and can lead to ∼ 50% deviations in the total signal that might be observable by future LSS surveys. Because of their integrated nature, these deep-redshift cross-correlations are sensitive to modifications of gravity even when probing eras much before dark energy domination. We further isolate the ISW effect using the cross-correlation between LSS and CMB temperature anisotropies and use current data to further constrain Horndeski models. Forthcoming large-volume galaxy surveys using multiple tracers will search for all these effects, opening a new window to probe gravity and cosmic acceleration at the largest scales available in our universe.
The impact of galaxy formation on satellite kinematics and redshift-space distortions
NASA Astrophysics Data System (ADS)
Orsi, Álvaro A.; Angulo, Raúl E.
2018-04-01
Galaxy surveys aim to map the large-scale structure of the Universe and use redshift-space distortions to constrain deviations from general relativity and probe the existence of massive neutrinos. However, the amount of information that can be extracted is limited by the accuracy of theoretical models used to analyse the data. Here, by using the L-Galaxies semi-analytical model run over the Millennium-XXL N-body simulation, we assess the impact of galaxy formation on satellite kinematics and the theoretical modelling of redshift-space distortions. We show that different galaxy selection criteria lead to noticeable differences in the radial distributions and velocity structure of satellite galaxies. Specifically, whereas samples of stellar mass selected galaxies feature satellites that roughly follow the dark matter, emission line satellite galaxies are located preferentially in the outskirts of haloes and display net infall velocities. We demonstrate that capturing these differences is crucial for modelling the multipoles of the correlation function in redshift space, even on large scales. In particular, we show how modelling small-scale velocities with a single Gaussian distribution leads to a poor description of the measured clustering. In contrast, we propose a parametrization that is flexible enough to model the satellite kinematics and that leads to an accurate description of the correlation function down to sub-Mpc scales. We anticipate that our model will be a necessary ingredient in improved theoretical descriptions of redshift-space distortions, which together could result in significantly tighter cosmological constraints and a more optimal exploitation of future large data sets.
NASA Astrophysics Data System (ADS)
Adamczyk, L.; Adams, J. R.; Adkins, J. K.; Agakishiev, G.; Aggarwal, M. M.; Ahammed, Z.; Ajitanand, N. N.; Alekseev, I.; Anderson, D. M.; Aoyama, R.; Aparin, A.; Arkhipkin, D.; Aschenauer, E. C.; Ashraf, M. U.; Attri, A.; Averichev, G. S.; Bairathi, V.; Barish, K.; Behera, A.; Bellwied, R.; Bhasin, A.; Bhati, A. K.; Bhattarai, P.; Bielcik, J.; Bielcikova, J.; Bland, L. C.; Bordyuzhin, I. G.; Bouchet, J.; Brandenburg, J. D.; Brandin, A. V.; Brown, D.; Bryslawskyj, J.; Bunzarov, I.; Butterworth, J.; Caines, H.; Calderón de la Barca Sánchez, M.; Campbell, J. M.; Cebra, D.; Chakaberia, I.; Chaloupka, P.; Chang, Z.; Chankova-Bunzarova, N.; Chatterjee, A.; Chattopadhyay, S.; Chen, J. H.; Chen, X.; Chen, X.; Cheng, J.; Cherney, M.; Christie, W.; Contin, G.; Crawford, H. J.; Dedovich, T. G.; Deng, J.; Deppner, I. M.; Derevschikov, A. A.; Didenko, L.; Dilks, C.; Dong, X.; Drachenberg, J. L.; Draper, J. E.; Dunlop, J. C.; Efimov, L. G.; Elsey, N.; Engelage, J.; Eppley, G.; Esha, R.; Esumi, S.; Evdokimov, O.; Ewigleben, J.; Eyser, O.; Fatemi, R.; Fazio, S.; Federic, P.; Federicova, P.; Fedorisin, J.; Feng, Z.; Filip, P.; Finch, E.; Fisyak, Y.; Flores, C. E.; Fujita, J.; Fulek, L.; Gagliardi, C. A.; Geurts, F.; Gibson, A.; Girard, M.; Grosnick, D.; Gunarathne, D. S.; Guo, Y.; Gupta, A.; Guryn, W.; Hamad, A. I.; Hamed, A.; Harlenderova, A.; Harris, J. W.; He, L.; Heppelmann, S.; Heppelmann, S.; Herrmann, N.; Hirsch, A.; Horvat, S.; Huang, X.; Huang, H. Z.; Huang, T.; Huang, B.; Humanic, T. J.; Huo, P.; Igo, G.; Jacobs, W. W.; Jentsch, A.; Jia, J.; Jiang, K.; Jowzaee, S.; Judd, E. G.; Kabana, S.; Kalinkin, D.; Kang, K.; Kapukchyan, D.; Kauder, K.; Ke, H. W.; Keane, D.; Kechechyan, A.; Khan, Z.; Kikoła, D. P.; Kim, C.; Kisel, I.; Kisiel, A.; Kochenda, L.; Kocmanek, M.; Kollegger, T.; Kosarzewski, L. K.; Kraishan, A. F.; Krauth, L.; Kravtsov, P.; Krueger, K.; Kulathunga, N.; Kumar, L.; Kvapil, J.; Kwasizur, J. H.; Lacey, R.; Landgraf, J. M.; Landry, K. D.; Lauret, J.; Lebedev, A.; Lednicky, R.; Lee, J. H.; Li, W.; Li, C.; Li, X.; Li, Y.; Lidrych, J.; Lin, T.; Lisa, M. A.; Liu, Y.; Liu, H.; Liu, F.; Liu, P.; Ljubicic, T.; Llope, W. J.; Lomnitz, M.; Longacre, R. S.; Luo, X.; Luo, S.; Ma, L.; Ma, Y. G.; Ma, G. L.; Ma, R.; Magdy, N.; Majka, R.; Mallick, D.; Margetis, S.; Markert, C.; Matis, H. S.; Mayes, D.; Meehan, K.; Mei, J. C.; Miller, Z. W.; Minaev, N. G.; Mioduszewski, S.; Mishra, D.; Mizuno, S.; Mohanty, B.; Mondal, M. M.; Morozov, D. A.; Mustafa, M. K.; Nasim, Md.; Nayak, T. K.; Nelson, J. M.; Nemes, D. B.; Nie, M.; Nigmatkulov, G.; Niida, T.; Nogach, L. V.; Nonaka, T.; Nurushev, S. B.; Odyniec, G.; Ogawa, A.; Oh, K.; Okorokov, V. A.; Olvitt, D.; Page, B. S.; Pak, R.; Pandit, Y.; Panebratsev, Y.; Pawlik, B.; Pei, H.; Perkins, C.; Pluta, J.; Poniatowska, K.; Porter, J.; Posik, M.; Pruthi, N. K.; Przybycien, M.; Putschke, J.; Quintero, A.; Ramachandran, S.; Ray, R. L.; Reed, R.; Rehbein, M. J.; Ritter, H. G.; Roberts, J. B.; Rogachevskiy, O. V.; Romero, J. L.; Roth, J. D.; Ruan, L.; Rusnak, J.; Rusnakova, O.; Sahoo, N. R.; Sahu, P. K.; Salur, S.; Sandweiss, J.; Saur, M.; Schambach, J.; Schmah, A. M.; Schmidke, W. B.; Schmitz, N.; Schweid, B. R.; Seger, J.; Sergeeva, M.; Seto, R.; Seyboth, P.; Shah, N.; Shahaliev, E.; Shanmuganathan, P. V.; Shao, M.; Shen, W. Q.; Shi, S. S.; Shi, Z.; Shou, Q. Y.; Sichtermann, E. P.; Sikora, R.; Simko, M.; Singha, S.; Skoby, M. J.; Smirnov, N.; Smirnov, D.; Solyst, W.; Sorensen, P.; Spinka, H. M.; Srivastava, B.; Stanislaus, T. D. S.; Stewart, D. 
J.; Strikhanov, M.; Stringfellow, B.; Suaide, A. A. P.; Sugiura, T.; Sumbera, M.; Summa, B.; Sun, X.; Sun, X. M.; Sun, Y.; Surrow, B.; Svirida, D. N.; Tang, Z.; Tang, A. H.; Taranenko, A.; Tarnowsky, T.; Tawfik, A.; Thäder, J.; Thomas, J. H.; Timmins, A. R.; Tlusty, D.; Todoroki, T.; Tokarev, M.; Trentalange, S.; Tribble, R. E.; Tribedy, P.; Tripathy, S. K.; Trzeciak, B. A.; Tsai, O. D.; Tu, B.; Ullrich, T.; Underwood, D. G.; Upsal, I.; Van Buren, G.; van Nieuwenhuizen, G.; Vasiliev, A. N.; Videbæk, F.; Vokal, S.; Voloshin, S. A.; Vossen, A.; Wang, G.; Wang, Y.; Wang, Y.; Wang, F.; Webb, G.; Webb, J. C.; Wen, L.; Westfall, G. D.; Wieman, H.; Wissink, S. W.; Witt, R.; Wu, Y.; Xiao, Z. G.; Xie, G.; Xie, W.; Xu, N.; Xu, Y. F.; Xu, Q. H.; Xu, Z.; Yang, Y.; Yang, C.; Yang, S.; Yang, Q.; Ye, Z.; Ye, Z.; Yi, L.; Yip, K.; Yoo, I.-K.; Zbroszczyk, H.; Zha, W.; Zhang, J. B.; Zhang, J.; Zhang, S.; Zhang, J.; Zhang, S.; Zhang, Z.; Zhang, Y.; Zhang, L.; Zhang, X. P.; Zhao, J.; Zhong, C.; Zhou, C.; Zhou, L.; Zhu, X.; Zhu, Z.; Zyzak, M.
2018-05-01
The transversity distribution, which describes transversely polarized quarks in transversely polarized nucleons, is a fundamental component of the spin structure of the nucleon, and is only loosely constrained by global fits to existing semi-inclusive deep inelastic scattering (SIDIS) data. In transversely polarized p↑ + p collisions it can be accessed using transverse polarization dependent fragmentation functions which give rise to azimuthal correlations between the polarization of the struck parton and the final state scalar mesons. This letter reports on spin-dependent di-hadron correlations measured by the STAR experiment. The new dataset corresponds to 25 pb⁻¹ integrated luminosity of p↑ + p collisions at √s = 500 GeV, an increase of more than a factor of ten compared to our previous measurement at √s = 200 GeV. Non-zero asymmetries sensitive to transversity are observed at a Q² of several hundred GeV² and are found to be consistent with the earlier measurement and a model calculation. We expect that these data will enable an extraction of transversity with comparable precision to current SIDIS datasets but at much higher momentum transfers where subleading effects are suppressed.
Nesting behavior of house mice (Mus domesticus) selected for increased wheel-running activity.
Carter, P A; Swallow, J G; Davis, S J; Garland, T
2000-03-01
Nest building was measured in "active" (housed with access to running wheels) and "sedentary" (without wheel access) mice (Mus domesticus) from four replicate lines selected for 10 generations for high voluntary wheel-running behavior, and from four randombred control lines. Based on previous studies of mice bidirectionally selected for thermoregulatory nest building, it was hypothesized that nest building would show a negative correlated response to selection on wheel-running. Such a response could constrain the evolution of high voluntary activity because nesting has also been shown to be positively genetically correlated with successful production of weaned pups. With wheel access, selected mice of both sexes built significantly smaller nests than did control mice. Without wheel access, selected females also built significantly smaller nests than did control females, but only when body mass was excluded from the statistical model, suggesting that body mass mediated this correlated response to selection. Total distance run and mean running speed on wheels were significantly higher in selected mice than in controls, but no differences in amount of time spent running were measured, indicating a complex cause of the response of nesting to selection for voluntary wheel running.
NASA Astrophysics Data System (ADS)
Grava, Cesare; Hurley, Dana M.; Retherford, Kurt D.; Gladstone, G. Randall; Feldman, Paul D.; Pryor, Wayne R.; Greathouse, Thomas K.; Mandt, Kathleen E.
2017-04-01
Helium was one of the first elements discovered in the lunar exosphere, detected by the mass spectrometer LACE (Lunar Atmosphere Composition Experiment) deployed at the lunar surface during the Apollo 17 mission. Most of it comes from neutralization of solar wind alpha particles impinging on the lunar surface, but there is increasing evidence that a non-negligible fraction diffuses from the interior of the Moon as a result of radioactive decay of thorium and uranium. Therefore, pinpointing the amount of endogenic helium can constrain the abundance of these two elements in the crust, with implications for the formation of the Moon. The Lyman-Alpha Mapping Project (LAMP) far-UV spectrograph onboard the Lunar Reconnaissance Orbiter (LRO) carried out an atmospheric campaign to study lunar exospheric helium. The spacecraft was pitched along the direction of motion to look through a longer illuminated column of gas, compared to the usual nadir-looking mode, thereby enhancing the brightness of the 58.4 nm emission line of helium atoms resonantly scattering solar photons. The lines of sight of the observations spanned a variety of local times, latitudes, longitudes, and altitudes, allowing us to reconstruct the temporal and spatial distribution of helium and its radial density profile with the help of an exospheric model. Moreover, by correlating the helium density inferred by LAMP with the flux of solar wind alpha particles (the main source of lunar helium) measured by the twin ARTEMIS spacecraft, it is possible to constrain the amount of helium which comes from the interior of the Moon via outgassing. While most of the observations can be explained by the exospheric model, we have found discrepancies between the model and LAMP observations, with the former underestimating the latter, especially at northern selenographic latitudes, where LRO's altitude is highest. Such discrepancies suggest that the vertical distribution of helium differs from a Chamberlain exospheric model, an interesting result considering that helium does not interact with the lunar surface, and may be indicative of a different thermal population of helium atoms. We present results from over 150 observations performed routinely from 2013 to 2016 to look for trends in the spatial and temporal distribution of helium and to constrain the fraction of endogenous helium relative to the solar wind contribution.
Advance in prediction of soil slope instabilities
NASA Astrophysics Data System (ADS)
Sigarán-Loría, C.; Hack, R.; Nieuwenhuis, J. D.
2012-04-01
Six generic soils (clays and sands) were systematically modeled with plane-strain finite elements (FE) at varying heights and inclinations. A dataset was generated in order to develop predictive relations for soil slope instabilities, in terms of co-seismic displacements (u), under strong motions using linear multiple regression. For simplicity, the seismic loads are monochromatic artificial sinusoidal functions at four frequencies: 1, 2, 4, and 6 Hz, and the slope failure criterion used corresponds to near 10% Cartesian shear strains along a continuous region comparable to a slip surface. The generated dataset comprises variables from the slope geometry and site conditions: height, H, inclination, i, shear wave velocity of the upper 30 m, vs30, site period, Ts; as well as the input strong motion: yield acceleration, ay (equal to peak ground acceleration, PGA, in this research), frequency, f; and in some cases moment magnitude, M, and Arias intensity, Ia, assumed from empirical correlations. Different datasets or scenarios were created: "Magnitude-independent", "Magnitude-dependent", and "Soil-dependent", and the data were statistically explored and analyzed with varying mathematical forms. Qualitative relations show that the permanent deformations are highly related to the soil class for the clay slopes, but not for the sand slopes. Furthermore, the slope height does not constrain the variability in the co-seismic displacements. The input frequency decreases the variability of the co-seismic displacements for the "Magnitude-dependent" and "Soil-dependent" datasets. The empirical models were developed with two and three predictors. For the sands this was not possible because they could not satisfy the constraints of the statistical method. For the clays, the best models with the smallest errors coincided with the simple general form of multiple regression with three predictors (e.g. near 0.16 and 0.21 standard error, S.E., and 0.75 and 0.55 R² for the "M-independent" and "M-dependent" datasets, respectively). Among the models with two predictors, a 2nd-order polynomial gave the best performance but with a non-significant parameter. The best models with both predictors significant have slightly larger error and smaller R², e.g. 0.15 S.E. and 44% R² with ay and i. The predictive models obtained with the three scenarios for the clay slopes provide well-constrained predictions but low R², suggesting the predictors are "not complete", most likely in relation to the simplicity of the strong motion characterization. Nevertheless, the findings from this work demonstrate the potential of analytical methods for developing more precise predictions, as well as the importance of treating different ground types.
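As a rough illustration of the regression step described above, the following sketch fits a two-predictor model for log-displacement using yield acceleration ay and inclination i, and reports S.E. and R². The synthetic data and coefficients are illustrative assumptions only; they do not reproduce the study's finite-element dataset.

```python
# Sketch: two-predictor multiple linear regression for co-seismic slope
# displacement, in the spirit of the clay-slope models described above.
# The functional form log10(u) ~ b0 + b1*ay + b2*i and all numbers are
# illustrative assumptions, not the paper's fitted coefficients.
import numpy as np

rng = np.random.default_rng(0)
n = 200
ay = rng.uniform(0.05, 0.5, n)      # yield acceleration (= PGA here), g
inc = rng.uniform(15.0, 40.0, n)    # slope inclination, degrees

# Hypothetical "true" relation plus scatter, standing in for FE results
log_u = -1.0 + 2.5 * ay + 0.03 * inc + rng.normal(0.0, 0.15, n)

# Ordinary least squares via numpy's lstsq
X = np.column_stack([np.ones(n), ay, inc])
beta, *_ = np.linalg.lstsq(X, log_u, rcond=None)
resid = log_u - X @ beta

se = np.sqrt(resid @ resid / (n - X.shape[1]))   # standard error
r2 = 1.0 - (resid @ resid) / np.sum((log_u - log_u.mean()) ** 2)
print(f"coefficients: {beta}, S.E.: {se:.3f}, R^2: {r2:.2f}")
```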
Charge redistribution in QM:QM ONIOM model systems: a constrained density functional theory approach
NASA Astrophysics Data System (ADS)
Beckett, Daniel; Krukau, Aliaksandr; Raghavachari, Krishnan
2017-11-01
The ONIOM hybrid method has found considerable success in QM:QM studies designed to approximate a high level of theory at a significantly reduced cost. This cost reduction is achieved by treating only a small model system with the target level of theory and the rest of the system with a low, inexpensive, level of theory. However, the choice of an appropriate model system is a limiting factor in ONIOM calculations, and effects such as charge redistribution across the model system boundary must be considered as a source of error. In an effort to increase the general applicability of the ONIOM model, a method to treat the charge redistribution effect is developed using constrained density functional theory (CDFT) to constrain the charge experienced by the model system in the full calculation to the link atoms in the truncated model system calculations. Two separate CDFT-ONIOM schemes are developed and tested on a set of 20 reactions with eight combinations of levels of theory. It is shown that a scheme using a scaled Lagrange multiplier term obtained from the low-level CDFT model calculation outperforms ONIOM by 32% to 70% at each combination of levels of theory.
NASA Astrophysics Data System (ADS)
Beeler, N. M.; Thomas, Amanda; Bürgmann, Roland; Shelly, David
2018-01-01
Families of recurring low-frequency earthquakes (LFEs) within nonvolcanic tremor on the San Andreas Fault in central California are sensitive to tidal stresses. LFEs occur at all levels of the tides, are strongly correlated and in phase with the 200 Pa shear stresses, and weakly and not systematically correlated with the 2 kPa tidal normal stresses. We assume that LFEs are small sources that repeatedly fail during shear within a much larger scale, aseismically slipping fault zone and consider two different models of the fault slip: (1) modulation of the fault slip rate by the tidal stresses or (2) episodic slip, triggered by the tides. LFEs are strongly clustered with duration much shorter than the semidiurnal tide; they cannot be significantly modulated on that time scale. The recurrence times of clusters, however, are many times longer than the semidiurnal period, leading to an appearance of tidal triggering. In this context we examine the predictions of laboratory-observed triggered frictional (dilatant) fault slip. The undrained end-member model produces no sensitivity to the tidal normal stress, and slip onsets are in phase with the tidal shear stress. The tidal correlation constrains the diffusivity to be less than 1 × 10⁻⁶/s and the product of the friction and dilatancy coefficients to be at most 5 × 10⁻⁷, orders of magnitude smaller than observed at room temperature. In the absence of dilatancy the effective normal stress at failure would be about 55 kPa. For this model the observations require intrinsic weakness, low dilatancy, and lithostatic pore fluid.
Testing the consistency of three-point halo clustering in Fourier and configuration space
NASA Astrophysics Data System (ADS)
Hoffmann, K.; Gaztañaga, E.; Scoccimarro, R.; Crocce, M.
2018-05-01
We compare reduced three-point correlations Q of matter, haloes (as proxies for galaxies) and their cross-correlations, measured in a total simulated volume of ~100 (h⁻¹ Gpc)³, to predictions from leading order perturbation theory on a large range of scales in configuration space. Predictions for haloes are based on the non-local bias model, employing linear (b₁) and non-linear (c₂, g₂) bias parameters, which have been constrained previously from the bispectrum in Fourier space. We also study predictions from two other bias models, one local (g₂ = 0) and one in which c₂ and g₂ are determined by b₁ via approximately universal relations. Overall, measurements and predictions agree when Q is derived for triangles with (r₁r₂r₃)^(1/3) ≳ 60 h⁻¹ Mpc, where r₁–r₃ are the sizes of the triangle legs. Predictions for Q_matter, based on the linear power spectrum, show significant deviations from the measurements at the BAO scale (given our small measurement errors), which strongly decrease when adding a damping term or using the non-linear power spectrum, as expected. Predictions for Q_halo agree best with measurements at large scales when considering non-local contributions. The universal bias model works well for haloes and might therefore be also useful for tightening constraints on b₁ from Q in galaxy surveys. Such constraints are independent of the amplitude of matter density fluctuations (σ₈) and hence break the degeneracy between b₁ and σ₈, present in galaxy two-point correlations.
Tharakaraman, Kannan; Watanabe, Satoru; Chan, Kuan Rong; Huan, Jia; Subramanian, Vidya; Chionh, Yok Hian; Raguram, Aditya; Quinlan, Devin; McBee, Megan; Ong, Eugenia Z; Gan, Esther S; Tan, Hwee Cheng; Tyagi, Anu; Bhushan, Shashi; Lescar, Julien; Vasudevan, Subhash G; Ooi, Eng Eong; Sasisekharan, Ram
2018-05-09
Following the recent emergence of Zika virus (ZIKV), many murine and human neutralizing anti-ZIKV antibodies have been reported. Given the risk of virus escape mutants, engineering antibodies that target mutationally constrained epitopes with therapeutically relevant potencies can be valuable for combating future outbreaks. Here, we applied computational methods to engineer an antibody, ZAb_FLEP, that targets a highly networked and therefore mutationally constrained surface formed by the envelope protein dimer. ZAb_FLEP neutralized a breadth of ZIKV strains and protected mice in distinct in vivo models, including resolving vertical transmission and fetal mortality in infected pregnant mice. Serial passaging of ZIKV in the presence of ZAb_FLEP failed to generate viral escape mutants, suggesting that its epitope is indeed mutationally constrained. A single-particle cryo-EM reconstruction of the Fab-ZIKV complex validated the structural model and revealed insights into ZAb_FLEP's neutralization mechanism. ZAb_FLEP has potential as a therapeutic in future outbreaks.
Liao, Bolin; Zhang, Yunong; Jin, Long
2016-02-01
In this paper, a new Taylor-type numerical differentiation formula is first presented to discretize the continuous-time Zhang neural network (ZNN) and obtain higher computational accuracy. Based on the Taylor-type formula, two Taylor-type discrete-time ZNN models (termed Taylor-type discrete-time ZNNK and Taylor-type discrete-time ZNNU models) are then proposed and discussed to perform online dynamic equality-constrained quadratic programming. For comparison, Euler-type discrete-time ZNN models (called Euler-type discrete-time ZNNK and Euler-type discrete-time ZNNU models) and Newton iteration, with interesting links being found, are also presented. It is proved herein that the steady-state residual errors of the proposed Taylor-type discrete-time ZNN models, Euler-type discrete-time ZNN models, and Newton iteration have the patterns of O(h³), O(h²), and O(h), respectively, with h denoting the sampling gap. Numerical experiments, including the application examples, are carried out, of which the results further substantiate the theoretical findings and the efficacy of Taylor-type discrete-time ZNN models. Finally, the comparisons with the Taylor-type discrete-time derivative model and other Lagrange-type discrete-time ZNN models for dynamic equality-constrained quadratic programming substantiate the superiority of the proposed Taylor-type discrete-time ZNN models once again.
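The sketch below illustrates the residual-error patterns at issue, using standard one-sided difference formulas of order O(h), O(h²), and O(h³) as stand-ins; it does not reproduce the paper's specific Taylor-type ZNN discretization formula.

```python
# Sketch: why higher-order discretization formulas yield errors that
# shrink faster with the sampling gap h. Standard one-sided backward
# differences are used here as assumed stand-ins for the Euler-type
# (O(h)) and Taylor-type (higher-order) formulas discussed above.
import numpy as np

f, df = np.sin, np.cos
x = 1.0

def errors(h):
    e1 = (f(x) - f(x - h)) / h                                        # O(h)
    e2 = (3*f(x) - 4*f(x - h) + f(x - 2*h)) / (2*h)                   # O(h^2)
    e3 = (11*f(x) - 18*f(x - h) + 9*f(x - 2*h) - 2*f(x - 3*h)) / (6*h)  # O(h^3)
    return [abs(e - df(x)) for e in (e1, e2, e3)]

for h in (0.1, 0.01, 0.001):
    print(h, ["%.2e" % e for e in errors(h)])
# Shrinking h by 10x cuts the three errors by ~10x, ~100x and ~1000x,
# mirroring the O(h), O(h^2), O(h^3) steady-state residual patterns.
```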
NASA Astrophysics Data System (ADS)
Hulsman, Petra; Savenije, Hubert; Bogaard, Thom
2017-04-01
In hydrology and water resources management, precipitation and discharge are the main time series for hydrological modelling. However, in African river catchments, the quantity and quality of the available precipitation stations and discharge measurements are unfortunately often inadequate for reliable hydrological modelling. To cope with these uncertainties, this study proposes to calibrate on water levels and to constrain the model using the Normalised Difference Infrared Index (NDII) as a proxy for root zone moisture stress. With the NDII, the leaf water content can be monitored. Previous studies related the NDII to the equivalent water thickness (EWT) of leaves, which is used to determine the vegetation water content (VWC). As the water content in the leaves is related to the water content in the root zone, the NDII can also be used as an indicator of the soil moisture content in the root zone. In previous studies it was found that the root zone moisture content is exponentially correlated to the NDII during periods of moisture stress. In this study, the semi-distributed rainfall runoff model FLEX-Topo has been applied to the Mara River Basin. In this model seven sub-basins are distinguished and four hydrological response units, each with a unique model structure based on the expected dominant flow processes. To calibrate the model, water levels have been back-calculated from modelled discharges using cross-section data and the Strickler formula, calibrating the lumped parameter k·s^(1/2), and compared to measured water levels. In addition, the correlation between the NDII and root zone moisture content has been analysed for this river basin for each sub-catchment and hydrological response unit. Also, the application of the NDII as a model constraint or for calibration has been analysed.
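A minimal sketch of the water-level back-calculation step follows, assuming a rectangular cross-section and an illustrative value of the lumped parameter k·s^(1/2); the study itself uses surveyed cross-section data.

```python
# Sketch: back-calculating a water level from a modelled discharge with
# the Strickler formula, as in the FLEX-Topo calibration above. The
# rectangular section, width B and kS_sqrtS value are assumptions.
from scipy.optimize import brentq

def discharge(d, B=20.0, kS_sqrtS=1.5):
    """Q = k*s^(1/2) * A * R^(2/3) for a rectangular section of width B;
    kS_sqrtS lumps the calibrated Strickler coefficient and slope term."""
    A = B * d                  # flow area, m^2
    R = A / (B + 2.0 * d)      # hydraulic radius, m
    return kS_sqrtS * A * R ** (2.0 / 3.0)

def water_level(Q, d_max=20.0):
    """Invert the rating relation for depth by root finding."""
    return brentq(lambda d: discharge(d) - Q, 1e-6, d_max)

print(water_level(100.0))   # depth (m) reproducing a 100 m^3/s discharge
```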
Wallis, Thomas S. A.; Dorr, Michael; Bex, Peter J.
2015-01-01
Sensitivity to luminance contrast is a prerequisite for all but the simplest visual systems. To examine contrast increment detection performance in a way that approximates the natural environmental input of the human visual system, we presented contrast increments gaze-contingently within naturalistic video freely viewed by observers. A band-limited contrast increment was applied to a local region of the video relative to the observer's current gaze point, and the observer made a forced-choice response to the location of the target (≈25,000 trials across five observers). We present exploratory analyses showing that performance improved as a function of the magnitude of the increment and depended on the direction of eye movements relative to the target location, the timing of eye movements relative to target presentation, and the spatiotemporal image structure at the target location. Contrast discrimination performance can be modeled by assuming that the underlying contrast response is an accelerating nonlinearity (arising from a nonlinear transducer or gain control). We implemented one such model and examined the posterior over model parameters, estimated using Markov-chain Monte Carlo methods. The parameters were poorly constrained by our data; parameters constrained using strong priors taken from previous research showed poor cross-validated prediction performance. Atheoretical logistic regression models were better constrained and provided similar prediction performance to the nonlinear transducer model. Finally, we explored the properties of an extended logistic regression that incorporates both eye movement and image content features. Models of contrast transduction may be better constrained by incorporating data from both artificial and natural contrast perception settings. PMID:26057546
A probabilistic assessment of calcium carbonate export and dissolution in the modern ocean
NASA Astrophysics Data System (ADS)
Battaglia, Gianna; Steinacher, Marco; Joos, Fortunat
2016-05-01
The marine cycle of calcium carbonate (CaCO3) is an important element of the carbon cycle and co-governs the distribution of carbon and alkalinity within the ocean. However, CaCO3 export fluxes and mechanisms governing CaCO3 dissolution are highly uncertain. We present an observationally constrained, probabilistic assessment of the global and regional CaCO3 budgets. Parameters governing pelagic CaCO3 export fluxes and dissolution rates are sampled using a Monte Carlo scheme to construct a 1000-member ensemble with the Bern3D ocean model. Ensemble results are constrained by comparing simulated and observation-based fields of excess dissolved calcium carbonate (TA*). The minerals calcite and aragonite are modelled explicitly and ocean-sediment fluxes are considered. For local dissolution rates, either a strong or a weak dependency on CaCO3 saturation is assumed. In addition, there is the option to have saturation-independent dissolution above the saturation horizon. The median (and 68 % confidence interval) of the constrained model ensemble for global biogenic CaCO3 export is 0.90 (0.72-1.05) Gt C yr⁻¹, which is within the lower half of previously published estimates (0.4-1.8 Gt C yr⁻¹). The spatial pattern of CaCO3 export is broadly consistent with earlier assessments. Export is large in the Southern Ocean, the tropical Indo-Pacific and the northern Pacific, and relatively small in the Atlantic. The constrained results are robust across a range of diapycnal mixing coefficients and, thus, ocean circulation strengths. Modelled ocean circulation and transport timescales for the different set-ups were further evaluated with CFC-11 and radiocarbon observations. Parameters and mechanisms governing dissolution are hardly constrained by either the TA* data or the current compilation of CaCO3 flux measurements, such that model realisations with and without saturation-dependent dissolution achieve skill. We suggest applying saturation-independent dissolution rates in Earth system models to minimise computational costs.
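The sketch below illustrates the observationally constrained Monte Carlo strategy with a trivial stand-in for the Bern3D model; the parameter ranges, skill threshold, and toy metric are assumptions for illustration only.

```python
# Sketch of an observationally constrained Monte Carlo ensemble: sample
# parameters, run a (here trivial) model, score each member against an
# observation, and summarise the skilful members by a median and 68%
# interval. The stand-in model and all numbers are assumptions.
import numpy as np

rng = np.random.default_rng(42)
obs, obs_err = 1.0, 0.15          # synthetic "observed" TA*-like metric

def model(export, diss_exp):
    """Stand-in mapping parameters to the observable metric."""
    return export * (1.0 + 0.3 * diss_exp)

# 1000-member ensemble drawn from broad priors
export = rng.uniform(0.4, 1.8, 1000)     # CaCO3 export, Gt C / yr
diss_exp = rng.uniform(0.0, 4.0, 1000)   # dissolution saturation exponent

score = (model(export, diss_exp) - obs) ** 2 / obs_err ** 2
keep = score < 1.0                        # members consistent with obs

lo, med, hi = np.percentile(export[keep], [16, 50, 84])
print(f"constrained export: {med:.2f} ({lo:.2f}-{hi:.2f}) Gt C/yr")
```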
NASA Astrophysics Data System (ADS)
Peylin, P. P.; Bacour, C.; MacBean, N.; Maignan, F.; Bastrikov, V.; Chevallier, F.
2017-12-01
Predicting the fate of carbon stocks and their sensitivity to climate change and land use/management strongly relies on our ability to accurately model net and gross carbon fluxes. However, simulated carbon and water fluxes remain subject to large uncertainties, partly because of unknown or poorly calibrated parameters. Over the past ten years, the carbon cycle data assimilation system at the Laboratoire des Sciences du Climat et de l'Environnement has investigated the benefit of assimilating multiple carbon cycle data streams into the ORCHIDEE LSM, the land surface component of the Institut Pierre Simon Laplace Earth System Model. These datasets have included FLUXNET eddy covariance data (net CO2 flux and latent heat flux) to constrain hourly to seasonal time-scale carbon cycle processes, remote sensing of the vegetation activity (MODIS NDVI) to constrain the leaf phenology, biomass data to constrain "slow" (yearly to decadal) processes of carbon allocation, and atmospheric CO2 concentrations to provide overall large scale constraints on the land carbon sink. Furthermore, we have investigated technical issues related to multiple data stream assimilation and choice of optimization algorithm. This has provided a wide-ranging perspective on the challenges we face in constraining model parameters and thus better quantifying, and reducing, model uncertainty in projections of the future global carbon sink. We review our past studies in terms of the impact of the optimization on key characteristics of the carbon cycle, e.g. the partition of the northern latitudes vs tropical land carbon sink, and compare to the classic atmospheric flux inversion approach. Throughout, we discuss our work in context of the abovementioned challenges, and propose solutions for the community going forward, including the potential of new observations such as atmospheric COS concentrations and satellite-derived Solar Induced Fluorescence to constrain the gross carbon fluxes of the ORCHIDEE model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gupta, Anuradha; Arun, K. G.; Sathyaprakash, B. S., E-mail: axg645@psu.edu, E-mail: kgarun@cmi.ac.in, E-mail: bss25@psu.edu
We show that the inferred merger rate and chirp masses of binary black holes (BBHs) detected by advanced LIGO (aLIGO) can be used to constrain the rate of double neutron star (DNS) and neutron star–black hole (NSBH) mergers in the universe. We explicitly demonstrate this by considering a set of publicly available population synthesis models of Dominik et al. and show that if all the BBH mergers, GW150914, LVT151012, GW151226, and GW170104, observed by aLIGO arise from isolated binary evolution, the predicted DNS merger rate may be constrained to be 2.3–471.0 Gpc⁻³ yr⁻¹ and that of NSBH mergers will be constrained to 0.2–48.5 Gpc⁻³ yr⁻¹. The DNS merger rates are not constrained much, but the NSBH rates are tightened by a factor of ∼4 as compared to their previous rates. Note that these constrained DNS and NSBH rates are extremely model-dependent and are compared to the unconstrained values 2.3–472.5 Gpc⁻³ yr⁻¹ and 0.2–218 Gpc⁻³ yr⁻¹, respectively, using the same models of Dominik et al. (2012a). These rate estimates may have implications for short Gamma Ray Burst progenitor models assuming they are powered (solely) by DNS or NSBH mergers. While these results are based on a set of open access population synthesis models, which may not necessarily be the representative ones, the proposed method is very general and can be applied to any number of models, thereby yielding more realistic constraints on the DNS and NSBH merger rates from the inferred BBH merger rate and chirp mass.
Quantifying How Observations Inform a Numerical Reanalysis of Hawaii
NASA Astrophysics Data System (ADS)
Powell, B. S.
2017-11-01
When assimilating observations into a model via state-estimation, it is possible to quantify how each observation changes the modeled estimate of a chosen oceanic metric. Using an existing 2 year reanalysis of Hawaii that includes more than 31 million observations from satellites, ships, SeaGliders, and autonomous floats, I assess which observations most improve the estimates of the transport and eddy kinetic energy. When the SeaGliders were in the water, they comprised less than 2.5% of the data, but accounted for 23% of the transport adjustment. Because the model physics constrains advanced state-estimation, the prescribed covariances are propagated in time to identify observation-model covariance. I find that observations that constrain the isopycnal tilt across the transport section provide the greatest impact in the analysis. In the case of eddy kinetic energy, observations that constrain the surface-driven upper ocean have more impact. This information can help to identify optimal sampling strategies to improve both state-estimates and forecasts.
Constraining dark sector perturbations I: cosmic shear and CMB lensing
NASA Astrophysics Data System (ADS)
Battye, Richard A.; Moss, Adam; Pearson, Jonathan A.
2015-04-01
We present current and future constraints on equations of state for dark sector perturbations. The equations of state considered are those corresponding to a generalized scalar field model and time-diffeomorphism-invariant ℒ(g) theories that are equivalent to models of a relativistic elastic medium and also Lorentz-violating massive gravity. We develop a theoretical understanding of the observable impact of these models. In order to constrain these models we use CMB temperature data from Planck, BAO measurements, CMB lensing data from Planck and the South Pole Telescope, and weak galaxy lensing data from CFHTLenS. We find non-trivial exclusions on the range of parameters, although the data remain compatible with w = -1. We gauge how future experiments will help to constrain the parameters. This is done via a likelihood analysis for CMB experiments such as CoRE and PRISM, and tomographic galaxy weak lensing surveys, focusing on the potential discriminatory power of Euclid on mildly non-linear scales.
NASA Technical Reports Server (NTRS)
Carpenter, J. R.; Markley, F. L.; Alfriend, K. T.; Wright, C.; Arcido, J.
2011-01-01
Sequential probability ratio tests explicitly allow decision makers to incorporate false alarm and missed detection risks, and are potentially less sensitive to modeling errors than a procedure that relies solely on a probability of collision threshold. Recent work on constrained Kalman filtering has suggested an approach to formulating such a test for collision avoidance maneuver decisions: a filter bank with two norm-inequality-constrained epoch-state extended Kalman filters. One filter models the null hypothesis that the miss distance is inside the combined hard body radius at the predicted time of closest approach, and one filter models the alternative hypothesis. The epoch-state filter developed for this method explicitly accounts for any process noise present in the system. The method appears to work well using a realistic example based on an upcoming highly-elliptical orbit formation flying mission.
Constraint-Based Local Search for Constrained Optimum Paths Problems
NASA Astrophysics Data System (ADS)
Pham, Quang Dung; Deville, Yves; van Hentenryck, Pascal
Constrained Optimum Path (COP) problems arise in many real-life applications and are ubiquitous in communication networks. They have been traditionally approached by dedicated algorithms, which are often hard to extend with side constraints and to apply widely. This paper proposes a constraint-based local search (CBLS) framework for COP applications, bringing the compositionality, reuse, and extensibility at the core of CBLS and CP systems. The modeling contribution is the ability to express compositional models for various COP applications at a high level of abstraction, while cleanly separating the model and the search procedure. The main technical contribution is a connected neighborhood based on rooted spanning trees to find high-quality solutions to COP problems. The framework, implemented in COMET, is applied to Resource Constrained Shortest Path (RCSP) problems (with and without side constraints) and to the edge-disjoint paths problem (EDP). Computational results show the potential significance of the approach.
Two-stage cluster sampling reduces the cost of collecting accuracy assessment reference data by constraining sample elements to fall within a limited number of geographic domains (clusters). However, because classification error is typically positively spatially correlated, withi...
NASA Astrophysics Data System (ADS)
Baisden, W. T.
2011-12-01
Time-series radiocarbon measurements have substantial ability to constrain the size and residence time of the soil C pools commonly represented in ecosystem models. Radiocarbon remains unique in the ability to constrain the large stabilized C pool with decadal residence times. Radiocarbon also contributes usefully to constraining the size and turnover rate of the passive pool, but typically struggles to constrain pools with residence times less than a few years. Overall, the number of pools and associated turnover rates that can be constrained depends upon the number of time-series samples available, the appropriateness of chemical or physical fractions to isolate unequivocal pools, and the utility of additional C flux data to provide additional constraints. In New Zealand pasture soils, we demonstrate the ability to constrain decadal turnover times to within a few years for the stabilized pool and to reasonably constrain the passive fraction. Good constraint is obtained with two time-series samples spaced 10 or more years apart after 1970. Three or more time-series samples further improve the level of constraint. Work within this context shows that a two-pool model does explain soil radiocarbon data for the most detailed profiles available (11 time-series samples), and identifies clear and consistent differences in rates of C turnover and passive fraction in Andisols vs non-Andisols. Furthermore, samples from multiple horizons can commonly be combined, yielding consistent residence times and passive fraction estimates that are stable with, or increase with, depth at different sites. Radiocarbon generally fails to quantify rapid C turnover, however. Given that the strength of radiocarbon is estimating the size and turnover of the stabilized (decadal) and passive (millennial) pools, the magnitude of fast-cycling pool(s) can be estimated by subtracting the radiocarbon-based estimates of turnover within stabilized and passive pools from total estimates of NPP. In grazing land, these estimates can be derived primarily from measured aboveground NPP and calculated belowground NPP. Results suggest that only 19-36% of heterotrophic soil respiration is derived from the soil C with rapid turnover times. A final logical step in synthesis is the analysis of temporal variation in NPP, primarily due to climate, as a driver of changes in plant inputs and the resulting dynamic changes in rapid and decadal soil C pools. In sites with good time-series samples from 1959-1975, we examine the apparent impacts of measured or modelled (Biome-BGC) NPP on soil Δ14C. Ultimately, these approaches have the ability to empirically constrain, and provide limited verification of, the soil C cycle as commonly depicted in ecosystem biogeochemistry models.
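A minimal sketch of why time-series radiocarbon separates decadal from passive pools: each first-order pool tracks the atmospheric bomb-14C curve with a lag set by its turnover rate. The synthetic atmospheric curve and turnover rates below are illustrative assumptions, not the New Zealand data.

```python
# Sketch: fraction-modern trajectories of first-order soil C pools
# forced by a crude synthetic stand-in for the atmospheric bomb curve.
# Pools with decadal turnover respond strongly to the bomb spike, which
# is what well-spaced time-series samples exploit.
import numpy as np

years = np.arange(1950, 2011)
# Toy bomb curve: rise to ~1965, then quasi-exponential decline
atm = 1.0 + np.where(years < 1965,
                     0.8 * (years - 1950) / 15.0,
                     0.8 * np.exp(-(years - 1965) / 16.0))  # fraction modern

lam = 1.0 / 8267.0   # 14C decay constant, 1/yr

def pool_f14c(k):
    """Fraction modern of a pool with first-order turnover rate k (1/yr)."""
    F = np.empty_like(atm)
    F[0] = k / (k + lam)                       # pre-bomb steady state
    for t in range(1, len(atm)):
        F[t] = F[t-1] + k * (atm[t-1] - F[t-1]) - lam * F[t-1]
    return F

for k in (1.0, 0.1, 0.001):                    # fast, decadal, passive
    d14c = (pool_f14c(k) - 1.0) * 1000.0       # Delta-14C, per mil
    print(f"k={k}: D14C in 1975 = {d14c[years == 1975][0]:7.1f} permil")
```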
Phase-field model of domain structures in ferroelectric thin films
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Y. L.; Hu, S. Y.; Liu, Z. K.
A phase-field model for predicting the coherent microstructure evolution in constrained thin films is developed. It employs an analytical elastic solution derived for a constrained film with arbitrary eigenstrain distributions. The domain structure evolution during a cubic → tetragonal proper ferroelectric phase transition is studied. It is shown that the model is able to simultaneously predict the effects of substrate constraint and temperature on the volume fractions of domain variants, domain-wall orientations, domain shapes, and their temporal evolution.
Interpreting the cosmic far-infrared background anisotropies using a gas regulator model
NASA Astrophysics Data System (ADS)
Wu, Hao-Yi; Doré, Olivier; Teyssier, Romain; Serra, Paolo
2018-04-01
Cosmic far-infrared background (CFIRB) is a powerful probe of the history of star formation rate (SFR) and the connection between baryons and dark matter across cosmic time. In this work, we explore to which extent the CFIRB anisotropies can be reproduced by a simple physical framework for galaxy evolution, the gas regulator (bathtub) model. This model is based on continuity equations for gas, stars, and metals, taking into account cosmic gas accretion, star formation, and gas ejection. We model the large-scale galaxy bias and small-scale shot noise self-consistently, and we constrain our model using the CFIRB power spectra measured by Planck. Because of the simplicity of the physical model, the goodness of fit is limited. We compare our model predictions with the observed correlation between CFIRB and gravitational lensing, bolometric infrared luminosity functions, and submillimetre source counts. The strong clustering of CFIRB indicates a large galaxy bias, which corresponds to haloes of mass 10^12.5 M⊙ at z = 2, higher than the mass associated with the peak of the star formation efficiency. We also find that the far-infrared luminosities of haloes above 10^12 M⊙ are higher than the expectation from the SFR observed in ultraviolet and optical surveys.
NASA Astrophysics Data System (ADS)
Williams, C. R.
2012-12-01
The NASA Global Precipitation Mission (GPM) raindrop size distribution (DSD) Working Group is composed of NASA PMM Science Team Members and is charged to "investigate the correlations between DSD parameters using Ground Validation (GV) data sets that support, or guide, the assumptions used in satellite retrieval algorithms." Correlations between DSD parameters can be used to constrain the unknowns and reduce the degrees-of-freedom in under-constrained satellite algorithms. Over the past two years, the GPM DSD Working Group has analyzed GV data and has found correlations between the mass-weighted mean raindrop diameter (Dm) and the mass distribution standard deviation (Sm) that follow a power-law relationship. This Dm-Sm power-law relationship appears to be robust and has been observed in surface disdrometer and vertically pointing radar observations. One benefit of a Dm-Sm power-law relationship is that a three-parameter DSD can be modeled with just two parameters: Dm and Nw, which determines the DSD amplitude. In order to incorporate observed DSD correlations into satellite algorithms, the GPM DSD Working Group is developing scattering and integral tables that can be used by satellite algorithms. Scattering tables describe the interaction of electromagnetic waves with individual particles to generate cross sections of backscattering, extinction, and scattering. Scattering tables are independent of the distribution of particles. Integral tables combine scattering table outputs with DSD parameters and DSD correlations to generate integrated normalized reflectivity, attenuation, scattering, emission, and asymmetry coefficients. Integral tables contain both frequency-dependent scattering properties and cloud microphysics. The GPM DSD Working Group has developed scattering tables for raindrops at both Dual-frequency Precipitation Radar (DPR) frequencies and at all GMI radiometer frequencies less than 100 GHz. Scattering tables include Mie and T-matrix scattering with H- and V-polarization at the instrument view angles of nadir to 17 degrees (for DPR) and 48 & 53 degrees off nadir (for GMI). The GPM DSD Working Group is generating integral tables with GV-observed DSD correlations and is performing sensitivity and verification tests. One advantage of keeping scattering tables separate from integral tables is that research can progress on the electromagnetic scattering of particles independently of cloud microphysics research. Another advantage of keeping the tables separate is that multiple scattering tables will be needed for frozen precipitation. Scattering tables are being developed for individual frozen particles based on habit, density and operating frequency. A third advantage of keeping scattering and integral tables separate is that this framework provides an opportunity to communicate GV findings about DSD correlations into integral tables, and thus, into satellite algorithms.
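The sketch below illustrates the kind of Dm-Sm power-law relationship described above, computing mass-weighted moments of gamma drop size distributions and fitting Sm = a·Dm^b; the shape-parameter and Dm ranges are illustrative, not the Working Group's fitted values.

```python
# Sketch: mass-weighted mean diameter (Dm) and mass-spectrum standard
# deviation (Sm) for gamma DSDs, plus a power-law fit Sm = a * Dm^b.
# Parameter ranges are assumptions chosen purely for illustration.
import numpy as np

rng = np.random.default_rng(1)
D = np.linspace(0.01, 8.0, 2000)     # drop diameter, mm
dD = D[1] - D[0]

def moments(mu, Dm0):
    """Dm and Sm of a gamma DSD with shape mu (amplitude cancels out)."""
    n = (D / Dm0) ** mu * np.exp(-(4.0 + mu) * D / Dm0)
    m3 = np.sum(D**3 * n) * dD                 # mass-weighted moments
    dm = np.sum(D**4 * n) * dD / m3
    sm = np.sqrt(np.sum((D - dm) ** 2 * D**3 * n) * dD / m3)
    return dm, sm

mus = rng.uniform(-1.0, 6.0, 300)
dms = rng.uniform(0.5, 3.0, 300)
pairs = np.array([moments(mu, dm0) for mu, dm0 in zip(mus, dms)])

# Fit log(Sm) = log(a) + b*log(Dm), i.e. a power law in linear space
b, loga = np.polyfit(np.log(pairs[:, 0]), np.log(pairs[:, 1]), 1)
print(f"Sm ~ {np.exp(loga):.3f} * Dm^{b:.2f}")
```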
Flexible Energy Scheduling Tool for Integrating Variable Generation (FESTIV)
Includes unit commitment, security-constrained economic dispatch, and automatic generation control sub-models, through which different resolutions and operating strategies can be explored. FESTIV produces not only economic metrics but also …
Spin correlations and new physics in τ -lepton decays at the LHC
Hayreter, Alper; Valencia, German
2015-07-31
We use spin correlations to constrain anomalous τ-lepton couplings at the LHC, including its anomalous magnetic moment, electric dipole moment and weak dipole moments. Single spin correlations are ideal to probe interference terms between the SM and new dipole-type couplings as they are not suppressed by the τ-lepton mass. Double spin asymmetries give rise to T-odd correlations useful to probe CP violation purely within the new physics amplitudes, as their appearance from interference with the SM is suppressed by m_τ. We compare our constraints to those obtained earlier on the basis of deviations from the Drell-Yan cross-section.
NASA Technical Reports Server (NTRS)
Liang, Z.; Fixsen, D. J.; Gold, B.
2012-01-01
We show that a one-component variable-emissivity-spectral-index model (the free-α model) provides more physically motivated estimates of dust temperature at the Galactic polar caps than one- or two-component fixed-emissivity-spectral-index models (fixed-α models) for interstellar dust thermal emission at far-infrared and millimeter wavelengths. For the comparison we have fit all-sky one-component dust models with fixed or variable emissivity spectral index to a new and improved version of the 210-channel dust spectra from the COBE-FIRAS, the 100-240 micrometer maps from the COBE-DIRBE and the 94 GHz dust map from the WMAP. The best model, the free-α model, is well constrained by data at 60-3000 GHz over 86 per cent of the total sky area. It predicts the dust temperature (T_dust) to be 13.7-22.7 (±1.3) K, the emissivity spectral index (α) to be 1.2-3.1 (±0.3) and the optical depth (τ) to range over 0.6-46 × 10⁻⁵ with a 23 per cent uncertainty. Using these estimates, we present all-sky evidence for an inverse correlation between the emissivity spectral index and dust temperature, which fits the relation α = 1/(δ + ω·T_dust) with δ = −0.510 ± 0.011 and ω = 0.059 ± 0.001. This best model will be useful to cosmic microwave background experiments for removing foreground dust contamination and it can serve as an all-sky extended-frequency reference for future higher resolution dust models.
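A minimal sketch of fitting the reported α-T_dust relation with a nonlinear least-squares routine; the synthetic data are generated from the quoted best-fit values purely to illustrate the procedure.

```python
# Sketch: fitting alpha = 1/(delta + omega*T) to (T, alpha) pairs with
# scipy. The synthetic data below are drawn from the quoted best-fit
# values (delta = -0.510, omega = 0.059) and an assumed noise level.
import numpy as np
from scipy.optimize import curve_fit

def alpha_model(T, delta, omega):
    return 1.0 / (delta + omega * T)

rng = np.random.default_rng(3)
T = rng.uniform(13.7, 22.7, 400)                       # dust temperature, K
alpha = alpha_model(T, -0.510, 0.059) + rng.normal(0, 0.05, T.size)

popt, pcov = curve_fit(alpha_model, T, alpha, p0=(-0.5, 0.06))
print("delta, omega =", popt, "+/-", np.sqrt(np.diag(pcov)))
```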
Kendrick, Katherine J.; Matti, Jonathan; Mahan, Shannon
2015-01-01
The fault history of the Mill Creek strand of the San Andreas fault (SAF) in the San Gorgonio Pass region, along with the reconstructed geomorphology surrounding this fault strand, reveals the important role of the left-lateral Pinto Mountain fault in the regional fault strand switching. The Mill Creek strand has 7.1–8.7 km total slip. Following this displacement, the Pinto Mountain fault offset the Mill Creek strand 1–1.25 km, as SAF slip transferred to the San Bernardino, Banning, and Garnet Hill strands. An alluvial complex within the Mission Creek watershed can be linked to palinspastic reconstruction of drainage segments to constrain the slip history of the Mill Creek strand. We investigated surface remnants through detailed geologic mapping, morphometric and stratigraphic analysis, geochronology, and pedogenic analysis. The degree of soil development constrains the duration of surface stability when correlated to other regional, independently dated pedons. This correlation indicates that the oldest surfaces are significantly older than 500 ka. Luminescence dates of 106 ka and 95 ka from (respectively) 5 and 4 m beneath a younger fan surface are consistent with age estimates based on soil-profile development. Offset of the Mill Creek strand by the Pinto Mountain fault suggests a short-term slip rate of ∼10–12.5 mm/yr for the Pinto Mountain fault, and a lower long-term slip rate. Uplift of the Yucaipa Ridge block during the period of Mill Creek strand activity is consistent with uplift estimates from thermochronologic modeling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naftchi-Ardebili, Kasra; Hau, Nathania W.; Mazziotti, David A.
2011-11-15
Variational minimization of the ground-state energy as a function of the two-electron reduced density matrix (2-RDM), constrained by necessary N-representability conditions, provides a polynomial-scaling approach to studying strongly correlated molecules without computing the many-electron wave function. Here we introduce a route to enhancing necessary conditions for N representability through rank restriction of the 2-RDM. Rather than adding computationally more expensive N-representability conditions, we directly enhance the accuracy of two-particle (2-positivity) conditions through rank restriction, which removes degrees of freedom in the 2-RDM that are not sufficiently constrained. We select the rank of the particle-hole 2-RDM by deriving the ranks associated with model wave functions, including both mean-field and antisymmetrized geminal power (AGP) wave functions. Because the 2-positivity conditions are exact for quantum systems with AGP ground states, the rank of the particle-hole 2-RDM from the AGP ansatz provides a minimum for its value in variational 2-RDM calculations of general quantum systems. To implement the rank-restricted conditions, we extend a first-order algorithm for large-scale semidefinite programming. The rank-restricted conditions significantly improve the accuracy of the energies; for example, the percentages of correlation energies recovered for HF, CO, and N₂ improve from 115.2%, 121.7%, and 121.5% without rank restriction to 97.8%, 101.1%, and 100.0% with rank restriction. Similar results are found at both equilibrium and nonequilibrium geometries. While more accurate, the rank-restricted N-representability conditions are less expensive computationally than the full-rank conditions.
Investigating an SPI and Measuring Baseline FUV Variability in the GJ 436 Hot-Neptune System
NASA Astrophysics Data System (ADS)
Loyd, R. O.
2017-08-01
Closely-orbiting, massive planets can measurably affect the activity of their host star through tides, magnetic disturbances, or even mass transfer. Observations of these star-planet interactions (SPIs) provide a window into stellar and planetary physics that may eventually lead to constraints on planetary magnetic fields. Recently, the MUSCLES Treasury Survey of 11 exoplanet host stars revealed correlations providing the first-ever evidence of SPIs in M dwarf systems. This evidence additionally suggests that N V 1238, 1242 Angstrom emission best traces SPIs, a feature that merits further investigation. To this end, we propose an experiment using the M dwarf + hot Neptune system GJ 436 that will also benefit upcoming transit observations. GJ 436 is ideal for an SPI experiment because (1) escaped gas from its known rapidly evaporating hot Neptune could be funneled onto the star and (2) it displays a tentative SPI signal in existing, incomplete N V observations. The proposed experiment will complete these N V observations to constrain a model of modulation in N V flux resulting from a stellar hot spot induced by the planet. The results will provide evidence for or against hot spot SPIs producing the correlations observed in the MUSCLES Survey. Furthermore, the acquired data will establish a broader FUV baseline to constrain day-timescale variability and facular emission in FUV lines, needed for the interpretation of upcoming transit observations of GJ 436b. For this reason, we waive our proprietary rights to the data. Establishing GJ 436's baseline FUV variability and testing the hot spot hypothesis are only possible through the FUV capabilities of HST.
Constraining the Active Galactic Nucleus Contribution in a Multiwavelength Study of Seyfert Galaxies
NASA Technical Reports Server (NTRS)
Melendez, M.; Kraemer, S.B.; Schmitt, H.R.; Crenshaw, D.M.; Deo, R.P.; Mushotzky, R.F.; Bruhweiler, F.C.
2008-01-01
We have studied the relationship between the high- and low-ionization [O IV] λ25.89 μm, [Ne III] λ15.56 μm, and [Ne II] λ12.81 μm emission lines with the aim of constraining the active galactic nuclei (AGNs) and star formation contributions for a sample of 103 Seyfert galaxies. We use the [O IV] and [Ne II] emission as tracers for the AGN power and star formation to investigate the ionization state of the emission-line gas. We find that Seyfert 2 galaxies have, on average, lower [O IV]/[Ne II] ratios than Seyfert 1 galaxies. This result suggests two possible scenarios: (1) Seyfert 2 galaxies have intrinsically weaker AGNs, or (2) Seyfert 2 galaxies have relatively higher star formation rates than Seyfert 1 galaxies. We estimate the fraction of [Ne II] directly associated with the AGNs and find that Seyfert 2 galaxies have a larger contribution from star formation, by a factor of ≈1.5 on average, than what is found in Seyfert 1 galaxies. Using the stellar component of [Ne II] as a tracer of the current star formation, we found similar star formation rates in Seyfert 1 and Seyfert 2 galaxies. We examined the mid- and far-infrared continua and found that [Ne II] is well correlated with the continuum luminosity at 60 μm and that both [Ne III] and [O IV] are better correlated with the 25 μm luminosities than with the continuum at longer wavelengths, suggesting that the mid-infrared continuum luminosity is dominated by the AGN, while the far-infrared luminosity is dominated by star formation. Overall, these results test the unified model of AGNs and suggest that the differences between Seyfert galaxies cannot be solely due to viewing angle dependence.
Path analysis of the genetic integration of traits in the sand cricket: a novel use of BLUPs.
Roff, D A; Fairbairn, D J
2011-09-01
This study combines path analysis with quantitative genetics to analyse a key life history trade-off in the cricket, Gryllus firmus. We develop a path model connecting five traits associated with the trade-off between flight capability and reproduction and test this model using phenotypic data and estimates of breeding values (best linear unbiased predictors) from a half-sibling experiment. Strong support by both types of data validates our causal model and indicates concordance between the phenotypic and genetic expression of the trade-off. Comparisons of the trade-off between sexes and wing morphs reveal that these discrete phenotypes are not genetically independent and that the evolutionary trajectories of the two wing morphs are more tightly constrained to covary than those of the two sexes. Our results illustrate the benefits of combining a quantitative genetic analysis, which examines statistical correlations between traits, with a path model that focuses upon the causal components of variation.
Yin, Yihang; Liu, Fengzheng; Zhou, Xiang; Li, Quanzhong
2015-08-07
Wireless sensor networks (WSNs) have been widely used to monitor the environment, and sensors in WSNs are usually power-constrained. Because inter-node communication consumes most of the power, efficient data compression schemes are needed to reduce data transmission and prolong the lifetime of WSNs. In this paper, we propose an efficient data compression model to aggregate data, which is based on spatial clustering and principal component analysis (PCA). First, sensors with a strong temporal-spatial correlation are grouped into one cluster for further processing with a novel similarity measure metric. Next, sensor data in one cluster are aggregated in the cluster head sensor node, and an efficient adaptive strategy is proposed for the selection of the cluster head to conserve energy. Finally, the proposed model applies principal component analysis with an error bound guarantee to compress the data and retain the definite variance at the same time. Computer simulations show that the proposed model can greatly reduce communication and obtain a lower mean square error than other PCA-based algorithms.
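A minimal sketch of the PCA compression stage follows, with a retained-variance criterion standing in for the paper's error-bound guarantee; the clustering and cluster-head selection steps are omitted and the data are synthetic.

```python
# Sketch: a cluster head keeps only as many principal components as
# needed to meet a retained-variance target (an assumed stand-in for
# the paper's error bound). Data are synthetic correlated sensors.
import numpy as np

rng = np.random.default_rng(7)
# 10 correlated sensors x 500 epochs (shared signal + local noise)
base = np.cumsum(rng.normal(size=500))
X = base[None, :] * rng.uniform(0.5, 1.5, (10, 1)) + rng.normal(0, 0.3, (10, 500))

mu = X.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)

var = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(var, 0.99)) + 1        # retain 99% variance
Z = U[:, :k].T @ (X - mu)                      # compressed coefficients

X_hat = mu + U[:, :k] @ Z                      # reconstruction
mse = np.mean((X - X_hat) ** 2)
print(f"components kept: {k}, MSE: {mse:.4f}")
```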
Statistics of Dark Matter Halos from Gravitational Lensing.
Jain; Van Waerbeke L
2000-02-10
We present a new approach to measure the mass function of dark matter halos and to discriminate models with differing values of Ω through weak gravitational lensing. We measure the distribution of peaks from simulated lensing surveys and show that the lensing signal due to dark matter halos can be detected for a wide range of peak heights. Even when the signal-to-noise ratio is well below the limit for detection of individual halos, projected halo statistics can be constrained for halo masses spanning galactic to cluster halos. The use of peak statistics relies on an analytical model of the noise due to the intrinsic ellipticities of source galaxies. The noise model has been shown to accurately describe simulated data for a variety of input ellipticity distributions. We show that the measured peak distribution has distinct signatures of gravitational lensing, and its non-Gaussian shape can be used to distinguish models with different values of Ω. The use of peak statistics is complementary to the measurement of field statistics, such as the ellipticity correlation function, and is possibly not susceptible to the same systematic errors.
Testing gravity using large-scale redshift-space distortions
NASA Astrophysics Data System (ADS)
Raccanelli, Alvise; Bertacca, Daniele; Pietrobon, Davide; Schmidt, Fabian; Samushia, Lado; Bartolo, Nicola; Doré, Olivier; Matarrese, Sabino; Percival, Will J.
2013-11-01
We use luminous red galaxies from the Sloan Digital Sky Survey (SDSS) II to test the cosmological structure growth in two alternatives to the standard Λ cold dark matter (ΛCDM)+general relativity (GR) cosmological model. We compare observed three-dimensional clustering in SDSS Data Release 7 (DR7) with theoretical predictions for the standard vanilla ΛCDM+GR model, unified dark matter (UDM) cosmologies and the normal branch Dvali-Gabadadze-Porrati (nDGP) model. In computing the expected correlations in UDM cosmologies, we derive a parametrized formula for the growth factor in these models. For our analysis we apply the methodology tested in Raccanelli et al. and use the measurements of Samushia et al. that account for survey geometry, non-linear and wide-angle effects and the distribution of pair orientation. We show that the estimate of the growth rate is potentially degenerate with wide-angle effects, meaning that extremely accurate measurements of the growth rate on large scales will need to take such effects into account. We use measurements of the zeroth and second-order moments of the correlation function from SDSS DR7 data and the Large Suite of Dark Matter Simulations (LasDamas), and perform a likelihood analysis to constrain the parameters of the models. Using information on the clustering up to r_max = 120 h⁻¹ Mpc, and after marginalizing over the bias, we find, for UDM models, a speed of sound c∞ ≤ 6.1 × 10⁻⁴ and, for the nDGP model, a cross-over scale r_c ≥ 340 Mpc, at 95 per cent confidence level.
Epoch of reionization 21 cm forecasting from MCMC-constrained semi-numerical models
NASA Astrophysics Data System (ADS)
Hassan, Sultan; Davé, Romeel; Finlator, Kristian; Santos, Mario G.
2017-06-01
The recent low value of Planck Collaboration XLVII integrated optical depth to Thomson scattering suggests that the reionization occurred fairly suddenly, disfavouring extended reionization scenarios. This will have a significant impact on the 21 cm power spectrum. Using a semi-numerical framework, we improve our model from instantaneous to include time-integrated ionization and recombination effects, and find that this leads to more sudden reionization. It also yields larger H II bubbles that lead to an order of magnitude more 21 cm power on large scales, while suppressing the small-scale ionization power. Local fluctuations in the neutral hydrogen density play the dominant role in boosting the 21 cm power spectrum on large scales, while recombinations are subdominant. We use a Monte Carlo Markov chain approach to constrain our model to observations of the star formation rate functions at z = 6, 7, 8 from Bouwens et al., the Planck Collaboration XLVII optical depth measurements and the Becker & Bolton ionizing emissivity data at z ˜ 5. We then use this constrained model to perform 21 cm forecasting for Low Frequency Array, Hydrogen Epoch of Reionization Array and Square Kilometre Array in order to determine how well such data can characterize the sources driving reionization. We find that the Mock 21 cm power spectrum alone can somewhat constrain the halo mass dependence of ionizing sources, the photon escape fraction and ionizing amplitude, but combining the Mock 21 cm data with other current observations enables us to separately constrain all these parameters. Our framework illustrates how the future 21 cm data can play a key role in understanding the sources and topology of reionization as observations improve.
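The sketch below illustrates the MCMC-constraint step with a plain Metropolis sampler and toy Gaussian likelihoods standing in for the SFR-function, optical-depth, and emissivity data streams; the stand-in model, parameters, and numbers are assumptions for illustration only.

```python
# Sketch: Metropolis MCMC combining several data streams to constrain
# two source parameters (an escape fraction and an ionizing amplitude).
# The "model" is a trivial stand-in for the semi-numerical simulator,
# and the observations/errors are toy values, not the cited datasets.
import numpy as np

rng = np.random.default_rng(5)

def model_predictions(f_esc, amp):
    """Stand-in returning (SFR-like, tau-like, emissivity-like) values."""
    return np.array([amp, 0.05 + 0.02 * f_esc * amp, f_esc * amp])

obs = np.array([1.0, 0.058, 0.2])       # toy "observations"
err = np.array([0.1, 0.012, 0.05])

def log_post(theta):
    f_esc, amp = theta
    if not (0 < f_esc < 1 and 0 < amp < 5):
        return -np.inf                   # flat priors with hard bounds
    r = (model_predictions(f_esc, amp) - obs) / err
    return -0.5 * np.sum(r**2)

theta = np.array([0.2, 1.0])
chain = []
for _ in range(20000):
    prop = theta + rng.normal(0, [0.05, 0.05])
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta.copy())

# 16/50/84 percentiles after discarding burn-in
print(np.percentile(np.array(chain)[5000:], [16, 50, 84], axis=0))
```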
Ambient Seismic Source Inversion in a Heterogeneous Earth: Theory and Application to the Earth's Hum
NASA Astrophysics Data System (ADS)
Ermert, Laura; Sager, Korbinian; Afanasiev, Michael; Boehm, Christian; Fichtner, Andreas
2017-11-01
The sources of ambient seismic noise are extensively studied both to better understand their influence on ambient noise tomography and related techniques, and to infer constraints on their excitation mechanisms. Here we develop a gradient-based inversion method to infer the space-dependent and time-varying source power spectral density of the Earth's hum from cross correlations of continuous seismic data. The precomputation of wavefields using spectral elements allows us to account for both finite-frequency sensitivity and for three-dimensional Earth structure. Although similar methods have been proposed previously, they have not yet been applied to data to the best of our knowledge. We apply this method to image the seasonally varying sources of Earth's hum during North and South Hemisphere winter. The resulting models suggest that hum sources are localized, persistent features that occur at Pacific coasts or shelves and in the North Atlantic during North Hemisphere winter, as well as South Pacific coasts and several distinct locations in the Southern Ocean in South Hemisphere winter. The contribution of pelagic sources from the central North Pacific cannot be constrained. Besides improving the accuracy of noise source locations through the incorporation of finite-frequency effects and 3-D Earth structure, this method may be used in future cross-correlation waveform inversion studies to provide initial source models and source model updates.
Fracture toughness of the nickel-alumina laminates by digital image-correlation technique
NASA Astrophysics Data System (ADS)
Mekky, Waleed
The purpose of this work is to implement the digital image correlation (DIC) technique in composite laminate fracture testing. This involves measuring the crack opening displacement (COD) during stable crack propagation and characterizing the strain development in a constrained nickel layer under applied loading. The major challenge in measuring the COD of alternating metal/ceramic layers is the elastic-mismatch effect, which leads to an oscillating COD measurement. Smoothing the result with built-in modules of commercial software leads to a loss of data accuracy, whereas a least-squares fitting routine applied to the data output gave acceptable COD profiles. The behavior of a single Ni ligament sandwiched between two Al2O3 layers was determined for two Ni thicknesses (0.125 and 0.25 mm). Modeling the behavior via a modified Bridgman approach for rectangular-cross-section samples proved limited, as different mechanisms are operating; the behavior is captured to a point, but the model underestimates the experimental results. The fracture-resistance curves for nickel/alumina laminates were developed experimentally and modeled via LEFM using the weight function approach, utilizing single-ligament and COD data. The crack-tip toughness was found to increase with Ni layer thickness due to crack-tip shielding. The crack-initiation toughness was estimated from the stress field and the crack opening displacement of the main crack.
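The least-squares fitting step described above can be illustrated with a short sketch: a low-order fit suppresses the elastic-mismatch oscillation in a simulated COD profile without the over-smoothing of a generic filter. The data below are synthetic placeholders, not the study's measurements.

```python
import numpy as np

# Synthetic COD profile behind the crack tip, contaminated by the
# oscillation that elastic mismatch introduces into the DIC output.
x = np.linspace(0.0, 2.0, 200)                 # distance behind tip, mm
cod_true = 0.01 * np.sqrt(x)                   # K-field-like opening shape
cod_meas = cod_true + 0.001 * np.sin(40 * x)   # oscillatory artifact

# Least-squares fit of a low-order polynomial in sqrt(x), which matches
# the expected near-tip opening form better than a polynomial in x.
A = np.vander(np.sqrt(x), 4)                   # columns: s^3, s^2, s, 1
coef, *_ = np.linalg.lstsq(A, cod_meas, rcond=None)
cod_fit = A @ coef                             # smoothed COD profile
```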
Romano, F.; Trasatti, E.; Lorito, S.; Piromallo, C.; Piatanesi, A.; Ito, Y.; Zhao, D.; Hirata, K.; Lanucara, P.; Cocco, M.
2014-01-01
The 2011 Tohoku earthquake (Mw = 9.1) highlighted previously unobserved features for megathrust events, such as the large slip in a relatively limited area and the shallow rupture propagation. We use a Finite Element Model (FEM), taking into account the 3D geometrical and structural complexities up to the trench zone, and perform a joint inversion of tsunami and geodetic data to retrieve the earthquake slip distribution. We obtain a close spatial correlation between the main deep slip patch and the local seismic velocity anomalies, and large shallow slip extending northward, consistent with seismically observed low-frequency radiation. These observations suggest that friction controlled the rupture, initially confining the deeper rupture and then driving its propagation up to the trench, where it spread laterally. These findings are relevant to earthquake and tsunami hazard assessment because they may help to detect regions likely prone to rupture along the megathrust, and to constrain the probability of high slip near the trench. Our estimate of ~40 m of slip around the JFAST (Japan Trench Fast Drilling Project) drilling zone helps constrain the dynamic shear stress and friction coefficient of the fault obtained from temperature measurements to ~0.68 MPa and ~0.10, respectively.
NASA Astrophysics Data System (ADS)
Sun, Shuai; Hou, Guiting; Zheng, Chunfang
2017-11-01
Stress variation associated with folding is one of the controlling factors in the development of tectonic fractures; however, little attention has been paid to the influence of neutral surfaces during folding on fracture distribution in a fault-related fold. In this study, we take the Cretaceous Bashijiqike Formation in the Kuqa Depression as an example and analyze the distribution of tectonic fractures in fault-related folds by core observation and logging data analysis. Three fracture zones are identified in a fault-related fold: a tensile zone, a transition zone and a compressive zone, which may be bounded by the two neutral surfaces of the fold. Well correlation reveals that the tensile zone and the transition zone reach their maximum thickness at the fold hinge and thin toward the fold limbs. A 2D viscoelastic stress field model of a fault-related fold was constructed to further investigate the mechanism of fracturing. Statistical and numerical analyses reveal that the tensile zone and the transition zone thicken with decreasing interlimb angle. Stress variation associated with folding is the first-order control on the general pattern of fracture distribution, while faulting is a secondary control on the development of local fractures in a fault-related fold.
Reactivation of intrabasement structures during rifting: A case study from offshore southern Norway
NASA Astrophysics Data System (ADS)
Phillips, Thomas B.; Jackson, Christopher A.-L.; Bell, Rebecca E.; Duffy, Oliver B.; Fossen, Haakon
2016-10-01
Pre-existing structures within crystalline basement may exert a significant influence over the evolution of rifts. However, the exact manner in which these structures reactivate and thus their degree of influence over the overlying rift is poorly understood. Using borehole-constrained 2D and 3D seismic reflection data from offshore southern Norway we identify and constrain the three-dimensional geometry of a series of enigmatic intrabasement reflections. Through 1D waveform modelling and 3D mapping of these reflection packages, we correlate them to the onshore Caledonian thrust belt and Devonian shear zones. Based on the seismic-stratigraphic architecture of the post-basement succession, we identify several phases of reactivation of the intrabasement structures associated with multiple tectonic events. Reactivation preferentially occurs along relatively thick (c. 1 km), relatively steeply dipping (c. 30°) structures, with three main styles of interactions observed between them and overlying faults: i) faults exploiting intrabasement weaknesses represented by intra-shear zone mylonites; ii) faults that initiate within the hangingwall of the shear zones, inheriting their orientation and merging with said structure at depth; or iii) faults that initiate independently from and cross-cut intrabasement structures. We demonstrate that large-scale discrete shear zones act as a long-lived structural template for fault initiation during multiple phases of rifting.
NASA Astrophysics Data System (ADS)
Jesús Moral García, Francisco; Rebollo Castillo, Francisco Javier; Monteiro Santos, Fernando
2016-04-01
Maps of apparent electrical conductivity of the soil are commonly used in precision agriculture to indirectly characterize some important properties such as salinity, water content, and clay content. Traditionally, these studies rely on an empirical relationship between apparent electrical conductivity and properties measured in soil samples collected at a few locations in the experimental area and at a few selected depths. Recently, some authors have used not the apparent conductivity values but the soil bulk conductivity (in 2D or 3D) calculated from the measured apparent electrical conductivity through the application of an inversion method. All the published works used data collected with electromagnetic (EM) instruments. We present new software to invert the apparent electrical conductivity data collected with the VERIS 3100 and 3150 (or the more recent version with three pairs of electrodes) using the 1D spatially constrained inversion method (1D SCI). The software calculates the distribution of the bulk electrical conductivity in the survey area down to a depth of 1 m. The algorithm is applied to experimental data, and correlations with clay and water content have been established using soil samples collected at several boreholes. Keywords: digital soil mapping; inversion modelling; VERIS; soil apparent electrical conductivity.
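A minimal sketch of the kind of regularized 1D inversion involved is given below: recovering layer conductivities from apparent-conductivity readings through a linear sensitivity matrix. The depth-weight model and data are invented placeholders; the actual 1D SCI algorithm additionally couples neighbouring soundings through lateral constraints.

```python
import numpy as np

# Layered model: 10 layers down to 1 m depth.
n_layers = 10
depths = np.linspace(0.05, 0.95, n_layers)     # layer mid-depths, m

def depth_weights(z_peak, depths, width=0.3):
    """Placeholder sensitivity: each electrode pair averages the layer
    conductivities with a depth-dependent weight (weights sum to 1)."""
    w = np.exp(-((depths - z_peak) / width) ** 2)
    return w / w.sum()

J = np.vstack([depth_weights(z, depths) for z in (0.2, 0.5, 0.8)])

sigma_true = 10.0 + 30.0 * depths              # mS/m, increasing with depth
d_obs = J @ sigma_true                         # apparent conductivities

# Tikhonov-regularized least squares with a first-difference smoothness
# operator, solved in closed form.
D = np.diff(np.eye(n_layers), axis=0)          # roughness operator
lam = 1e-2
sigma_est = np.linalg.solve(J.T @ J + lam * D.T @ D, J.T @ d_obs)
```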
Nucleon effective E-mass in neutron-rich matter from the Migdal–Luttinger jump
Cai, Bao-Jun; Li, Bao-An
2016-03-25
The well-known Migdal-Luttinger theorem states that the jump of the single-nucleon momentum distribution at the Fermi surface is equal to the inverse of the nucleon effective E-mass. Recent experiments studying short-range correlations (SRC) in nuclei using electron-nucleus scattering at the Jefferson National Laboratory (JLAB), together with model calculations, have significantly constrained the Migdal-Luttinger jump at the saturation density of nuclear matter. We show that the corresponding nucleon effective E-mass is consequently constrained to M_0^{*,E}/M ≈ 2.22 ± 0.35 in symmetric nuclear matter (SNM), and that the E-mass of neutrons is smaller than that of protons in neutron-rich matter. Moreover, the average depletion of the nucleon Fermi sea increases (decreases) approximately linearly with the isospin asymmetry δ according to κ_{p/n} ≈ 0.21 ± 0.06 ± (0.19 ± 0.08)δ for protons (neutrons). These results will help improve our knowledge of the space-time non-locality of the single-nucleon potential in neutron-rich nucleonic matter, which is useful in both nuclear physics and astrophysics.
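Stated compactly (a direct transcription of the theorem as used above; Z_F denotes the discontinuity of the single-nucleon momentum distribution n(k) at the Fermi momentum k_F):

\[
Z_F \equiv n(k_F^-) - n(k_F^+) = \left(\frac{M_0^{*,E}}{M}\right)^{-1},
\]

so the constraint M_0^{*,E}/M ≈ 2.22 ± 0.35 in SNM corresponds to a jump of Z_F ≈ 0.45.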
NASA Astrophysics Data System (ADS)
Badhan, Mahmuda A.; Mandell, Avi M.; Hesman, Brigette; Nixon, Conor; Deming, Drake; Irwin, Patrick; Barstow, Joanna; Garland, Ryan
2015-11-01
Understanding the formation environments and evolution scenarios of planets in nearby planetary systems requires robust measures for constraining their atmospheric physical properties. Here we have utilized a combination of two different parameter retrieval approaches, Optimal Estimation and Markov Chain Monte Carlo, as part of the well-validated NEMESIS atmospheric retrieval code, to infer a range of temperature profiles and molecular abundances of H2O, CO2, CH4 and CO from available dayside thermal emission observations of several hot-Jupiter candidates. In order to keep the number of parameters low and hence retrieve more plausible profile shapes, we have used a parametrized form of the temperature profile based upon the analytic radiative-equilibrium derivation of Guillot (2010) (Line et al. 2012, 2014). We show retrieval results on published spectroscopic and photometric data from both the Hubble Space Telescope and Spitzer missions, and compare them with simulations for the upcoming JWST mission. In addition, since NEMESIS uses correlated-k distributions of absorption coefficients among atmospheric layers to compute these models, updates to spectroscopic databases can impact retrievals quite significantly for such high-temperature atmospheres. As high-temperature line databases are continually being improved, we also compare retrievals between older and newer databases.
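The parametrized profile mentioned above is compact enough to sketch. Below is the isotropically averaged analytic radiative-equilibrium form of Guillot (2010) as commonly adopted in retrieval codes; the default parameter values are purely illustrative, and the exact convention (e.g., the irradiation averaging) may differ from the one used in this work.

```python
import numpy as np

def guillot_T(P, kappa_IR=1e-2, gamma=0.5, g=10.0, T_int=200.0, T_eq=1500.0):
    """Analytic radiative-equilibrium T(P) of Guillot (2010), in the
    isotropically averaged form used in retrieval parametrizations.
    P in Pa, kappa_IR in m^2/kg, g in m/s^2; defaults are illustrative."""
    tau = kappa_IR * P / g                     # grey infrared optical depth
    sqrt3 = np.sqrt(3.0)
    T4 = (0.75 * T_int**4 * (2.0 / 3.0 + tau)
          + 0.75 * T_eq**4 * (2.0 / 3.0 + 1.0 / (gamma * sqrt3)
             + (gamma / sqrt3 - 1.0 / (gamma * sqrt3))
               * np.exp(-gamma * tau * sqrt3)))
    return T4**0.25

P = np.logspace(0, 7, 100)                     # 1 Pa to 100 bar
T = guillot_T(P)                               # temperature profile, K
```

Here gamma is the ratio of visible to infrared opacity, the knob that switches between thermally inverted and non-inverted profiles.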
Geomagnetic Reversals in Neoproterozoic Cap Carbonates and Time Constraints on Snowball Earth Events
NASA Astrophysics Data System (ADS)
Trindade, R. I.; Font, E.; Nedelec, A.
2008-05-01
The end of the Neoproterozoic is characterized by ubiquitous glacial deposition followed by the onset of extensive carbonate platforms, marking important changes in climate. The duration of these climatic oscillations is still poorly constrained, with estimates varying from hundreds to hundreds of thousands of years. Here we report a high-resolution magnetostratigraphic study of Neoproterozoic cap carbonates from the Amazon Craton. These rocks represent the first transgressive carbonate sequence after the glacial deposits and present the isotopic signatures and sedimentary structures that typify cap carbonates elsewhere in the world, such as negative δ13C values, tubes, aragonite-pseudomorph crystal fans, and pseudo-tepees (megaripples). Age constraints are given by shifts in 87Sr/86Sr ratios towards values greater than 0.7081 and by a Pb-Pb age of 627 ± 32 Ma. Two sections five kilometers apart were sampled at a 20 cm spacing (≈101 sites) and revealed five coherent reversals. Magnetization is carried by detrital hematite. These data were used to constrain both the paleogeographic position of the Amazon Craton by the end of the Neoproterozoic glaciations and the time of cap carbonate deposition (on the order of hundreds of thousands of years), with implications for geochemical models. Comparison with results from correlative successions in Africa, Oman and Australia will also be presented.
Jee, M. James; Tyson, J. Anthony; Hilbert, Stefan; ...
2016-06-15
Here, we present a tomographic cosmic shear study from the Deep Lens Survey (DLS), which, providing a limiting magnitude r_lim ≈ 27 (5σ), is designed as a precursor to the Large Synoptic Survey Telescope (LSST) survey with an emphasis on depth. Using five tomographic redshift bins, we study their auto- and cross-correlations to constrain cosmological parameters. We use a luminosity-dependent nonlinear model to account for the astrophysical systematics originating from intrinsic alignments of galaxy shapes. We find that the cosmological leverage of the DLS is among the highest of existing >10 deg² cosmic shear surveys. Combining the DLS tomography with the 9 yr results of the Wilkinson Microwave Anisotropy Probe (WMAP9) gives Ω_m = 0.293 (+0.012/−0.014), σ_8 = 0.833 (+0.011/−0.018), H_0 = 68.6 (+1.4/−1.2) km s⁻¹ Mpc⁻¹, and Ω_b = 0.0475 ± 0.0012 for ΛCDM, reducing the uncertainties of the WMAP9-only constraints by ~50%. When we do not assume flatness for ΛCDM, we obtain the curvature constraint Ω_k = −0.010 (+0.013/−0.015) from the DLS+WMAP9 combination, which is not well constrained when WMAP9 is used alone. The dark energy equation-of-state parameter w is tightly constrained when baryonic acoustic oscillation (BAO) data are added, yielding w = −1.02 (+0.10/−0.09) with the DLS+WMAP9+BAO joint probe. The addition of supernova constraints further tightens the parameter to w = −1.03 ± 0.03. Our joint constraints are fully consistent with the final Planck results and also with the predictions of a ΛCDM universe.
NASA Astrophysics Data System (ADS)
Rapa, Giulia; Groppo, Chiara; Rolfo, Franco; Petrelli, Maurizio; Mosca, Pietro; Perugini, Diego
2017-11-01
The pressure, temperature, and timing (P-T-t) conditions at which CO2 was produced during Himalayan prograde metamorphism have been constrained, focusing on the most abundant calc-silicate rock type in the Himalaya. Detailed petrological modeling of a clinopyroxene + scapolite + K-feldspar + plagioclase + quartz ± calcite calc-silicate rock allowed the identification and full characterization, for the first time, of different metamorphic reactions leading to the simultaneous growth of titanite and production of CO2. The results of thermometric determinations (Zr-in-Ttn thermometry) and U-Pb geochronological analyses suggest that, in the studied lithology, most titanite grains grew during two nearly consecutive episodes of titanite formation: a near-peak event at 730-740 °C, 10 kbar, 30-26 Ma, and a peak event at 740-765 °C, 10.5 kbar, 25-20 Ma. Both episodes of titanite growth are correlated with specific CO2-producing reactions and constrain the timing, duration and P-T conditions of the main CO2-producing events, as well as the amounts of CO2 produced (1.4-1.8 wt% CO2). A first-order extrapolation of these CO2 amounts to the orogen scale yields metamorphic CO2 fluxes ranging between 1.4 and 19.4 Mt/yr; these values are of the same order of magnitude as the present-day CO2 fluxes degassed from spring waters located along the Main Central Thrust. We suggest that these metamorphic CO2 fluxes should be considered in any future attempt to estimate the global budget of non-volcanic carbon fluxes from the lithosphere.
Microprobe monazite geochronology: new techniques for dating deformation and metamorphism
NASA Astrophysics Data System (ADS)
Williams, M.; Jercinovic, M.; Goncalves, P.; Mahan, K.
2003-04-01
High-resolution compositional mapping, age mapping, and precise dating of monazite on the electron microprobe are powerful additions to microstructural and petrologic analysis and important tools for tectonic studies. The in-situ nature and high spatial resolution of the technique offer an entirely new level of structurally and texturally specific geochronologic data that can be used to put absolute time constraints on P-T-D paths, constrain the rates of sedimentary, metamorphic, and deformational processes, and provide new links between metamorphism and deformation. New analytical techniques (including background modeling, sample preparation, and interference analysis) have significantly improved the precision and accuracy of the technique, and new mapping and image analysis techniques have increased its efficiency and strengthened the correlation with fabrics and textures. Microprobe geochronology is particularly applicable to three persistent microstructural-microtextural problem areas: (1) constraining the chronology of metamorphic assemblages; (2) constraining the timing of deformational fabrics; and (3) interpreting other geochronological results. In addition, authigenic monazite can be used to date sedimentary basins, and detrital monazite can fingerprint sedimentary source areas, both critical for tectonic analysis. Although some monazite generations can be directly tied to metamorphism or deformation, at present the most common constraints rely on monazite inclusion relations in porphyroblasts that, in turn, can be tied to the deformation and/or metamorphic history. Examples will be presented from deep-crustal rocks of northern Saskatchewan and from mid-crustal rocks of the southwestern USA. Microprobe monazite geochronology has been used in both regions to deconvolute overprinting deformation and metamorphic events and to clarify the interpretation of other geochronologic data. Microprobe mapping and dating are powerful companions to mass spectrometric dating techniques. They allow geochronology to be incorporated into the microstructural analytical process, resulting in a new level of integration of time (t) into P-T-D histories.
Adjoint-Based Sensitivity and Uncertainty Analysis for Density and Composition: A User’s Guide
Favorite, Jeffrey A.; Perko, Zoltan; Kiedrowski, Brian C.; ...
2017-03-01
The ability to perform sensitivity analyses using adjoint-based first-order sensitivity theory has existed for decades. This paper provides guidance on how adjoint sensitivity methods can be used to predict the effect of material density and composition uncertainties in critical experiments, including when these uncertain parameters are correlated or constrained. Two widely used Monte Carlo codes, MCNP6 (Ref. 2) and SCALE 6.2 (Ref. 3), are both capable of computing isotopic density sensitivities in continuous energy and angle. Additionally, Perkó et al. have shown how individual isotope density sensitivities, easily computed using adjoint methods, can be combined to compute constrained first-order sensitivities that may be used in the uncertainty analysis. This paper provides details on how the codes are used to compute first-order sensitivities and how the sensitivities are used in an uncertainty analysis. Constrained first-order sensitivities are computed in a simple example problem.
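To make the idea of combining individual isotope sensitivities concrete, here is a minimal sketch for one simple constraint: total material density held fixed, with the displaced mass redistributed over the other isotopes in proportion to their fractions. This particular redistribution rule is an assumption chosen for illustration; it is not necessarily the formulation of Perkó et al. or of either code.

```python
import numpy as np

def constrained_sensitivities(S, x):
    """Convert unconstrained relative sensitivities S_i = (x_i/R) dR/dx_i
    into sensitivities under a fixed-total-density constraint, assuming
    (for illustration) that mass displaced by perturbing isotope i is
    spread over the other isotopes in proportion to their densities x_j."""
    S = np.asarray(S, float)
    x = np.asarray(x, float)
    total = x.sum()
    # Scaling every other isotope by 1 - eps * x_i / (total - x_i) keeps
    # the total fixed; differentiating the response gives:
    return S - x / (total - x) * (S.sum() - S)

# Illustrative numbers only: three isotopes of a hypothetical material.
x = np.array([0.70, 0.25, 0.05])      # density fractions
S = np.array([0.30, -0.10, 0.02])     # unconstrained k-eff sensitivities
S_con = constrained_sensitivities(S, x)
```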
NASA Technical Reports Server (NTRS)
Mushotzky, Richard (Technical Monitor); Elvis, Martin
2004-01-01
The aim of the proposal is to investigate the absorption properties of a sample of intermediate-redshift quasars. The main goals of the project are to: measure the redshift and the column density of the X-ray absorbers; test the correlation between absorption and redshift suggested by ROSAT and ASCA data; constrain the absorber ionization status and metallicity; and constrain the absorber dust content and composition through a comparison between the amount of X-ray absorption and optical dust extinction. Unanticipated low-energy cut-offs were discovered in ROSAT spectra of quasars and confirmed by ASCA, BeppoSAX and Chandra. In most cases it was not possible to constrain adequately the redshift of the absorber from the X-ray data alone. Two possibilities remain open: a) absorption at the quasar redshift; and b) intervening absorption. The evidence in favour of intrinsic absorption is all indirect. Sensitive XMM observations can discriminate between these different scenarios. If the absorption is at the quasar redshift, we can study whether the quasar environment evolves with cosmic time.
VLTI-GRAVITY measurements of cool evolved stars
NASA Astrophysics Data System (ADS)
Wittkowski, M.; Rau, G.; Chiavassa, A.; Höfner, S.; Scholz, M.; Wood, P. R.; de Wit, W. J.; Eisenhauer, F.; Haubois, X.; Paumard, T.
2018-06-01
Context: Dynamic model atmospheres of Mira stars predict variability in the photospheric radius and in atmospheric molecular layers that is not yet strongly constrained by observations. Aims: Here we measure the variability of the oxygen-rich Mira star R Peg in near-continuum and molecular bands. Methods: We used near-infrared K-band spectro-interferometry with a spectral resolution of about 4000, obtained at four epochs between post-maximum and minimum visual phases, employing the newly available GRAVITY beam combiner at the Very Large Telescope Interferometer (VLTI). Results: Our observations show a continuum radius that is anti-correlated with the visual lightcurve. Uniform disc (UD) angular diameters at a near-continuum wavelength of 2.25 μm increase steadily, with values of 8.7 ± 0.1 mas, 9.4 ± 0.1 mas, 9.8 ± 0.1 mas, and 9.9 ± 0.1 mas at visual phases of 0.15, 0.36, 0.45, and 0.53, respectively. UD diameters at a bandpass around 2.05 μm, dominated by water vapour, follow the near-continuum variability at larger UD diameters between 10.7 mas and 11.7 mas. UD diameters at the CO 2-0 bandhead, instead, are correlated with the visual lightcurve and anti-correlated with the near-continuum UD diameters, with values between 12.3 mas and 11.7 mas. Conclusions: The observed anti-correlation between continuum radius and visual lightcurve is consistent with an earlier study of the oxygen-rich Mira S Lac, and with recent 1D CODEX dynamic model atmosphere predictions. The amplitude of the variation is comparable to the earlier observations of S Lac, and smaller than predicted by CODEX models. The wavelength-dependent visibility variations at our epochs can be reproduced by a set of CODEX models at model phases between 0.3 and 0.6. The anti-correlation of water vapour and CO contributions at our epochs suggests that these molecules undergo different processes in the extended atmosphere along the stellar cycle. The newly available GRAVITY instrument is suited to conducting the longer time-series observations that are needed to provide strong constraints on the model-predicted intra- and inter-cycle variability. Based on observations made with the VLT Interferometer at Paranal Observatory under programme IDs 60.A-9176 and 098.D-0647.
Dynamic Parameters of the 2015 Nepal Gorkha Mw7.8 Earthquake Constrained by Multi-observations
NASA Astrophysics Data System (ADS)
Weng, H.; Yang, H.
2017-12-01
Dynamic rupture models can provide detailed insights into rupture physics and help assess future seismic risk. Many studies have attempted to constrain the slip-weakening distance, an important parameter controlling the frictional behavior of rock, for several earthquakes based on dynamic models, kinematic models, and direct estimation from near-field ground motion. However, large uncertainties in the value of the slip-weakening distance remain, mostly because of the intrinsic trade-off between the slip-weakening distance and fault strength. Here we use a spontaneous dynamic rupture model to constrain the frictional parameters of the 25 April 2015 Mw7.8 Nepal earthquake, combining multiple seismic observations such as high-rate cGPS data, strong motion data, and kinematic source models. With numerous tests we find that the trade-off patterns of final slip, rupture speed, static GPS ground displacements, and dynamic ground waveforms are quite different. Combining all the seismic constraints, we obtain a robust estimate of the average slip-weakening distance, 0.6 m, without a substantial trade-off, in contrast to a previous kinematic estimate of 5 m. To the best of our knowledge, this is the first time the slip-weakening distance on a seismogenic fault has been robustly determined from seismic observations. The well-constrained frictional parameters may be used in future dynamic models to assess seismic hazard, for example by estimating the peak ground acceleration (PGA). A similar approach could also be applied to other great earthquakes, enabling broad estimation of dynamic parameters from a global perspective that can better reveal the intrinsic physics of earthquakes.
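For reference, the linear slip-weakening law that defines the parameter constrained here is usually written as follows (a standard form going back to Ida and Andrews; τ_s and τ_d are the static and dynamic fault strength, D the accumulated slip):

\[
\tau(D) = \tau_d + (\tau_s - \tau_d)\,\max\!\left(1 - \frac{D}{D_c},\ 0\right),
\]

where D_c is the slip-weakening distance; the study's preferred average value is D_c ≈ 0.6 m.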
Bassen, David M; Vilkhovoy, Michael; Minot, Mason; Butcher, Jonathan T; Varner, Jeffrey D
2017-01-25
Ensemble modeling is a promising approach for obtaining robust predictions and coarse-grained population behavior in deterministic mathematical models. Ensemble approaches address model uncertainty by using parameter or model families instead of single best-fit parameters or fixed model structures. Parameter ensembles can be selected based upon simulation error, along with other criteria such as diversity or steady-state performance. Simulations using parameter ensembles can estimate confidence intervals on model variables and robustly constrain model predictions, despite having many poorly constrained parameters. In this software note, we present a multiobjective-based technique to estimate parameter or model ensembles, the Pareto Optimal Ensemble Technique in the Julia programming language (JuPOETs). JuPOETs integrates simulated annealing with Pareto optimality to estimate ensembles on or near the optimal trade-off surface between competing training objectives. We demonstrate JuPOETs on a suite of multiobjective problems, including test functions with parameter bounds and system constraints, as well as on the identification of a proof-of-concept biochemical model with four conflicting training objectives. JuPOETs identified optimal or near-optimal solutions approximately six-fold faster than a corresponding implementation in Octave for the suite of test functions. For the proof-of-concept biochemical model, JuPOETs produced an ensemble of parameters that captured the mean of the training data across conflicting data sets while simultaneously estimating parameter sets that performed well on each of the individual objective functions. JuPOETs is a promising approach for the estimation of parameter and model ensembles using multiobjective optimization. JuPOETs can be adapted to solve many problem types, including mixed binary and continuous variable types, bilevel optimization problems and constrained problems, without altering the base algorithm. JuPOETs is open source, available under an MIT license, and can be installed using the Julia package manager from the JuPOETs GitHub repository.
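The Pareto-dominance test at the core of this kind of ensemble selection is small enough to sketch. Below is a generic dominance check and non-dominated filter in Python; JuPOETs itself is written in Julia, and this is not its code, only an illustration of the underlying concept.

```python
import numpy as np

def dominates(f_a, f_b):
    """True if objective vector f_a Pareto-dominates f_b (minimization):
    no worse in every objective and strictly better in at least one."""
    f_a, f_b = np.asarray(f_a), np.asarray(f_b)
    return bool(np.all(f_a <= f_b) and np.any(f_a < f_b))

def pareto_front(F):
    """Indices of non-dominated rows of F (n_candidates x n_objectives)."""
    return [i for i, fi in enumerate(F)
            if not any(dominates(fj, fi)
                       for j, fj in enumerate(F) if j != i)]

# Toy example: two conflicting training errors for five parameter sets.
F = np.array([[1.0, 4.0], [2.0, 2.0], [4.0, 1.0], [3.0, 3.0], [5.0, 5.0]])
front = pareto_front(F)   # -> [0, 1, 2]: the trade-off surface members
```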
Herath, B; Dochtermann, N A; Johnson, J I; Leonard, Z; Bowsher, J H
2015-12-01
Many exaggerated and novel traits are strongly influenced by sexual selection. Although sexual selection is a powerful evolutionary force, underlying genetic interactions can constrain evolutionary outcomes. The relative strength of selection vs. constraint has been a matter of debate for the evolution of male abdominal appendages in sepsid flies. These abdominal appendages are involved in courtship and mating, but their function has not been directly tested. We performed mate choice experiments to determine whether sexual selection acts on abdominal appendages in the sepsid Themira biloba. We tested whether appendage bristle length influenced successful insemination by surgically trimming the bristles. Females paired with males that had shortened bristles laid only unfertilized eggs, indicating that long bristles are necessary for successful insemination. We also tested whether the evolution of bristle length was constrained by phenotypic correlations with other traits. Analyses of phenotypic covariation indicated that bristle length was highly correlated with other abdominal appendage traits, but was not correlated with abdominal sternite size. Thus, abdominal appendages are not exaggerated traits like many sexual ornaments, but vary independently from body size. At the same time, strong correlations between bristle length and appendage length suggest that selection on bristle length is likely to result in a correlated increase in appendage length. Bristle length is under sexual selection in T. biloba and has the potential to evolve independently from abdomen size.
Statistical mechanics of budget-constrained auctions
NASA Astrophysics Data System (ADS)
Altarelli, F.; Braunstein, A.; Realpe-Gomez, J.; Zecchina, R.
2009-07-01
Finding the optimal assignment in budget-constrained auctions is a combinatorial optimization problem with many important applications, a notable example being the sale of advertisement space by search engines (in this context the problem is often referred to as the off-line AdWords problem). On the basis of the cavity method of statistical mechanics, we introduce a message-passing algorithm that is capable of efficiently solving random instances of the problem extracted from a natural distribution, and we derive from its properties the phase diagram of the problem. As the control parameter (the average value of the budgets) is varied, we find two phase transitions delimiting a region in which long-range correlations arise.
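For orientation, one standard formulation of the off-line problem is the integer program below (a generic statement, not the specific random ensemble studied in the paper): advertiser i has budget B_i and bids b_{ij} on query j, with assignment variables x_{ij} ∈ {0, 1}:

\[
\max_{x}\ \sum_{i,j} b_{ij} x_{ij}
\quad \text{subject to} \quad
\sum_i x_{ij} \le 1 \ \ \forall j,
\qquad
\sum_j b_{ij} x_{ij} \le B_i \ \ \forall i.
\]

The budget constraints couple all queries assigned to the same advertiser; this coupling is what the message-passing algorithm must handle.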
Raabe, Joshua K.; Gardner, Beth; Hightower, Joseph E.
2013-01-01
We developed a spatial capture–recapture model to evaluate survival and activity centres (i.e., mean locations) of tagged individuals detected along a linear array. Our spatially explicit version of the Cormack–Jolly–Seber model, analyzed using a Bayesian framework, correlates movement between periods and can incorporate environmental or other covariates. We demonstrate the model using 2010 data for anadromous American shad (Alosa sapidissima) tagged with passive integrated transponders (PIT) at a weir near the mouth of a North Carolina river and passively monitored with an upstream array of PIT antennas. The river channel constrained migrations, resulting in linear, one-dimensional encounter histories that included both weir captures and antenna detections. Individual activity centres in a given time period were a function of the individual’s previous estimated location and the river conditions (i.e., gage height). Model results indicate high within-river spawning mortality (mean weekly survival = 0.80) and more extensive movements during elevated river conditions. This model is applicable for any linear array (e.g., rivers, shorelines, and corridors), opening new opportunities to study demographic parameters, movement or migration, and habitat use.
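A minimal sketch of the kind of linear state-space structure described (the notation here is invented for illustration and is not the authors' exact specification): with s_{i,t} the river location of the activity centre of fish i in week t, g_t the gage height, and z_{i,t} the survival indicator,

\[
s_{i,t} \sim \mathcal{N}\left(s_{i,t-1} + \beta\, g_t,\ \sigma^2\right),
\qquad
z_{i,t} \sim \text{Bernoulli}\left(\phi\, z_{i,t-1}\right),
\]

where φ is weekly survival (estimated near 0.80 in the study) and detection at a given antenna is modeled as a decreasing function of its distance from s_{i,t}.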
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bazan, Jose G.; Luxton, Gary; Kozak, Margaret M.
Purpose: To determine how chemotherapy agents affect radiation dose parameters that correlate with acute hematologic toxicity (HT) in patients treated with pelvic intensity modulated radiation therapy (P-IMRT) and concurrent chemotherapy. Methods and Materials: We assessed HT in 141 patients who received P-IMRT for anal, gynecologic, rectal, or prostate cancers, 95 of whom received concurrent chemotherapy. Patients were separated into 4 groups: mitomycin (MMC) + 5-fluorouracil (5FU, 37 of 141), platinum ± 5FU (Cis, 32 of 141), 5FU (26 of 141), and P-IMRT alone (46 of 141). The pelvic bone was contoured as a surrogate for pelvic bone marrow (PBM) and divided into subsites: ilium, lower pelvis, and lumbosacral spine (LSS). The volumes of each region receiving 5-40 Gy were calculated. The endpoint for HT was grade ≥3 (HT3+) leukopenia, neutropenia or thrombocytopenia. Normal tissue complication probability was calculated using the Lyman-Kutcher-Burman model. Logistic regression was used to analyze the association between HT3+ and dosimetric parameters. Results: Twenty-six patients experienced HT3+: 10 of 37 (27%) MMC, 14 of 32 (44%) Cis, 2 of 26 (8%) 5FU, and 0 of 46 P-IMRT. PBM dosimetric parameters were correlated with HT3+ in the MMC group but not in the Cis group. LSS dosimetric parameters were well correlated with HT3+ in both the MMC and Cis groups. Constrained optimization (0 …
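For reference, the Lyman-Kutcher-Burman model referred to above computes the complication probability from the generalized equivalent uniform dose (standard form; the volume-effect and tolerance parameters n, m, TD_50 are organ-specific and not quoted from this study):

\[
\mathrm{gEUD} = \Big(\sum_i v_i\, D_i^{1/n}\Big)^{n},
\qquad
\mathrm{NTCP} = \Phi\!\left(\frac{\mathrm{gEUD} - TD_{50}}{m\, TD_{50}}\right),
\]

where v_i is the fractional volume receiving dose D_i and Φ is the standard normal cumulative distribution function.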
Constraining cosmic scatter in the Galactic halo through a differential analysis of metal-poor stars
NASA Astrophysics Data System (ADS)
Reggiani, Henrique; Meléndez, Jorge; Kobayashi, Chiaki; Karakas, Amanda; Placco, Vinicius
2017-12-01
Context: The chemical abundances of metal-poor halo stars are important for understanding key aspects of Galactic formation and evolution. Aims: We aim to constrain Galactic chemical evolution with precise chemical abundances of metal-poor stars (-2.8 ≤ [Fe/H] ≤ -1.5). Methods: Using high-resolution, high-S/N UVES spectra of 23 stars and employing the differential analysis technique, we estimated stellar parameters and obtained precise LTE chemical abundances. Results: We present the abundances of Li, Na, Mg, Al, Si, Ca, Sc, Ti, V, Cr, Mn, Co, Ni, Zn, Sr, Y, Zr, and Ba. The differential technique allowed us to obtain an unprecedentedly low level of scatter in our analysis, with standard deviations as low as 0.05 dex and mean errors as low as 0.05 dex for [X/Fe]. Conclusions: By expanding our metallicity range with precise abundances from other works, we were able to precisely constrain Galactic chemical evolution models over a wide metallicity range (-3.6 ≤ [Fe/H] ≤ -0.4). The agreements and discrepancies found are key to further improvement of both models and observations. We also show that the LTE analysis of Cr II is a much more reliable source of abundances for chromium, as Cr I has important NLTE effects. These effects can be clearly seen when we compare the observed abundances of Cr I and Cr II with GCE models: while Cr I shows a clear disagreement between model and observations, Cr II is very well modeled. We confirm tight increasing trends of Co and Zn toward lower metallicities, and a tight flat evolution of Ni relative to Fe. Our results strongly suggest inhomogeneous enrichment from hypernovae. Our precise stellar parameters result in a low star-to-star scatter (0.04 dex) in the Li abundances of our sample, with a mean value about 0.4 dex lower than the prediction from standard Big Bang nucleosynthesis; we also study the relation between lithium depletion and stellar mass, but it is difficult to assess a correlation due to the limited mass range. We find two blue straggler stars, based on their very depleted Li abundances. One of them shows intriguing abundance anomalies, including a possible zinc enhancement, suggesting that zinc may also have been produced by a former AGB companion. Tables A.1-A.6 are also available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/608/A46
NASA Astrophysics Data System (ADS)
Gaztañaga, Enrique; Juszkiewicz, Roman
2001-09-01
We present a new constraint on the biased galaxy formation picture. Gravitational instability theory predicts that the two-point mass density correlation function, ξ(r), has an inflection point at the separation r = r₀, corresponding to the boundary between the linear and nonlinear regime of clustering, ξ ≃ 1. We show how this feature can be used to constrain the biasing parameter b² ≡ ξ_g(r)/ξ(r) on scales r ≈ r₀, where ξ_g is the galaxy-galaxy correlation function, which is allowed to differ from ξ. We apply our method to real data: the ξ_g(r) estimated from the Automatic Plate Measuring (APM) galaxy survey. Our results suggest that the APM galaxies trace the mass at separations r ≳ 5 h⁻¹ Mpc, where h is the Hubble constant in units of 100 km s⁻¹ Mpc⁻¹. The present results agree with earlier studies, based on comparing higher order correlations in the APM with weakly nonlinear perturbation theory. Both approaches constrain the b factor to be within 20% of unity. If the existence of the feature that we identified in the APM ξ_g(r) - the inflection point near ξ_g = 1 - is confirmed by more accurate surveys, we may have discovered gravity's smoking gun: the long-awaited "shoulder" in ξ, predicted by Gott and Rees 25 years ago.
Correlation between k-space sampling pattern and MTF in compressed sensing MRSI.
Heikal, A A; Wachowicz, K; Fallone, B G
2016-10-01
To investigate the relationship between the k-space sampling patterns used for compressed sensing MR spectroscopic imaging (CS-MRSI) and the modulation transfer function (MTF) of the metabolite maps. This relationship may allow the desired frequency content of the metabolite maps to be quantitatively tailored when designing an undersampling pattern. Simulations of a phantom were used to calculate the MTF of Nyquist-sampled (NS) 32 × 32 MRSI and of four-times-undersampled CS-MRSI reconstructions. The dependence of the CS-MTF on the k-space sampling pattern was evaluated for three sets of k-space sampling patterns generated using different probability distribution functions (PDFs). CS-MTFs were also evaluated for three more sets of patterns generated using a modified algorithm in which the sampling ratios are constrained to adhere to the PDFs. Strong visual correlation as well as high R² was found between the MTF of CS-MRSI and the product of the frequency-dependent sampling ratio and the NS 32 × 32 MTF. Also, PDF-constrained sampling patterns led to higher reproducibility of the CS-MTF and stronger correlations to the above-mentioned product. The relationship established in this work provides the user with a theoretical solution for the MTF of CS-MRSI that is both predictable and customizable to the user's needs.
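The reported empirical relationship, CS-MTF ≈ (frequency-dependent sampling ratio) × (NS MTF), can be expressed in a few lines. The sketch below computes the radial sampling ratio of a binary k-space mask and forms that product; the array sizes, mask, and placeholder NS MTF are invented for illustration, not the study's simulated values.

```python
import numpy as np

def radial_sampling_ratio(mask, n_bins=16):
    """Fraction of k-space samples acquired in each radial frequency bin
    of a binary undersampling mask (center of k-space = zero frequency)."""
    ny, nx = mask.shape
    ky, kx = np.meshgrid(np.arange(ny) - ny // 2,
                         np.arange(nx) - nx // 2, indexing="ij")
    r = np.hypot(ky, kx)
    edges = np.linspace(0.0, r.max() + 1e-9, n_bins + 1)
    ratio = np.empty(n_bins)
    for b in range(n_bins):
        sel = (r >= edges[b]) & (r < edges[b + 1])
        ratio[b] = mask[sel].mean() if sel.any() else 0.0
    return ratio

rng = np.random.default_rng(1)
mask = rng.random((32, 32)) < 0.25            # placeholder 4x undersampling
mtf_ns = np.linspace(1.0, 0.5, 16)            # placeholder NS 32x32 MTF
mtf_cs_pred = radial_sampling_ratio(mask) * mtf_ns   # predicted CS-MTF
```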