Science.gov

Sample records for amanda observations constrain

  1. Cloudsat Satellite Images of Amanda

    NASA Video Gallery

    NASA's CloudSat satellite flew over Hurricane Amanda on May 25, at 5 p.m. EDT and saw a deep area of moderate to heavy-moderate precipitation below the freezing level (where precipitation changes f...

  2. Constraining the Braneworld with Gravitational Wave Observations

    NASA Technical Reports Server (NTRS)

    McWilliams, Sean T.

    2011-01-01

    Some braneworld models may have observable consequences that, if detected, would validate a requisite element of string theory. In the infinite Randall-Sundrum model (RS2), the AdS radius of curvature, L, of the extra dimension supports a single bound state of the massless graviton on the brane, thereby reproducing Newtonian gravity in the weak-field limit. However, using the AdS/CFT correspondence, it has been suggested that one possible consequence of RS2 is an enormous increase in Hawking radiation emitted by black holes. We utilize this possibility to derive two novel methods for constraining L via gravitational wave measurements. We show that the EMRI event rate detected by LISA can constrain L at the approximately 1 micron level for optimal cases, while the observation of a single galactic black hole binary with LISA results in an optimal constraint of L less than or equal to 5 microns.

  3. Constraining the halo mass function with observations

    NASA Astrophysics Data System (ADS)

    Castro, Tiago; Marra, Valerio; Quartin, Miguel

    2016-12-01

    The abundances of dark matter haloes in the universe are described by the halo mass function (HMF). It enters most cosmological analyses and parametrizes how the linear growth of primordial perturbations is connected to these abundances. Interestingly, this connection can be made approximately cosmology independent. This made it possible to map in detail its near-universal behaviour through large-scale simulations. However, such simulations may suffer from systematic effects, especially if baryonic physics is included. In this paper, we ask how well observations can constrain directly the HMF. The observables we consider are galaxy cluster number counts, galaxy cluster power spectrum and lensing of Type Ia supernovae. Our results show that Dark Energy Survey is capable of putting the first meaningful constraints on the HMF, while both Euclid and J-PAS (Javalambre-Physics of the Accelerated Universe Astrophysical Survey) can give stronger constraints, comparable to the ones from state-of-the-art simulations. We also find that an independent measurement of cluster masses is even more important for measuring the HMF than for constraining the cosmological parameters, and can vastly improve the determination of the HMF. Measuring the HMF could thus be used to cross-check simulations and their implementation of baryon physics. It could even, if deviations cannot be accounted for, hint at new physics.
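
    For reference, the HMF discussed above is usually written in the standard form below; the specific multiplicity function f(σ) adopted by the authors is not specified here, and the expression is only meant to fix notation.

    \[
    \frac{dn}{d\ln M} = f(\sigma)\,\frac{\bar{\rho}_m}{M}\,\left|\frac{d\ln\sigma}{d\ln M}\right|,
    \qquad
    \sigma^2(M) = \frac{1}{2\pi^2}\int_0^\infty P(k)\,\hat{W}^2(kR)\,k^2\,dk,
    \]

    where \(\bar{\rho}_m\) is the mean matter density, \(P(k)\) the linear power spectrum, and \(\hat{W}\) the Fourier transform of a top-hat window of radius \(R=(3M/4\pi\bar{\rho}_m)^{1/3}\). The "near-universal behaviour" refers to the weak cosmology dependence of f(σ).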

  4. Constraining Cosmological Models with Different Observations

    NASA Astrophysics Data System (ADS)

    Wei, J. J.

    2016-07-01

    With the observations of Type Ia supernovae (SNe Ia), scientists discovered that the Universe is undergoing an accelerated expansion, which revealed the existence of dark energy in 1998. Since this remarkable discovery, cosmology has become a central topic in physics research. Cosmology is a subject that strongly depends on astronomical observations. Therefore, constraining different cosmological models with all kinds of observations is one of the most important tasks in modern cosmology. The goal of this thesis is to investigate cosmology using the latest observations. The observations include SNe Ia, Type Ic superluminous supernovae (SLSNe Ic), gamma-ray bursts (GRBs), angular diameter distances of galaxy clusters, strong gravitational lensing, and age measurements of old passive galaxies, etc. In Chapter 1, we briefly review the research background of cosmology and introduce some cosmological models; we then summarize the progress on cosmology from all kinds of observations in more detail. In Chapter 2, we present the results of our studies on supernova cosmology. The main difficulty with the use of SNe Ia as standard candles is that one must optimize three or four nuisance parameters characterizing SN luminosities simultaneously with the parameters of an expansion model of the Universe. We have confirmed that one should optimize all of the parameters by carrying out the method of maximum likelihood estimation in any situation where the parameters include an unknown intrinsic dispersion. The commonly used method, which estimates the dispersion by requiring the reduced χ^{2} to equal unity, does not take into account all possible variances among the parameters. We carry out such a comparison of the standard ΛCDM cosmology and the R_{h}=ct Universe using the SN Legacy Survey sample of 252 SN events, and show that each model fits its individually reduced data very well. Moreover, it is quite evident that SLSNe Ic may be useful
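
    The point about jointly optimizing the unknown intrinsic dispersion can be made concrete with a small sketch. The following is an illustrative maximum-likelihood fit, not the thesis code: the flat-ΛCDM distance modulus, the data arrays, and the parameter bounds are placeholders. The ln(variance) term is exactly what the "set reduced χ² to unity" shortcut leaves out.

      import numpy as np
      from scipy import integrate, optimize

      def mu_flat_lcdm(z, omega_m, h=0.7):
          # Distance modulus for a flat LambdaCDM model (illustrative only)
          c = 299792.458  # km/s
          ez = lambda zp: np.sqrt(omega_m * (1.0 + zp)**3 + (1.0 - omega_m))
          dc = np.array([integrate.quad(lambda zp: 1.0 / ez(zp), 0.0, zi)[0] for zi in z])
          dl = (1.0 + z) * (c / (100.0 * h)) * dc  # luminosity distance [Mpc]
          return 5.0 * np.log10(dl) + 25.0

      def neg2_log_like(params, z, mu_obs, sig_obs):
          # -2 ln L with an unknown intrinsic dispersion sigma_int
          omega_m, sigma_int = params
          var = sig_obs**2 + sigma_int**2
          resid = mu_obs - mu_flat_lcdm(z, omega_m)
          # The np.log(var) term penalizes inflating sigma_int; it is absent
          # from the "reduced chi^2 = 1" prescription criticized above.
          return np.sum(resid**2 / var + np.log(2.0 * np.pi * var))

      # Placeholder data; substitute a real SN compilation.
      rng = np.random.default_rng(1)
      z = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
      mu_obs = mu_flat_lcdm(z, 0.3) + rng.normal(0.0, 0.15, z.size)
      sig_obs = np.full(z.size, 0.12)

      fit = optimize.minimize(neg2_log_like, x0=[0.3, 0.1],
                              args=(z, mu_obs, sig_obs),
                              bounds=[(0.05, 0.6), (1e-3, 0.5)])
      print(fit.x)  # best-fit [Omega_m, sigma_int]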

  5. Constrained Inversion of Enceladus Interaction Observations

    NASA Astrophysics Data System (ADS)

    Herbert, Floyd; Khurana, K. K.

    2007-10-01

    Many detailed and sophisticated ab initio calculations of the electrodynamic interaction of Enceladus' plume with Saturn's corotating magnetospheric plasma flow have been performed. So far, however, all such calculations have been forward models that assume the properties of the plume and compute perturbations to the magnetic (and in some cases, flow velocity) field. As a complement to the forward calculations, the work reported here explores the inverse approach of using simplified physical models of the interaction to computationally invert the observed magnetic field perturbations, in order to determine the cross-B-field conductivity distribution near Enceladus and, from that, the neutral gas distribution. Direct inversion of magnetic field observations to current systems is, of course, impossible, but adding the additional constraint of the interaction physics greatly reduces the non-uniqueness of the computed result. This approach was successfully used by Herbert (JGR 90:8241, 1985) to constrain the atmospheric distribution on Io and the Io torus mass density at the time of the Voyager encounter. Work so far has derived the expected result that there is a cone-shaped region of enhanced cross-field conductivity south of Enceladus, through which currents are driven by the motional electric field. That is, near Enceladus' south pole the cross-field currents are localized, but they are more widely spread at greater distance. This cross-field conductivity is presumably both pickup and collisional (Pedersen and Hall). Due to enforcement of current conservation, Alfven-wing-like currents north of the main part of the interaction region seem to close partly around Enceladus (assumed insulating) and also to continue northward with attenuated intensity, as though a tenuous global exosphere on Enceladus were providing additional cross-field conductivity. FH thanks the NASA Outer Planets Research, Planetary Atmospheres, and Geospace Science Programs for

  6. 3-D TRMM Flyby of Hurricane Amanda

    NASA Video Gallery

    The TRMM satellite flew over Hurricane Amanda on Tuesday, May 27 at 1049 UTC (6:49 a.m. EDT) and captured rainfall rates and cloud height data that were used to create this 3-D simulated flyby. Cred...

  7. Constraining Simulated Photosynthesis with Fluorescence Observations

    NASA Astrophysics Data System (ADS)

    Baker, I. T.; Berry, J. A.; Lee, J.; Frankenberg, C.; Denning, S.

    2012-12-01

    The measurement of chlorophyll fluorescence from satellites is an emerging technology. To date, most applications have compared fluorescence to light use efficiency models of Gross Primary Productivity (GPP). A close correspondence between fluorescence and GPP has been found in these comparisons. Here, we 'go the other way' and calculate fluorescence using an enzyme kinetic photosynthesis model (the Simple Biosphere Model; SiB), and compare to spectral retrievals. We utilize multiple representations of model phenology as a sensitivity test, obtaining leaf area index (LAI) and fraction of photosynthetically active radiation absorbed (fPAR) both from MODIS-derived products and from a prognostic model of LAI/fPAR based on the growing season index (PGSI). We find that the bidirectional reflectance distribution function (BRDF), canopy radiative transfer, and leaf-to-canopy scaling all contribute to variability in simulated fluorescence. We use our results to evaluate discrepancies between light use efficiency and enzyme kinetic models across latitudinal, vegetation, and climatological gradients. Satellite retrievals of fluorescence will provide insight into photosynthetic processes and constrain simulations of the carbon cycle across multiple spatiotemporal scales.

  8. Constraining Numerical Geodynamo Modeling with Surface Observations

    NASA Technical Reports Server (NTRS)

    Kuang, Weijia; Tangborn, Andrew

    2006-01-01

    Numerical dynamo solutions have traditionally been generated entirely by a set of self-consistent differential equations that govern the spatial-temporal variation of the magnetic field, velocity field, and other fields related to dynamo processes. In particular, those solutions are obtained with parameters very different from those appropriate for the Earth's core. Geophysical application of the numerical results therefore depends on correct understanding of the differences (errors) between the model outputs and the true states (truth) in the outer core. Part of the truth can be observed at the surface in the form of the poloidal magnetic field. To understand these differences, or errors, we generate a new initial model state (analysis) by sequentially assimilating the model outputs with the surface geomagnetic observations using an optimal interpolation scheme. The time evolution of the core state is then controlled by our MoSST core dynamics model. The final outputs (forecasts) are then compared with the surface observations as a means to test the success of the assimilation. We use the surface geomagnetic data back to year 1900 for our studies, with 5-year forecast and 20-year analysis periods. We intend to use the results to understand the time variation of the errors with the assimilation sequences, and the impact of the assimilation on other unobservable quantities, such as the toroidal field and the fluid velocity in the core.
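
    A minimal sketch of the optimal-interpolation analysis step described above, under generic assumptions: the state vector, error covariances, and the observation operator H (mapping the core state to the observable surface poloidal field) are placeholders, not the MoSST implementation.

      import numpy as np

      def oi_analysis(x_f, P_f, y, H, R):
          """One optimal-interpolation analysis: blend forecast x_f with observations y."""
          S = H @ P_f @ H.T + R                # innovation covariance
          K = np.linalg.solve(S, H @ P_f).T    # gain K = P_f H^T S^{-1} (S, P_f symmetric)
          x_a = x_f + K @ (y - H @ x_f)        # analysis = new initial model state
          return x_a, K

      # Toy example: 4-component "core state", 2 surface observations
      rng = np.random.default_rng(0)
      x_truth = rng.normal(size=4)
      x_f = x_truth + rng.normal(scale=0.5, size=4)   # imperfect forecast
      P_f = 0.25 * np.eye(4)
      H = rng.normal(size=(2, 4))                     # core state -> surface poloidal field
      R = 0.01 * np.eye(2)
      y = H @ x_truth + rng.normal(scale=0.1, size=2)
      x_a, _ = oi_analysis(x_f, P_f, y, H, R)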

  9. Constraining the Noncommutative Spectral Action via Astrophysical Observations

    SciTech Connect

    Nelson, William; Ochoa, Joseph; Sakellariadou, Mairi

    2010-09-03

    The noncommutative spectral action extends our familiar notion of commutative spaces, using the data encoded in a spectral triple on an almost commutative space. Varying a rather simple action, one can derive all of the standard model of particle physics in this setting, in addition to a modified version of Einstein-Hilbert gravity. In this Letter we use observations of pulsar timings, assuming that no deviation from general relativity has been observed, to constrain the gravitational sector of this theory. While the bounds on the coupling constants remain rather weak, they are comparable to existing bounds on deviations from general relativity in other settings and are likely to be further constrained by future observations.

  10. Spatially distributed observations in constraining inundation modelling uncertainties

    NASA Astrophysics Data System (ADS)

    Werner, Micha; Blazkova, Sarka; Petr, Jiri

    2005-10-01

    The performance of two modelling approaches for predicting floodplain inundation is tested using observed flood extent and 26 distributed floodplain level observations for the 1997 flood event in the town of Usti nad Orlici in the Czech Republic. Although the one-dimensional hydrodynamic model and the integrated one- and two-dimensional model are shown to perform comparably against the flood extent data, the latter shows better performance against the distributed level observations. The comparable performance in predicting the extent of inundation is found to be primarily a result of the urban reach considered, with flood extent constrained by road and railway embankments. Uncertainty in the elevation model used in both approaches is shown to have little effect on the reliability of predicting flood extent, with a greater impact on the ability to predict the distributed level observations. These results show that the reliability of flood inundation modelling in urban reaches, where flood risk assessment is of more interest than in more rural reaches, can be improved greatly if distributed observations of levels in the floodplain are used to constrain model uncertainties.

  11. Traversable geometric dark energy wormholes constrained by astrophysical observations

    NASA Astrophysics Data System (ADS)

    Wang, Deng; Meng, Xin-he

    2016-09-01

    In this paper, we introduce astrophysical observations into wormhole research. We investigate the evolution behavior of the dark energy equation of state parameter ω by constraining the dark energy model, so that we can determine in which stage of the universe wormholes can exist by using the condition ω < -1. As a concrete instance, we study the Ricci dark energy (RDE) traversable wormholes constrained by astrophysical observations. In particular, we find from Fig. 5 of this work that when the effective equation of state parameter ω_X < -1 (or z < 0.109), i.e., when the null energy condition (NEC) is clearly violated, wormholes will exist (open). Subsequently, six specific solutions of statically and spherically symmetric traversable wormholes supported by the RDE fluids are obtained. Except for the case of a constant redshift function, where the solution is not only asymptotically flat but also traversable, the five remaining solutions are all non-asymptotically flat; therefore, the exotic matter from the RDE fluids is spatially distributed in the vicinity of the throat. Furthermore, we analyze the physical characteristics and properties of the RDE traversable wormholes. It is worth noting that, using the astrophysical observations, we obtain constraints on the parameters of the RDE model, explore the types of exotic RDE fluids in different stages of the universe, limit the number of available models for wormhole research, theoretically reduce the number of wormholes corresponding to different parameters of the RDE model, and provide a clearer picture for wormhole investigations from the new perspective of observational cosmology.
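
    The condition invoked above can be stated compactly. For a perfect-fluid source with equation of state p = ωρ, the null energy condition (NEC) reads

    \[
    T_{\mu\nu}k^{\mu}k^{\nu} \ge 0 \;\Longrightarrow\; \rho + p = \rho\,(1+\omega) \ge 0
    \]

    for all null vectors \(k^{\mu}\), so for ρ > 0 the NEC is violated exactly when ω < -1. This is why the observational fit, which tracks when the effective equation of state crosses -1, determines the epochs in which the wormhole throat can be supported.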

  12. Constraining the volatile fraction of planets from transit observations

    NASA Astrophysics Data System (ADS)

    Alibert, Y.

    2016-06-01

    Context. The determination of the abundance of volatiles in extrasolar planets is very important as it can provide constraints on transport in protoplanetary disks and on the formation location of planets. However, constraining the internal structure of low-mass planets from transit measurements is known to be a degenerate problem. Aims: Using planetary structure and evolution models, we show how observations of transiting planets can be used to constrain their internal composition, in particular the amount of volatiles in the planetary interior, and consequently the amount of gas (defined in this paper to be only H and He) that the planet harbors. We first explore planets that are located close enough to their star to have lost their gas envelope. We then concentrate on planets at larger distances and show that the observation of transiting planets at different evolutionary ages can provide statistical information on their internal composition, in particular on their volatile fraction. Methods: We computed the evolution of low-mass planets (super-Earths to Neptune-like) for different fractions of volatiles and gas. We used a four-layer model (core, silicate mantle, icy mantle, and gas envelope) and computed the internal structure of planets for different luminosities. With this internal structure model, we computed the internal and gravitational energy of planets, which was then used to derive the time evolution of the planet. Since the total energy of a planet depends on its heat capacity and density distribution and therefore on its composition, planets with different ice fractions have different evolution tracks. Results: We show for low-mass gas-poor planets that are located close to their central star that assuming evaporation has efficiently removed the entire gas envelope, it is possible to constrain the volatile fraction of close-in transiting planets. We illustrate this method on the example of 55 Cnc e and show that under the assumption of the absence of

  13. Thermal evolution of Mercury as constrained by MESSENGER observations

    NASA Astrophysics Data System (ADS)

    Michel, Nathalie C.; Hauck, Steven A.; Solomon, Sean C.; Phillips, Roger J.; Roberts, James H.; Zuber, Maria T.

    2013-05-01

    Observations of Mercury by the MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) spacecraft provide new constraints on that planet's thermal and interior evolution. Specifically, MESSENGER observations have constrained the rate of radiogenic heat production via measurement of uranium, thorium, and potassium at the surface, and identified a range of surface compositions consistent with high-temperature, high-degree partial melts of the mantle. Additionally, MESSENGER data have placed new limits on the spatial and temporal variation in volcanic and tectonic activity and enabled determination that the planet's core is larger than previously estimated. Because Mercury's mantle layer is also thinner than previously thought, this result gives greater likelihood to the possibility that mantle convection is marginally supercritical or even that the mantle is not convecting. We simulate mantle convection and magma generation within Mercury's mantle under two-dimensional axisymmetry and a broad range of conditions to understand the implications of MESSENGER observations for the thermal evolution of the planet. These models demonstrate that mantle convection can persist in such a thin mantle for a substantial portion of Mercury's history, and often to the present, as long as the mantle is thicker than ~300 km. We also find that magma generation in Mercury's convecting mantle is capable of producing widespread magmas by large-degree partial melting, consistent with MESSENGER observations of the planet's surface chemistry and geology.

  14. Siberian Arctic black carbon sources constrained by model and observation

    NASA Astrophysics Data System (ADS)

    Winiger, Patrik; Andersson, August; Eckhardt, Sabine; Stohl, Andreas; Semiletov, Igor P.; Dudarev, Oleg V.; Charkin, Alexander; Shakhova, Natalia; Klimont, Zbigniew; Heyes, Chris; Gustafsson, Örjan

    2017-02-01

    Black carbon (BC) in haze and deposited on snow and ice can have strong effects on the radiative balance of the Arctic. There is a geographic bias in Arctic BC studies toward the Atlantic sector, with lack of observational constraints for the extensive Russian Siberian Arctic, spanning nearly half of the circum-Arctic. Here, 2 y of observations at Tiksi (East Siberian Arctic) establish a strong seasonality in both BC concentrations (8 ng m⁻³ to 302 ng m⁻³) and dual-isotope-constrained sources (19 to 73% contribution from biomass burning). Comparisons between observations and a dispersion model, coupled to an anthropogenic emissions inventory and a fire emissions inventory, give mixed results. In the European Arctic, this model has proven to simulate BC concentrations and source contributions well. However, the model is less successful in reproducing BC concentrations and sources for the Russian Arctic. Using a Bayesian approach, we show that, in contrast to earlier studies, contributions from gas flaring (6%), power plants (9%), and open fires (12%) are relatively small, with the major sources instead being domestic (35%) and transport (38%). The observation-based evaluation of reported emissions identifies errors in spatial allocation of BC sources in the inventory and highlights the importance of improving emission distribution and source attribution, to develop reliable mitigation strategies for efficient reduction of BC impact on the Russian Arctic, one of the fastest-warming regions on Earth.
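
    The dual-isotope source apportionment quoted above rests on a radiocarbon mass balance: fossil carbon contains no 14C, so the measured Δ14C of the BC falls between a fossil and a contemporary-biomass endmember. A bare-bones illustration follows; the endmember values are assumptions for the example only, and the study itself additionally uses δ13C and a Bayesian treatment of endmember uncertainties.

      def biomass_fraction(d14c_sample, d14c_biomass=225.0, d14c_fossil=-1000.0):
          """Two-endmember radiocarbon mixing model (Delta-14C values in permil).

          Fossil carbon is radiocarbon-dead (-1000 permil); the biomass
          endmember used here is a placeholder value.
          """
          return (d14c_sample - d14c_fossil) / (d14c_biomass - d14c_fossil)

      # Example: a strongly fossil-influenced winter sample
      print(round(biomass_fraction(-700.0), 2))  # ~0.24, i.e. roughly one quarter biomass burning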

  15. Siberian Arctic black carbon sources constrained by model and observation.

    PubMed

    Winiger, Patrik; Andersson, August; Eckhardt, Sabine; Stohl, Andreas; Semiletov, Igor P; Dudarev, Oleg V; Charkin, Alexander; Shakhova, Natalia; Klimont, Zbigniew; Heyes, Chris; Gustafsson, Örjan

    2017-02-14

    Black carbon (BC) in haze and deposited on snow and ice can have strong effects on the radiative balance of the Arctic. There is a geographic bias in Arctic BC studies toward the Atlantic sector, with lack of observational constraints for the extensive Russian Siberian Arctic, spanning nearly half of the circum-Arctic. Here, 2 y of observations at Tiksi (East Siberian Arctic) establish a strong seasonality in both BC concentrations (8 ng⋅m(-3) to 302 ng⋅m(-3)) and dual-isotope-constrained sources (19 to 73% contribution from biomass burning). Comparisons between observations and a dispersion model, coupled to an anthropogenic emissions inventory and a fire emissions inventory, give mixed results. In the European Arctic, this model has proven to simulate BC concentrations and source contributions well. However, the model is less successful in reproducing BC concentrations and sources for the Russian Arctic. Using a Bayesian approach, we show that, in contrast to earlier studies, contributions from gas flaring (6%), power plants (9%), and open fires (12%) are relatively small, with the major sources instead being domestic (35%) and transport (38%). The observation-based evaluation of reported emissions identifies errors in spatial allocation of BC sources in the inventory and highlights the importance of improving emission distribution and source attribution, to develop reliable mitigation strategies for efficient reduction of BC impact on the Russian Arctic, one of the fastest-warming regions on Earth.

  16. Fast Emission Estimates in China Constrained by Satellite Observations (Invited)

    NASA Astrophysics Data System (ADS)

    Mijling, B.; van der A, R.

    2013-12-01

    Emission inventories of air pollutants are crucial information for policy makers and form important input data for air quality models. Unfortunately, bottom-up emission inventories, compiled from large quantities of statistical data, are easily outdated for an emerging economy such as China, where rapid economic growth changes emissions accordingly. Alternatively, top-down emission estimates from satellite observations of air constituents have the important advantages of being spatially consistent, having high temporal resolution, and enabling emission updates shortly after the satellite data become available. Constraining emissions from concentration measurements is, however, computationally challenging. Within the GlobEmission project of the European Space Agency (ESA) a new algorithm has been developed, specifically designed for fast daily emission estimates of short-lived atmospheric species on a mesoscopic scale (0.25 × 0.25 degree) from satellite observations of column concentrations. The algorithm needs only one forward model run from a chemical transport model to calculate the sensitivity of concentration to emission, using trajectory analysis to account for transport away from the source. By using a Kalman filter in the inverse step, optimal use is made of the a priori knowledge and the newly observed data. We apply the algorithm to NOx emission estimates in East China, using the CHIMERE model together with tropospheric NO2 column retrievals of the OMI and GOME-2 satellite instruments. The observations are used to construct a monthly emission time series, which reveals important emission trends such as the emission reduction measures during the Beijing Olympic Games, and the impact of and recovery from the global economic crisis. The algorithm is also able to detect emerging sources (e.g. new power plants) and improve emission information for areas where proxy data are lacking or poorly known (e.g. shipping emissions). The new emission estimates result in a better

  17. New Seismic Observables Constrain Structure within the Continental Lithosphere

    NASA Astrophysics Data System (ADS)

    Cunningham, E. E.; Lekic, V.

    2014-12-01

    The origin and stability of the continental lithosphere play a fundamental role in plate tectonics and enable the survival of Archean crust over billions of years. Recent advances in seismic data and imaging have revealed a velocity drop with depth within continental cratons too shallow to be interpreted as the lithosphere-asthenosphere boundary (Rychert and Shearer 2009). The significance of this "mid-lithospheric discontinuity" (MLD) - or multiple MLDs as suggested recently (Lekic & Fischer, 2013) - is not fully understood, and its implications for continental formation and stability are only beginning to be explored. Discrepancies call for both improving the constraints on the nature of the MLD, and relating these observations to tectonic setting and deformation history. The extensive coverage of the EarthScope USArray presents an unprecedented opportunity to systematically map the structure of the continental lithosphere. We use receiver functions (RFs) to isolate converted phases (Ps or Sp) produced across velocity discontinuities beneath a seismometer, and thereby constrain vertical density and seismic velocity variations. We show that at some stations, the apparent velocity contrast across the MLD demonstrates a dependence on seismic wave frequency, being greater at low frequencies than at high frequencies. This suggests that the MLD - at least in certain locations - is distributed across tens of kilometers in depth. The gradient of the MLD fingerprints the physical processes at play; a weak gradient indicates a thermal origin, while an abrupt discontinuity implicates a change in composition or partial melting. Furthermore, we map the strength, depth, and ratio of amplitudes of waves converted across the MLD and the Moho throughout the US. Because these receiver-function-based measurements only reveal relative velocity variations with depth, we combine them with frequency-dependent measurements of apparent incidence angles of P and S waves. Doing so allows us to

  18. A Pn Spreading Model Constrained with Observed Amplitudes in Asia

    DTIC Science & Technology

    2011-09-01

    ... a set of observed Pn amplitudes from the tectonically active regions of Asia to evaluate the performance of Y2007 and to develop new, observation-based ... tomographic inversions to map the lateral Pn attenuation variation.

  19. Geochemical record of high emperor penguin populations during the Little Ice Age at Amanda Bay, Antarctica.

    PubMed

    Huang, Tao; Yang, Lianjiao; Chu, Zhuding; Sun, Liguang; Yin, Xijie

    2016-09-15

    Emperor penguins (Aptenodytes forsteri) are sensitive to Antarctic climate change because they breed on the fast sea ice. Studies of the paleohistory of the emperor penguin are rare, due to the lack of archives on land. In this study, we obtained an emperor penguin ornithogenic sediment profile (PI) and performed geochronological, geochemical, and stable isotope analyses on the sediments and feather remains. Two radiocarbon dates of penguin feathers in PI indicate that emperor penguins colonized Amanda Bay as early as CE 1540. By using the bio-elements (P, Se, Hg, Zn and Cd) in sediments and the stable isotope values (δ15N and δ13C) in feathers, we inferred, respectively, the relative population size and dietary change of emperor penguins during the period CE 1540-2008. An increase in population size, with depleted N isotope ratios, for emperor penguins on N island at Amanda Bay during the Little Ice Age (CE 1540-1866) was observed, suggesting that cold climate affected the penguins' breeding habitat, prey availability, and thus their population and dietary composition.

  20. Constraining Galaxy Evolution Using Observed UV-Optical Spectra

    NASA Technical Reports Server (NTRS)

    Heap, Sally

    2007-01-01

    Our understanding of galaxy evolution depends on model spectra of stellar populations, and the models are only as good as the observed spectra and stellar parameters that go into them. We are therefore evaluating modern UV-optical model spectra using Hubble's Next Generation Spectral Library (NGSL) as the reference standard. The NGSL comprises intermediate-resolution (R is approximately 1000) STIS spectra of 378 stars having a wide range in metallicity and age. Unique features of the NGSL include its broad wavelength coverage (1,800-10,100 A) and high-S/N, absolute spectrophotometry. We will report on a systematic comparison of model and observed UV-blue spectra, describe where on the HR diagram significant differences occur, and comment on current approaches to correct the models for these differences.

  1. HEATING OF FLARE LOOPS WITH OBSERVATIONALLY CONSTRAINED HEATING FUNCTIONS

    SciTech Connect

    Qiu Jiong; Liu Wenjuan; Longcope, Dana W.

    2012-06-20

    We analyze high-cadence, high-resolution observations of a C3.2 flare obtained by AIA/SDO on 2010 August 1. The flare is a long-duration event with soft X-ray and EUV radiation lasting for over 4 hr. Analysis suggests that magnetic reconnection and formation of new loops continue for more than 2 hr. Furthermore, the UV 1600 Å observations show that each of the individual pixels at the feet of flare loops is brightened instantaneously with a timescale of a few minutes, and decays over a much longer timescale of more than 30 minutes. We use these spatially resolved UV light curves during the rise phase to construct empirical heating functions for individual flare loops, and model heating of coronal plasmas in these loops. The total coronal radiation of these flare loops is compared with soft X-ray and EUV radiation fluxes measured by GOES and AIA. This study presents a method to observationally infer heating functions in numerous flare loops that are formed and heated sequentially by reconnection throughout the flare, and provides a very useful constraint for coronal heating models.

  2. Using observations of distant quasars to constrain quantum gravity

    NASA Astrophysics Data System (ADS)

    Perlman, E. S.; Ng, Y. J.; Floyd, D. J. E.; Christiansen, W. A.

    2011-11-01

    Aims: The small-scale nature of spacetime can be tested with observations of distant quasars. We comment on a recent paper by Tamburini et al. (2011, A&A, 533, A71) which claims that Hubble Space Telescope (HST) observations of the most distant quasars place severe constraints on models of foamy spacetime. Methods: If space is foamy on the Planck scale, photons emitted from distant objects will accumulate uncertainties in distance and propagation directions thus affecting the expected angular size of a compact object as a function of redshift. We discuss the geometry of foamy spacetime, and the appropriate distance measure for calculating the expected angular broadening. We also address the mechanics of carrying out such a test. We draw upon our previously published work on this subject, which carried out similar tests as Tamburini et al. and also went considerably beyond their work in several respects. Results: When calculating the path taken by photons as they travel from a distant source to Earth, one must use the comoving distance rather than the luminosity distance. This then also becomes the appropriate distance to use when calculating the angular broadening expected in a distant source. The use of the wrong distance measure causes Tamburini et al. to overstate the constraints that can be placed on models of spacetime foam. In addition, we consider the impact of different ways of parametrizing and measuring the effects of spacetime foam. Given the variation of the shape of the point-spread function on the chip, as well as observation-specific factors, it is important to select carefully - and document - the comparison stars used as well as the methods used to compute the Strehl ratio.
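
    The quantities at issue can be summarized as follows; this is a common parametrization in the spacetime-foam literature and should be treated as indicative rather than as the paper's exact expressions. Over a path length ℓ, the accumulated distance fluctuation and the corresponding phase scrambling of light of wavelength λ are

    \[
    \delta\ell \sim \ell_P^{\,\alpha}\,\ell^{\,1-\alpha}, \qquad
    \Delta\phi \sim 2\pi\,\frac{\delta\ell}{\lambda},
    \]

    with α = 1/2 for a random-walk model and α = 2/3 for the holographic model. The Results above emphasize that ℓ must be taken as the comoving distance to the source, not the luminosity distance.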

  3. Observationally constrained estimates of carbonaceous aerosol radiative forcing.

    PubMed

    Chung, Chul E; Ramanathan, V; Decremer, Damien

    2012-07-17

    Carbonaceous aerosols (CA) emitted by fossil and biomass fuels consist of black carbon (BC), a strong absorber of solar radiation, and organic matter (OM). OM scatters as well as absorbs solar radiation. The absorbing component of OM, which is ignored in most climate models, is referred to as brown carbon (BrC). Model estimates of the global CA radiative forcing range from 0 to 0.7 Wm(-2), to be compared with the Intergovernmental Panel on Climate Change's estimate for the pre-Industrial to the present net radiative forcing of about 1.6 Wm(-2). This study provides a model-independent, observationally based estimate of the CA direct radiative forcing. Ground-based aerosol network data is integrated with field data and satellite-based aerosol observations to provide a decadal (2001 through 2009) global view of the CA optical properties and direct radiative forcing. The estimated global CA direct radiative effect is about 0.75 Wm(-2) (0.5 to 1.0). This study identifies the global importance of BrC, which is shown to contribute about 20% to 550-nm CA solar absorption globally. Because of the inclusion of BrC, the net effect of OM is close to zero and the CA forcing is nearly equal to that of BC. The CA direct radiative forcing is estimated to be about 0.65 (0.5 to about 0.8) Wm(-2), thus comparable to or exceeding that by methane. Caused in part by BrC absorption, CAs have a net warming effect even over open biomass-burning regions in Africa and the Amazon.

  4. Observationally constrained estimates of carbonaceous aerosol radiative forcing

    PubMed Central

    Chung, Chul E.; Ramanathan, V.; Decremer, Damien

    2012-01-01

    Carbonaceous aerosols (CA) emitted by fossil and biomass fuels consist of black carbon (BC), a strong absorber of solar radiation, and organic matter (OM). OM scatters as well as absorbs solar radiation. The absorbing component of OM, which is ignored in most climate models, is referred to as brown carbon (BrC). Model estimates of the global CA radiative forcing range from 0 to 0.7 Wm-2, to be compared with the Intergovernmental Panel on Climate Change’s estimate for the pre-Industrial to the present net radiative forcing of about 1.6 Wm-2. This study provides a model-independent, observationally based estimate of the CA direct radiative forcing. Ground-based aerosol network data is integrated with field data and satellite-based aerosol observations to provide a decadal (2001 through 2009) global view of the CA optical properties and direct radiative forcing. The estimated global CA direct radiative effect is about 0.75 Wm-2 (0.5 to 1.0). This study identifies the global importance of BrC, which is shown to contribute about 20% to 550-nm CA solar absorption globally. Because of the inclusion of BrC, the net effect of OM is close to zero and the CA forcing is nearly equal to that of BC. The CA direct radiative forcing is estimated to be about 0.65 (0.5 to about 0.8) Wm-2, thus comparable to or exceeding that by methane. Caused in part by BrC absorption, CAs have a net warming effect even over open biomass-burning regions in Africa and the Amazon. PMID:22753522

  5. Observationally-constrained estimates of global small-mode AOD

    NASA Astrophysics Data System (ADS)

    Lee, K.; Chung, C. E.

    2012-12-01

    Small aerosols are mostly anthropogenic, and an area average of the small-mode aerosol optical depth (sAOD) is a powerful and independent measure of anthropogenic aerosol emission. We estimate AOD and sAOD globally on a monthly time scale from 2001 to 2010 by integrating satellite-based (MODIS and MISR) and ground-based (AERONET) observations. For sAOD, three integration methods were developed to maximize the influence of AERONET data and ensure consistency between MODIS, MISR and AERONET sAOD data. We evaluated each method by applying the technique with fewer AERONET data and comparing its output with the unused AERONET data. The best performing method gives an overall error of 13 ± 2%, compared with an overall error of 62% in simply using MISR sAOD, and this method takes advantage of an empirical relationship between the Ångström exponent (AE) and fine mode fraction (FMF). This relationship is obtained by analyzing AERONET data. Using our integrated data, we find that the global 2001-2010 average of 500 nm AOD and sAOD is 0.17 and 0.094, respectively. sAOD over eastern China is several times as large as the global average. The linear trend from 2001 to 2010 is found to be slightly negative in global AOD or global sAOD. In India and eastern China combined, however, sAOD increased by more than 4% against a backdrop of decreasing AOD and large-mode AOD. In contrast to India and China, the west (Western Europe and US/Canada combined) is found to have a sAOD reduction of -20%. These results quantify the overall anthropogenic aerosol emission reduction in the west, and the rapidly deteriorating conditions in Asia. Moreover, our results in the west are consistent with the so-called surface brightening phenomenon of recent decades.
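
    The Ångström exponent that anchors the best-performing integration method has the standard two-wavelength definition sketched below; the empirical AE-to-fine-mode-fraction mapping fitted to AERONET data in the study is not reproduced here.

      import numpy as np

      def angstrom_exponent(aod_1, aod_2, wl_1=440.0, wl_2=870.0):
          """Angstrom exponent from AOD measured at two wavelengths (nm)."""
          return -np.log(aod_1 / aod_2) / np.log(wl_1 / wl_2)

      # Fine-mode-dominated aerosol: AOD falls steeply with wavelength, so AE is large
      print(round(angstrom_exponent(0.40, 0.10), 2))  # ~2.03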

  6. Global bioenergy capacity as constrained by observed biospheric productivity rates

    NASA Astrophysics Data System (ADS)

    Smith, W. K.; Zhao, M.; Running, S. W.

    2011-12-01

    Virtually all global energy forecasts include an expectation that bioenergy will be a substantial energy source for the future. Multiple current estimates of global bioenergy potential (GBP) range from 500-1,500 EJ yr⁻¹, or 100-300% of 2009 global primary energy consumption (GPEC09), suggesting bioenergy could conceivably replace fossil fuels entirely. However, these estimates are based on extrapolation of plot-level production rates which largely neglect complex global climatic and land-use constraints. We estimated GBP using satellite-derived, observed global primary productivity data from 2000-2006, which integrates global climate data and detects seasonal vegetation dynamics. Land-use constraints were then applied to account for current crop and forestry harvest requirements, human-controlled pasturelands, remote regions, and nature conservation areas. We show GBP is limited to 52-248 EJ yr⁻¹, or 10-49% of GPEC09, a range lower than many current GBP estimates by a factor of four. Even attaining the low end of this range requires utilization of all harvest residues over 31 million km² (Mkm²), while the high end requires additional harvest over 41 Mkm², an area roughly three times the current global cropland extent. Although exploitation of pasture and remote land could significantly contribute to GBP, the availability of these land areas remains controversial due to critical concerns regarding indirect land-use change and carbon debt. Future energy policy is of unparalleled importance to humanity, and our results are critical in estimating quantitative limitations on the overall potential for global bioenergy production.

  7. Measurements of atmospheric muons using AMANDA with emphasis on the prompt component

    NASA Astrophysics Data System (ADS)

    Ganugapati, Raghunath

    The main aim of the AMANDA neutrino telescope is to detect diffuse extraterrestrial neutrinos. While atmospheric muons can be easily filtered out, atmospheric neutrinos are an irreducible background for diffuse extraterrestrial neutrino fluxes. At GeV energies the atmospheric neutrino fluxes are dominated by conventional neutrinos. However, with increasing energy (> 100 TeV), the harder "prompt" neutrinos that arise through semi-leptonic decays of hadrons containing heavy quarks, most notably charm, become dominant. Estimates of the magnitude of the prompt atmospheric fluxes differ by almost two orders of magnitude. The main principle in this thesis is that it is possible to overcome the theoretical uncertainty in the magnitude of the prompt neutrino fluxes by deriving their intensity from a measurement of the down-going prompt muon flux. An attempt to constrain this flux using this principle was made, and an analysis of the down-going muon data was performed to constrain the RPQM model of prompt muons by a factor of 3.67 under a strict set of simplifying assumptions.

  8. Healing in forgiveness: A discussion with Amanda Lindhout and Katherine Porterfield, PhD

    PubMed Central

    Porterfield, Katherine A.; Lindhout, Amanda

    2014-01-01

    In 2008, Amanda Lindhout was kidnapped by a group of extremists while traveling as a freelance journalist in Somalia. She and a colleague were held captive for more than 15 months, released only after their families paid a ransom. In this interview, Amanda discusses her experiences in captivity and her ongoing recovery from this experience with Katherine Porterfield, Ph.D., a clinical psychologist at the Bellevue/NYU Program for Survivors of Torture. Specifically, Amanda describes the childhood experiences that shaped her thirst for travel and knowledge, the conditions of her kidnapping, and her experiences after she was released from captivity. Amanda outlines the techniques that she employed to survive in the early aftermath of her capture, and how these coping strategies changed as her captivity lengthened. She reflects on her transition home, her recovery process, and her experiences with mental health professionals. Amanda's insights provide an example of resilience in the face of severe, extended trauma to researchers, clinicians, and survivors alike. The article ends with a discussion of the ways that Amanda's coping strategies and recovery process are consistent with existing resilience literature. Amanda's experiences as a hostage, her astonishing struggle for physical and mental survival, and her life after being freed are documented in her book, co-authored with Sara Corbett, A House in the Sky. PMID:25317259

  9. Constraining parameters of white-dwarf binaries using gravitational-wave and electromagnetic observations

    SciTech Connect

    Shah, Sweta; Nelemans, Gijs

    2014-08-01

    The space-based gravitational wave (GW) detector, the evolved Laser Interferometer Space Antenna (eLISA), is expected to observe millions of compact Galactic binaries that populate our Milky Way. GW measurements obtained from the eLISA detector are in many cases complementary to possible electromagnetic (EM) data. In our previous papers, we have shown that the EM data can significantly enhance our knowledge of the astrophysically relevant GW parameters of Galactic binaries, such as the amplitude and inclination. This is possible due to the presence of some strong correlations between GW parameters that are measurable by both EM and GW observations, for example, the inclination and sky position. In this paper, we quantify the constraints on the physical parameters of the white-dwarf binaries, i.e., the individual masses, chirp mass, and the distance to the source, that can be obtained by combining the full set of EM measurements such as the inclination, radial velocities, distances, and/or individual masses with the GW measurements. We find the following 2σ fractional uncertainties in the parameters of interest. The EM observations of distance constrain the chirp mass to ∼15%-25%, whereas EM data of a single-lined spectroscopic binary constrain the secondary mass and the distance with factors of two to ∼40%. The single-line spectroscopic data complemented with distance constrain the secondary mass to ∼25%-30%. Finally, EM data on a double-lined spectroscopic binary constrain the distance to ∼30%. All of these constraints depend on the inclination and the signal strength of the binary systems. We also find that the EM information on distance and/or the radial velocity is the most useful in improving the estimate of the secondary mass, inclination, and/or distance.
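
    The complementarity described above follows from how the leading-order GW observables depend on the physical parameters. For component masses m1 and m2 at distance d,

    \[
    \mathcal{M} = \frac{(m_1 m_2)^{3/5}}{(m_1 + m_2)^{1/5}}, \qquad
    h \propto \frac{\mathcal{M}^{5/3} f^{2/3}}{d}\,F(\iota),
    \]

    so the GW amplitude alone constrains only a combination of chirp mass, distance, and inclination ι. An EM measurement of any one of these (a distance, radial velocities, or spectroscopic masses) breaks the degeneracy and tightens the remaining parameters, which is the effect quantified in the abstract.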

  10. An Observationally Constrained Evaluation of the Oxidative Capacity in the Tropical Western Pacific Troposphere

    NASA Technical Reports Server (NTRS)

    Nicely, Julie M.; Anderson, Daniel C.; Canty, Timothy P.; Salawitch, Ross J.; Wolfe, Glenn M.; Apel, Eric C.; Arnold, Steve R.; Atlas, Elliot L.; Blake, Nicola J.; Bresch, James F.; Campos, Teresa L.; Dickerson, Russell R.; Duncan, Bryan; Emmons, Louisa K.; Evans, Mathew J.; Fernandez, Rafael P.; Flemming, Johannes; Hall, Samuel R.; Hanisco, Thomas F.; Honomichl, Shawn B.; Hornbrook, Rebecca S.; Huijnen, Vincent; Kaser, Lisa; Kinnison, Douglas E.; Lamarque, Jean-Francois; Mao, Jingqui; Monks, Sarah A.; Montzka, Denise D.; Pan, Laura L.; Riemer, Daniel D.; Saiz-Lopez, Alfonso; Steenrod, Stephen D.; Stell, Meghan H.; Tilmes, Simone; Turquety, Solene; Ullmann, Kirk; Weinheimer, Andrew J.

    2016-01-01

    Hydroxyl radical (OH) is the main daytime oxidant in the troposphere and determines the atmospheric lifetimes of many compounds. We use aircraft measurements of O3, H2O, NO, and other species from the Convective Transport of Active Species in the Tropics (CONTRAST) field campaign, which occurred in the tropical western Pacific (TWP) during January-February 2014, to constrain a photochemical box model and estimate concentrations of OH throughout the troposphere. We find that tropospheric column OH (OHCOL) inferred from CONTRAST observations is 12 to 40% higher than found in chemical transport models (CTMs), including CAM-chem-SD run with 2014 meteorology as well as eight models that participated in POLMIP (2008 meteorology). Part of this discrepancy is due to a clear-sky sampling bias that affects CONTRAST observations; accounting for this bias and also for a small difference in chemical mechanism results in our empirically based value of OHCOL being 0 to 20% larger than found within global models. While these global models simulate observed O3 reasonably well, they underestimate NOx (NO +NO2) by a factor of 2, resulting in OHCOL approx.30% lower than box model simulations constrained by observed NO. Underestimations by CTMs of observed CH3CHO throughout the troposphere and of HCHO in the upper troposphere further contribute to differences between our constrained estimates of OH and those calculated by CTMs. Finally, our calculations do not support the prior suggestion of the existence of a tropospheric OH minimum in the TWP, because during January-February 2014 observed levels of O3 and NO were considerably larger than previously reported values in the TWP.

  11. An observationally constrained evaluation of the oxidative capacity in the tropical western Pacific troposphere

    NASA Astrophysics Data System (ADS)

    Nicely, Julie M.; Anderson, Daniel C.; Canty, Timothy P.; Salawitch, Ross J.; Wolfe, Glenn M.; Apel, Eric C.; Arnold, Steve R.; Atlas, Elliot L.; Blake, Nicola J.; Bresch, James F.; Campos, Teresa L.; Dickerson, Russell R.; Duncan, Bryan; Emmons, Louisa K.; Evans, Mathew J.; Fernandez, Rafael P.; Flemming, Johannes; Hall, Samuel R.; Hanisco, Thomas F.; Honomichl, Shawn B.; Hornbrook, Rebecca S.; Huijnen, Vincent; Kaser, Lisa; Kinnison, Douglas E.; Lamarque, Jean-Francois; Mao, Jingqiu; Monks, Sarah A.; Montzka, Denise D.; Pan, Laura L.; Riemer, Daniel D.; Saiz-Lopez, Alfonso; Steenrod, Stephen D.; Stell, Meghan H.; Tilmes, Simone; Turquety, Solene; Ullmann, Kirk; Weinheimer, Andrew J.

    2016-06-01

    Hydroxyl radical (OH) is the main daytime oxidant in the troposphere and determines the atmospheric lifetimes of many compounds. We use aircraft measurements of O3, H2O, NO, and other species from the Convective Transport of Active Species in the Tropics (CONTRAST) field campaign, which occurred in the tropical western Pacific (TWP) during January-February 2014, to constrain a photochemical box model and estimate concentrations of OH throughout the troposphere. We find that tropospheric column OH (OHCOL) inferred from CONTRAST observations is 12 to 40% higher than found in chemical transport models (CTMs), including CAM-chem-SD run with 2014 meteorology as well as eight models that participated in POLMIP (2008 meteorology). Part of this discrepancy is due to a clear-sky sampling bias that affects CONTRAST observations; accounting for this bias and also for a small difference in chemical mechanism results in our empirically based value of OHCOL being 0 to 20% larger than found within global models. While these global models simulate observed O3 reasonably well, they underestimate NOx (NO + NO2) by a factor of 2, resulting in OHCOL ~30% lower than box model simulations constrained by observed NO. Underestimations by CTMs of observed CH3CHO throughout the troposphere and of HCHO in the upper troposphere further contribute to differences between our constrained estimates of OH and those calculated by CTMs. Finally, our calculations do not support the prior suggestion of the existence of a tropospheric OH minimum in the TWP, because during January-February 2014 observed levels of O3 and NO were considerably larger than previously reported values in the TWP.

  12. CONSTRAINING THE DARK ENERGY EQUATION OF STATE USING LISA OBSERVATIONS OF SPINNING MASSIVE BLACK HOLE BINARIES

    SciTech Connect

    Petiteau, Antoine; Babak, Stanislav; Sesana, Alberto

    2011-05-10

    Gravitational wave (GW) signals from coalescing massive black hole (MBH) binaries could be used as standard sirens to measure cosmological parameters. The future space-based GW observatory Laser Interferometer Space Antenna (LISA) will detect up to a hundred of those events, providing very accurate measurements of their luminosity distances. To constrain the cosmological parameters, we also need to measure the redshift of the galaxy (or cluster of galaxies) hosting the merger. This requires the identification of a distinctive electromagnetic event associated with the binary coalescence. However, putative electromagnetic signatures may be too weak to be observed. Instead, we study here the possibility of constraining the cosmological parameters by enforcing statistical consistency between all the possible hosts detected within the measurement error box of a few dozen of low-redshift (z < 3) events. We construct MBH populations using merger tree realizations of the dark matter hierarchy in a {Lambda}CDM universe, and we use data from the Millennium simulation to model the galaxy distribution in the LISA error box. We show that, assuming that all the other cosmological parameters are known, the parameter w describing the dark energy equation of state can be constrained to a 4%-8% level (2{sigma} error), competitive with current uncertainties obtained by type Ia supernovae measurements, providing an independent test of our cosmological model.
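
    With all other cosmological parameters held fixed, as assumed above, each GW event supplies a luminosity distance and (via its statistically identified host) a redshift, and w enters through the flat-wCDM distance-redshift relation (constant w assumed here for illustration):

    \[
    d_L(z) = (1+z)\,\frac{c}{H_0}\int_0^{z}\frac{dz'}{\sqrt{\Omega_m (1+z')^3 + (1-\Omega_m)(1+z')^{3(1+w)}}} .
    \]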

  13. Constraining the Dark Energy Equation of State Using LISA Observations of Spinning Massive Black Hole Binaries

    NASA Astrophysics Data System (ADS)

    Petiteau, Antoine; Babak, Stanislav; Sesana, Alberto

    2011-05-01

    Gravitational wave (GW) signals from coalescing massive black hole (MBH) binaries could be used as standard sirens to measure cosmological parameters. The future space-based GW observatory Laser Interferometer Space Antenna (LISA) will detect up to a hundred of those events, providing very accurate measurements of their luminosity distances. To constrain the cosmological parameters, we also need to measure the redshift of the galaxy (or cluster of galaxies) hosting the merger. This requires the identification of a distinctive electromagnetic event associated with the binary coalescence. However, putative electromagnetic signatures may be too weak to be observed. Instead, we study here the possibility of constraining the cosmological parameters by enforcing statistical consistency between all the possible hosts detected within the measurement error box of a few dozen of low-redshift (z < 3) events. We construct MBH populations using merger tree realizations of the dark matter hierarchy in a ΛCDM universe, and we use data from the Millennium simulation to model the galaxy distribution in the LISA error box. We show that, assuming that all the other cosmological parameters are known, the parameter w describing the dark energy equation of state can be constrained to a 4%-8% level (2σ error), competitive with current uncertainties obtained by type Ia supernovae measurements, providing an independent test of our cosmological model.

  14. Neutrino Data from IceCube and its Predecessor at the South Pole, the Antarctic Muon and Neutrino Detector Array (AMANDA)

    DOE Data Explorer

    Abbasi, R.

    IceCube is a neutrino observatory for astrophysics with parts buried below the surface of the ice at the South Pole and an air-shower detector array exposed above. The international group of sponsors, led by the National Science Foundation (NSF), that designed and implemented the experiment intends for IceCube to operate and provide data for 20 years. IceCube records the interactions produced by astrophysical neutrinos with energies above 100 GeV, observing the Cherenkov radiation from charged particles produced in neutrino interactions. Its goal is to discover the sources of high-energy cosmic rays. These sources may be active galactic nuclei (AGNs) or massive, collapsed stars where black holes have formed. [Taken from http://www.icecube.wisc.edu/] Data from AMANDA, IceCube's predecessor detector and experiment, are also available at this website. AMANDA pioneered neutrino detection in ice. Over a period of years in the 1990s, detector strings were buried and activated, and by 2000 AMANDA was successfully recording an average of 1,000 neutrino events per year. This site also makes available many images and videos from the two experiments.

  15. Constraining storm-scale forecasts of deep convective initiation with surface weather observations

    NASA Astrophysics Data System (ADS)

    Madaus, Luke

    Successfully forecasting when and where individual convective storms will form remains an elusive goal for short-term numerical weather prediction. In this dissertation, the convective initiation (CI) challenge is considered as a problem of insufficiently resolved initial conditions, and dense surface weather observations are explored as a possible solution. To better quantify convective-scale surface variability in numerical simulations of discrete convective initiation, idealized ensemble simulations of a variety of environments where CI occurs in response to boundary-layer processes are examined. Coherent features 1-2 hours prior to CI are found in all surface fields examined. While some features were broadly expected, such as positive temperature anomalies and convergent winds, negative temperature anomalies due to cloud shadowing are the largest surface anomaly seen prior to CI. Based on these simulations, several hypotheses about the required characteristics of a surface observing network to constrain CI forecasts are developed. Principally, these suggest that observation spacings of less than 4-5 km would be required, based on correlation length scales. Furthermore, it is anticipated that 2-m temperature and 10-m wind observations would likely be more relevant for effectively constraining variability than surface pressure or 2-m moisture observations, based on the magnitudes of observed anomalies relative to observation error. These hypotheses are tested with a series of observing system simulation experiments (OSSEs) using a single CI-capable environment. The OSSE results largely confirm the hypotheses, and with 4-km and particularly 1-km surface observation spacing, skillful forecasts of CI are possible, but only within two hours of CI time. Several facets of convective-scale assimilation, including the need for properly calibrated localization and problems arising from non-Gaussian ensemble estimates of the cloud field, are discussed. Finally, the characteristics

  16. MULTI-WAVELENGTH OBSERVATIONS OF SOLAR FLARES WITH A CONSTRAINED PEAK X-RAY FLUX

    SciTech Connect

    Bowen, Trevor A.; Testa, Paola; Reeves, Katharine K.

    2013-06-20

    We present an analysis of soft X-ray (SXR) and extreme-ultraviolet (EUV) observations of solar flares of approximately C8 Geostationary Operational Environmental Satellite (GOES) class. Our constraint on peak GOES SXR flux allows for the investigation of correlations between various flare parameters. We show that the duration of the decay phase of a flare is proportional to the duration of its rise phase. Additionally, we show significant correlations between the radiation emitted in the flare rise and decay phases. These results suggest that the total radiated energy of a given flare is proportional to the energy radiated during the rise phase alone. This partitioning of radiated energy between the rise and decay phases is observed in both SXR and EUV wavelengths. Though observations from the EUV Variability Experiment show significant variation in the behavior of individual EUV spectral lines during different C8 events, this work suggests that broadband EUV emission is well constrained. Furthermore, GOES and Atmospheric Imaging Assembly data allow us to determine several thermal parameters (e.g., temperature, volume, density, and emission measure) for the flares within our sample. Analysis of these parameters demonstrates that, within this constrained GOES class, the longer duration solar flares are cooler events with larger volumes capable of emitting vast amounts of radiation. The shortest C8 flares are typically the hottest events, are smaller in physical size, and have lower associated total energies. These relationships are directly comparable with several scaling laws and flare loop models.

  17. Future sea level rise constrained by observations and long-term commitment

    PubMed Central

    Mengel, Matthias; Levermann, Anders; Frieler, Katja; Robinson, Alexander; Marzeion, Ben; Winkelmann, Ricarda

    2016-01-01

    Sea level has been steadily rising over the past century, predominantly due to anthropogenic climate change. The rate of sea level rise will keep increasing with continued global warming, and, even if temperatures are stabilized through the phasing out of greenhouse gas emissions, sea level is still expected to rise for centuries. This will affect coastal areas worldwide, and robust projections are needed to assess mitigation options and guide adaptation measures. Here we combine the equilibrium response of the main sea level rise contributions with their last century's observed contribution to constrain projections of future sea level rise. Our model is calibrated to a set of observations for each contribution, and the observational and climate uncertainties are combined to produce uncertainty ranges for 21st century sea level rise. We project anthropogenic sea level rise of 28–56 cm, 37–77 cm, and 57–131 cm in 2100 for the greenhouse gas concentration scenarios RCP26, RCP45, and RCP85, respectively. Our uncertainty ranges for total sea level rise overlap with the process-based estimates of the Intergovernmental Panel on Climate Change. The “constrained extrapolation” approach generalizes earlier global semiempirical models and may therefore lead to a better understanding of the discrepancies with process-based projections. PMID:26903648
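    The logic of the "constrained extrapolation" can be caricatured as follows: calibrate a response coefficient for each contribution against its observed last-century rise, then integrate the calibrated response forward under a warming pathway, propagating the calibration uncertainty by Monte Carlo. The sketch below does exactly that under the strong simplification dS_i/dt = a_i T(t); the published model uses contribution-specific response functions, and all numbers here are invented.

```python
# Toy "constrained extrapolation" of sea level rise (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1900, 2101)
T_past = 0.008 * (years - 1900)                                # degC above preindustrial (toy)
T_future = np.where(years > 2000, 0.8 + 0.02 * (years - 2000), T_past)  # toy warming pathway

# observed 1900-2000 contributions (cm) and 1-sigma uncertainties (toy values)
obs = {"glaciers": (7.0, 1.5), "thermal expansion": (5.0, 1.5), "ice sheets": (2.0, 1.5)}

def project(n_samples=5000):
    past = years <= 2000
    future = years >= 2000
    totals = np.zeros(n_samples)
    for name, (ds_obs, sigma) in obs.items():
        # calibrate a_i so the simulated 1900-2000 contribution matches a sampled observation
        a = rng.normal(ds_obs, sigma, n_samples) / np.trapz(T_past[past], years[past])
        totals += a * np.trapz(T_future[future], years[future])
    return totals

lo, hi = np.percentile(project(), [5, 95])
print(f"toy 2000-2100 sea level rise: {lo:.0f}-{hi:.0f} cm (90% range)")
```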

  18. Constraining Aerosol Distributions in Asia by Integrating Models with Multi-sensor Observations (Invited)

    NASA Astrophysics Data System (ADS)

    Carmichael, G. R.; Kulkarni, S.; Chung, C. E.; Ramanathan, V.

    2010-12-01

    Aerosols are ubiquitous components of the atmosphere that are linked to various adverse impacts including increased health risks, visibility degradation, alteration of cloud properties, and changing climate patterns on local, regional, and global scales. The past decade has witnessed an alarming growth in Asian emissions, causing great concern for global air quality. Chemical transport models (CTMs) provide a means to link the emissions with observations and greatly assist in policy-making decisions. The underlying uncertainties associated with emissions, meteorology, and various chemical processes in CTMs can be greatly reduced by constraining them with observations. In this regard, satellite-borne observations provide unprecedented data due to their continuous global coverage. In particular, the AOD measurements available from the MODIS and MISR instruments onboard the NASA Terra satellite are being increasingly used both for CTM evaluation and as input to aerosol data assimilation studies. In this study, we present a regional scale modeling analysis over Asia constrained with surface AERONET and MODIS AODs via data assimilation using optimal interpolation. The MODIS AOD retrieved by different methods, including the Deep Blue algorithm over land, was used in this study. The climatic effects of absorbing aerosols were studied by testing constraints provided by AERONET, MISR and OMI absorption AOD measurements.

  19. Future sea level rise constrained by observations and long-term commitment.

    PubMed

    Mengel, Matthias; Levermann, Anders; Frieler, Katja; Robinson, Alexander; Marzeion, Ben; Winkelmann, Ricarda

    2016-03-08

    Sea level has been steadily rising over the past century, predominantly due to anthropogenic climate change. The rate of sea level rise will keep increasing with continued global warming, and, even if temperatures are stabilized through the phasing out of greenhouse gas emissions, sea level is still expected to rise for centuries. This will affect coastal areas worldwide, and robust projections are needed to assess mitigation options and guide adaptation measures. Here we combine the equilibrium response of the main sea level rise contributions with their last century's observed contribution to constrain projections of future sea level rise. Our model is calibrated to a set of observations for each contribution, and the observational and climate uncertainties are combined to produce uncertainty ranges for 21st century sea level rise. We project anthropogenic sea level rise of 28-56 cm, 37-77 cm, and 57-131 cm in 2100 for the greenhouse gas concentration scenarios RCP26, RCP45, and RCP85, respectively. Our uncertainty ranges for total sea level rise overlap with the process-based estimates of the Intergovernmental Panel on Climate Change. The "constrained extrapolation" approach generalizes earlier global semiempirical models and may therefore lead to a better understanding of the discrepancies with process-based projections.

  20. Constraining mantle viscosity structure for a thermochemical mantle using the geoid observation

    NASA Astrophysics Data System (ADS)

    Liu, Xi; Zhong, Shijie

    2016-03-01

    Long-wavelength geoid anomalies provide important constraints on mantle dynamics and viscosity structure. Previous studies have successfully reproduced the observed geoid using seismically inferred buoyancy in whole-mantle convection models. However, it has been suggested that the large low shear velocity provinces (LLSVPs) underneath the Pacific and Africa in the lower mantle are chemically distinct and likely denser than the ambient mantle. We formulate instantaneous flow models based on seismic tomographic models to compute the geoid and constrain mantle viscosity, assuming both thermochemical and whole-mantle convection. Geoid modeling for the thermochemical model is performed by considering the compensation effect of dense thermochemical piles and removing the buoyancy structure of the compensation layer in the lower mantle. Thermochemical models reproduce the observed geoid well, thus reconciling the geoid with the interpretation of LLSVPs as dense thermochemical piles. The viscosity structure inverted for thermochemical models is nearly identical to that of whole-mantle models. In the preferred model, the lower-mantle viscosity is ˜10 times higher than the upper-mantle viscosity, which in turn is ˜10 times higher than the transition-zone viscosity. The weak transition zone is consistent with the proposed high water content there. The geoid in thermochemical mantle models is sensitive to seismic structure at midmantle depths, suggesting a need to improve seismic imaging resolution there. The geoid modeling constrains the vertical extent of dense and stable chemical piles to be within ˜500 km above the CMB. Our results have implications for mineral physics, seismic tomographic studies, and mantle convection modeling.

  1. EQUATION OF STATE AND NEUTRON STAR PROPERTIES CONSTRAINED BY NUCLEAR PHYSICS AND OBSERVATION

    SciTech Connect

    Hebeler, K.; Lattimer, J. M.; Pethick, C. J.; Schwenk, A.

    2013-08-10

    Microscopic calculations of neutron matter based on nuclear interactions derived from chiral effective field theory, combined with the recent observation of a 1.97 ± 0.04 M⊙ neutron star, constrain the equation of state of neutron-rich matter at sub- and supranuclear densities. We discuss in detail the allowed equations of state and the impact of our results on the structure of neutron stars, the crust-core transition density, and the nuclear symmetry energy. In particular, we show that the predicted range for neutron star radii is robust. For use in astrophysical simulations, we provide detailed numerical tables for a representative set of equations of state consistent with these constraints.

  2. Global fine-mode aerosol radiative effect, as constrained by comprehensive observations

    NASA Astrophysics Data System (ADS)

    Chung, Chul E.; Chu, Jung-Eun; Lee, Yunha; van Noije, Twan; Jeoung, Hwayoung; Ha, Kyung-Ja; Marks, Marguerite

    2016-07-01

    Aerosols directly affect the radiative balance of the Earth through the absorption and scattering of solar radiation. Although the contributions of absorption (heating) and scattering (cooling) of sunlight have proved difficult to quantify, the consensus is that anthropogenic aerosols cool the climate, partially offsetting the warming by rising greenhouse gas concentrations. Recent estimates of global direct anthropogenic aerosol radiative forcing (i.e., global radiative forcing due to aerosol-radiation interactions) are -0.35 ± 0.5 W m⁻², and these estimates depend heavily on aerosol simulation. Here, we integrate a comprehensive suite of satellite and ground-based observations to constrain total aerosol optical depth (AOD), its fine-mode fraction, the vertical distribution of aerosols and clouds, and the collocation of clouds and overlying aerosols. We find that the direct fine-mode aerosol radiative effect is -0.46 W m⁻² (-0.54 to -0.39 W m⁻²). Fine-mode aerosols include sea salt and dust aerosols, and we find that these natural aerosols result in a very large cooling (-0.44 to -0.26 W m⁻²) when constrained by observations. When the contribution of these natural aerosols is subtracted from the fine-mode radiative effect, the net becomes -0.11 (-0.28 to +0.05) W m⁻². This net arises from total (natural + anthropogenic) carbonaceous, sulfate and nitrate aerosols, which suggests that global direct anthropogenic aerosol radiative forcing is less negative than -0.35 W m⁻².
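    At central values the quoted net is simple bookkeeping; reading roughly -0.35 W m⁻² as the midpoint of the natural fine-mode range given above,

    $$ \underbrace{-0.46\ \mathrm{W\,m^{-2}}}_{\text{fine-mode total}} \;-\; \underbrace{(-0.35\ \mathrm{W\,m^{-2}})}_{\text{natural fine-mode}} \;\approx\; -0.11\ \mathrm{W\,m^{-2}}, $$

    with the quoted -0.28 to +0.05 W m⁻² spread coming from the uncertainty ranges of the two terms.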

  3. Observational techniques for constraining hydraulic and hydrologic models for use in catchment scale flood impact assessment

    NASA Astrophysics Data System (ADS)

    Owen, Gareth; Wilkinson, Mark; Nicholson, Alex; Quinn, Paul; O'Donnell, Greg

    2015-04-01

    There is an increase in the use of Natural Flood Management (NFM) schemes to tackle excessive runoff in rural catchments, but direct evidence of their functioning during extreme events is often lacking. With the availability of low cost sensors, a dense nested monitoring network can be established to provide near continuous optical and physical observations of hydrological processes. This paper will discuss findings for a number of catchments in the North of England where land use management and NFM have been implemented for flood risk reduction, and show how these observations have been used to inform both a hydraulic and a rainfall-runoff model. Observations are of fundamental importance for understanding how such measures function, and collecting them is becoming increasingly viable and affordable. Open source electronic platforms such as Arduino and Raspberry Pi are being used with cheap sensors to perform these tasks. For example, a level gauge has been developed for approximately €110, and cameras capable of capturing still or moving pictures are available for approximately €120; these are being used to better understand the behaviour of NFM features such as ponds and woody debris. There is potential for networks of these instruments to be configured and data collected through Wi-Fi or other wireless networks. It is now possible to expand informative networks of data that can constrain models. The functioning of small scale runoff attenuation features, such as offline ponds, has been demonstrated at the local scale. Specifically, through the measurement of both instream and in-pond water levels, it has been possible to calculate the impact of storing/attenuating flood flows on the adjacent river flow. This information has been encapsulated in a hydraulic model that allows the extrapolation of impacts to the larger catchment scale, contributing to understanding of the scalability of such features. Using a dense network of level gauges located along the main

  4. Constraining the Epoch of Reionization from the Observed Properties of the High-z Universe

    NASA Astrophysics Data System (ADS)

    Salvador-Solé, Eduard; Manrique, Alberto; Guzman, Rafael; Rodríguez Espinosa, José Miguel; Gallego, Jesús; Herrero, Artemio; Mas-Hesse, J. Miguel; Marín Franch, Antonio

    2017-01-01

    We combine observational data on a dozen independent cosmic properties at high-z with the information on reionization drawn from the spectra of distant luminous sources and the cosmic microwave background (CMB) to constrain the interconnected evolution of galaxies and the intergalactic medium since the dark ages. The only acceptable solutions are concentrated in two narrow sets. In one of them reionization proceeds in two phases: a first one driven by Population III stars, completed at z ∼ 10, and, after a short recombination period, a second one driven by normal galaxies, completed at z ∼ 6. In the other set both kinds of sources work in parallel until full reionization at z ∼ 6. The best solution with double reionization gives excellent fits to all the observed cosmic histories, but the CMB optical depth is 3σ larger than the recent estimate from the Planck data. Alternatively, the best solution with single reionization gives less good fits to the observed star formation rate density and cold gas mass density histories, but the CMB optical depth is consistent with that estimate. We make several predictions, testable with future observations, that should discriminate between the two reionization scenarios. As a byproduct, our models provide a natural explanation for some characteristic features of the cosmic properties at high-z, as well as for the origin of globular clusters.

  5. MODIS Aerosol Observations used to Constrain Dust Distributions and Lifecycle in the NASA GEOS-5 Model

    NASA Technical Reports Server (NTRS)

    Colarco, P.; Nowottnick, E.; daSilva, A.

    2007-01-01

    Approximately 240 Tg of mineral dust aerosol are transported annually from Saharan Africa to the Atlantic Ocean. Dust affects the Earth radiation budget, and plays direct (through scattering and absorption of radiation) and indirect (through modification of cloud properties and environment) roles in climate. Deposition of dust to the surface provides an important nutrient source to terrestrial and oceanic ecosystems. Dust is additionally a contributor to adverse air quality. Among the tools for understanding the lifecycle and impacts of mineral dust aerosols are numerical models. Important constraints on these models come from quantitative satellite observations, like those from the space-based Moderate Resolution Imaging Spectroradiometer (MODIS). In particular, Kaufman et al. [2005] used MODIS aerosol observations to infer transport and deposition fluxes of Saharan dust over the Atlantic, Caribbean, and Amazonian basins. Those observations are used here to constrain the transport of dust and its interannual variability simulated in the NASA GEOS-5 general circulation model and data assimilation system. Significant uncertainty exists in the MODIS-derived fluxes, however, due to uncertainty in the wind fields provided by meteorological analyses in this region. That same uncertainty in the wind fields is manifest in our GEOS-5 simulations of dust distributions. Here we use MODIS observations to investigate the seasonality and location of the Saharan dust plume and explore through sensitivity analysis of our model the meteorological controls on the dust distribution, including dust direct radiative effects and sub-gridscale source and sink processes.

  6. Constraining future terrestrial carbon cycle projections using observation-based water and carbon flux estimates.

    PubMed

    Mystakidis, Stefanos; Davin, Edouard L; Gruber, Nicolas; Seneviratne, Sonia I

    2016-06-01

    The terrestrial biosphere is currently acting as a sink for about a third of the total anthropogenic CO2 emissions. However, the future fate of this sink in the coming decades is very uncertain, as current Earth system models (ESMs) simulate diverging responses of the terrestrial carbon cycle to upcoming climate change. Here, we use observation-based constraints of water and carbon fluxes to reduce uncertainties in the projected terrestrial carbon cycle response derived from simulations of ESMs conducted as part of the 5th phase of the Coupled Model Intercomparison Project (CMIP5). We find in the ESMs a clear linear relationship between present-day evapotranspiration (ET) and gross primary productivity (GPP), as well as between these present-day fluxes and projected changes in GPP, thus providing an emergent constraint on projected GPP. Constraining the ESMs based on their ability to simulate present-day ET and GPP leads to a substantial decrease in the projected GPP and to a ca. 50% reduction in the associated model spread in GPP by the end of the century. Given the strong correlation between projected changes in GPP and in net biome productivity (NBP) in the ESMs, applying the constraints reduces the model spread in the projected land sink by more than 30% by 2100. Moreover, the projected decline in the land sink is at least doubled in the constrained ensembles, and the probability that the terrestrial biosphere is turned into a net carbon source by the end of the century is strongly increased. This indicates that the decline in the future land carbon uptake might be stronger than previously thought, which would have important implications for the rate of increase in the atmospheric CO2 concentration and for future climate change.
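    A minimal sketch of how such an emergent constraint is applied in practice is given below; the ensemble values and the observational estimate are invented, and the actual analysis constrains on both ET and GPP rather than GPP alone.

```python
# Sketch of an "emergent constraint": across an ensemble of ESMs the projected
# change in GPP correlates with the present-day GPP each model simulates, so an
# observation-based estimate of present-day GPP narrows the projection.
import numpy as np

rng = np.random.default_rng(0)
gpp_present = np.array([105., 118., 126., 132., 140., 151., 163., 171.])  # PgC/yr per model (toy)
dgpp_2100   = np.array([ 12.,  15.,  19.,  21.,  24.,  28.,  31.,  34.])  # projected change (toy)

# linear fit across the ensemble: the "emergent relationship"
slope, intercept = np.polyfit(gpp_present, dgpp_2100, 1)
resid_sd = np.std(dgpp_2100 - (slope * gpp_present + intercept), ddof=2)

# observation-based present-day GPP (mean, 1-sigma) -- illustrative values
obs_mean, obs_sd = 123.0, 8.0
obs_samples = rng.normal(obs_mean, obs_sd, 100_000)
constrained = slope * obs_samples + intercept + rng.normal(0, resid_sd, obs_samples.size)

print(f"unconstrained spread (model std): {dgpp_2100.std(ddof=1):.1f} PgC/yr")
print(f"constrained projection: {constrained.mean():.1f} +/- {constrained.std():.1f} PgC/yr")
```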

  7. Combining Observations of Shock-induced Minerals with Calculations to Constrain the Shock History of Meteorites.

    NASA Astrophysics Data System (ADS)

    de Carli, P. S.; Xie, Z.; Sharp, T. G.

    2007-12-01

    All available evidence from shock Hugoniot and release adiabat measurements and from shock recovery experiments supports the hypothesis that the conditions for shock-induced phase transitions are similar to the conditions under which quasistatic phase transitions are observed. Transitions that require high temperatures under quasistatic pressures require high temperatures under shock pressures. The high-pressure phases found in shocked meteorites are almost invariably associated with shock melt veins. A shock melt vein is analogous to a pseudotachylite, a sheet of locally melted material that was quenched by conduction to surrounding cooler material. The mechanism by which shock melt veins form is not known; possible mechanisms include shock collisions, shock interactions with cracks and pores, and adiabatic shear. If one assumes that the phases within the vein crystallized in their stability fields, then available static high-pressure data constrain the shock pressure range over which the vein solidified. Since the veins have a sheet-like geometry, one may use one-dimensional heat flow calculations to constrain the cooling and crystallization history of the veins (Langenhorst and Poirier, 2000). Although the formation mechanism of a melt vein may involve transient pressure excursions, pressure equilibration of a mm-wide vein will be complete within about a microsecond, whereas thermal equilibration will require seconds. Some of our melt vein studies have indicated that the highly-shocked L chondrite meteorites were exposed to a narrow range of shock pressures, e.g., 18-25 GPa, over a minimum duration of the order of a second. We have used the Autodyn(TM) wave propagation code to calculate details of plausible impacts on the L-chondrite parent body for a variety of possible parent body stratigraphies. We infer that some meteorites probably represent material that was shocked at a depth of >10 km in their parent bodies.
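    A rough scaling makes the microsecond-versus-seconds contrast explicit. Assuming a silicate thermal diffusivity of κ ≈ 10⁻⁶ m² s⁻¹ and a vein half-width L ≈ 0.5 mm, conductive quenching takes

    $$ t_{\rm cool} \sim \frac{L^{2}}{\kappa} \approx \frac{(5\times10^{-4}\,\mathrm{m})^{2}}{10^{-6}\,\mathrm{m^{2}\,s^{-1}}} \approx 0.3\ \mathrm{s}, $$

    i.e. of order a second once latent heat and the finite temperature of the surrounding rock are included, whereas pressure equilibrates on a few acoustic crossing times, of order (10⁻³ m)/(6 km s⁻¹) ≈ 2×10⁻⁷ s.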

  8. Constraining cloud lifetime effects of aerosols using A-Train satellite observations

    NASA Astrophysics Data System (ADS)

    Wang, Minghuai; Ghan, Steven; Liu, Xiaohong; L'Ecuyer, Tristan S.; Zhang, Kai; Morrison, Hugh; Ovchinnikov, Mikhail; Easter, Richard; Marchand, Roger; Chand, Duli; Qian, Yun; Penner, Joyce E.

    2012-08-01

    Aerosol indirect effects have remained the largest uncertainty in estimates of the radiative forcing of past and future climate change. Observational constraints on cloud lifetime effects are particularly challenging since it is difficult to separate aerosol effects from meteorological influences. Here we use three global climate models, including a multi-scale aerosol-climate model PNNL-MMF, to show that the dependence of the probability of precipitation on aerosol loading, termed the precipitation frequency susceptibility (Spop), is a good measure of the liquid water path response to aerosol perturbation (λ), as both Spop and λ strongly depend on the magnitude of autoconversion, a model representation of precipitation formation via collisions among cloud droplets. This provides a method to use satellite observations to constrain cloud lifetime effects in global climate models. Spop in marine clouds estimated from CloudSat, MODIS and AMSR-E observations is substantially lower than that from global climate models and suggests a liquid water path increase of less than 5% from doubled cloud condensation nuclei concentrations. This implies a substantially smaller impact on shortwave cloud radiative forcing over ocean due to aerosol indirect effects than simulated by current global climate models (a reduction by one-third for one of the conventional aerosol-climate models). Further work is needed to quantify the uncertainties in satellite-derived estimates of Spop and to examine Spop in high-resolution models.
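    A schematic of how such a susceptibility can be estimated from collocated retrievals is sketched below; synthetic data stand in for the CloudSat/MODIS/AMSR-E pixels, and the quantity computed is simply the log-log slope of precipitation probability against aerosol loading (sign conventions differ between studies).

```python
# Estimate a precipitation frequency susceptibility from collocated aerosol
# loading and rain/no-rain flags (synthetic stand-in data).
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
aerosol = rng.lognormal(mean=np.log(100.0), sigma=0.6, size=n)    # CCN proxy (arbitrary units)
pop_true = 0.5 * (aerosol / 100.0) ** -0.3                        # synthetic truth: POP falls with aerosol
raining = rng.random(n) < np.clip(pop_true, 0.0, 1.0)

bins = np.logspace(np.log10(aerosol.min()), np.log10(aerosol.max()), 11)
centers, pop = [], []
for lo, hi in zip(bins[:-1], bins[1:]):
    sel = (aerosol >= lo) & (aerosol < hi)
    if sel.sum() > 200:                       # require enough samples per bin
        centers.append(np.sqrt(lo * hi))
        pop.append(raining[sel].mean())

slope = np.polyfit(np.log(centers), np.log(pop), 1)[0]
print(f"log-log slope of POP vs aerosol loading: {slope:.2f} (synthetic truth: -0.30)")
```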

  9. Predicting the future by explaining the past: constraining carbon-climate feedback using contemporary observations

    NASA Astrophysics Data System (ADS)

    Denning, S.

    2014-12-01

    The carbon-climate community has an historic opportunity to make a step-function improvement in climate prediction by using regional constraints to improve mechanistic model representation of carbon cycle processes. Interactions among atmospheric CO2, global biogeochemistry, and physical climate constitute leading sources of uncertainty in future climate. First-order differences among leading models of these processes produce differences in climate as large as differences in aerosol-cloud-radiation interactions and fossil fuel combustion. Emergent constraints based on global observations of interannual variations provide powerful constraints on model parameterizations. Additional constraints can be defined at regional scales. Organized intercomparison experiments have shown that uncertainties in future carbon-climate feedback arise primarily from model representations of the dependence of photosynthesis on CO2 and drought stress and the dependence of decomposition on temperature. Just as representations of net carbon fluxes have benefited from eddy flux, ecosystem manipulations, and atmospheric CO2, component carbon fluxes (photosynthesis, respiration, decomposition, disturbance) can be constrained at regional scales using new observations. Examples include biogeochemical tracers such as isotopes and carbonyl sulfide as well as remotely-sensed parameters such as chlorophyll fluorescence and biomass. Innovative model evaluation experiments will be needed to leverage the information content of new observations to improve process representations as well as to provide accurate initial conditions for coupled climate model simulations. Successful implementation of a comprehensive benchmarking program could have a huge impact on understanding and predicting future climate change.

  10. Constraining the Solar Coronal Magnetic Field Strength using Split-band Type II Radio Burst Observations

    NASA Astrophysics Data System (ADS)

    Kishore, P.; Ramesh, R.; Hariharan, K.; Kathiravan, C.; Gopalswamy, N.

    2016-11-01

    We report on low-frequency radio (85-35 MHz) spectral observations of four different type II radio bursts, which exhibited fundamental-harmonic emission and split-band structure. Each of the bursts was found to be closely associated with a white-light coronal mass ejection (CME) close to the Sun. We estimated the coronal magnetic field strength from the split-band characteristics of the bursts, by assuming a model for the coronal electron density distribution. The choice of the model was constrained based on the following criteria: (1) when the radio burst is observed simultaneously in the upper and lower bands of the fundamental component, the location of the plasma level corresponding to the frequency of the burst in the lower band should be consistent with the deprojected location of the leading edge (LE) of the associated CME; (2) the drift speed of the type II bursts derived from such a model should agree closely with the deprojected speed of the LE of the corresponding CMEs. With the above conditions, we find that: (1) the estimated field strengths are unique to each type II burst, and (2) the radial variation of the field strength in the different events indicates a pattern. It is steepest for the case where the heliocentric distance range over which the associated burst is observed is closest to the Sun, and vice versa.

  11. Fast emission estimates in China and South Africa constrained by satellite observations

    NASA Astrophysics Data System (ADS)

    Mijling, Bas; van der A, Ronald

    2013-04-01

    Emission inventories of air pollutants are crucial information for policy makers and form important input data for air quality models. Unfortunately, bottom-up emission inventories, compiled from large quantities of statistical data, are easily outdated for emerging economies such as China and South Africa, where rapid economic growth changes emissions accordingly. Alternatively, top-down emission estimates from satellite observations of air constituents have the important advantages of being spatially consistent, having high temporal resolution, and enabling emission updates shortly after the satellite data become available. However, constraining emissions from observations of concentrations is computationally challenging. Within the GlobEmission project (part of the Data User Element programme of ESA) a new algorithm has been developed, specifically designed for fast daily emission estimates of short-lived atmospheric species on a mesoscopic scale (0.25 × 0.25 degree) from satellite observations of column concentrations. The algorithm needs only one forward model run from a chemical transport model to calculate the sensitivity of concentration to emission, using trajectory analysis to account for transport away from the source. By using a Kalman filter in the inverse step, optimal use is made of the a priori knowledge and the newly observed data. We apply the algorithm for NOx emission estimates in East China and South Africa, using the CHIMERE chemical transport model together with tropospheric NO2 column retrievals of the OMI and GOME-2 satellite instruments. The observations are used to construct a monthly emission time series, which reveals important emission trends such as the emission reduction measures during the Beijing Olympic Games, and the impact of and recovery from the global economic crisis. The algorithm is also able to detect emerging sources (e.g. new power plants) and improve emission information for areas where proxy data are unknown or poorly known (e
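    The inverse step can be illustrated with a scalar Kalman update for a single grid cell, in which the sensitivity of the observed column to the local emission comes from the single forward CTM run mentioned above; the operational algorithm additionally tracks transported pollution with trajectories, and the numbers below are purely illustrative.

```python
# One-dimensional Kalman-filter update of an emission estimate from an observed column.
import numpy as np

def kalman_update(e_prior, var_prior, y_obs, var_obs, H):
    """e_prior/var_prior: prior emission and variance; y_obs/var_obs: observed
    column and its error variance; H: d(column)/d(emission) from the CTM."""
    K = var_prior * H / (H**2 * var_prior + var_obs)     # Kalman gain
    e_post = e_prior + K * (y_obs - H * e_prior)
    var_post = (1.0 - K * H) * var_prior
    return e_post, var_post

e, var_e = 10.0, 4.0                 # prior emission (arbitrary units) and its variance
H = 0.8                              # column produced per unit emission (from the CTM)
for y in [9.5, 10.2, 12.0, 11.4, 13.1]:          # daily observed columns (toy)
    e, var_e = kalman_update(e, var_e, y, var_obs=1.0, H=H)
    var_e += 0.5                     # inflate so the emission estimate can evolve in time
    print(f"posterior emission = {e:.2f} +/- {np.sqrt(var_e):.2f}")
```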

  12. Using modern stellar observables to constrain stellar parameters and the physics of the stellar interior

    NASA Astrophysics Data System (ADS)

    van Saders, Jennifer L.

    2014-05-01

    The current state and future evolution of a star are, in principle, specified by only a few physical quantities: the mass, age, and hydrogen, helium, and metal abundances. These same fundamental quantities are crucial for reconstructing the history of stellar systems ranging in scale from planetary systems to galaxies. However, the fundamental parameters are rarely directly observable, and we are forced to use proxies that are not always sensitive or unique functions of the stellar parameters we wish to determine. Imprecise or inaccurate determinations of the fundamental parameters often limit our ability to draw inferences about a given system. As new technologies, instruments, and observing techniques become available, the list of viable stellar observables increases, and we can explore new links between the observables and fundamental quantities in an effort to better characterize stellar systems. In the era of missions such as Kepler, time-domain observables such as the stellar rotation period and stellar oscillations are now available for an unprecedented number of stars, and future missions promise to further expand the sample. Furthermore, despite the successes of stellar evolution models, the processes and detailed structure of the deep stellar interior remain uncertain. Even in the case of well-measured, well understood stellar observables, the link to the underlying parameters contains uncertainties due to our imperfect understanding of stellar interiors. Model uncertainties arise from sources such as the treatment of turbulent convection, transport of angular momentum and mixing, and assumptions about the physical conditions of stellar matter. By carefully examining the sensitivity of stellar observables to physical processes operating within the star and model assumptions, we can design observational tests for the theory of stellar interiors. I propose a series of tools based on new or revisited stellar observables that can be used both to constrain

  13. Constraining a Martian general circulation model with the MAVEN/IUVS observations in the thermosphere

    NASA Astrophysics Data System (ADS)

    Moeckel, Chris; Medvedev, Alexander; Nakagawa, Hiromu; Evans, Scott; Kuroda, Takeshi; Hartogh, Paul; Yiğit, Erdal; Jain, Sonal; Lo, Daniel; Schneider, Nicholas M.; Jakosky, Bruce

    2016-10-01

    The recent measurements of the number density of atomic oxygen by the Mars Atmosphere and Volatile EvolutioN/Imaging UltraViolet Spectrograph (MAVEN/IUVS) have been implemented for the first time into a global circulation model to quantify their effect on the Martian thermosphere. The number density has been converted to a 1D volume mixing ratio, and this profile is compared to the atomic oxygen scenarios based on chemical models. Simulations were performed with the Max Planck Institute Martian General Circulation Model (MPI-MGCM). The simulations closely emulate the conditions at the time of observations. The results are compared to the IUVS-measured CO2 number density and temperature above 130 km to gain knowledge of the processes in the upper atmosphere and to further constrain them in MGCMs. The presentation will discuss the role and importance in the thermosphere of the following aspects: (a) impact of the observed atomic oxygen, (b) 27-day solar cycle variations, (c) varying dust load in the lower atmosphere, and (d) gravity waves.

  14. Constraining the Evolution of Galaxy Properties in Interacting Systems with UV-FIR Observations and Simulations

    NASA Astrophysics Data System (ADS)

    Lanz, Lauranne; Zezas, A.; Brassington, N.; Smith, H. A.; Ashby, M. N.; da Cunha, E.; Hayward, C. C.; Jonsson, P.; Hernquist, L. E.; Fazio, G. G.

    2013-01-01

    The evolution of galaxies is greatly influenced by their interactions. As part of the Spitzer Interacting Galaxy Survey (SIGS), we imaged 48 nearby systems with Spitzer. We measured and modeled the spectral energy distributions (SEDs), at wavelengths from the ultraviolet (UV) to the far-infrared (FIR), of those galaxies in the sample with publicly available Herschel SPIRE observations. We fit these SEDs with the Bayesian SED-fitting program MAGPHYS developed by da Cunha et al. (2008). In order to determine the reliability of the extracted parameters, we determined how well MAGPHYS recovers the parameters of hydrodynamic simulations run with GADGET (Springel et al. 2005), for which simulated photometry was calculated using the SUNRISE radiative transfer code (Jonsson et al. 2010). We present our conclusions on the variations with interaction stage of galaxy properties including star formation histories; dust luminosities, temperatures, and masses; and stellar masses. We discuss how successfully MAGPHYS recovers galaxy properties and which instruments are most crucial for constraining masses, star formation histories, and dust properties. We compare the simulations directly to the observations, examining how unique a diagnostic an interacting galaxy SED can be. Finally, we compare and discuss how well the many simple star formation estimating relations (using 24 micron flux, for example) succeed and why.

  15. Amanda Seed: Award for Distinguished Scientific Early Career Contributions to Psychology.

    PubMed

    2014-11-01

    APA's Awards for Distinguished Scientific Early Career Contributions to Psychology recognize excellent young psychologists who have not held a doctoral degree for more than nine years. One of the 2014 award winners is Amanda Seed, for "incisive and innovative contributions to comparative cognition." Seed's award citation, biography, and a selected bibliography are presented here.

  16. Catching the fish - Constraining stellar parameters for TX Piscium using spectro-interferometric observations

    NASA Astrophysics Data System (ADS)

    Klotz, D.; Paladini, C.; Hron, J.; Aringer, B.; Sacuto, S.; Marigo, P.; Verhoelst, T.

    2013-02-01

    Context. Stellar parameter determination is a challenging task when dealing with galactic giant stars. The combination of different investigation techniques has proven to be a promising approach. Aims: We analyse archive spectra obtained with the Short Wavelength Spectrometer (SWS) onboard ISO, and new interferometric observations from the Very Large Telescope MID-infrared Interferometric instrument (VLTI/MIDI), of a very well studied carbon-rich giant: TX Psc. The aim of this work is to determine stellar parameters using spectroscopy and interferometry. Methods: The observations are used to constrain the model atmosphere, and eventually the stellar evolutionary model in the region where the tracks map the beginning of the carbon star sequence. Two different approaches are used to determine stellar parameters: (i) the "classic" interferometric approach, where the effective temperature is fixed by using the angular diameter in the N-band (from interferometry) and the apparent bolometric magnitude; (ii) parameters are obtained by fitting a grid of state-of-the-art hydrostatic models to spectroscopic and interferometric observations. Results: We find good agreement between the parameters of the two methods. The effective temperature and luminosity clearly place TX Psc in the carbon-rich AGB star domain in the H-R diagram. Current evolutionary tracks suggest that TX Psc became a C-star just recently, which means that the star is still in a "quiet" phase compared to the subsequent strong-wind regime. This agrees with the C/O ratio being only slightly greater than one. Based on observations made with ESO telescopes at Paranal Observatory under program IDs 74.D-0601, 60.A-9224, 77.C-0440, 60.A-9006, 78.D-0112, 84.D-0805.
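    The "classic" approach (i) rests on a standard relation: the bolometric flux received at Earth from a star of angular diameter θ is F_bol = σ T_eff⁴ (θ/2)², so

    $$ T_{\rm eff} = \left( \frac{4\,F_{\rm bol}}{\sigma\,\theta^{2}} \right)^{1/4}, $$

    and an interferometric angular diameter together with the apparent bolometric magnitude therefore fixes the effective temperature without recourse to a model atmosphere.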

  17. Constraining the low-cloud optical depth feedback at middle and high latitudes using satellite observations

    DOE PAGES

    Terai, C. R.; Klein, S. A.; Zelinka, M. D.

    2016-08-26

    The increase in cloud optical depth with warming at middle and high latitudes is a robust cloud feedback response found across all climate models. This study builds on results that suggest the optical depth response to temperature is timescale invariant for low-level clouds. The timescale invariance allows one to use satellite observations to constrain the models' optical depth feedbacks. Three passive-sensor satellite retrievals are compared against simulations from eight models from the Atmosphere Model Intercomparison Project (AMIP) of the 5th Coupled Model Intercomparison Project (CMIP5). This study confirms that the low-cloud optical depth response is timescale invariant in the AMIP simulations, generally at latitudes higher than 40°. Compared to satellite estimates, most models overestimate the increase in optical depth with warming at the monthly and interannual timescales. Many models also do not capture the increase in optical depth with estimated inversion strength that is found in all three satellite observations and in previous studies. The discrepancy between models and satellites exists in both hemispheres and in most months of the year. A simple replacement of the models' optical depth sensitivities with the satellites' sensitivities reduces the negative shortwave cloud feedback by at least 50% in the 40°–70°S latitude band and by at least 65% in the 40°–70°N latitude band. Furthermore, based on this analysis of satellite observations, we conclude that the low-cloud optical depth feedback at middle and high latitudes is likely too negative in climate models.
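    The timescale-invariance argument amounts to estimating a sensitivity d ln τ / dT from monthly anomalies and treating it as exchangeable between satellite retrievals and AMIP models. A minimal illustration with synthetic data follows; the radiative-kernel value used at the end is hypothetical.

```python
# Estimate d ln(tau)/dT from monthly anomalies and convert it to a rough
# shortwave feedback contribution (synthetic data, hypothetical kernel).
import numpy as np

rng = np.random.default_rng(3)
months = 240
temp_anom = rng.normal(0.0, 1.0, months)                          # cloud temperature anomaly (K)
ln_tau_anom = 0.04 * temp_anom + rng.normal(0.0, 0.05, months)    # synthetic optical-depth response

sensitivity = np.polyfit(temp_anom, ln_tau_anom, 1)[0]            # d ln(tau)/dT  [K^-1]
print(f"d ln(tau)/dT = {sensitivity:.3f} K^-1")

kernel = -30.0   # hypothetical d(SW CRE)/d ln(tau), in W m^-2 per unit ln(tau)
print(f"implied SW feedback contribution ~ {kernel * sensitivity:.1f} W m^-2 K^-1")
```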

  18. Constraining the low-cloud optical depth feedback at middle and high latitudes using satellite observations

    SciTech Connect

    Terai, C. R.; Klein, S. A.; Zelinka, M. D.

    2016-08-26

    The increase in cloud optical depth with warming at middle and high latitudes is a robust cloud feedback response found across all climate models. This study builds on results that suggest the optical depth response to temperature is timescale invariant for low-level clouds. The timescale invariance allows one to use satellite observations to constrain the models' optical depth feedbacks. Three passive-sensor satellite retrievals are compared against simulations from eight models from the Atmosphere Model Intercomparison Project (AMIP) of the 5th Coupled Model Intercomparison Project (CMIP5). This study confirms that the low-cloud optical depth response is timescale invariant in the AMIP simulations, generally at latitudes higher than 40°. Compared to satellite estimates, most models overestimate the increase in optical depth with warming at the monthly and interannual timescales. Many models also do not capture the increase in optical depth with estimated inversion strength that is found in all three satellite observations and in previous studies. The discrepancy between models and satellites exists in both hemispheres and in most months of the year. A simple replacement of the models' optical depth sensitivities with the satellites' sensitivities reduces the negative shortwave cloud feedback by at least 50% in the 40°–70°S latitude band and by at least 65% in the 40°–70°N latitude band. Furthermore, based on this analysis of satellite observations, we conclude that the low-cloud optical depth feedback at middle and high latitudes is likely too negative in climate models.

  19. GRACE gravity observations constrain Weichselian ice thickness in the Barents Sea

    NASA Astrophysics Data System (ADS)

    Root, B. C.; Tarasov, L.; Wal, W.

    2015-05-01

    The Barents Sea is subject to ongoing postglacial uplift since the melting of the Weichselian ice sheet that covered it. The regional ice sheet thickness history is not well known because data are available only at the periphery, at Franz Josef Land, Svalbard, and Novaya Zemlya, which surround this paleo ice sheet. We show that the linear trend in the gravity rate derived from a decade of observations from the Gravity Recovery and Climate Experiment (GRACE) satellite mission can constrain the volume of the ice sheet after correcting for current ice melt, hydrology, and far-field gravitational effects. Regional ice-loading models based on new geologically inferred ice margin chronologies show a significantly better fit to the GRACE data than that of ICE-5G. The regional ice models contain less ice in the Barents Sea than present in ICE-5G (5-6.3 m equivalent sea level versus 8.5 m), which increases the ongoing difficulty in closing the global sea level budget at the Last Glacial Maximum.

  20. Global estimate of submarine groundwater discharge based on an observationally constrained radium isotope model

    NASA Astrophysics Data System (ADS)

    Kwon, Eun Young; Kim, Guebuem; Primeau, Francois; Moore, Willard S.; Cho, Hyung-Mi; DeVries, Timothy; Sarmiento, Jorge L.; Charette, Matthew A.; Cho, Yang-Ki

    2014-12-01

    Along the continental margins, rivers and submarine groundwater supply nutrients, trace elements, and radionuclides to the coastal ocean, supporting coastal ecosystems and, increasingly, causing harmful algal blooms and eutrophication. While the global magnitude of gauged riverine water discharge is well known, the magnitude of submarine groundwater discharge (SGD) is poorly constrained. Using an inverse model combined with a global compilation of 228Ra observations, we show that the SGD integrated over the Atlantic and Indo-Pacific Oceans between 60°S and 70°N is (12 ± 3) × 10¹³ m³ yr⁻¹, which is 3 to 4 times greater than the freshwater fluxes into the oceans by rivers. Unlike the rivers, where more than half of the total flux is discharged into the Atlantic, about 70% of SGD flows into the Indo-Pacific Oceans. We suggest that SGD is the dominant pathway for dissolved terrestrial materials to the global ocean, and this necessitates revisions for the budgets of chemical elements including carbon.

  1. Bioenergy potential of the United States constrained by satellite observations of existing productivity

    USGS Publications Warehouse

    Reed, Sasha C.; Smith, William K.; Cleveland, Cory C.; Miller, Norman L.; Running, Steven W.

    2012-01-01

    Background/Question/Methods Currently, the United States (U.S.) supplies roughly half the world’s biofuel (secondary bioenergy), with the Energy Independence and Security Act of 2007 (EISA) stipulating an additional three-fold increase in annual production by 2022. Implicit in such energy targets is an associated increase in annual biomass demand (primary bioenergy) from roughly 2.9 to 7.4 exajoules (EJ; 10¹⁸ J). Yet, many of the factors used to estimate future bioenergy potential are relatively unresolved, bringing into question the practicality of the EISA’s ambitious bioenergy targets. Here, our objective was to constrain estimates of primary bioenergy potential (PBP) for the conterminous U.S. using satellite-derived net primary productivity (NPP) data (measured for every 1 km² of the 7.2 million km² of vegetated land in the conterminous U.S.) as the most geographically explicit measure of terrestrial growth capacity. Results/Conclusions We show that the annual primary bioenergy potential (PBP) of the conterminous U.S. realistically ranges from approximately 5.9 (± 1.4) to 22.2 (± 4.4) EJ, depending on land use. The low end of this range represents current harvest residuals, an attractive potential energy source since no additional harvest land is required. In contrast, the high end represents an annual harvest over an additional 5.4 million km² or 75% of vegetated land in the conterminous U.S. While we identify EISA energy targets as achievable, our results indicate that meeting such targets using current technology would require either an 80% displacement of current croplands or the conversion of 60% of total rangelands. Our results differ from previous evaluations in that we use high resolution, satellite-derived NPP as an upper-envelope constraint on bioenergy potential, which removes the need for extrapolation of plot-level observed yields over large spatial areas. Establishing realistically constrained estimates of bioenergy potential seems a

  2. CONSTRAINING HIGH-SPEED WINDS IN EXOPLANET ATMOSPHERES THROUGH OBSERVATIONS OF ANOMALOUS DOPPLER SHIFTS DURING TRANSIT

    SciTech Connect

    Miller-Ricci Kempton, Eliza; Rauscher, Emily

    2012-06-01

    Three-dimensional (3D) dynamical models of hot Jupiter atmospheres predict very strong wind speeds. For tidally locked hot Jupiters, winds at high altitude in the planet's atmosphere advect heat from the day side to the cooler night side of the planet. Net wind speeds on the order of 1-10 km s⁻¹ directed towards the night side of the planet are predicted at mbar pressures, which is the approximate pressure level probed by transmission spectroscopy. These winds should result in an observed blueshift of spectral lines in transmission on the order of the wind speed. Indeed, Snellen et al. recently observed a 2 ± 1 km s⁻¹ blueshift of CO transmission features for HD 209458b, which has been interpreted as a detection of the day-to-night (substellar to anti-stellar) winds that have been predicted by 3D atmospheric dynamics modeling. Here, we present the results of a coupled 3D atmospheric dynamics and transmission spectrum model, which predicts the Doppler-shifted spectrum of a hot Jupiter during transit resulting from winds in the planet's atmosphere. We explore four different models for the hot Jupiter atmosphere using different prescriptions for atmospheric drag via interaction with planetary magnetic fields. We find that models with no magnetic drag produce net Doppler blueshifts in the transmission spectrum of ~2 km s⁻¹ and that lower Doppler shifts of ~1 km s⁻¹ are found for the higher drag cases, results consistent with, but not yet strongly constrained by, the Snellen et al. measurement. We additionally explore the possibility of recovering the average terminator wind speed as a function of altitude by measuring Doppler shifts of individual spectral lines and spatially resolving wind speeds across the leading and trailing terminators during ingress and egress.
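    The size of the effect follows from the nonrelativistic Doppler formula. For the CO lines near 2.3 μm used in the measurement cited above and a ~2 km s⁻¹ day-to-night flow,

    $$ \Delta\lambda = \lambda\,\frac{v}{c} \approx 2.3\,\mu\mathrm{m}\times\frac{2\ \mathrm{km\,s^{-1}}}{3\times10^{5}\ \mathrm{km\,s^{-1}}} \approx 1.5\times10^{-5}\,\mu\mathrm{m}, $$

    i.e. about 0.15 Å, which is why a spectral resolving power of order 10⁵ is required to distinguish the different drag scenarios.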

  3. Change Semantic Constrained Online Data Cleaning Method for Real-Time Observational Data Stream

    NASA Astrophysics Data System (ADS)

    Ding, Yulin; Lin, Hui; Li, Rongrong

    2016-06-01

    to large estimation error. In order to achieve the best generalization error, it is an important challenge for the data cleaning methodology to be able to characterize the behavior of data stream distributions and adaptively update a model to include new information and remove old information. However, the complicated changing behaviour of the data invalidates traditional data cleaning methods, which rely on the assumption of a stationary data distribution, and drives the need for more dynamic and adaptive online data cleaning methods. To overcome these shortcomings, this paper presents a change-semantics-constrained online filtering method for real-time observational data. Based on the principle that the filter parameter should vary in accordance with the data change patterns, this paper embeds a semantic description that quantitatively depicts the change patterns in the data distribution, so that the filter parameter adapts automatically. Real-time observational water level data streams from different precipitation scenarios are selected for testing. Experimental results show that, by means of this method, more accurate and reliable water level information can be obtained, which is a prerequisite for prompt, science-based flood assessment and decision-making.
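    A minimal caricature of a change-adaptive online filter is sketched below; it illustrates the general principle only and is not the algorithm of the paper. The smoothing weight grows when recent innovations are persistently one-sided, which signals a genuine change such as a rising water level, and stays small for zero-mean noise, so isolated spikes are only partly followed.

```python
# Change-adaptive exponential smoothing for a sensor stream (illustrative only).
import numpy as np

def adaptive_filter(stream, alpha_min=0.05, alpha_max=0.8, window=10, k=2.0):
    est, innovations, out = stream[0], [], []
    for x in stream:
        innovations.append(x - est)
        recent = np.array(innovations[-window:])
        # persistent one-sided innovations indicate a real change in the process;
        # isolated spikes and noise have a near-zero mean over the window
        persistence = abs(recent.mean()) / (recent.std() + 1e-9)
        alpha = alpha_min + (alpha_max - alpha_min) * min(persistence / k, 1.0)
        est = est + alpha * (x - est)
        out.append(est)
    return np.array(out)

# toy water-level stream: flat and noisy, one sensor glitch, then a storm rise
rng = np.random.default_rng(4)
level = np.concatenate([np.full(50, 1.0), np.linspace(1.0, 3.0, 30), np.full(40, 3.0)])
noisy = level + rng.normal(0.0, 0.05, level.size)
noisy[20] += 1.0                                   # isolated spike
cleaned = adaptive_filter(noisy)
print("max error during the rise:", float(np.abs(cleaned[50:80] - level[50:80]).max()))
```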

  4. Constraining atmospheric ammonia emissions through new observations with an open-path, laser-based sensor

    NASA Astrophysics Data System (ADS)

    Sun, Kang

    emission estimates. Finally, NH3 observations from the TES instrument on the NASA Aura satellite were validated with mobile measurements and aircraft observations. Improved validations will help to constrain NH3 emissions at continental to global scales. Ultimately, these efforts will improve the understanding of NH3 emissions at all scales, with implications for the global nitrogen cycle and atmospheric chemistry-climate interactions.

  5. Potential sea-level rise from Antarctic ice-sheet instability constrained by observations.

    PubMed

    Ritz, Catherine; Edwards, Tamsin L; Durand, Gaël; Payne, Antony J; Peyaud, Vincent; Hindmarsh, Richard C A

    2015-12-03

    Large parts of the Antarctic ice sheet lying on bedrock below sea level may be vulnerable to marine-ice-sheet instability (MISI), a self-sustaining retreat of the grounding line triggered by oceanic or atmospheric changes. There is growing evidence that MISI may be underway throughout the Amundsen Sea embayment (ASE), which contains ice equivalent to more than a metre of global sea-level rise. If triggered in other regions, the centennial to millennial contribution could be several metres. Physically plausible projections are challenging: numerical models with sufficient spatial resolution to simulate grounding-line processes have been too computationally expensive to generate large ensembles for uncertainty assessment, and lower-resolution model projections rely on parameterizations that are only loosely constrained by present day changes. Here we project that the Antarctic ice sheet will contribute up to 30 cm sea-level equivalent by 2100 and 72 cm by 2200 (95% quantiles) where the ASE dominates. Our process-based, statistical approach gives skewed and complex probability distributions (single mode, 10 cm, at 2100; two modes, 49 cm and 6 cm, at 2200). The dependence of sliding on basal friction is a key unknown: nonlinear relationships favour higher contributions. Results are conditional on assessments of MISI risk on the basis of projected triggers under the climate scenario A1B (ref. 9), although sensitivity to these is limited by theoretical and topographical constraints on the rate and extent of ice loss. We find that contributions are restricted by a combination of these constraints, calibration with success in simulating observed ASE losses, and low assessed risk in some basins. Our assessment suggests that upper-bound estimates from low-resolution models and physical arguments (up to a metre by 2100 and around one and a half by 2200) are implausible under current understanding of physical mechanisms and potential triggers.

  6. Constraining nova observables: Direct measurements of resonance strengths in 33S(p,γ)34Cl

    NASA Astrophysics Data System (ADS)

    Fallis, J.; Parikh, A.; Bertone, P. F.; Bishop, S.; Buchmann, L.; Chen, A. A.; Christian, G.; Clark, J. A.; D'Auria, J. M.; Davids, B.; Deibel, C. M.; Fulton, B. R.; Greife, U.; Guo, B.; Hager, U.; Herlitzius, C.; Hutcheon, D. A.; José, J.; Laird, A. M.; Li, E. T.; Li, Z. H.; Lian, G.; Liu, W. P.; Martin, L.; Nelson, K.; Ottewell, D.; Parker, P. D.; Reeve, S.; Rojas, A.; Ruiz, C.; Setoodehnia, K.; Sjue, S.; Vockenhuber, C.; Wang, Y. B.; Wrede, C.

    2013-10-01

    The 33S(p,γ)34Cl reaction is important for constraining predictions of certain isotopic abundances in oxygen-neon novae. Models currently predict as much as 150 times the solar abundance of 33S in oxygen-neon nova ejecta. This overproduction factor may vary by orders of magnitude due to uncertainties in the 33S(p,γ)34Cl reaction rate at nova peak temperatures. Depending on this rate, 33S could potentially be used as a diagnostic tool for classifying certain types of presolar grains. Better knowledge of the 33S(p,γ)34Cl rate would also aid in interpreting nova observations over the S-Ca mass region and contribute to the firm establishment of the maximum endpoint of nova nucleosynthesis. Additionally, the total S elemental abundance which is affected by this reaction has been proposed as a thermometer to study the peak temperatures of novae. Previously, the 33S(p,γ)34Cl reaction rate had only been studied directly down to resonance energies of 432 keV. However, for nova peak temperatures of 0.2-0.4 GK there are seven known states in 34Cl both below the 432-keV resonance and within the Gamow window that could play a dominant role. Direct measurements of the resonance strengths of these states were performed using the DRAGON (Detector of Recoils And Gammas of Nuclear reactions) recoil separator at TRIUMF. Additionally two new states within this energy region are reported. Several hydrodynamic simulations have been performed, using all available experimental information for the 33S(p,γ)34Cl rate, to explore the impact of the remaining uncertainty in this rate on nucleosynthesis in nova explosions. These calculations give a range of ≈20-150 for the expected 33S overproduction factor, and a range of ≈100-450 for the 32S/33S ratio expected in ONe novae.
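    For context, the contribution of isolated narrow resonances to a thermonuclear rate follows the standard expression

    $$ N_{A}\langle\sigma v\rangle \;=\; 1.54\times10^{11}\,(\mu T_{9})^{-3/2} \sum_{i} (\omega\gamma)_{i}\, \exp\!\left(-\frac{11.605\,E_{i}}{T_{9}}\right)\ \mathrm{cm^{3}\,mol^{-1}\,s^{-1}}, $$

    with the resonance energies E_i and strengths (ωγ)_i in MeV, μ the reduced mass in atomic mass units, and T_9 the temperature in GK; this is why directly measured strengths for the low-energy 34Cl resonances lying inside the 0.2-0.4 GK Gamow window dominate the rate, and its uncertainty, at nova temperatures.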

  7. How wild is your model fire? Constraining WRF-Chem wildfire smoke simulations with satellite observations

    NASA Astrophysics Data System (ADS)

    Fischer, E. V.; Ford, B.; Lassman, W.; Pierce, J. R.; Pfister, G.; Volckens, J.; Magzamen, S.; Gan, R.

    2015-12-01

    Exposure to high concentrations of particulate matter (PM) present during acute pollution events is associated with adverse health effects. While many anthropogenic pollution sources are regulated in the United States, emissions from wildfires are difficult to characterize and control. With wildfire frequency and intensity in the western U.S. projected to increase, it is important to more precisely determine the effect that wildfire emissions have on human health, and whether improved forecasts of these air pollution events can mitigate the health risks associated with wildfires. One of the challenges associated with determining health risks associated with wildfire emissions is that the low spatial resolution of surface monitors means that surface measurements may not be representative of a population's exposure, due to steep concentration gradients. To obtain better estimates of ambient exposure levels for health studies, a chemical transport model (CTM) can be used to simulate the evolution of a wildfire plume as it travels over populated regions downwind. Improving the performance of a CTM would allow the development of a new forecasting framework that could better help decision makers estimate and potentially mitigate future health impacts. We use the Weather Research and Forecasting model with online chemistry (WRF-Chem) to simulate wildfire plume evolution. By varying the model resolution, meteorology reanalysis initial conditions, and biomass burning inventories, we are able to explore the sensitivity of model simulations to these various parameters. Satellite observations are used first to evaluate model skill, and then to constrain the model results. These data are then used to estimate population-level exposure, with the aim of better characterizing the effects that wildfire emissions have on human health.

  8. Paleoproterozoic Collisional Structures in the Hudson Bay Lithosphere Constrained by Multi-Observable Probabilistic Inversion

    NASA Astrophysics Data System (ADS)

    Darbyshire, F. A.; Afonso, J. C.; Porritt, R. W.

    2015-12-01

    The Paleozoic Hudson Bay intracratonic basin conceals a Paleoproterozoic Himalayan-scale continental collision, the Trans-Hudson Orogen (THO), which marks an important milestone in the assembly of the Canadian Shield. The geometry of the THO is complex due to the double-indentor geometry of the collision between the Archean Superior and Western Churchill cratons. Seismic observations at regional scale show a thick, seismically fast lithospheric keel beneath the entire region; an intriguing feature of recent models is a 'curtain' of slightly lower wavespeeds trending NE-SW beneath the Bay, which may represent the remnants of more juvenile material trapped between the two Archean continental cores. The seismic models alone, however, cannot constrain the nature of this anomaly. We investigate the thermal and compositional structure of the Hudson Bay lithosphere using a multi-observable probabilistic inversion technique. This joint inversion uses Rayleigh wave phase velocity data from teleseismic earthquakes and ambient noise, geoid anomalies, surface elevation and heat flow to construct a pseudo-3D model of the crust and upper mantle. Initially a wide range of possible mantle compositions is permitted, and tests are carried out to ascertain whether the lithosphere is stratified with depth. Across the entire Hudson Bay region, low temperatures and a high degree of chemical depletion characterise the mantle lithosphere. Temperature anomalies within the lithosphere are modest, as may be expected from a tectonically-stable region. The base of the thermal lithosphere lies at depths of >250 km, reaching to ~300 km depth in the centre of the Bay. Lithospheric stratification, with a more-depleted upper layer, is best able to explain the geophysical data sets and surface observables. Some regions, where intermediate-period phase velocities are high, require stronger mid-lithospheric depletion. In addition, a narrow region of less-depleted material extends NE-SW across the Bay

  9. Constraining the parameters of the EAP sea ice rheology from satellite observations and discrete element model

    NASA Astrophysics Data System (ADS)

    Tsamados, Michel; Heorton, Harry; Feltham, Daniel; Muir, Alan; Baker, Steven

    2016-04-01

    The new elastic-plastic anisotropic (EAP) rheology that explicitly accounts for the sub-continuum anisotropy of the sea ice cover has been implemented into the latest version of the Los Alamos sea ice model CICE. The EAP rheology is widely used in the climate modeling scientific community (e.g. CPOM stand alone, RASM high resolution regional ice-ocean model, MetOffice fully coupled model). Early results from sensitivity studies (Tsamados et al., 2013) have shown the potential for an improved representation of the observed main sea ice characteristics with a substantial change of the spatial distribution of ice thickness and ice drift relative to model runs with the reference visco-plastic (VP) rheology. The model contains one new prognostic variable, the local structure tensor, which quantifies the degree of anisotropy of the sea ice, and two parameters that set the time scale of the evolution of this tensor. Observations from high resolution satellite SAR imagery as well as numerical simulation results from a discrete element model (DEM, see Wilchinsky, 2010) have shown that individual ice floes can organize under external wind and thermal forcing to form an emergent isotropic sea ice state (via thermodynamic healing, thermal cracking) or an anisotropic sea ice state (via Coulombic failure lines due to shear rupture). In this work we use for the first time in the context of sea ice research a mathematical metric, the Tensorial Minkowski functionals (Schroeder-Turk, 2010), to measure quantitatively the degree of anisotropy and alignment of the sea ice at different scales. We apply the methodology on the GlobICE Envisat satellite deformation product (www.globice.info), on a prototype modified version of GlobICE applied on Sentinel-1 Synthetic Aperture Radar (SAR) imagery and on the DEM ice floe aggregates. By comparing these independent measurements of the sea ice anisotropy as well as its temporal evolution against the EAP model we are able to constrain the
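
    The Minkowski-functional analysis mentioned above quantifies how strongly linear deformation features are aligned. As a much-simplified stand-in, the sketch below builds a second-order orientation tensor from a set of feature orientations, analogous in spirit to the EAP structure tensor, and uses its eigenvalue spread as an alignment measure; the angles are invented for illustration and this is not the tensorial Minkowski computation itself:

        import numpy as np

        def orientation_tensor(angles_rad):
            """Second-order orientation tensor of a set of linear features.

            angles_rad : orientations of deformation features (e.g. leads) in radians.
            Returns the 2x2 tensor and an alignment measure in [0, 1]
            (0 = isotropic, 1 = perfectly aligned) from its eigenvalues.
            """
            u = np.column_stack([np.cos(angles_rad), np.sin(angles_rad)])
            # Average the dyadic product u u^T; the sign of u is irrelevant for lines.
            T = (u[:, :, None] * u[:, None, :]).mean(axis=0)
            eigvals = np.sort(np.linalg.eigvalsh(T))
            alignment = eigvals[1] - eigvals[0]
            return T, alignment

        # Hypothetical lead orientations: tightly clustered (anisotropic) case
        aligned = np.deg2rad(np.array([28.0, 31.0, 30.0, 33.0, 29.0]))
        # Broadly scattered (near-isotropic) case
        scattered = np.deg2rad(np.array([0.0, 45.0, 90.0, 135.0, 170.0]))
        print(orientation_tensor(aligned)[1], orientation_tensor(scattered)[1])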

  10. Mercury's thermo-chemical evolution from numerical models constrained by Messenger observations

    NASA Astrophysics Data System (ADS)

    Tosi, N.; Breuer, D.; Plesa, A. C.; Wagner, F.; Laneuville, M.

    2012-04-01

    The Messenger spacecraft, in orbit around Mercury for almost one year, has been delivering a great deal of new information that is dramatically changing our understanding of the solar system's innermost planet. Tracking data of the Radio Science experiment yielded improved estimates of the first coefficients of the gravity field that permit determination of the normalized polar moment of inertia of the planet (C/MR^2) and the ratio of the moment of inertia of the mantle to that of the whole planet (Cm/C). These two parameters provide a strong constraint on the internal mass distribution and, in particular, on the core mass fraction. With C/MR^2 = 0.353 and Cm/C = 0.452 [1], interior structure models predict a core radius as large as 2000 km [2], leaving room for a silicate mantle shell with a thickness of only ~ 400 km, a value significantly smaller than that of 600 km usually assumed in parametrized [3] as well as in numerical models of Mercury's mantle dynamics and evolution [4]. Furthermore, the Gamma-Ray Spectrometer measured the surface abundance of radioactive elements, revealing, besides uranium and thorium, the presence of potassium. The latter, being moderately volatile, rules out traditional formation scenarios from highly refractory materials, favoring instead a composition not much dissimilar from a chondritic model. Considering a 400 km thick mantle, we carry out a large series of 2D and 3D numerical simulations of the thermo-chemical evolution of Mercury's mantle. We model in a self-consistent way the formation of crust through partial melting using Lagrangian tracers to account for the partitioning of radioactive heat sources between mantle and crust and variations of thermal conductivity. Assuming the relative surface abundance of radiogenic elements observed by Messenger to be representative of the bulk mantle composition, we attempt to constrain the degree to which uranium, thorium and potassium are concentrated in the silicate mantle through a broad

  11. The 2014 Napa valley earthquake constrained by InSAR and GNSS observations

    NASA Astrophysics Data System (ADS)

    Polcari, Marco; Fernández, José; Palano, Mimmo; Albano, Matteo; Samsonov, Sergey; Stramondo, Salvatore; Zerbini, Susanna

    2015-04-01

    loosely constrained station coordinates, and other parameters, along with the associated variance-covariance matrices. These solutions were used as quasi observations in a Kalman filter to estimate a consistent set of daily coordinates (i.e. time-series) for all sites involved. The resulting time-series were aligned to a North American fixed reference frame. Visual inspection of the time-series for stations located close to the epicentral area of the seismic event revealed a significant offset related to coseismic deformation. Both data sets have been integrated to determine the 3D displacement field produced by the earthquake. It shows clear characteristics of a strike-slip event with an approximately NW-striking fault plane.
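
    The Kalman-filter step described above combines daily, loosely constrained position estimates into a consistent coordinate time-series. A minimal one-component sketch with a random-walk state model is shown below; the observation values, variances, and the coseismic step are invented for illustration, and the real filter operates on full coordinate sets with their variance-covariance matrices:

        import numpy as np

        def random_walk_kalman(daily_obs, obs_var, process_var, x0, p0):
            """Scalar Kalman filter with a random-walk state (one coordinate component).

            daily_obs   : daily position estimates (e.g. east component, mm)
            obs_var     : variance of each daily estimate
            process_var : allowed day-to-day random-walk variance of the true position
            """
            x, p = x0, p0
            filtered = []
            for z, r in zip(daily_obs, obs_var):
                p = p + process_var                 # predict: random-walk growth
                k = p / (p + r)                     # Kalman gain
                x = x + k * (z - x)                 # update with the daily estimate
                p = (1.0 - k) * p
                filtered.append(x)
            return np.array(filtered)

        # Hypothetical east-component series (mm) with a coseismic step near day 6
        obs = np.array([1.0, 0.5, 1.2, 0.8, 1.1, 11.0, 10.4, 10.9, 11.2, 10.7])
        print(random_walk_kalman(obs, obs_var=np.full(10, 4.0),
                                 process_var=2.0, x0=obs[0], p0=10.0))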

  12. Searches for neutrinos from gamma ray bursts with the AMANDA-II and IceCube detectors

    NASA Astrophysics Data System (ADS)

    Strahler, Erik Albert

    2009-11-01

    Gamma-ray bursts (GRBs) are the most energetic phenomena in the universe, releasing isotropic equivalent energies of [Special characters omitted.] ergs over short time scales. While it is possible to wholly explain the observed keV-GeV photons by purely electromagnetic processes, it is natural to consider the implications of concurrent hadronic (proton) acceleration in these sources. Such processes make GRBs one of the leading candidates for the sources of the ultra high-energy cosmic rays as well as sources of associated high energy (TeV-PeV) neutrinos. We have performed searches for such neutrinos from 85 northern sky GRBs with the AMANDA-II neutrino detector. No signal is observed and upper limits are set on the emission from these sources. Additionally, we have performed a search for 41 northern sky GRBs using the 22-string configuration of the IceCube neutrino telescope, employing an unbinned maximum-likelihood method and individual modeling of the predicted emission from each burst. This search is consistent with the background-only hypothesis and we set upper limits on the emission.
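
    The unbinned maximum-likelihood method referred to above assigns each detected event a signal probability density (from its direction, time, and energy relative to the modeled burst emission) and a background density, and fits the number of signal events. A minimal sketch of that likelihood, with invented per-event PDF values, is:

        import numpy as np
        from scipy.optimize import minimize_scalar

        def neg_log_likelihood(ns, S, B):
            """Unbinned likelihood for ns signal events among N observed events.

            S, B : per-event signal and background PDF values (space/time/energy).
            """
            N = len(S)
            return -np.sum(np.log((ns / N) * S + (1.0 - ns / N) * B))

        # Hypothetical per-event PDF values for a handful of on-time events
        S = np.array([0.8, 0.05, 0.3, 0.02, 0.6])
        B = np.array([0.1, 0.12, 0.1, 0.11, 0.09])
        res = minimize_scalar(neg_log_likelihood, bounds=(0.0, len(S)),
                              args=(S, B), method="bounded")
        print("best-fit n_s:", res.x)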

  13. Local SAR, Global SAR, and Power-Constrained Large-Flip-Angle Pulses with Optimal Control and Virtual Observation Points

    PubMed Central

    Vinding, Mads S.; Guérin, Bastien; Vosegaard, Thomas; Nielsen, Niels Chr.

    2016-01-01

    Purpose: To present a constrained optimal-control (OC) framework for designing large-flip-angle parallel-transmit (pTx) pulses satisfying hardware peak-power as well as regulatory local and global specific-absorption-rate (SAR) limits. The application is 2D and 3D spatial-selective 90° and 180° pulses. Theory and Methods: The OC gradient-ascent-pulse-engineering method with exact gradients and the limited-memory Broyden-Fletcher-Goldfarb-Shanno method is proposed. Local SAR is constrained by the virtual-observation-points method. Two numerical models facilitated the optimizations: a torso at 3 T and a head at 7 T, both in eight-channel pTx coils and with acceleration factors up to 4. Results: The proposed approach yielded excellent flip-angle distributions. Enforcing the local-SAR constraint, as opposed to peak power alone, reduced the local SAR 7- and 5-fold with the 2D torso excitation and inversion pulse, respectively. The root-mean-square errors of the magnetization profiles increased less than 5% with the acceleration factor of 4. Conclusion: A local and global SAR, and peak-power constrained OC large-flip-angle pTx pulse design was presented, and numerically validated for 2D and 3D spatial-selective 90° and 180° pulses at 3 T and 7 T. PMID:26715084
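
    The virtual-observation-point (VOP) constraint evaluates local SAR as a quadratic form b^H Q_v b over a compressed set of SAR matrices and keeps the worst case below the regulatory limit. A minimal sketch with randomly generated stand-in matrices is shown below; the VOPs and the 10 W/kg cap are illustrative, not the values used in the study:

        import numpy as np

        def worst_case_local_sar(b, vops):
            """Local SAR upper bound from virtual observation points (VOPs).

            b    : complex RF amplitude vector for the pTx channels (one time sample)
            vops : list of Hermitian positive semi-definite SAR matrices Q_v
            Returns max over VOPs of Re(b^H Q_v b).
            """
            return max(np.real(np.vdot(b, Q @ b)) for Q in vops)

        rng = np.random.default_rng(0)
        n_channels = 8

        def random_psd(n):
            # Hermitian positive semi-definite matrix by construction
            A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
            return A.conj().T @ A / n

        vops = [random_psd(n_channels) for _ in range(5)]   # hypothetical VOP set
        b = rng.standard_normal(n_channels) + 1j * rng.standard_normal(n_channels)
        sar = worst_case_local_sar(b, vops)
        sar_limit = 10.0                                    # illustrative W/kg cap
        print(sar, "scale RF down" if sar > sar_limit else "within limit")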

  14. An offline constrained data assimilation technique for aerosols: Improving GCM simulations over South Asia using observations from two satellite sensors

    NASA Astrophysics Data System (ADS)

    Baraskar, Ankit; Bhushan, Mani; Venkataraman, Chandra; Cherian, Ribu

    2016-05-01

    Aerosol properties simulated by general circulation models (GCMs) exhibit large uncertainties due to biases in model processes and inaccuracies in aerosol emission inputs. In this work, we propose an offline, constrained optimization based procedure to improve these simulations by assimilating them with observational data. The proposed approach explicitly incorporates the non-negativity constraint on the aerosol optical depth (AOD) which is a key metric to quantify aerosol distributions. The resulting optimization problem is quadratic programming in nature and can be easily solved by available optimization routines. The utility of the approach is demonstrated by performing offline assimilation of GCM simulated aerosol optical properties and radiative forcing over South Asia (40-120 E, 5-40 N), with satellite AOD measurements from two sensors, namely Moderate Resolution Imaging SpectroRadiometer (MODIS) and Multi-Angle Imaging SpectroRadiometer (MISR). Uncertainty in observational data used in the assimilation is computed by developing different error bands around regional AOD observations, based on their quality assurance flags. The assimilation, evaluated on monthly and daily scales, compares well with Aerosol Robotic Network (AERONET) observations as determined by goodness of fit statistics. Assimilation increased both model predicted atmospheric absorption and clear sky radiative forcing by factors consistent with recent estimates in literature. Thus, the constrained assimilation algorithm helps in systematically reducing uncertainties in aerosol simulations.
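
    At a single grid point and time, the constrained assimilation described above can be viewed as minimizing the weighted squared departures from the model background and the satellite retrieval subject to AOD >= 0, a small quadratic program. A minimal sketch using a bounded linear least-squares solver is given below; the background values, retrievals, and error standard deviations are invented for illustration:

        import numpy as np
        from scipy.optimize import lsq_linear

        def assimilate_aod(background, obs, sigma_b, sigma_o):
            """Per-gridpoint assimilation of AOD with a non-negativity constraint.

            Minimizes ((x - xb)/sigma_b)^2 + ((x - yo)/sigma_o)^2 subject to x >= 0,
            written as a small bounded linear least-squares (quadratic) problem.
            """
            analysis = np.empty_like(background)
            for i, (xb, yo, sb, so) in enumerate(zip(background, obs, sigma_b, sigma_o)):
                if not np.isfinite(yo):          # no satellite retrieval: keep background
                    analysis[i] = xb
                    continue
                A = np.array([[1.0 / sb], [1.0 / so]])
                b = np.array([xb / sb, yo / so])
                analysis[i] = lsq_linear(A, b, bounds=(0.0, np.inf)).x[0]
            return analysis

        xb = np.array([0.20, 0.05, 0.60])                 # model background AOD
        yo = np.array([0.35, np.nan, 0.50])               # satellite AOD (nan = gap)
        print(assimilate_aod(xb, yo, sigma_b=np.array([0.1, 0.1, 0.1]),
                             sigma_o=np.array([0.05, 0.05, 0.05])))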

  15. Constraining Very High-Energy Gamma Ray Sources Using IceCube Neutrino Observations

    NASA Astrophysics Data System (ADS)

    Vance, Gregory; Feintzeig, J.; Karle, A.; IceCube Collaboration

    2014-01-01

    Modern gamma ray astronomy has revealed the most violent, energetic objects in the known universe, from nearby supernova remnants to distant active galactic nuclei. In an effort to discover more about the fundamental nature of such objects, we present searches for astrophysical neutrinos in coincidence with known gamma ray sources. Searches were conducted using data from the IceCube Neutrino Observatory, a cubic-kilometer neutrino detector that is sensitive to astrophysical particles with energies above 1 TeV. The detector is situated at the South Pole, and uses more than 5,000 photomultiplier tubes to detect Cherenkov light from the interactions of particles within the ice. Existing models of proton-proton interactions allow us to link gamma ray fluxes to the production of high-energy neutrinos, so neutrino data from IceCube can be used to constrain the mechanisms by which gamma ray sources create such energetic photons. For a few particularly bright sources, such as the blazar Markarian 421, IceCube is beginning to reach the point where actual constraints can be made. As more years of data are analyzed, the limits will improve and stronger constraints will become possible. This work was supported in part by the National Science Foundation's REU Program through NSF Award AST-1004881 to the University of Wisconsin-Madison.

  16. Bioenergy potential of the United States constrained by satellite observations of existing productivity

    USGS Publications Warehouse

    Smith, W. Kolby; Cleveland, Cory C.; Reed, Sasha C.; Miller, Norman L.; Running, Steven W.

    2012-01-01

    United States (U.S.) energy policy includes an expectation that bioenergy will be a substantial future energy source. In particular, the Energy Independence and Security Act of 2007 (EISA) aims to increase annual U.S. biofuel (secondary bioenergy) production by more than 3-fold, from 40 to 136 billion liters of ethanol, which implies an even larger increase in biomass demand (primary energy), from roughly 2.9 to 7.4 EJ yr–1. However, our understanding of many of the factors used to establish such energy targets is far from complete, introducing significant uncertainty into the feasibility of current estimates of bioenergy potential. Here, we utilized satellite-derived net primary productivity (NPP) data—measured for every 1 km2 of the 7.2 million km2 of vegetated land in the conterminous U.S.—to estimate primary bioenergy potential (PBP). Our results indicate that PBP of the conterminous U.S. ranges from roughly 5.9 to 22.2 EJ yr–1, depending on land use. The low end of this range represents the potential when harvesting residues only, while the high end would require an annual biomass harvest over an area more than three times current U.S. agricultural extent. While EISA energy targets are theoretically achievable, we show that meeting these targets utilizing current technology would require either an 80% displacement of current crop harvest or the conversion of 60% of rangeland productivity. Accordingly, realistically constrained estimates of bioenergy potential are critical for effective incorporation of bioenergy into the national energy portfolio.
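
    The conversion from satellite-derived NPP to primary bioenergy is, at its core, simple unit bookkeeping: harvested carbon is converted to dry biomass and then to energy. The sketch below illustrates that arithmetic with assumed round numbers (a carbon fraction of 0.5 and an energy content of 18 MJ per kg dry biomass, plus an invented harvest rate); these are illustrative values, not the parameters used in the study:

        def primary_bioenergy_ej(harvested_c_g_per_m2, area_km2,
                                 carbon_fraction=0.5, energy_mj_per_kg=18.0):
            """Illustrative conversion of harvested NPP carbon to primary energy (EJ/yr).

            harvested_c_g_per_m2 : harvested carbon per year (g C m-2 yr-1)
            area_km2             : harvested area (km2)
            carbon_fraction      : assumed carbon fraction of dry biomass
            energy_mj_per_kg     : assumed energy content of dry biomass (MJ kg-1)
            """
            carbon_kg = harvested_c_g_per_m2 * area_km2 * 1e6 / 1e3   # g -> kg, m2 per km2
            dry_biomass_kg = carbon_kg / carbon_fraction
            energy_mj = dry_biomass_kg * energy_mj_per_kg
            return energy_mj / 1e12                                   # MJ -> EJ

        # Hypothetical example: 85 g C m-2 yr-1 harvested over 7.2 million km2
        print(primary_bioenergy_ej(85.0, 7.2e6))   # ~22 EJ/yr under these assumptions

    Under these invented inputs the result happens to fall near the high end of the published range, but the study's actual harvest scenarios and conversion factors differ.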

  17. Constraining kinetics of metastable olivine in the Marianas slab from seismic observations and dynamic models

    NASA Astrophysics Data System (ADS)

    Quinteros, Javier; Sobolev, Stephan V.

    2012-03-01

    Transformation kinetics associated with the presence of a metastable olivine wedge in old and fast subducting slabs has been the subject of many studies in recent years. Even with improvements in kinetics models, many of the parameters are still not well constrained. In particular, there is no consensus on the blocking temperature that could inhibit the transformation from olivine to spinel. Recently, based on anomalous later phases in the P wave coda and differential P wave slowness, a wedge of metastable olivine was detected in the Marianas subduction zone that is approximately 25 km wide, lies at a depth of 590 km, and is truncated at 630 km. In this work, a thermomechanical model was used to mimic the subduction in the Marianas and try different blocking temperatures for the olivine/spinel transformation. The model includes, among other features, non-linear elasto-visco-plastic rheology based on laboratory data, phase transformations, latent heat, proper coupling between stress and thermal state of the slab and force balance of the system. The results show a positive correlation between the blocking temperature, depth of the wedge and its distance from the trench (or subduction angle). We compare these results to the situation in the Marianas and suggest that a blocking temperature for the olivine/spinel transformation of approximately 725 °C would be the most likely. The volume of the wedge presents some oscillations that we relate to a runaway effect of the transformation kinetics in the mantle transition zone, namely the interaction between latent heat release and the advection of the isotherms due to the subduction velocity. The inclusion of shear heating in the model was fundamental to modeling such a subduction zone. Without shear heating, the slab shows a higher level of internal stress and the necessary bending to mimic the Marianas subduction zone cannot be reached.

  18. Bioenergy potential of the United States constrained by satellite observations of existing productivity.

    PubMed

    Smith, W Kolby; Cleveland, Cory C; Reed, Sasha C; Miller, Norman L; Running, Steven W

    2012-03-20

    United States (U.S.) energy policy includes an expectation that bioenergy will be a substantial future energy source. In particular, the Energy Independence and Security Act of 2007 (EISA) aims to increase annual U.S. biofuel (secondary bioenergy) production by more than 3-fold, from 40 to 136 billion liters of ethanol, which implies an even larger increase in biomass demand (primary energy), from roughly 2.9 to 7.4 EJ yr(-1). However, our understanding of many of the factors used to establish such energy targets is far from complete, introducing significant uncertainty into the feasibility of current estimates of bioenergy potential. Here, we utilized satellite-derived net primary productivity (NPP) data, measured for every 1 km(2) of the 7.2 million km(2) of vegetated land in the conterminous U.S., to estimate primary bioenergy potential (PBP). Our results indicate that PBP of the conterminous U.S. ranges from roughly 5.9 to 22.2 EJ yr(-1), depending on land use. The low end of this range represents the potential when harvesting residues only, while the high end would require an annual biomass harvest over an area more than three times current U.S. agricultural extent. While EISA energy targets are theoretically achievable, we show that meeting these targets utilizing current technology would require either an 80% displacement of current crop harvest or the conversion of 60% of rangeland productivity. Accordingly, realistically constrained estimates of bioenergy potential are critical for effective incorporation of bioenergy into the national energy portfolio.

  19. Constraining Atmospheric Particle Size in Gale Crater Using REMS UV Measurements and Mastcam Observations at 440 and 880 nm

    NASA Astrophysics Data System (ADS)

    Mason, E. L.; Lemmon, M. T.; de la Torre-Juárez, M.; Vicente-Retortillo, A.; Martinez, G.

    2015-12-01

    Optical depth measured in Gale crater has been shown to vary seasonally, and this variation is potentially linked to a change in dust size visible from the surface. The Mast Camera (Mastcam) on the Mars Science Laboratory (MSL) has performed cross-sky brightness surveys similar to those obtained at the Phoenix Lander site. Since particle size can be constrained by observing airborne dust across multiple wavelengths and angles, surveys at 440 and 880 nm can be used to characterize atmospheric dust within and above the crater. In addition, Rover Environmental Monitoring Station (REMS) on MSL provides downward radiation flux from 250 nm (UVD) to 340 nm (UVA), which would further constrain aerosol properties. The dust, which is not spherical and likely contains irregular particles, can be modeled using randomly oriented triaxial ellipsoids with predetermined microphysical optical properties and fit to sky survey observations to retrieve an effective radius. This work provides a discussion on the constraints of particle size distribution using REMS measurements as well as shape of the particle in Gale crater in comparison to Mastcam at the specified wavelengths.

  20. Gravitational Wave Observations Can Constrain Gamma-Ray Burst Models: The Case of GW150914-GBM

    NASA Astrophysics Data System (ADS)

    Veres, P.; Preece, R. D.; Goldstein, A.; Meszaros, P.; Burns, E.; Connaughton, V.

    2016-10-01

    Assuming a common origin for the GW150914 gravitational wave and the GW150914-GBM event, we present the implications of joint observations on leading gamma-ray burst models (photospheric, internal shock, and external shock).

  1. MS Amanda, a Universal Identification Algorithm Optimized for High Accuracy Tandem Mass Spectra

    PubMed Central

    2014-01-01

    Today’s highly accurate spectra provided by modern tandem mass spectrometers offer considerable advantages for the analysis of proteomic samples of increased complexity. Among other factors, the quantity of reliably identified peptides is considerably influenced by the peptide identification algorithm. While most widely used search engines were developed when high-resolution mass spectrometry data were not readily available for fragment ion masses, we have designed a scoring algorithm particularly suitable for high mass accuracy. Our algorithm, MS Amanda, is generally applicable to HCD, ETD, and CID fragmentation type data. The algorithm confidently explains more spectra at the same false discovery rate than Mascot or SEQUEST on examined high mass accuracy data sets, with excellent overlap and identical peptide sequence identification for most spectra also explained by Mascot or SEQUEST. MS Amanda, available at http://ms.imp.ac.at/?goto=msamanda, is provided free of charge both as standalone version for integration into custom workflows and as a plugin for the Proteome Discoverer platform. PMID:24909410

  2. Search for Ultra High-Energy Neutrinos with AMANDA-II

    SciTech Connect

    IceCube Collaboration; Klein, Spencer; Ackermann, M.

    2007-11-19

    A search for diffuse neutrinos with energies in excess of 10^5 GeV is conducted with AMANDA-II data recorded between 2000 and 2002. Above 10^7 GeV, the Earth is essentially opaque to neutrinos. This fact, combined with the limited overburden of the AMANDA-II detector (roughly 1.5 km), concentrates these ultra high-energy neutrinos at the horizon. The primary background for this analysis is bundles of downgoing, high-energy muons from the interaction of cosmic rays in the atmosphere. No statistically significant excess above the expected background is seen in the data, and an upper limit is set on the diffuse all-flavor neutrino flux of E^2 Φ(90% CL) < 2.7 × 10^-7 GeV cm^-2 s^-1 sr^-1, valid over the energy range of 2 × 10^5 GeV to 10^9 GeV. A number of models which predict neutrino fluxes from active galactic nuclei are excluded at the 90% confidence level.

  3. Observations of geometry and ages constrain relative motion of Hawaii and Louisville plumes

    NASA Astrophysics Data System (ADS)

    Wessel, Paul; Kroenke, Loren W.

    2009-07-01

    The classic view of linear island chains as volcanic expressions of interactions between changing plate tectonic motions and fixed mantle plumes has come under renewed scrutiny. In particular, observed paleolatitudes from the Emperor seamounts imply that the Hawaii hotspot was > 5-15° further north during formation of these seamounts and that rapid retardation of its southward migration was the primary agent forming the angular Hawaii-Emperor bend. Supporting this view are predictions from fluid dynamic experiments that suggest the general mantle circulation may displace narrow mantle plumes; consequently the surface locations of hotspots are not fixed and may have varied considerably in the past. However, the locations and ages of available rock samples place fundamental limits on the relative motion between the Hawaii and Louisville hotspots. Here we use such data to estimate empirical age progression curves for separate chains and calculate the continuous variations in hotspot separations through time. While the data are sparse, the inferred inter-hotspot motion for ages > 55 Myr appears significant but the observed relative motion is only about half of what is predicted by mantle dynamics models. To reconcile the observed paleolatitudes with our observed relative motion requires either a larger contemporaneous southward motion of the Louisville hotspot than previously suggested or a moderate component of true polar wander.

  4. Constraining Middle Atmospheric Moisture in GEOS-5 Using EOS-MLS Observations

    NASA Technical Reports Server (NTRS)

    Jin, Jianjun; Pawson, Steven; Wargan, Krzysztof; Livesey, Nathaniel

    2012-01-01

    Middle atmospheric water vapor plays an important role in climate and atmospheric chemistry. In the middle atmosphere, water vapor, after ozone and carbon dioxide, is an important radiatively active gas that impacts climate forcing and the energy balance. It is also the source of the hydroxyl radical (OH) whose abundances affect ozone and other constituents. The abundance of water vapor in the middle atmosphere is determined by upward transport of dehydrated air through the tropical tropopause layer, by the middle atmospheric circulation, production by the photolysis of methane (CH4), and other physical and chemical processes in the stratosphere and mesosphere. The Modern-Era Retrospective analysis for Research and Applications (MERRA) reanalysis with GEOS-5 did not assimilate any moisture observations in the middle atmosphere. The plan is to use such observations, available sporadically from research satellites, in future GEOS-5 reanalyses. An overview will be provided of the progress to date with assimilating the EOS-Aura Microwave Limb Sounder (MLS) moisture retrievals, alongside ozone and temperature, into GEOS-5. Initial results demonstrate that the MLS observations can significantly improve the middle atmospheric moisture field in GEOS-5, although this result depends on introducing a physically meaningful representation of background error covariances for middle atmospheric moisture into the system. High-resolution features in the new moisture field will be examined, and their relationships with ozone, in a two-year assimilation experiment with GEOS-5. Discussion will focus on how Aura MLS moisture observations benefit the analyses.

  5. Constraining the Sources and Sinks of Atmospheric Methane Using Stable Isotope Observations and Chemistry Climate Modeling

    NASA Astrophysics Data System (ADS)

    Feinberg, A.; Coulon, A.; Stenke, A.; Peter, T.

    2015-12-01

    Methane acts as both a greenhouse gas and a driver of atmospheric chemistry. There is a lack of consensus on the explanation for the atmospheric methane trend over recent decades (1980-2010). High uncertainties are associated with the magnitudes of individual methane source and sink processes. Methane isotopes have the potential to distinguish between the different methane fluxes, as each flux is characterized by an isotopic signature. Methane emissions from each source category, including wetlands, rice paddies, biomass burning, industry, etc., are represented explicitly in the chemistry-climate model SOCOL. The model includes 48 methane tracers based on source type and geographical origin in order to track methane after it has been emitted. SOCOL simulations for the years 1980-2010 are performed in "nudged mode", so that model dynamics reflect observed meteorology. Available database estimates of the various surface emission fluxes are used as input to SOCOL. The model diagnostic methane tracers are compared to methane isotope observations from measurement networks. Inconsistencies between the model results and observations point to deficiencies in the available emission estimates or model sink processes. Because of their dependence on the OH sink, deuterated methane observations and methyl chloroform tracers are used to investigate the variability of OH mixing ratios in the model and the real world. The analysis examines the validity of the methane source and sink category estimates over the last 30 years.

  6. Observationally-constrained estimates of aerosol optical depths (AODs) over East Asia via data assimilation techniques

    NASA Astrophysics Data System (ADS)

    Lee, K.; Lee, S.; Song, C. H.

    2015-12-01

    Aerosols not only affect climate directly by scattering and absorbing incoming solar radiation, but also perturb the radiation budget indirectly by influencing the microphysics and dynamics of clouds. Aerosols also have a significant adverse impact on human health. Given the importance of aerosols in climate, considerable research efforts have been made to quantify the amount of aerosols in the form of the aerosol optical depth (AOD). AOD is provided by ground-based aerosol networks such as the Aerosol Robotic NETwork (AERONET), and is derived from satellite measurements. However, these observational datasets have limited areal and temporal coverage. To compensate for the data gaps, there have been several studies to provide AOD without data gaps by assimilating observational data and model outputs. In this study, AODs over East Asia simulated with the Community Multi-scale Air Quality (CMAQ) model and derived from the Geostationary Ocean Color Imager (GOCI) observations are interpolated via different data assimilation (DA) techniques such as Cressman's method, Optimal Interpolation (OI), and Kriging for the period of the Distributed Regional Aerosol Gridded Observation Networks (DRAGON) Campaign (March-May 2012). Here, the interpolated results from the three DA techniques are validated intensively by comparison with AERONET AODs to determine which DA method provides the most reliable AODs over East Asia.
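
    Of the three techniques compared, Cressman's method is the simplest: each grid value is the model background corrected by distance-weighted observation departures inside a radius of influence. A minimal single-pass sketch is given below; the grid, background AODs, retrievals, and radius are invented for illustration:

        import numpy as np

        def cressman_pass(grid_xy, background, obs_xy, obs_val, radius):
            """One Cressman successive-correction pass for a scalar field such as AOD.

            grid_xy    : (N, 2) analysis grid coordinates
            background : (N,) background (model) values on the grid
            obs_xy     : (M, 2) observation coordinates
            obs_val    : (M,) observed values
            radius     : radius of influence (same units as the coordinates)
            """
            analysis = background.copy()
            # Background at the observation points (nearest grid point here)
            d_og = np.linalg.norm(obs_xy[:, None, :] - grid_xy[None, :, :], axis=2)
            departures = obs_val - background[np.argmin(d_og, axis=1)]
            for i, (x, bg) in enumerate(zip(grid_xy, background)):
                d = np.linalg.norm(obs_xy - x, axis=1)
                w = np.where(d < radius, (radius**2 - d**2) / (radius**2 + d**2), 0.0)
                if w.sum() > 0:
                    analysis[i] = bg + np.sum(w * departures) / np.sum(w)
            return analysis

        grid = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
        bg = np.array([0.30, 0.30, 0.30])                 # model AOD background
        obs = np.array([[0.5, 0.0], [1.8, 0.0]])
        val = np.array([0.45, 0.25])                      # satellite AOD retrievals
        print(cressman_pass(grid, bg, obs, val, radius=1.0))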

  7. Chemical Nature Of Titan’s Organic Aerosols Constrained from Spectroscopic and Mass Spectrometric Observations

    NASA Astrophysics Data System (ADS)

    Imanaka, Hiroshi; Cruikshank, D. P.

    2012-10-01

    The Cassini-Huygens observations greatly extend our knowledge about Titan's organic aerosols. The Cassini INMS and CAPS observations clearly demonstrate the formation of large organic molecules in the ionosphere [1, 2]. The VIMS and CIRS instruments have revealed spectral features of the haze covering the mid-IR and far-IR wavelengths [3, 4, 5, 6]. This study attempts to infer the possible chemical nature of Titan's aerosols by comparing the currently available observations with our laboratory study. We have conducted a series of cold plasma experiments to investigate the mass spectrometric and spectroscopic properties of laboratory aerosol analogs [7, 8]. Titan tholins and C2H2 plasma polymer are generated with cold plasma irradiation of N2/CH4 and C2H2, respectively. The laser desorption mass spectrum of the C2H2 plasma polymer shows a reasonable match with the CAPS positive ion mass spectrum. Furthermore, spectroscopic features of the C2H2 plasma polymer at mid-IR and far-IR wavelengths show a reasonable qualitative match with the VIMS and CIRS observations. These results support the C2H2 plasma polymer as a good candidate material for Titan's aerosol particles at the altitudes sampled by the observations. We acknowledge funding support from the NASA Cassini Data Analysis Program, NNX10AF08G, from the NASA Exobiology Program, NNX09AM95G, and from the Cassini Project. [1] Waite et al. (2007) Science 316, 870-875. [2] Crary et al. (2009) Planet. Space Sci. 57, 1847-1856. [3] Bellucci et al. (2009) Icarus 201, 198-216. [4] Anderson and Samuelson (2011) Icarus 212, 762-778. [5] Vinatier et al. (2010) Icarus 210, 852-866. [6] Vinatier et al. (2012) Icarus 219, 5-12. [7] Imanaka et al. (2004) Icarus 168, 344-366. [8] Imanaka et al. (2012) Icarus 218, 247-261.

  8. Constraining Methane Emissions from Natural Gas Production in Northeastern Pennsylvania Using Aircraft Observations and Mesoscale Modeling

    NASA Astrophysics Data System (ADS)

    Barkley, Z.; Davis, K.; Lauvaux, T.; Miles, N.; Richardson, S.; Martins, D. K.; Deng, A.; Cao, Y.; Sweeney, C.; Karion, A.; Smith, M. L.; Kort, E. A.; Schwietzke, S.

    2015-12-01

    Leaks in natural gas infrastructure release methane (CH4), a potent greenhouse gas, into the atmosphere. The estimated fugitive emission rate associated with the production phase varies greatly between studies, hindering our understanding of the natural gas energy efficiency. This study presents a new application of inverse methodology for estimating regional fugitive emission rates from natural gas production. Methane observations across the Marcellus region in northeastern Pennsylvania were obtained during a three week flight campaign in May 2015 performed by a team from the National Oceanic and Atmospheric Administration (NOAA) Global Monitoring Division and the University of Michigan. In addition to these data, CH4 observations were obtained from automobile campaigns during various periods from 2013-2015. An inventory of CH4 emissions was then created for various sources in Pennsylvania, including coalmines, enteric fermentation, industry, waste management, and unconventional and conventional wells. As a first-guess emission rate for natural gas activity, a leakage rate equal to 2% of the natural gas production was emitted at the locations of unconventional wells across PA. These emission rates were coupled to the Weather Research and Forecasting model with the chemistry module (WRF-Chem) and atmospheric CH4 concentration fields at 1km resolution were generated. Projected atmospheric enhancements from WRF-Chem were compared to observations, and the emission rate from unconventional wells was adjusted to minimize errors between observations and simulation. We show that the modeled CH4 plume structures match observed plumes downwind of unconventional wells, providing confidence in the methodology. In all cases, the fugitive emission rate was found to be lower than our first guess. In this initial emission configuration, each well has been assigned the same fugitive emission rate, which can potentially impair our ability to match the observed spatial variability
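
    When the simulated enhancement scales linearly with the assumed emission rate, adjusting the first-guess rate to best match the observations reduces to a one-parameter least-squares fit. The sketch below illustrates that step with invented downwind enhancement values; the actual study adjusts emissions against full WRF-Chem concentration fields rather than a simple transect like this:

        import numpy as np

        def best_scaling_factor(simulated_enhancement, observed_enhancement):
            """Least-squares scaling of a simulated CH4 enhancement to observations.

            If the modeled enhancement is linear in the emission rate, the optimal
            multiplier of the first-guess (here 2% leakage) emissions is
            alpha = sum(obs * sim) / sum(sim^2).
            """
            sim = np.asarray(simulated_enhancement, dtype=float)
            obs = np.asarray(observed_enhancement, dtype=float)
            return np.sum(obs * sim) / np.sum(sim ** 2)

        # Hypothetical downwind enhancements (ppb) above background
        sim = np.array([40.0, 25.0, 60.0, 10.0])    # model with 2% leakage first guess
        obs = np.array([22.0, 15.0, 33.0, 6.0])     # aircraft transect
        alpha = best_scaling_factor(sim, obs)
        print("implied leakage rate: %.2f %%" % (2.0 * alpha))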

  9. Comparing Simulations and Observations of Galaxy Evolution: Methods for Constraining the Nature of Stellar Feedback

    NASA Astrophysics Data System (ADS)

    Hummels, Cameron

    Computational hydrodynamical simulations are a very useful tool for understanding how galaxies form and evolve over cosmological timescales not easily revealed through observations. However, they are only useful if they reproduce the sorts of galaxies that we see in the real universe. One of the ways in which simulations of this sort tend to fail is in the prescription of stellar feedback, the process by which nascent stars return material and energy to their immediate environments. Careful treatment of this interaction in subgrid models, so-called because they operate on scales below the resolution of the simulation, is crucial for the development of realistic galaxy models. Equally important is developing effective methods for comparing simulation data against observations to ensure galaxy models which mimic reality and inform us about natural phenomena. This thesis examines the formation and evolution of galaxies and the observable characteristics of the resulting systems. We employ extensive use of cosmological hydrodynamical simulations in order to simulate and interpret the evolution of massive spiral galaxies like our own Milky Way. First, we create a method for producing synthetic photometric images of grid-based hydrodynamical models for use in a direct comparison against observations in a variety of filter bands. We apply this method to a simulation of a cluster of galaxies to investigate the nature of the red-sequence/blue-cloud dichotomy in the galaxy color-magnitude diagram. Second, we implement several subgrid models governing the complex behavior of gas and stars on small scales in our galaxy models. Several numerical simulations are conducted with similar initial conditions, where we systematically vary the subgrid models, afterward assessing their efficacy through comparisons of their internal kinematics with observed systems. Third, we generate an additional method to compare observations with simulations, focusing on the tenuous circumgalactic

  10. Constraining Methane Flux Estimates Using Atmospheric Observations of Methane and 13C in Methane

    NASA Astrophysics Data System (ADS)

    Mikaloff Fletcher, S. E.; Tans, P. P.; Miller, J. B.; Bruhwiler, L. M.

    2002-12-01

    Understanding the budget of methane is crucial to predicting climate change and managing Earth's carbon reservoirs. Methane is responsible for approximately 15% of the anthropogenic greenhouse forcing and has a large impact on the oxidative capacity of Earth's atmosphere due to its reaction with hydroxyl radical. At present, many of the sources and sinks of methane are poorly understood due in part to the large spatial and temporal variability of the methane flux. Model simulations of methane mixing ratios using most process-based source estimates typically over-predict the latitudinal gradient of atmospheric methane relative to the observations; however, the specific source processes responsible for this discrepancy have not been identified definitively. The aim of this work is to use the isotopic signatures of the sources to attribute these discrepancies to a source process or group of source processes and create global and regional budget estimates that are in agreement with both the atmospheric observations of methane and 13C in methane. To this end, observations of isotopic ratios of 13C in methane and isotopic signatures of methane source processes are used in conjunction with an inverse model of the methane budget. Inverse modeling is a top-down approach which uses observations of trace gases in the atmosphere, an estimate of the spatial pattern of trace gas fluxes, and a model of atmospheric transport to estimate the sources and sinks. The atmospheric transport was represented by the TM3 three-dimensional transport model. The GLOBALVIEW 2001 methane observations were used along with flask measurements of 13C in methane at six of the CMDL-NOAA stations by INSTAAR. Initial results imply interesting differences from previous methane budget estimates. For example, the 13C isotope observations in methane call for an increase in southern hemisphere sources with a bacterial isotopic signature such as wetlands, rice paddies, termites, and ruminant animals. The
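
    The leverage provided by the isotope data comes from a simple mass balance: isotopically light (microbial) and heavier (fossil/pyrogenic) source groups must sum to the total source while also reproducing the flux-weighted 13C signature implied by the observations. A minimal two-source version of that balance is sketched below; the total source, mean signature, and end-member values are illustrative, not results of the inversion:

        import numpy as np

        def two_source_isotope_split(total_source, source_delta13c,
                                     delta_bio=-60.0, delta_ff=-40.0):
            """Split a total CH4 source into two isotopically distinct groups.

            Solves  S_bio + S_ff = total_source
                    S_bio*delta_bio + S_ff*delta_ff = total_source*source_delta13c
            The delta values are illustrative d13C signatures (per mil) for
            microbial ('bio') and fossil/thermogenic ('ff') sources.
            """
            A = np.array([[1.0, 1.0],
                          [delta_bio, delta_ff]])
            b = np.array([total_source, total_source * source_delta13c])
            s_bio, s_ff = np.linalg.solve(A, b)
            return s_bio, s_ff

        # Hypothetical flux-weighted source signature of -53 per mil, 550 Tg/yr total
        print(two_source_isotope_split(550.0, -53.0))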

  11. Toward observationally constrained high space and time resolution CO2 urban emission inventories

    NASA Astrophysics Data System (ADS)

    Maness, H.; Teige, V. E.; Wooldridge, P. J.; Weichsel, K.; Holstius, D.; Hooker, A.; Fung, I. Y.; Cohen, R. C.

    2013-12-01

    The spatial patterns of greenhouse gas (GHG) emission and sequestration are currently studied primarily by sensor networks and modeling tools that were designed for global and continental scale investigations of sources and sinks. In urban contexts, by design, there has been very limited investment in observing infrastructure, making it difficult to demonstrate that we have an accurate understanding of the mechanism of emissions or the ability to track processes causing changes in those emissions. Over the last few years, our team has built a new high-resolution observing instrument to address urban CO2 emissions, the BErkeley Atmospheric CO2 Observing Network (BEACON). The 20-node network is constructed on a roughly 2 km grid, permitting direct characterization of the internal structure of emissions within the San Francisco East Bay. Here we present a first assessment of BEACON's promise for evaluating the effectiveness of current and upcoming local emissions policy. Within the next several years, a variety of locally important changes are anticipated--including widespread electrification of the motor vehicle fleet and implementation of a new power standard for ships at the port of Oakland. We describe BEACON's expected performance for detecting these changes, based on results from regional forward modeling driven by a suite of projected inventories. We will further describe the network's current change detection capabilities by focusing on known high temporal frequency changes that have already occurred; examples include a week of significant freeway traffic congestion following the temporary shutdown of the local commuter rail (the Bay Area Rapid Transit system).

  12. Constraining magnetic field morphologies using mid-IR polarization: observations and modeling

    NASA Astrophysics Data System (ADS)

    Zhang, Han; Li, Dan; Pantin, Eric; Telesco, Charles M.

    2016-01-01

    Polarization arises from aligned dust grains in magnetic fields, and thus the direction of polarization can trace the direction of B fields. We present the mid-IR imaging and spectropolarimetry observations made with the GTC's CanariCam of the Herbig Ae star WL 16. WL 16 is embedded in/behind the ρ Ophiuchus molecular cloud with visual extinction of ~31 mag. It exhibits large and extended (~900 AU) emission, which is believed to come from PAHs and very small dust grains. Uniform polarization vectors from imaging polarization and the absorption-dominated polarization profile from spectropolarimetry consistently indicate a uniform foreground magnetic field oriented at about 30 deg from the North. We also model the predicted polarization patterns expected to arise from different magnetic field morphologies, which can be distinguished by high-resolution observations. As an example, we present the mid-IR polarization modeling of AB Aur, a well-studied Herbig Ae star. We incorporate polarization from dichroic absorption, emission and scattering in the modeling. The observed polarization structures are well reproduced by two components: emissive polarization arising from a poloidal B field and scattering polarization by 0.01-1 μm dust grains.

  13. Constraining the bulk Dust to Ice Ratio and Compressive Strength for Comet Churyumov Gerasimenko Using CONSERT Radar Observations

    NASA Astrophysics Data System (ADS)

    Heggy, E.; Shafie, A.; Herique, A.; Lasue, J.; Kofman, W. W.; Levasseur-Regourd, A. C.

    2015-12-01

    Using CONSERT's most recent bistatic observations in the post-Philae landing phase, we estimate the variability in the subsurface dust-to-ice ratios for comet Churyumov-Gerasimenko under different dielectric hypotheses inverted from the 90 MHz radar observations and constrained by both the COSIMA and the Radio Science experiments. In particular, we constrain the comet dust type and its ratio to the ice mass in the nucleus body. Additionally, we estimate the subsurface density and porosity from the CONSERT dielectric inversion and compare them to the values estimated for the upper crust from the Philae landing dynamics. Our preliminary results suggest that the comet's dielectric properties are consistent with mixtures of carbonaceous chondritic dust and crystalline water ice, with very low dust concentration in the comet's deep subsurface. Additionally, we develop an empirical model that correlates the surface and subsurface compressive strengths to the dielectric properties. The compressive strengths of both the surface and the subsurface are explored with this model using the dielectric properties inverted from the CONSERT observations. Our preliminary results suggest that the average compressive strength at the surface of 67P ranges from 2 kPa to 1 MPa, for a mean surface temperature of -70 °C. We also analyzed the OSIRIS images of the Philae lander's first impact footprints, which are suggested to be ~15 cm deep in the upper regolith, indicating a low surface compressive strength close to 2 kPa. The compressive strength of the comet's subsurface is estimated to be < 1 kPa. We will discuss the implications of our results for understanding cometary formation and future sampling experiments.

  14. Panchromatic Observations of the Textbook GRB 110205A: Constraining Physical Mechanisms of Prompt Emission and Afterglow

    NASA Technical Reports Server (NTRS)

    Zheng, W.; Shen, R. F.; Sakamoto, T.; Beardmore, A. P.; De Pasquale, M.; Wu, X. F.; Gorosabel, J.; Urata, Y.; Sugita, S.; Zhang, B.; Pozanenko, A.; Nissinen, M.; Sahu, D. K.; Im, M.; Ukwatta, T. N.; Andreev, M.; Klunko, E.; Volnova, A.; Akerlof, C. W.; Anto, P.; Barthelmy, S. D.; Breeveld, A.; Carsenty, U.; Gehrels, N.; Sonbas, E.

    2011-01-01

    We present a comprehensive analysis of a bright, long duration (T(sub 90) approx. 257 s) GRB 110205A at redshift z = 2.22. The optical prompt emission was detected by Swift/UVOT, ROTSE-IIIb and BOOTES telescopes when the GRB was still radiating in the gamma-ray band. Thanks to its long duration, nearly 200 s of observations were obtained simultaneously from optical, X-ray to gamma-ray (1 eV - 5 MeV), which makes it one of the exceptional cases to study the broadband spectral energy distribution across 6 orders of magnitude in energy during the prompt emission phase. In particular, by fitting the time-resolved prompt spectra, we clearly identify, for the first time, an interesting two-break energy spectrum, roughly consistent with the standard GRB synchrotron emission model in the fast cooling regime. Although the prompt optical emission is brighter than the extrapolation of the best-fit X/gamma-ray spectra, it traces the gamma-ray light curve shape, suggesting a relation to the prompt high energy emission. The synchrotron + synchrotron self-Compton (SSC) scenario is disfavored by the data, but the models invoking a pair of internal shocks or having two emission regions can interpret the data well. Shortly after prompt emission (approx. 1100 s), a bright (R = 14.0) optical emission hump with very steep rise (alpha approx. 5.5) was observed which we interpret as the emission from the reverse shock. It is the first time that the rising phase of a reverse shock component has been closely observed.

  15. PANCHROMATIC OBSERVATIONS OF THE TEXTBOOK GRB 110205A: CONSTRAINING PHYSICAL MECHANISMS OF PROMPT EMISSION AND AFTERGLOW

    SciTech Connect

    Zheng, W.; Shen, R. F.; Sakamoto, T.; Beardmore, A. P.; De Pasquale, M.; Wu, X. F.; Zhang, B.; Gorosabel, J.; Urata, Y.; Sugita, S.; Pozanenko, A.; Sahu, D. K.; Im, M.; Ukwatta, T. N.; Andreev, M.; Klunko, E. E-mail: rfshen@astro.utoronto.ca; and others

    2012-06-01

    We present a comprehensive analysis of a bright, long-duration (T_90 ≈ 257 s) GRB 110205A at redshift z = 2.22. The optical prompt emission was detected by Swift/UVOT, ROTSE-IIIb, and BOOTES telescopes when the gamma-ray burst (GRB) was still radiating in the gamma-ray band, with optical light curve showing correlation with gamma-ray data. Nearly 200 s of observations were obtained simultaneously from optical, X-ray, to gamma-ray (1 eV to 5 MeV), which makes it one of the exceptional cases to study the broadband spectral energy distribution during the prompt emission phase. In particular, we clearly identify, for the first time, an interesting two-break energy spectrum, roughly consistent with the standard synchrotron emission model in the fast cooling regime. Shortly after prompt emission (≈1100 s), a bright (R = 14.0) optical emission hump with very steep rise (alpha ≈ 5.5) was observed, which we interpret as the reverse shock (RS) emission. It is the first time that the rising phase of an RS component has been closely observed. The full optical and X-ray afterglow light curves can be interpreted within the standard reverse shock (RS) + forward shock (FS) model. In general, the high-quality prompt and afterglow data allow us to apply the standard fireball model to extract valuable information, including the radiation mechanism (synchrotron), radius of prompt emission (R_GRB ≈ 3 × 10^13 cm), initial Lorentz factor of the outflow (Γ_0 ≈ 250), the composition of the ejecta (mildly magnetized), the collimation angle, and the total energy budget.

  16. Constraining the Origin of Basaltic Volcanic Rocks Observed by Opportunity Along the Rim of Endeavour Crater

    NASA Technical Reports Server (NTRS)

    Bouchard, M. C.; Jolliff, B. L.; Farrand, W. H.; Mittlefehldt, D. W.

    2017-01-01

    The Mars Exploration Rover (MER) Opportunity continues its exploration along the rim of Endeavour Crater. While the primary focus for investigation has been to seek evidence of aqueous alteration, Opportunity has observed a variety of rock types, including some that are hard and relatively unaltered. These rocks tend to occur most commonly as "float rocks" or "erratics" where the geologic setting does not clearly reveal their origin. Along the rim of Endeavour crater (Fig. 1), such rocks, commonly noted in Panoramic Camera (Pancam) left eye composites as "blue rocks", are abundant components of some of the Endeavour crater rim deposits, scree slopes, and colluvium deposits. In this abstract, we examine the similarity of several of these rocks analyzed using Opportunity's Alpha Particle X-Ray Spectrometer (APXS), images and color from the Pancam, and textures observed with the Microscopic Imager (MI). At issue is the origin of the blue rocks: are they impact melt or volcanic, what is their age relative to Endeavour crater, and how are they related to each other?

  17. Black carbon sources constrained by observations in the Russian high Arctic.

    PubMed

    Popovicheva, Olga Borsovna; Evangeliou, Nikolaos; Eleftheriadis, Konstantinos; Kalogridis, Athina Cerise; Movchan, Vadim Vadimovich; Sitnikov, Nikolay; Eckhardt, Sabine; Makshtas, Alexander; Stohl, Andreas

    2017-02-24

    Understanding the role of short-lived climate forcers like black carbon (BC) at high northern latitudes in climate change is hampered by the scarcity of surface observations in the Russian Arctic. In this study, highly time resolved Equivalent BC (EBC) measurements during a ship campaign in the White, Barents and Kara Seas in October 2015 are presented. The measured EBC concentrations are compared with BC concentrations simulated with a Lagrangian particle dispersion model coupled with a recently completed global emission inventory to quantify the origin of the Arctic BC. EBC showed increased values (100-400 ng m^-3) in the Kara Strait, Kara Sea, and Kola Peninsula, and an extremely high concentration (1000 ng m^-3) in the White Sea. Assessment of BC origin throughout the expedition showed that gas flaring emissions from the Yamal/Khanty-Mansiysk and Nenets/Komi regions contributed the most when the ship was close to the Kara Strait, north of 70°N. Near Arkhangelsk (White Sea), biomass burning in mid-latitudes, surface transportation, and residential and commercial combustion from Central and Eastern Europe were found to be important BC sources. The model reproduced observed EBC concentrations efficiently, building credibility in the emission inventory for BC emissions at high northern latitudes.

  18. Observationally Constrained Metal Signatures of Galaxy Evolution in the Stars and Gas of Cosmological Simulations

    NASA Astrophysics Data System (ADS)

    Corlies, Lauren N.

    The halos of galaxies - consisting of gas, stars, and satellite galaxies - are formed and shaped by the most fundamental processes: hierarchical merging and the flow of gas into and out of galaxies. While these processes are hard to disentangle, metals are tied to the gas that fuels star formation and entrained in the wind that the deaths of these stars generate. As such, they can act as important indicators of the star formation, the chemical enrichment, and the outflow histories of galaxies. Thus, this thesis aims to take advantage of such metal signatures in the stars and gas to place observational constraints on current theories of galaxy evolution as implemented in cosmological simulations. The first two chapters consider the metallicities of stars in the stellar halo of the Milky Way and its surviving satellite dwarf galaxies. Chapter 2 pairs an N-body simulation with a semi-analytic model for supernova-driven winds to examine the early environment of a Milky Way-like galaxy. At z = 10, progenitors of surviving z = 0 satellite galaxies are found to sit preferentially on the outskirts of progenitor halos of the eventual main halo. The consequence of these positions is that main halo progenitors are found to more effectively cross-pollute each other than satellite progenitors. Thus, inhomogeneous cross-pollution as a result of different high-z spatial locations of different progenitors can help to explain observed differences in abundance patterns measured today. Chapter 3 expands this work into the analysis of a cosmological, hydrodynamical simulation of dwarf galaxies in the early universe. We find that simple assumptions for modeling the extent of supernova-driven winds used in Chapter 2 agree well with the simulation whereas the presence of inhomogeneous mixing in the simulation has a large effect on the stellar metallicities. Furthermore, the star-forming halos show both bursty and continuous SFHs, two scenarios proposed by stellar metallicity data

  19. A New Method to Constrain Supernova Fractions Using X-ray Observations of Clusters of Galaxies

    NASA Technical Reports Server (NTRS)

    Bulbul, Esra; Smith, Randall K.; Loewenstein, Michael

    2012-01-01

    Supernova (SN) explosions enrich the intracluster medium (ICM) both by creating and dispersing metals. We introduce a method to measure the number of SNe and relative contribution of Type Ia supernovae (SNe Ia) and core-collapse supernovae (SNe cc) by directly fitting X-ray spectral observations. The method has been implemented as an XSPEC model called snapec. snapec utilizes a single-temperature thermal plasma code (apec) to model the spectral emission based on metal abundances calculated using the latest SN yields from SN Ia and SN cc explosion models. This approach provides a self-consistent single set of uncertainties on the total number of SN explosions and relative fraction of SN types in the ICM over the cluster lifetime by directly allowing these parameters to be determined by SN yields provided by simulations. We apply our approach to XMM-Newton European Photon Imaging Camera (EPIC), Reflection Grating Spectrometer (RGS), and 200 ks simulated Astro-H observations of a cooling flow cluster, A3112. We find that various sets of SN yields present in the literature produce an acceptable fit to the EPIC and RGS spectra of A3112. We infer that 30.3% ± 5.4% to 37.1% ± 7.1% of the total SN explosions are SNe Ia, and the total number of SN explosions required to create the observed metals is in the range of (1.06 ± 0.34) × 10^9 to (1.28 ± 0.43) × 10^9, from snapec fits to RGS spectra. These values may be compared to the enrichment expected based on well-established empirically measured SN rates per star formed. The proportions of SNe Ia and SNe cc inferred to have enriched the ICM in the inner 52 kiloparsecs of A3112 are consistent with these specific rates, if one applies a correction for the metals locked up in stars. At the same time, the inferred level of SN enrichment corresponds to a star-to-gas mass ratio that is several times greater than the 10% estimated globally for clusters in the A3112 mass range.
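
    The bookkeeping underlying this approach is that, for an assumed number of SNe and Ia fraction, the predicted mass of each element is a yield-weighted mix of the two SN types, which can then be fit to the observed ICM metal masses. The sketch below illustrates that idea with placeholder yields and invented "observed" masses; it is not the snapec implementation, which fits the X-ray spectrum directly using published yield tables:

        import numpy as np

        # Placeholder per-SN yields in solar masses (illustrative only, not the
        # published tables used by snapec): columns are Si, S, Fe.
        YIELD_IA = np.array([0.15, 0.08, 0.70])
        YIELD_CC = np.array([0.10, 0.04, 0.09])

        def predicted_metal_mass(n_sn, f_ia):
            """Total ejected mass of each element for n_sn supernovae, a fraction
            f_ia of which are Type Ia and the rest core-collapse."""
            return n_sn * (f_ia * YIELD_IA + (1.0 - f_ia) * YIELD_CC)

        def fit_sn_numbers(observed_mass, sigma):
            """Brute-force chi-square grid search for (n_sn, f_ia)."""
            n_grid = np.linspace(0.5e9, 2.0e9, 151)
            f_grid = np.linspace(0.0, 1.0, 101)
            best = (np.inf, None, None)
            for n in n_grid:
                for f in f_grid:
                    chi2 = np.sum(((predicted_metal_mass(n, f) - observed_mass) / sigma) ** 2)
                    if chi2 < best[0]:
                        best = (chi2, n, f)
            return best

        # Hypothetical ICM metal masses (solar masses) and uncertainties
        obs = np.array([1.4e8, 0.6e8, 3.0e8])
        print(fit_sn_numbers(obs, sigma=0.2 * obs))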

  20. Constraining Models Of The Solar Chromosphere Using An X2 Flare Observed By SDO/EVE

    NASA Astrophysics Data System (ADS)

    Venkataramanasastry, A.; Murphy, N. A.; Avrett, E.

    2013-12-01

    The GOES X2 solar flare of Feb 15, 2011 is analyzed to derive observational constraints for constructing a model of the chromosphere of the Sun during a solar flare, using the Pandora computer program [1]. Spectra from the MEGS-A&B component of EVE [2] on board the Solar Dynamics Observatory are used to analyze the lines and continuum [3]. The irradiances before and after the flare are used for modeling the time-evolution of the impulsive and decay phases of the flare. A significant increase in the intensities of multiple coronal and chromospheric emission lines (H, He, C, N, O, Si, etc.) is seen. The observed increase in intensities serves as a constraint on the model. Pandora performs iterative calculations for non-LTE radiative transfer with multiple ions and atoms. It includes the effects of particle diffusion and flow velocities in the equations of radiative transfer and ionization equilibrium. The fraction of the area on the Sun contributing to the chromospheric flare emission is presented. The upper limit for the intensity in the Lyman continuum due to the flare is found to be approximately 7% of that due to the entire surface area. The Lyman, He II, and He I continua provide strong constraints for characterizing the chromosphere. The emission lines from the CHIANTI atomic database in these wavelength ranges are considered in order to avoid using optically thin emission lines from the corona. The behavior of changes in line features with time is analyzed. The light curves of different lines that contribute substantially to the flare spectra are studied. The temperature at the peak of the flare relative to that of the quiet Sun is estimated at different continuum wavelengths. The pre-flare and post-flare values from these light curves are adapted to construct the model during the rise and decay phases. The effective intensity due to the lines and the relative times at which these lines peak are presented. The observed irradiance values for pre

  1. A NEW METHOD TO CONSTRAIN SUPERNOVA FRACTIONS USING X-RAY OBSERVATIONS OF CLUSTERS OF GALAXIES

    SciTech Connect

    Bulbul, Esra; Smith, Randall K.; Loewenstein, Michael

    2012-07-01

    Supernova (SN) explosions enrich the intracluster medium (ICM) both by creating and dispersing metals. We introduce a method to measure the number of SNe and relative contribution of Type Ia supernovae (SNe Ia) and core-collapse supernovae (SNe cc) by directly fitting X-ray spectral observations. The method has been implemented as an XSPEC model called snapec. snapec utilizes a single-temperature thermal plasma code (apec) to model the spectral emission based on metal abundances calculated using the latest SN yields from SN Ia and SN cc explosion models. This approach provides a self-consistent single set of uncertainties on the total number of SN explosions and relative fraction of SN types in the ICM over the cluster lifetime by directly allowing these parameters to be determined by SN yields provided by simulations. We apply our approach to XMM-Newton European Photon Imaging Camera (EPIC), Reflection Grating Spectrometer (RGS), and 200 ks simulated Astro-H observations of a cooling flow cluster, A3112. We find that various sets of SN yields present in the literature produce an acceptable fit to the EPIC and RGS spectra of A3112. We infer that 30.3% ± 5.4% to 37.1% ± 7.1% of the total SN explosions are SNe Ia, and the total number of SN explosions required to create the observed metals is in the range of (1.06 ± 0.34) × 10^9 to (1.28 ± 0.43) × 10^9, from snapec fits to RGS spectra. These values may be compared to the enrichment expected based on well-established empirically measured SN rates per star formed. The proportions of SNe Ia and SNe cc inferred to have enriched the ICM in the inner 52 kpc of A3112 are consistent with these specific rates, if one applies a correction for the metals locked up in stars. At the same time, the inferred level of SN enrichment corresponds to a star-to-gas mass ratio that is several times greater than the 10% estimated globally for clusters in the A3112 mass range.
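
    As a rough illustration of the fitting idea (not the actual snapec XSPEC implementation), the sketch below treats the ICM elemental masses as a linear combination of per-event SN Ia and SN cc yields and solves for the total number of events and the Ia fraction by linear least squares. The yield numbers and the "observed" masses are invented placeholders, not values from any published yield table or from A3112.

      import numpy as np

      # Placeholder per-SN yields (Msun per event); not from any specific yield table.
      elements  = ["Si", "S", "Ca", "Fe", "Ni"]
      yields_ia = np.array([0.15, 0.09, 0.012, 0.74, 0.06])
      yields_cc = np.array([0.10, 0.04, 0.006, 0.09, 0.006])

      # Hypothetical "observed" ICM elemental masses (Msun), generated here from the
      # model itself with 5% scatter so the fit has something to recover.
      rng = np.random.default_rng(1)
      n_true, f_true = 1.1e9, 0.33
      observed = n_true * (f_true * yields_ia + (1 - f_true) * yields_cc) \
                 * rng.normal(1.0, 0.05, len(elements))

      # The model is linear in (N_Ia, N_cc), so solve by linear least squares and
      # convert to (total number of SNe, Ia fraction).
      A = np.column_stack([yields_ia, yields_cc])
      (n_ia, n_cc), *_ = np.linalg.lstsq(A, observed, rcond=None)
      n_tot, f_ia = n_ia + n_cc, n_ia / (n_ia + n_cc)
      print(f"N_SN ~ {n_tot:.2e}, SN Ia fraction ~ {f_ia:.2f}")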

  2. Deep Uranus Cloud Structure and Methane Mixing Ratio as Constrained by Keck AO Imaging Observations.

    NASA Astrophysics Data System (ADS)

    Sromovsky, Lawrence A.; Fry, P. M.

    2006-09-01

    Keck AO imaging of Uranus in 2004 with H and H-continuum filters provides deep views of scattered light in the Uranian atmosphere with different sensitivities to methane absorption and collision-induced absorption by hydrogen. After deconvolution, these images provide accurate low-latitude center-to-limb (east-west) profiles out to view angles of nearly 80 degrees, permitting solutions for both cloud properties and the methane mixing ratio. After accounting for a very small high-altitude haze contribution, the observed central disk I/F values for H and H-continuum filters can be modeled using an opaque semi-infinite cloud of very low albedo (near 0.04), a broken cloud of high albedo (fractional coverage near 0.04-0.06), or a continuous cloud of low optical depth (0.2-1.0) containing particles of high single-scattering albedo. For low methane mixing ratios (0.5-1 percent) the central disk I/F values require a deep cloud (near 8 bars), while for high methane mixing ratios (2-4 percent) a higher altitude solution is possible (near 3 bars). However, the observed slightly limb-brightened and relatively flat center-to-limb H-continuum profile is only consistent with an optically thin cloud. The best-fit solution is a low methane mixing ratio (0.75-1.0 percent vmr) and a deep low-opacity cloud (optical depth ranging from 0.2 to 0.4 for scattering asymmetry parameters ranging from 0 to 0.3). This CH4 mixing ratio is slightly below the lower limit of the Baines et al. (1995, Icarus 114, 328-340) result of 1.6(+0.7/-0.5) percent. This work was supported by NASA's Planetary Astronomy and Planetary Atmospheres programs and the W.M. Keck Observatory. We thank those of Hawaiian ancestry whose generous hospitality in allowing use of their sacred mountain made the observations possible.

  3. Constraining hot plasma in a non-flaring solar active region with FOXSI hard X-ray observations

    NASA Astrophysics Data System (ADS)

    Ishikawa, Shin-nosuke; Glesener, Lindsay; Christe, Steven; Ishibashi, Kazunori; Brooks, David H.; Williams, David R.; Shimojo, Masumi; Sako, Nobuharu; Krucker, Säm

    2014-12-01

    We present new constraints on the high-temperature emission measure of a non-flaring solar active region using observations from the recently flown Focusing Optics X-ray Solar Imager (FOXSI) sounding rocket payload. FOXSI has performed the first focused hard X-ray (HXR) observation of the Sun in its first successful flight on 2012 November 2. Focusing optics, combined with small strip detectors, enable high-sensitivity observations with respect to previous indirect imagers. This capability, along with the sensitivity of the HXR regime to high-temperature emission, offers the potential to better characterize high-temperature plasma in the corona as predicted by nanoflare heating models. We present a joint analysis of the differential emission measure (DEM) of active region 11602 using coordinated observations by FOXSI, Hinode/XRT, and Hinode/EIS. The Hinode-derived DEM predicts significant emission measure between 1 MK and 3 MK, with a peak in the DEM predicted at 2.0-2.5 MK. The combined XRT and EIS DEM also shows emission from a smaller population of plasma above 8 MK. This is contradicted by FOXSI observations that significantly constrain emission above 8 MK. This suggests that the Hinode DEM analysis has larger uncertainties at higher temperatures and that > 8 MK plasma above an emission measure of 3 × 10^44 cm^-3 is excluded in this active region.
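
    A minimal numerical sketch of the kind of consistency check described above: integrate a differential emission measure above 8 MK and compare it with the quoted FOXSI limit of 3 × 10^44 cm^-3. The Gaussian DEM shape and its normalization are placeholders, not the published Hinode DEM.

      import numpy as np

      logT = np.linspace(5.5, 7.5, 400)                        # log10 temperature grid [K]
      dem = 1e46 * np.exp(-0.5 * ((logT - 6.35) / 0.15)**2)    # DEM(logT) [cm^-3 per dex], placeholder

      hot = logT >= np.log10(8e6)                              # bins above 8 MK
      em_hot = np.trapz(dem[hot], logT[hot])                   # emission measure above 8 MK [cm^-3]

      limit = 3e44                                             # FOXSI-derived upper limit from the abstract
      verdict = "excluded" if em_hot > limit else "allowed"
      print(f"EM(>8 MK) = {em_hot:.2e} cm^-3 -> {verdict} by the {limit:.0e} cm^-3 limit")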

  4. Iapetus' near surface thermal emission modeled and constrained using Cassini RADAR Radiometer microwave observations

    NASA Astrophysics Data System (ADS)

    Le Gall, A.; Leyrat, C.; Janssen, M. A.; Keihm, S.; Wye, L. C.; West, R.; Lorenz, R. D.; Tosi, F.

    2014-10-01

    Since its arrival at Saturn, the Cassini spacecraft has had only a few opportunities to observe Iapetus, Saturn's most distant regular satellite. These observations were all made from long ranges (>100,000 km) except on September 10, 2007, during Cassini orbit 49, when the spacecraft encountered the two-toned moon during its closest flyby so far. In this pass it collected spatially resolved data on the object's leading side, mainly over the equatorial dark terrains of Cassini Regio (CR). In this paper, we examine the radiometry data acquired by the Cassini RADAR during both this close-targeted flyby (referred to as IA49-3) and the distant Iapetus observations. In the RADAR's passive mode, the receiver functions as a radiometer to record the thermal emission from planetary surfaces at a wavelength of 2.2 cm. On the cold icy surfaces of Saturn's moons, the measured brightness temperatures depend both on the microwave emissivity and the physical temperature profile below the surface down to a depth that is likely to be tens of centimeters or even a few meters. Combined with the concurrent active data, passive measurements can shed light on the composition, structure and thermal properties of planetary regoliths and thus on the processes from which they have formed and evolved. The model we propose for Iapetus' microwave thermal emission is fitted to the IA49-3 observations and reveals that the thermal inertias sensed by the Cassini Radiometer over both CR and the bright mid-to-high latitude terrains, namely Ronceveaux Terra (RT) in the North and Saragossa Terra (ST) in the South, significantly exceed those measured by Cassini's CIRS (Composite Infrared Spectrometer), which is sensitive to much smaller depths, generally the first few millimeters of the surface. This implies that the subsurface of Iapetus sensed at 2.2 cm wavelength is more consolidated than the uppermost layers of the surface. In the case of CR, a thermal inertia of at least 50 J m^-2 K^-1 s^-1/2, and
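
    To connect the quoted thermal inertia to the "tens of centimeters" sensing depth mentioned above, a rough diurnal skin-depth estimate can be made. The regolith density and specific heat below are assumed round numbers, and the period is Iapetus' roughly 79-day synchronous rotation; none of these values come from the paper.

      import numpy as np

      gamma  = 50.0            # thermal inertia [J m^-2 K^-1 s^-1/2], lower bound quoted above
      rho    = 500.0           # regolith bulk density [kg m^-3] (assumed)
      cp     = 1000.0          # specific heat [J kg^-1 K^-1] (assumed)
      period = 79.3 * 86400.0  # Iapetus rotation period [s] (~79 days)

      # Diurnal thermal skin depth: delta = (Gamma / (rho * cp)) * sqrt(P / pi)
      skin_depth = gamma / (rho * cp) * np.sqrt(period / np.pi)
      print(f"Diurnal skin depth ~ {skin_depth * 100:.0f} cm")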

  5. Asteroid Properties from Photometric Observations: Constraining Non-Gravitational Processes in Asteroids

    NASA Astrophysics Data System (ADS)

    Pravec, P.

    2013-05-01

    Since October 2012 we have been running our NEOSource project on the Danish 1.54-m telescope at La Silla. The primary aim of the project is to study non-gravitational processes in asteroids near the Earth and in their source regions in the main asteroid belt. In my talk, I will give a brief overview of our current knowledge of asteroidal non-gravitational processes and how we study them with photometric observations. I will talk especially about binary and paired asteroids that appear to be formed by rotational fission, about detecting the Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) and BYORP (binary YORP) effects of anisotropic thermal emission from asteroids that change their spins and satellite orbits, and about non-principal axis rotators (the so-called "tumblers") among the smallest, super-critically rotating asteroids with sizes < 100 meters.

  6. Using Two-Ribbon Flare Observations and MHD Simulations to Constrain Flare Properties

    NASA Astrophysics Data System (ADS)

    Kazachenko, Maria D.; Lynch, Benjamin J.; Welsch, Brian

    2016-05-01

    Flare ribbons are emission structures that are frequently observed during flares in transition-region and chromospheric radiation. These typically straddle a polarity inversion line (PIL) of the radial magnetic field at the photosphere, and move apart as the flare progresses. The ribbon flux - the amount of unsigned photospheric magnetic flux swept out by flare ribbons - is thought to be related to the amount of coronal magnetic reconnection, and hence provides a key diagnostic tool for understanding the physical processes at work in flares and CMEs. Previous measurements of the magnetic flux swept out by flare ribbons required time-consuming co-alignment between magnetograph and intensity data from different instruments, explaining why those studies analyzed, at most, a few events. The launch of the Helioseismic and Magnetic Imager (HMI) and the Atmospheric Imaging Assembly (AIA), both aboard the Solar Dynamics Observatory (SDO), presented a rare opportunity to compile a much larger sample of flare-ribbon events than could readily be assembled before. We created a dataset of 363 events containing both flare ribbon positions and fluxes, as a function of time, for all C9.0-class and greater flares within 45 degrees of disk center observed by SDO from June 2010 to April 2015. For this purpose, we used vector magnetograms (2D magnetic field maps) from HMI and UV images from AIA. A critical problem with using unprocessed AIA data is the existence of spurious intensities associated with strong flare emission, most notably "blooming" (spurious smearing of saturated signal into neighboring pixels, often in streaks). To overcome this difficulty, we have developed an algorithmic procedure that effectively excludes artifacts like blooming. We present our database and compare statistical properties of flare ribbons, e.g. the evolution of ribbon reconnection fluxes, reconnection flux rates, and vertical currents, with the properties from MHD simulations.
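
    A schematic of how a ribbon reconnection flux is computed from co-aligned data: sum the unsigned radial field over the pixels swept by the ribbon and multiply by the pixel area. The field array, ribbon mask, and pixel scale below are hypothetical stand-ins for HMI/AIA data products, not the database described above.

      import numpy as np

      br = np.random.normal(0.0, 200.0, (512, 512))    # radial field map [G], placeholder
      ribbon_mask = np.zeros_like(br, dtype=bool)
      ribbon_mask[250:260, 100:400] = True             # pixels swept by the ribbon (placeholder)

      pixel_area_cm2 = (0.5 * 725e5) ** 2              # 0.5 arcsec pixel ~ 362 km ~ 3.6e7 cm on the Sun
      flux_mx = np.sum(np.abs(br[ribbon_mask])) * pixel_area_cm2   # unsigned flux [Mx = G cm^2]
      print(f"Unsigned ribbon flux ~ {flux_mx:.2e} Mx")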

  7. Transient Earth system responses to cumulative carbon dioxide emissions: linearities, uncertainties, and probabilities in an observation-constrained model ensemble

    NASA Astrophysics Data System (ADS)

    Steinacher, M.; Joos, F.

    2016-02-01

    Information on the relationship between cumulative fossil CO2 emissions and multiple climate targets is essential to design emission mitigation and climate adaptation strategies. In this study, the transient response of a climate or environmental variable per trillion tonnes of CO2 emissions, termed TRE, is quantified for a set of impact-relevant climate variables and from a large set of multi-forcing scenarios extended to year 2300 towards stabilization. An ~1000-member ensemble of the Bern3D-LPJ carbon-climate model is applied and model outcomes are constrained by 26 physical and biogeochemical observational data sets in a Bayesian, Monte Carlo-type framework. Uncertainties in TRE estimates include both scenario uncertainty and model response uncertainty. Cumulative fossil emissions of 1000 Gt C result in a global mean surface air temperature change of 1.9 °C (68 % confidence interval (c.i.): 1.3 to 2.7 °C), a decrease in surface ocean pH of 0.19 (0.18 to 0.22), and a steric sea level rise of 20 cm (13 to 27 cm until 2300). Linearity between cumulative emissions and transient response is high for pH and reasonably high for surface air and sea surface temperatures, but less pronounced for changes in Atlantic meridional overturning, Southern Ocean and tropical surface water saturation with respect to biogenic structures of calcium carbonate, and carbon stocks in soils. The constrained model ensemble is also applied to determine the response to a pulse-like emission and in idealized CO2-only simulations. The transient climate response is constrained, primarily by long-term ocean heat observations, to 1.7 °C (68 % c.i.: 1.3 to 2.2 °C) and the equilibrium climate sensitivity to 2.9 °C (2.0 to 4.2 °C). This is consistent with results by CMIP5 models but inconsistent with recent studies that relied on short-term air temperature data affected by natural climate variability.

  8. Constraining the History of the Sagittarius Dwarf Galaxy Using Observations of Its Tidal Debris

    NASA Astrophysics Data System (ADS)

    Johnston, K. V.; Majewski, S. R.; Siegel, M. H.; Reid, I. N.; Kunkel, W. E.

    1999-10-01

    We present a comparison of semianalytic models of the phase-space structure of tidal debris with measurements of average distances, velocities, and surface densities of stars associated with the Sagittarius dwarf galaxy, compiled from all observations reported since its discovery in 1994. We find that several interesting features in the data can be explained by these models. The properties of stars about ±10-15 deg away from the center of Sgr (in particular, the orientation of material perpendicular to Sgr's orbit and the kink in the velocity gradient) are consistent with those expected for unbound material stripped during the most recent pericentric passage ~50 Myr ago. The break in the slope of the surface density seen by Mateo, Olszewski, & Morrison at b ~ -35 deg can be understood as marking the end of this material. However, the detections beyond this point are unlikely to represent debris in a trailing streamer, torn from Sgr during the immediately preceding passage ~0.7 Gyr ago, as the surface density of this streamer would be too low compared with observations in these regions. The low-b detections are more plausibly explained by a leading streamer of material that was lost more than 1 Gyr ago and has wrapped all the way around the Galaxy to intercept the line of sight. The distance and velocity measurements at b = -40 deg reported by Majewski et al. in a companion paper also support this hypothesis. We determine debris models with these properties on orbits that are consistent with the currently known positions and velocities of Sgr in Galactic potentials with halo components that have circular velocities v_circ = 140-200 km s^-1. In all cases, the orbits oscillate between ~12 and ~40 kpc from the Galactic center with radial time periods of 0.55-0.75 Gyr. The best match to the data is obtained in models where Sgr currently has a mass of ~10^9 M_solar and has orbited the Galaxy for at least the last 1 Gyr, during which time it has reduced its mass by a factor

  9. Leveraging atmospheric CO2 observations to constrain the climate sensitivity of terrestrial ecosystems

    NASA Astrophysics Data System (ADS)

    Keppel-Aleks, G.

    2015-12-01

    A significant challenge in understanding, and therefore modeling, the response of terrestrial carbon cycling to climate and environmental drivers is that vegetation varies on spatial scales of order a few kilometers whereas Earth system models (ESMs) are run with characteristic length scales of order 100 km. Atmospheric CO2 provides a constraint on carbon fluxes at spatial scales compatible with the resolution of ESMs due to the fact that atmospheric mixing renders a single site representative of fluxes within a large spatial footprint. The variations in atmospheric CO2 at both seasonal and interannual timescales largely reflect terrestrial influence. I discuss the use of atmospheric CO2 observations to benchmark model carbon fluxes over a range of spatial scales. I also discuss how simple models can be used to test functional relationships between the CO2 growth rate and climate variations. In particular, I show how atmospheric CO2 provides constraints on ecosystem sensitivity to climate drivers in the tropics, where tropical forests and semi-arid ecosystems are thought to account for much of the variability in the contemporary carbon sink.

  10. Interannual variability in Australia's terrestrial carbon cycle constrained by multiple observation types

    NASA Astrophysics Data System (ADS)

    Trudinger, Cathy M.; Haverd, Vanessa; Briggs, Peter R.; Canadell, Josep G.

    2016-11-01

    Recent studies have shown that semi-arid ecosystems in Australia may be responsible for a significant part of the interannual variability in the global concentration of atmospheric carbon dioxide. Here we use a multiple constraints approach to calibrate a land surface model of Australian terrestrial carbon and water cycles, with a focus on interannual variability. We use observations of carbon and water fluxes at 14 OzFlux sites, as well as data on carbon pools, litterfall and streamflow. We include calibration of the function describing the response of heterotrophic respiration to soil moisture. We also explore the effect on modelled interannual variability of parameter equifinality, whereby multiple combinations of parameters can give an equally acceptable fit to the calibration data. We estimate interannual variability of Australian net ecosystem production (NEP) of 0.12-0.21 PgC yr^-1 (1σ) over 1982-2013, with a high anomaly of 0.43-0.67 PgC yr^-1 in 2011 relative to this period associated with exceptionally wet conditions following a prolonged drought. The ranges are due to the effect on calculated NEP anomaly of parameter equifinality, with similar contributions from equifinality in parameters associated with net primary production (NPP) and heterotrophic respiration. Our range of results due to parameter equifinality demonstrates how errors can be underestimated when a single parameter set is used.

  11. Interannual and Seasonal Variability of Biomass Burning Emissions Constrained by Satellite Observations

    NASA Technical Reports Server (NTRS)

    Duncan, Bryan N.; Martin, Randall V.; Staudt, Amanda C.; Yevich, Rosemarie; Logan, Jennifer A.

    2003-01-01

    We present a methodology for estimating the seasonal and interannual variation of biomass burning designed for use in global chemical transport models. The average seasonal variation is estimated from 4 years of fire-count data from the Along Track Scanning Radiometer (ATSR) and 1-2 years of similar data from the Advanced Very High Resolution Radiometer (AVHRR) World Fire Atlases. We use the Total Ozone Mapping Spectrometer (TOMS) Aerosol Index (AI) data product as a surrogate to estimate interannual variability in biomass burning for six regions: Southeast Asia, Indonesia and Malaysia, Brazil, Central America and Mexico, Canada and Alaska, and Asiatic Russia. The AI data set is available from 1979 to the present with an interruption in satellite observations from mid-1993 to mid-1996; this data gap is filled where possible with estimates of area burned from the literature for different regions. Between August 1996 and July 2000, the ATSR fire-counts are used to provide specific locations of emissions and a record of interannual variability throughout the world. We use our methodology to estimate mean seasonal and interannual variations for emissions of carbon monoxide from biomass burning, and we find that no trend is apparent in these emissions over the last two decades, but that there is significant interannual variability.
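
    The interannual scaling idea described above can be sketched as multiplying a mean seasonal emission climatology (built from fire counts) by a regional burning index for each year, with the TOMS Aerosol Index acting as the surrogate; all numbers below are invented placeholders, not values from the study.

      import numpy as np

      monthly_climatology = np.array([2., 3., 6., 9., 12., 8., 4., 2., 1., 1., 1., 1.])  # Tg CO/month (placeholder)
      ai_by_year  = {1997: 1.8, 1998: 1.1, 1999: 0.7}   # regional mean AI in the burning season (placeholder)
      ai_longterm = 1.0                                  # long-term mean AI (placeholder)

      for year, ai in ai_by_year.items():
          # Scale the seasonal climatology by that year's AI anomaly relative to the long-term mean
          emissions = monthly_climatology * (ai / ai_longterm)
          print(year, f"annual CO emission ~ {emissions.sum():.0f} Tg")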

  12. A model of earthquake triggering probabilities and application to dynamic deformations constrained by ground motion observations

    USGS Publications Warehouse

    Gomberg, J.; Felzer, K.

    2008-01-01

    We have used observations from Felzer and Brodsky (2006) of the variation of linear aftershock densities (i.e., aftershocks per unit length) with the magnitude of and distance from the main shock fault to derive constraints on how the probability of a main shock triggering a single aftershock at a point, P(r, D), varies as a function of distance, r, and main shock rupture dimension, D. We find that P(r, D) becomes independent of D as the triggering fault is approached. When r >> D, P(r, D) scales as D^m where m ~ 2 and decays with distance approximately as r^-n with n = 2, with a possible change to r^-(n-1) at r > h, where h is the closest distance between the fault and the boundaries of the seismogenic zone. These constraints may be used to test hypotheses about the types of deformations and mechanisms that trigger aftershocks. We illustrate this using dynamic deformations (i.e., radiated seismic waves) and a posited proportionality with P(r, D). Deformation characteristics examined include peak displacements, peak accelerations and velocities (proportional to strain rates and strains, respectively), and two measures that account for cumulative deformations. Our model indicates that either peak strains alone or strain rates averaged over the duration of rupture may be responsible for aftershock triggering.
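
    The scaling constraints described above can be written compactly as a piecewise function of r and D. The normalization constant and the example distances below are arbitrary and purely illustrative; with m = n = 2 the near-fault and far-field branches match at r = D.

      import numpy as np

      def triggering_probability(r, D, h, m=2.0, n=2.0, c=1.0):
          """Relative probability of triggering a single aftershock at distance r
          from a main shock of rupture dimension D; h is the closest distance from
          the fault to the seismogenic-zone boundary."""
          r = np.asarray(r, dtype=float)
          p = np.where(r <= D, c,                 # near the fault: independent of D
                       c * D**m * r**(-n))        # far field: D^m * r^-n decay
          # beyond h the decay flattens to r^-(n-1), continuous at r = h
          p = np.where(r > h, c * D**m * h**(-n) * (r / h)**(-(n - 1.0)), p)
          return p

      print(triggering_probability(np.array([1.0, 10.0, 50.0]), D=5.0, h=30.0))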

  13. The Herschel Orion Protostar Survey: Constraining Protostellar Models with Near- to Far-Infrared Observations

    NASA Astrophysics Data System (ADS)

    Furlan, Elise; Ali, Babar; Fischer, Will; Tobin, John; Stutz, Amy; Megeath, Tom; Allen, Lori; HOPS Team

    2013-07-01

    During the protostellar stage of star formation, a young star is surrounded by a large infalling envelope of dust and gas; the material falls onto a circumstellar disk and is eventually accreted by the central star. The dust in the disk and envelope emits prominently at mid- to far-infrared wavelengths; at 10 micron, absorption by small silicate grains typically causes a broad absorption feature. By modeling the near- to far-IR spectral energy distributions (SEDs) of protostars, properties of their disks and envelopes can be derived. As part of the Herschel Orion Protostar Survey (HOPS; PI: S. T. Megeath), we have observed a large sample of protostars in the Orion star-forming complex at 70 and 160 micron with the PACS instrument on the Herschel Space Observatory. For most objects, we also have photometry in the near-IR (2MASS), mid-IR (Spitzer/IRAC and MIPS), at 100 micron (PACS data from the Gould Belt Survey), sub-mm (APEX/SABOCA and LABOCA), and mid-infrared spectra (Spitzer/IRS). For the interpretation of the SEDs, we have constructed a large grid of protostellar models using a Monte Carlo radiative transfer code. Here we present our SED fitting techniques to determine the best-fit model for each object. We show the importance of including IRS spectra with appropriate weights, in addition to the constraints provided by the PACS measurements, which probe the peak of the SED. The 10 micron silicate absorption feature and the mid- to far-IR SED slope provide key constraints for the inclination angle of the object and its envelope density, with a deep absorption feature and steep SED slope for the most embedded and highly inclined objects. We show a few examples that illustrate our SED fitting method and present some preliminary results from our fits.
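
    A minimal sketch of the grid-selection step described above: choose the best-fit protostellar model by a weighted chi-square in which densely sampled spectral points (such as IRS) are down-weighted relative to broadband photometry. The model grid, "observed" SED, errors, and weights are synthetic placeholders, not the HOPS grid or data.

      import numpy as np

      n_models, n_points = 3000, 120
      model_seds = np.random.lognormal(0.0, 1.0, (n_models, n_points))   # model fluxes (placeholder)
      observed   = model_seds[42] * np.random.normal(1.0, 0.1, n_points) # "observed" SED (placeholder)
      errors     = 0.1 * observed

      weights = np.ones(n_points)
      weights[20:100] = 0.25        # down-weight the densely sampled spectral points

      chi2 = np.sum(weights * ((model_seds - observed) / errors) ** 2, axis=1)
      best = np.argmin(chi2)
      print(f"Best-fit model index: {best}, weighted chi2 = {chi2[best]:.1f}")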

  14. Deep source model for Nevado del Ruiz Volcano, Colombia, constrained by interferometric synthetic aperture radar observations

    NASA Astrophysics Data System (ADS)

    Lundgren, P.; Samsonov, S. V.; López, C. M.; Ordoñez, M.

    2015-12-01

    Nevado del Ruiz (NRV) is part of a large volcano complex in the northern Andes of Colombia with a large glacier that erupted in 1985, generating a lahar killing over 23,000 people in the city of Armero and 2,000 people in the town of Chinchina. NRV is the most active volcano in Colombia and since 2012 has generated small eruptions, with no casualties, and constant gas and ash emissions. Interferometric synthetic aperture radar (InSAR) observations from ascending and descending track RADARSAT-2 data show a large (>20 km) wide inflation pattern apparently starting in late 2011 to early 2012 and continuing to the time of this study in early 2015 at a LOS rate of over 3-4 cm/yr (Fig. 1). Volcano pressure volume models for both a point source (Mogi) and a spheroidal (Yang) source find solutions over 14 km beneath the surface, or 10 km below sea level, and centered 10 km to the SW of Nevado del Ruiz volcano. The spheroidal source has a roughly horizontal long axis oriented parallel to the Santa Isabel - Nevado del Ruiz volcanic line and perpendicular to the ambient compressive stress direction. Its solution provides a statistically significant improvement in fit compared to the point source, though consideration of spatially correlated noise sources may diminish this significance. Stress change computations do not favor one model over the other but show that propagating dikes would become trapped in sills, leading to a more complex pathway to the surface and possibly explaining the significant lateral distance between the modeled sources and Nevado del Ruiz volcano.
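
    The point-source (Mogi) forward model mentioned above has a simple closed form for surface displacements in an elastic half-space, sketched below. The depth, volume-change rate, and line-of-sight projection coefficients are round-number assumptions for illustration, not the paper's best-fit solution.

      import numpy as np

      def mogi(r, depth, dV, nu=0.25):
          """Radial and vertical surface displacement (m) at horizontal distance r (m)
          from a point pressure source of volume change dV (m^3) at the given depth (m)."""
          R3 = (r**2 + depth**2) ** 1.5
          coeff = (1.0 - nu) * dV / np.pi
          ur = coeff * r / R3
          uz = coeff * depth / R3
          return ur, uz

      r = np.linspace(0.0, 30e3, 7)                # profile out to 30 km from the source
      ur, uz = mogi(r, depth=14e3, dV=1e7)         # ~14 km deep source, 0.01 km^3 volume change (assumed)
      los = 0.38 * ur + 0.92 * uz                  # crude LOS projection (placeholder geometry)
      print(np.round(uz * 100, 2), "cm uplift")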

  15. Constraining the variation of the fine-structure constant with observations of narrow quasar absorption lines

    SciTech Connect

    Songaila, A.; Cowie, L. L.

    2014-10-01

    The unequivocal demonstration of temporal or spatial variability in a fundamental constant of nature would be of enormous significance. Recent attempts to measure the variability of the fine-structure constant α over cosmological time, using high-resolution spectra of high-redshift quasars observed with 10 m class telescopes, have produced conflicting results. We use the many multiplet (MM) method with Mg II and Fe II lines on very high signal-to-noise, high-resolution (R = 72,000) Keck HIRES spectra of eight narrow quasar absorption systems. We consider both systematic uncertainties in spectrograph wavelength calibration and also velocity offsets introduced by complex velocity structure in even apparently simple and weak narrow lines and analyze their effect on claimed variations in α. We find no significant change in α, Δα/α = (0.43 ± 0.34) × 10^-5, in the redshift range z = 0.7-1.5, where this includes both statistical and systematic errors. We also show that the scatter in measurements of Δα/α arising from absorption line structure can be considerably larger than assigned statistical errors even for apparently simple and narrow absorption systems. We find a null result of Δα/α = (-0.59 ± 0.55) × 10^-5 in a system at z = 1.7382 using lines of Cr II, Zn II, and Mn II, whereas using Cr II and Zn II lines in a system at z = 1.6614 we find a systematic velocity trend that, if interpreted as a shift in α, would correspond to Δα/α = (1.88 ± 0.47) × 10^-5, where both results include both statistical and systematic errors. This latter result is almost certainly caused by varying ionic abundances in subcomponents of the line: using Mn II, Ni II, and Cr II in the analysis changes the result to Δα/α = (-0.47 ± 0.53) × 10^-5. Combining the Mg II and Fe II results with estimates based on Mn II, Ni II, and Cr II gives Δα/α = (-0.01 ± 0.26) × 10^-5. We conclude that spectroscopic measurements of

  16. Constraining the Variation of the Fine-structure Constant with Observations of Narrow Quasar Absorption Lines

    NASA Astrophysics Data System (ADS)

    Songaila, A.; Cowie, L. L.

    2014-10-01

    The unequivocal demonstration of temporal or spatial variability in a fundamental constant of nature would be of enormous significance. Recent attempts to measure the variability of the fine-structure constant α over cosmological time, using high-resolution spectra of high-redshift quasars observed with 10 m class telescopes, have produced conflicting results. We use the many multiplet (MM) method with Mg II and Fe II lines on very high signal-to-noise, high-resolution (R = 72,000) Keck HIRES spectra of eight narrow quasar absorption systems. We consider both systematic uncertainties in spectrograph wavelength calibration and also velocity offsets introduced by complex velocity structure in even apparently simple and weak narrow lines and analyze their effect on claimed variations in α. We find no significant change in α, Δα/α = (0.43 ± 0.34) × 10^-5, in the redshift range z = 0.7-1.5, where this includes both statistical and systematic errors. We also show that the scatter in measurements of Δα/α arising from absorption line structure can be considerably larger than assigned statistical errors even for apparently simple and narrow absorption systems. We find a null result of Δα/α = (-0.59 ± 0.55) × 10^-5 in a system at z = 1.7382 using lines of Cr II, Zn II, and Mn II, whereas using Cr II and Zn II lines in a system at z = 1.6614 we find a systematic velocity trend that, if interpreted as a shift in α, would correspond to Δα/α = (1.88 ± 0.47) × 10^-5, where both results include both statistical and systematic errors. This latter result is almost certainly caused by varying ionic abundances in subcomponents of the line: using Mn II, Ni II, and Cr II in the analysis changes the result to Δα/α = (-0.47 ± 0.53) × 10^-5. Combining the Mg II and Fe II results with estimates based on Mn II, Ni II, and Cr II gives Δα/α = (-0.01 ± 0.26) × 10^-5. We conclude that spectroscopic measurements of quasar absorption lines are not yet capable of
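
    The many multiplet idea can be sketched as a one-parameter linear fit: each transition's rest-frame frequency shifts by roughly q × ((α_z/α_0)² - 1), so its apparent velocity offset scales as -2 (q/ω_0) c Δα/α, and Δα/α is the slope recovered from the measured offsets. The q coefficients, rest frequencies, and "measured" offsets below are placeholders, not the published values for these absorbers.

      import numpy as np

      c_kms  = 299792.458
      omega0 = np.array([35669.3, 38458.9, 41968.1, 62171.6])   # rest frequencies [cm^-1] (placeholder)
      q      = np.array([120.0, 1330.0, 1490.0, 211.0])         # sensitivity coefficients [cm^-1] (placeholder)

      true_da = 0.5e-5
      dv_meas = -2.0 * (q / omega0) * true_da * c_kms \
                + np.random.normal(0.0, 0.05, q.size)           # synthetic offsets [km/s] with 50 m/s scatter

      slope  = -2.0 * (q / omega0) * c_kms                      # predicted offset per unit d_alpha/alpha
      da_fit = np.sum(slope * dv_meas) / np.sum(slope ** 2)     # one-parameter linear least squares
      print(f"Recovered d_alpha/alpha = {da_fit:.2e}")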

  17. Spatial scale of deformation constrained by combinations of InSAR and GPS observations in Southern California

    NASA Astrophysics Data System (ADS)

    Lohman, R. B.; Scott, C. P.

    2014-12-01

    Efforts to understand the buildup and release of strain within the Earth's crust often rely on well-characterized observations of ground deformation, over time scales that include interseismic periods, earthquakes, and transient deformation episodes. Constraints on current rates of surface deformation in one, two, or three dimensions can be obtained by examining sets of GPS and Interferometric Synthetic Aperture Radar (InSAR) observations, both alone and in combination. Contributions to the observed signal often include motion along faults, seasonal cycles of subsidence and recharge associated with aquifers, anthropogenic extraction of hydrocarbons, and variations in atmospheric water vapor and ionospheric properties. Here we examine methods for extracting time-varying ground deformation signals from combinations of InSAR and GPS data, real and synthetic, applied to Southern California. We show that two methods for combining the data through removal of a GPS-constrained function from the InSAR (a plane fit, and filtering) result in a clear tradeoff between the contributions from the two data types at different spatial scales. We also show that the contribution to the secular rates at GPS sites from seasonal signals is large enough to be a significant error in this estimation process, and should be accounted for.
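
    One of the combination methods mentioned above (removal of a GPS-constrained plane) can be sketched as a least-squares ramp fit to the InSAR-minus-GPS line-of-sight differences at GPS sites, which is then evaluated on the full InSAR grid and subtracted. Coordinates, velocities, the look vector, and the synthetic ramp below are placeholders.

      import numpy as np

      los     = np.array([0.38, -0.08, 0.92])              # unit line-of-sight vector (placeholder)
      gps_xy  = np.random.uniform(0, 100e3, (25, 2))       # GPS site coordinates [m]
      gps_vel = np.random.normal(0, 0.01, (25, 3))         # GPS east/north/up velocities [m/yr]
      gps_los = gps_vel @ los                              # GPS velocities projected into the LOS

      # Synthetic InSAR rates at the GPS sites: GPS LOS plus an orbital-style ramp
      insar_at_gps = gps_los + 1e-8 * gps_xy[:, 0] + 2e-8 * gps_xy[:, 1] + 0.003

      # Least-squares plane fit to the residual: r = a*x + b*y + c
      G = np.column_stack([gps_xy, np.ones(len(gps_xy))])
      coeffs, *_ = np.linalg.lstsq(G, insar_at_gps - gps_los, rcond=None)
      print("Ramp coefficients (a, b, c):", coeffs)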

  18. Using CATS near-real-time lidar observations to monitor and constrain volcanic sulfur dioxide (SO2) forecasts

    NASA Astrophysics Data System (ADS)

    Hughes, E. J.; Yorks, J.; Krotkov, N. A.; Silva, A. M.; McGill, M.

    2016-10-01

    An eruption of Italian volcano Mount Etna on 3 December 2015 produced fast-moving sulfur dioxide (SO2) and sulfate aerosol clouds that traveled across Asia and the Pacific Ocean, reaching North America in just 5 days. The Ozone Profiler and Mapping Suite's Nadir Mapping UV spectrometer aboard the U.S. National Polar-orbiting Partnership satellite observed the horizontal transport of the SO2 cloud. Vertical profiles of the colocated volcanic sulfate aerosols were observed between 11.5 and 13.5 km by the new Cloud Aerosol Transport System (CATS) space-based lidar aboard the International Space Station. Backward trajectory analysis estimates the SO2 cloud altitude at 7-12 km. Eulerian model simulations of the SO2 cloud constrained by CATS measurements produced more accurate dispersion patterns compared to those initialized with the back trajectory height estimate. The near-real-time data processing capabilities of CATS are unique, and this work demonstrates the use of these observations to monitor and model volcanic clouds.

  19. Using CATS Near-Real-time Lidar Observations to Monitor and Constrain Volcanic Sulfur Dioxide (SO2) Forecasts

    NASA Technical Reports Server (NTRS)

    Hughes, E. J.; Yorks, J.; Krotkov, N. A.; da Silva, A. M.; Mcgill, M.

    2016-01-01

    An eruption of Italian volcano Mount Etna on 3 December 2015 produced fast-moving sulfur dioxide (SO2) and sulfate aerosol clouds that traveled across Asia and the Pacific Ocean, reaching North America in just 5 days. The Ozone Profiler and Mapping Suite's Nadir Mapping UV spectrometer aboard the U.S. National Polar-orbiting Partnership satellite observed the horizontal transport of the SO2 cloud. Vertical profiles of the colocated volcanic sulfate aerosols were observed between 11.5 and 13.5 km by the new Cloud Aerosol Transport System (CATS) space-based lidar aboard the International Space Station. Backward trajectory analysis estimates the SO2 cloud altitude at 7-12 km. Eulerian model simulations of the SO2 cloud constrained by CATS measurements produced more accurate dispersion patterns compared to those initialized with the back trajectory height estimate. The near-real-time data processing capabilities of CATS are unique, and this work demonstrates the use of these observations to monitor and model volcanic clouds.

  20. Determination of the atmospheric neutrino flux and searches for new physics with AMANDA-II

    SciTech Connect

    Abbasi, R.; Andeen, K.; Baker, M.; Berghaus, P.; Boersma, D. J.; Braun, J.; Chirkin, D.; Desiati, P.; Diaz-Velez, J. C.; Dumm, J. P.; Eisch, J.; Finley, C.; Ganugapati, R.; Gladstone, L.; Grullon, S.; Halzen, F.; Hanson, K.; Hill, G. C.; Hoshina, K.; Jacobsen, J.

    2009-05-15

    The AMANDA-II detector, operating since 2000 in the deep ice at the geographic South Pole, has accumulated a large sample of atmospheric muon neutrinos in the 100 GeV to 10 TeV energy range. The zenith angle and energy distribution of these events can be used to search for various phenomenological signatures of quantum gravity in the neutrino sector, such as violation of Lorentz invariance or quantum decoherence. Analyzing a set of 5511 candidate neutrino events collected during 1387 days of livetime from 2000 to 2006, we find no evidence for such effects and set upper limits on violation of Lorentz invariance and quantum decoherence parameters using a maximum likelihood method. Given the absence of evidence for new flavor-changing physics, we use the same methodology to determine the conventional atmospheric muon neutrino flux above 100 GeV.

  1. Determination of the Atmospheric Neutrino Flux and Searches for New Physics with AMANDA-II

    SciTech Connect

    IceCube Collaboration; Klein, Spencer; Collaboration, IceCube

    2009-06-02

    The AMANDA-II detector, operating since 2000 in the deep ice at the geographic South Pole, has accumulated a large sample of atmospheric muon neutrinos in the 100 GeV to 10 TeV energy range. The zenith angle and energy distribution of these events can be used to search for various phenomenological signatures of quantum gravity in the neutrino sector, such as violation of Lorentz invariance (VLI) or quantum decoherence (QD). Analyzing a set of 5511 candidate neutrino events collected during 1387 days of livetime from 2000 to 2006, we find no evidence for such effects and set upper limits on VLI and QD parameters using a maximum likelihood method. Given the absence of evidence for new flavor-changing physics, we use the same methodology to determine the conventional atmospheric muon neutrino flux above 100 GeV.

  2. Searching for quantum gravity with high-energy atmospheric neutrinos and AMANDA-II

    NASA Astrophysics Data System (ADS)

    Kelley, John Lawrence

    2008-06-01

    The AMANDA-II detector, operating since 2000 in the deep ice at the geographic South Pole, has accumulated a large sample of atmospheric muon neutrinos in the 100 GeV to 10 TeV energy range. The zenith angle and energy distribution of these events can be used to search for various phenomenological signatures of quantum gravity in the neutrino sector, such as violation of Lorentz invariance (VLI) or quantum decoherence (QD). Analyzing a set of 5511 candidate neutrino events collected during 1387 days of livetime from 2000 to 2006, we find no evidence for such effects and set upper limits on VLI and QD parameters using a maximum likelihood method. Given the absence of new flavor-changing physics, we use the same methodology to determine the conventional atmospheric muon neutrino flux above 100 GeV.

  3. Modeling of the Inner Coma of Comet 67P/Churyumov-Gerasimenko Constrained by VIRTIS and ROSINA Observations

    NASA Astrophysics Data System (ADS)

    Fougere, N.; Combi, M. R.; Tenishev, V.; Bieler, A. M.; Migliorini, A.; Bockelée-Morvan, D.; Toth, G.; Huang, Z.; Gombosi, T. I.; Hansen, K. C.; Capaccioni, F.; Filacchione, G.; Piccioni, G.; Debout, V.; Erard, S.; Leyrat, C.; Fink, U.; Rubin, M.; Altwegg, K.; Tzou, C. Y.; Le Roy, L.; Calmonte, U.; Berthelier, J. J.; Rème, H.; Hässig, M.; Fuselier, S. A.; Fiethe, B.; De Keyser, J.

    2015-12-01

    As it orbits around comet 67P/Churyumov-Gerasimenko (CG), the Rosetta spacecraft acquires more information about its main target. The numerous observations made at various geometries and at different times enable a good spatial and temporal coverage of the evolution of CG's cometary coma. However, the question regarding the link between the coma measurements and the nucleus activity remains relatively open, notably due to gas expansion and strong kinetic effects in the comet's rarefied atmosphere. In this work, we use coma observations made by the ROSINA-DFMS instrument to constrain the activity at the surface of the nucleus. The distribution of the H2O and CO2 outgassing is described with the use of spherical harmonics. The coefficients in the orthogonal basis formed by the spherical harmonics are computed using a least-squares method, minimizing the sum of the squared residuals between an analytical coma model and the DFMS data. Then, the previously deduced activity distributions are used in a Direct Simulation Monte Carlo (DSMC) model to compute a full description of the H2O and CO2 coma of comet CG from the nucleus' surface up to several hundreds of kilometers. The DSMC outputs are used to create synthetic images, which can be directly compared with VIRTIS measurements. The good agreement between the VIRTIS observations and the DSMC model, itself constrained with ROSINA data, provides a compelling juxtaposition of the measurements from these two instruments. Acknowledgements: Work at UofM was supported by contracts JPL#1266313, JPL#1266314 and NASA grant NNX09AB59G. Work at UoB was funded by the State of Bern, the Swiss National Science Foundation and by the ESA PRODEX Program. Work at Southwest Research Institute was supported by subcontract #1496541 from the JPL. Work at BIRA-IASB was supported by the Belgian Science Policy Office via PRODEX/ROSINA PEA 90020. The authors would like to thank ASI, CNES, DLR, NASA for supporting this research. VIRTIS was built
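
    The surface-activity parameterization described above can be sketched as a linear least-squares solve for low-order spherical-harmonic coefficients. The sample geometry, the synthetic "DFMS-like" data, and the trivial forward model (measured density directly proportional to the local activity) are placeholders for the actual analytic coma model used in the paper.

      import numpy as np
      from scipy.special import sph_harm

      lmax  = 2
      theta = np.random.uniform(0, 2 * np.pi, 200)   # azimuthal angle of each sample (sub-spacecraft longitude)
      phi   = np.random.uniform(0, np.pi, 200)       # polar angle of each sample (sub-spacecraft colatitude)

      # Design matrix of real-valued spherical-harmonic basis functions at the sample points
      cols = []
      for l in range(lmax + 1):
          for m in range(-l, l + 1):
              Y = sph_harm(m, l, theta, phi)
              cols.append(Y.real if m >= 0 else Y.imag)
      A = np.column_stack(cols)

      true_coeffs = np.random.normal(0, 1, A.shape[1])
      data = A @ true_coeffs + np.random.normal(0, 0.05, len(theta))   # synthetic "measured" densities

      coeffs, *_ = np.linalg.lstsq(A, data, rcond=None)                # least-squares coefficient solve
      print("Recovered coefficients:", np.round(coeffs, 2))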

  4. CONSTRAINING A MODEL OF TURBULENT CORONAL HEATING FOR AU MICROSCOPII WITH X-RAY, RADIO, AND MILLIMETER OBSERVATIONS

    SciTech Connect

    Cranmer, Steven R.; Wilner, David J.; MacGregor, Meredith A.

    2013-08-01

    Many low-mass pre-main-sequence stars exhibit strong magnetic activity and coronal X-ray emission. Even after the primordial accretion disk has been cleared out, the star's high-energy radiation continues to affect the formation and evolution of dust, planetesimals, and large planets. Young stars with debris disks are thus ideal environments for studying the earliest stages of non-accretion-driven coronae. In this paper we simulate the corona of AU Mic, a nearby active M dwarf with an edge-on debris disk. We apply a self-consistent model of coronal loop heating that was derived from numerical simulations of solar field-line tangling and magnetohydrodynamic turbulence. We also synthesize the modeled star's X-ray luminosity and thermal radio/millimeter continuum emission. A realistic set of parameter choices for AU Mic produces simulated observations that agree with all existing measurements and upper limits. This coronal model thus represents an alternative explanation for a recently discovered ALMA central emission peak that was suggested to be the result of an inner 'asteroid belt' within 3 AU of the star. However, it is also possible that the central 1.3 mm peak is caused by a combination of active coronal emission and a bright inner source of dusty debris. Additional observations of this source's spatial extent and spectral energy distribution at millimeter and radio wavelengths will better constrain the relative contributions of the proposed mechanisms.

  5. Combined assimilation of IASI and MLS observations to constrain tropospheric and stratospheric ozone in a global chemical transport model

    NASA Astrophysics Data System (ADS)

    Emili, E.; Barret, B.; Massart, S.; Le Flochmoen, E.; Piacentini, A.; El Amraoui, L.; Pannekoucke, O.; Cariolle, D.

    2014-01-01

    Accurate and temporally resolved fields of free-troposphere ozone are of major importance to quantify the intercontinental transport of pollution and the ozone radiative forcing. We consider a global chemical transport model (MOdèle de Chimie Atmosphérique à Grande Échelle, MOCAGE) in combination with a linear ozone chemistry scheme to examine the impact of assimilating observations from the Microwave Limb Sounder (MLS) and the Infrared Atmospheric Sounding Interferometer (IASI). The assimilation of the two instruments is performed by means of a variational algorithm (4D-VAR) and allows stratospheric and tropospheric ozone to be constrained simultaneously. The analysis is first computed for the months of August and November 2008 and validated against ozonesonde measurements to check for observation and model biases. Furthermore, a longer analysis of 6 months (July-December 2008) showed that the combined assimilation of MLS and IASI is able to globally reduce the uncertainty (root mean square error, RMSE) of the modeled ozone columns from 30% to 15% in the upper troposphere/lower stratosphere (UTLS, 70-225 hPa). The assimilation of IASI tropospheric ozone observations (1000-225 hPa columns, TOC - tropospheric O3 column) decreases the RMSE of the model from 40% to 20% in the tropics (30° S-30° N), whereas it is not effective at higher latitudes. Results are confirmed by a comparison with additional ozone data sets like the Measurements of OZone and wAter vapour by aIrbus in-service airCraft (MOZAIC) data, the Ozone Monitoring Instrument (OMI) total ozone columns and several high-altitude surface measurements. Finally, the analysis is found to be insensitive to the assimilation parameters. We conclude that the combination of a simplified ozone chemistry scheme with frequent satellite observations is a valuable tool for the long-term analysis of stratospheric and free-tropospheric ozone.
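
    The variational analysis minimizes a cost function combining a background term and an observation term. The toy below is a drastically simplified (3D-Var-like) stand-in for the MOCAGE 4D-VAR system: the state vector, covariance matrices, observation operator, and values are all synthetic and chosen only to make the minimization run.

      import numpy as np
      from scipy.optimize import minimize

      n  = 20                                     # size of the model state (an ozone profile, say)
      xb = np.linspace(30., 300., n)              # background state (placeholder units)
      B  = np.diag((0.2 * xb) ** 2)               # background error covariance (placeholder)

      H = np.zeros((2, n))                        # observation operator: two partial-column "observations"
      H[0, :10] = 0.1
      H[1, 10:] = 0.1
      y = H @ xb * 1.1                            # synthetic observations, 10% above the background
      R = np.diag((0.05 * y) ** 2)                # observation error covariance

      Binv, Rinv = np.linalg.inv(B), np.linalg.inv(R)

      def cost(x):
          db = x - xb
          do = H @ x - y
          return 0.5 * db @ Binv @ db + 0.5 * do @ Rinv @ do

      xa = minimize(cost, xb, method="L-BFGS-B").x   # analysis state
      print("Analysis increment (first 5 elements):", np.round(xa[:5] - xb[:5], 2))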

  6. An Experimental Path to Constraining the Origins of the Jupiter Trojans Using Observations, Theoretical Predictions, and Laboratory Simulants

    NASA Astrophysics Data System (ADS)

    Blacksberg, Jordana; Eiler, John; Brown, Mike; Ehlmann, Bethany; Hand, Kevin; Hodyss, Robert; Mahjoub, Ahmed; Poston, Michael; Liu, Yang; Choukroun, Mathieu; Carey, Elizabeth; Wong, Ian

    2014-11-01

    Hypotheses based on recent dynamical models (e.g. the Nice Model) shape our current understanding of solar system evolution, suggesting radical rearrangement in the first hundreds of millions of years of its history, changing the orbital distances of Jupiter, Saturn, and a large number of small bodies. The goal of this work is to build a methodology to concretely tie individual solar system bodies to dynamical models using observables, providing evidence for their origins and evolutionary pathways. Ultimately, one could imagine identifying a set of chemical or mineralogical signatures that could quantitatively and predictably measure the radial distance at which icy and rocky bodies first accreted. The target of the work presented here is the Jupiter Trojan asteroids, predicted by the Nice Model to have initially formed in the Kuiper belt and later been scattered inward to co-orbit with Jupiter. Here we present our strategy which is fourfold: (1) Generate predictions about the mineralogical, chemical, and isotopic compositions of materials accreted in the early solar system as a function of distance from the Sun. (2) Use temperature and irradiation to simulate evolutionary processing of ices and silicates, and measure the alteration in spectral properties from the UV to mid-IR. (3) Characterize simulants to search for potential fingerprints of origin and processing pathways, and (4) Use telescopic observations to increase our knowledge of the Trojan asteroids, collecting data on populations and using spectroscopy to constrain their compositions. In addition to the overall strategy, we will present preliminary results on compositional modeling, observations, and the synthesis, processing, and characterization of laboratory simulants including ices and silicates. This work has been supported by the Keck Institute for Space Studies (KISS). The research described here was carried out at the Jet Propulsion Laboratory, Caltech, under a contract with the National

  7. Constraining Annual Water Balance Estimates with Basin-Scale Observations from the Airborne Snow Observatory during the Current Californian Drought

    NASA Astrophysics Data System (ADS)

    Bormann, K.; Painter, T. H.; Marks, D. G.; Hedrick, A. R.; Deems, J. S.; Patterson, V.; McGurk, B. J.

    2015-12-01

    One of the great unknowns in mountain hydrology is how much water is stored within a seasonal snowpack at the basin scale. Quantifying mountain water resources is critical for assisting with water resource management, but has proven elusive due to the high spatial and temporal variability of mountain snow cover, complex terrain, accessibility constraints and limited in-situ networks. The Airborne Snow Observatory (ASO, aso.jpl.nasa.gov) uses coupled airborne LiDAR and spectrometer instruments for high-resolution snow depth retrievals, which are used to derive unprecedented basin-wide estimates of snow water mass (snow water equivalent, SWE). ASO has been operational over key basins in the Sierra Nevada Mountains in California since 2013. Each operational year has been very dry, with precipitation in 2013 at 75% of average, 2014 at 50% of average, and 2015 the lowest snow year on record for the region. With vastly improved estimates of the snowpack water content from ASO, we can now for the first time conduct observation-based mass balance accounting of surface water in snow-dominated basins, and reconcile these estimates with observed reservoir inflows. In this study we use ASO SWE data to constrain mass balance accounting of annual basin water storage to quantify the water contained within the snowpack above the Hetch Hetchy water supply reservoir (Tuolumne River basin, California). The analysis compares and contrasts annual snow water volumes from observed reservoir inflows, snow water volume estimates from ASO, a physically based model that simulates the snowpack from meteorological inputs, and a semi-distributed hydrological model. The study provides invaluable insight into the overall volume of water contained within a seasonal snowpack during a severe drought and how these quantities are simulated in our modelling systems. We envisage that this research will be of great interest to snowpack modellers, hydrologists, dam operators and water managers worldwide.
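
    The mass-balance accounting enabled by basin-wide SWE volumes amounts to simple bookkeeping against observed reservoir inflow, sketched below; all volumes are illustrative placeholders, not Tuolumne/Hetch Hetchy values.

      # Back-of-the-envelope basin water balance (all volumes in km^3, placeholders)
      swe_volume_km3  = 0.35    # basin snow-water volume from lidar/spectrometer retrievals
      rain_volume_km3 = 0.05    # non-snow precipitation contribution (assumed)
      et_loss_km3     = 0.10    # evapotranspiration / sublimation losses (assumed)
      observed_inflow = 0.27    # reservoir inflow over the melt season (assumed)

      predicted_inflow = swe_volume_km3 + rain_volume_km3 - et_loss_km3
      residual = observed_inflow - predicted_inflow
      print(f"Predicted inflow {predicted_inflow:.2f} km^3, "
            f"observed {observed_inflow:.2f} km^3, residual {residual:+.2f} km^3")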

  8. Constraining lightning channel growth dynamics by comparison of time domain electromagnetic simulations to Huntsville Alabama Marx Meter Array observations

    NASA Astrophysics Data System (ADS)

    Carlson, B. E.; Bitzer, P. M.; Burchfield, J.

    2015-12-01

    Major unknowns in lightning research include the mechanism and dynamics of lightning channel extension. These processes are simplest during the initial growth of the channel, when the channel is relatively short and has not yet branched extensively throughout the cloud. During this initial growth phase, impulsive electromagnetic emissions (preliminary breakdown pulses) can be well described as produced by current pulses generated as the channel extends, but the overall growth rate, channel geometry, and degree of branching are not known. We approach these issues by examining electric field change measurements made with the Huntsville Alabama Marx Meter Array (HAMMA) during the first few milliseconds of growth of a lightning discharge. We compare HAMMA observations of electromagnetic emissions and overall field change to models of lightning channel growth and development and attempt to constrain the channel growth rate, degree of branching, channel physical properties, and uniformity of the thunderstorm electric field. Preliminary comparisons suggest that the lightning channel branches relatively early in the discharge, though a more complete and detailed analysis will be presented.

  9. Combined assimilation of IASI and MLS observations to constrain tropospheric and stratospheric ozone in a global chemical transport model

    NASA Astrophysics Data System (ADS)

    Emili, E.; Barret, B.; Massart, S.; Le Flochmoen, E.; Piacentini, A.; El Amraoui, L.; Pannekoucke, O.; Cariolle, D.

    2013-08-01

    Accurate and temporally resolved fields of free-troposphere ozone are of major importance to quantify the intercontinental transport of pollution and the ozone radiative forcing. In this study we examine the impact of assimilating ozone observations from the Microwave Limb Sounder (MLS) and the Infrared Atmospheric Sounding Interferometer (IASI) in a global chemical transport model (MOdèle de Chimie Atmosphérique à Grande Échelle, MOCAGE). The assimilation of the two instruments is performed by means of a variational algorithm (4-D-VAR) and allows stratospheric and tropospheric ozone to be constrained simultaneously. The analysis is first computed for the months of August and November 2008 and validated against ozonesonde measurements to check for observation and model biases. It is found that the IASI Tropospheric Ozone Column (TOC, 1000-225 hPa) should be bias-corrected prior to assimilation and the MLS lowermost level (215 hPa) excluded from the analysis. Furthermore, a longer analysis of 6 months (July-December 2008) showed that the combined assimilation of MLS and IASI is able to globally reduce the uncertainty (Root Mean Square Error, RMSE) of the modeled ozone columns from 30% to 15% in the Upper Troposphere/Lower Stratosphere (UTLS, 70-225 hPa) and from 25% to 20% in the free troposphere. The positive effect of assimilating IASI tropospheric observations is very significant at low latitudes (30° S-30° N), whereas it is not demonstrated at higher latitudes. Results are confirmed by a comparison with additional ozone datasets like the Measurements of OZone and wAter vapour by aIrbus in-service airCraft (MOZAIC) data, the Ozone Monitoring Instrument (OMI) total ozone columns and several high-altitude surface measurements. Finally, the analysis is found to be only weakly sensitive to the assimilation parameters and the model chemical scheme, due to the high frequency of satellite observations compared to the average life-time of free

  10. Mechanisms of postseismic relaxation after a great subduction earthquake constrained by cross-scale thermomechanical model and geodetic observations

    NASA Astrophysics Data System (ADS)

    Sobolev, Stephan; Muldashev, Iskander

    2016-04-01

    According to the conventional view, the postseismic relaxation process after a great megathrust earthquake is dominated by fault-controlled afterslip during the first few months to a year, and later by visco-elastic relaxation in the mantle wedge. We test this idea with cross-scale thermomechanical models of the seismic cycle that employ elasticity, a mineral-physics constrained non-linear transient viscous rheology, and rate-and-state friction plasticity. As initial conditions for the models we use thermomechanical models of subduction zones at the geological time-scale, including a narrow subduction channel with low static friction, for two settings: one similar to Southern Chile in the region of the great Chile earthquake of 1960, and one similar to Japan in the region of the Tohoku earthquake of 2011. We next introduce into the same models a classic rate-and-state friction law in the subduction channels, leading to stick-slip instability. The models start to generate spontaneous earthquake sequences, and model parameters are set to closely replicate the co-seismic deformations of the Chile and Japan earthquakes. In order to follow the deformation process in detail during the entire seismic cycle, and over multiple seismic cycles, we use an adaptive time-step algorithm that changes the integration step from 40 s during the earthquake to between minutes and 5 years during the postseismic and interseismic phases. We show that for the case of the Chile earthquake visco-elastic relaxation in the mantle wedge becomes the dominant relaxation process as early as 1 hour after the earthquake, while for the smaller Tohoku earthquake this happens some days after the earthquake. We also show that our model for the Tohoku earthquake is consistent with the geodetic observations for the day-to-4-year time range. We will demonstrate and discuss modeled deformation patterns during seismic cycles and identify the regions where the effects of afterslip and visco-elastic relaxation can be best distinguished.

  11. Constraining the particle nature of dark matter: Model-independent tests from the intersection of theory and observation

    NASA Astrophysics Data System (ADS)

    Mack, Gregory Daniel

    Dark matter is one of the greatest mysteries of modern astrophysics. It comprises about 83% of the matter density in the Universe and approximately 22% of the total energy density, yet its identity and particle properties are unknown. Gravitational interactions reveal its presence, but it does not readily interact with light or normal matter. The purpose of this dissertation is to provide insight into the particle properties of this exotic type of matter in a model-independent fashion. Dark matter is expected to be its own antiparticle, but the strength of its self-annihilation is not known. It is often assumed to be consistent with that which gives the correct abundance if dark matter were produced as a thermal relic in the early Universe, but that has not been proven. Constraints on the dark matter self-annihilation cross section are found over a wide range of masses, both for the separate cases of monoenergetic neutrino and monoenergetic photon production, and the corresponding limits on the total self-annihilation cross section. This is done by comparing the theoretical flux from a region of annihilating dark matter to observational data of that region. While larger than the thermal relic value, the resulting upper bounds are surprisingly stringent and among the first model-independent limits of their kind. A specific application of residual dark matter annihilations during the time of Big Bang Nucleosynthesis is analyzed, adding a lower limit to the value of the annihilation cross section for a certain mass range to couple with the calculated upper bounds mentioned above. The interaction strength of dark matter with normal matter is constrained by the case of dark matter capture in Earth and the resulting heat flow from annihilation in the core. When compared to observation, the analysis rules out many possible interaction strengths between dark matter and normal matter, showing that the interaction, as measured by the interaction cross section, must be truly

  12. Search for Point Sources of High Energy Neutrinos with Final Data from AMANDA-II

    SciTech Connect

    IceCube Collaboration; Klein, Spencer

    2009-03-06

    We present a search for point sources of high energy neutrinos using 3.8 years of data recorded by AMANDA-II during 2000-2006. After reconstructing muon tracks and applying selection criteria designed to optimally retain neutrino-induced events originating in the Northern Sky, we arrive at a sample of 6595 candidate events, predominantly from atmospheric neutrinos with primary energy 100 GeV to 8 TeV. Our search of this sample reveals no indications of a neutrino point source. We place the most stringent limits to date on E^-2 neutrino fluxes from points in the Northern Sky, with an average upper limit of E^2 Φ(ν_μ + ν_τ) ≤ 5.2 x 10^-11 TeV cm^-2 s^-1 on the sum of ν_μ and ν_τ fluxes, assumed equal, over the energy range from 1.9 TeV to 2.5 PeV.

  13. Multi-year search for a diffuse flux of muon neutrinos with AMANDA-II

    SciTech Connect

    IceCube Collaboration; Klein, Spencer; Achterberg, A.

    2008-04-13

    A search for TeV-PeV muon neutrinos from unresolved sources was performed on AMANDA-II data collected between 2000 and 2003 with an equivalent livetime of 807 days. This diffuse analysis sought to find an extraterrestrial neutrino flux from sources with non-thermal components. The signal is expected to have a harder spectrum than the atmospheric muon and neutrino backgrounds. Since no excess of events was seen in the data over the expected background, an upper limit of E^2 Φ(90% C.L.) < 7.4 x 10^-8 GeV cm^-2 s^-1 sr^-1 is placed on the diffuse flux of muon neutrinos with a Φ ∝ E^-2 spectrum in the energy range 16 TeV to 2.5 PeV. This is currently the most sensitive Φ ∝ E^-2 diffuse astrophysical neutrino limit. We also set upper limits for astrophysical and prompt neutrino models, all of which have spectra different than Φ ∝ E^-2.

  14. A maximum-likelihood search for neutrino point sources with the AMANDA-II detector

    NASA Astrophysics Data System (ADS)

    Braun, James R.

    Neutrino astronomy offers a new window to study the high energy universe. The AMANDA-II detector records neutrino-induced muon events in the ice sheet beneath the geographic South Pole, and has accumulated 3.8 years of livetime from 2000 - 2006. After reconstructing muon tracks and applying selection criteria, we arrive at a sample of 6595 events originating from the Northern Sky, predominantly atmospheric neutrinos with primary energy 100 GeV to 8 TeV. We search these events for evidence of astrophysical neutrino point sources using a maximum-likelihood method. No excess above the atmospheric neutrino background is found, and we set upper limits on neutrino fluxes. Finally, a well-known potential dark matter signature is emission of high energy neutrinos from annihilation of WIMPs gravitationally bound to the Sun. We search for high energy neutrinos from the Sun and find no excess. Our limits on WIMP-nucleon cross section set new constraints on MSSM parameter space.
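
    The likelihood at the heart of such a search can be sketched compactly: each event contributes a mixture of a signal PDF (peaked at the candidate source position) and a background PDF, weighted by the number of signal events n_s, which is then fit. The snippet below is a toy version with made-up PDF values; the real analysis also fits a source spectral index and uses the detector point-spread function and event energy information.

      # Toy unbinned point-source likelihood scan (illustrative only).
      import numpy as np

      def neg_log_likelihood(n_s, signal_pdf, background_pdf):
          """-ln L(n_s) = -sum_i ln[ (n_s/N) S_i + (1 - n_s/N) B_i ]."""
          n_total = len(signal_pdf)
          mix = n_s / n_total * signal_pdf + (1.0 - n_s / n_total) * background_pdf
          return -np.sum(np.log(mix))

      def best_fit_ns(signal_pdf, background_pdf, ns_max=50.0, n_grid=501):
          """Grid scan of n_s; return the value minimising -ln L."""
          grid = np.linspace(0.0, ns_max, n_grid)
          values = [neg_log_likelihood(ns, signal_pdf, background_pdf) for ns in grid]
          return grid[int(np.argmin(values))]

      # Synthetic per-event PDF values (placeholders, not AMANDA data):
      rng = np.random.default_rng(0)
      S = rng.exponential(1.0, size=6595)      # signal PDF evaluated at each event
      B = np.ones(6595)                        # flat background PDF
      print(best_fit_ns(S, B))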

  15. Combined assimilation of IASI and MLS observations to constrain tropospheric and stratospheric ozone in a global chemical transport model

    NASA Astrophysics Data System (ADS)

    Emili, Emanuele; Barret, Brice; Massart, Sebastien; Piacentini, Andrea; Pannekoucke, Olivier; Cariolle, Daniel

    2013-04-01

    Ozone acts as the main shield against UV radiation in the stratosphere, contributes to the greenhouse effect in the troposphere, and is a major pollutant in the planetary boundary layer. In recent decades, models and satellite observations have reached a mature level, providing estimates of ozone with an accuracy of a few percent in the stratosphere. Tropospheric ozone, on the other hand, still represents a challenge, because its signal is less detectable by space-borne sensors, its modelling depends on knowledge of gaseous emissions at the surface, and stratosphere-troposphere exchanges can rapidly increase its abundance severalfold. Moreover, there is a general lack of in-situ observations of tropospheric ozone in many regions of the world. For these reasons, the assimilation of satellite data into chemical transport models is a promising technique to overcome the limitations of both satellites and models. The objective of this study is to assess the value of vertically resolved observations from the Infrared Atmospheric Sounding Interferometer (IASI) and the Microwave Limb Sounder (MLS) for constraining both the tropospheric and stratospheric ozone profile in a global model. While ozone total columns and stratospheric profiles from UV and microwave sensors are nowadays routinely assimilated in operational models, few studies have explored the assimilation of ozone products from IR sensors such as IASI, which provide better sensitivity in the troposphere. We assimilate both MLS ozone profiles and IASI tropospheric (1000-225 hPa) ozone columns in the Météo France chemical transport model MOCAGE for 2008. The model predicts ozone concentrations on a 2x2 degree global grid and on 60 vertical levels, ranging from the surface up to 0.1 hPa. The assimilation is based on a 4D-VAR algorithm, employs a linear chemistry scheme, and accounts for the satellite vertical sensitivity via the averaging kernels. The assimilation of the two products is first tested
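
    A compact way to see how the averaging kernels enter: the model state is mapped into retrieval space as y_hat = x_a + A (H x - x_a) before being compared with the retrieved product inside the variational cost function. The sketch below is a stripped-down, single-time illustration with placeholder matrices; the operational system is a full 4D-Var wrapped around the MOCAGE transport model.

      # Schematic variational cost with an averaging-kernel observation operator.
      import numpy as np

      def obs_operator(x_model, avg_kernel, x_apriori, vert_integration):
          """y_hat = x_a + A (H x - x_a): model ozone seen through the retrieval."""
          return x_apriori + avg_kernel @ (vert_integration @ x_model - x_apriori)

      def cost(x, x_b, b_inv, y_obs, r_inv, avg_kernel, x_apriori, vert_integration):
          """J(x) = (x - x_b)^T B^-1 (x - x_b) + (y - y_hat)^T R^-1 (y - y_hat)."""
          d_b = x - x_b
          d_o = y_obs - obs_operator(x, avg_kernel, x_apriori, vert_integration)
          return float(d_b @ b_inv @ d_b + d_o @ r_inv @ d_o)

      # Toy setup: 5 model levels observed by a 2-element retrieval (all values placeholders).
      x_b = np.ones(5)
      H = np.array([[0.5, 0.5, 0.0, 0.0, 0.0],
                    [0.0, 0.0, 0.4, 0.3, 0.3]])          # vertical integration operator
      A = np.array([[0.8, 0.1],
                    [0.1, 0.6]])                         # averaging kernel
      x_a = np.array([1.0, 1.0])
      y = np.array([1.2, 0.9])
      print(cost(x_b, x_b, np.eye(5), y, np.eye(2), A, x_a, H))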

  16. Constraining U.S. ammonia emissions using TES remote sensing observations and the GEOS-Chem adjoint model

    EPA Science Inventory

    Ammonia (NH3) has significant impacts on biodiversity, eutrophication, and acidification. Widespread uncertainty in the magnitude and seasonality of NH3 emissions hinders efforts to address these issues. In this work, we constrain U.S. NH3 sources using obse...

  17. Identification of changes in hydrological drought characteristics from a multi-GCM driven ensemble constrained by observed discharge

    NASA Astrophysics Data System (ADS)

    van Huijgevoort, M. H. J.; van Lanen, H. A. J.; Teuling, A. J.; Uijlenhoet, R.

    2014-05-01

    Drought severity and related socio-economic impacts are expected to increase due to climate change. To better adapt to these impacts, more knowledge of changes in future hydrological drought characteristics (e.g. frequency, duration) is needed, rather than only knowledge of changes in meteorological or soil-moisture drought characteristics. In this study, the effects of climate change on droughts in several river basins across the globe were investigated. Downscaled and bias-corrected data from three General Circulation Models (GCMs) for the A2 emission scenario were used as forcing for large-scale models. Results from five large-scale hydrological models (GHMs) run within the EU-WATCH project were used to identify low flows and hydrological drought characteristics in the control period (1971-2000) and the future period (2071-2100). Low flows were defined by the monthly 20th percentile of discharge (Q20). The variable threshold level method was applied to determine hydrological drought characteristics. The climatology of normalized Q20 from model results for the control period was compared with the climatology of normalized Q20 from observed discharge of the Global Runoff Data Centre. An observation-constrained selection of model combinations (GHM and GCM) was made based on this comparison. Prior to the assessment of future change, the selected model combinations were evaluated against observations in the period 2001-2010 for a number of river basins. The majority of the combinations (82%) that performed sufficiently well in the control period also performed sufficiently well in the period 2001-2010. With the selected model combinations, future changes in drought for each river basin were identified. In cold climates, model combinations projected a regime shift and an increase in low flows between the control period and the future period. Arid climates were projected to become even drier in the future by all model combinations. Agreement between the combinations on future low flows was
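
    The two diagnostics named above are simple to state in code. The sketch below (hypothetical data, not the study's discharge series) computes the monthly 20th percentile of discharge (Q20) used to define low flows and flags time steps falling below a monthly varying threshold, which is the essence of the variable threshold level method.

      # Monthly Q20 low flows and a variable (monthly) drought threshold (illustrative).
      import numpy as np

      def monthly_q20(discharge, month_index):
          """20th percentile of discharge for each calendar month (1..12)."""
          return {m: float(np.percentile(discharge[month_index == m], 20)) for m in range(1, 13)}

      def below_threshold(discharge, month_index, threshold_by_month):
          """Boolean mask of time steps in hydrological drought (flow < monthly threshold)."""
          thresholds = np.array([threshold_by_month[m] for m in month_index])
          return discharge < thresholds

      # Hypothetical 30-year monthly discharge series:
      rng = np.random.default_rng(1)
      months = np.tile(np.arange(1, 13), 30)
      flow = rng.gamma(shape=2.0, scale=50.0, size=months.size)
      q20 = monthly_q20(flow, months)
      drought = below_threshold(flow, months, q20)
      print(f"fraction of months below threshold: {drought.mean():.2f}")   # ~0.20 by construction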

  18. The Energy Spectrum of Atmospheric Neutrinos between 2 and 200 TeV with the AMANDA-II Detector

    SciTech Connect

    IceCube Collaboration; Abbasi, R.

    2010-05-11

    The muon and anti-muon neutrino energy spectrum is determined from 2000-2003 AMANDA telescope data using regularised unfolding. This is the first measurement of atmospheric neutrinos in the energy range 2-200 TeV. The result is compared to different atmospheric neutrino models and it is compatible with the atmospheric neutrinos from pion and kaon decays. No significant contribution from charm hadron decays or extraterrestrial neutrinos is detected. The capabilities to improve the measurement of the neutrino spectrum with the successor experiment IceCube are discussed.
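
    The regularised unfolding referred to above amounts to solving the smearing relation n = R φ for the neutrino spectrum φ while penalising unphysically rough solutions. A minimal Tikhonov-style sketch is given below; the published analysis used a dedicated unfolding package with more careful regularisation and error treatment, so this is illustrative only and all numbers are placeholders.

      # Toy Tikhonov-regularised unfolding of a smeared spectrum (illustrative only).
      import numpy as np

      def unfold(counts, response, tau):
          """Minimise ||R phi - n||^2 + tau ||L phi||^2, with L a second-difference operator."""
          nbins = response.shape[1]
          L = (np.diag(-2.0 * np.ones(nbins))
               + np.diag(np.ones(nbins - 1), 1)
               + np.diag(np.ones(nbins - 1), -1))
          lhs = response.T @ response + tau * (L.T @ L)
          return np.linalg.solve(lhs, response.T @ counts)

      # Placeholder spectrum and a smearing matrix mixing neighbouring energy bins:
      true_spectrum = np.array([100.0, 50.0, 20.0, 8.0, 3.0])
      R = 0.7 * np.eye(5) + 0.15 * np.eye(5, k=1) + 0.15 * np.eye(5, k=-1)
      observed = R @ true_spectrum
      print(unfold(observed, R, tau=0.1))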

  19. Constraining a land-surface model with multiple observations by application of the MPI-Carbon Cycle Data Assimilation System V1.0

    NASA Astrophysics Data System (ADS)

    Schürmann, Gregor J.; Kaminski, Thomas; Köstler, Christoph; Carvalhais, Nuno; Voßbeck, Michael; Kattge, Jens; Giering, Ralf; Rödenbeck, Christian; Heimann, Martin; Zaehle, Sönke

    2016-09-01

    We describe the Max Planck Institute Carbon Cycle Data Assimilation System (MPI-CCDAS) built around the tangent-linear version of the JSBACH land-surface scheme, which is part of the MPI-Earth System Model v1. The simulated phenology and net land carbon balance were constrained by globally distributed observations of the fraction of absorbed photosynthetically active radiation (FAPAR, using the TIP-FAPAR product) and atmospheric CO2 at a global set of monitoring stations for the years 2005 to 2009. When constrained by FAPAR observations alone, the system successfully, and computationally efficiently, improved simulated growing-season average FAPAR, as well as its seasonality in the northern extra-tropics. When constrained by atmospheric CO2 observations alone, global net and gross carbon fluxes were improved, despite a tendency of the system to underestimate tropical productivity. Assimilating both data streams jointly allowed the MPI-CCDAS to match both observations (TIP-FAPAR and atmospheric CO2) as well as in the single-data-stream assimilation cases, thereby increasing the overall appropriateness of the simulated biosphere dynamics and underlying parameter values. Our study thus demonstrates the value of multiple-data-stream assimilation for the simulation of terrestrial biosphere dynamics. It further highlights the potential role of remote sensing data, here the TIP-FAPAR product, in stabilising the strongly underdetermined atmospheric inversion problem posed by atmospheric transport and CO2 observations alone. Notwithstanding these advances, the constraint imposed by the observations on regional gross and net CO2 flux patterns in the MPI-CCDAS is limited by the coarse-scale parametrisation of the biosphere model. We expect improvement through a refined initialisation strategy and the inclusion of further biosphere observations as constraints.

  20. Mothers, daughters and midlife (self)-discoveries: gender and aging in the Amanda Cross' Kate Fansler series.

    PubMed

    Domínguez-Rué, Emma

    2012-12-01

    In the same way that many aspects of gender cannot be understood aside from their relationship to race, class, culture, nationality and/or sexuality, the interactions between gender and aging constitute an interesting field for academic research, without which we cannot gain full insight into the complex and multi-faceted nature of gender studies. Although the American writer and Columbia professor Carolyn Gold Heilbrun (1926-2003) is more widely known for her best-selling mystery novels, published under the pseudonym of Amanda Cross, she also authored remarkable pieces of non-fiction in which she asserted her long-standing commitment to feminism, while she also challenged established notions on women and aging and advocated for a reassessment of those negative views. To my mind, the Kate Fansler novels became an instrument to reach a massive audience of female readers who might not have read her non-fiction, but who were perhaps finding it difficult to reach fulfillment as women under patriarchy, especially upon reaching middle age. Taking her essays in feminism and literary criticism as a basis and her later fiction as substantiation of my argument, this paper will try to reveal the ways in which Heilbrun's seemingly more superficial and much more commercial mystery novels as Amanda Cross were used as a catalyst that informed her feminist principles while vindicating the need to rethink issues concerning literary representations of mature women and cultural stereotypes about motherhood.

  1. On the convergence of ionospheric constrained precise point positioning (IC-PPP) based on undifferential uncombined raw GNSS observations.

    PubMed

    Zhang, Hongping; Gao, Zhouzheng; Ge, Maorong; Niu, Xiaoji; Huang, Ling; Tu, Rui; Li, Xingxing

    2013-11-18

    Precise Point Positioning (PPP) has become a very hot topic in GNSS research and applications. However, it usually takes several tens of minutes to obtain positions with better than 10 cm accuracy. This prevents PPP from being widely used in real-time kinematic positioning services, and therefore a large effort has been made to tackle the convergence problem. One of the recent approaches is ionospheric-delay-constrained precise point positioning (IC-PPP), which uses the spatial and temporal characteristics of ionospheric delays as well as delays from an a priori model. In this paper, the impact of the quality of ionospheric models on the convergence of IC-PPP is evaluated using the IGS global ionospheric map (GIM), updated every two hours, and a regional satellite-specific correction model. Furthermore, the effect of the receiver differential code bias (DCB) is investigated by comparing the convergence time for IC-PPP with and without estimation of the DCB parameter. The results of processing a large amount of data show that, on the one hand, the quality of the a priori ionospheric delays plays a very important role in IC-PPP convergence. Generally, regional dense GNSS networks can provide more precise ionospheric delays than GIM and can consequently reduce the convergence time. On the other hand, ignoring the receiver DCB may considerably extend the convergence time, and the larger the DCB, the longer the convergence time. Estimating the receiver DCB in IC-PPP is a proper way to overcome this problem. Therefore, current IC-PPP should be enhanced by estimating the receiver DCB and employing regional satellite-specific ionospheric correction models in order to speed up its convergence for practical applications.
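
    A toy illustration of why the receiver DCB matters for IC-PPP (made-up numbers; the sign convention and the lumping of the whole bias onto the second frequency are simplifications, not the paper's formulation): when the slant ionospheric delay is inferred from the geometry-free pseudorange combination and tied to an a priori model, an unmodelled receiver DCB maps directly into a bias of that delay, which in turn slows or distorts the constrained solution.

      # Effect of an ignored receiver DCB on the geometry-free ionosphere estimate (toy example).
      F1, F2 = 1575.42e6, 1227.60e6        # GPS L1/L2 frequencies [Hz]
      GAMMA = (F1 / F2) ** 2               # ionospheric delay ratio L2/L1
      C = 299_792_458.0                    # speed of light [m/s]

      rho = 22_000_000.0                   # geometry + clocks + troposphere, common to P1/P2 [m]
      iono_l1 = 3.5                        # "true" slant ionospheric delay on L1 [m]
      dcb = C * 4e-9                       # 4 ns receiver differential code bias, in metres

      # Simulated pseudoranges (noise omitted; whole DCB lumped onto P2 for simplicity):
      P1 = rho + iono_l1
      P2 = rho + GAMMA * iono_l1 + dcb

      iono_ignoring_dcb = (P2 - P1) / (GAMMA - 1.0)
      iono_with_dcb = (P2 - P1 - dcb) / (GAMMA - 1.0)
      print(f"true {iono_l1:.2f} m | ignoring DCB {iono_ignoring_dcb:.2f} m | "
            f"estimating DCB {iono_with_dcb:.2f} m")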

  2. Constraining precipitation initiation in marine stratocumulus using aircraft observations and LES with high spectral resolution bin microphysics

    NASA Astrophysics Data System (ADS)

    Witte, M.; Chuang, P. Y.; Rossiter, D.; Ayala, O.; Wang, L. P.

    2015-12-01

    Turbulence has been suggested as one possible mechanism to accelerate the onset of autoconversion and widen the process "bottleneck" in the formation of warm rain. While direct observation of the collision-coalescence process remains beyond the reach of present-day instrumentation, co-located sampling of atmospheric motion and the drop size spectrum allows for comparison of in situ observations with simulation results to test representations of drop growth processes. This study evaluates whether observations of drops in the autoconversion regime can be replicated using our best theoretical understanding of collision-coalescence. A state-of-the-art turbulent collisional growth model is applied to a bin microphysics scheme within a large-eddy simulation such that the full range of cloud drop growth mechanisms is represented (i.e. CCN activation, condensation, collision-coalescence, mixing, etc.) under realistic atmospheric conditions. The spectral resolution of the microphysics scheme has been quadrupled in order to (a) more closely match the resolution of the observational instrumentation and (b) limit numerical diffusion, which leads to spurious broadening of the drop size spectrum at standard mass-doubling resolution. We compare simulated cloud drop spectra with those obtained from aircraft observations to assess the quality and limits of our theoretical knowledge. The comparison is performed for two observational cases from the Physics of Stratocumulus Top (POST) field campaign: 12 August 2008 (drizzling night flight, Rmax~2 mm/d) and 15 August 2008 (nondrizzling day flight, Rmax<0.5 mm/d). Both flights took place off the coast of Monterey, CA and the two cases differ in their radiative cooling rates, shear, cloud-top temperature and moisture jumps, and entrainment rates. Initial results from a collision box model suggest that enhancements of approximately 2 orders of magnitude over theoretical turbulent collision rates may be necessary to reproduce the

  3. Fault and anthropogenic processes in central California constrained by satellite and airborne InSAR and in-situ observations

    NASA Astrophysics Data System (ADS)

    Liu, Zhen; Lundgren, Paul

    2016-07-01

    , but are subject to severe decorrelation. The L-band ALOS and UAVSAR SAR sensors provide improved coherence compared to the shorter wavelength radar data. Joint analysis of UAVSAR and ALOS interferometry measurements shows clear variability in deformation along the fault strike, suggesting variable fault creep and locking at depth and along strike. Modeling selected fault transects reveals a distinct change in surface creep and shallow slip deficit from the central creeping section towards the Parkfield transition. In addition to fault creep, the L-band ALOS, and especially ALOS-2 ScanSAR interferometry, show large-scale ground subsidence in the SJV due to over-exploitation of groundwater. Groundwater-related deformation is spatially and temporally variable and is composed of both recoverable elastic and non-recoverable inelastic components. InSAR time series are compared to GPS and well-water hydraulic head in-situ time series to understand water storage processes and mass loading changes. We are currently developing poroelastic finite element method models to assess the influence of anthropogenic processes on surface deformation and fault mechanics. Ongoing work aims to better constrain both tectonic and non-tectonic processes and to understand their interaction and implications for regional earthquake hazard.

  4. Gravitational-wave Observations May Constrain Gamma-Ray Burst Models: The Case of GW150914-GBM

    NASA Astrophysics Data System (ADS)

    Veres, P.; Preece, R. D.; Goldstein, A.; Mészáros, P.; Burns, E.; Connaughton, V.

    2016-08-01

    The possible short gamma-ray burst (GRB) observed by Fermi/GBM in coincidence with the first gravitational-wave (GW) detection offers new ways to test GRB prompt emission models. GW observations provide previously inaccessible physical parameters for the black hole central engine such as its horizon radius and rotation parameter. Using a minimum jet launching radius from the Advanced LIGO measurement of GW 150914, we calculate photospheric and internal shock models and find that they are marginally inconsistent with the GBM data, but cannot be definitely ruled out. Dissipative photosphere models, however, have no problem explaining the observations. Based on the peak energy and the observed flux, we find that the external shock model gives a natural explanation, suggesting a low interstellar density (˜10-3 cm-3) and a high Lorentz factor (˜2000). We only speculate on the exact nature of the system producing the gamma-rays, and study the parameter space of a generic Blandford-Znajek model. If future joint observations confirm the GW-short-GRB association we can provide similar but more detailed tests for prompt emission models.

  5. Constraining the dark fluid

    SciTech Connect

    Kunz, Martin; Liddle, Andrew R.; Parkinson, David; Gao Changjun

    2009-10-15

    Cosmological observations are normally fit under the assumption that the dark sector can be decomposed into dark matter and dark energy components. However, as long as the probes remain purely gravitational, there is no unique decomposition and observations can only constrain a single dark fluid; this is known as the dark degeneracy. We use observations to directly constrain this dark fluid in a model-independent way, demonstrating, in particular, that the data cannot be fit by a dark fluid with a single constant equation of state. Parametrizing the dark fluid equation of state by a variety of polynomials in the scale factor a, we use current kinematical data to constrain the parameters. While the simplest interpretation of the dark fluid remains that it is comprised of separate dark matter and cosmological constant contributions, our results cover other model types including unified dark energy/matter scenarios.
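
    A minimal sketch of the parametrization described above (placeholder coefficients, not a fit to the data used in the paper): the dark fluid's equation of state is written as a low-order polynomial in the scale factor, w(a) = sum_k w_k a^k, its density then follows from rho(a)/rho_0 = exp[3 * integral from a to 1 of (1 + w(a'))/a' da'], and the resulting expansion rate E(z) = H(z)/H_0 can be compared with kinematical data.

      # Expansion history for a single dark fluid with polynomial w(a) (illustrative).
      import numpy as np

      def dark_fluid_density(a, w_coeffs):
          """rho_d(a)/rho_d(1) for w(a) = sum_k w_coeffs[k] * a**k (numerical integral)."""
          a_grid = np.linspace(a, 1.0, 2001)
          w = sum(c * a_grid ** k for k, c in enumerate(w_coeffs))
          integrand = (1.0 + w) / a_grid
          integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(a_grid))
          return np.exp(3.0 * integral)

      def expansion_rate(z, omega_baryon=0.05, w_coeffs=(-0.7, -0.1)):
          """E(z) = H(z)/H0 for baryons plus one dark fluid, assuming flatness."""
          a = 1.0 / (1.0 + z)
          omega_dark = 1.0 - omega_baryon
          return np.sqrt(omega_baryon * (1.0 + z) ** 3
                         + omega_dark * dark_fluid_density(a, w_coeffs))

      print(expansion_rate(1.0))   # placeholder coefficients: w(a) = -0.7 - 0.1 a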

  6. H(∞) constrained fuzzy control via state observer feedback for discrete-time Takagi-Sugeno fuzzy systems with multiplicative noises.

    PubMed

    Chang, Wen-Jer; Wu, Wen-Yuan; Ku, Cheung-Chieh

    2011-01-01

    The purpose of this paper is to study the H(∞) constrained fuzzy controller design problem for discrete-time Takagi-Sugeno (T-S) fuzzy systems with multiplicative noises by using the state observer feedback technique. The proposed fuzzy controller design approach is developed based on the Parallel Distributed Compensation (PDC) technique. Through the Lyapunov stability criterion, the stability analysis is completed to develop stability conditions for the closed-loop systems. In addition, the H(∞) performance constraint is also considered in the stability condition derivations for the worst-case effect of disturbance on system states. Solving these stability conditions via the two-step Linear Matrix Inequality (LMI) algorithm, the observer-based fuzzy controller is obtained to achieve stability and the H(∞) performance constraint simultaneously. Finally, a numerical example is provided to verify the applicability and effectiveness of the proposed fuzzy control approach.

  7. The compositional and thermal structure of the lithosphere from thermodynamically-constrained multi-observable probabilistic inversion

    NASA Astrophysics Data System (ADS)

    Afonso, J. C.; Fullea, J.; Yang, Y.; Jones, A. G.; Griffin, W. L.; Connolly, J. A. D.; O'Reilly, S. Y.; Lebedev, S.

    2012-04-01

    Our capacity to image and characterize the thermal and compositional structure of the lithospheric and sublithospheric upper mantle is a fundamental prerequisite for understanding the formation and evolution of the lithosphere, the interaction between the crust-mantle and lithosphere-asthenosphere systems, and the nature of the lithosphere-asthenosphere boundary (LAB). In this context, the conversion of geophysical observables (e.g. travel-time data, gravity anomalies, etc.) into robust estimates of the true physical and chemical state of the Earth's interior plays a major role. Unfortunately, available methods/software used to make such conversions are not well suited to deal with one or more of the following problems: 1) Strong non-linearity of the system. Traditional linearized inversions do not generally provide reliable estimates. 2) The temperature effect on geophysical observables is much greater than the compositional effect; therefore, the latter is much harder to isolate. 3) Non-uniqueness of the compositional field. Different compositions can fit seismic and potential-field observations equally well. 4) Strong correlations between physical parameters and geophysical observables complicate the inversion procedure, and their effects are poorly understood. 5) Trade-off between temperature and composition in wave speeds. In this contribution we present a new full-3D multi-observable inversion method particularly designed to circumvent these problems. Some other key aspects of the method are: a) it combines multiple datasets (ambient noise tomography, receiver function analysis, body-wave tomography, magnetotelluric, geothermal, petrological, and gravity) in a single thermodynamic-geophysical framework, b) a general probabilistic (Bayesian) formulation is used to appraise the data, c) neither initial models nor well-defined a priori information is required, and d) it provides realistic uncertainty estimates. Both synthetic models and preliminary results for real

  8. Constraining star formation and AGN in z ~ 2 massive galaxies using high-resolution MERLIN radio observations

    NASA Astrophysics Data System (ADS)

    Casey, C. M.; Chapman, S. C.; Muxlow, T. W. B.; Beswick, R. J.; Alexander, D. M.; Conselice, C. J.

    2009-05-01

    We present high spatial resolution Multi-Element Radio-Linked Interferometer Network (MERLIN) 1.4-GHz radio observations of two high-redshift (z ~ 2) sources, RGJ123623 (HDF147) and RGJ123617 (HDF130), selected as the brightest radio sources from a sample of submillimetre-faint radio galaxies. They have starburst classifications from their rest-frame ultraviolet spectra. However, their radio morphologies are remarkably compact (<80 and <65 mas, respectively), demanding that the radio luminosity be dominated by active galactic nuclei (AGN) rather than starbursts. Near-infrared (IR) imaging [Hubble Space Telescope Near Infrared Camera and Multi-Object Spectrometer (NICMOS) F160W] shows large-scale sizes (R_1/2 ~ 0.75 arcsec, diameters ~12 kpc) and spectral energy distribution (SED) fitting to photometric points (optical through the mid-IR) reveals massive (~5 × 10^11 solar masses), old (a few Gyr) stellar populations. Both sources have low flux densities at observed 24 μm and are undetected in observed 70 μm and 850 μm, suggesting a low mass of interstellar dust. They are also formally undetected in the ultradeep 2 Ms Chandra data, suggesting that any AGN activity is likely intrinsically weak. We suggest both galaxies have evolved stellar populations, low star formation rates and low accretion rates on to massive black holes (10^8.6 solar masses) whose radio luminosity is weakly beamed (by factors of a few). A cluster-like environment has been identified near HDF130 by an overdensity of galaxies at z = 1.99, reinforcing the claim that clusters lead to more rapid evolution in galaxy populations. These observations suggest that high-resolution radio (MERLIN) can be a superb diagnostic tool of AGN in the diverse galaxy populations at z ~ 2.

  9. A Synthesized Model-Observation Approach to Constraining Gross Urban CO2 Fluxes Using 14CO2 and carbonyl sulfide

    NASA Astrophysics Data System (ADS)

    LaFranchi, B. W.; Campbell, J. E.; Cameron-Smith, P. J.; Bambha, R.; Michelsen, H. A.

    2013-12-01

    Urbanized regions are responsible for a disproportionately large percentage (30-40%) of global anthropogenic greenhouse gas (GHG) emissions, despite covering only 2% of the Earth's surface area [Satterthwaite, 2008]. As a result, policies enacted at the local level in these urban areas can, in aggregate, have a large global impact, both positive and negative. In order to address the scientific questions that are required to drive these policy decisions, methods are needed that resolve gross CO2 flux components from the net flux. Recent work suggests that the critical knowledge gaps in CO2 surface fluxes could be addressed through the combined analysis of atmospheric carbonyl sulfide (COS) and radiocarbon in atmospheric CO2 (14CO2) [e.g. Campbell et al., 2008; Graven et al., 2009]. The 14CO2 approach relies on mass balance assumptions about atmospheric CO2 and the large differences in 14CO2 abundance between fossil and natural sources of CO2 [Levin et al., 2003]. COS, meanwhile, is a potentially transformative tracer of photosynthesis because its variability in the atmosphere has been found to be influenced primarily by vegetative uptake, scaling linearly with gross primary production (GPP) [Kettle et al., 2002]. Taken together, these two observations provide constraints on two of the three main components of the CO2 budget at the urban scale: photosynthesis and fossil fuel emissions. The third component, respiration, can then be determined by difference if the net flux is known. Here we present a general overview of our synthesized model-observation approach for improving surface flux estimates of CO2 for the upwind fetch of a ~30 m tower located in Livermore, CA, USA, a suburb (pop. ~80,000) at the eastern edge of the San Francisco Bay Area. Additionally, we will present initial results from a one-week observational intensive, which includes continuous CO2, CH4, CO, SO2, NOx, and O3 observations in addition to measurements of 14CO2 and COS from air samples.
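
    The 14CO2 side of the mass balance mentioned above is short enough to write out: fossil-fuel CO2 contains no 14C (Delta14C = -1000 permil), so the depletion of Delta14C in a sample relative to background air measures the fossil-fuel CO2 added. The sketch below uses the common simplifying assumption that non-fossil additions carry background Delta14C; all numbers are hypothetical, not values from this study.

      # Fossil-fuel CO2 enhancement from a two-member 14CO2 mass balance (illustrative).
      DELTA_FOSSIL = -1000.0   # permil; fossil carbon is 14C-free by definition

      def fossil_co2_ppm(co2_measured_ppm, d14c_measured, d14c_background):
          """C_ff = C_meas * (D_bg - D_meas) / (D_bg - D_fossil), with D in permil."""
          return co2_measured_ppm * (d14c_background - d14c_measured) / (d14c_background - DELTA_FOSSIL)

      # Hypothetical urban sample versus background air:
      print(f"{fossil_co2_ppm(410.0, 20.0, 45.0):.1f} ppm of fossil-fuel CO2")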

  10. Observationally constrained modeling of sound in curved ocean internal waves: examination of deep ducting and surface ducting at short range.

    PubMed

    Duda, Timothy F; Lin, Ying-Tsong; Reeder, D Benjamin

    2011-09-01

    A study of 400 Hz sound focusing and ducting effects in a packet of curved nonlinear internal waves in shallow water is presented. Sound propagation roughly along the crests of the waves is simulated with a three-dimensional parabolic equation computational code, and the results are compared to measured propagation along fixed 3 and 6 km source/receiver paths. The measurements were made on the shelf of the South China Sea northeast of Tung-Sha Island. Construction of the time-varying three-dimensional sound-speed fields used in the modeling simulations was guided by environmental data collected concurrently with the acoustic data. Computed three-dimensional propagation results compare well with field observations. The simulations allow identification of time-dependent sound forward scattering and ducting processes within the curved internal gravity waves. Strong acoustic intensity enhancement was observed during passage of high-amplitude nonlinear waves over the source/receiver paths, and is replicated in the model. The waves were typical of the region (35 m vertical displacement). Two types of ducting are found in the model, which occur asynchronously. One type is three-dimensional modal trapping in deep ducts within the wave crests (shallow thermocline zones). The second type is surface ducting within the wave troughs (deep thermocline zones).

  11. Magnetotelluric observations over the Rhine Graben, France: a simple impedance tensor analysis helps constrain the dominant electrical features

    NASA Astrophysics Data System (ADS)

    Mareschal, M.; Jouanne, V.; Menvielle, M.; Chouteau, M.; Grandis, H.; Tarits, P.

    1992-12-01

    A simple impedance tensor analysis of four magnetotelluric soundings recorded over the ECORS section of the Rhine Graben shows that for periods shorter than about 30 s, induction dominates over channelling. For longer periods, 2-D induction galvanically distorted by surface heterogeneities and/or current channelled in the Graben can explain the observations; the role of channelling becomes dominant at periods of the order of a few hundred seconds. In the area considered, induction appears to be controlled by inclusions of saline water in a porous limestone layer (Grande Oolithe) and not by the limits of the Graben with its crystalline shoulder (Vosges). The simple analysis is supported by tipper analyses and by the results of schematic 2-D modelling.

  12. The evolution of the diffuse cosmic ultraviolet background constrained by the Hubble Space Telescope observations of 3C 273

    NASA Technical Reports Server (NTRS)

    Ikeuchi, Satoru; Turner, Edwin L.

    1991-01-01

    Results are presented of recent HST UV spectroscopy of 3C 273, which revealed more low-redshift Lyman-alpha absorption lines (IGM clouds) than expected from the extrapolation of high-redshift (z ≥ 1.6) observations. It is shown, on the basis of the standard pressure-confined cloud model of the Lyman-alpha forest, that this result indicates a sharp drop in the diffuse cosmic UV background from redshift 2 to redshift 0. It is predicted that the H I optical depth will drop slowly, or perhaps even increase, with decreasing redshift at z < 2. The implied constraints on the density and pressure of the diffuse IGM at redshift 0 are also derived. The inferred evolution of the diffuse UV flux bears a striking resemblance to the most recent direct determinations of the volume emissivity of the quasar population.

  13. Formation history of old open clusters constrained by detailed asteroseismology of red giant stars observed by Kepler

    NASA Astrophysics Data System (ADS)

    Corsaro, E.; Lee, Y.-N.; García, R. A.; Hennebelle, P.; Mathur, S.; Beck, P. G.; Mathis, S.; Stello, D.; Bouvier, J.

    2016-12-01

    Stars originate by the gravitational collapse of a turbulent molecular cloud, often forming clusters of thousands of stars. Stellar clusters therefore play an important role in our understanding of star formation, a fundamental problem in astrophysics that is difficult to investigate because pre-stellar cores are typically obscured by dust. Thanks to a Bayesian analysis of about 50 red giants of NGC 6791 and NGC 6819, two old open clusters observed by NASA Kepler, we characterize thousands of individual oscillation modes. We show for the first time how the measured asteroseismic properties lead us to a discovery about the rotation history of these clusters. Finally, our findings are compared to 3D hydrodynamical simulations for stellar cluster formation to put strong constraints on the physical processes of turbulence and rotation, which are in action in the early formation stage of the stellar clusters.

  14. THE POWER OF IMAGING: CONSTRAINING THE PLASMA PROPERTIES OF GRMHD SIMULATIONS USING EHT OBSERVATIONS OF Sgr A*

    SciTech Connect

    Chan, Chi-Kwan; Psaltis, Dimitrios; Özel, Feryal; Narayan, Ramesh; Sadowski, Aleksander

    2015-01-20

    Recent advances in general relativistic magnetohydrodynamic simulations have expanded and improved our understanding of the dynamics of black-hole accretion disks. However, current simulations do not capture the thermodynamics of electrons in the low density accreting plasma. This poses a significant challenge in predicting accretion flow images and spectra from first principles. Because of this, simplified emission models have often been used, with widely different configurations (e.g., disk- versus jet-dominated emission), and were able to account for the observed spectral properties of accreting black holes. Exploring the large parameter space introduced by such models, however, requires significant computational power that exceeds conventional computational facilities. In this paper, we use GRay, a fast graphics processing unit (GPU) based ray-tracing algorithm, on the GPU cluster El Gato, to compute images and spectra for a set of six general relativistic magnetohydrodynamic simulations with different magnetic field configurations and black-hole spins. We also employ two different parametric models for the plasma thermodynamics in each of the simulations. We show that, if only the spectral properties of Sgr A* are used, all 12 models tested here can fit the spectra equally well. However, when combined with the measurement of the image size of the emission using the Event Horizon Telescope, current observations rule out all models with strong funnel emission, because the funnels are typically very extended. Our study shows that images of accretion flows with horizon-scale resolution offer a powerful tool in understanding accretion flows around black holes and their thermodynamic properties.

  15. The Power of Imaging: Constraining the Plasma Properties of GRMHD Simulations using EHT Observations of Sgr A*

    NASA Astrophysics Data System (ADS)

    Chan, Chi-Kwan; Psaltis, Dimitrios; Özel, Feryal; Narayan, Ramesh; Sadowski, Aleksander

    2015-01-01

    Recent advances in general relativistic magnetohydrodynamic simulations have expanded and improved our understanding of the dynamics of black-hole accretion disks. However, current simulations do not capture the thermodynamics of electrons in the low density accreting plasma. This poses a significant challenge in predicting accretion flow images and spectra from first principles. Because of this, simplified emission models have often been used, with widely different configurations (e.g., disk- versus jet-dominated emission), and were able to account for the observed spectral properties of accreting black holes. Exploring the large parameter space introduced by such models, however, requires significant computational power that exceeds conventional computational facilities. In this paper, we use GRay, a fast graphics processing unit (GPU) based ray-tracing algorithm, on the GPU cluster El Gato, to compute images and spectra for a set of six general relativistic magnetohydrodynamic simulations with different magnetic field configurations and black-hole spins. We also employ two different parametric models for the plasma thermodynamics in each of the simulations. We show that, if only the spectral properties of Sgr A* are used, all 12 models tested here can fit the spectra equally well. However, when combined with the measurement of the image size of the emission using the Event Horizon Telescope, current observations rule out all models with strong funnel emission, because the funnels are typically very extended. Our study shows that images of accretion flows with horizon-scale resolution offer a powerful tool in understanding accretion flows around black holes and their thermodynamic properties.

  16. Analysis of long-term observations of NOx and CO in megacities and application to constraining emissions inventories

    NASA Astrophysics Data System (ADS)

    Hassler, Birgit; McDonald, Brian C.; Frost, Gregory J.; Borbon, Agnes; Carslaw, David C.; Civerolo, Kevin; Granier, Claire; Monks, Paul S.; Monks, Sarah; Parrish, David D.; Pollack, Ilana B.; Rosenlof, Karen H.; Ryerson, Thomas B.; Schneidemesser, Erika; Trainer, Michael

    2016-09-01

    Long-term atmospheric NOx/CO enhancement ratios in megacities provide evaluations of emission inventories. A fuel-based emission inventory approach that diverges from conventional bottom-up inventory methods explains 1970-2015 trends in NOx/CO enhancement ratios in Los Angeles. Combining this comparison with similar measurements in other U.S. cities demonstrates that motor vehicle emissions controls were largely responsible for U.S. urban NOx/CO trends in the past half century. Differing NOx/CO enhancement ratio trends in U.S. and European cities over the past 25 years highlight alternative strategies for mitigating transportation emissions, reflecting Europe's increased use of light-duty diesel vehicles and correspondingly slower decreases in NOx emissions compared to the U.S. A global inventory widely used by global chemistry models fails to capture these long-term trends and regional differences in U.S. and European megacity NOx/CO enhancement ratios, possibly contributing to these models' inability to accurately reproduce observed long-term changes in tropospheric ozone.
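
    The quantity driving this comparison, the NOx/CO enhancement ratio, is essentially the slope of NOx against CO after background subtraction. A minimal sketch with synthetic data is given below; published analyses screen the data by wind sector and time of day and use more robust regression, so this is illustrative only.

      # NOx/CO enhancement ratio as a background-subtracted regression slope (illustrative).
      import numpy as np

      def enhancement_ratio(nox_ppb, co_ppb, background_quantile=0.05):
          """Zero-intercept least-squares slope of (NOx - NOx_bg) versus (CO - CO_bg)."""
          d_nox = nox_ppb - np.quantile(nox_ppb, background_quantile)
          d_co = co_ppb - np.quantile(co_ppb, background_quantile)
          return float(np.sum(d_co * d_nox) / np.sum(d_co * d_co))

      # Synthetic data with a true ratio of 0.08 ppb NOx per ppb CO:
      rng = np.random.default_rng(2)
      co = 150.0 + rng.gamma(2.0, 100.0, size=500)
      nox = 2.0 + 0.08 * (co - 150.0) + rng.normal(0.0, 2.0, size=500)
      print(f"estimated enhancement ratio: {enhancement_ratio(nox, co):.3f}")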

  17. Modelled Black Carbon Radiative Forcing and Atmospheric Lifetime in AeroCom Phase II Constrained by Aircraft Observations

    SciTech Connect

    Samset, B. H.; Myhre, G.; Herber, Andreas; Kondo, Yutaka; Li, Shao-Meng; Moteki, N.; Koike, Makoto; Oshima, N.; Schwarz, Joshua P.; Balkanski, Y.; Bauer, S.; Bellouin, N.; Berntsen, T.; Bian, Huisheng; Chin, M.; Diehl, Thomas; Easter, Richard C.; Ghan, Steven J.; Iversen, T.; Kirkevag, A.; Lamarque, Jean-Francois; Lin, Guang; Liu, Xiaohong; Penner, Joyce E.; Schulz, M.; Seland, O.; Skeie, R. B.; Stier, P.; Takemura, T.; Tsigaridis, Kostas; Zhang, Kai

    2014-11-27

    Black carbon (BC) aerosols absorb solar radiation and are generally held to exacerbate global warming by exerting a positive radiative forcing [1]. However, the total contribution of BC to the ongoing changes in global climate is presently under debate [2-8]. Both anthropogenic BC emissions and the resulting spatial and temporal distribution of BC concentration are highly uncertain [2, 9]. In particular, long-range transport and the processes affecting BC atmospheric lifetime are poorly understood, leading to large estimated uncertainty in BC concentration at high altitudes and far from emission sources [10]. These uncertainties limit our ability to quantify the historical, present, and future anthropogenic climate impact of BC. Here we compare vertical profiles of BC concentration from four recent aircraft measurement campaigns with 13 state-of-the-art aerosol models, and show that recent assessments may have overestimated present-day BC radiative forcing. Further, an atmospheric lifetime of BC of less than 5 days is shown to be essential for reproducing observations in transport-dominated remote regions. Adjusting model results to measurements in remote regions and at high altitudes leads to a 25% reduction in the multi-model median direct BC forcing from fossil fuel and biofuel burning over the industrial era.

  18. Constraining the Properties of the Eta Carinae System via 3-D SPH Models of Space-Based Observations: The Absolute Orientation of the Binary Orbit

    NASA Technical Reports Server (NTRS)

    Madura, Thomas I.; Gull, Theodore R.; Owocki, Stanley P.; Okazaki, Atsuo T.; Russell, Christopher M. P.

    2010-01-01

    The extremely massive (> 90 solar masses) and luminous (~5 x 10^6 solar luminosities) star Eta Carinae, with its spectacular bipolar "Homunculus" nebula, comprises one of the most remarkable and intensely observed stellar systems in the Galaxy. However, many of its underlying physical parameters remain a mystery. Multiwavelength variations observed to occur every 5.54 years are interpreted as being due to the collision of a massive wind from the primary star with the fast, less dense wind of a hot companion star in a highly elliptical (e approx. 0.9) orbit. Using three-dimensional (3-D) Smoothed Particle Hydrodynamics (SPH) simulations of the binary wind-wind collision in Eta Car, together with radiative transfer codes, we compute synthetic spectral images of [Fe III] emission line structures and compare them to existing Hubble Space Telescope/Space Telescope Imaging Spectrograph (HST/STIS) observations. We are thus able, for the first time, to constrain the absolute orientation of the binary orbit on the sky. An orbit with an inclination of i approx. 40deg, an argument of periapsis omega approx. 255deg, and a projected orbital axis with a position angle of approx. 312deg east of north provides the best fit to the observations, implying that the orbital axis is closely aligned in 3-D space with the Homunculus symmetry axis, and that the companion star orbits clockwise on the sky relative to the primary.

  19. Constraining the Properties of the Eta Carinae System via 3-D SPH Models of Space-Based Observations: The Absolute Orientation of the Binary Orbit

    NASA Technical Reports Server (NTRS)

    Madura, Thomas I.; Gull, Theodore R.; Owocki, Stanley P.; Okazaki, Atsuo T.; Russell, Christopher M. P.

    2011-01-01

    The extremely massive (> 90 solar masses) and luminous (~5 x 10^6 solar luminosities) star Eta Carinae, with its spectacular bipolar "Homunculus" nebula, comprises one of the most remarkable and intensely observed stellar systems in the Galaxy. However, many of its underlying physical parameters remain unknown. Multiwavelength variations observed to occur every 5.54 years are interpreted as being due to the collision of a massive wind from the primary star with the fast, less dense wind of a hot companion star in a highly elliptical (e approx. 0.9) orbit. Using three-dimensional (3-D) Smoothed Particle Hydrodynamics (SPH) simulations of the binary wind-wind collision, together with radiative transfer codes, we compute synthetic spectral images of [Fe III] emission line structures and compare them to existing Hubble Space Telescope/Space Telescope Imaging Spectrograph (HST/STIS) observations. We are thus able, for the first time, to tightly constrain the absolute orientation of the binary orbit on the sky. An orbit with an inclination of approx. 40deg, an argument of periapsis omega approx. 255deg, and a projected orbital axis with a position angle of approx. 312deg east of north provides the best fit to the observations, implying that the orbital axis is closely aligned in 3-D space with the Homunculus symmetry axis, and that the companion star orbits clockwise on the sky relative to the primary.

  20. Joint inversions of three types of electromagnetic data explicitly constrained by seismic observations: results from the central Okavango Delta, Botswana

    NASA Astrophysics Data System (ADS)

    Kalscheuer, Thomas; Blake, Sarah; Podgorski, Joel E.; Wagner, Frederic; Green, Alan G.; Maurer, Hansruedi; Jones, Alan G.; Muller, Mark; Ntibinyane, Ongkopotse; Tshoso, Gomotsang

    2015-09-01

    The Okavango Delta of northern Botswana is one of the world's largest inland deltas or megafans. To obtain information on the character of sediments and basement depths, audiomagnetotelluric (AMT), controlled-source audiomagnetotelluric (CSAMT) and central-loop transient electromagnetic (TEM) data were collected on the largest island within the delta. The data were inverted individually and jointly for 1-D models of electric resistivity. Distortion effects in the AMT and CSAMT data were accounted for by including galvanic distortion tensors as free parameters in the inversions. By employing Marquardt-Levenberg inversion, we found that a 3-layer model comprising a resistive layer overlying sequentially a conductive layer and a deeper resistive layer was sufficient to explain all of the electromagnetic data. However, the top of the basal resistive layer from electromagnetic-only inversions was much shallower than the well-determined basement depth observed in high-quality seismic reflection images and seismic refraction velocity tomograms. To resolve this discrepancy, we jointly inverted the electromagnetic data for 4-layer models by including seismic depths to an interface between sedimentary units and to basement as explicit a priori constraints. We have also estimated the interconnected porosities, clay contents and pore-fluid resistivities of the sedimentary units from their electrical resistivities and seismic P-wave velocities using appropriate petrophysical models. In the interpretation of our preferred model, a shallow ˜40 m thick freshwater sandy aquifer with 85-100 Ωm resistivity, 10-32 per cent interconnected porosity and <13 per cent clay content overlies a 105-115 m thick conductive sequence of clay and intercalated salt-water-saturated sands with 15-20 Ωm total resistivity, 1-27 per cent interconnected porosity and 15-60 per cent clay content. A third ˜60 m thick sandy layer with 40-50 Ωm resistivity, 10-33 per cent interconnected porosity and <15
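
    The petrophysical step mentioned above (combining resistivities and seismic velocities to estimate porosity, clay content and pore-fluid resistivity) can be illustrated in its simplest clay-free form with Archie's law; the clay-rich layers in the study would require a shaly-sand extension accounting for surface conduction, so the sketch below is illustrative only and the assumed pore-fluid resistivity is a placeholder, not a value from the paper.

      # Archie's-law porosity from bulk and pore-fluid resistivity (clay-free, illustrative).
      def archie_porosity(rho_bulk_ohm_m, rho_fluid_ohm_m, a=1.0, m=2.0):
          """Invert rho_bulk = a * rho_fluid * phi**(-m) for a fully saturated rock."""
          return (a * rho_fluid_ohm_m / rho_bulk_ohm_m) ** (1.0 / m)

      # Aquifer resistivities quoted above with an assumed 4 ohm-m pore fluid:
      for rho in (85.0, 100.0):
          print(f"rho = {rho:6.1f} ohm-m  ->  phi = {archie_porosity(rho, 4.0):.2f}")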

  1. Constraining the Lyα escape fraction with far-infrared observations of Lyα emitters

    SciTech Connect

    Wardlow, Julie L.; Calanog, J.; Cooray, A.; Malhotra, S.; Zheng, Z.; Rhoads, J.; Finkelstein, S.; Bock, J.; Bridge, C.; Ciardullo, R.; Gronwall, C.; Conley, A.; Farrah, D.; Gawiser, E.; Heinis, S.; Ibar, E.; Ivison, R. J.; Marsden, G.; Oliver, S. J.; Riechers, D.; and others

    2014-05-20

    We study the far-infrared properties of 498 Lyα emitters (LAEs) at z = 2.8, 3.1, and 4.5 in the Extended Chandra Deep Field-South, using 250, 350, and 500 μm data from the Herschel Multi-tiered Extragalactic Survey and 870 μm data from the LABOCA ECDFS Submillimeter Survey. None of the 126, 280, or 92 LAEs at z = 2.8, 3.1, and 4.5, respectively, are individually detected in the far-infrared data. We use stacking to probe the average emission to deeper flux limits, reaching 1σ depths of ∼0.1 to 0.4 mJy. The LAEs are also undetected at ≥3σ in the stacks, although a 2.5σ signal is observed at 870 μm for the z = 2.8 sources. We consider a wide range of far-infrared spectral energy distributions (SEDs), including an M82 and an Sd galaxy template, to determine upper limits on the far-infrared luminosities and far-infrared-derived star formation rates of the LAEs. These star formation rates are then combined with those inferred from the Lyα and UV emission to determine lower limits on the LAEs' Lyα escape fraction (f_esc(Lyα)). For the Sd SED template, the inferred LAE f_esc(Lyα) values are ≳ 30% (1σ) at z = 2.8, 3.1, and 4.5, which are all significantly higher than the global f_esc(Lyα) at these redshifts. Thus, if the LAE f_esc(Lyα) follows the global evolution, then they have warmer far-infrared SEDs than the Sd galaxy template. The average and M82 SEDs produce lower limits on the LAE f_esc(Lyα) of ∼10%-20% (1σ), all of which are slightly higher than the global evolution of f_esc(Lyα), but consistent with it at the 2σ-3σ level.
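
    The logic of the escape-fraction limit can be condensed into a few lines: f_esc(Lyα) is the ratio of the star formation rate implied by the observed Lyα emission to the total (dust-corrected) star formation rate, approximated here as SFR(UV, uncorrected) + SFR(IR), so a far-infrared upper limit on SFR(IR) yields a lower limit on f_esc(Lyα). The calibration factors below are standard Kennicutt-type values and may differ in detail from those adopted in the paper; the luminosities in the example are placeholders.

      # Lower limit on the Lyman-alpha escape fraction from an IR upper limit (illustrative).
      def sfr_from_lya(l_lya_erg_s):
          # Case-B Lya/Halpha ~ 8.7 combined with SFR(Halpha) = 7.9e-42 L(Halpha)
          return 7.9e-42 / 8.7 * l_lya_erg_s

      def sfr_from_uv(l_nu_uv_erg_s_hz):
          return 1.4e-28 * l_nu_uv_erg_s_hz    # no dust correction applied

      def sfr_from_ir(l_ir_erg_s):
          return 4.5e-44 * l_ir_erg_s          # total (8-1000 micron) infrared luminosity

      def fesc_lower_limit(l_lya, l_nu_uv, l_ir_upper_limit):
          return sfr_from_lya(l_lya) / (sfr_from_uv(l_nu_uv) + sfr_from_ir(l_ir_upper_limit))

      # Placeholder stacked LAE: L(Lya) = 1e43 erg/s, L_nu(UV) = 3e28 erg/s/Hz, L_IR < 3e44 erg/s
      print(f"f_esc(Lya) > {fesc_lower_limit(1e43, 3e28, 3e44):.2f}")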

  2. Contemporary kinematics of the Ordos block, North China and its adjacent rift systems constrained by dense GPS observations

    NASA Astrophysics Data System (ADS)

    Zhao, Bin; Zhang, Caihong; Wang, Dongzhen; Huang, Yong; Tan, Kai; Du, Ruilin; Liu, Jingnan

    2017-03-01

    The detailed kinematic pattern of the Ordos block, North China and its surrounding rift systems remains uncertain, mainly due to the low signal-to-noise ratio of the Global Positioning System (GPS) velocity data and the lack of GPS stations in this region. In this study, we have obtained a new and dense velocity field by processing GPS data primarily collected from the Crustal Motion Observation Network of China and from other GPS networks between 1998 and 2014. The GPS velocities within the Ordos block can be interpreted as counterclockwise rotation of the block about an Euler pole with respect to the Eurasian plate. Velocity profiles across the graben-bounding faults show relatively rapid right-lateral strike-slip motion along the Yinchuan graben, with a rate of 0.8-2.6 mm/a from north to south. In addition, a right-lateral slip rate of 1.1-1.6 mm/a is estimated along the central segment of the Shanxi rift. However, strike-slip motion is not detected along the northern and southern margins of the Ordos block. Conversely, significant extensional motion is detected across the northwestern corner of the block, with a value of 1.6 mm/a, and along the northern segment of the Shanxi rift, where an extensional rate of 1.3-1.7 mm/a is measured. Both the Daihai and Datong basins are experiencing crustal extension. On the southwestern margin of the block, deformation across the compressional zone of the Liupanshan range is subtle; however, the far-field shortening rate is as high as 3.0 mm/a, implying that this region is experiencing ongoing compression. The results reveal that present-day fault slip occurs mainly along the block-bounding faults, with the exception of faults along the northern and southern margins of the block. These results provide new insights into the nature of tectonic deformation around the Ordos block, and are useful for assessing the seismic activity in this region.
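
    The block-rotation interpretation above rests on the rigid-plate relation v = ω × r. The sketch below predicts the velocity of a site on a rotating rigid block from an Euler pole; the pole position and rotation rate are placeholders, not the values estimated in this study.

      # Velocity of a site on a rigid block rotating about an Euler pole (illustrative).
      import numpy as np

      R_EARTH = 6.371e6   # m

      def unit_vector(lat_deg, lon_deg):
          lat, lon = np.radians(lat_deg), np.radians(lon_deg)
          return np.array([np.cos(lat) * np.cos(lon),
                           np.cos(lat) * np.sin(lon),
                           np.sin(lat)])

      def block_velocity_mm_per_yr(site_lat, site_lon, pole_lat, pole_lon, rate_deg_per_myr):
          """ECEF velocity vector (mm/yr) of a surface site, v = omega x r."""
          omega = np.radians(rate_deg_per_myr) / 1e6 * unit_vector(pole_lat, pole_lon)  # rad/yr
          r = R_EARTH * unit_vector(site_lat, site_lon)                                 # m
          return np.cross(omega, r) * 1e3                                               # mm/yr

      # Placeholder pole and a site on the Ordos block:
      v = block_velocity_mm_per_yr(38.0, 108.0, 60.0, 100.0, 0.2)
      print(f"predicted speed: {np.linalg.norm(v):.1f} mm/yr")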

  3. Thermal-based modeling of coupled carbon, water and energy fluxes using nominal light use efficiencies constrained by leaf chlorophyll observations

    NASA Astrophysics Data System (ADS)

    Schull, M. A.; Anderson, M. C.; Houborg, R.; Gitelson, A.; Kustas, W. P.

    2014-10-01

    Recent studies have shown that estimates of leaf chlorophyll content (Chl), defined as the combined mass of chlorophyll a and chlorophyll b per unit leaf area, can be useful for constraining estimates of canopy light-use-efficiency (LUE). Canopy LUE describes the amount of carbon assimilated by a vegetative canopy for a given amount of Absorbed Photosynthetically Active Radiation (APAR) and is a key parameter for modeling land-surface carbon fluxes. A carbon-enabled version of the remote sensing-based Two-Source Energy Balance (TSEB) model simulates coupled canopy transpiration and carbon assimilation using an analytical sub-model of canopy resistance constrained by inputs of nominal LUE (βn), which is modulated within the model in response to varying conditions in light, humidity, ambient CO2 concentration and temperature. Soil moisture constraints on water and carbon exchange are conveyed to the TSEB-LUE indirectly through thermal infrared measurements of land-surface temperature. We investigate the capability of using Chl estimates for capturing seasonal trends in the canopy βn from in situ measurements of Chl acquired in irrigated and rain-fed fields of soybean and maize near Mead, Nebraska. The results show that field-measured Chl is non-linearly related to βn, with variability primarily related to phenological changes during early growth and senescence. Utilizing seasonally varying βn inputs based on an empirical relationship with in-situ measured Chl resulted in improvements in carbon flux estimates from the TSEB model, while adjusting the partitioning of total water loss between plant transpiration and soil evaporation. The observed Chl-βn relationship provides a functional mechanism for integrating remotely sensed Chl into the TSEB model, with the potential for improved mapping of coupled carbon, water, and energy fluxes across vegetated landscapes.

  4. Thermal-based modeling of coupled carbon, water, and energy fluxes using nominal light use efficiencies constrained by leaf chlorophyll observations

    NASA Astrophysics Data System (ADS)

    Schull, M. A.; Anderson, M. C.; Houborg, R.; Gitelson, A.; Kustas, W. P.

    2015-03-01

    Recent studies have shown that estimates of leaf chlorophyll content (Chl), defined as the combined mass of chlorophyll a and chlorophyll b per unit leaf area, can be useful for constraining estimates of canopy light use efficiency (LUE). Canopy LUE describes the amount of carbon assimilated by a vegetative canopy for a given amount of absorbed photosynthetically active radiation (APAR) and is a key parameter for modeling land-surface carbon fluxes. A carbon-enabled version of the remote-sensing-based two-source energy balance (TSEB) model simulates coupled canopy transpiration and carbon assimilation using an analytical sub-model of canopy resistance constrained by inputs of nominal LUE (βn), which is modulated within the model in response to varying conditions in light, humidity, ambient CO2 concentration, and temperature. Soil moisture constraints on water and carbon exchange are conveyed to the TSEB-LUE indirectly through thermal infrared measurements of land-surface temperature. We investigate the capability of using Chl estimates for capturing seasonal trends in the canopy βn from in situ measurements of Chl acquired in irrigated and rain-fed fields of soybean and maize near Mead, Nebraska. The results show that field-measured Chl is nonlinearly related to βn, with variability primarily related to phenological changes during early growth and senescence. Utilizing seasonally varying βn inputs based on an empirical relationship with in situ measured Chl resulted in improvements in carbon flux estimates from the TSEB model, while adjusting the partitioning of total water loss between plant transpiration and soil evaporation. The observed Chl-βn relationship provides a functional mechanism for integrating remotely sensed Chl into the TSEB model, with the potential for improved mapping of coupled carbon, water, and energy fluxes across vegetated landscapes.

  5. Comparison of Satellite-Derived TOA Shortwave Clear-Sky Fluxes to Estimates from GCM Simulations Constrained by Satellite Observations of Land Surface Characteristics

    NASA Technical Reports Server (NTRS)

    Anantharaj, Valentine G.; Nair, Udaysankar S.; Lawrence, Peter; Chase, Thomas N.; Christopher, Sundar; Jones, Thomas

    2010-01-01

    Clear-sky, upwelling shortwave flux at the top of the atmosphere (S_TOA↑), simulated using the atmospheric and land model components of the Community Climate System Model 3 (CCSM3), is compared to corresponding observational estimates from the Clouds and Earth's Radiant Energy System (CERES) sensor. Improvements resulting from the use of land surface albedo derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) to constrain the simulations are also examined. Compared to CERES observations, CCSM3 overestimates the global, annually averaged S_TOA↑ over both land and oceans. Regionally, however, CCSM3 overestimates S_TOA↑ over some land and ocean areas while underestimating it over others. CCSM3 underestimates S_TOA↑ over the Saharan and Arabian Deserts, and substantial differences exist between CERES observations and CCSM3 over agricultural areas. Over selected sites, after using ground-based observations to remove systematic biases in the CCSM computation of S_TOA↑, it is found that use of the MODIS albedo improves the simulation of S_TOA↑. The inability of the coarse-resolution CCSM3 simulation to resolve the spatial heterogeneity of snowfall over high-altitude sites such as the Tibetan Plateau causes overestimation of S_TOA↑ in these areas. Discrepancies also exist in the simulation of S_TOA↑ over ocean areas, as CCSM3 does not account for the effect of wind speed on ocean surface albedo. This study shows that the radiative energy budget at the TOA is improved through the use of the MODIS albedo in global climate models.

  6. Constraining the Structure of the Transition Disk HD 135344B (SAO 206462) by Simultaneous Modeling of Multiwavelength Gas and Dust Observations

    NASA Technical Reports Server (NTRS)

    Carmona, A.; Pinte, C.; Thi, W. F.; Benisty, M.; Menard, F.; Grady, C.; Kamp, I.; Woitke, P.; Olofsson, J.; Roberge, A.; Brittain, S.; Duchene, G.; Meeus, G.; Martin-Zaidi, C.; Dent, B.; Le Bouquin, J. E.; Berger, J. P.

    2014-01-01

    Context: Constraining the gas and dust disk structure of transition disks, particularly in the inner dust cavity, is a crucial step toward understanding the link between them and planet formation. HD 135344B is an accreting (pre-)transition disk that displays CO 4.7 μm emission extending tens of AU inside its 30 AU dust cavity. Aims: We constrain HD 135344B's disk structure from multi-instrument gas and dust observations. Methods: We used the dust radiative transfer code MCFOST and the thermochemical code ProDiMo to derive the disk structure from the simultaneous modeling of the spectral energy distribution (SED), VLT/CRIRES CO P(10) 4.75 μm, Herschel/PACS [O I] 63 μm, Spitzer/IRS, and JCMT 12CO J = 3-2 spectra, VLTI/PIONIER H-band visibilities, and constraints from (sub-)mm continuum interferometry and near-IR imaging. Results: We found a disk model able to describe the current gas and dust observations simultaneously. This disk has the following structure. (1) To simultaneously reproduce the SED, the near-IR interferometry data, and the CO ro-vibrational emission, refractory grains (we suggest carbon) are present inside the silicate sublimation radius (0.08 < R < 0.2 AU). (2) The dust cavity (R < 30 AU) is filled with gas; (3) the surface density of the gas inside the cavity must increase with radius to fit the CO ro-vibrational line profile; a small gap of a few AU in the gas distribution is compatible with current data, and a large gap of tens of AU in the gas does not appear likely. (4) The gas-to-dust ratio inside the cavity is >100 to account for the 870 μm continuum upper limit and the CO P(10) line flux. (5) The gas-to-dust ratio in the outer disk (30 < R < 200 AU) is less than 10 to simultaneously describe the [O I] 63 μm line flux and the CO P(10) line profile. (6) In the outer disk, most of the gas and dust mass should be located in the midplane, and

  7. Fermi/LAT observations of dwarf galaxies highly constrain a dark matter interpretation of excess positrons seen in AMS-02, HEAT, and PAMELA

    SciTech Connect

    López, Alejandro; Savage, Christopher; Spolyar, Douglas; Adams, Douglas Q. E-mail: chris@savage.name E-mail: doug.q.adams@gmail.com

    2016-03-01

    It is shown that a Weakly Interacting Massive dark matter Particle (WIMP) interpretation for the positron excess observed in a variety of experiments, HEAT, PAMELA, and AMS-02, is highly constrained by the Fermi/LAT observations of dwarf galaxies. In particular, this paper examines the annihilation channels that best fit the current AMS-02 data (Boudaud et al., 2014), specifically focusing on channels and parameter space not previously explored by the Fermi/LAT collaboration. The Fermi satellite has surveyed the γ-ray sky, and its observations of dwarf satellites are used to place strong bounds on the annihilation of WIMPs into a variety of channels. For the single channel case, we find that dark matter annihilation into (b b-bar, e+e-, μ+μ-, τ+τ-, 4-e, or 4-τ) is ruled out as an explanation of the AMS positron excess (here b quarks are a proxy for all quarks, gauge and Higgs bosons). In addition, we find that the Fermi/LAT 2σ upper limits, assuming the best-fit AMS-02 branching ratios, exclude multichannel combinations into b b-bar and leptons. The tension between the results might relax if the branching ratios are allowed to deviate from their best-fit values, though a substantial change would be required. Of all the channels we considered, the only viable channel that survives the Fermi/LAT constraint and produces a good fit to the AMS-02 data is annihilation (via a mediator) to 4-μ, or mainly to 4-μ in the case of multichannel combinations.

  8. Fermi/LAT observations of dwarf galaxies highly constrain a dark matter interpretation of excess positrons seen in AMS-02, HEAT, and PAMELA

    NASA Astrophysics Data System (ADS)

    López, Alejandro; Savage, Christopher; Spolyar, Douglas; Adams, Douglas Q.

    2016-03-01

    It is shown that a Weakly Interacting Massive dark matter Particle (WIMP) interpretation for the positron excess observed in a variety of experiments, HEAT, PAMELA, and AMS-02, is highly constrained by the Fermi/LAT observations of dwarf galaxies. In particular, this paper examines the annihilation channels that best fit the current AMS-02 data (Boudaud et al., 2014), specifically focusing on channels and parameter space not previously explored by the Fermi/LAT collaboration. The Fermi satellite has surveyed the γ-ray sky, and its observations of dwarf satellites are used to place strong bounds on the annihilation of WIMPs into a variety of channels. For the single channel case, we find that dark matter annihilation into (b b-bar, e+e-, μ+μ-, τ+τ-, 4-e, or 4-τ) is ruled out as an explanation of the AMS positron excess (here b quarks are a proxy for all quarks, gauge and Higgs bosons). In addition, we find that the Fermi/LAT 2σ upper limits, assuming the best-fit AMS-02 branching ratios, exclude multichannel combinations into b b-bar and leptons. The tension between the results might relax if the branching ratios are allowed to deviate from their best-fit values, though a substantial change would be required. Of all the channels we considered, the only viable channel that survives the Fermi/LAT constraint and produces a good fit to the AMS-02 data is annihilation (via a mediator) to 4-μ, or mainly to 4-μ in the case of multichannel combinations.

  9. Pn wave geometrical spreading and attenuation in Northeast China and the Korean Peninsula constrained by observations from North Korean nuclear explosions

    NASA Astrophysics Data System (ADS)

    Zhao, Lian-Feng; Xie, Xiao-Bi; Tian, Bao-Feng; Chen, Qi-Fu; Hao, Tian-Yao; Yao, Zhen-Xing

    2015-11-01

    We investigate the geometric spreading and attenuation of seismic Pn waves in Northeast China and the Korean Peninsula. A high-quality broadband Pn wave data set generated by North Korean nuclear tests is used to constrain the parameters of a frequency-dependent log-quadratic geometric spreading function and a power law Pn Q model. The geometric spreading function and apparent Pn wave Q are obtained for Northeast China and the Korean Peninsula between 2.0 and 10.0 Hz. Using the two-station amplitude ratios of the Pn spectra and correcting them with the known spreading function, we remove the contributions of the source and crust from the apparent Pn Q and retrieve the P wave attenuation information along the pure upper mantle path. We then use both Pn amplitudes and amplitude ratios in a tomographic approach to obtain the upper mantle P wave attenuation in the studied area. The Pn wave spectra observed in China are compared with those recorded in Japan, and the result reveals that the high-frequency Pn signal across the oceanic path attenuated faster compared with those through the continental path.
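    The general form described here can be sketched numerically: a quadratic polynomial in log-distance for geometric spreading combined with a power-law Q(f) attenuation term. All coefficients below are placeholders chosen for illustration (in the study they are frequency dependent and fitted to the data), and the Pn velocity is an assumed nominal value.

        import numpy as np

        def pn_amplitude(distance_km, freq_hz, n0=-1.0, n1=-1.5, n2=0.1,
                         q0=300.0, eta=0.5, v_pn=8.0, ref_km=100.0):
            # Illustrative Pn amplitude: log-quadratic spreading x power-law Q attenuation.
            x = np.log10(distance_km / ref_km)
            log_spreading = n0 + n1 * x + n2 * x**2       # log10 of G(distance, f)
            travel_time = distance_km / v_pn              # seconds, nominal Pn velocity
            q_f = q0 * freq_hz**eta                       # power-law Q(f)
            return 10.0**log_spreading * np.exp(-np.pi * freq_hz * travel_time / q_f)

        print(pn_amplitude(500.0, 4.0))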

  10. Improved western U.S. background ozone estimates via constraining nonlocal and local source contributions using Aura TES and OMI observations

    NASA Astrophysics Data System (ADS)

    Huang, Min; Bowman, Kevin W.; Carmichael, Gregory R.; Lee, Meemong; Chai, Tianfeng; Spak, Scott N.; Henze, Daven K.; Darmenov, Anton S.; Silva, Arlindo M.

    2015-04-01

    Western U.S. near-surface ozone (O3) concentrations are sensitive to transported background O3 from the eastern Pacific free troposphere, as well as to U.S. anthropogenic and natural emissions. The current 75 ppbv U.S. O3 primary standard may be lowered soon; hence, accurately estimating O3 source contributions in this region, especially background O3, has growing policy relevance. In this study, we improve the modeled total and background O3 via repartitioning and redistributing the contributions from nonlocal and local anthropogenic/wildfire sources in a multi-scale satellite data assimilation system containing the global Goddard Earth Observing System-Chemistry model (GEOS-Chem) and the regional Sulfur Transport and dEposition Model (STEM). Focusing on NASA's ARCTAS (Arctic Research of the Composition of the Troposphere from Aircraft and Satellites) field campaign period in June-July 2008, we first demonstrate that the negative biases in the free-running GEOS-Chem simulation in the eastern Pacific at 400-900 hPa are reduced via assimilating Aura Tropospheric Emission Spectrometer (TES) O3 profiles. Using the TES-constrained boundary conditions, we then assimilated into STEM the tropospheric nitrogen dioxide (NO2) columns from the Aura Ozone Monitoring Instrument to constrain U.S. nitrogen oxides (NOx = NO2 + NO) emissions at a 12 × 12 km² grid scale. Improved model skill is indicated by cross-validation against independent ARCTAS measurements. Leveraging Aura observations, we find anomalously high wildfire NOx emissions that summer in Northern California and the Central Valley, while anthropogenic emissions in multiple urban areas were lower than those representing the year 2005. We found strong spatial variability of the daily maximum 8 h average background O3 and its contribution to the modeled total O3, with a mean value of ~48 ppbv (~77% of the total).

  11. Using Fermi Large Area Telescope Observations to Constrain the Emission and Field Geometries of Young Gamma-ray Pulsars and to Guide Millisecond Pulsar Searches

    NASA Astrophysics Data System (ADS)

    DeCesar, Megan Elizabeth

    This thesis has two parts, the first focusing on analysis and modeling of high-energy pulsar emission and the second on pulsar observations. In part 1, I constrain the magnetospheric emission geometry (magnetic inclination alpha, emission width w, maximum emission radius r, and observer colatitude zeta) by modeling >100 MeV light curves of four bright gamma-ray pulsars with geometrical representations of the slot gap and outer gap emission models. I also model the >100 MeV phase resolved spectra, measuring the power law cutoff energy Ec with phase. Assuming curvature radiation reaction (CRR) is the dominant emission process, I use Ec to compute the accelerating electric field strength, E||. The original contributions of this thesis to astrophysical research are the use of the force-free magnetic field solution in light curve modeling, the inclusion of an offset polar cap in the slot gap geometry, and the calculation of E|| from observationally determined quantities (i.e., Ec). The simulations reproduce observed light curve features and accurately match multi-wavelength zeta measurements, but the specific combination of best-fit emission and field geometry varies between pulsars. Perhaps pulsar magnetospheres contain some combination of slot gap and outer gap geometries, whose contributions to the light curve depend on viewing angle. The requirement that, locally, E||/B < 1 rules out the vacuum field as a valid approximation to the true pulsar field under the CRR assumption. The E|| values imply that the youngest, most energetic pulsar has a near-force-free field, and that CRR and/or narrow acceleration gaps may not be applicable to older pulsars. In part 2, I present discoveries of two radio millisecond pulsars (MSPs) from LAT-guided pulsar searches. I timed the first MSP, resulting in the detection of gamma-ray pulsations. The second MSP is in a globular cluster. My initial timing efforts show that it is in a highly eccentric ( e ~ 0.95) binary orbit with a

  12. Southern San Andreas-San Jacinto fault system slip rates estimated from earthquake cycle models constrained by GPS and interferometric synthetic aperture radar observations

    NASA Astrophysics Data System (ADS)

    Lundgren, Paul; Hetland, Eric A.; Liu, Zhen; Fielding, Eric J.

    2009-02-01

    We use ground geodetic and interferometric synthetic aperture radar satellite observations across the southern San Andreas (SAF)-San Jacinto (SJF) fault systems to constrain their slip rates and the viscosity structure of the lower crust and upper mantle on the basis of periodic earthquake cycle, Maxwell viscoelastic, finite element models. Key questions for this system are the SAF and SJF slip rates, the slip partitioning between the two main branches of the SJF, and the dip of the SAF. The best-fitting models generally have a high-viscosity lower crust (η = 10^21 Pa s) overlying a lower-viscosity upper mantle (η = 10^19 Pa s). We find considerable trade-offs between the relative time into the current earthquake cycle of the San Jacinto fault and the upper mantle viscosity. With reasonable assumptions for the relative time in the earthquake cycle, the partition of slip is fairly robust at around 24-26 mm/a for the San Jacinto fault system and 16-18 mm/a for the San Andreas fault. Models for two subprofiles across the SAF-SJF systems suggest that slip may transfer from the western (Coyote Creek) branch to the eastern (Clark-Superstition hills) branch of the SJF from NW to SE. Across the entire system our best-fitting model gives slip rates of 2 ± 3, 12 ± 9, 12 ± 9, and 17 ± 3 mm/a for the Elsinore, Coyote Creek, Clark, and San Andreas faults, respectively, where the large uncertainties in the slip rates for the SJF branches reflect the large uncertainty in the slip rate partitioning within the SJF system.

  13. On the constraining observations of the dark GRB 001109 and the properties of a z = 0.398 radio selected starburst galaxy contained in its error box

    NASA Astrophysics Data System (ADS)

    Castro Cerón, J. M.; Gorosabel, J.; Castro-Tirado, A. J.; Sokolov, V. V.; Afanasiev, V. L.; Fatkhullin, T. A.; Dodonov, S. N.; Komarova, V. N.; Cherepashchuk, A. M.; Postnov, K. A.; Lisenfeld, U.; Greiner, J.; Klose, S.; Hjorth, J.; Fynbo, J. P. U.; Pedersen, H.; Rol, E.; Fliri, J.; Feldt, M.; Feulner, G.; Andersen, M. I.; Jensen, B. L.; Pérez Ramírez, M. D.; Vrba, F. J.; Henden, A. A.; Israelian, G.; Tanvir, N. R.

    2004-09-01

    We present optical and NIR (near-infrared) follow-up observations of GRB 001109 from 1 to 300 days after the burst. No transient emission was found at these wavelengths within this GRB's (Gamma-Ray Burst) 50 arcsec radius BeppoSAX error box. Strong limits (3σ) are set with: R ⪆ 21, 10.2 h after the GRB; I ⪆ 23, 11.4 h after the GRB; H ⪆ 20.7, 9.9 h after the GRB; and K_S ⪆ 20, 9.6 h after the GRB. We discuss whether the radio source found in the GRB's error box (Taylor et al. 2000) might be related to the afterglow. We also present a multiwavelength study of a reddened starburst galaxy found coincident with the potential radio and X-ray afterglow. We show that our strong I-band upper limit makes GRB 001109 the darkest burst localised by BeppoSAX's NFI (Narrow Field Instrument), and it is one of the most constraining upper limits on GRB afterglows to date. Furthermore, the implications of these observations in the context of dark GRBs are considered. Based on observations made with telescopes at the Centro Astronómico Hispano Alemán (1.23 m + 3.50 m), at the Observatorio del Roque de los Muchachos (NOT + WHT), at the United States Naval Observatory (1.00 m) and at the Russian Academy of Sciences' Special Astrophysical Observatory (6.05 m). The NOT is operated on the island of San Miguel de la Palma jointly by Denmark, Finland, Iceland, Norway and Sweden, in Spain's Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias. The Centro Astronómico Hispano Alemán is operated in Calar Alto by the Max-Planck Institut für Astronomie of Heidelberg, jointly with Spain's Comisión Nacional de Astronomía.

  14. Constraining inflation

    SciTech Connect

    Adshead, Peter; Easther, Richard E-mail: richard.easther@yale.edu

    2008-10-15

    We analyze the theoretical limits on slow roll reconstruction, an optimal algorithm for recovering the inflaton potential (assuming a single-field slow roll scenario) from observational data. Slow roll reconstruction is based upon the Hamilton-Jacobi formulation of the inflationary dynamics. We show that at low inflationary scales the Hamilton-Jacobi equations simplify considerably. We provide a new classification scheme for inflationary models, based solely on the number of parameters needed to specify the potential, and provide forecasts for the bounds on the slow roll parameters from future data sets. A minimal running of the spectral index, induced solely by the first two slow roll parameters (ε and η), appears to be effectively undetectable by realistic cosmic microwave background (CMB) experiments. However, since the ability to detect any running increases with the lever arm in comoving wavenumber, we conjecture that high redshift 21 cm data may allow tests of second-order consistency conditions on inflation. Finally, we point out that the second-order corrections to the spectral index are correlated with the inflationary scale, and thus the amplitude of the CMB B mode.
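    For orientation only, the standard first-order potential slow-roll relations (the Hamilton-Jacobi treatment used in the paper defines analogous H-based parameters) are:

        \[
        \epsilon \equiv \frac{M_p^2}{2}\left(\frac{V'}{V}\right)^2, \qquad
        \eta \equiv M_p^2\,\frac{V''}{V}, \qquad
        n_s - 1 \simeq 2\eta - 6\epsilon, \qquad r \simeq 16\,\epsilon .
        \]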

  15. A province-scale block model of Walker Lane and western Basin and Range crustal deformation constrained by GPS observations (Invited)

    NASA Astrophysics Data System (ADS)

    Hammond, W. C.; Bormann, J.; Blewitt, G.; Kreemer, C.

    2013-12-01

    The Walker Lane in the western Great Basin of the western United States is an 800 km long and 100 km wide zone of active intracontinental transtension that absorbs ~10 mm/yr, about 20% of the Pacific/North America plate boundary relative motion. Lying west of the Sierra Nevada/Great Valley microplate (SNGV) and adjoining the Basin and Range Province to the east, deformation is predominantly shear strain overprinted with a minor component of extension. The Walker Lane responds with faulting, block rotations, structural step-overs, and has distinct and varying partitioned domains of shear and extension. Resolving these complex deformation patterns requires a long-term observation strategy with a dense network of GPS stations (spacing ~20 km). The University of Nevada, Reno operates the 373 station Mobile Array of GPS for Nevada transtension (MAGNET) semi-continuous network that supplements coverage by other networks such as EarthScope's Plate Boundary Observatory, which alone has insufficient density to resolve the deformation patterns. Uniform processing of data from these GPS mega-networks provides a synoptic view and new insights into the kinematics and mechanics of Walker Lane tectonics. We present velocities for thousands of stations with time series between 3 and 17 years in duration aligned to our new GPS-based North America fixed reference frame NA12. The velocity field shows a rate budget across the southern Walker Lane of ~10 mm/yr, decreasing northward to ~7 mm/yr at the latitude of the Mohawk Valley and Pyramid Lake. We model the data with a new block model that estimates rotations and slip rates of known active faults between the Mojave Desert and northern Nevada and northeast California. The density of active faults in the region requires including a relatively large number of blocks in the model to accurately estimate deformation patterns. With 49 blocks, our model captures structural detail not represented in previous province-scale models, and

  16. The intergalactic magnetic field constrained by Fermi/Large Area Telescope observations of the TeV blazar 1ES0229+200

    NASA Astrophysics Data System (ADS)

    Tavecchio, F.; Ghisellini, G.; Foschini, L.; Bonnoli, G.; Ghirlanda, G.; Coppi, P.

    2010-07-01

    TeV photons from blazars at relatively large distances, interacting with the optical-infrared cosmic background, are efficiently converted into electron-positron pairs. The produced pairs are extremely relativistic (Lorentz factors of the order of 10^6-10^7) and promptly lose their energy through inverse Compton scatterings with the photons of the microwave cosmic background, producing emission in the GeV band. The spectrum and the flux level of this reprocessed emission are critically dependent on the intensity of the intergalactic magnetic field, B, that can deflect the pairs diluting the intrinsic emission over a large solid angle. We derive a simple relation for the reprocessed spectrum expected from a steady source. We apply this treatment to the blazar 1ES0229+200, whose intrinsic, very hard TeV spectrum is expected to be approximately steady. Comparing the predicted reprocessed emission with the upper limits measured by the Fermi/Large Area Telescope, we constrain the value of the intergalactic magnetic field to be larger than B ≈ 5 × 10^-15 G, depending on the model of extragalactic background light.
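    The chain of scales in this argument can be checked with a rough, single-scattering estimate: the pair Lorentz factor from a TeV photon, the energy of the inverse-Compton photon it upscatters from the CMB, the inverse-Compton cooling length, and the magnetic deflection over that length. This is a textbook order-of-magnitude sketch, not the relation derived in the paper.

        import numpy as np

        M_E_C2  = 8.187e-7     # electron rest energy [erg]
        SIGMA_T = 6.652e-25    # Thomson cross section [cm^2]
        U_CMB   = 4.17e-13     # CMB energy density [erg cm^-3]
        E_ESU   = 4.803e-10    # electron charge [esu]
        EV, MPC = 1.602e-12, 3.086e24

        def cascade_numbers(e_tev=1.0, b_gauss=5e-15):
            # Order-of-magnitude single-scattering estimates for the pair cascade.
            gamma = e_tev * 1e12 * EV / (2.0 * M_E_C2)             # pair Lorentz factor
            e_ic_gev = (4.0 / 3.0) * gamma**2 * 6.3e-4 / 1e9       # IC photon energy [GeV]
            d_ic = 3.0 * M_E_C2 / (4.0 * SIGMA_T * U_CMB * gamma)  # IC cooling length [cm]
            r_larmor = gamma * M_E_C2 / (E_ESU * b_gauss)          # Larmor radius [cm]
            return gamma, e_ic_gev, d_ic / MPC, d_ic / r_larmor    # last entry: deflection [rad]

        print(cascade_numbers())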

  17. Constraining Dark Energy

    NASA Astrophysics Data System (ADS)

    Abrahamse, Augusta

    2010-12-01

    Future advances in cosmology will depend on the next generation of cosmological observations and how they shape our theoretical understanding of the universe. Current theoretical ideas, however, have an important role to play in guiding the design of such observational programs. The work presented in this thesis concerns the intersection of observation and theory, particularly as it relates to advancing our understanding of the accelerated expansion of the universe (or the dark energy). Chapters 2-4 make use of the simulated data sets developed by the Dark Energy Task Force (DETF) for a number of cosmological observations currently in the experimental pipeline. We use these forecast data in the analysis of four quintessence models of dark energy: the PNGB, Exponential, Albrecht-Skordis and Inverse Power Law (IPL) models. Using Markov Chain Monte Carlo sampling techniques we examine the ability of each simulated data set to constrain the parameter space of these models. We examine the potential of the data for differentiating time-varying models from a pure cosmological constant. Additionally, we introduce an abstract parameter space to facilitate comparison between models and investigate the ability of future data to distinguish between these quintessence models. In Chapter 5 we present work towards understanding the effects of systematic errors associated with photometric redshift estimates. Due to the need to sample a vast number of deep and faint galaxies, photometric redshifts will be used in a wide range of future cosmological observations including gravitational weak lensing, baryon acoustic oscillations and Type Ia supernovae observations. The uncertainty in the redshift distributions of galaxies has a significant potential impact on the cosmological parameter values inferred from such observations. We introduce a method for parameterizing uncertainties in modeling assumptions affecting photometric redshift calculations and for propagating these

  18. Observation of high energy atmospheric neutrinos with antarctic muon and neutrino detector array

    SciTech Connect

    Ahrens, J.; Andres, E.; Bai, X.; Barouch, G.; Barwick, S.W.; Bay, R.C.; Becka, T.; Becker, K.-H.; Bertrand, D.; Binon, F.; Biron, A.; Booth, J.; Botner, O.; Bouchta, A.; Bouhali, O.; Boyce, M.M.; Carius, S.; Chen, A.; Chirkin, D.; Conrad, J.; Cooley, J.; Costa, C.G.S.; Cowen, D.F.; Dalberg, E.; De Clercq, C.; DeYoung, T.; Desiati, P.; Dewulf, J.-P.; Doksus, P.; Edsjo, J.; Ekstrom, P.; Feser, T.; Frere, J.-M.; Gaisser, T.K.; Gaug, M.; Goldschmidt, A.; Hallgren, A.; Halzen, F.; Hanson, K.; Hardtke, R.; Hauschildt, T.; Hellwig, M.; Heukenkamp, H.; Hill, G.C.; Hulth, P.O.; Hundertmark, S.; Jacobsen, J.; Karle, A.; Kim, J.; Koci, B.; Kopke, L.; Kowalski, M.; Lamoureux, J.I.; Leich, H.; Leuthold, M.; Lindahl, P.; Liubarsky, I.; Loaiza, P.; Lowder, D.M.; Madsen, J.; Marciniewski, P.; Matis, H.S.; McParland, C.P.; Miller, T.C.; Minaeva, Y.; Miocinovic, P.; Mock, P.C.; Morse, R.; Neunhoffer, T.; Niessen, P.; Nygren, D.R.; Ogelman, H.; Olbrechts, Ph.; Perez de los Heros, C.; Pohl, A.C.; Porrata, R.; Price, P.B.; Przybylski, G.T.; Rawlins, K.; Reed, C.; Rhode, W.; Ribordy, M.; Richter, S.; Rodriguez Martino, J.; Romenesko, P.; Ross, D.; Sander, H.-G.; Schmidt, T.; Schneider, D.; Schwarz, R.; Silvestri, A.; Solarz, M.; Spiczak, G.M.; Spiering, C.; Starinsky, N.; Steele, D.; Steffen, P.; Stokstad, R.G.; Streicher, O.; Sudhoff, P.; Sulanke, K.-H.; Taboada, I.; Thollander, L.; Thon, T.; Tilav, S.; Vander Donckt, M.; Walck, C.; Weinheimer, C.; Wiebusch, C.H.; Wiedeman, C.; Wischnewski, R.; Wissing, H.; Woschnagg, K.; Wu, W.; Yodh, G.; Young, S.

    2002-05-07

    The Antarctic Muon and Neutrino Detector Array (AMANDA) began collecting data with ten strings in 1997. Results from the first year of operation are presented. Neutrinos coming through the Earth from the Northern Hemisphere are identified by secondary muons moving upward through the array. Cosmic rays in the atmosphere generate a background of downward moving muons, which are about 10^6 times more abundant than the upward moving muons. Over 130 days of exposure, we observed a total of about 300 neutrino events. In the same period, a background of 1.05 × 10^9 cosmic ray muon events was recorded. The observed neutrino flux is consistent with atmospheric neutrino predictions. Monte Carlo simulations indicate that 90 percent of these events lie in the energy range 66 GeV to 3.4 TeV. The observation of atmospheric neutrinos consistent with expectations establishes AMANDA-B10 as a working neutrino telescope.

  19. Constraining Galileon inflation

    SciTech Connect

    Regan, Donough; Anderson, Gemma J.; Hull, Matthew; Seery, David E-mail: G.Anderson@sussex.ac.uk E-mail: D.Seery@sussex.ac.uk

    2015-02-01

    In this short paper, we present constraints on the Galileon inflationary model from the CMB bispectrum. We employ a principal-component analysis of the independent degrees of freedom constrained by data and apply this to the WMAP 9-year data to constrain the free parameters of the model. A simple Bayesian comparison establishes that support for the Galileon model from bispectrum data is at best weak.

  20. Joint modeling of teleseismic and tsunami wave observations to constrain the 16 September 2015 Illapel, Chile, Mw 8.3 earthquake rupture process

    NASA Astrophysics Data System (ADS)

    Li, Linyan; Lay, Thorne; Cheung, Kwok Fai; Ye, Lingling

    2016-05-01

    The 16 September 2015 Illapel, Chile, Mw 8.3 earthquake ruptured ~170 km along the plate boundary megathrust fault from 30.0°S to 31.6°S. A patch of offshore slip of up to 10 m extended to near the trench, and a patch of ~3 m slip occurred downdip below the coast. Aftershocks fringe the large-slip zone, extending along the coast from 29.5°S to 32.5°S between the 1922 and 1971/1985 ruptures. The coseismic slip distribution is determined by iterative modeling of teleseismic body waves as well as tsunami signals recorded at three regional DART stations and tide gauges immediately north and south of the rupture. The tsunami observations tightly delimit the rupture length, suppressing bilateral southward extension of slip found in unconstrained teleseismic-wave inversions. The spatially concentrated rupture area, with a stress drop of ~3.2 MPa, is validated by modeling DART and tide gauge observations in Hawaii, which also prove sensitive to the along-strike length of the rupture.

  1. Updated Rupture Model for the M7.8 October 28, 2012, Haida Gwaii Earthquake as Constrained by GPS-Observed Displacements

    NASA Astrophysics Data System (ADS)

    Nykolaishen, L.; Dragert, H.; Wang, K.; James, T. S.; de Lange Boom, B.; Schmidt, M.; Sinnott, D.

    2014-12-01

    The M7.8 low-angle thrust earthquake off the west coast of southern Haida Gwaii on October 28, 2012, provided Canadian scientists the opportunity to study a local large thrust earthquake and has provided important information towards an improved understanding of geohazards in coastal British Columbia. Most large events along the Pacific-North America boundary in this region have involved strike-slip motion, such as the 1949 M8.1 earthquake on the Queen Charlotte Fault. In contrast along the southern portion of Haida Gwaii, the young (~8 Ma) Pacific plate crust also underthrusts North America and has been viewed as a small-scale analogy of the Cascadia Subduction Zone. Initial seismic-based rupture models for this event were improved through inclusion of GPS observed coseismic displacements, which are as large as 115 cm of horizontal motion (SSW) and 30 cm of subsidence. Additional campaign-style GPS surveys have since been repeated by the Canadian Hydrographic Service (CHS) at seven vertical reference benchmarks throughout Haida Gwaii, significantly improving the coverage of coseismic displacement observations in the region. These added offsets were typically calculated by differencing a single occupation before and after the earthquake and preliminary displacement estimates are consistent with previous GPS observations from the Geological Survey of Canada. Addition of the CHS coseismic offset estimates may allow direct inversion of the GPS data to derive a purely GPS-based rupture model. To date, cumulative postseismic displacements at six sites indicate up to 6 cm of motion, varying in azimuth between SSW and SE. Preliminary postseismic timeseries curve fitting to date has utilized a double exponential function characteristic of mantle relaxation. The current postseismic trends also suggest afterslip on the deeper plate interface beneath central Haida Gwaii along with possible induced aseismic slip on a deeper segment of the Queen Charlotte Fault located offshore
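    The double-exponential postseismic fit mentioned above is straightforward to reproduce on synthetic data; the time series, amplitudes, and decay constants below are invented for illustration and are not the Haida Gwaii observations.

        import numpy as np
        from scipy.optimize import curve_fit

        def double_exponential(t, a1, tau1, a2, tau2):
            # Two relaxation terms (e.g. fast afterslip plus slower mantle relaxation).
            return a1 * (1.0 - np.exp(-t / tau1)) + a2 * (1.0 - np.exp(-t / tau2))

        t_days = np.linspace(1.0, 700.0, 60)
        rng = np.random.default_rng(1)
        obs = double_exponential(t_days, 2.5, 30.0, 3.5, 400.0) + rng.normal(0.0, 0.2, t_days.size)

        popt, _ = curve_fit(double_exponential, t_days, obs, p0=[1.0, 20.0, 1.0, 300.0], maxfev=10000)
        print(np.round(popt, 1))    # recovered amplitudes (cm) and decay times (days)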

  2. Source Attribution and Interannual Variability of Arctic Pollution in Spring Constrained by Aircraft (ARCTAS, ARCPAC) and Satellite (AIRS) Observations of Carbon Monoxide

    NASA Technical Reports Server (NTRS)

    Fisher, J. A.; Jacob, D. J.; Purdy, M. T.; Kopacz, M.; LeSager, P.; Carouge, C.; Holmes, C. D.; Yantosca, R. M.; Batchelor, R. L.; Strong, K.; Diskin, G. S.; Fuelberg, H. E.; Holloway, J. S.; McMillan, W. W.; Warner, J.; Streets, D. G.; Zhang, Q.; Wang, Y.; Wu, S.

    2009-01-01

    We use aircraft observations of carbon monoxide (CO) from the NASA ARCTAS and NOAA ARCPAC campaigns in April 2008 together with multiyear (2003-2008) CO satellite data from the AIRS instrument and a global chemical transport model (GEOS-Chem) to better understand the sources, transport, and interannual variability of pollution in the Arctic in spring. Model simulation of the aircraft data gives best estimates of CO emissions in April 2008 of 26 Tg month^-1 for Asian anthropogenic, 9.1 for European anthropogenic, 4.2 for North American anthropogenic, 9.3 for Russian biomass burning (anomalously large that year), and 21 for Southeast Asian biomass burning. We find that Asian anthropogenic emissions are the dominant source of Arctic CO pollution everywhere except in surface air where European anthropogenic emissions are of similar importance. Synoptic pollution influences in the Arctic free troposphere include contributions of comparable magnitude from Russian biomass burning and from North American, European, and Asian anthropogenic sources. European pollution dominates synoptic variability near the surface. Analysis of two pollution events sampled by the aircraft demonstrates that AIRS is capable of observing pollution transport to the Arctic in the mid-troposphere. The 2003-2008 record of CO from AIRS shows that interannual variability averaged over the Arctic cap is very small. AIRS CO columns over Alaska are highly correlated with the Ocean Niño Index, suggesting a link between El Niño and northward pollution transport. AIRS shows lower-than-average CO columns over Alaska during April 2008, despite the Russian fires, due to a weakened Aleutian Low hindering transport from Asia and associated with the moderate 2007-2008 La Niña. This suggests that Asian pollution influence over the Arctic may be particularly large under strong El Niño conditions.

  3. Long-term observations of black carbon mass concentrations at Fukue Island, western Japan, during 2009-2015: constraining wet removal rates and emission strengths from East Asia

    NASA Astrophysics Data System (ADS)

    Kanaya, Yugo; Pan, Xiaole; Miyakawa, Takuma; Komazaki, Yuichi; Taketani, Fumikazu; Uno, Itsushi; Kondo, Yutaka

    2016-08-01

    Long-term (2009-2015) observations of atmospheric black carbon (BC) mass concentrations were performed using a continuous soot-monitoring system (COSMOS) at Fukue Island, western Japan, to provide information on wet removal rate constraints and the emission strengths of important source regions in East Asia (China and others). The annual average mass concentration was 0.36 µg m^-3, with distinct seasonality; high concentrations were recorded during autumn, winter, and spring and were caused by Asian continental outflows, which reached Fukue Island in 6-46 h. The observed data were categorized into two classes, i.e., with and without a wet removal effect, using the accumulated precipitation along a backward trajectory (APT) for the last 3 days as an index. Statistical analysis of the observed ΔBC / ΔCO ratios was performed to obtain information on the emission ratios (from data with zero APT only) and wet removal rates (including data with nonzero APTs). The estimated emission ratios (5.2-6.9 ng m^-3 ppb^-1) varied over the six air mass origin areas; the higher ratios for south-central East China (30-35° N) than for north-central East China (35-40° N) indicated the relative importance of domestic emissions and/or biomass burning sectors. The significantly higher BC / CO emission ratios adopted in the bottom-up Regional Emission inventory in Asia (REAS) version 2 (8.3-23 ng m^-3 ppb^-1) over central East China and Korea needed to be reduced at least by factors of 1.3 and 2.8 for central East China and Korea, respectively, but the ratio for Japan was reasonable. The wintertime enhancement of the BC emission from China, predicted by REAS2, was verified for air masses from south-central East China but not for those from north-central East China. Wet removal of BC was clearly identified as a decrease in the ΔBC / ΔCO ratio against APT. The transport efficiency (TE), defined as the ratio of the ΔBC / ΔCO ratio with precipitation to that without precipitation, was
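    The ratio analysis described above (dry-baseline emission ratio from zero-APT cases, transport efficiency from precipitation-affected cases) reduces to a few lines of arithmetic; the sample values and the zero-APT threshold below are invented for illustration.

        import numpy as np

        def emission_ratio_and_te(dbc_ug_m3, dco_ppb, apt_mm):
            # Dry cases (APT = 0) give the emission ratio; TE compares wet to dry.
            ratio = np.asarray(dbc_ug_m3) / np.asarray(dco_ppb)
            dry = np.asarray(apt_mm) <= 0.0
            emission_ratio = np.median(ratio[dry])
            te = np.median(ratio[~dry]) / emission_ratio
            return emission_ratio, te

        dbc = np.array([0.60, 0.55, 0.20, 0.15])   # BC enhancements [ug m^-3]
        dco = np.array([100., 90., 80., 70.])      # CO enhancements [ppb]
        apt = np.array([0.0, 0.0, 12.0, 25.0])     # accumulated precipitation [mm]
        er, te = emission_ratio_and_te(dbc, dco, apt)
        print(round(er * 1000, 1), "ng m^-3 ppb^-1;  TE =", round(te, 2))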

  4. Sensitivity testing of a 1-D calving criterion numerical model constrained by observations of post-LIA fluctuations of Kangiata Nunaata Sermia, SW Greenland

    NASA Astrophysics Data System (ADS)

    Lea, J. M.; Mair, D.; Nick, F. M.; Rea, B. R.; Schofield, E.; Nienow, P. W.

    2012-12-01

    The ability to successfully model the behaviour of Greenlandic tidewater glaciers is pivotal for the prediction of future behaviour and potential impact on global sea level. However, to have confidence in the results of numerical models, they must be capable of replicating the full range of observed glacier behaviour (i.e. both advance and retreat) when realistic forcings are applied. Due to the paucity of observational records recording this behaviour, it is therefore necessary to verify calving models against reconstructions of glacier dynamics. The dynamics of Kangiata Nunaata Sermia (KNS) can be reconstructed with a high degree of detail using a combination of sedimentological and geomorphological evidence, photographs, historical sources and satellite imagery. Since the LIA maximum, KNS has retreated a total of 21 km, with multiple phases of rapid retreat evident between topographic pinning points. A readvance attaining a position 9 km from the current terminus, associated with the '1920 stade', is also identified. KNS therefore represents an ideal test location for calving models since it has both advanced and retreated over known timescales, while the scale of fluctuations implies KNS is sensitive to the parameter(s) controlling terminus stability. Using the known stable positions for verification, we present the results of an array of sensitivity tests conducted on KNS using the 1-D flowband calving model of Nick et al. (2009). The model is initially tuned to an historically stable position where the glacier configuration is accurately known (in this case 1985), and forced by varying surface mass balance, crevasse water depth, submarine melt rate at the calving front, in addition to the strength and pervasiveness of sikussak in the fjord. Successive series of experiments were run using each parameter to test model sensitivity to the initial conditions of each variable. Results indicate that the model is capable of stabilising at locations that are in agreement with

  5. Constrained control allocation

    NASA Technical Reports Server (NTRS)

    Durham, Wayne C.

    1992-01-01

    This paper addresses the problem of the allocation of several flight controls to the generation of specified body-axis moments. The number of controls is greater than the number of moments being controlled, and the ranges of the controls are constrained to certain limits. The controls are assumed to be individually linear in their effect throughout their ranges of motion, and independent of one another in their effects. The geometries of the subset of the constrained controls and of its image in moment space are examined. A direct method of allocating these several controls is presented, that guarantees the maximum possible moment is generated within the constraints of the controls. The results are illustrated by an example problem involving three controls and two moments.
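    A minimal stand-in for the allocation problem posed here (more controls than moments, each control bounded) is a bounded least-squares solve; this is not the paper's direct allocation method, and the effectiveness matrix and limits below are invented for illustration.

        import numpy as np
        from scipy.optimize import lsq_linear

        # Control effectiveness matrix B: 2 moments x 3 controls (illustrative values).
        B = np.array([[1.0,  0.5, -0.3],
                      [0.2, -0.8,  1.1]])
        m_desired = np.array([0.4, -0.2])              # commanded body-axis moments
        lb, ub = -0.5 * np.ones(3), 0.5 * np.ones(3)   # control position limits

        # Bounded least squares: controls within limits whose moments best match the command.
        res = lsq_linear(B, m_desired, bounds=(lb, ub))
        print("controls:", np.round(res.x, 3))
        print("achieved moments:", np.round(B @ res.x, 3))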

  6. constrainedKriging: An R-package for customary, constrained and covariance-matching constrained point or block kriging

    NASA Astrophysics Data System (ADS)

    Hofer, Christoph; Papritz, Andreas

    2011-10-01

    The article describes the R-package constrainedKriging, a tool for spatial prediction problems that involve change of support. The package provides software for spatial interpolation by constrained (CK), covariance-matching constrained (CMCK), and customary universal (UK) kriging. CK and CMCK yield approximately unbiased predictions of nonlinear functionals of target quantities under change of support and are therefore an attractive alternative to conditional Gaussian simulations. The constrainedKriging package computes CK, CMCK, and UK predictions for points or blocks of arbitrary shape from data observed at points in a two-dimensional survey domain. Predictions are computed for a random process model that involves a nonstationary mean function (modeled by a linear regression) and a weakly stationary, isotropic covariance function (or variogram). CK, CMCK, and UK require the point-block and block-block averages of the covariance function if the prediction targets are blocks. The constrainedKriging package uses numerically efficient approximations to compute these averages. The article contains, apart from a brief summary of CK and CMCK, a detailed description of the algorithm used to compute the point-block and block-block covariances, and it describes the functionality of the software in detail. The practical use of the package is illustrated by a comparison of universal and constrained lognormal block kriging for the Meuse Bank heavy metal data set.
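    The point-block covariance averaging that underlies block kriging can be illustrated by brute force: discretize the block and average point-point covariances. The covariance model and geometry below are arbitrary, and the package itself uses a more efficient approximation than this direct average.

        import numpy as np

        def exp_cov(h, sill=1.0, range_=100.0):
            # Isotropic exponential covariance function (illustrative choice).
            return sill * np.exp(-np.asarray(h) / range_)

        def point_block_cov(point, block_corners, n=20, cov=exp_cov):
            # Average the point-point covariance over an n x n discretization of the block.
            (x0, y0), (x1, y1) = block_corners
            gx, gy = np.meshgrid(np.linspace(x0, x1, n), np.linspace(y0, y1, n))
            return cov(np.hypot(gx - point[0], gy - point[1])).mean()

        print(point_block_cov((50.0, 50.0), ((0.0, 0.0), (20.0, 20.0))))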

  7. Constrained noninformative priors

    SciTech Connect

    Atwood, C.L.

    1994-10-01

    The Jeffreys noninformative prior distribution for a single unknown parameter is the distribution corresponding to a uniform distribution in the transformed model where the unknown parameter is approximately a location parameter. To obtain a prior distribution with a specified mean but with diffusion reflecting great uncertainty, a natural generalization of the noninformative prior is the distribution corresponding to the constrained maximum entropy distribution in the transformed model. Examples are given.
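    The construction rests on the Jeffreys prior and its variance-stabilizing transformation, stated here in standard form (the constrained variant then replaces the flat density on the transformed parameter with the maximum-entropy density meeting the specified mean constraint):

        \[
        \pi_J(\theta) \;\propto\; \sqrt{I(\theta)}, \qquad
        \phi(\theta) = \int^{\theta}\!\sqrt{I(t)}\,dt ,
        \]

    so that the Jeffreys prior corresponds to a flat prior on the approximate location parameter φ.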

  8. The era of synoptic galactic archeology: using HST and Chandra observations to constrain the evolution of elliptical galaxies through the spatial distribution of globular clusters and X-ray binaries.

    NASA Astrophysics Data System (ADS)

    D'Abrusco, Raffaele; Fabbiano, Giuseppina; Zezas, Andreas

    2017-01-01

    Most of the stellar mass observed today in early-type galaxies is thought to be due to merging and accretion of smaller companions, but the details of these processes are still poorly constrained. Globular clusters, visible from the center to the halo of galaxies, reflect the evolution of their host galaxy in their kinematic, photometric and spatial distributions. By characterizing the spatial distribution of the population of globular clusters extracted from archival HST data of some of the most massive elliptical galaxies in the local Universe with a novel statistical approach, we recently discovered that two-dimensional spatial structures at small radii are common (D’Abrusco et al. 2014a; 2014b; 2015). Such structures, not detectable from ground-based data, can be linked to events in the evolution of the host galaxy. Moreover, we devised an interpretative framework that, based on the form, area and number of globular clusters of such structures, infers the frequency of major mergers and the mass spectrum of the accreted companions. For some of the galaxies investigated, X-ray data from Chandra joint observing programs were also available. Our method, applied to the distribution of X-ray binaries, has revealed, at least in the case of two galaxies (D’Abrusco et al. 2014a; D’Abrusco et al. 2014c), the existence of overdensities that are not associated with globular cluster structures. These findings provide complementary hints about the evolution of the stellar component of these galaxies that can be used to further refine the sequence of events that determined their growth. In this contribution, we will summarize our main results and highlight the novelty of our approach. Furthermore, we will advocate the fundamental importance of joint observations of galaxies by HST and Chandra as a way to provide unique, complementary views of such systems and unlock the mysteries of their evolution.

  9. Observation

    ERIC Educational Resources Information Center

    Helfrich, Shannon

    2016-01-01

    Helfrich addresses two perspectives from which to think about observation in the classroom: that of the teacher observing her classroom, her group, and its needs, and that of the outside observer coming into the classroom. Offering advice from her own experience, she encourages and defends both. Do not be afraid of the disruption of outside…

  10. Observations

    ERIC Educational Resources Information Center

    Joosten, Albert Max

    2016-01-01

    Joosten begins his article by telling us that love and knowledge together are the foundation for our work with children. This combination is at the heart of our observation. With this as the foundation, he goes on to offer practical advice to aid our practice of observation. He offers a "List of Objects of Observation" to help guide our…

  11. Exploring constrained quantum control landscapes

    NASA Astrophysics Data System (ADS)

    Moore, Katharine W.; Rabitz, Herschel

    2012-10-01

    The broad success of optimally controlling quantum systems with external fields has been attributed to the favorable topology of the underlying control landscape, where the landscape is the physical observable as a function of the controls. The control landscape can be shown to contain no suboptimal trapping extrema upon satisfaction of reasonable physical assumptions, but this topological analysis does not hold when significant constraints are placed on the control resources. This work employs simulations to explore the topology and features of the control landscape for pure-state population transfer with a constrained class of control fields. The fields are parameterized in terms of a set of uniformly spaced spectral frequencies, with the associated phases acting as the controls. This restricted family of fields provides a simple illustration for assessing the impact of constraints upon seeking optimal control. Optimization results reveal that the minimum number of phase controls necessary to assure a high yield in the target state has a special dependence on the number of accessible energy levels in the quantum system, revealed from an analysis of the first- and second-order variation of the yield with respect to the controls. When an insufficient number of controls and/or a weak control fluence are employed, trapping extrema and saddle points are observed on the landscape. When the control resources are sufficiently flexible, solutions producing the globally maximal yield are found to form connected "level sets" of continuously variable control fields that preserve the yield. These optimal yield level sets are found to shrink to isolated points on the top of the landscape as the control field fluence is decreased, and further reduction of the fluence turns these points into suboptimal trapping extrema on the landscape. Although constrained control fields can come in many forms beyond the cases explored here, the behavior found in this paper is illustrative of
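    The setup described here (a field whose only free parameters are spectral phases, with the target-state yield as the landscape) can be sampled directly for a small system. The three-level Hamiltonian, frequencies, amplitude, and grid below are arbitrary choices for illustration, and the propagation is a short, coarse sketch rather than a converged simulation.

        import numpy as np
        from scipy.linalg import expm

        H0 = np.diag([0.0, 1.0, 2.1])                    # illustrative three-level ladder
        mu = np.array([[0.0, 1.0, 0.0],
                       [1.0, 0.0, 1.0],
                       [0.0, 1.0, 0.0]])                 # dipole coupling
        omegas = np.array([1.0, 1.1])                    # uniformly spaced spectral frequencies
        T, n_steps = 100.0, 1000
        dt = T / n_steps

        def yield_for_phases(phases, amp=0.02):
            # Propagate |0> under E(t) = amp * sum_k cos(w_k t + phi_k); return population of |2>.
            psi = np.array([1.0, 0.0, 0.0], dtype=complex)
            for i in range(n_steps):
                t = (i + 0.5) * dt
                field = amp * np.sum(np.cos(omegas * t + phases))
                psi = expm(-1j * (H0 - mu * field) * dt) @ psi
            return np.abs(psi[2])**2

        grid = np.linspace(0.0, 2 * np.pi, 5)            # coarse sample of the two phase controls
        landscape = [[yield_for_phases(np.array([p1, p2])) for p2 in grid] for p1 in grid]
        print(np.round(np.max(landscape), 3))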

  12. Constrained space camera assembly

    DOEpatents

    Heckendorn, Frank M.; Anderson, Erin K.; Robinson, Casandra W.; Haynes, Harriet B.

    1999-01-01

    A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras.

  13. Observation

    ERIC Educational Resources Information Center

    Kripalani, Lakshmi A.

    2016-01-01

    The adult who is inexperienced in the art of observation may, even with the best intentions, react to a child's behavior in a way that hinders instead of helping the child's development. Kripalani outlines the need for training and practice in observation in order to "understand the needs of the children and...to understand how to remove…

  14. Constraining the Europa Neutral Torus

    NASA Astrophysics Data System (ADS)

    Smith, Howard T.; Mitchell, Donald; Mauk, Barry; Johnson, Robert E.; Clark, George

    2016-10-01

    "Neutral tori" consist of neutral particles that usually co-orbit along with their source forming a toroidal (or partial toroidal) feature around the planet. The distribution and composition of these features can often provide important, if not unique, insight into magnetospheric particles sources, mechanisms and dynamics. However, these features can often be difficult to directly detect. One innovative method for detecting neutral tori is by observing Energetic Neutral Atoms (ENAs) that are generally considered produced as a result of charge exchange interactions between charged and neutral particles.Mauk et al. (2003) reported the detection of a Europa neutral particle torus using ENA observations. The presence of a Europa torus has extremely large implications for upcoming missions to Jupiter as well as understanding possible activity at this moon and providing critical insight into what lies beneath the surface of this icy ocean world. However, ENAs can also be produced as a result of charge exchange interactions between two ionized particles and in that case cannot be used to infer the presence of neutral particle population. Thus, a detailed examination of all possible source interactions must be considered before one can confirm that likely original source population of these ENA images is actually a Europa neutral particle torus. For this talk, we examine the viability that the Mauk et al. (2003) observations were actually generated from a neutral torus emanating from Europa as opposed to charge particle interactions with plasma originating from Io. These results help constrain such a torus as well as Europa source processes.

  15. Constraining QGP properties with CHIMERA

    NASA Astrophysics Data System (ADS)

    Garishvili, Irakli; Abelev, Betty; Cheng, Michael; Glenn, Andrew; Soltz, Ron

    2011-10-01

    Understanding essential properties of strongly interacting matter is arguably the most important goal of the relativistic heavy-ion programs both at RHIC and the LHC. In particular, constraining observables such as ratio of shear viscosity to entropy density, η/s, initial temperature, Tinit, and energy density is of critical importance. For this purpose we have developed CHIMERA, Comprehensive Heavy Ion Model Reporting and Evaluation Algorithm. CHIMERA is designed to facilitate global statistical comparison of results from our multi-stage hydrodynamics/hadron cascade model of heavy ion collisions to the key soft observables (HBT, elliptic flow, spectra) measured at RHIC and the LHC. Within this framework the data representing multiple different measurements from different experiments are compiled into single format. One of the unique features of CHIMERA is, that in addition to taking into account statistical errors, it also treats different types of systematic uncertainties. The hydrodynamics/hadron cascade model used in the framework incorporates different initial state conditions, pre-equilibrium flow, the UVH2+1 viscous hydro model, Cooper-Frye freezeout, and the UrQMD hadronic cascade model. The sensitivity of the observables to the equation of state (EoS) is explored using several EoS's in the hydrodynamic evolution. The latest results from CHIMERA, including data from the LHC, will be presented.
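    One common way to fold both error types into a goodness-of-fit statistic, adding per-point statistical and uncorrelated systematic uncertainties in quadrature, is sketched below; CHIMERA's actual treatment of systematic uncertainties (including correlated ones) is more elaborate, and the numbers are invented.

        import numpy as np

        def chi2_with_systematics(model, data, stat_err, sys_err):
            # Per-point statistical and systematic uncertainties combined in quadrature.
            var = np.asarray(stat_err)**2 + np.asarray(sys_err)**2
            return np.sum((np.asarray(data) - np.asarray(model))**2 / var)

        data     = np.array([0.055, 0.075, 0.095])   # e.g. v2 in three centrality bins
        model    = np.array([0.050, 0.070, 0.100])
        stat_err = np.array([0.002, 0.002, 0.003])
        sys_err  = np.array([0.004, 0.004, 0.004])
        print(round(chi2_with_systematics(model, data, stat_err, sys_err), 2))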

  16. Constrained space camera assembly

    DOEpatents

    Heckendorn, F.M.; Anderson, E.K.; Robinson, C.W.; Haynes, H.B.

    1999-05-11

    A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity is disclosed. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras. 17 figs.

  17. Power-constrained supercomputing

    NASA Astrophysics Data System (ADS)

    Bailey, Peter E.

    As we approach exascale systems, power is turning from an optimization goal to a critical operating constraint. With power bounds imposed by both stakeholders and the limitations of existing infrastructure, achieving practical exascale computing will therefore rely on optimizing performance subject to a power constraint. However, this requirement should not add to the burden of application developers; optimizing the runtime environment given restricted power will primarily be the job of high-performance system software. In this dissertation, we explore this area and develop new techniques that extract maximum performance subject to a particular power constraint. These techniques include a method to find theoretical optimal performance, a runtime system that shifts power in real time to improve performance, and a node-level prediction model for selecting power-efficient operating points. We use a linear programming (LP) formulation to optimize application schedules under various power constraints, where a schedule consists of a DVFS state and number of OpenMP threads for each section of computation between consecutive message passing events. We also provide a more flexible mixed integer-linear (ILP) formulation and show that the resulting schedules closely match schedules from the LP formulation. Across four applications, we use our LP-derived upper bounds to show that current approaches trail optimal, power-constrained performance by up to 41%. This demonstrates limitations of current systems, and our LP formulation provides future optimization approaches with a quantitative optimization target. We also introduce Conductor, a run-time system that intelligently distributes available power to nodes and cores to improve performance. The key techniques used are configuration space exploration and adaptive power balancing. Configuration exploration dynamically selects the optimal thread concurrency level and DVFS state subject to a hardware-enforced power bound
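    The LP idea can be sketched on a toy problem: fractional assignment of each computation phase to a configuration, minimizing total runtime while keeping the schedule's average power under the bound. The timing/power table and cap below are invented, and the paper's full formulation (and the ILP variant) is richer than this sketch.

        import numpy as np
        from scipy.optimize import linprog

        # Hypothetical per-phase measurements: rows = phases, columns = configurations.
        time_s  = np.array([[10.0, 12.5, 16.0],
                            [ 8.0,  9.0, 11.0]])
        power_w = np.array([[95.0, 75.0, 55.0],
                            [90.0, 70.0, 50.0]])
        p_cap = 70.0                                   # node power bound [W]
        n_phase, n_cfg = time_s.shape

        # x[i, j] = fraction of phase i run in configuration j (LP relaxation of the schedule).
        c = time_s.ravel()                                          # minimize total runtime
        a_pow = (time_s * (power_w - p_cap)).ravel()[None, :]       # average power <= p_cap
        a_eq = np.zeros((n_phase, n_phase * n_cfg))
        for i in range(n_phase):
            a_eq[i, i * n_cfg:(i + 1) * n_cfg] = 1.0                # each phase fully assigned
        res = linprog(c, A_ub=a_pow, b_ub=[0.0], A_eq=a_eq, b_eq=np.ones(n_phase), bounds=(0.0, 1.0))
        print("runtime [s]:", round(res.fun, 2))
        print("schedule:", np.round(res.x.reshape(n_phase, n_cfg), 2))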

  18. Constrained Vapor Bubble

    NASA Technical Reports Server (NTRS)

    Huang, J.; Karthikeyan, M.; Plawsky, J.; Wayner, P. C., Jr.

    1999-01-01

    The nonisothermal Constrained Vapor Bubble, CVB, is being studied to enhance the understanding of passive systems controlled by interfacial phenomena. The study is multifaceted: 1) it is a basic scientific study in interfacial phenomena, fluid physics and thermodynamics; 2) it is a basic study in thermal transport; and 3) it is a study of a heat exchanger. The research is synergistic in that CVB research requires a microgravity environment and the space program needs thermal control systems like the CVB. Ground-based studies are being done as a precursor to the flight experiment. The results demonstrate that experimental techniques for the direct measurement of the fundamental operating parameters (temperature, pressure, and interfacial curvature fields) have been developed. Fluid flow and change-of-phase heat transfer are a function of the temperature field and the vapor bubble shape, which can be measured using an Image Analyzing Interferometer. The CVB for a microgravity environment has various thin-film regions that are of both basic and applied interest. Generically, a CVB is formed by underfilling an evacuated enclosure with a liquid. Classification depends on shape and Bond number. The specific CVB discussed herein was formed in a fused silica cell with inside dimensions of 3 × 3 × 40 mm and, therefore, can be viewed as a large version of a micro heat pipe. Since the dimensions are relatively large for a passive system, most of the liquid flow occurs under a small capillary pressure difference. Therefore, we can classify the discussed system as a low capillary pressure system. The studies discussed herein were done in a 1-g environment (Bond Number = 3.6) to obtain experience to design a microgravity experiment for a future NASA flight where low capillary pressure systems should prove more useful. The flight experiment is tentatively scheduled for the year 2000. The SCR was passed on September 16, 1997. The RDR is tentatively scheduled for October 1998.

  19. BICEP2 constrains composite inflation

    NASA Astrophysics Data System (ADS)

    Channuie, Phongpichit

    2014-07-01

    In light of BICEP2, we re-examine single field inflationary models in which the inflaton is a composite state stemming from various four-dimensional strongly coupled theories. We study in the Einstein frame a set of cosmological parameters, the primordial spectral index n_s and tensor-to-scalar ratio r, predicted by such models. We confront the predicted results with the joint Planck data, and with the recent BICEP2 data. We constrain the number of e-foldings for composite models of inflation in order to obtain successful inflation. We find that the minimal composite inflationary model is fully consistent with the Planck data. However it is in tension with the recent BICEP2 data. The observables predicted by the glueball inflationary model can be consistent with both Planck and BICEP2 contours if a suitable number of e-foldings are chosen. Surprisingly, the super Yang-Mills inflationary prediction is significantly consistent with the Planck and BICEP2 observations.

  20. Constraining Neutron Star Matter with Quantum Chromodynamics

    NASA Astrophysics Data System (ADS)

    Kurkela, Aleksi; Fraga, Eduardo S.; Schaffner-Bielich, Jürgen; Vuorinen, Aleksi

    2014-07-01

    In recent years, there have been several successful attempts to constrain the equation of state of neutron star matter using input from low-energy nuclear physics and observational data. We demonstrate that significant further restrictions can be placed by additionally requiring the pressure to approach that of deconfined quark matter at high densities. Remarkably, the new constraints turn out to be highly insensitive to the amount—or even presence—of quark matter inside the stars.

  1. Order-constrained linear optimization.

    PubMed

    Tidwell, Joe W; Dougherty, Michael R; Chrabaszcz, Jeffrey S; Thomas, Rick P

    2017-02-27

    Despite the fact that data and theories in the social, behavioural, and health sciences are often represented on an ordinal scale, there has been relatively little emphasis on modelling ordinal properties. The most common analytic framework used in psychological science is the general linear model, whose variants include ANOVA, MANOVA, and ordinary linear regression. While these methods are designed to provide the best fit to the metric properties of the data, they are not designed to maximally model ordinal properties. In this paper, we develop an order-constrained linear least-squares (OCLO) optimization algorithm that maximizes the linear least-squares fit to the data conditional on maximizing the ordinal fit based on Kendall's τ. The algorithm builds on the maximum rank correlation estimator (Han, 1987, Journal of Econometrics, 35, 303) and the general monotone model (Dougherty & Thomas, 2012, Psychological Review, 119, 321). Analyses of simulated data indicate that when modelling data that adhere to the assumptions of ordinary least squares, OCLO shows minimal bias, little increase in variance, and almost no loss in out-of-sample predictive accuracy. In contrast, under conditions in which data include a small number of extreme scores (fat-tailed distributions), OCLO shows less bias and variance, and substantially better out-of-sample predictive accuracy, even when the outliers are removed. We show that the advantages of OCLO over ordinary least squares in predicting new observations hold across a variety of scenarios in which researchers must decide to retain or eliminate extreme scores when fitting data.
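
    The two-step idea behind OCLO (maximize the ordinal fit first, then fix the metric fit) can be roughly illustrated as follows. This sketch is not the published algorithm, which builds on the maximum rank correlation estimator; the crude random search over coefficient directions, the simulated fat-tailed data, and all settings are invented for illustration only.

      # Illustrative two-step fit: rank-correlation-optimal direction, then OLS rescaling.
      import numpy as np
      from scipy.stats import kendalltau

      rng = np.random.default_rng(0)
      n, p = 200, 3
      X = rng.normal(size=(n, p))
      y = X @ np.array([1.0, -2.0, 0.5]) + rng.standard_t(df=2, size=n)  # fat-tailed noise

      best_tau, best_w = -np.inf, None
      for _ in range(2000):                  # crude random search over unit directions
          w = rng.normal(size=p)
          w /= np.linalg.norm(w)
          tau = kendalltau(X @ w, y)[0]      # ordinal fit of the linear predictor
          if tau > best_tau:
              best_tau, best_w = tau, w

      z = X @ best_w                         # rescale the best-ranked predictor by least squares
      scale, intercept = np.linalg.lstsq(np.column_stack([z, np.ones(n)]), y, rcond=None)[0]
      print("tau:", round(best_tau, 3), "coefficients:", scale * best_w, "intercept:", intercept)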

  2. Gyrification from constrained cortical expansion

    PubMed Central

    Tallinen, Tuomas; Chung, Jun Young; Biggins, John S.; Mahadevan, L.

    2014-01-01

    The exterior of the mammalian brain—the cerebral cortex—has a conserved layered structure whose thickness varies little across species. However, selection pressures over evolutionary time scales have led to cortices that have a large surface area to volume ratio in some organisms, with the result that the brain is strongly convoluted into sulci and gyri. Here we show that the gyrification can arise as a nonlinear consequence of a simple mechanical instability driven by tangential expansion of the gray matter constrained by the white matter. A physical mimic of the process using a layered swelling gel captures the essence of the mechanism, and numerical simulations of the brain treated as a soft solid lead to the formation of cusped sulci and smooth gyri similar to those in the brain. The resulting gyrification patterns are a function of relative cortical expansion and relative thickness (compared with brain size), and are consistent with observations of a wide range of brains, ranging from smooth to highly convoluted. Furthermore, this dependence on two simple geometric parameters that characterize the brain also allows us to qualitatively explain how variations in these parameters lead to anatomical anomalies in such situations as polymicrogyria, pachygyria, and lissencephalia. PMID:25136099

  3. Constrained Allocation Flux Balance Analysis

    PubMed Central

    Mori, Matteo; Hwa, Terence; Martin, Olivier C.

    2016-01-01

    New experimental results on bacterial growth inspire a novel top-down approach to study cell metabolism, combining mass balance and proteomic constraints to extend and complement Flux Balance Analysis. We introduce here Constrained Allocation Flux Balance Analysis, CAFBA, in which the biosynthetic costs associated with growth are accounted for in an effective way through a single additional genome-wide constraint. Its roots lie in the experimentally observed pattern of proteome allocation for metabolic functions, allowing one to bridge regulation and metabolism in a transparent way under the principle of growth-rate maximization. We provide a simple method to solve CAFBA efficiently and propose an “ensemble averaging” procedure to account for unknown protein costs. Applying this approach to modeling E. coli metabolism, we find that, as the growth rate increases, CAFBA solutions cross over from respiratory, growth-yield maximizing states (preferred at slow growth) to fermentative states with carbon overflow (preferred at fast growth). In addition, CAFBA allows for quantitatively accurate predictions on the rate of acetate excretion and growth yield based on only 3 parameters determined by empirical growth laws. PMID:27355325
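
    The "single additional genome-wide constraint" can be pictured with a deliberately tiny toy linear program in the spirit of CAFBA. This is not the paper's genome-scale E. coli model: the four-flux network, the yields, and the proteome cost weights are invented, and the real method ties those costs to empirical growth laws.

      # Toy FBA with one extra proteome-allocation constraint (hypothetical network).
      import numpy as np
      from scipy.optimize import linprog

      # flux variables: [v_uptake, v_resp, v_ferm, v_growth], all irreversible
      A_eq = np.array([[1.0, -1.0, -1.0,  0.0],    # carbon balance: uptake = resp + ferm
                       [0.0,  0.6,  0.2, -1.0]])   # growth = 0.6*resp + 0.2*ferm (toy yields)
      b_eq = np.zeros(2)

      w = np.array([0.05, 0.30, 0.10, 0.40])       # hypothetical proteome cost per unit flux
      A_ub = w.reshape(1, -1)                      # single genome-wide constraint: w . v <= 1
      b_ub = np.array([1.0])

      c = np.array([0.0, 0.0, 0.0, -1.0])          # maximize the growth flux
      res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
      print("growth rate:", -res.fun, "fluxes:", res.x)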

  4. Constraining the mass of the Local Group

    NASA Astrophysics Data System (ADS)

    Carlesi, Edoardo; Hoffman, Yehuda; Sorce, Jenny G.; Gottlöber, Stefan

    2017-03-01

    The mass of the Local Group (LG) is a crucial parameter for galaxy formation theories. However, its observational determination is challenging - its mass budget is dominated by dark matter that cannot be directly observed. To this end, the posterior distributions of the LG and its massive constituents have been constructed by means of constrained and random cosmological simulations. Two priors are assumed - the Λ cold dark matter model that is used to set up the simulations, and an LG model that encodes the observational knowledge of the LG and is used to select LG-like objects from the simulations. The constrained simulations are designed to reproduce the local cosmography as it is imprinted on to the Cosmicflows-2 data base of velocities. Several prescriptions are used to define the LG model, focusing in particular on different recent estimates of the tangential velocity of M31. It is found that (a) different v_tan choices affect the peak mass values up to a factor of 2, and change mass ratios of M_M31 to M_MW by up to 20 per cent; (b) constrained simulations yield more sharply peaked posterior distributions compared with the random ones; (c) LG mass estimates are found to be smaller than those found using the timing argument; (d) preferred Milky Way masses lie in the range of (0.6-0.8) × 10^12 M⊙; whereas (e) M_M31 is found to vary between (1.0-2.0) × 10^12 M⊙, with a strong dependence on the v_tan values used.

  5. Constraining curvatonic reheating

    NASA Astrophysics Data System (ADS)

    Hardwick, Robert J.; Vennin, Vincent; Koyama, Kazuya; Wands, David

    2016-08-01

    We derive the first systematic observational constraints on reheating in models of inflation where an additional light scalar field contributes to primordial density perturbations and affects the expansion history during reheating. This encompasses the original curvaton model but also covers a larger class of scenarios. We find that, compared to the single-field case, lower values of the energy density at the end of inflation and of the reheating temperature are preferred when an additional scalar field is introduced. For instance, if inflation is driven by a quartic potential, which is one of the most favoured models when a light scalar field is added, the upper bound T_reh < 5 × 10^4 GeV on the reheating temperature T_reh is derived, and the implications of this value on post-inflationary physics are discussed. The information gained about reheating is also quantified and it is found that it remains modest in plateau inflation (though still larger than in the single-field version of the model) but can become substantial in quartic inflation. The role played by the vev of the additional scalar field at the end of inflation is highlighted, and opens interesting possibilities for exploring stochastic inflation effects that could determine its distribution.

  6. Optimization of retinotopy constrained source estimation constrained by prior

    PubMed Central

    Hagler, Donald J.

    2015-01-01

    Studying how the timing and amplitude of visual evoked responses (VERs) vary between visual areas is important for understanding visual processing but is complicated by difficulties in reliably estimating VERs in individual visual areas using non-invasive brain measurements. Retinotopy constrained source estimation (RCSE) addresses this challenge by using multiple, retinotopically-mapped stimulus locations to simultaneously constrain estimates of VERs in visual areas V1, V2, and V3, taking advantage of the spatial precision of fMRI retinotopy and the temporal resolution of magnetoencephalography (MEG) or electroencephalography (EEG). Nonlinear optimization of dipole locations, guided by a group-constrained RCSE solution as a prior, improved the robustness of RCSE. This approach facilitated the analysis of differences in timing and amplitude of VERs between V1, V2, and V3, elicited by stimuli with varying luminance contrast in a sample of eight adult humans. The V1 peak response was 37% larger than that of V2 and 74% larger than that of V3, and also ~10–20 msec earlier. Normalized contrast response functions were nearly identical for the three areas. Results without dipole optimization, or with other nonlinear methods not constrained by prior estimates were similar but suffered from greater between-subject variability. The increased reliability of estimates offered by this approach may be particularly valuable when using a smaller number of stimulus locations, enabling a greater variety of stimulus and task manipulations. PMID:23868690

  7. CONSTRAINING SOURCE REDSHIFT DISTRIBUTIONS WITH GRAVITATIONAL LENSING

    SciTech Connect

    Wittman, D.; Dawson, W. A.

    2012-09-10

    We introduce a new method for constraining the redshift distribution of a set of galaxies, using weak gravitational lensing shear. Instead of using observed shears and redshifts to constrain cosmological parameters, we ask how well the shears around clusters can constrain the redshifts, assuming fixed cosmological parameters. This provides a check on photometric redshifts, independent of source spectral energy distribution properties and therefore free of confounding factors such as misidentification of spectral breaks. We find that ~40 massive (σ_v = 1200 km s^-1) cluster lenses are sufficient to determine the fraction of sources in each of six coarse redshift bins to ~11%, given weak (20%) priors on the masses of the highest-redshift lenses, tight (5%) priors on the masses of the lowest-redshift lenses, and only modest (20%-50%) priors on calibration and evolution effects. Additional massive lenses drive down uncertainties as N_lens^(-1/2), but the improvement slows as one is forced to use lenses further down the mass function. Future large surveys contain enough clusters to reach 1% precision in the bin fractions if the tight lens-mass priors can be maintained for large samples of lenses. In practice this will be difficult to achieve, but the method may be valuable as a complement to other more precise methods because it is based on different physics and therefore has different systematic errors.

  8. Constrained inversion of seismo-volcanic events

    NASA Astrophysics Data System (ADS)

    Nocerino, Luciano; D'Auria, Luca; Giudicepietro, Flora; Martini, Marcello

    2014-05-01

    The inversion of seismo-volcanic events is performed to retrieve the source geometry and to determine volumetric budgets of the source. Such observations have proven to be an important tool for the seismological monitoring of volcanoes. We developed a novel technique for the non-linear constrained inversion of low frequency seismo-volcanic events. Unconstrained linear inversion methods work well when a dense network of broadband seismometers is available. We propose a new constrained inversion technique, which has proven efficient even with a reduced network configuration and a low signal-to-noise ratio. The waveform inversion is performed in the frequency domain, constraining the source mechanism during the event to vary only in its magnitude. The eigenvector orientations and the eigenvalue ratio are kept constant. This significantly reduces the number of parameters to invert, making the procedure more stable. The method has been tested over a synthetic dataset reproducing realistic very-long-period (VLP) signals at Stromboli volcano. We have applied the method to a VLP dataset recorded on Stromboli volcano and to low-frequency earthquakes recorded on Mt. Vesuvius.

  9. Compositionally Constraining Elysium Lava Fields

    NASA Astrophysics Data System (ADS)

    Karunatillake, S.; Button, N. E.; Skok, J. R.

    2013-12-01

    Chemical provinces of Mars defined recently [1-3] became possible with the maps of elemental mass fractions generated with Mars Odyssey Gamma and Neutron Spectrometer (GS) data [4,5]. These provide a unique perspective by representing compositional signatures distinctive of the regolith vertically at decimeter depths and laterally at hundreds of kilometer scale. Some provinces overlap compellingly with regions highlighted by other remote sensing observations, such as the Mars Radar Stealth area [3]. The spatial convergence of mutually independent data with the consequent highlight of a region provides a unique opportunity of insight not possible with a single type of remote sensing observation. Among such provinces, previous work [3] highlighted Elysium lava flows as a promising candidate on the basis of convergence with mapped geologic units identifying Elysium's lava fields generally, and Amazonian-aged lava flows specifically. The South Eastern lava flows of Elysium Mons, dating to the recent Amazonian epoch, overlap compellingly with a chemical province of K and Th depletion relative to the Martian midlatitudes. We characterize the composition, geology, and geomorphology of the SE Elysium province to constrain the confluence of geologic and alteration processes that may have contributed to its evolution. We compare this with the North Western lava fields, extending the discussion on chemical products from the thermal evolution of Martian volcanism as discussed by Baratoux et al. [6]. The chemical province, by regional proximity to Cerberus Fossae, may also reflect the influence of recently identified buried flood channels [7] in the vicinity of Orcus Patera. Despite the compelling chemical signature from γ spectra, fine grained unconsolidated sediment hampers regional VNTIR (Visible, Near, and Thermal Infrared) spectral analysis. But some observations near scarps and fresh craters allow a view of small scale mineral content. The judicious synthesis of

  10. COBS: COnstrained B-Splines

    NASA Astrophysics Data System (ADS)

    Ng, Pin T.; Maechler, Martin

    2015-05-01

    COBS (COnstrained B-Splines), written in R, creates constrained regression smoothing splines via linear programming and sparse matrices. The method has two important features: the number and location of knots for the spline fit are established using the likelihood-based Akaike Information Criterion (rather than a heuristic procedure); and fits can be made for quantiles (e.g. 25% and 75% as well as the usual 50%) in the response variable, which is valuable when the scatter is asymmetrical or non-Gaussian. This code is useful, for example, for estimating cluster ages when there is a wide spread in stellar ages at a chosen absorption, as a standard regression line does not give an effective measure of this relationship.

  11. Constraining the Mass of A Galaxy Cluster

    NASA Astrophysics Data System (ADS)

    Cemenenkoff, Nicholas; Rines, Kenneth J.; Geller, Margaret J.; Diaferio, Antonaldo

    2017-01-01

    Accurate cluster masses are critical for understanding dark matter and for using clusters to constrain cosmological parameters. We use the observed surface number density profile and velocity dispersion profile of galaxies in the Coma cluster to constrain its mass profile via Jeans analysis. In particular, we evaluate the robustness of the mass estimate M_200 by using different parametric forms for the distribution of mass and galaxies as well as different models of the orbital anisotropy parameter β(r). Allowing for variation between the scale radii of the mass profile and the galaxy profile (i.e. relaxing the assumption that galaxies trace mass) does not significantly change the estimate of M_200. We use a Bayesian approach to construct probability distribution functions of M_200, scale radius, and β via Markov Chain Monte Carlo (MCMC) sampling. We apply this approach to ensemble clusters stacked by either their Sunyaev-Zel'dovich (SZ) signals or X-ray luminosities to measure the scaling relations of dynamical mass estimates with these mass proxies. Specifically, we test the hypothesis that the apparent deficit of SZ clusters (compared to predictions based on observations of the microwave background) can be explained by a bias of ~60% in the normalization of the scaling relation between SZ signal and mass.
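
    For reference, the spherical Jeans equation underlying this kind of analysis is the standard textbook relation (quoted here as background, not as a detail of the abstract) between the tracer number density ν(r), the radial velocity dispersion σ_r(r), the anisotropy β(r), and the enclosed mass M(r):

      \frac{d\left(\nu\,\sigma_r^{2}\right)}{dr} + \frac{2\,\beta(r)}{r}\,\nu\,\sigma_r^{2}
        = -\,\nu(r)\,\frac{G\,M(r)}{r^{2}},
      \qquad
      \beta(r) \equiv 1 - \frac{\sigma_\theta^{2}(r)}{\sigma_r^{2}(r)} ,

    so the observed density and dispersion profiles constrain M(r) only up to the assumed anisotropy model, which is why the robustness tests over different β(r) forms matter.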

  12. Constraining torsion with Gravity Probe B

    SciTech Connect

    Mao Yi; Guth, Alan H.; Cabi, Serkan; Tegmark, Max

    2007-11-15

    It is well-entrenched folklore that all torsion gravity theories predict observationally negligible torsion in the solar system, since torsion (if it exists) couples only to the intrinsic spin of elementary particles, not to rotational angular momentum. We argue that this assumption has a logical loophole which can and should be tested experimentally, and consider nonstandard torsion theories in which torsion can be generated by macroscopic rotating objects. In the spirit of action=reaction, if a rotating mass like a planet can generate torsion, then a gyroscope would be expected to feel torsion. An experiment with a gyroscope (without nuclear spin) such as Gravity Probe B (GPB) can test theories where this is the case. Using symmetry arguments, we show that to lowest order, any torsion field around a uniformly rotating spherical mass is determined by seven dimensionless parameters. These parameters effectively generalize the parametrized post-Newtonian formalism and provide a concrete framework for further testing Einstein's general theory of relativity (GR). We construct a parametrized Lagrangian that includes both standard torsion-free GR and Hayashi-Shirafuji maximal torsion gravity as special cases. We demonstrate that classic solar system tests rule out the latter and constrain two observable parameters. We show that Gravity Probe B is an ideal experiment for further constraining nonstandard torsion theories, and work out the most general torsion-induced precession of its gyroscope in terms of our torsion parameters.

  13. How alive is constrained SUSY really?

    SciTech Connect

    Bechtle, Philip; Desch, Klaus; Dreiner, Herbert K.; Hamer, Matthias; Kramer, Michael; O'Leary, Ben; Porod, Werner; Sarrazin, Bjorn; Stefaniak, Tim; Uhlenbrock, Mathias; Wienemann, Peter

    2016-05-31

    Constrained supersymmetric models like the CMSSM might look less attractive nowadays because of fine tuning arguments. They also might look less probable in terms of Bayesian statistics. The question how well the model under study describes the data, however, is answered by frequentist p-values. Thus, for the first time, we calculate a p-value for a supersymmetric model by performing dedicated global toy fits. We combine constraints from low-energy and astrophysical observables, Higgs boson mass and rate measurements as well as the non-observation of new physics in searches for supersymmetry at the LHC. Furthermore, using the framework Fittino, we perform global fits of the CMSSM to the toy data and find that this model is excluded at the 90% confidence level.

  14. How alive is constrained SUSY really?

    DOE PAGES

    Bechtle, Philip; Desch, Klaus; Dreiner, Herbert K.; ...

    2016-05-31

    Constrained supersymmetric models like the CMSSM might look less attractive nowadays because of fine tuning arguments. They also might look less probable in terms of Bayesian statistics. The question how well the model under study describes the data, however, is answered by frequentist p-values. Thus, for the first time, we calculate a p-value for a supersymmetric model by performing dedicated global toy fits. We combine constraints from low-energy and astrophysical observables, Higgs boson mass and rate measurements as well as the non-observation of new physics in searches for supersymmetry at the LHC. Furthermore, using the framework Fittino, we perform global fits of the CMSSM to the toy data and find that this model is excluded at the 90% confidence level.

  15. Method of constrained global optimization

    NASA Astrophysics Data System (ADS)

    Altschuler, Eric Lewin; Williams, Timothy J.; Ratner, Edward R.; Dowla, Farid; Wooten, Frederick

    1994-04-01

    We present a new method for optimization: constrained global optimization (CGO). CGO iteratively uses a Glauber spin flip probability and the Metropolis algorithm. The spin flip probability allows changing only the values of variables contributing excessively to the function to be minimized. We illustrate CGO with two problems, Thomson's problem of finding the minimum-energy configuration of unit charges on a spherical surface and a problem of assigning offices, for which CGO finds better minima than other methods. We think CGO will apply to a wide class of optimization problems.
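
    A schematic sketch of the CGO idea applied to Thomson's problem follows. The selection rule, step size, and cooling schedule below are simplified guesses rather than the paper's recipe; the essential ingredient kept from the abstract is that moves are proposed preferentially for the variable (here, the charge) contributing most to the objective, and are then accepted with the Metropolis rule.

      # Schematic CGO-style search for the Thomson problem (simplified, see note above).
      import numpy as np

      rng = np.random.default_rng(1)
      N = 12                                        # unit charges on the unit sphere

      def normalize(p):
          return p / np.linalg.norm(p, axis=-1, keepdims=True)

      def contributions(pts):
          d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
          np.fill_diagonal(d, np.inf)
          return (1.0 / d).sum(axis=1)              # each charge's share of the Coulomb energy

      pts = normalize(rng.normal(size=(N, 3)))
      T = 0.05
      for step in range(20000):
          contrib = contributions(pts)
          weight = np.exp(contrib - contrib.max())  # soft preference for the worst offender
          i = rng.choice(N, p=weight / weight.sum())
          trial = pts.copy()
          trial[i] = normalize(trial[i] + 0.1 * rng.normal(size=3))
          dE = 0.5 * (contributions(trial).sum() - contrib.sum())
          if dE < 0 or rng.random() < np.exp(-dE / T):
              pts = trial
          T = max(1e-4, T * 0.9997)                 # simple geometric cooling
      print("final Coulomb energy for N = 12:", 0.5 * contributions(pts).sum())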

  16. Constraining walking and custodial technicolor

    SciTech Connect

    Foadi, Roshan; Frandsen, Mads T.; Sannino, Francesco

    2008-05-01

    We show how to constrain the physical spectrum of walking technicolor models via precision measurements and modified Weinberg sum rules. We also study models possessing a custodial symmetry for the S parameter at the effective Lagrangian level - custodial technicolor - and argue that these models cannot emerge from walking-type dynamics. We suggest that it is possible to have a very light spin-one axial (vector) boson. However, in the walking dynamics the associated vector boson is heavy while it is degenerate with the axial in custodial technicolor.

  17. Constraining relativistic viscous hydrodynamical evolution

    SciTech Connect

    Martinez, Mauricio; Strickland, Michael

    2009-04-15

    We show that by requiring positivity of the longitudinal pressure it is possible to constrain the initial conditions one can use in second-order viscous hydrodynamical simulations of ultrarelativistic heavy-ion collisions. We demonstrate this explicitly for (0+1)-dimensional viscous hydrodynamics and discuss how the constraint extends to higher dimensions. Additionally, we present an analytic approximation to the solution of (0+1)-dimensional second-order viscous hydrodynamical evolution equations appropriate to describe the evolution of matter in an ultrarelativistic heavy-ion collision.

  18. Constraining the braking indices of magnetars

    NASA Astrophysics Data System (ADS)

    Gao, Z. F.; Li, X.-D.; Wang, N.; Yuan, J. P.; Wang, P.; Peng, Q. H.; Du, Y. J.

    2016-02-01

    Because of the lack of long-term pulsed emission in quiescence and the strong timing noise, it is impossible to directly measure the braking index n of a magnetar. Based on the estimated ages of their potentially associated supernova remnants (SNRs), we estimate the values of the mean braking indices of eight magnetars with SNRs, and find that they cluster in the range of 1-42. Five magnetars have smaller mean braking indices of 1 < n < 3, and we interpret them within a combination of magneto-dipole radiation and wind-aided braking. The larger mean braking indices of n > 3 for the other three magnetars are attributed to the decay of external braking torque, which might be caused by magnetic field decay. We estimate the possible wind luminosities for the magnetars with 1 < n < 3, and the dipolar magnetic field decay rates for the magnetars with n > 3, within the updated magneto-thermal evolution models. Although the constrained range of the magnetars' braking indices is tentative, as a result of the uncertainties in the SNR ages due to distance uncertainties and the unknown conditions of the expanding shells, our method provides an effective way to constrain the magnetars' braking indices if the measurements of the SNR ages are reliable, which can be improved by future observations.
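
    For readers outside pulsar timing, the braking index constrained here is the standard quantity (background definition, not something stated in the abstract): for spin frequency ν,

      n \equiv \frac{\nu\,\ddot{\nu}}{\dot{\nu}^{2}} ,

    so pure magneto-dipole spin-down gives n = 3, values 1 < n < 3 point to an additional braking agent such as a particle wind, and n > 3 is consistent with a decaying external torque, which is how the abstract's two groups of magnetars are interpreted.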

  19. Using In Situ Observations and Satellite Retrievals to Constrain Large-Eddy Simulations and Single-Column Simulations: Implications for Boundary-Layer Cloud Parameterization in the NASA GISS GCM

    NASA Astrophysics Data System (ADS)

    Remillard, J.

    2015-12-01

    Two low-cloud periods from the CAP-MBL deployment of the ARM Mobile Facility at the Azores are selected through a cluster analysis of ISCCP cloud property matrices, so as to represent two low-cloud weather states that the GISS GCM severely underpredicts not only in that region but also globally. The two cases represent (1) shallow cumulus clouds occurring in a cold-air outbreak behind a cold front, and (2) stratocumulus clouds occurring when the region was dominated by a high-pressure system. Observations and MERRA reanalysis are used to derive specifications used for large-eddy simulations (LES) and single-column model (SCM) simulations. The LES captures the major differences in horizontal structure between the two low-cloud fields, but there are unconstrained uncertainties in cloud microphysics and challenges in reproducing W-band Doppler radar moments. The SCM run on the vertical grid used for CMIP-5 runs of the GCM does a poor job of representing the shallow cumulus case and is unable to maintain an overcast deck in the stratocumulus case, providing some clues regarding problems with low-cloud representation in the GCM. SCM sensitivity tests with a finer vertical grid in the boundary layer show substantial improvement in the representation of cloud amount for both cases. GCM simulations with CMIP-5 versus finer vertical gridding in the boundary layer are compared with observations. The adoption of a two-moment cloud microphysics scheme in the GCM is also tested in this framework. The methodology followed in this study, with the process-based examination of different time and space scales in both models and observations, represents a prototype for GCM cloud parameterization improvements.

  20. Porosity and water ice content of the sub-surface material in the Imhotep region of 67P/Churyumov-Gerasimenko constrained with the Microwave Instrument on the Rosetta Orbiter (MIRO) observations

    NASA Astrophysics Data System (ADS)

    von Allmen, Paul

    2016-04-01

    In late October 2014, the Rosetta spacecraft orbited around 67P/Churyumov-Gerasimenko at a distance less than 10 km, the closest orbit in the mission so far. During this close approach, the Microwave Instrument on the Rosetta Orbiter (MIRO) observed an 800-meter long swath in the Imhotep region on October 27, 2014. Continuum and spectroscopic data were obtained. These data provided the highest spatial resolution obtained to date with the MIRO instrument. The footprint diameter of MIRO on the surface of the nucleus was about 20 meters in the sub-millimeter band at λ=0.5 mm, and 60 meters in the millimeter band at λ=1.6 mm. The swath transitions from a relatively flat area of the Imhotep region to a topographically more diverse area, still making the data relatively easy to analyze. We used a thermal model of the nucleus, including water ice sublimation to analyze the continuum data. The sub-surface material of the nucleus is described in terms of its porosity, grain size and water ice content, in addition to assumptions for the dust bulk density and grain packing geometry. We used the optimal estimation algorithm to fit the material parameters for the best agreement between the observations and the simulation results. We will present the material parameters determined from our analysis.

  1. Disappearance and Creation of Constrained Amorphous Phase

    NASA Astrophysics Data System (ADS)

    Cebe, Peggy; Lu, Sharon X.

    1997-03-01

    We report observation of the disappearance and recreation of rigid, or constrained, amorphous phase by sequential thermal annealing. Temperature-modulated differential scanning calorimetry (MDSC) is used to study the glass transition and lower melting endotherm after annealing. Cold crystallization of poly(phenylene sulfide), PPS, at a temperature just above Tg creates an initial large fraction of rigid amorphous phase (RAP). Brief, rapid annealing to a higher temperature causes RAP to almost completely disappear. Subsequent reannealing at the original lower temperature restores RAP to its original value. At the same time that RAP is being removed, Tg decreases; when RAP is restored, Tg also returns to its initial value. The crystal fraction remains unaffected by the annealing sequence.

  2. Constrained fits with non-Gaussian distributions

    NASA Astrophysics Data System (ADS)

    Frühwirth, R.; Cencic, O.

    2016-10-01

    Non-normally distributed data are ubiquitous in many areas of science, including high-energy physics. We present a general formalism for constrained fits, also called data reconciliation, with data that are not normally distributed. It is based on Bayesian reasoning and implemented via MCMC sampling. We show how systems of both linear and non-linear constraints can be efficiently treated. We also show how the fit can be made robust against outlying observations. The method is demonstrated on a couple of examples ranging from material flow analysis to the combination of non-normal measurements. Finally, we discuss possible applications in the field of event reconstruction, such as vertex fitting and kinematic fitting with non-normal track errors.
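
    A minimal sketch of the data-reconciliation idea, under invented assumptions rather than the paper's formalism: three quantities measured with heavy-tailed (Student-t) errors must satisfy the linear constraint a + b = c, and a short random-walk Metropolis chain samples the constrained posterior over (a, b).

      # Toy Bayesian reconciliation with a hard linear constraint and t-distributed errors.
      import numpy as np
      from scipy.stats import t as student_t

      meas = {'a': 4.2, 'b': 6.9, 'c': 10.0}       # hypothetical measurements
      scale, df = 0.5, 3                            # hypothetical error scale and t dof

      def log_post(a, b):
          c = a + b                                 # constraint enforced exactly
          return (student_t.logpdf(meas['a'], df, loc=a, scale=scale)
                  + student_t.logpdf(meas['b'], df, loc=b, scale=scale)
                  + student_t.logpdf(meas['c'], df, loc=c, scale=scale))

      rng = np.random.default_rng(2)
      a, b = meas['a'], meas['b']
      chain = []
      for _ in range(20000):
          a_new, b_new = a + 0.2 * rng.normal(), b + 0.2 * rng.normal()
          if np.log(rng.random()) < log_post(a_new, b_new) - log_post(a, b):
              a, b = a_new, b_new
          chain.append((a, b))
      chain = np.array(chain[5000:])                # discard burn-in
      print("reconciled a, b:", chain.mean(axis=0), "implied c:", chain.sum(axis=1).mean())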

  3. A Novel Approach to Constraining Uncertain Stellar Evolution Models

    NASA Astrophysics Data System (ADS)

    Rosenfield, Philip; Girardi, Leo; Dalcanton, Julianne; Johnson, L. C.; Williams, Benjamin F.; Weisz, Daniel R.; Bressan, Alessandro; Fouesneau, Morgan

    2017-01-01

    Stellar evolution models are fundamental to nearly all studies in astrophysics. They are used to interpret spectral energy distributions of distant galaxies, to derive the star formation histories of nearby galaxies, and to understand fundamental parameters of exoplanets. Despite the success in using stellar evolution models, some important aspects of stellar evolution remain poorly constrained and their uncertainties rarely addressed. We present results using archival Hubble Space Telescope observations of 10 stellar clusters in the Magellanic Clouds to simultaneously constrain the values and uncertainties of the strength of core convective overshooting, metallicity, interstellar extinction, cluster distance, binary fraction, and age.

  4. Formal language constrained path problems

    SciTech Connect

    Barrett, C.; Jacob, R.; Marathe, M.

    1997-07-08

    In many path finding problems arising in practice, certain patterns of edge/vertex labels in the labeled graph being traversed are allowed/preferred, while others are disallowed. Motivated by such applications as intermodal transportation planning, the authors investigate the complexity of finding feasible paths in a labeled network, where the mode choice for each traveler is specified by a formal language. The main contributions of this paper include the following: (1) the authors show that the problem of finding a shortest path between a source and destination for a traveler whose mode choice is specified as a context free language is solvable efficiently in polynomial time, when the mode choice is specified as a regular language they provide algorithms with improved space and time bounds; (2) in contrast, they show that the problem of finding simple paths between a source and a given destination is NP-hard, even when restricted to very simple regular expressions and/or very simple graphs; (3) for the class of treewidth bounded graphs, they show that (i) the problem of finding a regular language constrained simple path between source and a destination is solvable in polynomial time and (ii) the extension to finding context free language constrained simple paths is NP-complete. Several extensions of these results are presented in the context of finding shortest paths with additional constraints. These results significantly extend the results in [MW95]. As a corollary of the results, they obtain a polynomial time algorithm for the BEST k-SIMILAR PATH problem studied in [SJB97]. The previous best algorithm was given by [SJB97] and takes exponential time in the worst case.
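
    The regular-language case described in contribution (1) is usually handled by searching the product of the labeled graph and an automaton for the language. The sketch below is a toy illustration of that construction, not the paper's algorithms or bounds; the graph, the edge labels, and the DFA (which accepts label sequences with at most one bus leg) are made up.

      # Regular-language-constrained shortest path via Dijkstra on a graph x DFA product.
      import heapq

      graph = {                                      # node -> list of (neighbor, label, weight)
          'home':    [('stop', 'walk', 5), ('office', 'walk', 60)],
          'stop':    [('station', 'bus', 10)],
          'station': [('office', 'walk', 8)],
          'office':  [],
      }
      dfa = {('q0', 'walk'): 'q0', ('q0', 'bus'): 'q1', ('q1', 'walk'): 'q1'}
      start_state, accepting = 'q0', {'q0', 'q1'}

      def constrained_shortest_path(src, dst):
          pq, best = [(0, src, start_state)], {}
          while pq:
              dist, node, state = heapq.heappop(pq)
              if (node, state) in best:
                  continue
              best[(node, state)] = dist
              if node == dst and state in accepting:
                  return dist
              for nbr, label, w in graph[node]:
                  nxt = dfa.get((state, label))      # drop edges the language disallows
                  if nxt is not None and (nbr, nxt) not in best:
                      heapq.heappush(pq, (dist + w, nbr, nxt))
          return None

      print(constrained_shortest_path('home', 'office'))   # 23: walk, bus, walk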

  5. Constraining asymmetric dark matter through observations of compact stars

    SciTech Connect

    Kouvaris, Chris; Tinyakov, Peter

    2011-04-15

    We put constraints on asymmetric dark matter candidates with spin-dependent interactions based on the simple existence of white dwarfs and neutron stars in globular clusters. For a wide range of the parameters (WIMP mass and WIMP-nucleon cross section), weakly interacting massive particles (WIMPs) can be trapped in progenitors in large numbers and once the original star collapses to a white dwarf or a neutron star, these WIMPs might self-gravitate and eventually collapse forming a mini-black hole that eventually destroys the star. We impose constraints competitive to direct dark matter search experiments, for WIMPs with masses down to the TeV scale.

  6. Slow Solar Wind: Observable Characteristics for Constraining Modelling

    NASA Astrophysics Data System (ADS)

    Ofman, L.; Abbo, L.; Antiochos, S. K.; Hansteen, V. H.; Harra, L.; Ko, Y. K.; Lapenta, G.; Li, B.; Riley, P.; Strachan, L.; von Steiger, R.; Wang, Y. M.

    2015-12-01

    The origin of the Slow Solar Wind (SSW) is an open issue in the post-SOHO era and forms a major objective for planned future missions such as Solar Orbiter and Solar Probe Plus. Results from spacecraft data, combined with theoretical modeling, have helped to investigate many aspects of the SSW. Fundamental physical properties of the coronal plasma have been derived from spectroscopic and imaging remote-sensing data and in-situ data, and these results have provided crucial insights for a deeper understanding of the origin and acceleration of the SSW. Advanced models of the SSW in coronal streamers and other structures have been developed using 3D MHD and multi-fluid equations. Nevertheless, there are still debated questions such as: What are the source regions of the SSW, and what are their contributions to the SSW? What is the role of the magnetic topology in the corona for the origin, acceleration and energy deposition of the SSW? What are the possible acceleration and heating mechanisms for the SSW? The aim of this study is to present the insights on SSW origin and formation that arose during the discussions of the International Space Science Institute (ISSI) Team entitled ''Slow solar wind sources and acceleration mechanisms in the corona'' held in Bern (Switzerland) in March 2014-2015. The attached figure will be presented to summarize the different hypotheses of SSW formation.

  7. CONSTRAINING INTRACLUSTER GAS MODELS WITH AMiBA13

    SciTech Connect

    Molnar, Sandor M.; Umetsu, Keiichi; Ho, Paul T. P.; Koch, Patrick M.; Victor Liao, Yu-Wei; Lin, Kai-Yang; Liu, Guo-Chin; Nishioka, Hiroaki; Birkinshaw, Mark; Bryan, Greg; Haiman, Zoltan; Shang, Cien; Hearn, Nathan

    2010-11-10

    Clusters of galaxies have been extensively used to determine cosmological parameters. A major difficulty in making the best use of Sunyaev-Zel'dovich (SZ) and X-ray observations of clusters for cosmology is that using X-ray observations it is difficult to measure the temperature distribution and therefore determine the density distribution in individual clusters of galaxies out to the virial radius. Observations with the new generation of SZ instruments are a promising alternative approach. We use clusters of galaxies drawn from high-resolution adaptive mesh refinement cosmological simulations to study how well we should be able to constrain the large-scale distribution of the intracluster gas (ICG) in individual massive relaxed clusters using AMiBA in its configuration with 13 1.2 m diameter dishes (AMiBA13) along with X-ray observations. We show that non-isothermal β models provide a good description of the ICG in our simulated relaxed clusters. We use simulated X-ray observations to estimate the quality of constraints on the distribution of gas density, and simulated SZ visibilities (AMiBA13 observations) for constraints on the large-scale temperature distribution of the ICG. We find that AMiBA13 visibilities should constrain the scale radius of the temperature distribution to about 50% accuracy. We conclude that the upgraded AMiBA, AMiBA13, should be a powerful instrument to constrain the large-scale distribution of the ICG.
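
    For context, the isothermal β-model that these non-isothermal variants generalize describes the ICG electron density with the standard form (textbook background, not a result of the paper)

      n_e(r) = n_{e0}\left[1 + \left(\frac{r}{r_c}\right)^{2}\right]^{-3\beta/2},

    with the non-isothermal versions additionally letting the temperature decline with radius on the scale radius that the AMiBA13 visibilities are used to constrain.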

  8. Constrained Peptides as Miniature Protein Structures

    PubMed Central

    Yin, Hang

    2012-01-01

    This paper discusses the recent developments of protein engineering using both covalent and noncovalent bonds to constrain peptides, forcing them into designed protein secondary structures. These constrained peptides subsequently can be used as peptidomimetics for biological functions such as regulation of protein-protein interactions. PMID:25969758

  9. Observation of high-energy neutrinos with Cerenkov detectors embedded deep in Antarctic ice

    SciTech Connect

    2001-03-02

    Neutrinos are elementary particles that carry no electric charge and have little mass. As they interact only weakly with other particles, they can penetrate enormous amounts of matter, and therefore have the potential to directly convey astrophysical information from the edge of the Universe and from deep inside the most cataclysmic high-energy regions. The neutrino's great penetrating power, however, also makes this particle difficult to detect. Underground detectors have observed low-energy neutrinos from the Sun and a nearby supernova, as well as neutrinos generated in the Earth's atmosphere. But the very low fluxes of high-energy neutrinos from cosmic sources can be observed only by much larger, expandable detectors in, for example, deep water or ice. Here we report the detection of upwardly propagating atmospheric neutrinos by the ice-based Antarctic muon and neutrino detector array (AMANDA). These results establish a technology with which to build a kilometre-scale neutrino observatory necessary for astrophysical observations.

  10. Observation of high-energy neutrinos with Cerenkov detectors embedded deep in Antarctic ice

    SciTech Connect

    2001-03-22

    Neutrinos are elementary particles that carry no electric charge and have little mass. As they interact only weakly with other particles, they can penetrate enormous amounts of matter, and therefore have the potential to directly convey astrophysical information from the edge of the Universe and from deep inside the most cataclysmic high-energy regions. The neutrino's great penetrating power, however, also makes this particle difficult to detect. Underground detectors have observed low-energy neutrinos from the Sun and a nearby supernova, as well as neutrinos generated in the Earth's atmosphere. But the very low fluxes of high-energy neutrinos from cosmic sources can be observed only by much larger, expandable detectors in, for example, deep water or ice. Here we report the detection of upwardly propagating atmospheric neutrinos by the ice-based Antarctic muon and neutrino detector array (AMANDA). These results establish a technology with which to build a kilometre-scale neutrino observatory necessary for astrophysical observations.

  11. CONSTRAINING SOLAR FLARE DIFFERENTIAL EMISSION MEASURES WITH EVE AND RHESSI

    SciTech Connect

    Caspi, Amir; McTiernan, James M.; Warren, Harry P.

    2014-06-20

    Deriving a well-constrained differential emission measure (DEM) distribution for solar flares has historically been difficult, primarily because no single instrument is sensitive to the full range of coronal temperatures observed in flares, from ≲2 to ≳50 MK. We present a new technique, combining extreme ultraviolet (EUV) spectra from the EUV Variability Experiment (EVE) onboard the Solar Dynamics Observatory with X-ray spectra from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI), to derive, for the first time, a self-consistent, well-constrained DEM for jointly observed solar flares. EVE is sensitive to ∼2-25 MK thermal plasma emission, and RHESSI to ≳10 MK; together, the two instruments cover the full range of flare coronal plasma temperatures. We have validated the new technique on artificial test data, and apply it to two X-class flares from solar cycle 24 to determine the flare DEM and its temporal evolution; the constraints on the thermal emission derived from the EVE data also constrain the low energy cutoff of the non-thermal electrons, a crucial parameter for flare energetics. The DEM analysis can also be used to predict the soft X-ray flux in the poorly observed ∼0.4-5 nm range, with important applications for geospace science.
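
    As background (a standard definition, not a detail of this paper), the differential emission measure relates the thermal plasma to the observed fluxes schematically via

      \mathrm{DEM}(T) = n_e^{2}\,\frac{dV}{dT},
      \qquad
      F_i \propto \int \mathrm{DEM}(T)\,G_i(T)\,dT ,

    so each instrument's flux F_i weights the plasma through its temperature response G_i(T); combining EVE and RHESSI widens the temperature range over which DEM(T) is usefully constrained.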

  12. Constraining Solar Flare Differential Emission Measures with EVE and RHESSI

    NASA Astrophysics Data System (ADS)

    Caspi, Amir; McTiernan, James M.; Warren, Harry P.

    2014-06-01

    Deriving a well-constrained differential emission measure (DEM) distribution for solar flares has historically been difficult, primarily because no single instrument is sensitive to the full range of coronal temperatures observed in flares, from ≲2 to ≳50 MK. We present a new technique, combining extreme ultraviolet (EUV) spectra from the EUV Variability Experiment (EVE) onboard the Solar Dynamics Observatory with X-ray spectra from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI), to derive, for the first time, a self-consistent, well-constrained DEM for jointly observed solar flares. EVE is sensitive to ~2-25 MK thermal plasma emission, and RHESSI to ≳10 MK; together, the two instruments cover the full range of flare coronal plasma temperatures. We have validated the new technique on artificial test data, and apply it to two X-class flares from solar cycle 24 to determine the flare DEM and its temporal evolution; the constraints on the thermal emission derived from the EVE data also constrain the low energy cutoff of the non-thermal electrons, a crucial parameter for flare energetics. The DEM analysis can also be used to predict the soft X-ray flux in the poorly observed ~0.4-5 nm range, with important applications for geospace science.

  13. Constrained density functional for noncollinear magnetism

    NASA Astrophysics Data System (ADS)

    Ma, Pui-Wai; Dudarev, S. L.

    2015-02-01

    Energies of arbitrary small- and large-angle noncollinear excited magnetic configurations are computed using a highly accurate constrained density functional theory approach. Numerical convergence and accuracy are controlled by the choice of Lagrange multipliers λ_I entering the constraining conditions. The penalty part E_p of the constrained energy functional at its minimum is shown to be inversely proportional to λ_I, enabling a simple, robust, and accurate iterative procedure to be followed to find a convergent solution. The method is implemented as a part of the ab initio VASP package, and applied to the investigation of noncollinear B2-like and <001> double-layer antiferromagnetic configurations of bcc iron, Fe2 dimer, and amorphous iron. Forces acting on atoms depend on the orientations of magnetic moments, and the proposed approach enables constrained self-consistent noncollinear magnetic and structural relaxation of large atomic systems to be carried out.

  14. Active constrained clustering by examining spectral Eigenvectors

    NASA Technical Reports Server (NTRS)

    Wagstaff, Kiri L.; desJardins, Marie; Xu, Qianjun

    2005-01-01

    This work focuses on the active selection of pairwise constraints for spectral clustering. We develop and analyze a technique for Active Constrained Clustering by Examining Spectral eigenvectorS (ACCESS) derived from a similarity matrix.

  15. On the Constrained Attitude Control Problem

    NASA Technical Reports Server (NTRS)

    Hadaegh, Fred Y.; Kim, Yoonsoo; Mesbahi, Mehran; Singh, Gurkipal

    2004-01-01

    In this paper, we consider various classes of constrained attitude control (CAC) problem in single and multiple spacecraft settings. After categorizing attitude constraints into four distinct types, we provide an overview of the existing approaches to this problem. We then proceed to further expand on a recent algorithmic approach to the CAC problem. The paper concludes with an example demonstrating the viability of the proposed algorithm for a multiple spacecraft constrained attitude reconfiguration scenario.

  16. Constraining the Evolution of Poor Clusters

    NASA Astrophysics Data System (ADS)

    Broming, Emma J.; Fuse, C. R.

    2012-01-01

    There currently exists no method by which to quantify the evolutionary state of poor clusters (PCs). Research by Broming & Fuse (2010) demonstrated that the evolution of Hickson compact groups (HCGs) is constrained by the correlation between the X-ray luminosities of point sources and diffuse gas. The current investigation adopts an analogous approach to understanding PCs. Plionis et al. (2009) proposed a theory to define the evolution of poor clusters. The theory asserts that cannibalism of galaxies causes a cluster to become more spherical, develop increased velocity dispersion and increased X-ray temperature and gas luminosity. Data used to quantify the evolution of the poor clusters were compiled across multiple wavelengths. The sample includes 162 objects from the WBL catalogue (White et al. 1999), 30 poor clusters in the Chandra X-ray Observatory archive, and 15 Abell poor clusters observed with BAX (Sadat et al. 2004). Preliminary results indicate that the cluster velocity dispersion and X-ray gas and point source luminosities can be used to highlight a weak correlation. An evolutionary trend was observed for multiple correlations detailed herein. The current study is a continuation of the work by Broming & Fuse examining point sources and their properties to determine the evolutionary stage of compact groups, poor clusters, and their proposed remnants, isolated ellipticals and fossil groups. Preliminary data suggest that compact groups and their high-mass counterpart, poor clusters, evolve along tracks identified in the X-ray gas - X-ray point source relation. While compact groups likely evolve into isolated elliptical galaxies, fossil groups display properties that suggest they are the remains of fully coalesced poor clusters.

  17. The study of microstructure and mechanical properties of twin-roll cast AZ31 magnesium alloy after constrained groove pressing

    NASA Astrophysics Data System (ADS)

    Zimina, M.; Bohlen, J.; Letzig, D.; Kurz, G.; Cieslar, M.; Zník, J.

    2014-08-01

    Microstructure investigation and microhardness mapping were done on the material with ultra-fine grained structure prepared by constrained groove pressing of twin-roll cast AZ31 magnesium strips. The microstructure observations showed a significant drop of the grain size from 200 μm to 20 μm after constrained groove pressing. Moreover, the heterogeneities in the microhardness along the cross-section observed in the as-cast strip were replaced by the bands of different microhardness in the constrained groove pressed material. It is shown that the constrained groove pressing technique is a good tool for the grain refinement of magnesium alloys.

  18. Shared developmental programme strongly constrains beak shape diversity in songbirds.

    PubMed

    Fritz, Joerg A; Brancale, Joseph; Tokita, Masayoshi; Burns, Kevin J; Hawkins, M Brent; Abzhanov, Arhat; Brenner, Michael P

    2014-04-16

    The striking diversity of bird beak shapes is an outcome of natural selection, yet the relative importance of the limitations imposed by the process of beak development on generating such variation is unclear. Untangling these factors requires mapping developmental mechanisms over a phylogeny far exceeding model systems studied thus far. We address this issue with a comparative morphometric analysis of beak shape in a diverse group of songbirds. Here we show that the dynamics of the proliferative growth zone must follow restrictive rules to explain the observed variation, with beak diversity constrained to a three parameter family of shapes, parameterized by length, depth and the degree of shear. We experimentally verify these predictions by analysing cell proliferation in the developing embryonic beaks of the zebra finch. Our findings indicate that beak shape variability in many songbirds is strongly constrained by shared properties of the developmental programme controlling the growth zone.

  19. Constraining the nuclear equation of state at subsaturation densities.

    PubMed

    Khan, E; Margueron, J; Vidaña, I

    2012-08-31

    Only one-third of the nucleons in 208Pb occupy the saturation density area. Consequently, nuclear observables related to the average properties of nuclei, such as masses or radii, constrain the equation of state not at the saturation density but rather around the so-called crossing density, localized close to the mean value of the density of nuclei: ρ ≈ 0.11 fm^-3. This provides an explanation for the empirical fact that several equation of state quantities calculated with various functionals cross at a density significantly lower than the saturation one. The third derivative M of the energy per unit of volume at the crossing density is constrained by the giant monopole resonance measurements in an isotopic chain rather than the incompressibility at saturation density. The giant monopole resonance measurements provide M=1100±70 MeV (6% uncertainty), whose extrapolation gives K(∞)=230±40 MeV (17% uncertainty).

  20. Lilith: a tool for constraining new physics from Higgs measurements

    NASA Astrophysics Data System (ADS)

    Bernon, Jérémy; Dumont, Béranger

    2015-09-01

    The properties of the observed Higgs boson with mass around 125 GeV can be affected in a variety of ways by new physics beyond the Standard Model (SM). The wealth of experimental results, targeting the different combinations for the production and decay of a Higgs boson, makes it a non-trivial task to assess the compatibility of a non-SM-like Higgs boson with all available results. In this paper we present Lilith, a new public tool for constraining new physics from signal strength measurements performed at the LHC and the Tevatron. Lilith is a Python library that can also be used in C and C++/ROOT programs. The Higgs likelihood is based on experimental results stored in an easily extensible XML database, and is evaluated from the user input, given in XML format in terms of reduced couplings or signal strengths. The results of Lilith can be used to constrain a wide class of new physics scenarios.

  1. Inadequacy of single-impulse transfers for path constrained rendezvous

    NASA Technical Reports Server (NTRS)

    Stern, S. A.; Soileau, K. M.

    1987-01-01

    The use of single-impulse techniques to maneuver from point to point about a large space structure (LSS) with an arbitrary geometrical configuration and spin is examined. Particular consideration is given to transfers with both endpoints on the forbidden zone surface. Clohessy-Wiltshire equations of relative motion are employed to solve path constrained rendezvous problems. External and internal transfers between arbitrary points are analyzed in terms of tangential departure and arrival conditions. It is observed that single-impulse techniques are inadequate for transferring about the exterior of any LSS; however, single-impulse transfers are applicable for transfers in the interior of LSSs. It is concluded that single-impulse techniques are not applicable for path constrained rendezvous guidance.

  2. Gyrification from constrained cortical expansion

    NASA Astrophysics Data System (ADS)

    Tallinen, Tuomas

    The convolutions of the human brain are a symbol of its functional complexity. But how does the outer surface of the brain, the layered cortex of neuronal gray matter, get its folds? In this talk, we ask to what extent folding of the brain can be explained as a purely mechanical consequence of unpatterned growth of the cortical layer relative to the sublayers. Modeling the growing brain as a soft layered solid leads to elastic instabilities and the formation of cusped sulci and smooth gyri consistent with observations across species in both normal and pathological situations. Furthermore, we apply initial geometries obtained from fetal brain MRI to address the question of how the brain geometry and folding patterns may be coupled via mechanics.

  3. Constrained Least Absolute Deviation Neural Networks

    PubMed Central

    Wang, Zhishun; Peterson, Bradley S.

    2008-01-01

    It is well known that least absolute deviation (LAD) criterion or L1-norm used for estimation of parameters is characterized by robustness, i.e., the estimated parameters are totally resistant (insensitive) to large changes in the sampled data. This is an extremely useful feature, especially, when the sampled data are known to be contaminated by occasionally occurring outliers or by spiky noise. In our previous works, we have proposed the least absolute deviation neural network (LADNN) to solve unconstrained LAD problems. The theoretical proofs and numerical simulations have shown that the LADNN is Lyapunov-stable and it can globally converge to the exact solution to a given unconstrained LAD problem. We have also demonstrated its excellent application value in time-delay estimation. More generally, a practical LAD application problem may contain some linear constraints, such as a set of equalities and/or inequalities, which is called constrained LAD problem, whereas the unconstrained LAD can be considered as a special form of the constrained LAD. In this paper, we present a new neural network called constrained least absolute deviation neural network (CLADNN) to solve general constrained LAD problems. Theoretical proofs and numerical simulations demonstrate that the proposed CLADNN is Lyapunov stable and globally converges to the exact solution to a given constrained LAD problem, independent of initial values. The numerical simulations have also illustrated that the proposed CLADNN can be used to robustly estimate parameters for nonlinear curve fitting, which is extensively used in signal and image processing. PMID:18269958
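
    In the notation commonly used for such problems (a generic statement of the optimization target, not the network architecture of the paper), the constrained LAD problem addressed by the proposed network has the form

      \min_{x}\ \lVert A x - b \rVert_{1}
      \quad \text{subject to} \quad
      C x = d, \qquad E x \le f ,

    with the unconstrained LAD case recovered when the equality and inequality constraint sets are empty.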

  4. Motor Demands Constrain Cognitive Rule Structures

    PubMed Central

    Collins, Anne Gabrielle Eva; Frank, Michael Joshua

    2016-01-01

    Study of human executive function focuses on our ability to represent cognitive rules independently of stimulus or response modality. However, recent findings suggest that executive functions cannot be modularized separately from perceptual and motor systems, and that they instead scaffold on top of motor action selection. Here we investigate whether patterns of motor demands influence how participants choose to implement abstract rule structures. In a learning task that requires integrating two stimulus dimensions for determining appropriate responses, subjects typically structure the problem hierarchically, using one dimension to cue the task-set and the other to cue the response given the task-set. However, the choice of which dimension to use at each level can be arbitrary. We hypothesized that the specific structure subjects adopt would be constrained by the motor patterns afforded within each rule. Across four independent data-sets, we show that subjects create rule structures that afford motor clustering, preferring structures in which adjacent motor actions are valid within each task-set. In a fifth data-set using instructed rules, this bias was strong enough to counteract the well-known task switch-cost when instructions were incongruent with motor clustering. Computational simulations confirm that observed biases can be explained by leveraging overlap in cortical motor representations to improve outcome prediction and hence infer the structure to be learned. These results highlight the importance of sensorimotor constraints in abstract rule formation and shed light on why humans have strong biases to invent structure even when it does not exist. PMID:26966909

  5. Mars, Moon, Mercury: Magnetometry Constrains Planetary Evolution

    NASA Astrophysics Data System (ADS)

    Connerney, John E. P.

    2015-04-01

    We have long appreciated that magnetic measurements obtained about a magnetized planet are of great value in probing the deep interior. The existence of a substantial planetary magnetic field implies dynamo action, requiring an electrically conducting fluid core in convective motion and a source of energy to maintain it. Application of the well-known Lowes spectrum may in some cases identify the dynamo outer radius; where secular variation can be measured, the outer radius can be estimated using the frozen-flux approximation. Magnetic induction may be used to probe the electrical conductivity of the mantle and crust. These are useful constraints from which, together with gravity and/or other observables, we may infer the state of the interior and gain insight into planetary evolution. But only recently has it become clear that space magnetometry can do much more, particularly about a planet that once sustained a dynamo that has since disappeared. Mars is the best example of this class: the Mars Global Surveyor spacecraft globally mapped a remanent crustal field left behind after the demise of the dynamo. This map is a magnetic record of the planet's evolution. I will argue that this map may be interpreted to constrain the era of dynamo activity within Mars; to establish the reversal history of the Mars dynamo; to infer the magnetization intensity of Mars crustal rock and the depth of the magnetized crustal layer; and to establish that plate tectonics is not unique to planet Earth, as has so often been claimed. The lunar magnetic record is, in contrast, one of weakly magnetized and scattered sources, not yet easily interpreted in terms of the interior. Magnetometry about Mercury is more difficult to interpret owing to the relatively weak field and proximity to the Sun, but MESSENGER (and ultimately BepiColombo) may yet map crustal anomalies (induced and/or remanent).

  6. Constraining Cosmic Evolution of Type Ia Supernovae

    SciTech Connect

    Foley, Ryan J.; Filippenko, Alexei V.; Aguilera, C.; Becker, A.C.; Blondin, S.; Challis, P.; Clocchiatti, A.; Covarrubias, R.; Davis, T.M.; Garnavich, P.M.; Jha, S.; Kirshner, R.P.; Krisciunas, K.; Leibundgut, B.; Li, W.; Matheson, T.; Miceli, A.; Miknaitis, G.; Pignata, G.; Rest, A.; Riess, A.G.; /UC, Berkeley, Astron. Dept. /Cerro-Tololo InterAmerican Obs. /Washington U., Seattle, Astron. Dept. /Harvard-Smithsonian Ctr. Astrophys. /Chile U., Catolica /Bohr Inst. /Notre Dame U. /KIPAC, Menlo Park /Texas A-M /European Southern Observ. /NOAO, Tucson /Fermilab /Chile U., Santiago /Harvard U., Phys. Dept. /Baltimore, Space Telescope Sci. /Johns Hopkins U. /Res. Sch. Astron. Astrophys., Weston Creek /Stockholm U. /Hawaii U. /Illinois U., Urbana, Astron. Dept.

    2008-02-13

    We present the first large-scale effort of creating composite spectra of high-redshift type Ia supernovae (SNe Ia) and comparing them to low-redshift counterparts. Through the ESSENCE project, we have obtained 107 spectra of 88 high-redshift SNe Ia with excellent light-curve information. In addition, we have obtained 397 spectra of low-redshift SNe through a multiple-decade effort at Lick and Keck Observatories, and we have used 45 ultraviolet spectra obtained by HST/IUE. The low-redshift spectra act as a control sample when comparing to the ESSENCE spectra. In all instances, the ESSENCE and Lick composite spectra appear very similar. The addition of galaxy light to the Lick composite spectra allows a nearly perfect match of the overall spectral-energy distribution with the ESSENCE composite spectra, indicating that the high-redshift SNe are more contaminated with host-galaxy light than their low-redshift counterparts. This is caused by observing objects at all redshifts with similar slit widths, which corresponds to different projected distances. After correcting for the galaxy-light contamination, subtle differences in the spectra remain. We have estimated the systematic errors when using current spectral templates for K-corrections to be approximately 0.02 mag. The variance in the composite spectra gives an estimate of the intrinsic variance in low-redshift maximum-light SN spectra of approximately 3% in the optical, growing toward the ultraviolet. The difference between the maximum-light low- and high-redshift spectra constrains SN evolution between our samples to be < 10% in the rest-frame optical.

  7. Pattern Search Algorithms for Bound Constrained Minimization

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael; Torczon, Virginia

    1996-01-01

    We present a convergence theory for pattern search methods for solving bound constrained nonlinear programs. The analysis relies on the abstract structure of pattern search methods and an understanding of how the pattern interacts with the bound constraints. This analysis makes it possible to develop pattern search methods for bound constrained problems while only slightly restricting the flexibility present in pattern search methods for unconstrained problems. We prove global convergence despite the fact that pattern search methods do not have explicit information concerning the gradient and its projection onto the feasible region and consequently are unable to enforce explicitly a notion of sufficient feasible decrease.
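
    As a rough illustration of the class of methods analyzed above (not the authors' algorithm, and with a made-up objective), a coordinate-search variant of pattern search for a bound-constrained problem can be sketched in a few lines: poll along the plus/minus coordinate directions, keep trial points feasible by clipping to the bounds, and contract the step when no poll point decreases the objective.

        import numpy as np

        def pattern_search(f, x0, lower, upper, step=1.0, tol=1e-6, max_iter=1000):
            x = np.clip(np.asarray(x0, dtype=float), lower, upper)   # start feasible
            fx = f(x)
            for _ in range(max_iter):
                improved = False
                for d in np.vstack([np.eye(x.size), -np.eye(x.size)]):
                    trial = np.clip(x + step * d, lower, upper)       # poll point, kept feasible
                    ft = f(trial)
                    if ft < fx:
                        x, fx, improved = trial, ft, True
                if not improved:
                    step *= 0.5                                       # contract the mesh
                    if step < tol:
                        break
            return x, fx

        # Example: the unconstrained minimum (2, -1) lies outside the box, so the
        # search should stop on the boundary near (1, -1).
        f = lambda x: (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2
        print(pattern_search(f, [0.0, 0.0],
                             lower=np.array([-1.0, -1.0]), upper=np.array([1.0, 1.0])))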

  8. Pattern Search Methods for Linearly Constrained Minimization

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael; Torczon, Virginia

    1998-01-01

    We extend pattern search methods to linearly constrained minimization. We develop a general class of feasible point pattern search algorithms and prove global convergence to a Karush-Kuhn-Tucker point. As in the case of unconstrained minimization, pattern search methods for linearly constrained problems accomplish this without explicit recourse to the gradient or the directional derivative. Key to the analysis of the algorithms is the way in which the local search patterns conform to the geometry of the boundary of the feasible region.

  9. Spacecraft inertia estimation via constrained least squares

    NASA Technical Reports Server (NTRS)

    Keim, Jason A.; Acikmese, Behcet A.; Shields, Joel F.

    2006-01-01

    This paper presents a new formulation for spacecraft inertia estimation from test data. Specifically, the inertia estimation problem is formulated as a constrained least squares minimization problem with explicit bounds on the inertia matrix incorporated as LMIs (linear matrix inequalities). The resulting minimization problem is a semidefinite optimization that can be solved efficiently with guaranteed convergence to the global optimum by readily available algorithms. This method is applied to data collected from a robotic testbed consisting of a freely rotating body. The results show that the constrained least squares approach produces more accurate estimates of the inertia matrix than standard unconstrained least squares estimation methods.
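
    A minimal sketch of the same idea, assuming synthetic data, a simplified torque model that ignores the gyroscopic term, and cvxpy as a stand-in semidefinite solver (this is not the authors' formulation): the symmetric inertia matrix is fit by least squares while an LMI keeps it positive semidefinite.

        import numpy as np
        import cvxpy as cp

        rng = np.random.default_rng(1)
        J_true = np.diag([2.0, 3.0, 4.0])                 # true inertia (made-up units)
        alphas = rng.normal(size=(100, 3))                # angular accelerations (synthetic)
        torques = alphas @ J_true + 0.05 * rng.normal(size=(100, 3))

        J = cp.Variable((3, 3), symmetric=True)
        residual = alphas @ J - torques                   # torque = J * alpha, J symmetric
        problem = cp.Problem(cp.Minimize(cp.sum_squares(residual)),
                             [J >> 0])                    # LMI: inertia matrix must be PSD
        problem.solve()
        print(np.round(J.value, 2))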

  10. Pattern recognition constrains mantle properties, past and present

    NASA Astrophysics Data System (ADS)

    Atkins, S.; Rozel, A. B.; Valentine, A. P.; Tackley, P.; Trampert, J.

    2015-12-01

    Understanding and modelling mantle convection requires knowledge of many mantle properties, such as viscosity, chemical structure and thermal properties such as the radiogenic heating rate. However, many of these parameters are only poorly constrained. We demonstrate a new method for inverting present-day Earth observations for mantle properties. We use neural networks to represent the posterior probability density functions of many different mantle properties given the present structure of the mantle. We construct these probability density functions by sampling a wide range of possible mantle properties and running forward simulations, using the convection code StagYY. Our approach is particularly powerful because of its flexibility. Our samples are selected in the prior space, rather than being targeted towards a particular observation, as would normally be the case for probabilistic inversion. This means that the same suite of simulations can be used for inversions using a wide range of geophysical observations without the need to resample. Our method is probabilistic and non-linear and is therefore compatible with non-linear convection, avoiding some of the limitations associated with other methods for inverting mantle flow. This allows us to consider the entire history of the mantle. We also need relatively few samples for our inversion, making our approach computationally tractable when considering long periods of mantle history. Using the present thermal and density structure of the mantle, we can constrain rheological and compositional parameters such as viscosity and yield stress. We can also use the present day mantle structure to make inferences about the initial conditions for convection 4.5 Gyr ago. We can constrain initial mantle conditions including the initial concentration of heat producing elements in the mantle and the initial thickness of primordial material at the CMB. Currently we use density and temperature structure for our inversions, but we can

  11. Constraining continuous rainfall simulations for derived design flood estimation

    NASA Astrophysics Data System (ADS)

    Woldemeskel, F. M.; Sharma, A.; Mehrotra, R.; Westra, S.

    2016-11-01

    Stochastic rainfall generation is important for a range of hydrologic and water resources applications. Stochastic rainfall can be generated using a number of models; however, preserving relevant attributes of the observed rainfall (including rainfall occurrence, variability and the magnitude of extremes) continues to be difficult. This paper develops an approach to constrain stochastically generated rainfall with the aim of preserving the intensity-duration-frequency (IFD) relationships of the observed data. Two main steps are involved. First, the generated annual maximum rainfall is corrected recursively by matching the generated intensity-frequency relationships to the target (observed) relationships. Second, the remaining (non-annual-maximum) rainfall is rescaled such that the mass balance of the generated rain before and after scaling is maintained. The recursive correction is performed at selected storm durations to minimise the dependence between annual maximum values of higher and lower durations for the same year. This ensures that the resulting sequences remain true to the observed rainfall as well as represent the design extremes that may have been developed separately and are needed for compliance reasons. The method is tested on simulated 6 min rainfall series across five Australian stations with different climatic characteristics. The results suggest that the annual maximum and the IFD relationships are well reproduced after constraining the simulated rainfall. While our presentation focusses on the representation of design rainfall attributes (IFDs), the proposed approach can also be easily extended to constrain other attributes of the generated rainfall, providing an effective platform for post-processing of stochastic rainfall generators.
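
    A heavily simplified, single-duration sketch of the two-step idea (correct the annual maximum, then rescale the rest of the year to preserve mass balance); the real procedure is recursive across several storm durations and matches full intensity-frequency curves, which this toy function does not attempt.

        import numpy as np

        def constrain_year(rain, target_annual_max):
            """Nudge one year of simulated rainfall toward a target annual maximum,
            then rescale the remaining time steps so the annual total is preserved."""
            rain = np.asarray(rain, dtype=float).copy()
            total_before = rain.sum()
            i_max = np.argmax(rain)
            rain[i_max] = target_annual_max               # step 1: correct the maximum
            others = np.arange(rain.size) != i_max
            deficit = total_before - rain.sum()           # step 2: mass balance
            rain[others] *= 1.0 + deficit / rain[others].sum()
            return rain

        year = np.random.default_rng(5).gamma(0.3, 2.0, size=365)   # synthetic daily rain
        adjusted = constrain_year(year, target_annual_max=year.max() * 1.2)
        print(year.sum(), adjusted.sum(), adjusted.max())           # total preserved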

  12. Towards better constrained models of the solar magnetic cycle

    NASA Astrophysics Data System (ADS)

    Munoz-Jaramillo, Andres

    2010-12-01

    The best tools we have for understanding the origin of solar magnetic variability are kinematic dynamo models. During the last decade, this type of model has seen a continuous evolution and has become increasingly successful at reproducing solar cycle characteristics. The basic ingredients of these models are: the solar differential rotation, which acts as the main source of energy for the system by shearing the magnetic field; the meridional circulation, which plays a crucial role in magnetic field transport; the turbulent diffusivity, which attempts to capture the effect of convective turbulence on the large-scale magnetic field; and the poloidal field source, which closes the cycle by regenerating the poloidal magnetic field. However, most of these ingredients remain poorly constrained, which allows one to obtain solar-like solutions by "tuning" the input parameters, leading to controversy regarding which parameter set is more appropriate. In this thesis we revisit each of these ingredients in an attempt to constrain them better by using observational data and theoretical considerations, reducing the number of free parameters in the model. For the meridional flow and differential rotation we use helioseismic data to constrain free parameters and find that the differential rotation is well determined, but the available data can only constrain the latitudinal dependence of the meridional flow. For the turbulent magnetic diffusivity we show that combining mixing-length theory estimates with magnetic quenching allows us to obtain viable magnetic cycles, and that the commonly used diffusivity profiles can be understood as a spatiotemporal average of this process. For the poloidal source we introduce a more realistic way of modeling active region emergence and decay and find that this resolves existing discrepancies between kinematic dynamo models and surface flux transport simulations. We also study the physical mechanisms behind the unusually long minimum of

  13. A Model for Optimal Constrained Adaptive Testing.

    ERIC Educational Resources Information Center

    van der Linden, Wim J.; Reese, Lynda M.

    1998-01-01

    Proposes a model for constrained computerized adaptive testing in which the information in the test at the trait level (theta) estimate is maximized subject to the number of possible constraints on the content of the test. Test assembly relies on a linear-programming approach. Illustrates the approach through simulation with items from the Law…
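
    The item-selection idea summarized in this record (maximize test information subject to content constraints via linear programming) can be illustrated with a small 0/1 program; the item parameters, the content attribute and the constraint bounds below are entirely made up, and scipy's milp (SciPy >= 1.9) stands in for a dedicated test-assembly solver.

        import numpy as np
        from scipy.optimize import milp, LinearConstraint, Bounds

        rng = np.random.default_rng(4)
        a = rng.uniform(0.8, 2.0, size=100)               # 2PL discriminations (made up)
        b = rng.normal(size=100)                          # 2PL difficulties (made up)
        theta = 0.5                                       # provisional trait estimate
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
        info = a ** 2 * p * (1.0 - p)                     # Fisher information at theta

        is_math = (rng.random(100) < 0.4).astype(float)   # hypothetical content attribute
        constraints = [
            LinearConstraint(np.ones(100), 20, 20),       # test length: exactly 20 items
            LinearConstraint(is_math, 8, 12),             # content balance: 8-12 "math" items
        ]
        res = milp(c=-info, constraints=constraints,      # maximize info = minimize -info
                   bounds=Bounds(0, 1), integrality=np.ones(100))
        selected = np.flatnonzero(res.x > 0.5)
        print(len(selected), int(is_math[selected].sum()))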

  14. Rhythmic Grouping Biases Constrain Infant Statistical Learning

    ERIC Educational Resources Information Center

    Hay, Jessica F.; Saffran, Jenny R.

    2012-01-01

    Linguistic stress and sequential statistical cues to word boundaries interact during speech segmentation in infancy. However, little is known about how the different acoustic components of stress constrain statistical learning. The current studies were designed to investigate whether intensity and duration each function independently as cues to…

  15. Constrained Subjective Assessment of Student Learning

    ERIC Educational Resources Information Center

    Saliu, Sokol

    2005-01-01

    Student learning is a complex incremental cognitive process; assessment needs to parallel this, reporting the results in similar terms. Application of fuzzy sets and logic to the criterion-referenced assessment of student learning is considered here. The constrained qualitative assessment (CQA) system was designed, and then applied in assessing a…

  16. Constraining axionlike particles using the distance-duality relation

    NASA Astrophysics Data System (ADS)

    Tiwari, Prabhakar

    2017-01-01

    One of the fundamental results used in observational cosmology is the distance-duality relation (DDR), which relates the luminosity distance, D_L, to the angular diameter distance, D_A, at a given redshift z. We employ the observed limits on this relation to constrain the coupling of axionlike particles (ALPs) with photons. With our detailed 3D ALP-photon mixing simulation in a standard ΛCDM universe and the latest DDR limits observed by Holanda and Barros [Phys. Rev. D 94, 023524 (2016)], we limit the coupling constant g_φ ≤ 6 × 10^-13 GeV^-1 (nG/⟨B⟩_Mpc) for ALPs of mass ≤ 10^-15 eV. DDR observations can provide very stringent constraints on ALP mixing in the future. Also, any deviation in the DDR can be conventionally explained as photons decaying to axions or vice versa.
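
    For reference, the distance-duality relation invoked above is the Etherington reciprocity relation; photon-ALP mixing removes photons from the beam and so biases the luminosity distance inferred from standard candles while leaving the angular diameter distance unchanged, which is why observed DDR limits translate into limits on the mixing:

        \eta(z) \equiv \frac{D_L(z)}{(1+z)^2\, D_A(z)} = 1
        \quad \text{(photon-number-conserving cosmology)}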

  17. Diversity of Debris Disks - Constraining the Disk Outer Radii

    NASA Astrophysics Data System (ADS)

    Rieke, George; Smith, Paul; Su, Kate

    2008-03-01

    Existing Spitzer observations of debris disks show a wide range of diversity in disk morphologies and spectral energy distributions (SEDs). The majority of debris disks observed with Spitzer are not resolved, resulting in very few direct constraints on disk extent. In general, SEDs alone have little diagnostic power beyond some basic statistics. However, as demonstrated by some Spitzer observations of nearby systems (beta Leo and gamma Oph), the spectra of the excess emission in the IRS and MIPS-SED wavelength range can help to put tighter constraints on disk properties such as minimum/maximum grain sizes and inner/outer disk radii. The dust continuum slopes are very useful to differentiate between various disk structures and constrain the dust mass. We need to study sufficient numbers of disks to explore their characteristics systematically. Therefore, we propose to obtain MIPS-SED observations of 27 debris disks that already have IRS-LL spectra and MIPS 24 and 70 micron photometry.

  18. Mantle Convection Models Constrained by Seismic Tomography

    NASA Astrophysics Data System (ADS)

    Durbin, C. J.; Shahnas, M.; Peltier, W. R.; Woodhouse, J. H.

    2011-12-01

    Perovskite-post-Perovskite transition (Murakami et al., 2004, Science) that appears to define the D" layer at the base of the mantle. In this initial phase of what will be a longer term project we are assuming that the internal mantle viscosity structure is spherically symmetric and compatible with the recent inferences of Peltier and Drummond (2010, Geophys. Res. Lett.) based upon glacial isostatic adjustment and Earth rotation constraints. The internal density structure inferred from the tomography model is assimilated into the convection model by continuously "nudging" the modification to the input density structure predicted by the convection model back towards the tomographic constraint at the long wavelengths that the tomography specifically resolves, leaving the shorter wavelength structure free to evolve, essentially "slaved" to the large scale structure. We focus upon the ability of the nudged model to explain observed plate velocities, including both their poloidal (divergence related) and toroidal (strike slip fault related) components. The true plate velocity field is then used as an additional field towards which the tomographically constrained solution is nudged.

  19. Cosmogenic photons strongly constrain UHECR source models

    NASA Astrophysics Data System (ADS)

    van Vliet, Arjen

    2017-03-01

    With the newest version of our Monte Carlo code for ultra-high-energy cosmic ray (UHECR) propagation, CRPropa 3, the flux of neutrinos and photons due to interactions of UHECRs with extragalactic background light can be predicted. Together with the recently updated data for the isotropic diffuse gamma-ray background (IGRB) by Fermi LAT, it is now possible to severely constrain UHECR source models. The evolution of the UHECR sources especially plays an important role in the determination of the expected secondary photon spectrum. Pure proton UHECR models are already strongly constrained, primarily by the highest energy bins of Fermi LAT's IGRB, as long as their number density is not strongly peaked at recent times.

  20. Constraining SUSY GUTs and Inflation with Cosmology

    SciTech Connect

    Rocher, Jonathan

    2006-11-03

    In the framework of Supersymmetric Grand Unified Theories (SUSY GUTs), the universe undergoes a cascade of symmetry breakings, during which topological defects can be formed. We address the question of the probability of cosmic string formation after a phase of hybrid inflation within a large number of models of SUSY GUTs in agreement with particle and cosmological data. We show that cosmic strings are extremely generic and should be used to relate cosmology and high energy physics. This conclusion is employed together with the WMAP CMB data to strongly constrain SUSY hybrid inflation models. F-term and D-term inflation are studied in the SUSY and minimal SUGRA framework. They are both found to agree with data but suffer from fine tuning of their superpotential coupling (λ ≲ 3 × 10^-5). Mass scales of inflation are also constrained to be M ≲ 3 × 10^15 GeV.

  1. CONSTRAINED SPECTRAL CLUSTERING FOR IMAGE SEGMENTATION

    PubMed Central

    Sourati, Jamshid; Brooks, Dana H.; Dy, Jennifer G.; Erdogmus, Deniz

    2013-01-01

    Constrained spectral clustering with affinity propagation in its original form is not practical for large-scale problems like image segmentation. In this paper we employ a novelty-selection sub-sampling strategy, together with efficient numerical eigen-decomposition methods, to make this algorithm work efficiently for images. In addition, entropy-based active learning is employed to select the queries posed to the user more wisely in an interactive image segmentation framework. We evaluate the algorithm on general and medical images to show that the segmentation results improve using constrained clustering even if one works with a subset of pixels. Furthermore, this happens more efficiently when the pixels to be labeled are selected actively. PMID:24466500

  2. The totally constrained model: three quantization approaches

    NASA Astrophysics Data System (ADS)

    Gambini, Rodolfo; Olmedo, Javier

    2014-08-01

    We provide a detailed comparison of the different approaches available for the quantization of a totally constrained system with a constraint algebra generating the non-compact group. In particular, we consider three schemes: the Refined Algebraic Quantization, the Master Constraint Programme and the Uniform Discretizations approach. For the latter, we provide a quantum description where we identify semiclassical sectors of the kinematical Hilbert space. We study the quantum dynamics of the system in order to show that it is compatible with the classical continuum evolution. Among these quantization approaches, the Uniform Discretizations provides the simpler description in agreement with the classical theory of this particular model, and it is expected to give new insights about the quantum dynamics of more realistic totally constrained models such as canonical general relativity.

  3. Constraining neutrino decays with CMBR data

    NASA Astrophysics Data System (ADS)

    Hannestad, Steen

    1998-07-01

    The decay of massive neutrinos to final states containing only invisible particles is poorly constrained experimentally. In this letter we describe the constraints that can be put on neutrino mass and lifetime using CMBR measurements. We find that very tight lifetime limits on neutrinos in the mass range 10 eV - 100 keV can be derived using CMBR data from upcoming satellite measurements.

  4. Synthesis of constrained analogues of tryptophan

    PubMed Central

    Negrato, Marco; Abbiati, Giorgio; Dell’Acqua, Monica

    2015-01-01

    Summary A Lewis acid-catalysed diastereoselective [4 + 2] cycloaddition of vinylindoles and methyl 2-acetamidoacrylate, leading to methyl 3-acetamido-1,2,3,4-tetrahydrocarbazole-3-carboxylate derivatives, is described. Treatment of the obtained cycloadducts under hydrolytic conditions results in the preparation of a small library of compounds bearing the free amino acid function at C-3 and pertaining to the class of constrained tryptophan analogues. PMID:26664620

  5. An English language interface for constrained domains

    NASA Technical Reports Server (NTRS)

    Page, Brenda J.

    1989-01-01

    The Multi-Satellite Operations Control Center (MSOCC) Jargon Interpreter (MJI) demonstrates an English language interface for a constrained domain. A constrained domain is defined as one with a small and well delineated set of actions and objects. The set of actions chosen for the MJI is from the domain of MSOCC Applications Executive (MAE) Systems Test and Operations Language (STOL) directives and contains directives for signing a cathode ray tube (CRT) on or off, calling up or clearing a display page, starting or stopping a procedure, and controlling history recording. The set of objects chosen consists of CRTs, display pages, STOL procedures, and history files. Translation from English sentences to STOL directives is done in two phases. In the first phase, an augmented transition net (ATN) parser and dictionary are used for determining grammatically correct parsings of input sentences. In the second phase, grammatically typed sentences are submitted to a forward-chaining rule-based system for interpretation and translation into equivalent MAE STOL directives. Tests of the MJI show that it is able to translate individual clearly stated sentences into the subset of directives selected for the prototype. This approach to an English language interface may be used for similarly constrained situations by modifying the MJI's dictionary and rules to reflect the change of domain.

  6. Constrained and joint inversion on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Doetsch, J.; Jordi, C.; Rieckh, V.; Guenther, T.; Schmelzbach, C.

    2015-12-01

    Unstructured meshes allow for inclusion of arbitrary surface topography, complex acquisition geometry and undulating geological interfaces in the inversion of geophysical data. This flexibility opens new opportunities for coupling different geophysical and hydrological data sets in constrained and joint inversions. For example, incorporating geological interfaces that have been derived from high-resolution geophysical data (e.g., ground penetrating radar) can add geological constraints to inversions of electrical resistivity data. These constraints can be critical for a hydrogeological interpretation of the inversion results. For time-lapse inversions of geophysical data, constraints can be derived from hydrological point measurements in boreholes, but it is difficult to include these hard constraints in the inversion of electrical resistivity monitoring data. Especially mesh density and the regularization footprint around the hydrological point measurements are important for an improved inversion compared to the unconstrained case. With the help of synthetic and field examples, we analyze how regularization and coupling operators should be chosen for time-lapse inversions constrained by point measurements and for joint inversions of geophysical data in order to take full advantage of the flexibility of unstructured meshes. For the case of constraining to point measurements, it is important to choose a regularization operator that extends beyond the neighboring cells and the uncertainty in the point measurements needs to be accounted for. For joint inversion, the choice of the regularization depends on the expected subsurface heterogeneity and the cell size of the parameter mesh.

  7. Constrained Implants in Total Knee Replacement.

    PubMed

    Touzopoulos, Panagiotis; Drosos, Georgios I; Ververidis, Athanasios; Kazakos, Konstantinos

    2015-05-01

    Total knee replacement (TKR) is a successful procedure for pain relief and functional restoration in patients with advanced osteoarthritis. The number of TKRs is increasing, and this has led to an increase in revision surgeries. The key to long-term success in both primary and revision TKR is stability, as well as adequate and stable fixation between components and underlying bone. In the vast majority of primary TKRs and in some revision cases, a posterior cruciate retaining or a posterior cruciate substituting device can be used. In some primary cases with severe deformity or ligamentous instability and in most of the revision cases, a more constrained implant is required. The purpose of this paper is to review the literature concerning the use of condylar constrained knee (CCK) and rotating hinge (RH) implants in primary and revision cases focusing on the indications and results. According to this review, although excellent and very good results have been reported, there are limitations of the existing literature concerning the indications for the use of constrained implants, the absence of long-term results, and the limited comparative studies.

  8. Effect of Constrained Arm Posture on the Processing of Action Verbs

    PubMed Central

    Yasuda, Masaaki; Stins, John F.; Higuchi, Takahiro

    2017-01-01

    Evidence is increasing that brain areas that are responsible for action planning and execution are activated during the information processing of action-related verbs (e.g., pick or kick). To obtain further evidence, we conducted three experiments to see if constraining arm posture, which could disturb the motor planning and imagery for that arm, would lead to delayed judgment of verbs referring to arm actions. In all experiments, native Japanese speakers judged as quickly as possible whether the presented object and the verb would be compatible (e.g., ball–throw) or not (e.g., ball–pour). Constrained arm posture was introduced to the task by asking participants to keep both hands behind their back. Two types of verbs were used: manual action verbs (i.e., verbs referring to actions performed on an object by a human hand) and non-manual action verbs. In contrast to our hypothesis that constrained arm posture would affect only the information processing of manual action verbs, the results showed delayed processing of both manual action and non-manual action verbs when the arm posture was constrained. The effect of constrained arm posture was observed even when participants responded with their voice, suggesting that the delayed judgment was not simply due to the difficulty of responding with the hand (i.e., basic motor interference). We discussed why, contrary to our hypothesis, constrained arm posture resulted in delayed CRTs regardless of the “manipulability” as symbolized by the verbs. PMID:28239336

  9. A self-constrained inversion of magnetic data based on correlation method

    NASA Astrophysics Data System (ADS)

    Sun, Shida; Chen, Chao

    2016-12-01

    Geologically-constrained inversion is a powerful method for producing geologically reasonable solutions in geophysical exploration problems. But in many cases, apart from the observed geophysical data to be inverted, the geological information available is insufficient for improving the reliability of the recovered models. To deal with these situations, self-constraints extracted by preprocessing the observed data have been applied to constrain the inversion. In this paper, we present a self-constrained inversion method based on the correlation method. In our approach, the correlation results are first obtained by calculating the cross-correlation between theoretical data and horizontal gradients of the observed data. Subsequently, we propose two specific strategies to extract the spatial variation from the correlation results and translate it into spatial weighting functions. Incorporating the spatial weighting functions into the model objective function, we obtain self-constrained solutions with higher reliability. We present two synthetic examples and one field magnetic data example to test the validity of the method. All results demonstrate that solutions from our self-constrained inversion can delineate geological bodies with clearer boundaries and much more concentrated physical property distributions.

  10. Hygro-thermal mechanical behavior of Nafion during constrained swelling

    NASA Astrophysics Data System (ADS)

    Silberstein, Meredith N.; Boyce, Mary C.

    Durability is a major limitation of current proton exchange membrane fuel cells. Mechanical stress due to hygro-thermal cycling is one failure mechanism of the polymer electrolyte membrane. In previous work the cyclic rate, temperature, and hydration dependent elastic-viscoplastic mechanical behavior of Nafion has been extensively investigated in uniaxial and biaxial tension, serving as a data basis and means of validation for a three-dimensional constitutive model. Here, the important effect of loading via constrained swelling is studied. Specifically, two types of loading are investigated: partially constrained swelling via a bimaterial swelling test and hygro-thermal cycling within a fuel cell. The bimaterial swelling conditions are examined via experiments in conjunction with modeling. Nafion/GDL bimaterial strips were hydrated and observed to curl significantly with the membrane on the convex side due to the large Nafion hygro-expansion coefficient. Upon drying the bimaterial strips developed a slight reverse curvature with the membrane on the concave side due to the plastic deformation which had occurred in the membrane during hydration. Finite element simulations utilizing the Nafion constitutive model successfully predicted the behavior during hydration and drying, providing insight on the constrained swelling physics and the ability of the model to predict such events. Simulations of in situ fuel cell hygro-thermal cycling are performed via a simplified two-dimensional fuel cell model. The simulation results confirm the finding of other studies that a tensile stress develops in the membrane during drying. Further, a concentration of negative hydrostatic pressure is found to develop just inside the channel region in the dried state supporting the theory of hygro-thermal driven mechanical stresses causing pinhole formation in the channel. The amplitude of the pressure cycling is found to be large and sensitive to both hygro-thermal ramp time and hold time

  11. Constraining f(T,𝒯) gravity models using type Ia supernovae

    NASA Astrophysics Data System (ADS)

    Sáez-Gómez, Diego; Carvalho, C. Sofia; Lobo, Francisco S. N.; Tereno, Ismael

    2016-07-01

    We present an analysis of an f(T,𝒯) extension of the Teleparallel Equivalent of General Relativity, where T denotes the torsion scalar and 𝒯 denotes the trace of the energy-momentum tensor. This extension includes nonminimal couplings between torsion and matter. In particular, we construct two specific models that recover the usual continuity equation, namely, f(T,𝒯) = T + g(𝒯) and f(T,𝒯) = T × g(𝒯). We then constrain the parameters of each model by fitting the predicted distance modulus to that measured from type Ia supernovae and find that both models can reproduce the late-time cosmic acceleration. We also observe that one of the models satisfies the observational constraints well and yields a goodness-of-fit similar to that of the ΛCDM model, thus demonstrating that f(T,𝒯) gravity theory encompasses viable models that can be an alternative to ΛCDM.

  12. 21 CFR 888.3720 - Toe joint polymer constrained prosthesis.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Toe joint polymer constrained prosthesis. 888.3720... (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3720 Toe joint polymer constrained prosthesis. (a) Identification. A toe joint polymer constrained prosthesis is a device made of...

  13. 21 CFR 888.3720 - Toe joint polymer constrained prosthesis.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Toe joint polymer constrained prosthesis. 888.3720... (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3720 Toe joint polymer constrained prosthesis. (a) Identification. A toe joint polymer constrained prosthesis is a device made of...

  14. 21 CFR 888.3720 - Toe joint polymer constrained prosthesis.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Toe joint polymer constrained prosthesis. 888.3720... (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3720 Toe joint polymer constrained prosthesis. (a) Identification. A toe joint polymer constrained prosthesis is a device made of...

  15. 21 CFR 888.3230 - Finger joint polymer constrained prosthesis.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Finger joint polymer constrained prosthesis. 888... SERVICES (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3230 Finger joint polymer constrained prosthesis. (a) Identification. A finger joint polymer constrained prosthesis is a device...

  16. 21 CFR 888.3720 - Toe joint polymer constrained prosthesis.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Toe joint polymer constrained prosthesis. 888.3720... (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3720 Toe joint polymer constrained prosthesis. (a) Identification. A toe joint polymer constrained prosthesis is a device made of...

  17. 21 CFR 888.3230 - Finger joint polymer constrained prosthesis.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Finger joint polymer constrained prosthesis. 888... SERVICES (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3230 Finger joint polymer constrained prosthesis. (a) Identification. A finger joint polymer constrained prosthesis is a device...

  18. 21 CFR 888.3780 - Wrist joint polymer constrained prosthesis.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Wrist joint polymer constrained prosthesis. 888... SERVICES (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3780 Wrist joint polymer constrained prosthesis. (a) Identification. A wrist joint polymer constrained prosthesis is a device made...

  19. 21 CFR 888.3780 - Wrist joint polymer constrained prosthesis.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Wrist joint polymer constrained prosthesis. 888... SERVICES (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3780 Wrist joint polymer constrained prosthesis. (a) Identification. A wrist joint polymer constrained prosthesis is a device made...

  20. 21 CFR 888.3780 - Wrist joint polymer constrained prosthesis.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Wrist joint polymer constrained prosthesis. 888... SERVICES (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3780 Wrist joint polymer constrained prosthesis. (a) Identification. A wrist joint polymer constrained prosthesis is a device made...

  1. 21 CFR 888.3780 - Wrist joint polymer constrained prosthesis.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Wrist joint polymer constrained prosthesis. 888... SERVICES (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3780 Wrist joint polymer constrained prosthesis. (a) Identification. A wrist joint polymer constrained prosthesis is a device made...

  2. 21 CFR 888.3780 - Wrist joint polymer constrained prosthesis.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Wrist joint polymer constrained prosthesis. 888... SERVICES (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3780 Wrist joint polymer constrained prosthesis. (a) Identification. A wrist joint polymer constrained prosthesis is a device made...

  3. 21 CFR 888.3720 - Toe joint polymer constrained prosthesis.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Toe joint polymer constrained prosthesis. 888.3720... (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3720 Toe joint polymer constrained prosthesis. (a) Identification. A toe joint polymer constrained prosthesis is a device made of...

  4. 21 CFR 888.3230 - Finger joint polymer constrained prosthesis.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Finger joint polymer constrained prosthesis. 888... SERVICES (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3230 Finger joint polymer constrained prosthesis. (a) Identification. A finger joint polymer constrained prosthesis is a device...

  5. 21 CFR 888.3230 - Finger joint polymer constrained prosthesis.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Finger joint polymer constrained prosthesis. 888... SERVICES (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3230 Finger joint polymer constrained prosthesis. (a) Identification. A finger joint polymer constrained prosthesis is a device...

  6. 21 CFR 888.3230 - Finger joint polymer constrained prosthesis.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Finger joint polymer constrained prosthesis. 888... SERVICES (CONTINUED) MEDICAL DEVICES ORTHOPEDIC DEVICES Prosthetic Devices § 888.3230 Finger joint polymer constrained prosthesis. (a) Identification. A finger joint polymer constrained prosthesis is a device...

  7. HCV management in resource-constrained countries.

    PubMed

    Lim, Seng Gee

    2017-02-21

    With the arrival of all-oral direct-acting antiviral (DAA) therapy with high cure rates, the promise of hepatitis C virus (HCV) eradication is within closer reach. The availability of generic DAAs has improved access for countries with constrained resources. However, therapy is only one component of the HCV care continuum, which is the framework for HCV management from identifying patients to cure. The large number of undiagnosed HCV cases is the biggest concern, and strategies to address this are needed, as risk factor screening is suboptimal, detecting <20% of known cases. Improvements in HCV confirmation through either reflex HCV RNA screening or, ideally, a sensitive point-of-care test are needed. HCV notification (e.g., Australia) may improve diagnosis (proportion of HCV diagnosed is 75%) and may lead to benefits by increasing linkage to care, therapy and cure. Evaluations for cirrhosis using non-invasive markers are best done with a biological panel, but they are only moderately accurate. In resource-constrained settings, only generic HCV medications are available, and a combination of sofosbuvir, ribavirin, ledipasvir or daclatasvir provides sufficient efficacy for all genotypes, but this is likely to be replaced with pangenotypic regimens such as sofosbuvir/velpatasvir and glecaprevir/pibrentasvir. In conclusion, HCV management in resource-constrained settings is challenging on multiple fronts because of the lack of infrastructure, facilities, trained manpower and equipment. However, it is still possible to make a significant impact towards HCV eradication through a concerted effort by individuals and national organisations with domain expertise in this area.

  8. Feature and Pose Constrained Visual Aided Inertial Navigation for Computationally Constrained Aerial Vehicles

    NASA Technical Reports Server (NTRS)

    Williams, Brian; Hudson, Nicolas; Tweddle, Brent; Brockers, Roland; Matthies, Larry

    2011-01-01

    A Feature and Pose Constrained Extended Kalman Filter (FPC-EKF) is developed for highly dynamic, computationally constrained micro aerial vehicles. Vehicle localization is achieved using only a low-performance inertial measurement unit and a single camera. The FPC-EKF framework augments the vehicle's state with both previous vehicle poses and critical environmental features, including vertical edges. This filter framework efficiently incorporates measurements from hundreds of opportunistic visual features to constrain the motion estimate, while allowing navigation and sustained tracking with respect to a few persistent features. In addition, vertical features in the environment are opportunistically used to provide global attitude references. Accurate pose estimation is demonstrated on a sequence including fast traversing, where visual features enter and exit the field of view quickly, as well as hover and ingress maneuvers, where drift-free navigation is achieved with respect to the environment.

  9. Constrained inflaton due to a complex scalar

    SciTech Connect

    Budhi, Romy H. S.; Kashiwase, Shoichi; Suematsu, Daijiro

    2015-09-14

    We reexamine inflation due to a constrained inflaton in the model of a complex scalar. The inflaton evolves along a spiral-like valley of a special scalar potential in the scalar field space, just like in single-field inflation. A sub-Planckian inflaton can induce sufficient e-foldings because of the long slow-roll path. In a special limit, the scalar spectral index and the tensor-to-scalar ratio have expressions equivalent to those of inflation with a monomial potential φ^n. The favorable values for them could be obtained by varying parameters in the potential. This model could be embedded in a certain radiative neutrino mass model.

  10. Constraining nucleon high momentum in nuclei

    NASA Astrophysics Data System (ADS)

    Yong, Gao-Chan

    2017-02-01

    Recent studies at Jefferson Lab show that a certain proportion of nucleons in nuclei have momenta greater than the so-called nuclear Fermi momentum p_F. Based on a transport model of nucleus-nucleus collisions at intermediate energies, the nucleon high-momentum tail caused by neutron-proton short-range correlations in nuclei is constrained by comparison with π and photon experimental data, taking some uncertainties into account. A high-momentum cutoff value p_max ≤ 2p_F is obtained.

  11. Quantization of soluble classical constrained systems

    SciTech Connect

    Belhadi, Z.; Menas, F.; Bérard, A.; Mohrbach, H.

    2014-12-15

    The derivation of the brackets among coordinates and momenta for classical constrained systems is a necessary step toward their quantization. Here we present a new approach for the determination of the classical brackets which requires neither Dirac's formalism nor the symplectic method of Faddeev and Jackiw. This approach is based on the computation of the brackets between the constants of integration of the exact solutions of the equations of motion. From them, all brackets of the dynamical variables of the system can be deduced in a straightforward way.

  12. Incomplete Dirac reduction of constrained Hamiltonian systems

    SciTech Connect

    Chandre, C.

    2015-10-15

    First-class constraints constitute a potential obstacle to the computation of a Poisson bracket in Dirac’s theory of constrained Hamiltonian systems. Using the pseudoinverse instead of the inverse of the matrix defined by the Poisson brackets between the constraints, we show that a Dirac–Poisson bracket can be constructed, even if it corresponds to an incomplete reduction of the original Hamiltonian system. The uniqueness of Dirac brackets is discussed. The relevance of this procedure for infinite dimensional Hamiltonian systems is exemplified.

  13. Constraining the North Pacific carbon sink: biological and physical processes

    NASA Astrophysics Data System (ADS)

    Ayers, J.; Lozier, M.

    2010-12-01

    The transition zone region of the North Pacific is a notably large sink for atmospheric carbon dioxide on a mean annual basis, though seasonally the region varies between strong wintertime uptake and weak summertime outgassing. Because the direction of air-sea carbon flux is effectively set by the sea surface pCO2, we seek to identify and quantify those processes most responsible for its variability in this region. While changes in temperature, salinity, dissolved inorganic carbon, and alkalinity are all factors that impact sea surface pCO2 on a seasonal basis, on a mean annual basis the region must be maintained as a sink by processes that remove carbon from surface waters: biological drawdown as well as the result of advection/mixing. In this work we constrain the quantitative contribution of each of these processes throughout an annual cycle. The least constrained of these processes is the biological pump, which we estimate in two independent ways: bottom-up, using satellite data-based primary productivity models coupled with export estimates from literature; and top-down, by determining what the biological pump would need to be to maintain the observed sea surface pCO2 values in the region, given our estimates of all the other regulatory processes.

  14. Constraining the level density using fission of lead projectiles

    NASA Astrophysics Data System (ADS)

    Rodríguez-Sánchez, J. L.; Benlliure, J.; Álvarez-Pol, H.; Audouin, L.; Ayyad, Y.; Bélier, G.; Boutoux, G.; Casarejos, E.; Chatillon, A.; Cortina-Gil, D.; Gorbinet, T.; Heinz, A.; Kelić-Heil, A.; Laurent, B.; Martin, J.-F.; Paradela, C.; Pellereau, E.; Pietras, B.; Ramos, D.; Rodríguez-Tajes, C.; Rossi, D. M.; Simon, H.; Taïeb, J.; Vargas, J.; Voss, B.

    2015-10-01

    The nuclear level density is one of the main ingredients for the statistical description of the fission process. In this work, we propose to constrain the description of this parameter by using fission reactions induced by protons and light ions on 208Pb at high kinetic energies. The experiment was performed at GSI (Darmstadt), where the combined use of the inverse kinematics technique with an efficient detection setup allowed us to measure the atomic number of the two fission fragments in coincidence. This measurement permitted us to obtain with high precision the partial fission cross sections and the width of the charge distribution as a function of the atomic number of the fissioning system. These data and others previously measured, covering a large range in fissility, are compared to state-of-the-art calculations. The results reveal that total and partial fission cross sections cannot unambiguously constrain the level density at ground-state and saddle-point deformations and additional observables, such as the width of the charge distribution of the final fission fragments, are required.

  15. Constraining dark sector perturbations I: cosmic shear and CMB lensing

    SciTech Connect

    Battye, Richard A.; Moss, Adam; Pearson, Jonathan A. E-mail: adam.moss@nottingham.ac.uk

    2015-04-01

    We present current and future constraints on equations of state for dark sector perturbations. The equations of state considered are those corresponding to a generalized scalar field model and time-diffeomorphism invariant L(g) theories that are equivalent to models of a relativistic elastic medium and also Lorentz violating massive gravity. We develop a theoretical understanding of the observable impact of these models. In order to constrain these models we use CMB temperature data from Planck, BAO measurements, CMB lensing data from Planck and the South Pole Telescope, and weak galaxy lensing data from CFHTLenS. We find non-trivial exclusions on the range of parameters, although the data remains compatible with w=−1. We gauge how future experiments will help to constrain the parameters. This is done via a likelihood analysis for CMB experiments such as CoRE and PRISM, and tomographic galaxy weak lensing surveys, focussing in on the potential discriminatory power of Euclid on mildly non-linear scales.

  16. Constraining the Symmetry Energy Using Radioactive Ion Beams

    NASA Astrophysics Data System (ADS)

    Stiefel, Krystin; Kohley, Zachary; Morrissey, Dave; Thoennessen, Michael; MoNA Collaboration

    2016-09-01

    Calculations from the constrained molecular dynamics (CoMD) model have shown that the N/Z ratio of the residue fragments and neutron emissions from projectile fragmentation reactions is sensitive to the form of the symmetry energy, a term in the nuclear equation of state. In order to constrain the symmetry energy using the N/Z ratio observable, an experiment was performed using the MoNA-LISA and Sweeper magnet arrangement at the NSCL. Beams of 30S and 40S impinged on 9Be targets and the heavy residue fragments were measured in coincidence with fast neutrons. Comparison of the new experimental data with theoretical models should provide a constraint on the form of the symmetry energy. Some of the data from this experiment will be presented and discussed. This work is partially supported by the National Science Foundation under Grant No. PHY-1102511 and the Department of Energy National Nuclear Security Administration under Award No. DE-NA0000979.

  17. Regular Language Constrained Sequence Alignment Revisited

    NASA Astrophysics Data System (ADS)

    Kucherov, Gregory; Pinhas, Tamar; Ziv-Ukelson, Michal

    Imposing constraints in the form of a finite automaton or a regular expression is an effective way to incorporate additional a priori knowledge into sequence alignment procedures. With this motivation, Arslan [1] introduced the Regular Language Constrained Sequence Alignment Problem and proposed an O(n^2 t^4) time and O(n^2 t^2) space algorithm for solving it, where n is the length of the input strings and t is the number of states in the non-deterministic automaton, which is given as input. Chung et al. [2] proposed a faster O(n^2 t^3) time algorithm for the same problem. In this paper, we further speed up the algorithms for Regular Language Constrained Sequence Alignment by reducing their worst-case time complexity bound to O(n^2 t^3 / log t). This is done by establishing an optimal bound on the size of Straight-Line Programs solving the maxima computation subproblem of the basic dynamic programming algorithm. We also study another solution based on a Steiner Tree computation. While it does not improve the run time complexity in the worst case, our simulations show that both approaches are efficient in practice, especially when the input automata are dense.

  18. Physically constrained maximum likelihood mode filtering.

    PubMed

    Papp, Joseph C; Preisig, James C; Morozov, Andrey K

    2010-04-01

    Mode filtering is most commonly implemented using the sampled mode shapes or pseudoinverse algorithms. Buck et al. [J. Acoust. Soc. Am. 103, 1813-1824 (1998)] placed these techniques in the context of a broader maximum a posteriori (MAP) framework. However, the MAP algorithm requires that the signal and noise statistics be known a priori. Adaptive array processing algorithms are candidates for improving performance without the need for a priori signal and noise statistics. A variant of the physically constrained, maximum likelihood (PCML) algorithm [A. L. Kraay and A. B. Baggeroer, IEEE Trans. Signal Process. 55, 4048-4063 (2007)] is developed for mode filtering that achieves the same performance as the MAP mode filter yet does not need a priori knowledge of the signal and noise statistics. The central innovation of this adaptive mode filter is that the received signal's sample covariance matrix, as estimated by the algorithm, is constrained to be that which can be physically realized given a modal propagation model and an appropriate noise model. Shallow water simulation results are presented showing the benefit of using the PCML method in adaptive mode filtering.

  19. Multiple Manifold Clustering Using Curvature Constrained Path

    PubMed Central

    Babaeian, Amir; Bayestehtashk, Alireza; Bandarabadi, Mojtaba

    2015-01-01

    The problem of multiple surface clustering is a challenging task, particularly when the surfaces intersect. Available methods such as Isomap fail to capture the true shape of the surface near the intersection, resulting in incorrect clustering. The Isomap algorithm uses the shortest path between points. The main drawback of the shortest-path algorithm is the lack of a curvature constraint, which allows a path to pass between points lying on different surfaces. In this paper we tackle this problem by imposing a curvature constraint on the shortest-path algorithm used in Isomap. The algorithm chooses several landmark nodes at random and then checks whether there is a curvature-constrained path between each landmark node and every other node in the neighborhood graph. We build a binary feature vector for each point in which each entry represents the connectivity of that point to a particular landmark. The binary feature vectors can then be used as input to a conventional clustering algorithm such as hierarchical clustering. We apply our method to simulated and some real datasets and show that it performs comparably to the best methods, such as K-manifold and spectral multi-manifold clustering. PMID:26375819
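
    A minimal sketch of the landmark-feature step described above, assuming synthetic data; the curvature-constrained path test, which is the paper's actual contribution, is replaced here by plain graph connectivity on a k-nearest-neighbor graph, so this only illustrates how the binary feature vectors feed a conventional hierarchical clustering.

        import numpy as np
        from sklearn.neighbors import kneighbors_graph
        from scipy.sparse.csgraph import shortest_path
        from scipy.cluster.hierarchy import linkage, fcluster

        rng = np.random.default_rng(2)
        X = np.vstack([rng.normal(0.0, 0.3, size=(100, 2)),      # two well-separated
                       rng.normal(5.0, 0.3, size=(100, 2))])     # point clouds (synthetic)

        G = kneighbors_graph(X, n_neighbors=8, mode='distance')  # neighborhood graph
        landmarks = rng.choice(len(X), size=10, replace=False)   # random landmark nodes

        # Reachability of every point from each landmark along the graph; the paper
        # would use a curvature-constrained path test here instead of connectivity.
        dist = shortest_path(G, directed=False, indices=landmarks)
        features = np.isfinite(dist).T.astype(float)              # binary landmark features
        labels = fcluster(linkage(features, method='ward'), t=2, criterion='maxclust')
        print(np.bincount(labels))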

  20. An approach to constrained aerodynamic design with application to airfoils

    NASA Technical Reports Server (NTRS)

    Campbell, Richard L.

    1992-01-01

    An approach was developed for incorporating flow and geometric constraints into the Direct Iterative Surface Curvature (DISC) design method. In this approach, an initial target pressure distribution is developed using a set of control points. The chordwise locations and pressure levels of these points are initially estimated either from empirical relationships and observed characteristics of pressure distributions for a given class of airfoils or by fitting the points to an existing pressure distribution. These values are then automatically adjusted during the design process to satisfy the flow and geometric constraints. The flow constraints currently available are lift, wave drag, pitching moment, pressure gradient, and local pressure levels. The geometric constraint options include maximum thickness, local thickness, leading-edge radius, and a 'glove' constraint involving inner and outer bounding surfaces. This design method was also extended to include the successive constraint release (SCR) approach to constrained minimization.

  1. Constraining top-Higgs couplings at high and low energy

    NASA Astrophysics Data System (ADS)

    Mereghetti, Emanuele

    2017-03-01

    The study of the couplings of the Higgs boson and of the top quark plays a preeminent role at the LHC, and could unveil the first signs of new physics. I will discuss the interplay of direct and indirect probes of certain classes of top and Higgs couplings. Including constraints from collider observables, precision electroweak tests, flavor physics, and electric dipole moments (EDMs), I will show that indirect probes are competitive, if not dominant, for both the CP-even and CP-odd top and Higgs couplings we considered. I will discuss the role of theoretical uncertainties, associated with hadronic and nuclear matrix elements, and indicate targets to further improve the constraining power of EDM experiments.

  2. Testing manifest monotonicity using order-constrained statistical inference.

    PubMed

    Tijmstra, Jesper; Hessen, David J; van der Heijden, Peter G M; Sijtsma, Klaas

    2013-01-01

    Most dichotomous item response models share the assumption of latent monotonicity, which states that the probability of a positive response to an item is a nondecreasing function of a latent variable intended to be measured. Latent monotonicity cannot be evaluated directly, but it implies manifest monotonicity across a variety of observed scores, such as the restscore, a single item score, and in some cases the total score. In this study, we show that manifest monotonicity can be tested by means of the order-constrained statistical inference framework. We propose a procedure that uses this framework to determine whether manifest monotonicity should be rejected for specific items. This approach provides a likelihood ratio test for which the p-value can be approximated through simulation. A simulation study is presented that evaluates the Type I error rate and power of the test, and the procedure is applied to empirical data.

  3. Constraining photon mass by energy-dependent gravitational light bending

    NASA Astrophysics Data System (ADS)

    Qian, Lei

    2012-03-01

    In the standard model of particle physics, photons are massless particles with a particular dispersion relation. Tests of this claim at different scales are both interesting and important. Experiments in terrestrial laboratories and several extraterrestrial tests have put upper limits on the photon mass; e.g., a torsion balance experiment in the lab shows that the photon mass should be smaller than 1.2 × 10⁻⁵¹ g. In this work, this claim is tested at a cosmological scale by looking at the available strong gravitational lensing data, and an upper limit of 8.71 × 10⁻³⁹ g on the photon mass is given. Observations of energy-dependent gravitational lensing with higher-accuracy astrometry instruments, not yet available, may constrain the photon mass better.

  4. Constraining the Ensemble Kalman Filter for improved streamflow forecasting

    NASA Astrophysics Data System (ADS)

    Maxwell, Deborah; Jackson, Bethanna; McGregor, James

    2016-04-01

    Data assimilation techniques such as the Kalman Filter and its variants are often applied to hydrological models with minimal state volume/capacity constraints. Flux constraints are rarely, if ever, applied. Consequently, model states can be adjusted beyond physically reasonable limits, compromising the integrity of model output. In this presentation, we investigate the effect of constraining the Ensemble Kalman Filter (EnKF) on forecast performance. An EnKF implementation with no constraints is compared to model output with no assimilation, followed by a 'typical' hydrological implementation (in which mass constraints are enforced to ensure non-negativity and capacity thresholds of model states are not exceeded), and then a more tightly constrained implementation where flux as well as mass constraints are imposed to limit the rate of water movement within a state. A three year period (2008-2010) with no significant data gaps and representative of the range of flows observed over the fuller 1976-2010 record was selected for analysis. Over this period, the standard implementation of the EnKF (no constraints) contained eight hydrological events where (multiple) physically inconsistent state adjustments were made. All were selected for analysis. Overall, neither the unconstrained nor the "typically" mass-constrained forecasts were significantly better than the non-filtered forecasts; in fact several were significantly degraded. Flux constraints (in conjunction with mass constraints) significantly improved the forecast performance of six events relative to all other implementations, while the remaining two events showed no significant difference in performance. We conclude that placing flux as well as mass constraints on the data assimilation framework encourages physically consistent state updating and results in more accurate and reliable forward predictions of streamflow for robust decision-making. We also experiment with the observation error, and find that this
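
    The sketch below shows one way such constraints can be bolted onto a standard stochastic (perturbed-observation) EnKF analysis step: the mass constraint is applied by clipping updated states to physically admissible bounds, and a flux constraint would additionally bound the per-step increment. The matrix shapes, names, and clipping rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def enkf_update(X, y, H, R, lower, upper, rng=None):
    """X: (n_state, n_ens) forecast ensemble; y: (n_obs,) observation;
    H: (n_obs, n_state) observation operator; R: (n_obs, n_obs) obs-error covariance;
    lower, upper: (n_state,) physical bounds on each state."""
    rng = np.random.default_rng() if rng is None else rng
    n_state, n_ens = X.shape
    A = X - X.mean(axis=1, keepdims=True)           # ensemble anomalies
    P = A @ A.T / (n_ens - 1)                       # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
    # Perturbed-observation update
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=n_ens).T
    Xa = X + K @ (Y - H @ X)
    # Mass constraints: keep each state within physically meaningful bounds.
    Xa = np.clip(Xa, lower[:, None], upper[:, None])
    # A flux constraint would additionally bound the increment, e.g.
    # Xa = X + np.clip(Xa - X, -max_flux[:, None], max_flux[:, None])
    return Xa
```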

  5. A METHOD TO CONSTRAIN THE SIZE OF THE PROTOSOLAR NEBULA

    SciTech Connect

    Kretke, K. A.; Levison, H. F.; Buie, M. W.; Morbidelli, A.

    2012-04-15

    Observations indicate that the gaseous circumstellar disks around young stars vary significantly in size, ranging from tens to thousands of AU. Models of planet formation depend critically upon the properties of these primordial disks, yet in general it is impossible to connect an existing planetary system with an observed disk. We present a method by which we can constrain the size of our own protosolar nebula using the properties of the small body reservoirs in the solar system. In standard planet formation theory, after Jupiter and Saturn formed they scattered a significant number of remnant planetesimals into highly eccentric orbits. In this paper, we show that if there had been a massive, extended protoplanetary disk at that time, then the disk would have excited Kozai oscillations in some of the scattered objects, driving them into high-inclination (i ≳ 50°), low-eccentricity orbits (q ≳ 30 AU). The dissipation of the gaseous disk would strand a subset of objects in these high-inclination orbits; orbits that are stable on Gyr timescales. To date, surveys have not detected any Kuiper-belt objects with orbits consistent with this dynamical mechanism. Using these non-detections by the Deep Ecliptic Survey and the Palomar Distant Solar System Survey we are able to rule out an extended gaseous protoplanetary disk (R_D ≳ 80 AU) in our solar system at the time of Jupiter's formation. Future deep all sky surveys such as the Large Synoptic Survey Telescope will allow us to further constrain the size of the protoplanetary disk.

  6. Modeling Atmospheric CO2 Processes to Constrain the Missing Sink

    NASA Technical Reports Server (NTRS)

    Kawa, S. R.; Denning, A. S.; Erickson, D. J.; Collatz, J. C.; Pawson, S.

    2005-01-01

    We report on a NASA supported modeling effort to reduce uncertainty in carbon cycle processes that create the so-called missing sink of atmospheric CO2. Our overall objective is to improve characterization of CO2 source/sink processes globally with improved formulations for atmospheric transport, terrestrial uptake and release, biomass and fossil fuel burning, and observational data analysis. The motivation for this study follows from the perspective that progress in determining CO2 sources and sinks beyond the current state of the art will rely on utilization of more extensive and intensive CO2 and related observations including those from satellite remote sensing. The major components of this effort are: 1) Continued development of the chemistry and transport model using analyzed meteorological fields from the Goddard Global Modeling and Assimilation Office, with comparison to real time data in both forward and inverse modes; 2) An advanced biosphere model, constrained by remote sensing data, coupled to the global transport model to produce distributions of CO2 fluxes and concentrations that are consistent with actual meteorological variability; 3) Improved remote sensing estimates for biomass burning emission fluxes to better characterize interannual variability in the atmospheric CO2 budget and to better constrain the land use change source; 4) Evaluating the impact of temporally resolved fossil fuel emission distributions on atmospheric CO2 gradients and variability. 5) Testing the impact of existing and planned remote sensing data sources (e.g., AIRS, MODIS, OCO) on inference of CO2 sources and sinks, and use the model to help establish measurement requirements for future remote sensing instruments. The results will help to prepare for the use of OCO and other satellite data in a multi-disciplinary carbon data assimilation system for analysis and prediction of carbon cycle changes and carbon-climate interactions.

  7. Spatially constrained propulsion in jumping archer fish

    NASA Astrophysics Data System (ADS)

    Mendelson, Leah; Techet, Alexandra

    2016-11-01

    Archer fish jump multiple body lengths out of the water for prey capture with impressive accuracy. Their remarkable aim is facilitated by jumping from a stationary position directly below the free surface. As a result of this starting position, rapid acceleration to a velocity sufficient for reaching the target occurs with only a body length to travel before the fish leaves the water. Three-dimensional measurements of jumping kinematics and volumetric velocimetry using Synthetic Aperture PIV highlight multiple strategies for such spatially constrained acceleration. Archer fish rapidly extend fins at jump onset to increase added mass forces and modulate their swimming kinematics to minimize wasted energy when the body is partially out of the water. Volumetric measurements also enable assessment of efficiency during a jump, which is crucial to understanding jumping's role as an energetically viable hunting strategy for the fish.

  8. Constrained sampling method for analytic continuation.

    PubMed

    Sandvik, Anders W

    2016-12-01

    A method for analytic continuation of imaginary-time correlation functions (here obtained in quantum Monte Carlo simulations) to real-frequency spectral functions is proposed. Stochastically sampling a spectrum parametrized by a large number of δ functions, treated as a statistical-mechanics problem, it avoids distortions caused by (as demonstrated here) configurational entropy in previous sampling methods. The key development is the suppression of entropy by constraining the spectral weight to within identifiable optimal bounds and imposing a set number of peaks. As a test case, the dynamic structure factor of the S=1/2 Heisenberg chain is computed. Very good agreement is found with Bethe ansatz results in the ground state (including a sharp edge) and with exact diagonalization of small systems at elevated temperatures.

  9. Constrained sampling method for analytic continuation

    NASA Astrophysics Data System (ADS)

    Sandvik, Anders W.

    2016-12-01

    A method for analytic continuation of imaginary-time correlation functions (here obtained in quantum Monte Carlo simulations) to real-frequency spectral functions is proposed. Stochastically sampling a spectrum parametrized by a large number of δ functions, treated as a statistical-mechanics problem, it avoids distortions caused by (as demonstrated here) configurational entropy in previous sampling methods. The key development is the suppression of entropy by constraining the spectral weight to within identifiable optimal bounds and imposing a set number of peaks. As a test case, the dynamic structure factor of the S = 1/2 Heisenberg chain is computed. Very good agreement is found with Bethe ansatz results in the ground state (including a sharp edge) and with exact diagonalization of small systems at elevated temperatures.
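
    As a toy illustration of the sampling idea (not the production algorithm), the sketch below performs Metropolis sampling of a fixed number of equal-weight δ-functions whose frequencies are constrained to a window, using the zero-temperature kernel K(τ, ω) = exp(−τω) and a fixed sampling temperature Θ; the actual method also determines the optimal weight bounds and Θ from the data, which is omitted here.

```python
import numpy as np

def sample_spectrum(taus, G_data, sigma, n_peak=50, w_max=6.0, weight=1.0,
                    theta=1.0, n_sweeps=2000, step=0.1, seed=0):
    """Metropolis sampling of n_peak equal-weight delta functions in [0, w_max]."""
    rng = np.random.default_rng(seed)
    amp = weight / n_peak
    kernel = lambda w: np.exp(-np.outer(taus, w))            # K(tau, w) = exp(-tau*w)
    chi2 = lambda w: np.sum(((amp * kernel(w).sum(axis=1) - G_data) / sigma) ** 2)
    w = rng.uniform(0.0, w_max, n_peak)                      # constrained frequencies
    c = chi2(w)
    grid_edges = np.linspace(0.0, w_max, 201)
    hist = np.zeros(200)
    for _ in range(n_sweeps):
        for i in range(n_peak):
            w_try = w.copy()
            w_try[i] += rng.uniform(-step, step)
            if not 0.0 <= w_try[i] <= w_max:                 # enforce the frequency window
                continue
            c_try = chi2(w_try)
            if c_try < c or rng.random() < np.exp(-(c_try - c) / (2.0 * theta)):
                w, c = w_try, c_try
        hist += np.histogram(w, bins=grid_edges)[0]
    centers = 0.5 * (grid_edges[:-1] + grid_edges[1:])
    return centers, amp * hist / n_sweeps                    # averaged spectral weight per bin
```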

  10. Perceived visual speed constrained by image segmentation

    NASA Technical Reports Server (NTRS)

    Verghese, P.; Stone, L. S.

    1996-01-01

    Little is known about how or where the visual system parses the visual scene into objects or surfaces. However, it is generally assumed that the segmentation and grouping of pieces of the image into discrete entities is due to 'later' processing stages, after the 'early' processing of the visual image by local mechanisms selective for attributes such as colour, orientation, depth, and motion. Speed perception is also thought to be mediated by early mechanisms tuned for speed. Here we show that manipulating the way in which an image is parsed changes the way in which local speed information is processed. Manipulations that cause multiple stimuli to appear as parts of a single patch degrade speed discrimination, whereas manipulations that perceptually divide a single large stimulus into parts improve discrimination. These results indicate that processes as early as speed perception may be constrained by the parsing of the visual image into discrete entities.

  11. Parametric and Combinatorial Problems in Constrained Optimization

    DTIC Science & Technology

    1993-02-28


  12. Fluctuation theorem for constrained equilibrium systems.

    PubMed

    Gilbert, Thomas; Dorfman, J Robert

    2006-02-01

    We discuss the fluctuation properties of equilibrium chaotic systems with constraints such as isokinetic and Nosé-Hoover thermostats. Although the dynamics of these systems does not typically preserve phase-space volumes, the average phase-space contraction rate vanishes, so that the stationary states are smooth. Nevertheless, finite-time averages of the phase-space contraction rate have nontrivial fluctuations which we show satisfy a simple version of the Gallavotti-Cohen fluctuation theorem, complementary to the usual fluctuation theorem for nonequilibrium stationary states and appropriate to constrained equilibrium states. Moreover, we show that these fluctuations are distributed according to a Gaussian curve for long enough times. Three different systems are considered here: namely, (i) a fluid composed of particles interacting with Lennard-Jones potentials, (ii) a harmonic oscillator with Nosé-Hoover thermostatting, and (iii) a simple hyperbolic two-dimensional map.

  13. Fluctuation theorem for constrained equilibrium systems

    NASA Astrophysics Data System (ADS)

    Gilbert, Thomas; Dorfman, J. Robert

    2006-02-01

    We discuss the fluctuation properties of equilibrium chaotic systems with constraints such as isokinetic and Nosé-Hoover thermostats. Although the dynamics of these systems does not typically preserve phase-space volumes, the average phase-space contraction rate vanishes, so that the stationary states are smooth. Nevertheless, finite-time averages of the phase-space contraction rate have nontrivial fluctuations which we show satisfy a simple version of the Gallavotti-Cohen fluctuation theorem, complementary to the usual fluctuation theorem for nonequilibrium stationary states and appropriate to constrained equilibrium states. Moreover, we show that these fluctuations are distributed according to a Gaussian curve for long enough times. Three different systems are considered here: namely, (i) a fluid composed of particles interacting with Lennard-Jones potentials, (ii) a harmonic oscillator with Nosé-Hoover thermostatting, and (iii) a simple hyperbolic two-dimensional map.

  14. Sampling Motif-Constrained Ensembles of Networks

    NASA Astrophysics Data System (ADS)

    Fischer, Rico; Leitão, Jorge C.; Peixoto, Tiago P.; Altmann, Eduardo G.

    2015-10-01

    The statistical significance of network properties is conditioned on null models which satisfy specified properties but that are otherwise random. Exponential random graph models are a principled theoretical framework to generate such constrained ensembles, but they often fail in practice, either due to model inconsistency or due to the impossibility of sampling networks from them. These problems affect the important case of networks with prescribed clustering coefficient or number of small connected subgraphs (motifs). In this Letter we use the Wang-Landau method to obtain a multicanonical sampling that overcomes both these problems. We sample, in polynomial time, networks with arbitrary degree sequences from ensembles with imposed motif counts. Applying this method to social networks, we investigate the relation between transitivity and homophily, and we quantify the correlation between different types of motifs, finding that single motifs can explain up to 60% of the variation of motif profiles.

  15. The asymptotics of large constrained graphs

    NASA Astrophysics Data System (ADS)

    Radin, Charles; Ren, Kui; Sadun, Lorenzo

    2014-05-01

    We show, through local estimates and simulation, that if one constrains simple graphs by their densities ɛ of edges and τ of triangles, then asymptotically (in the number of vertices) for over 95% of the possible range of those densities there is a well-defined typical graph, and it has a very simple structure: the vertices are decomposed into two subsets V_1 and V_2 of fixed relative size c and 1 − c, and there are well-defined probabilities of edges, g_jk, between v_j ∈ V_j and v_k ∈ V_k. Furthermore the four parameters c, g_11, g_22 and g_12 are smooth functions of (ɛ, τ) except at two smooth 'phase transition' curves.

  16. Printer model inversion by constrained optimization

    NASA Astrophysics Data System (ADS)

    Cholewo, Tomasz J.

    1999-12-01

    This paper describes a novel method for finding colorant amounts for which a printer will produce a requested color appearance, based on constrained optimization. An error function defines the gamut mapping method and black replacement method. The constraints limit the feasible solution region to the device gamut and prevent exceeding the maximum total area coverage. Colorant values corresponding to in-gamut colors are found with precision limited only by the accuracy of the device model. Out-of-gamut colors are mapped to colors within the boundary of the device gamut. This general approach, used in conjunction with different types of color difference equations, can perform a wide range of out-of-gamut mappings, such as chroma clipping, or find colors on the gamut boundary having specified properties. We present an application of this method to the creation of PostScript color rendering dictionaries and ICC profiles.
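
    A minimal sketch of the inversion step with SciPy is shown below, assuming a differentiable forward model printer_model(cmyk) → Lab (hypothetical) and a total-area-coverage limit; the plain Euclidean Lab distance stands in for the paper's configurable color-difference and black-replacement error function.

```python
import numpy as np
from scipy.optimize import minimize

def invert_printer(target_lab, printer_model, tac_limit=3.0, x0=None):
    """Find CMYK in [0, 1]^4 whose predicted Lab is closest to target_lab,
    subject to a total-area-coverage constraint sum(cmyk) <= tac_limit."""
    x0 = np.full(4, 0.25) if x0 is None else x0
    objective = lambda c: np.sum((printer_model(c) - target_lab) ** 2)
    constraints = [{"type": "ineq", "fun": lambda c: tac_limit - c.sum()}]
    res = minimize(objective, x0, method="SLSQP",
                   bounds=[(0.0, 1.0)] * 4, constraints=constraints)
    # In-gamut targets are matched to model accuracy; out-of-gamut targets
    # end up at the nearest point on the constrained (gamut) boundary.
    return res.x
```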

  17. Building cancer nursing skills in a resource-constrained government hospital.

    PubMed

    Strother, R M; Fitch, Margaret; Kamau, Peter; Beattie, Kathy; Boudreau, Angela; Busakhalla, N; Loehrer, P J

    2012-09-01

    Cancer is a rising cause of morbidity and mortality in resource-constrained settings. Few places in the developing world have cancer care experts and infrastructure for caring for cancer patients; therefore, it is imperative to develop this infrastructure and expertise. A critical component of cancer care, rarely addressed in the published literature, is cancer nursing. This report describes an effort to develop cancer nursing subspecialty knowledge and skills in support of a growing resource-constrained comprehensive cancer care program in Western Kenya. This report highlights the context of cancer care delivery in a resource-constrained setting, and describes one targeted intervention to further develop the skill set and knowledge of cancer care providers, as part of collaboration between developed world academic institutions and a medical school and governmental hospital in Western Kenya. Based on observations of current practice, practice setting, and resource limitations, a pragmatic curriculum for cancer care nursing was developed and implemented.

  18. Newton's method for large bound-constrained optimization problems.

    SciTech Connect

    Lin, C.-J.; More, J. J.; Mathematics and Computer Science

    1999-01-01

    We analyze a trust region version of Newton's method for bound-constrained problems. Our approach relies on the geometry of the feasible set, not on the particular representation in terms of constraints. The convergence theory holds for linearly constrained problems and yields global and superlinear convergence without assuming either strict complementarity or linear independence of the active constraints. We also show that the convergence theory leads to an efficient implementation for large bound-constrained problems.
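
    The method itself is a trust-region Newton iteration; as a much simpler illustration of how the box geometry enters such algorithms, the sketch below runs a projected-gradient iteration with an Armijo-type line search along the projection arc. All names and tolerances are illustrative, not the paper's algorithm.

```python
import numpy as np

def projected_gradient(f, grad, x0, lo, hi, alpha=1.0, tol=1e-8, max_iter=500):
    """Minimize f(x) subject to lo <= x <= hi by projecting gradient steps onto the box."""
    x = np.clip(np.asarray(x0, dtype=float), lo, hi)
    for _ in range(max_iter):
        g = grad(x)
        step = alpha
        while step > 1e-12:
            x_new = np.clip(x - step * g, lo, hi)         # project the trial step onto the box
            if f(x_new) <= f(x) - 1e-4 * g @ (x - x_new): # sufficient-decrease condition
                break
            step *= 0.5
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```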

  19. Constraining Upper Mantle Azimuthal Anisotropy With Free Oscillation Data (Invited)

    NASA Astrophysics Data System (ADS)

    Beghein, C.; Resovsky, J. S.; van der Hilst, R. D.

    2009-12-01

    We investigate the potential of Earth's free oscillation coupled modes as a tool to constrain large-scale seismic anisotropy in the transition zone and in the bulk of the lower mantle. While the presence of seismic anisotropy is widely documented in the uppermost and the lowermost mantle, its observation at intermediate depths remains a formidable challenge. We show that several coupled modes of oscillation are sensitive to radial and azimuthal anisotropy throughout the mantle. In particular, modes of the type 0S_l-0T_(l+1) have high sensitivity to shear-wave radial anisotropy and to six elastic parameters describing azimuthal anisotropy in the 200 km-1000 km depth range. The use of such data thus enables us to extend the sensitivity of traditionally used fundamental-mode surface waves to depths corresponding to the transition zone and the top of the lower mantle. In addition, these modes have the potential to provide new and unique constraints on several elastic parameters to which surface waves are not sensitive. We attempted to fit degree-two splitting measurements of 0S_l-0T_(l+1) coupled modes using previously published isotropic and transversely isotropic mantle models, but we could not explain the entire signal. We then explored the model space with a forward modeling approach and determined that, after correction for the effect of the crust and mantle radial anisotropy, the remaining signal can be explained by the presence of azimuthal anisotropy in the upper mantle. When we allow the azimuthal anisotropy to go below 400 km depth, the data fit is slightly better and the model space search leads to a better-resolved model than when we force the anisotropy to lie in the top 400 km of the mantle. Its depth extent and distribution are, however, still not well constrained by the data due to parameter tradeoffs and a limited coupled-mode data set. It is thus clear that mode coupling measurements have the potential to constrain upper-mantle azimuthal anisotropy

  20. Asynchronous parallel generating set search for linearly-constrained optimization.

    SciTech Connect

    Lewis, Robert Michael; Griffin, Joshua D.; Kolda, Tamara Gibson

    2006-08-01

    Generating set search (GSS) is a family of direct search methods that encompasses generalized pattern search and related methods. We describe an algorithm for asynchronous linearly-constrained GSS, which has some complexities that make it different from both the asynchronous bound-constrained case as well as the synchronous linearly-constrained case. The algorithm has been implemented in the APPSPACK software framework and we present results from an extensive numerical study using CUTEr test problems. We discuss the results, both positive and negative, and conclude that GSS is a reliable method for solving small-to-medium sized linearly-constrained optimization problems without derivatives.
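
    For orientation, a serial, bound-constrained compass search — the simplest member of the GSS family — is sketched below; APPSPACK's asynchronous, linearly-constrained machinery is considerably more involved, so this only shows the basic poll-and-contract structure. All parameter names and defaults are illustrative.

```python
import numpy as np

def compass_search(f, x0, lo, hi, step=0.5, tol=1e-6, max_evals=10000):
    """Derivative-free minimization of f over the box lo <= x <= hi."""
    x = np.clip(np.asarray(x0, dtype=float), lo, hi)
    fx, evals = f(x), 1
    directions = np.vstack([np.eye(len(x)), -np.eye(len(x))])   # the generating set
    while step > tol and evals < max_evals:
        improved = False
        for d in directions:                                     # poll step
            trial = np.clip(x + step * d, lo, hi)                # stay inside the bounds
            ft = f(trial); evals += 1
            if ft < fx:
                x, fx, improved = trial, ft, True
                break
        if not improved:
            step *= 0.5                                          # contract after an unsuccessful poll
    return x, fx
```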

  1. Eulerian Formulation of Spatially Constrained Elastic Rods

    NASA Astrophysics Data System (ADS)

    Huynen, Alexandre

    Slender elastic rods are ubiquitous in nature and technology. For a vast majority of applications, the rod deflection is restricted by an external constraint and a significant part of the elastic body is in contact with a stiff constraining surface. The research work presented in this doctoral dissertation formulates a computational model for the solution of elastic rods constrained inside or around frictionless tube-like surfaces. The segmentation strategy adopted to cope with this complex class of problems consists in sequencing the global problem into, comparatively simpler, elementary problems either in continuous contact with the constraint or contact-free between their extremities. Within the conventional Lagrangian formulation of elastic rods, this approach is however associated with two major drawbacks. First, the boundary conditions specifying the locations of the rod centerline at both extremities of each elementary problem lead to the establishment of isoperimetric constraints, i.e., integral constraints on the unknown length of the rod. Second, the assessment of the unilateral contact condition requires, in principle, the comparison of two curves parametrized by distinct curvilinear coordinates, viz. the rod centerline and the constraint axis. Both conspire to burden the computations associated with the method. To streamline the solution along the elementary problems and rationalize the assessment of the unilateral contact condition, the rod governing equations are reformulated within the Eulerian framework of the constraint. The methodical exploration of both types of elementary problems leads to specific formulations of the rod governing equations that stress the profound connection between the mechanics of the rod and the geometry of the constraint surface. The proposed Eulerian reformulation, which restates the rod local equilibrium in terms of the curvilinear coordinate associated with the constraint axis, describes the rod deformed configuration

  2. Constraining electromagnetic core-mantle coupling

    NASA Astrophysics Data System (ADS)

    Wicht, J.; Jault, D.

    1999-02-01

    Electromagnetic core-mantle coupling is one candidate for explaining length of day (LOD) variations on decadal time scales. This coupling has traditionally been calculated from models of core surface flow /U-->, but core flow inversions are not unique. Unfortunately, the main torque contribution depends on the toroidal part /∇×r-->Φ of the vector field (U-->Br), which is left undetermined by the radial induction equation that links core flows to magnetic field data under the frozen-flux hypothesis. (Br is the radial magnetic field at the core surface.) We have developed a new method that bypasses flow inversions. Using the vector field (U-->Br) directly, we estimate its toroidal part by imposing that (U-->Br) vanishes where Br=0. The calculation of the electromagnetic torque is also hindered by the uncertainty in the conductivity structure of the lower mantle. However, we conclude that the conductivity of the lower mantle has to be much larger than current estimates derived from diamond anvil cell experiments to make the electromagnetic torque significant. The region beneath Africa and the mid-Atlantic (where there is the most significant (Br=0) curve apart from the magnetic equator) is particularly important. Here, the field Φ is relatively well constrained, and this is the single region that contributes most to the torque. Following Holme [Holme, R., 1998a. Electromagnetic core-mantle coupling 1: explaining decadal changes in the length of day. Geophys. J. Int. (132) 167-180.], we show nevertheless that small changes to Φ suffice to recover the torque ΓLOD required to explain LOD variations provided that the lower mantle conductivity is high enough. We find a value similar to Holme [Holme, R., 1998b. Electromagnetic core-mantle coupling 2: probing deep mantle conductance. In: Gunis, M., Wysession, M.E., Knittel, E., Buffett, B.A. (Eds.), The Core-Mantle Boundary Region. AGU, pp. 139-151.] for the minimum conductance (108 S). However, we were not able to

  3. Sequential unconstrained minimization algorithms for constrained optimization

    NASA Astrophysics Data System (ADS)

    Byrne, Charles

    2008-02-01

    The problem of minimizing a function f(x): R^J → R, subject to constraints on the vector variable x, occurs frequently in inverse problems. Even without constraints, finding a minimizer of f(x) may require iterative methods. We consider here a general class of iterative algorithms that find a solution to the constrained minimization problem as the limit of a sequence of vectors, each solving an unconstrained minimization problem. Our sequential unconstrained minimization algorithm (SUMMA) is an iterative procedure for constrained minimization. At the kth step we minimize the function G_k(x) = f(x) + g_k(x) to obtain x^k. The auxiliary functions g_k(x): D ⊆ R^J → R_+ are nonnegative on the set D, each x^k is assumed to lie within D, and the objective is to minimize the continuous function f: R^J → R over x in the set C = D̄, the closure of D. We assume that such minimizers exist, and denote one such by x̂. We assume that the functions g_k(x) satisfy the inequalities 0 ≤ g_k(x) ≤ G_{k−1}(x) − G_{k−1}(x^{k−1}), for k = 2, 3, .... Using this assumption, we show that the sequence {f(x^k)} is decreasing and converges to f(x̂). If the restriction of f(x) to D has bounded level sets, which happens if x̂ is unique and f(x) is closed, proper and convex, then the sequence {x^k} is bounded, and f(x*) = f(x̂) for any cluster point x*. Therefore, if x̂ is unique, x* = x̂ and {x^k} → x̂. When x̂ is not unique, convergence can still be obtained in particular cases. The SUMMA includes, as particular cases, the well-known barrier- and penalty-function methods, the simultaneous multiplicative algebraic reconstruction technique (SMART), the proximal minimization algorithm of Censor and Zenios, the entropic proximal methods of Teboulle, as well as certain cases of gradient descent and the Newton-Raphson method. The proof techniques used for SUMMA can be extended to obtain related results for the induced proximal
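
    As a concrete instance of the sequential-unconstrained idea, the sketch below implements the classical logarithmic-barrier method, which the abstract lists among SUMMA's particular cases: each outer step minimizes G_k(x) = f(x) + g_k(x) with g_k(x) = −(1/t_k) Σ log c_i(x) and t_k increasing. SciPy's Nelder-Mead handles the inner unconstrained minimizations; the example problem and parameters are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def barrier_method(f, ineqs, x0, t0=1.0, mu=10.0, outer=8):
    """Minimize f(x) subject to ineqs[i](x) >= 0 via sequential unconstrained steps."""
    x, t = np.asarray(x0, dtype=float), t0
    for _ in range(outer):
        def G(z, t=t):
            c = np.array([g(z) for g in ineqs])
            if np.any(c <= 0):
                return np.inf                        # outside the domain D
            return f(z) - np.sum(np.log(c)) / t      # f plus the scaled barrier
        x = minimize(G, x, method="Nelder-Mead").x   # inner unconstrained minimization
        t *= mu                                      # tighten the barrier
    return x

# Example: minimize (x - 2)^2 subject to x <= 1, i.e. 1 - x >= 0; iterates approach x = 1.
print(barrier_method(lambda x: (x[0] - 2) ** 2, [lambda x: 1 - x[0]], [0.0]))
```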

  4. Constraining decaying dark matter with Fermi LAT gamma-rays

    SciTech Connect

    Zhang, Le; Sigl, Günter; Weniger, Christoph; Maccione, Luca; Redondo, Javier

    2010-06-01

    High energy electrons and positrons from decaying dark matter can produce a significant flux of gamma rays by inverse Compton off low energy photons in the interstellar radiation field. This possibility is inevitably related with the dark matter interpretation of the observed PAMELA and FERMI excesses. The aim of this paper is providing a simple and universal method to constrain dark matter models which produce electrons and positrons in their decay by using the Fermi LAT gamma-ray observations in the energy range between 0.5 GeV and 300 GeV. We provide a set of universal response functions that, once convolved with a specific dark matter model produce the desired constraints. Our response functions contain all the astrophysical inputs such as the electron propagation in the galaxy, the dark matter profile, the gamma-ray fluxes of known origin, and the Fermi LAT data. We study the uncertainties in the determination of the response functions and apply them to place constraints on some specific dark matter decay models that can well fit the positron and electron fluxes observed by PAMELA and Fermi LAT. To this end we also take into account prompt radiation from the dark matter decay. We find that with the available data decaying dark matter cannot be excluded as source of the PAMELA positron excess.

  5. Constraining a halo model for cosmological neutral hydrogen

    NASA Astrophysics Data System (ADS)

    Padmanabhan, Hamsa; Refregier, Alexandre

    2017-02-01

    We describe a combined halo model to constrain the distribution of neutral hydrogen (H I) in the post-reionization universe. We combine constraints from the various probes of H I at different redshifts: the low-redshift 21-cm emission line surveys, intensity mapping experiments at intermediate redshifts, and the Damped Lyman-Alpha (DLA) observations at higher redshifts. We use a Markov Chain Monte Carlo approach to combine the observations and place constraints on the free parameters in the model. Our best-fitting model involves a relation between neutral hydrogen mass M_HI and halo mass M with a non-unit slope, and an upper and a lower cutoff. We find that the model fits all the observables but leads to an underprediction of the bias parameter of DLAs at z ∼ 2.3. We also find indications of a possible tension between the H I column density distribution and the mass function of H I-selected galaxies at z ∼ 0. We provide the central values of the parameters of the best-fitting model so derived. We also provide a fitting form for the derived evolution of the concentration parameter of H I in dark matter haloes, and discuss the implications for the redshift evolution of the H I-halo mass relation.

  6. Planning maximally smooth hand movements constrained to nonplanar workspaces.

    PubMed

    Liebermann, Dario G; Krasovsky, Tal; Berman, Sigal

    2008-11-01

    The article characterizes hand paths and speed profiles for movements performed in a nonplanar, 2-dimensional workspace (a hemisphere of constant curvature). The authors assessed endpoint kinematics (i.e., paths and speeds) under the minimum-jerk model assumptions and calculated minimal amplitude paths (geodesics) and the corresponding speed profiles. The authors also calculated hand speeds using the 2/3 power law. They then compared modeled results with the empirical observations. In all, 10 participants moved their hands forward and backward from a common starting position toward 3 targets located within a hemispheric workspace of small or large curvature. Comparisons of modeled observed differences using 2-way RM-ANOVAs showed that movement direction had no clear influence on hand kinetics (p < .05). Workspace curvature affected the hand paths, which seldom followed geodesic lines. Constraining the paths to different curvatures did not affect the hand speed profiles. Minimum-jerk speed profiles closely matched the observations and were superior to those predicted by 2/3 power law (p < .001). The authors conclude that speed and path cannot be unambiguously linked under the minimum-jerk assumption when individuals move the hand in a nonplanar 2-dimensional workspace. In such a case, the hands do not follow geodesic paths, but they preserve the speed profile, regardless of the geometric features of the workspace.
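
    For reference, the standard straight-line minimum-jerk profile, with its bell-shaped speed curve, that observed speed profiles are typically compared against can be written as below; on the hemispheric workspace the predicted minimal-amplitude paths are geodesics rather than straight lines, so this shows only the speed-profile side of the comparison, per axis and for scalar endpoints.

```python
import numpy as np

def minimum_jerk(x0, xf, T, n=100):
    """Minimum-jerk position and speed between scalar endpoints x0 and xf over duration T."""
    t = np.linspace(0.0, T, n)
    s = t / T
    pos = x0 + (xf - x0) * (10 * s**3 - 15 * s**4 + 6 * s**5)
    speed = (xf - x0) / T * (30 * s**2 - 60 * s**3 + 30 * s**4)   # bell-shaped speed profile
    return t, pos, speed
```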

  7. Constrained Markovian Dynamics of Random Graphs

    NASA Astrophysics Data System (ADS)

    Coolen, A. C. C.; de Martino, A.; Annibale, A.

    2009-09-01

    We introduce a statistical mechanics formalism for the study of constrained graph evolution as a Markovian stochastic process, in analogy with that available for spin systems, deriving its basic properties and highlighting the role of the `mobility' (the number of allowed moves for any given graph). As an application of the general theory we analyze the properties of degree-preserving Markov chains based on elementary edge switchings. We give an exact yet simple formula for the mobility in terms of the graph's adjacency matrix and its spectrum. This formula allows us to define acceptance probabilities for edge switchings, such that the Markov chains become controlled Glauber-type detailed balance processes, designed to evolve to any required invariant measure (representing the asymptotic frequencies with which the allowed graphs are visited during the process). As a corollary we also derive a condition in terms of simple degree statistics, sufficient to guarantee that, in the limit where the number of nodes diverges, even for state-independent acceptance probabilities of proposed moves the invariant measure of the process will be uniform. We test our theory on synthetic graphs and on realistic larger graphs as studied in cellular biology, showing explicitly that, for instances where the simple edge swap dynamics fails to converge to the uniform measure, a suitably modified Markov chain instead generates the correct phase space sampling.
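
    The baseline dynamics the Letter starts from — uniform double edge swaps preserving the degree sequence — can be sketched as below; as the abstract notes, with state-independent acceptance this chain samples uniformly only when the degree-statistics condition holds, and the corrected chain reweights moves using the mobility formula, which is omitted here.

```python
import random
import networkx as nx

def degree_preserving_chain(G, n_steps, seed=0):
    """Run a simple double-edge-swap Markov chain on a copy of G."""
    rng = random.Random(seed)
    G = G.copy()
    for _ in range(n_steps):
        (a, b), (c, d) = rng.sample(list(G.edges()), 2)
        # Propose rewiring {a-b, c-d} -> {a-d, c-b}; reject self-loops and multi-edges.
        if len({a, b, c, d}) == 4 and not (G.has_edge(a, d) or G.has_edge(c, b)):
            G.remove_edges_from([(a, b), (c, d)])
            G.add_edges_from([(a, d), (c, b)])
    return G

# Usage: randomize a synthetic graph while keeping every node's degree fixed.
G_rand = degree_preserving_chain(nx.gnp_random_graph(50, 0.1, seed=1), n_steps=10000)
```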

  8. Proximity Navigation of Highly Constrained Spacecraft

    NASA Technical Reports Server (NTRS)

    Scarritt, S.; Swartwout, M.

    2007-01-01

    Bandit is a 3-kg automated spacecraft in development at Washington University in St. Louis. Bandit's primary mission is to demonstrate proximity navigation, including docking, around a 25-kg student-built host spacecraft. However, because of extreme constraints in mass, power and volume, traditional sensing and actuation methods are not available. In particular, Bandit carries only 8 fixed-magnitude cold-gas thrusters to control its 6 DOF motion. Bandit lacks true inertial sensing, and the ability to sense position relative to the host has error bounds that approach the size of the Bandit itself. Some of the navigation problems are addressed through an extremely robust, error-tolerant soft dock. In addition, we have identified a control methodology that performs well in this constrained environment: behavior-based velocity potential functions, which use a minimum-seeking method similar to Lyapunov functions. We have also adapted the discrete Kalman filter for use on Bandit for position estimation and have developed a similar measurement vs. propagation weighting algorithm for attitude estimation. This paper provides an overview of Bandit and describes the control and estimation approach. Results using our 6DOF flight simulator are provided, demonstrating that these methods show promise for flight use.

  9. NUCLEI SEGMENTATION VIA SPARSITY CONSTRAINED CONVOLUTIONAL REGRESSION

    PubMed Central

    Zhou, Yin; Chang, Hang; Barner, Kenneth E.; Parvin, Bahram

    2017-01-01

    Automated profiling of nuclear architecture, in histology sections, can potentially help predict the clinical outcomes. However, the task is challenging as a result of nuclear pleomorphism and cellular states (e.g., cell fate, cell cycle), which are compounded by the batch effect (e.g., variations in fixation and staining). Present methods, for nuclear segmentation, are based on human-designed features that may not effectively capture intrinsic nuclear architecture. In this paper, we propose a novel approach, called sparsity constrained convolutional regression (SCCR), for nuclei segmentation. Specifically, given raw image patches and the corresponding annotated binary masks, our algorithm jointly learns a bank of convolutional filters and a sparse linear regressor, where the former is used for feature extraction, and the latter aims to produce a likelihood for each pixel being nuclear region or background. During classification, the pixel label is simply determined by a thresholding operation applied on the likelihood map. The method has been evaluated using the benchmark dataset collected from The Cancer Genome Atlas (TCGA). Experimental results demonstrate that our method outperforms traditional nuclei segmentation algorithms and is able to achieve competitive performance compared to the state-of-the-art algorithm built upon human-designed features with biological prior knowledge. PMID:28101301

  10. Constrained spheroids for prolonged hepatocyte culture.

    PubMed

    Tong, Wen Hao; Fang, Yu; Yan, Jie; Hong, Xin; Hari Singh, Nisha; Wang, Shu Rui; Nugraha, Bramasta; Xia, Lei; Fong, Eliza Li Shan; Iliescu, Ciprian; Yu, Hanry

    2016-02-01

    Liver-specific functions in primary hepatocytes can be maintained over extended duration in vitro using spheroid culture. However, the undesired loss of cells over time is still a major unaddressed problem, which consequently generates large variations in downstream assays such as drug screening. In static culture, the turbulence generated by medium change can cause spheroids to detach from the culture substrate. Under perfusion, the momentum generated by Stokes force similarly results in spheroid detachment. To overcome this problem, we developed a Constrained Spheroids (CS) culture system that immobilizes spheroids between a glass coverslip and an ultra-thin porous Parylene C membrane, both surface-modified with poly(ethylene glycol) and galactose ligands for optimum spheroid formation and maintenance. In this configuration, cell loss was minimized even when perfusion was introduced. When compared to the standard collagen sandwich model, hepatocytes cultured as CS under perfusion exhibited significantly enhanced hepatocyte functions such as urea secretion, and CYP1A1 and CYP3A2 metabolic activity. We propose the use of the CS culture as an improved culture platform to current hepatocyte spheroid-based culture systems.

  11. Pairwise constrained concept factorization for data representation.

    PubMed

    He, Yangcheng; Lu, Hongtao; Huang, Lei; Xie, Saining

    2014-04-01

    Concept factorization (CF) is a variant of non-negative matrix factorization (NMF). In CF, each concept is represented by a linear combination of data points, and each data point is represented by a linear combination of concepts. More specifically, each concept is represented by more than one data point with different weights, and each data point carries various weights called membership to represent their degrees belonging to that concept. However, CF is actually an unsupervised method without making use of prior information of the data. In this paper, we propose a novel semi-supervised concept factorization method, called Pairwise Constrained Concept Factorization (PCCF), which incorporates pairwise constraints into the CF framework. We expect that data points which have pairwise must-link constraints should have the same class label as much as possible, while data points with pairwise cannot-link constraints will have different class labels as much as possible. Due to the incorporation of the pairwise constraints, the learning quality of the CF has been significantly enhanced. Experimental results show the effectiveness of our proposed novel method in comparison to the state-of-the-art algorithms on several real world applications.

  12. Pressure compensated transducer system with constrained diaphragm

    NASA Astrophysics Data System (ADS)

    Percy, Joseph L.

    1992-08-01

    An acoustic source apparatus has an acoustic transducer that is enclosed in a substantially rigid and watertight enclosure to resist the pressure of water on the transducer and to seal the transducer from the water. The enclosure has an opening through which acoustic signals pass and over which is placed a resilient, expandable and substantially water-impermeable diaphragm. A net stiffens and strengthens the diaphragm as well as constrains the diaphragm from overexpansion or from migrating due to buoyancy forces. Pressurized gas, regulated at slightly above ambient pressure, is supplied to the enclosure and the diaphragm to compensate for underwater ambient pressures. Gas pressure regulated at above ambient pressure is used to selectively tune the pressure levels within the enclosure and diaphragm so that diaphragm resonance can be achieved. Controls are used to selectively fill, as well as vent the enclosure and diaphragm during system descent and ascent, respectively. A signal link is used to activate these controls and to provide the driving force for the acoustic transducer.

  13. Constrained Subjective Assessment of Student Learning

    NASA Astrophysics Data System (ADS)

    Saliu, Sokol

    2005-09-01

    Student learning is a complex incremental cognitive process; assessment needs to parallel this, reporting the results in similar terms. Application of fuzzy sets and logic to the criterion-referenced assessment of student learning is considered here. The constrained qualitative assessment (CQA) system was designed, and then applied in assessing a past course in microcomputer system design (MSD). CQA criteria were articulated in fuzzy terms and sets, and the assessment procedure was cast as a fuzzy inference rule base. An interactive graphic interface provided for transparent assessment, student "backwash," and support to the teacher when compiling the tests. Grade intervals, obtained from a departmental poll, were used to compile a fuzzy "grade" set. Assessment results were compared to those of a former standard method and to those of a modified version of it (but with fewer criteria). The three methods yielded similar results, supporting the application of CQA. The method improved assessment reliability by means of the consensus embedded in the fuzzy grade set, and improved assessment validity by integrating fuzzy criteria into the assessment procedure.

  14. Constrained Graph Optimization: Interdiction and Preservation Problems

    SciTech Connect

    Schild, Aaron V

    2012-07-30

    The maximum flow, shortest path, and maximum matching problems are a set of basic graph problems that are critical in theoretical computer science and applications. Constrained graph optimization, a variation of these basic graph problems involving modification of the underlying graph, is equally important but sometimes significantly harder. In particular, one can explore these optimization problems with additional cost constraints. In the preservation case, the optimizer has a budget to preserve vertices or edges of a graph, preventing them from being deleted. The optimizer wants to find the best set of preserved edges/vertices in which the cost constraints are satisfied and the basic graph problems are optimized. For example, in shortest path preservation, the optimizer wants to find a set of edges/vertices within which the shortest path between two predetermined points is smallest. In interdiction problems, one deletes vertices or edges from the graph with a particular cost in order to impede the basic graph problems as much as possible (for example, delete edges/vertices to maximize the shortest path between two predetermined vertices). Applications of preservation problems include optimal road maintenance, power grid maintenance, and job scheduling, while interdiction problems are related to drug trafficking prevention, network stability assessment, and counterterrorism. Computational hardness results are presented, along with heuristic methods for approximating solutions to the matching interdiction problem. Also, efficient algorithms are presented for special cases of graphs, including on planar graphs. The graphs in many of the listed applications are planar, so these algorithms have important practical implications.

  15. Scheduling Aircraft Landings under Constrained Position Shifting

    NASA Technical Reports Server (NTRS)

    Balakrishnan, Hamsa; Chandran, Bala

    2006-01-01

    Optimal scheduling of airport runway operations can play an important role in improving the safety and efficiency of the National Airspace System (NAS). Methods that compute the optimal landing sequence and landing times of aircraft must accommodate practical issues that affect the implementation of the schedule. One such practical consideration, known as Constrained Position Shifting (CPS), is the restriction that each aircraft must land within a pre-specified number of positions of its place in the First-Come-First-Served (FCFS) sequence. We consider the problem of scheduling landings of aircraft in a CPS environment in order to maximize runway throughput (minimize the completion time of the landing sequence), subject to operational constraints such as FAA-specified minimum inter-arrival spacing restrictions, precedence relationships among aircraft that arise either from airline preferences or air traffic control procedures that prevent overtaking, and time windows (representing possible control actions) during which each aircraft landing can occur. We present a Dynamic Programming-based approach that scales linearly in the number of aircraft, and describe our computational experience with a prototype implementation on realistic data for Denver International Airport.
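
    A brute-force illustration of the CPS constraint for a handful of aircraft is sketched below (the paper's dynamic program scales linearly in the number of aircraft; this exhaustive version does not, and it omits precedence relations and time windows). The separation matrix and earliest-landing times are made-up inputs.

```python
from itertools import permutations

def best_cps_sequence(n, k, sep, earliest):
    """Minimize completion time over landing sequences in which no aircraft is
    shifted more than k positions from its FCFS order (FCFS order = 0, 1, ..., n-1).
    sep[i][j]: required spacing (s) when aircraft i lands immediately before j."""
    best_seq, best_time = None, float("inf")
    for seq in permutations(range(n)):
        if any(abs(pos - ac) > k for pos, ac in enumerate(seq)):
            continue                                  # violates constrained position shifting
        t = earliest[seq[0]]
        for prev, cur in zip(seq, seq[1:]):
            t = max(t + sep[prev][cur], earliest[cur])
        if t < best_time:
            best_seq, best_time = seq, t
    return best_seq, best_time
```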

  16. Optimization of input-constrained systems

    NASA Astrophysics Data System (ADS)

    Malki, Suleyman; Spaanenburg, Lambert

    2009-05-01

    The computational demands of algorithms are rapidly growing. The naive implementation uses extended double-precision floating-point numbers and therefore has extreme difficulties in maintaining real-time performance. For fixed-point numbers, the value representation pushes in two directions (value range and step size) to set the application-dependent word size. In the general case, checking all combinations of all different values on all system inputs will easily become computationally infeasible. Checking corner cases only helps to reduce the combinatorial explosion, as checking for accuracy and precision to limit word size still remains a considerable effort. A range of evolutionary techniques have been tried where the sheer size of the problem withstands an extensive search. When the value range can be limited, the problem becomes tractable and a constructive approach becomes feasible. We propose an approach that is reminiscent of the Quine-McCluskey logic minimization procedure. Next to the conjunctive search popular in Boolean minimization, we investigate the disjunctive approach that starts from a presumed minimal word size. To eliminate the occurrence of anomalies, this still has to be checked for larger word sizes. The procedure has initially been implemented using Java and Matlab. We have applied the above procedure to feed-forward and to cellular neural networks (CNN) as typical examples of input-constrained systems. In the case of hole-filling by means of a CNN, we find that the 1461 different coefficient sets can be reduced to 360, each giving robust behaviour on 7-bit internal words.

  17. Constraining the oblateness of Kepler planets

    SciTech Connect

    Zhu, Wei; Huang, Chelsea X.; Zhou, George; Lin, D. N. C.

    2014-11-20

    We use Kepler short-cadence light curves to constrain the oblateness of planet candidates in the Kepler sample. The transits of rapidly rotating planets that are deformed in shape will lead to distortions in the ingress and egress of their light curves. We report the first tentative detection of an oblate planet outside the solar system, measuring an oblateness of 0.22 +0.11/−0.11 for the 18 M_J mass brown dwarf Kepler 39b (KOI 423.01). We also provide constraints on the oblateness of the planets (candidates) HAT-P-7b, KOI 686.01, and KOI 197.01 to be <0.067, <0.251, and <0.186, respectively. Using the Q' values from Jupiter and Saturn, we expect tidal synchronization for the spins of HAT-P-7b, KOI 686.01, and KOI 197.01, and for their rotational oblateness signatures to be undetectable in the current data. The potentially large oblateness of KOI 423.01 (Kepler 39b) suggests that the Q' value of the brown dwarf needs to be two orders of magnitude larger than that of the solar system gas giants to avoid being tidally spun down.

  18. Optimization of constrained density functional theory

    NASA Astrophysics Data System (ADS)

    O'Regan, David D.; Teobaldi, Gilberto

    2016-07-01

    Constrained density functional theory (cDFT) is a versatile electronic structure method that enables ground-state calculations to be performed subject to physical constraints. It thereby broadens their applicability and utility. Automated Lagrange multiplier optimization is necessary for multiple constraints to be applied efficiently in cDFT, for it to be used in tandem with geometry optimization, or with molecular dynamics. In order to facilitate this, we comprehensively develop the connection between cDFT energy derivatives and response functions, providing a rigorous assessment of the uniqueness and character of cDFT stationary points while accounting for electronic interactions and screening. In particular, we provide a nonperturbative proof that stable stationary points of linear density constraints occur only at energy maxima with respect to their Lagrange multipliers. We show that multiple solutions, hysteresis, and energy discontinuities may occur in cDFT. Expressions are derived, in terms of convenient by-products of cDFT optimization, for quantities such as the dielectric function and a condition number quantifying ill definition in multiple constraint cDFT.

  19. Distributed Constrained Optimization with Semicoordinate Transformations

    NASA Technical Reports Server (NTRS)

    Macready, William; Wolpert, David

    2006-01-01

    Recent work has shown how information theory extends conventional full-rationality game theory to allow bounded rational agents. The associated mathematical framework can be used to solve constrained optimization problems. This is done by translating the problem into an iterated game, where each agent controls a different variable of the problem, so that the joint probability distribution across the agents' moves gives an expected value of the objective function. The dynamics of the agents is designed to minimize a Lagrangian function of that joint distribution. Here we illustrate how the updating of the Lagrange parameters in the Lagrangian is a form of automated annealing, which focuses the joint distribution more and more tightly about the joint moves that optimize the objective function. We then investigate the use of "semicoordinate" variable transformations. These separate the joint state of the agents from the variables of the optimization problem, with the two connected by an onto mapping. We present experiments illustrating the ability of such transformations to facilitate optimization. We focus on the special kind of transformation in which the statistically independent states of the agents induce a mixture distribution over the optimization variables. Computer experiments illustrate this for k-sat constraint satisfaction problems and for unconstrained minimization of NK functions.

  20. Joint Chance-Constrained Dynamic Programming

    NASA Technical Reports Server (NTRS)

    Ono, Masahiro; Kuwata, Yoshiaki; Balaram, J. Bob

    2012-01-01

    This paper presents a novel dynamic programming algorithm with a joint chance constraint, which explicitly bounds the risk of failure in order to maintain the state within a specified feasible region. A joint chance constraint cannot be handled by existing constrained dynamic programming approaches since their application is limited to constraints in the same form as the cost function, that is, an expectation over a sum of one-stage costs. We overcome this challenge by reformulating the joint chance constraint into a constraint on an expectation over a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the primal variables can be optimized by a standard dynamic programming, while the dual variable is optimized by a root-finding algorithm that converges exponentially. Error bounds on the primal and dual objective values are rigorously derived. We demonstrate the algorithm on a path planning problem, as well as an optimal control problem for Mars entry, descent and landing. The simulations are conducted using real terrain data of Mars, with four million discrete states at each time step.

  1. String theory origin of constrained multiplets

    NASA Astrophysics Data System (ADS)

    Kallosh, Renata; Vercnocke, Bert; Wrase, Timm

    2016-09-01

    We study the non-linearly realized spontaneously broken supersymmetry of the (anti-)D3-brane action in type IIB string theory. The worldvolume fields are one vector A_μ, three complex scalars ϕ^i and four 4d fermions λ^0, λ^i. These transform, in addition to the more familiar N=4 linear supersymmetry, also under 16 spontaneously broken, non-linearly realized supersymmetries. We argue that the worldvolume fields can be packaged into the following constrained 4d non-linear N=1 multiplets: four chiral multiplets S, Y^i that satisfy S^2 = S Y^i = 0 and contain the worldvolume fermions λ^0 and λ^i; and four chiral multiplets W_α, H^i that satisfy $S W_\alpha = S \bar{D}_{\dot\alpha} \bar{H}^{\bar\imath} = 0$ and contain the vector A_μ and the scalars ϕ^i. We also discuss how placing an anti-D3-brane on top of intersecting O7-planes can lead to an orthogonal multiplet Φ that satisfies S(Φ - Φ̄) = 0, which is particularly interesting for inflationary cosmology.

  2. Constrained Supersymmetric Flipped SU(5) GUT Phenomenology

    SciTech Connect

    Ellis, John; Mustafayev, Azar; Olive, Keith A.; /Minnesota U., Theor. Phys. Inst. /Minnesota U. /Stanford U., Phys. Dept. /SLAC

    2011-08-12

    We explore the phenomenology of the minimal supersymmetric flipped SU(5) GUT model (CFSU(5)), whose soft supersymmetry-breaking (SSB) mass parameters are constrained to be universal at some input scale, M_in, above the GUT scale, M_GUT. We analyze the parameter space of CFSU(5) assuming that the lightest supersymmetric particle (LSP) provides the cosmological cold dark matter, paying careful attention to the matching of parameters at the GUT scale. We first display some specific examples of the evolutions of the SSB parameters that exhibit some generic features. Specifically, we note that the relationship between the masses of the lightest neutralino χ and the lighter stau τ̃_1 is sensitive to M_in, as is the relationship between m_χ and the masses of the heavier Higgs bosons A, H. For these reasons, prominent features in generic (m_1/2, m_0) planes such as coannihilation strips and rapid-annihilation funnels are also sensitive to M_in, as we illustrate for several cases with tan β = 10 and 55. However, these features do not necessarily disappear at large M_in, unlike the case in the minimal conventional SU(5) GUT. Our results are relatively insensitive to neutrino masses.

  3. Constraining inflation with future galaxy redshift surveys

    SciTech Connect

    Huang, Zhiqi; Vernizzi, Filippo; Verde, Licia E-mail: liciaverde@icc.ub.edu

    2012-04-01

    With future galaxy surveys, a huge number of Fourier modes of the distribution of the large scale structures in the Universe will become available. These modes are complementary to those of the CMB and can be used to set constraints on models of the early universe, such as inflation. Using a MCMC analysis, we compare the power of the CMB with that of the combination of CMB and galaxy survey data, to constrain the power spectrum of primordial fluctuations generated during inflation. We base our analysis on the Planck satellite and a spectroscopic redshift survey with configuration parameters close to those of the Euclid mission as examples. We first consider models of slow-roll inflation, and show that the inclusion of large scale structure data improves the constraints by nearly halving the error bars on the scalar spectral index and its running. If we attempt to reconstruct the inflationary single-field potential, a similar conclusion can be reached on the parameters characterizing the potential. We then study models with features in the power spectrum. In particular, we consider ringing features produced by a break in the potential and oscillations such as in axion monodromy. Adding large scale structures improves the constraints on features by more than a factor of two. In axion monodromy we show that there are oscillations with small amplitude and frequency in momentum space that are undetected by CMB alone but can be measured by including galaxy surveys in the analysis.

  4. Uncertain future soil carbon dynamics under global change predicted by models constrained by total carbon measurements.

    PubMed

    Luo, Zhongkui; Wang, Enli; Sun, Osbert J

    2017-01-23

    Pool-based carbon (C) models are widely applied to predict soil C dynamics under global change and to infer underlying mechanisms. However, it is unclear how credible the model-predicted C pool sizes, decay rates (k) and/or microbial C use efficiencies (e) are when only data on bulk total C are available for model constraining. Using observing system simulation experiments (OSSEs), we constrained a two-pool model with simulated datasets of total soil C dynamics under topical hypotheses on the responses of soil C dynamics to warming and elevated CO2 (i.e., global change scenarios). The results indicated that the model predicted large uncertainties in C pool size, k and e under all global change scenarios, making it difficult to correctly infer the presupposed "real" values of those parameters that were used to generate the simulated total soil C for constraining the model. Furthermore, the model using the constrained parameters generated divergent future soil C dynamics. Compared with the predictions using the presupposed real parameters (i.e., the real future C dynamics), the percentage uncertainty in 100-year predictions using the constrained parameters was up to 45%, depending on the global change scenario and the data availability for model constraining. Such great uncertainty was mainly due to the high collinearity among the model parameters. Using pool-based models, we argue that soil C pool size, k and/or e and their responses to global change have to be estimated explicitly and empirically, rather than through model fitting, in order to accurately predict C dynamics and infer underlying mechanisms. The OSSE approach provides a powerful way to identify data requirements for the new generation of model development and to test model performance.
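
    A hedged illustration of the collinearity problem with a toy two-pool model (invented numbers and standard SciPy tools, not the authors' model or OSSE machinery): fitting only noisy total C recovers the bulk trajectory well while leaving the individual pool sizes and decay rates poorly constrained.

        import numpy as np
        from scipy.optimize import curve_fit

        def total_carbon(t, c_fast0, c_slow0, k_fast, k_slow):
            """Total C of a two-pool, first-order decay model (toy: no inputs)."""
            return c_fast0 * np.exp(-k_fast * t) + c_slow0 * np.exp(-k_slow * t)

        rng = np.random.default_rng(0)
        t = np.linspace(0, 50, 26)                                  # years of observations
        true = (20.0, 80.0, 0.30, 0.010)                            # assumed "real" parameters
        obs = total_carbon(t, *true) + rng.normal(0, 1.0, t.size)   # ~1% noise on ~100 units

        popt, pcov = curve_fit(total_carbon, t, obs, p0=(10, 90, 0.5, 0.02),
                               bounds=(0, [200, 200, 5, 1]))
        perr = np.sqrt(np.diag(pcov))
        for name, p, e, tr in zip(("c_fast0", "c_slow0", "k_fast", "k_slow"), popt, perr, true):
            print(f"{name}: fit {p:7.3f} +/- {e:7.3f}   (true {tr})")
        # The pool-level uncertainties are far larger than the noise on total C, and
        # 100-year extrapolations with parameters drawn from pcov diverge accordingly.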

  5. Regularized Partial and/or Constrained Redundancy Analysis

    ERIC Educational Resources Information Center

    Takane, Yoshio; Jung, Sunho

    2008-01-01

    Methods of incorporating a ridge type of regularization into partial redundancy analysis (PRA), constrained redundancy analysis (CRA), and partial and constrained redundancy analysis (PCRA) were discussed. The usefulness of ridge estimation in reducing mean square error (MSE) has been recognized in multiple regression analysis for some time,…

  6. The Pendulum: From Constrained Fall to the Concept of Potential

    ERIC Educational Resources Information Center

    Bevilacqua, Fabio; Falomo, Lidia; Fregonese, Lucio; Giannetto, Enrico; Giudice, Franco; Mascheretti, Paolo

    2006-01-01

    Kuhn underlined the relevance of Galileo's gestalt switch in the interpretation of a swinging body from constrained fall to time metre. But the new interpretation did not eliminate the older one. The constrained fall, both in the motion of pendulums and along inclined planes, led Galileo to the law of free fall. Experimenting with physical…

  7. The constrained reinitialization equation for level set methods

    NASA Astrophysics Data System (ADS)

    Hartmann, Daniel; Meinke, Matthias; Schröder, Wolfgang

    2010-03-01

    Based on the constrained reinitialization scheme [D. Hartmann, M. Meinke, W. Schröder, Differential equation based constrained reinitialization for level set methods, J. Comput. Phys. 227 (2008) 6821-6845] a new constrained reinitialization equation incorporating a forcing term is introduced. Two formulations for high-order constrained reinitialization (HCR) are presented combining the simplicity and generality of the original reinitialization equation [M. Sussman, P. Smereka, S. Osher, A level set approach for computing solutions to incompressible two-phase flow, J. Comput. Phys. 114 (1994) 146-159] in terms of high-order standard discretization and the accuracy of the constrained reinitialization scheme in terms of interface displacement. The novel HCR schemes represent simple extensions of standard implementations of the original reinitialization equation. The results evidence the significantly increased accuracy and robustness of the novel schemes.

  8. Constraining structural models of stellar helium cores using the pulsations of Feige 48

    NASA Astrophysics Data System (ADS)

    Reed, Mike; Jeffery, C. Simon; Telting, John; Quick, Breanna

    2014-02-01

    Asteroseismology is the art of using stellar pulsations to discern a star's detailed structure and evolutionary history. When many stars of similar structure and/or evolution can be studied, the results can be extremely powerful; examples include white dwarf and red giant seismology. However, the key to these successes is twofold: observed pulsation frequencies must first be identified with spherical harmonics (modes) and mature models must exist for comparison. For subdwarf B (sdB) stars, Kepler observations have allowed progress with the former, but have indicated weaknesses in the latter. We propose using time-resolved spectroscopy combined with multicolor photometry to identify pulsation modes and constrain structure models. We propose to re-observe Feige 48 (KY UMa). We were allocated time during 2010A, but inclement weather prevented fully exploiting the pulsations. Yet those data provided surprising clues. Feige 48 is an important sdB in a short-period binary, with constrained inclination and some constraints on three pulsation modes. Our proposed observations will constrain both the star and the binary system and provide calibration for models. This provides an arsenal of seismic tools for testing structure and evolution models of Feige 48 and other, previously observed, sdB stars.

  9. Constraining the source of mantle plumes

    NASA Astrophysics Data System (ADS)

    Cagney, N.; Crameri, F.; Newsome, W. H.; Lithgow-Bertelloni, C.; Cotel, A.; Hart, S. R.; Whitehead, J. A.

    2016-02-01

    In order to link the geochemical signature of hot spot basalts to Earth's deep interior, it is first necessary to understand how plumes sample different regions of the mantle. Here, we investigate the relative amounts of deep and shallow mantle material that are entrained by an ascending plume and constrain its source region. The plumes are generated in a viscous syrup using an isolated heater for a range of Rayleigh numbers. The velocity fields are measured using stereoscopic Particle-Image Velocimetry, and the concept of the 'vortex ring bubble' is used to provide an objective definition of the plume geometry. Using this plume geometry, the plume composition can be analysed in terms of the proportion of material that has been entrained from different depths. We show that the plume composition can be well described using a simple empirical relationship, which depends only on a single parameter, the sampling coefficient, sc. High-sc plumes are composed of material which originated from very deep in the fluid domain, while low-sc plumes contain material entrained from a range of depths. The analysis is also used to show that the geometry of the plume can be described using a similarity solution, in agreement with previous studies. Finally, numerical simulations are used to vary both the Rayleigh number and viscosity contrast independently. The simulations allow us to predict the value of the sampling coefficient for mantle plumes; we find that as a plume reaches the lithosphere, 90% of its composition has been derived from the lowermost 260-750 km in the mantle, and negligible amounts are derived from the shallow half of the lower mantle. This result implies that isotope geochemistry cannot provide direct information about this unsampled region, and that the various known geochemical reservoirs must lie in the deepest few hundred kilometres of the mantle.

  10. Ductile failure of a constrained metal foil

    NASA Astrophysics Data System (ADS)

    Varias, A. G.; Suo, Z.; Shih, C. F.

    A metal foil bonded between stiff ceramic blocks may fail in a variety of ways, including de-adhesion of interfaces, cracking in the ceramics and ductile rupture of the metal. If the interface bond is strong enough to allow the foil to undergo substantial plastic deformation, dimples are usually present on fracture surfaces and the nominal fracture energy is enhanced. Ductile fracture mechanisms responsible for such morphology include (i) growth of near-tip voids nucleated at second-phase particles and/or interface pores, (ii) cavitation and (iii) interfacial debonding at the site of maximum stress which develops at distances of several foil thicknesses ahead of the crack tip. For a crack in a low to moderately hardening bulk metal, it is known that the maximum mean stress which develops at a distance of several crack openings ahead of the tip does not exceed about three times the yield stress. In contrast, the maximum mean stress that develops at several foil thicknesses ahead of the crack tip in a constrained metal foil can increase continuously with the applied load. Mean stress and interfacial traction of about four to six times the yield of the metal foil can trigger cavitation and/or interfacial debonding. The mechanical fields which bear on the competition between failure mechanisms are obtained by a large deformation finite element analysis. Effort is made to formulate predictive criteria indicating, for a given material system, which one of the several mechanisms operates and the relevant parameters that govern the nominal fracture work. The shielding of the crack tip in the context of ductile adhesive joints, due to the non-proportional deformation in a region of the order of the foil thickness, is also discussed.

  11. Quasi-optical constrained lens amplifiers

    NASA Astrophysics Data System (ADS)

    Schoenberg, Jon S.

    1995-09-01

    A major goal in the field of quasi-optics is to increase the power available from solid state sources by combining the power of individual devices in free space, as demonstrated with grid oscillators and grid amplifiers. Grid amplifiers and most amplifier arrays require a plane wave feed, provided by a far field source or at the beam waist of a dielectric lens pair. These feed approaches add considerable loss and size, which is usually greater than the quasi-optical amplifier gain. In addition, grid amplifiers require external polarizers for stability, further increasing size and complexity. This thesis describes using constrained lens theory in the design of quasi-optical amplifier arrays with a focal point feed, improving the power coupling between the feed and the amplifier for increased gain. Feed and aperture arrays of elements, input/output isolation and stability, amplifier circuitry, delay lines and bias distribution are all contained on a single planar substrate, making monolithic circuit integration possible. Measured results of X-band transmission lenses and a low noise receive lens are presented, including absolute power gain up to 13 dB, noise figure as low as 1.7 dB, beam scanning to +/-30 deg, beam forming and beam switching of multiple sources, and multiple level quasi-optical power combining. The design and performance of millimeter wave power combining amplifier arrays is described, including a Ka-band hybrid array with 1 watt output power, and a V-band 36-element monolithic array with a 5 dB on/off ratio.

  12. Titan's interior constrained from its obliquity and tidal Love number

    NASA Astrophysics Data System (ADS)

    Baland, Rose-Marie; Coyette, Alexis; Yseboodt, Marie; Beuthe, Mikael; Van Hoolst, Tim

    2016-04-01

    In the last few years, the Cassini-Huygens mission to the Saturn system has measured the shape, the obliquity, the static gravity field, and the tidally induced gravity field of Titan. The large values of the obliquity and of the k2 Love number both point to the existence of a global internal ocean below the icy crust. In order to constrain interior models of Titan, we combine the above-mentioned data as follows: (1) we build four-layer density profiles consistent with Titan's bulk properties; (2) we determine the corresponding internal flattening compatible with the observed gravity and topography; (3) we compute the obliquity and tidal Love number for each interior model; (4) we compare these predictions with the observations. Previously, we found that Titan is more differentiated than expected (assuming hydrostatic equilibrium), and that its ocean is dense and less than 100 km thick. Here, we revisit these conclusions using a more complete Cassini state model, including: (1) gravitational and pressure torques due to internal tidal deformations; (2) atmosphere/lakes-surface exchange of angular momentum; (3) inertial torque due to Poincaré flow. We also adopt faster methods to evaluate Love numbers (i.e. the membrane approach) in order to explore a larger parameter space.

  13. Constraining Dark Matter Through the Study of Merging Galaxy Clusters

    NASA Astrophysics Data System (ADS)

    Dawson, William Anthony

    2013-03-01

    gravitational lensing observations to map and weigh the mass (i.e., dark matter, which comprises ~85% of the mass) of the cluster, Sunyaev-Zel'dovich effect and X-ray observations to map and quantify the intracluster gas, and finally radio observations to search for associated radio relics, which, had they been observed, would have helped constrain the properties of the merger. Using this information in conjunction with a Monte Carlo analysis model I quantify the dynamic properties of the merger, necessary to properly interpret constraints on the SIDM cross-section. I compare the locations of the galaxies, dark matter and gas to constrain the SIDM cross-section. This dissertation presents this work. Findings: We find that the Musket Ball is a merger with total mass of 4.8^{+3.2}_{-1.5} × 10^14 Msun. However, the dynamic analysis shows that the Musket Ball is being observed 1.1^{+1.3}_{-0.4} Gyr after first pass-through and is much further progressed in its merger process than previously identified dissociative mergers (for example, it is 3.4^{+3.8}_{-1.4} times further progressed than the Bullet Cluster). By observing that the dark matter is significantly offset from the gas we are able to place an upper limit on the dark matter cross-section of σ_SIDM/m_DM < 8 cm^2 g^-1. However, we find that the galaxies appear to be leading the weak lensing (WL) mass distribution by 20.5" (129 kpc at z=0.53) in the southern subcluster, which might be expected to occur if dark matter self-interacts. Contrary to this finding, though, the WL mass centroid appears to be leading the galaxy centroid by 7.4" (47 kpc at z=0.53) in the northern subcluster. Conclusion: The southern offset alone suggests that dark matter self-interacts with ~83% confidence. However, when we account for the observation that the galaxy centroid appears to trail the WL centroid in the north, the confidence falls to ~55%. While the SIDM scenario is slightly preferred over the CDM scenario it is not significantly so. Perspectives: The galaxy

  14. Constraining the Enceladus plume using numerical simulation and Cassini data

    NASA Astrophysics Data System (ADS)

    Yeoh, Seng Keat; Li, Zheng; Goldstein, David B.; Varghese, Philip L.; Levin, Deborah A.; Trafton, Laurence M.

    2017-01-01

    Since its discovery, the Enceladus plume has been subjected to intense study due to the major effects that it has on the Saturnian system and the window that it provides into the interior of Enceladus. However, several questions remain and we attempt to answer some of them in this work. In particular, we aim to constrain the H2O production rate from the plume, evaluate the relative importance of the jets and the distributed sources along the Tiger Stripes, and make inferences about the source of the plume by accurately modeling the plume and constraining the model using the Cassini INMS and UVIS data. This is an extension of a previous work (Yeoh, S.K., et al. [2015] Icarus, 253, 205-222) in which we only modeled the collisional part of the Enceladus plume and studied its important physical processes. In this work, we propagate the plume farther into space where the flow has become free-molecular and the Cassini INMS and UVIS data were sampled. Then, we fit this part of the plume to the INMS H2O density distributions sampled along the E3, E5 and E7 trajectories and also compare some of the fit results with the UVIS measurements of the plume optical depth collected during the solar occultation observation on 18 May 2010. We consider several vent conditions and source configurations for the plume. By constraining our model using the INMS and UVIS data, we estimate H2O production rates of several hundred kg/s: 400-500 kg/s during the E3 and E7 flybys and ∼900 kg/s during the E5 flyby. These values agree with other estimates and are consistent with the observed temporal variability of the plume over the orbital period of Enceladus (Hedman, M.M., et al. [2013] Nature, 500, 182-184). In addition, we determine that one of the Tiger Stripes, Cairo, exhibits a local temporal variability consistent with the observed overall temporal variability of the plume. We also find that the distributed sources along the Tiger Stripes are likely dominant while the jets provide a

  15. Constraining depth of anisotropy in the Amazon region (Northern Brasil)

    NASA Astrophysics Data System (ADS)

    Bianchi, Irene; Willy Corrêa Rosa, João; Bokelmann, Götz

    2014-05-01

    Seismic data recorded between November 2009 and September 2013 at the permanent station PTGA of the Brazilian seismic network were used to constrain the depth of anisotropy in the lithosphere beneath the station. 90 receiver functions (RF) have been computed, covering the backazimuthal directions from 0° to 180°. Both radial (R) and transverse (T) components of the RF contain useful information about the subsurface structure. The isotropic part of the seismic velocity profile at depth mainly affects the R-RF component, while anisotropy and dipping structures produce P-to-S conversion recorded on the T-RF component (Levin and Park, 1998; Savage, 1998). The incoming (radially polarized) S waves, when passing through an anisotropic crust, split and part of their energy is projected onto the transverse component. The anisotropy symmetry orientations (Φ) can be estimated by the polarity change of the observed phases. The arrival times of the phases are related to the depth of the conversion. Depth and Φ are estimated by isolating phases at certain arrival times. SKS shear-wave splitting results from previous studies in this area (Krüger et al., 2002, Rosa et al., 2014) suggest the presence of anisotropy in the mantle with orientation of the fast splitting axis (about E-W) following major deep tectonic structures. The observed splitting orientation correlates well with the current South America plate motion (i.e. relative to mesosphere), and with observed aeromagnetic trends. This similarity leaves open the possibility of a linkage between the upper mantle fabric imaged by shear wave splitting analysis and the lower crustal structure imaged by aeromagnetometry. In this study we unravel, from RF data, two layers in which anisotropy concentrates, i.e. the lower crust and the upper mantle. Lower crustal and upper mantle anisotropy retrieved by RFs give some new hints in order to interpret the previously observed anisotropic orientations from SKS and the aeromagnetic anomalies.

  16. Constraining canopy biophysical simulations with MODIS reflectance data

    NASA Astrophysics Data System (ADS)

    Drewry, D. T.; Duveiller, G.

    2013-05-01

    Modern vegetation models incorporate ecophysiological details that allow for accurate estimates of carbon dioxide uptake, water use and energy exchange, but require knowledge of dynamic structural and biochemical traits. Variations in these traits are controlled by genetic factors as well as growth stage and nutrient and moisture availability, making them difficult to predict and prone to significant error. Here we explore the use of MODIS optical reflectance data for constraining key canopy- and leaf-level traits required by forward biophysical models. A multi-objective optimization algorithm is used to invert the PROSAIL canopy radiation transfer model, which accounts for the effects of leaf-level optical properties, foliage distribution and orientation on canopy reflectance across the optical range. Inversions are conducted for several growing seasons for both soybean and maize at several sites in the Central US agro-ecosystem. These inversions provide estimates of seasonal variations, and associated uncertainty, of variables such as leaf area index (LAI) that are then used as inputs into the MLCan biophysical model to conduct forward simulations. MLCan characterizes the ecophysiological functioning of a plant canopy at a half-hourly timestep, and has been rigorously validated for both C3 and C4 crops against observations of canopy CO2 uptake, evapotranspiration and sensible heat exchange across a wide range of meteorological conditions. The inversion-derived canopy properties are used to examine the ability of MODIS data to characterize seasonal variations in canopy properties in the context of a detailed forward canopy biophysical model, and the uncertainty induced in forward model estimates as a function of the uncertainty in the inverted parameters. Special care is taken to ensure that the satellite observations match adequately, in both time and space, with the coupled model simulations. To do so, daily MODIS observations are used and a validated model of
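
    A hedged sketch of the inversion step using a deliberately tiny two-band canopy model in place of PROSAIL (all parameter values are invented): leaf area index is retrieved by least squares from red/NIR reflectance for a sequence of acquisition dates, yielding the kind of seasonal LAI trajectory that is then passed to the forward biophysical model.

        import numpy as np
        from scipy.optimize import least_squares

        SOIL, LEAF = np.array([0.10, 0.25]), np.array([0.05, 0.45])   # red, NIR (assumed)
        K = 0.5                                                       # assumed extinction coefficient

        def toy_reflectance(lai):
            gap = np.exp(-K * lai)                    # Beer-Lambert canopy gap fraction
            return gap * SOIL + (1.0 - gap) * LEAF

        def retrieve_lai(obs):
            fit = least_squares(lambda p: toy_reflectance(p[0]) - obs, x0=[1.0],
                                bounds=([0.0], [8.0]))
            return fit.x[0]

        # synthetic "MODIS-like" red/NIR observations for three dates, with noise
        rng = np.random.default_rng(1)
        true_lai = [0.5, 3.0, 5.5]
        obs_series = [toy_reflectance(l) + rng.normal(0, 0.005, 2) for l in true_lai]
        print([round(retrieve_lai(o), 2) for o in obs_series], "vs true", true_lai)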

  17. Constraining Anthropogenic and Biogenic Emissions Using Chemical Ionization Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Spencer, Kathleen M.

    Numerous gas-phase anthropogenic and biogenic compounds are emitted into the atmosphere. These gases undergo oxidation to form other gas-phase species and particulate matter. Whether directly or indirectly, primary pollutants, secondary gas-phase products, and particulate matter all pose health and environmental risks. In this work, ambient measurements conducted using chemical ionization mass spectrometry are used as a tool for investigating regional air quality. Ambient measurements of peroxynitric acid (HO2NO2) were conducted in Mexico City. A method of inferring the rate of ozone production, PO3, is developed based on observations of HO2NO2, NO, and NO2. Comparison of this observationally based PO3 to a highly constrained photochemical box model indicates that regulations aimed at reducing ozone levels in Mexico City by reducing NOx concentrations may be effective at higher NOx levels than predicted using accepted photochemistry. Measurements of SO2 and particulate sulfate were conducted over the Los Angeles basin in 2008 and are compared to measurements made in 2002. A large decrease in SO2 concentration and a change in spatial distribution are observed. Nevertheless, only a modest reduction in sulfate concentration is observed at ground sites within the basin. Possible explanations for these trends are investigated. Two techniques, single and triple quadrupole chemical ionization mass spectrometry, were used to quantify ambient concentrations of biogenic oxidation products, hydroxyacetone and glycolaldehyde. The use of these techniques demonstrates the advantage of triple quadrupole mass spectrometry for separation of mass analogues, provided the collision-induced daughter ions are sufficiently distinct. Enhancement ratios of hydroxyacetone and glycolaldehyde in Californian biomass burning plumes are presented as are concentrations of these compounds at a rural ground site downwind of Sacramento.

  18. Constraining Cometary Crystal Shapes from IR Spectral Features

    NASA Technical Reports Server (NTRS)

    Wooden, Diane H.; Lindsay, Sean; Harker, David E.; Kelley, Michael S. P.; Woodward, Charles E.; Murphy, James Richard

    2013-01-01

    A major challenge in deriving the silicate mineralogy of comets is ascertaining how the anisotropic nature of forsterite crystals affects the spectral features' wavelength, relative intensity, and asymmetry. Forsterite features are identified in cometary comae near 10, 11.05-11.2, 16, 19, 23.5, 27.5 and 33 microns [1-10], so accurate models for forsterite's absorption efficiency (Q_abs) are a primary requirement to compute IR spectral energy distributions (SEDs, λF_λ vs. λ) and constrain the silicate mineralogy of comets. Forsterite is an anisotropic crystal, with three crystallographic axes with distinct indices of refraction for the a-, b-, and c-axis. The shape of a forsterite crystal significantly affects its spectral features [13-16]. We need models that account for crystal shape. The IR absorption efficiencies of forsterite are computed using the discrete dipole approximation (DDA) code DDSCAT [11,12]. Starting from a fiducial crystal shape of a cube, we systematically elongate/reduce one of the crystallographic axes. Also, we elongate/reduce one axis while the lengths of the other two axes are slightly asymmetric (0.8:1.2). The most significant grain shape characteristic that affects the crystalline spectral features is the relative lengths of the crystallographic axes. The second significant grain shape characteristic is breaking the symmetry of all three axes [17]. Synthetic spectral energy distributions using seven crystal shape classes [17] are fit to the observed SED of comet C/1995 O1 (Hale-Bopp). The Hale-Bopp crystalline residual better matches equant, b-platelets, c-platelets, and b-columns spectral shape classes, while a-platelets, a-columns and c-columns worsen the spectral fits. Forsterite condensation and partial evaporation experiments demonstrate that environmental temperature and grain shape are connected [18-20]. Thus, grain shape is a potential probe for protoplanetary disk temperatures where the cometary crystalline

  19. Constraining dark energy from the abundance of weak gravitational lenses

    NASA Astrophysics Data System (ADS)

    Weinberg, Nevin N.; Kamionkowski, Marc

    2003-05-01

    We examine the prospect of using the observed abundance of weak gravitational lenses to constrain the equation-of-state parameter w=p/ρ of dark energy. Dark energy modifies the distance-redshift relation, the amplitude of the matter power spectrum, and the rate of structure growth. As a result, it affects the efficiency with which dark-matter concentrations produce detectable weak-lensing signals. Here we solve the spherical-collapse model with dark energy, clarifying some ambiguities found in the literature. We also provide fitting formulae for the non-linear overdensity at virialization and the linear-theory overdensity at collapse. We then compute the variation in the predicted weak-lens abundance with w. We find that the predicted redshift distribution and number count of weak lenses are highly degenerate in w and the present matter density Ω0. If we fix Ω0, the number count of weak lenses for w=-2/3 is a factor of ~2 smaller than for the Λ cold dark matter (CDM) model w=-1. However, if we allow Ω0 to vary with w such that the amplitude of the matter power spectrum as measured by the Cosmic Background Explorer (COBE) matches that obtained from the X-ray cluster abundance, the decrease in the predicted lens abundance is less than 25 per cent for -1 ≤ w < -0.4. We show that a more promising method for constraining dark energy - one that is largely unaffected by the Ω0-w degeneracy as well as uncertainties in observational noise - is to compare the relative abundance of virialized X-ray lensing clusters with the abundance of non-virialized, X-ray underluminous, lensing haloes. For aperture sizes of ~15 arcmin, the predicted ratio of the non-virialized to virialized lenses is greater than 40 per cent and varies by ~20 per cent between w=-1 and -0.6. Overall, we find that, if all other weak-lensing parameters are fixed, a survey must cover at least ~40 deg^2 in order for the weak-lens number count to differentiate a ΛCDM cosmology from a dark-energy model with w

  20. Constraining crustal anisotropy: The anisotropic H-κ stacking technique

    NASA Astrophysics Data System (ADS)

    Hammond, James

    2014-05-01

    Measuring anisotropy in the crust and mantle is commonly performed to make inferences on crust/upper mantle deformation, tectonic history or the presence of fluids. However, separating the contribution of the crust and mantle to the anisotropic signature remains a challenge. This is because common seismic techniques to determine anisotropy (e.g., SKS splitting, surface waves) lack the resolution to distinguish between the two, particularly in regions where deep crustal earthquakes are lacking. Receiver functions offer the chance to determine anisotropy in the crust alone, offering both the depth resolution that shear-wave splitting lacks and the lateral resolution that surface waves are unable to provide. Here I present a new anisotropic H-κ stacking technique which constrains anisotropy in the crust. I show that in a medium with horizontally transverse isotropy a strong variation in κ (VP-to-VS ratio) with back azimuth is present which characterises the anisotropic medium. In a vertically transverse isotropic medium no variation in κ with back azimuth is observed, but κ is increased across all back azimuths. While these results show that estimates of κ are more difficult to relate to composition than previously thought, they offer the opportunity to constrain anisotropy in the crust. Based on these observations I develop a new anisotropic H-κ stacking technique which inverts H-κ data for anisotropy. I apply these new techniques to data from the Afar Depression, Ethiopia and extend the technique to invert for melt-induced anisotropy solving for melt fraction, aspect ratio and orientation of melt inclusions. I show that melt is stored in interconnected stacked sills in the lower crust, which likely supply the recent volcanic eruptions and dike intrusions. The crustal anisotropic signal can explain much of the SKS-splitting results, suggesting minimal influence from the mantle. These results show that it is essential to consider anisotropy when performing H
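
    A hedged sketch of conventional (isotropic) H-κ stacking, the building block that the anisotropic extension repeats in back-azimuth bins: for each trial crustal thickness H and VP/VS ratio κ, receiver-function amplitudes are summed at the predicted delays of the Ps conversion and its crustal multiples, and the stack maximum picks the preferred (H, κ). The weights, assumed VP and synthetic trace below are illustrative.

        import numpy as np

        VP = 6.3                                              # km/s, assumed average crustal Vp
        WEIGHTS = (0.5, 0.3, -0.2)                            # Ps, PpPs, PpSs+PsPs stacking weights

        def delay_times(H, kappa, p):
            """Predicted delays (s) of Ps, PpPs and PpSs+PsPs for ray parameter p (s/km)."""
            vs = VP / kappa
            qs, qp = np.sqrt(vs**-2 - p**2), np.sqrt(VP**-2 - p**2)
            return H * (qs - qp), H * (qs + qp), 2.0 * H * qs

        def hk_stack(rf_times, rf_traces, ray_params, H_grid, k_grid):
            """Sum weighted receiver-function amplitudes on an (H, kappa) grid."""
            S = np.zeros((H_grid.size, k_grid.size))
            for trace, p in zip(rf_traces, ray_params):
                for i, H in enumerate(H_grid):
                    for j, k in enumerate(k_grid):
                        for w, tau in zip(WEIGHTS, delay_times(H, k, p)):
                            S[i, j] += w * np.interp(tau, rf_times, trace)
            return S

        # tiny synthetic test: pulses placed at the delays predicted by H = 35 km, kappa = 1.78
        t, p = np.linspace(0, 30, 3001), 0.06
        trace = np.zeros_like(t)
        for w, tau in zip(WEIGHTS, delay_times(35.0, 1.78, p)):
            trace += np.sign(w) * np.exp(-0.5 * ((t - tau) / 0.2) ** 2)
        H_grid, k_grid = np.linspace(25, 45, 81), np.linspace(1.6, 1.9, 61)
        S = hk_stack(t, [trace], [p], H_grid, k_grid)
        i, j = np.unravel_index(S.argmax(), S.shape)
        print("best H = %.1f km, kappa = %.3f" % (H_grid[i], k_grid[j]))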

  1. Explaining Constrains Causal Learning in Childhood

    ERIC Educational Resources Information Center

    Walker, Caren M.; Lombrozo, Tania; Williams, Joseph J.; Rafferty, Anna N.; Gopnik, Alison

    2017-01-01

    Three experiments investigate how self-generated explanation influences children's causal learning. Five-year-olds (N = 114) observed data consistent with two hypotheses and were prompted to explain or to report each observation. In Study 1, when making novel generalizations, explainers were more likely to favor the hypothesis that accounted for…

  2. How We Can Constrain Aerosol Type Globally

    NASA Technical Reports Server (NTRS)

    Kahn, Ralph

    2016-01-01

    In addition to aerosol number concentration, aerosol size and composition are essential attributes needed to adequately represent aerosol-cloud interactions (ACI) in models. As the nature of ACI varies enormously with environmental conditions, global-scale constraints on particle properties are indicated. And although advanced satellite remote-sensing instruments can provide categorical aerosol-type classification globally, detailed particle microphysical properties are unobtainable from space with currently available or planned technologies. For the foreseeable future, only in situ measurements can constrain particle properties at the level-of-detail required for ACI, as well as to reduce uncertainties in regional-to-global-scale direct aerosol radiative forcing (DARF). The limitation of in situ measurements for this application is sampling. However, there is a simplifying factor: for a given aerosol source, in a given season, particle microphysical properties tend to be repeatable, even if the amount varies from day-to-day and year-to-year, because the physical nature of the particles is determined primarily by the regional environment. So, if the PDFs of particle properties from major aerosol sources can be adequately characterized, they can be used to add the missing microphysical detail to the better-sampled satellite aerosol-type maps. This calls for Systematic Aircraft Measurements to Characterize Aerosol Air Masses (SAM-CAAM). We are defining a relatively modest and readily deployable, operational aircraft payload capable of measuring key aerosol absorption, scattering, and chemical properties in situ, and a program for characterizing statistically these properties for the major aerosol air mass types, at a level-of-detail unobtainable from space. It is aimed at: (1) enhancing satellite aerosol-type retrieval products with better aerosol climatology assumptions, and (2) improving the translation between satellite-retrieved aerosol optical properties and

  3. Densification behavior of ceramic and crystallizable glass materials constrained on a rigid substrate

    NASA Astrophysics Data System (ADS)

    Calata, Jesus N.

    2005-11-01

    Constrained sintering is an important process for many applications. The sintering process almost always involves some form of constraint, both internal and external, such as rigid particles, reinforcing fibers and substrates to which the porous body adheres. The densification behavior of zinc oxide and cordierite-base crystallizable glass constrained on a rigid substrate was studied to add to the understanding of the behavior of various materials undergoing sintering when subjected to external substrate constraint. Porous ZnO films were isothermally sintered at temperatures between 900°C and 1050°C. The results showed that the densification of films constrained on substrates is severely reduced. This was evident in the sintered microstructures where the particles are joined together by narrower necks forming a more open structure, instead of the equiaxed grains with wide grain boundaries observed in the freestanding films. The calculated activation energies of densification were also different. For the density range of 60 to 64%, the constrained film had an activation energy of 391 +/- 34 kJ/mole compared to 242 +/- 21 kJ/mole for the freestanding film, indicating a change in the densification mechanism. In-plane stresses were observed during the sintering of the constrained films. Yielding of the films, in which the stresses dropped slightly or remained unchanged, occurred at relative densities below 60% before the stresses climbed linearly with increasing density followed by a gradual relaxation. A substantial amount of the stresses remained after cooling. Free and constrained films of the cordierite-base crystallizable glass (glass-ceramic) were sintered between 900°C and 1000°C. The substrate constraint did not have a significant effect on the densification rate but the constrained films eventually underwent expansion. Calculations of the densification activation energy showed that, on average, it was close to 1077 kJ/mole, the activation energy of the glass

  4. Folding of Small Proteins Using Constrained Molecular Dynamics

    PubMed Central

    Balaraman, Gouthaman S.; Park, In-Hee; Jain, Abhinandan; Vaidehi, Nagarajan

    2011-01-01

    The focus of this paper is to examine whether conformational search using the constrained molecular dynamics (MD) method is more enhanced and enriched towards “native-like” structures compared to all-atom MD, using protein folding as a model problem. Constrained MD methods provide an alternate MD tool for protein structure prediction and structure refinement. It is computationally expensive to perform all-atom simulations of protein folding because the processes occur on a timescale of microseconds. Compared to the all-atom MD simulation, constrained MD methods have the advantage that stable dynamics can be achieved for larger time steps and the number of degrees of freedom is an order of magnitude smaller, leading to a decrease in computational cost. We have developed a generalized constrained MD method that allows the user to “freeze and thaw” torsional degrees of freedom as fit for the problem studied. We have used this method to perform all-torsion constrained MD in implicit solvent coupled with the replica exchange method to study folding of small proteins with various secondary structural motifs such as α-helix (polyalanine, WALP16), β-turn (1E0Q), and a mixed motif protein (Trp-cage). We demonstrate that the constrained MD replica exchange method exhibits a wider conformational search than all-atom MD with increased enrichment of near native structures. “Hierarchical” constrained MD simulations, where the partially formed helical regions in the initial stretch of the all-torsion folding simulation trajectory of Trp-cage were frozen, showed a better sampling of near native structures than all-torsion constrained MD simulations. This is in agreement with the zipping-and-assembly folding model put forth by Dill and coworkers for folding proteins. The use of hierarchical “freeze and thaw” clustering schemes in constrained MD simulation can be used to sample conformations that contribute significantly to folding of proteins. PMID:21591767

  5. Binding of flexible and constrained ligands to the Grb2 SH2 domain: structural effects of ligand preorganization

    PubMed Central

    Clements, John H.; DeLorbe, John E.; Benfield, Aaron P.; Martin, Stephen F.

    2010-01-01

    Structures of the Grb2 SH2 domain complexed with a series of pseudopeptides containing flexible (benzyl succinate) and constrained (aryl cyclopropanedicarboxylate) replacements of the phosphotyrosine (pY) residue in tripeptides derived from Ac-pYXN-NH2 (where X = V, I, E and Q) were elucidated by X-ray crystallography. Complexes of flexible/constrained pairs having the same pY + 1 amino acid were analyzed in order to ascertain what structural differences might be attributed to constraining the phosphotyrosine replacement. In this context, a given structural dissimilarity between complexes was considered to be significant if it was greater than the corresponding difference in complexes coexisting within the same asymmetric unit. The backbone atoms of the domain generally adopt a similar conformation and orientation relative to the ligands in the complexes of each flexible/constrained pair, although there are some significant differences in the relative orientations of several loop regions, most notably in the BC loop that forms part of the binding pocket for the phosphate group in the tyrosine replacements. These variations are greater in the set of complexes of constrained ligands than in the set of complexes of flexible ligands. The constrained ligands make more direct polar contacts to the domain than their flexible counterparts, whereas the more flexible ligand of each pair makes more single-water-mediated contacts to the domain; there was no correlation between the total number of protein–ligand contacts and whether the phosphotyrosine replacement of the ligand was preorganized. The observed differences in hydrophobic interactions between the complexes of each flexible/constrained ligand pair were generally similar to those observed upon comparing such contacts in coexisting complexes. The average adjusted B factors of the backbone atoms of the domain and loop regions are significantly greater in the complexes of constrained ligands than in the complexes of

  6. Residual flexibility test method for verification of constrained structural models

    NASA Technical Reports Server (NTRS)

    Admire, John R.; Tinker, Michael L.; Ivey, Edward W.

    1992-01-01

    A method is presented for deriving constrained modes and frequencies from a model correlated to a set of free-free test modes and a set of measured residual flexibilities. The method involves a simple modification of the MacNeal and Rubin component mode representation to allow verification of a constrained structural model. Results for two spaceflight structures show quick convergence of constrained modes using an easily measurable set of free-free modes plus the residual flexibility matrix or its boundary partition. This paper further validates the residual flexibility approach as an alternative test/analysis method when fixed-base testing proves impractical.

  7. Onomatopoeia characters extraction from comic images using constrained Delaunay triangulation

    NASA Astrophysics Data System (ADS)

    Liu, Xiangping; Shoji, Kenji; Mori, Hiroshi; Toyama, Fubito

    2014-02-01

    A method for extracting onomatopoeia characters from comic images was developed based on the stroke width feature of characters, since they have a nearly constant stroke width in many cases. An image was segmented with a constrained Delaunay triangulation. Connected component grouping was performed based on the triangles generated by the constrained Delaunay triangulation. Stroke width calculation of the connected components was conducted based on the altitude of the triangles generated with the constrained Delaunay triangulation. The experimental results proved the effectiveness of the proposed method.
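
    A hedged sketch of the stroke-width idea using an ordinary Delaunay triangulation from SciPy (SciPy does not provide a constrained Delaunay triangulation, so this is only an approximation of the paper's approach): contour points of a glyph are triangulated and triangle altitudes serve as local stroke-width estimates, which can then be used to group components of similar width.

        import numpy as np
        from scipy.spatial import Delaunay

        def stroke_width_estimate(points):
            """Median of per-triangle minimum altitudes, a rough local stroke width."""
            tri = Delaunay(points)
            widths = []
            for simplex in tri.simplices:
                a, b, c = points[simplex]
                area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
                longest = max(np.linalg.norm(b - a), np.linalg.norm(c - b), np.linalg.norm(a - c))
                if longest > 0:
                    widths.append(2.0 * area / longest)    # altitude onto the longest side
            return float(np.median(widths))

        # toy "stroke": two parallel rows of contour points 3 px apart
        xs = np.arange(0, 48, 8.0)
        pts = np.vstack([np.column_stack([xs, np.zeros_like(xs)]),
                         np.column_stack([xs, np.full_like(xs, 3.0)])])
        print(stroke_width_estimate(pts))                  # ~2.8, slightly below the true 3 px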

  8. Quantum dynamics by the constrained adiabatic trajectory method

    SciTech Connect

    Leclerc, A.; Jolicard, G.; Guerin, S.; Killingbeck, J. P.

    2011-03-15

    We develop the constrained adiabatic trajectory method (CATM), which allows one to solve the time-dependent Schroedinger equation constraining the dynamics to a single Floquet eigenstate, as if it were adiabatic. This constrained Floquet state (CFS) is determined from the Hamiltonian modified by an artificial time-dependent absorbing potential whose forms are derived according to the initial conditions. The main advantage of this technique for practical implementation is that the CFS is easy to determine even for large systems since its corresponding eigenvalue is well isolated from the others through its imaginary part. The properties and limitations of the CATM are explored through simple examples.

  9. Constraining Magnetosphere Storm Drivers Through Analysis of the Atmospheric Response

    NASA Astrophysics Data System (ADS)

    Fedrizzi, M.; Fuller-Rowell, T. J.; Codrescu, M.

    2009-12-01

    The effects of Joule heating on neutral wind, composition, temperature and density of the upper atmosphere are known qualitatively but a quantitative characterization is still missing. A step towards such a quantitative analysis requires detailed observations of the dynamics itself and its impacts on the thermosphere-ionosphere system, as well as an adequate numerical model. For this research we use the global, three-dimensional, time-dependent, non-linear coupled model of the thermosphere, ionosphere, plasmasphere and electrodynamics (CTIPe), a self-consistent physics-based model that solves the momentum, energy, and composition equations for the neutral and ionized atmosphere. The F10.7 index is used to define solar EUV heating, ionization, and dissociation. Propagating tidal modes are imposed at 80 km altitude with a prescribed amplitude and phase. The magnetospheric energy input into the system is characterized by the time variations of the solar wind velocity and the interplanetary magnetic field (IMF) magnitude and direction, whereas the auroral precipitation is derived either from the TIROS/NOAA satellite observations or from ACE solar wind and IMF data. During geomagnetic storms the temperature of the Earth’s upper atmosphere can be substantially increased mainly due to high-latitude Joule heating induced by magnetospheric convection and auroral particle precipitation. This heating drives rapid increases in temperature inducing upwelling of the neutral atmosphere. The enhanced density results in a subsequent increase of atmospheric drag on satellites and large-scale ionospheric storm effects. The storm energy input drives changes in global circulation, neutral composition, plasma density, and electrodynamics. One full year comparison between ground and space observations with CTIPe results during solar minimum conditions shows that the model captures the daily space weather and the year-long climatology not only in a qualitative but in a quantitative way

  10. Age and mass of solar twins constrained by lithium abundance

    NASA Astrophysics Data System (ADS)

    Do Nascimento, J. D., Jr.; Castro, M.; Meléndez, J.; Bazot, M.; Théado, S.; Porto de Mello, G. F.; de Medeiros, J. R.

    2009-07-01

    Aims: We analyze the non-standard mixing history of the solar twins HIP 55 459, HIP 79 672, HIP 56 948, HIP 73 815, and HIP 100 963, to determine as precisely as possible their mass and age. Methods: We computed a grid of evolutionary models with non-standard mixing at several metallicities with the Toulouse-Geneva code for a range of stellar masses assuming an error bar of ±50 K in T_eff. We choose the evolutionary model that accurately reproduces the low lithium abundances observed in the solar twins. Results: Our best-fit model for each solar twin provides a mass and age solution constrained by their Li content and T_eff determination. HIP 56 948 is the most likely solar-twin candidate at the present time and our analysis infers a mass of 0.994 ± 0.004 M⊙ and an age of 4.71 ± 1.39 Gyr. Conclusions: Non-standard mixing is required to explain the low Li abundances observed in solar twins. Li depletion due to additional mixing in solar twins is strongly mass dependent. An accurate lithium abundance measurement and non-standard models provide more precise information about the age and mass than that determined by classical methods alone. The models are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/501/687 or via http://andromeda.dfte.ufrn.br

  11. Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems

    DOEpatents

    Van Benthem, Mark H.; Keenan, Michael R.

    2008-11-11

    A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
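
    A hedged, simplified illustration of the problem setting rather than the patented combinatorial algorithm itself: many non-negativity-constrained least-squares problems A x_i ≈ b_i share the same matrix A, and in practice most observation vectors end up with one of only a few distinct active sets, which is the structure the fast combinatorial method exploits to share normal-equation solves. The naive per-column loop below makes that structure visible.

        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(0)
        m, n, n_obs = 30, 5, 500
        A = rng.random((m, n))
        X_true = np.where(rng.random((n, n_obs)) < 0.6, rng.random((n, n_obs)), 0.0)
        B = A @ X_true + 0.01 * rng.normal(size=(m, n_obs))

        supports = {}
        X_hat = np.zeros((n, n_obs))
        for i in range(n_obs):                    # naive loop: one NNLS solve per observation vector
            X_hat[:, i], _ = nnls(A, B[:, i])
            key = tuple(X_hat[:, i] > 1e-9)       # passive (nonzero) set of this solution
            supports[key] = supports.get(key, 0) + 1

        print("relative residual:", np.linalg.norm(A @ X_hat - B) / np.linalg.norm(B))
        print("distinct active sets across", n_obs, "columns:", len(supports))
        # A combinatorial solver would perform one unconstrained least-squares solve per
        # distinct active set rather than one per column.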

  12. Progress in constraining the asymmetry dependence of the nuclear caloric curve

    NASA Astrophysics Data System (ADS)

    McIntosh, Alan B.; Yennello, Sherry J.

    2016-05-01

    The nuclear equation of state is a basic emergent property of nuclear material. Despite its importance in nuclear physics and astrophysics, aspects of it are still poorly constrained. Our research focuses on answering the question: How does the nuclear caloric curve depend on the neutron-proton asymmetry? We briefly describe our initial observation that increasing neutron-richness leads to lower temperatures. We then discuss the status of our recently executed experiment to independently measure the asymmetry dependence of the caloric curve.

  13. CONDORR--CONstrained Dynamics of Rigid Residues: a molecular dynamics program for constrained molecules.

    PubMed

    York, William S; Yi, Xiaobing

    2004-08-01

    A computer program CONDORR (CONstrained Dynamics of Rigid Residues) was developed for molecular dynamics simulations of large and/or constrained molecular systems, particularly carbohydrates. CONDORR efficiently calculates molecular trajectories on the basis of 2D or 3D potential energy maps, and can generate such maps based on a simple force field. The simulations involve three translational and three rotational degrees of freedom for each rigid, asymmetrical residue in the model. Total energy and angular momentum are conserved when no stochastic or external forces are applied to the model, if the time step is kept sufficiently short. Application of Langevin dynamics allows longer time steps, providing efficient exploration of conformational space. The utility of CONDORR was demonstrated by application to a constrained polysaccharide model and to the calculation of residual dipolar couplings for a disaccharide. [Figure: see text]. Molecular models (bottom) are created by cloning rigid residue archetypes (top) and joining them together. As defined here, the archetypes AX, HM and BG respectively correspond to an alpha-D-Xyl p residue, a hydroxymethyl group, and a beta-D-Glc p residue lacking O6, H6a and H6b. Each archetype contains atoms (indicated by boxes) that can be shared with other archetypes to form a linked structure. For example, the glycosidic link between the two D-Glc p residues is established by specifying that O1 of the nonreducing beta-D-Glc p (BG) residue (2) is identical to O4 of the reducing Glc p (BG) residue (1). The coordinates of the two residues are adjusted so as to superimpose these two (nominally distinct) atoms. Flexible hydroxymethyl (HM) groups (3 and 4) are treated as separate residues, and the torsional angles (normally indicated by the symbol omega) that define their geometric relationships to the pyranosyl rings of the BG residues are specified as psi3 and psi4, respectively. The torsional angles phi3 and phi4, defined solely to

  14. Origin of Plumes in Paleogeographically Constrained Global Convection Models

    NASA Astrophysics Data System (ADS)

    Hassan, R.; Flament, N. E.; Gurnis, M.; Bower, D. J.; Müller, D.

    2015-12-01

    Large igneous provinces (LIPs) erupting since 200 Ma may have originated from plumes that emerged from the edges of the large low shear velocity provinces (LLSVPs) in the deep lower mantle. Although qualitative assessments that are broadly in agreement with this hypothesis have been derived from numerical convection models, a quantitative assessment has been lacking. We present global convection models constrained by plume motions and subduction history over the last 230 Myr, where plumes emerge preferentially from the edges of thermochemical structures that resemble present-day LLSVPs beneath Africa and the Pacific Ocean. We also present a novel plume detection scheme and derive Monte Carlo-based statistical correlations of model plume eruption sites and reconstructed LIP eruption sites. We show that models with a chemically anomalous lower mantle are highly correlated to reconstructed LIP eruption sites, whereas the confidence level obtained for a model with purely thermal plumes falls just short of 95%. A network of embayments separated by steep ridges forms in the deep lower mantle in models with a chemically anomalous lower mantle. Plumes become anchored to the peaks of the chemical ridges and the network of ridges acts as a floating anchor, adjusting to subduction-induced flow through time. The network of ridges imposes a characteristic separation between conduits that can extend into the interior of the thermochemical structures. This may explain the observed clustering of reconstructed LIP eruption sites that mostly but not exclusively occur around the present-day LLSVPs.

  15. Ultraconservation identifies a small subset of extremely constrained developmental enhancers

    SciTech Connect

    Pennacchio, Len A.; Visel, Axel; Prabhakar, Shyam; Akiyama, Jennifer A.; Shoukry, Malak; Lewis, Keith D.; Holt, Amy; Plajzer-Frick, Ingrid; Afzal, Veena; Rubin, Edward M.; Pennacchio, Len A.

    2007-10-01

    While experimental studies have suggested that non-coding ultraconserved DNA elements are central nodes in the regulatory circuitry that specifies mammalian embryonic development, the possible functional relevance of their >200 bp of perfect sequence conservation between human-mouse-rat remains obscure [1,2]. Here we have compared the in vivo enhancer activity of a genome-wide set of 231 non-exonic sequences with ultraconserved cores to that of 206 sequences that are under equivalently severe human-rodent constraint (ultra-like), but lack perfect sequence conservation. In transgenic mouse assays, 50% of the ultraconserved and 50% of the ultra-like conserved elements reproducibly functioned as tissue-specific enhancers at embryonic day 11.5. In this in vivo assay, we observed that ultraconserved enhancers and constrained non-ultraconserved enhancers targeted expression to a similar spectrum of tissues with a particular enrichment in the developing central nervous system. A human genome-wide comparative screen uncovered ~2,600 non-coding elements that evolved under ultra-like human-rodent constraint and are similarly enriched near transcriptional regulators and developmental genes as the much smaller number of ultraconserved elements. These data indicate that ultraconserved elements possessing absolute human-rodent sequence conservation are not distinct from other non-coding elements that are under comparable purifying selection in mammals and suggest they are principal constituents of the cis-regulatory framework of mammalian development.

  16. Constraining MHD Disk-Winds with X-ray Absorbers

    NASA Astrophysics Data System (ADS)

    Fukumura, Keigo; Tombesi, F.; Shrader, C. R.; Kazanas, D.; Contopoulos, J.; Behar, E.

    2014-01-01

    From state-of-the-art spectroscopic observations of active galactic nuclei (AGNs), robust absorption-line features (most notably from H/He-like ions), called warm absorbers (WAs), have often been detected in soft X-rays (< 2 keV). While the identified WAs are typically mildly blueshifted, with line-of-sight velocities of ~100-3,000 km/s in typical X-ray-bright Seyfert 1 AGNs, a fraction of Seyfert galaxies such as PG 1211+143 exhibit even faster absorbers (v/c ~ 0.1-0.2) called ultra-fast outflows (UFOs), whose physical conditions are much more extreme than those of the WAs. Motivated by these recent X-ray data, we show that the magnetically-driven accretion-disk wind model is a plausible scenario to explain the characteristic properties of these X-ray absorbers. As a preliminary case study we demonstrate that the wind model parameters (e.g. viewing angle and wind density) can be constrained by data from PG 1211+143 at a statistically significant level with chi-squared spectral analysis. Our wind models can thus be implemented into the standard analysis package, XSPEC, as a table spectrum model for general analysis of X-ray absorbers.
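
    As an illustration of the chi-squared parameter estimation mentioned above, the sketch below evaluates chi-squared on a grid over two wind-like parameters (viewing angle and log density) for a purely hypothetical toy line model. The forward model, synthetic data, and the two-parameter 90% threshold (delta chi-squared = 4.61) are assumptions for demonstration, not the authors' XSPEC table model.

        import numpy as np

        # Toy forward model: predicted line equivalent width (arbitrary units) as a
        # hypothetical function of viewing angle (deg) and log wind density.
        def model_ew(angle_deg, log_density):
            return 0.05 * log_density * np.sin(np.radians(angle_deg)) ** 2

        # Hypothetical "observed" equivalent widths of a few absorption lines with errors.
        rng = np.random.default_rng(1)
        true_angle, true_logn = 50.0, 12.0
        line_scale = np.array([0.8, 1.0, 1.3, 1.6])        # per-line scaling factors
        err = 0.05 * np.ones(4)
        obs = line_scale * model_ew(true_angle, true_logn) + rng.normal(0.0, err)

        # Chi-squared over a parameter grid.
        angles = np.linspace(10, 80, 141)
        logns = np.linspace(10, 14, 161)
        A, N = np.meshgrid(angles, logns, indexing="ij")
        pred = line_scale[None, None, :] * model_ew(A, N)[..., None]
        chi2 = (((obs - pred) / err) ** 2).sum(axis=-1)

        best = np.unravel_index(np.argmin(chi2), chi2.shape)
        dchi2 = chi2 - chi2[best]
        inside_90 = dchi2 <= 4.61                          # 90% CL for 2 parameters
        print(f"best fit: angle = {angles[best[0]]:.1f} deg, log n = {logns[best[1]]:.2f}")
        print(f"90% region spans {inside_90.sum()} grid cells")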

  17. Could the Pliocene constrain the equilibrium climate sensitivity?

    NASA Astrophysics Data System (ADS)

    Hargreaves, J. C.; Annan, J. D.

    2016-08-01

    The mid-Pliocene Warm Period (mPWP) is the most recent interval in which atmospheric carbon dioxide was substantially higher than in modern pre-industrial times. It is, therefore, a potentially valuable target for testing the ability of climate models to simulate climates warmer than the pre-industrial state. The recent Pliocene Model Intercomparison Project (PlioMIP) presented boundary conditions for the mPWP and a protocol for climate model experiments. Here we analyse results from the PlioMIP and, for the first time, discuss the potential for this interval to usefully constrain the equilibrium climate sensitivity. We observe a correlation in the ensemble between their tropical temperature anomalies at the mPWP and their equilibrium sensitivities. If the real world is assumed to also obey this relationship, then the reconstructed tropical temperature anomaly at the mPWP can in principle generate a constraint on the true sensitivity. Directly applying this methodology using available data yields a range for the equilibrium sensitivity of 1.9-3.7 °C, but there are considerable additional uncertainties surrounding the analysis which are not included in this estimate. We consider the extent to which these uncertainties may be better quantified and perhaps lessened in the next few years.
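
    The emergent-constraint procedure described above can be sketched as an ordinary least-squares regression across a model ensemble, which is then applied to a reconstructed anomaly. The ensemble values, the reconstruction, and the way uncertainties are combined below are illustrative assumptions, not PlioMIP results.

        import numpy as np

        # Hypothetical PlioMIP-style ensemble: mPWP tropical warming (K) and equilibrium
        # climate sensitivity (K) for each model; values are illustrative only.
        trop_anom = np.array([1.2, 1.6, 1.9, 2.1, 2.4, 2.8, 3.1, 3.3])
        ecs       = np.array([2.3, 2.7, 3.0, 3.2, 3.6, 3.9, 4.3, 4.4])

        # Ordinary least-squares fit of ECS against the mPWP tropical anomaly.
        slope, intercept = np.polyfit(trop_anom, ecs, 1)
        resid = ecs - (slope * trop_anom + intercept)
        sigma_fit = resid.std(ddof=2)

        # Apply the relationship to a hypothetical proxy reconstruction of the anomaly.
        recon, recon_err = 1.8, 0.4                        # K, illustrative
        ecs_central = slope * recon + intercept
        ecs_err = np.hypot(slope * recon_err, sigma_fit)   # crude combination of uncertainties
        print(f"implied ECS ~ {ecs_central:.1f} +/- {ecs_err:.1f} K")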

  18. Constraining models of accretion outbursts in low-mass YSOs

    NASA Astrophysics Data System (ADS)

    Ninan, J. P.; Ojha, D. K.; Ghosh, S. K.; Bhatt, B. C.

    Young low-mass stars that are still accreting have been found to undergo sudden outbursts over short periods of time. These outbursts are believed to be due to a sudden increase, typically of ~2 orders of magnitude, in the mass infall rate. Classically these objects are classified as FUors and EXors. FUors undergo long-duration outbursts lasting several decades with typical amplitude Δm ~ 4-5, while EXors undergo short-duration outbursts lasting a few months to years with typical amplitude Δm ~ 2-3, and these may recur. From the number counts of FUors, it is estimated that every low-mass star undergoes at least one FUor-like outburst early in its life. We present our study of three such rare outbursts in optical and near-infrared wavebands using long-term observations with the 2-m Himalayan Chandra Telescope and the 2-m IUCAA Girawali Observatory telescope. Using the currently available models and the constraints our observations place on them, we attempt to understand the physical processes driving these outbursts.

  19. NGC 1097: Constraining mechanisms for star formation with the VLA

    NASA Astrophysics Data System (ADS)

    Wood, Sarah; Sheth, Kartik; Balser, Dana S.; Yarber, Aara'L.

    2015-01-01

    The goal of this project is to trace the precise locations of star-forming regions in the barred spiral NGC 1097. Specifically, we want to better understand how star formation progresses in the bar and at the bar ends. Our hydrodynamic gas flow model indicates that gas flow should never cross the dust lanes, yet previous azimuthal cross-correlation analyses have indicated that the Hα emission is offset on the leading side of the bar dust lanes. It is critical to verify the precise locations of the star-forming regions. Is the star formation initiated in the dust lanes, or perhaps in dust spurs on the trailing side of the galaxy? We will measure the synchrotron and thermal radiation contributions to quantify recent activity and compare to existing Hα, GALEX, archival VLA, and new ALMA Cycle 0 and Cycle 1 observations. This project will help catalog current and past star formation activity in the bar of NGC 1097 and thus help constrain the mechanisms for star formation.

  20. Constraining cloud parameters using high density gas tracers in galaxies

    NASA Astrophysics Data System (ADS)

    Kazandjian, M. V.; Pelupessy, I.; Meijerink, R.; Israel, F. P.; Coppola, C. M.; Rosenberg, M. J. F.; Spaans, M.

    2016-11-01

    Far-infrared molecular emission is an important tool used to understand the excitation mechanisms of the gas in the interstellar medium (ISM) of star-forming galaxies. In the present work, we model the emission from rotational transitions with critical densities n ≳ 10⁴ cm⁻³. We include the 4-3 < J ≤ 15-14 transitions of CO and ¹³CO, in addition to the J ≤ 7-6 transitions of HCN, HNC, and HCO⁺, on galactic scales. We do this by re-sampling high density gas in a hydrodynamic model of a gas-rich disk galaxy, assuming that the density field of the ISM of the model galaxy follows the probability density function (PDF) inferred from the resolved low density scales. We find that in a narrow gas density PDF, with a mean density of 10 cm⁻³ and a dispersion σ = 2.1 in the log of the density, most of the emission of molecular lines, even of gas with critical densities >10⁴ cm⁻³, emanates from the 10-1000 cm⁻³ part of the PDF. We construct synthetic emission maps for the central 2 kpc of the galaxy and fit the line ratios of CO and ¹³CO up to J = 15-14, as well as HCN, HNC, and HCO⁺ up to J = 7-6, using one photo-dissociation region (PDR) model. We attribute the goodness of the one component fits for our model galaxy to the fact that the distribution of the luminosity, as a function of density, is peaked at gas densities between 10 and 1000 cm⁻³, with negligible contribution from denser gas. Specifically, the Mach number, ℳ, of the model galaxy is 10. We explore the impact of different log-normal density PDFs on the distribution of the line-luminosity as a function of density, and we show that it is necessary to have a broad dispersion, corresponding to Mach numbers ≳30, in order to obtain significant (>10%) emission from n > 10⁴ cm⁻³ gas. Such Mach numbers are expected in star-forming galaxies, luminous infrared galaxies (LIRGs), and ultra-luminous infrared galaxies (ULIRGs). This method provides a way to constrain the global PDF of the ISM of galaxies from observations of
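
    A minimal sketch of how a log-normal density PDF controls the gas mass available above a critical density, assuming a mass-weighted log-normal form in ln(n); the relation between dispersion and Mach number is not modelled here, and all numbers are illustrative rather than taken from the paper.

        import numpy as np
        from scipy.stats import norm

        def mass_fraction_above(n_mean, sigma_ln, n_thresh):
            """Fraction of gas mass above n_thresh for a log-normal density PDF.

            The volume-weighted PDF of ln(n) is assumed Gaussian with dispersion
            sigma_ln and mean ln(n_mean) - sigma_ln**2 / 2, so that the mean density
            is n_mean; the mass-weighted distribution is shifted by +sigma_ln**2.
            """
            mu = np.log(n_mean) - 0.5 * sigma_ln ** 2
            return norm.sf(np.log(n_thresh), loc=mu + sigma_ln ** 2, scale=sigma_ln)

        for sigma in (2.1, 2.9, 3.6):   # roughly increasing turbulent Mach number
            f = mass_fraction_above(n_mean=10.0, sigma_ln=sigma, n_thresh=1.0e4)
            print(f"sigma_ln = {sigma:.1f}: mass fraction at n > 1e4 cm^-3 = {f:.3f}")

    For the narrow PDF the dense-gas mass fraction is of order a percent, while broader PDFs push it to tens of percent, which is the qualitative behaviour the abstract describes.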

  1. A Constrained Spline Estimator of a Hazard Function.

    ERIC Educational Resources Information Center

    Bloxom, Bruce

    1985-01-01

    A constrained quadratic spline is proposed as an estimator of the hazard function of a random variable. A maximum penalized likelihood procedure is used to fit the estimator to a sample of psychological response times. (Author/LMO)
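
    A hedged sketch of a penalized-likelihood hazard fit in the spirit of this abstract, assuming a quadratic B-spline basis with positive coefficients (a sufficient, not necessary, condition for a non-negative hazard) and a simple second-difference roughness penalty. The data, knots, and penalty weight are illustrative; this is not Bloxom's exact estimator.

        import numpy as np
        from scipy.interpolate import BSpline
        from scipy.integrate import cumulative_trapezoid
        from scipy.optimize import minimize

        rng = np.random.default_rng(2)
        times = rng.weibull(1.5, 300) * 0.8              # hypothetical response times (s)
        t_max = times.max() * 1.05

        # Quadratic B-spline basis on [0, t_max]; positive coefficients keep h(t) >= 0.
        k = 2
        interior = np.linspace(0.0, t_max, 7)[1:-1]
        knots = np.concatenate([[0.0] * (k + 1), interior, [t_max] * (k + 1)])
        n_basis = len(knots) - k - 1
        grid = np.linspace(0.0, t_max * (1.0 - 1e-9), 400)

        def basis_matrix(x):
            """Evaluate every B-spline basis function at the points x."""
            cols = []
            for j in range(n_basis):
                c = np.zeros(n_basis)
                c[j] = 1.0
                cols.append(BSpline(knots, c, k)(x))
            return np.column_stack(cols)

        B_obs, B_grid = basis_matrix(times), basis_matrix(grid)

        def neg_penalized_loglik(theta, lam=1.0):
            coef = np.exp(theta)                         # positivity => non-negative hazard
            h_obs = B_obs @ coef                         # hazard at the observed times
            h_grid = B_grid @ coef
            H_grid = cumulative_trapezoid(h_grid, grid, initial=0.0)
            H_obs = np.interp(times, grid, H_grid)       # cumulative hazard at each datum
            loglik = np.sum(np.log(h_obs) - H_obs)
            roughness = np.sum(np.diff(coef, 2) ** 2)    # second-difference penalty
            return -loglik + lam * roughness

        res = minimize(neg_penalized_loglik, x0=np.zeros(n_basis), method="L-BFGS-B")
        print("fitted B-spline coefficients:", np.round(np.exp(res.x), 3))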

  2. Vibration control through passive constrained layer damping and active control

    NASA Astrophysics Data System (ADS)

    Lam, Margaretha J.; Inman, Daniel J.; Saunders, William R.

    1997-05-01

    To add damping to systems, viscoelastic materials (VEM) are added to structures. In order to enhance the damping effects of the VEM, a constraining layer is attached. When this constraining layer is an active element, the treatment is called active constrained layer damping (ACLD). Recently, the investigation of ACLD treatments has shown them to be an effective method of vibration suppression. In this paper, the treatment of a beam with a separate active element and a passive constrained layer (PCLD) element is investigated. A Ritz-Galerkin approach is used to obtain discretized equations of motion. The damping is modeled using the GHM method and the system is analyzed in the time domain. By optimizing the performance and control effort for both the active and passive cases, it is shown that this treatment is capable of lower control effort with more inherent damping, and is therefore a better approach to damping vibration.

  3. Structure from motion in computationally constrained systems

    NASA Astrophysics Data System (ADS)

    Conroy, Joseph; Humbert, J. Sean

    2013-06-01

    Visual sensing is an attractive method to allow small, palm-sized flying vehicles to navigate complex environments without collisions. Visual processing for unmanned vehicles, however, is typically computationally intense. Insects are able to extract structural information about the environment by appropriate control of self-motion and efficient processing of the visual field. This paper presents a methodology that attempts to capture the insect's ability to do this by constructing a nonlinear observer with provable stability via a Lyapunov analysis. Furthermore, the persistency of excitation condition for the observer illustrates the need for a zig-zagging flight style exhibited by certain insects.

  4. Transverse gravity versus observations

    SciTech Connect

    Álvarez, Enrique; Faedo, Antón F.; López-Villarejo, J.J. E-mail: anton.fernandez@uam.es

    2009-07-01

    Theories of gravity invariant under those diffeomorphisms generated by transverse vectors, ∂_μ ξ^μ = 0, are considered. Such theories are dubbed transverse, and differ from General Relativity in that the determinant of the metric, g, is a transverse scalar. We comment on diverse ways in which these models can be constrained using a variety of observations. Generically, an additional scalar degree of freedom mediates the interaction, so the usual constraints on scalar-tensor theories have to be imposed. If the purely gravitational part is Einstein-Hilbert but the matter action is transverse, the models predict that the three a priori different concepts of mass (gravitational active and gravitational passive, as well as inertial) are no longer equivalent. These transverse deviations from General Relativity are therefore tightly constrained, actually correlated with existing bounds on violations of the equivalence principle, local violations of Newton's third law and/or violation of Local Position Invariance.

  5. Geometric constrained variational calculus. II: The second variation (Part I)

    NASA Astrophysics Data System (ADS)

    Massa, Enrico; Bruno, Danilo; Luria, Gianvittorio; Pagani, Enrico

    2016-10-01

    Within the geometrical framework developed in [Geometric constrained variational calculus. I: Piecewise smooth extremals, Int. J. Geom. Methods Mod. Phys. 12 (2015) 1550061], the problem of minimality for constrained calculus of variations is analyzed among the class of differentiable curves. A fully covariant representation of the second variation of the action functional, based on a suitable gauge transformation of the Lagrangian, is explicitly worked out. Both necessary and sufficient conditions for minimality are proved, and reinterpreted in terms of Jacobi fields.

  6. Constrained Adaptive Beamforming for Improved Contrast in Breast Ultrasound

    DTIC Science & Technology

    2007-06-01

    us to modify the Spatial Processing Optimized and Constrained algorithm (SPOC) to yield the Time-Domain Optimized Near-field Estimator (TONE). In... modified the Spatial Processing Optimized and Constrained (SPOC) algorithm [14, 15] to operate on near-field, broadband signals. In the process of writing... about this modified algorithm it became clear that our significant changes to SPOC really made it a new algorithm, deserving a new name. Thus we have

  7. Constrained Adaptive Beamforming for Improved Contrast in Breast Ultrasound

    DTIC Science & Technology

    2006-06-01

    have led us to utilize a recently proposed method, Spatial Processing Optimized and Constrained (SPOC). In initial simulations this method not only... and hundreds of papers published in this area, only the SPOC (Spatial Processing Optimized and Constrained) algorithm could be readily modified to meet

  8. Constrained Adaptive Beamforming for Improved Contrast in Breast Ultrasound

    DTIC Science & Technology

    2008-06-01

    us to modify the Spatial Processing Optimized and Constrained algorithm (SPOC) to yield the Time-Domain Optimized Near-field Estimator (TONE). In... beamformers - Improvement and evaluation of adaptive imaging algorithms based loosely on SPOC - Invention and testing of the basic TONE algorithm... named the Time-Domain Optimized Near-Field Estimator, or TONE. The novel ABF was developed from Spatial Processing: Optimized and Constrained (SPOC

  9. Conjugate Gradient Methods for Constrained Least Squares Problems

    DTIC Science & Technology

    1990-01-01

    Conjugate Gradient Methods for Constrained Least Squares Problems, by Douglas James. A thesis submitted to the Graduate Faculty... Conjugate Gradient Methods for Constrained Least Squares Problems (directed by Robert J. Plemmons). ...In 1988, Barlow, Nichols, and Plemmons proposed order... typical). ...Figure 2.4: Matrices for Static Structure Problem associated with the planar square

  10. Constrained minimization of smooth functions using a genetic algorithm

    NASA Technical Reports Server (NTRS)

    Moerder, Daniel D.; Pamadi, Bandu N.

    1994-01-01

    The use of genetic algorithms for minimization of differentiable functions that are subject to differentiable constraints is considered. A technique is demonstrated for converting the solution of the necessary conditions for a constrained minimum into an unconstrained function minimization. This technique is extended as a global constrained optimization algorithm. The theory is applied to calculating minimum-fuel ascent control settings for an energy state model of an aerospace plane.
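
    The idea of converting a constrained minimum into an unconstrained fitness for a genetic algorithm can be sketched with a penalty function, as below. The toy objective, equality constraint, and GA operators (truncation selection, blend crossover, Gaussian mutation, elitism) are illustrative assumptions, not the ascent-trajectory problem or the specific technique of the paper.

        import numpy as np

        # Demonstration problem: minimize f(x) subject to g(x) = 0, via a quadratic penalty.
        def f(x):
            return (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2

        def g(x):
            return x[0] + x[1] - 1.0           # equality constraint x0 + x1 = 1

        def fitness(x, mu=50.0):
            return f(x) + mu * g(x) ** 2       # penalized (unconstrained) objective

        rng = np.random.default_rng(3)
        pop = rng.uniform(-5, 5, size=(60, 2))

        for gen in range(200):
            scores = np.array([fitness(x) for x in pop])
            order = np.argsort(scores)
            parents = pop[order[:20]]                                # truncation selection
            idx = rng.integers(0, len(parents), size=(len(pop), 2))  # random parent pairs
            alpha = rng.uniform(0, 1, size=(len(pop), 1))
            children = alpha * parents[idx[:, 0]] + (1 - alpha) * parents[idx[:, 1]]
            children += rng.normal(0.0, 0.1, size=children.shape)    # Gaussian mutation
            children[0] = parents[0]                                 # elitism
            pop = children

        best = pop[np.argmin([fitness(x) for x in pop])]
        print("best point:", np.round(best, 3), " constraint residual:", round(g(best), 4))

    The penalty weight trades constraint satisfaction against objective accuracy; exact-penalty or Lagrange-multiplier formulations tighten this at the cost of extra machinery.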

  11. Constraining compartmental models using multiple voltage recordings and genetic algorithms.

    PubMed

    Keren, Naomi; Peled, Noam; Korngreen, Alon

    2005-12-01

    Compartmental models with many nonlinear, nonhomogeneously distributed voltage-gated conductances are routinely used to investigate the physiology of complex neurons. However, the number of loosely constrained parameters makes manually constructing the desired model a daunting if not impossible task. Recently, progress has been made using automated parameter search methods, such as genetic algorithms (GAs). However, these methods have been applied to somatically recorded action potentials using relatively simple target functions. Using a genetic minimization algorithm and a reduced compartmental model based on a previously published model of layer 5 neocortical pyramidal neurons, we compared the efficacy of five cost functions (based on the waveform of the membrane potential, the interspike interval, trajectory density, and their combinations) in constraining the model. When the model was constrained using somatic recordings only, a combined cost function was found to be the most effective. This combined cost function was then applied to investigate the contribution of dendritic and axonal recordings to the ability of the GA to constrain the model. The more recording locations from the dendrite and the axon that were added to the data set, the better the genetic minimization algorithm was able to constrain the compartmental model. Based on these simulations we propose an experimental scheme that, in combination with a genetic minimization algorithm, may be used to constrain compartmental models of neurons.

  12. The eigenstate thermalization hypothesis in constrained Hilbert spaces: A case study in non-Abelian anyon chains

    NASA Astrophysics Data System (ADS)

    Chandran, A.; Schulz, Marc D.; Burnell, F. J.

    2016-12-01

    Many phases of matter, including superconductors, fractional quantum Hall fluids, and spin liquids, are described by gauge theories with constrained Hilbert spaces. However, thermalization and the applicability of quantum statistical mechanics has primarily been studied in unconstrained Hilbert spaces. In this paper, we investigate whether constrained Hilbert spaces permit local thermalization. Specifically, we explore whether the eigenstate thermalization hypothesis (ETH) holds in a pinned Fibonacci anyon chain, which serves as a representative case study. We first establish that the constrained Hilbert space admits a notion of locality by showing that the influence of a measurement decays exponentially in space. This suggests that the constraints are no impediment to thermalization. We then provide numerical evidence that ETH holds for the diagonal and off-diagonal matrix elements of various local observables in a generic disorder-free nonintegrable model. We also find that certain nonlocal observables obey ETH.

  13. Constraining Slab Breakoff Induced Magmatism through Numerical Modelling

    NASA Astrophysics Data System (ADS)

    Freeburn, R.; Van Hunen, J.; Maunder, B. L.; Magni, V.; Bouilhol, P.

    2015-12-01

    Post-collisional magmatism is markedly different in nature and composition from pre-collisional magmas. This is widely interpreted to mark a change in the thermal structure of the system due to the loss of the oceanic slab (slab breakoff), allowing a different source to melt. Early modelling studies suggest that when breakoff takes place at depths shallower than the overriding lithosphere, magmatism occurs through both the decompression of upwelling asthenosphere into the slab window and the thermal perturbation of the overriding lithosphere (Davies & von Blanckenburg, 1995; van de Zedde & Wortel, 2001). Interpretations of geochemical data which invoke slab breakoff as a means of generating magmatism mostly assume these shallow depths. However, more recent modelling results suggest that slab breakoff is likely to occur deeper (e.g. Andrews & Billen, 2009; Duretz et al., 2011; van Hunen & Allen, 2011). Here we test the extent to which slab breakoff is a viable mechanism for generating melting in post-collisional settings. Using 2-D numerical models we conduct a parametric study, producing models displaying a range of dynamics with breakoff depths ranging from 150-300 km. Key models are further analysed to assess the extent of melting. We consider the mantle wedge above the slab to be hydrated, and compute the melt fraction by using a simple parameterised solidus. Our models show that breakoff at shallow depths can generate a short-lived (< 3 Myr) pulse of mantle melting, through the hydration of hotter, undepleted asthenosphere flowing in from behind the detached slab. However, our results do not display the widespread, prolonged style of magmatism observed in many post-collisional areas, suggesting that this magmatism may be generated via alternative mechanisms. This further implies that using magmatic observations to constrain slab breakoff is not straightforward.

  14. Constraining primordial non-Gaussianity with future galaxy surveys

    NASA Astrophysics Data System (ADS)

    Giannantonio, Tommaso; Porciani, Cristiano; Carron, Julien; Amara, Adam; Pillepich, Annalisa

    2012-06-01

    We study the constraining power on primordial non-Gaussianity of future surveys of the large-scale structure of the Universe for both near-term surveys (such as the Dark Energy Survey - DES) as well as longer term projects such as Euclid and WFIRST. Specifically we perform a Fisher matrix analysis forecast for such surveys, using DES-like and Euclid-like configurations as examples, and take account of any expected photometric and spectroscopic data. We focus on two-point statistics and consider three observables: the 3D galaxy power spectrum in redshift space, the angular galaxy power spectrum and the projected weak-lensing shear power spectrum. We study the effects of adding a few extra parameters to the basic Λ cold dark matter (ΛCDM) set. We include the two standard parameters to model the current value for the dark-energy equation of state and its time derivative, w0, wa, and we account for the possibility of primordial non-Gaussianity of the local, equilateral and orthogonal types, of parameter fNL and, optionally, of spectral index ?. We present forecasted constraints on these parameters using the different observational probes. We show that accounting for models that include primordial non-Gaussianity does not degrade the constraint on the standard ΛCDM set nor on the dark-energy equation of state. By combining the weak-lensing data and the information on projected galaxy clustering, consistently including all two-point functions and their covariance, we find forecasted marginalized errors σ(fNL) ˜ 3, ? from a Euclid-like survey for the local shape of primordial non-Gaussianity, while the orthogonal and equilateral constraints are weakened for the galaxy clustering case, due to the weaker scale dependence of the bias. In the lensing case, the constraints remain instead similar in all configurations.
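
    A Fisher matrix forecast of the kind used above can be sketched for a toy band-power observable with two parameters: the Fisher matrix sums the products of model derivatives weighted by the inverse data variance, and marginalized errors follow from its inverse. The model, fiducial values, and errors below are illustrative assumptions, not a survey forecast for fNL.

        import numpy as np

        # Toy observable: a band-power spectrum P(k) = A * k**n measured in 20 bins with
        # Gaussian errors; parameters and errors are illustrative, not survey values.
        k = np.logspace(-2, -0.5, 20)

        def model(theta):
            A, n = theta
            return A * k ** n

        theta_fid = np.array([2.0e4, -1.5])
        sigma = 0.05 * model(theta_fid)                # 5% error per band power

        def fisher(theta, step=1e-4):
            npar = len(theta)
            # Numerical derivatives of the model with respect to each parameter.
            derivs = []
            for a in range(npar):
                dth = np.zeros(npar)
                dth[a] = step * max(abs(theta[a]), 1.0)
                derivs.append((model(theta + dth) - model(theta - dth)) / (2 * dth[a]))
            F = np.zeros((npar, npar))
            for a in range(npar):
                for b in range(npar):
                    F[a, b] = np.sum(derivs[a] * derivs[b] / sigma ** 2)
            return F

        cov = np.linalg.inv(fisher(theta_fid))
        marg = np.sqrt(np.diag(cov))                   # marginalized 1-sigma errors
        print("sigma(A) =", f"{marg[0]:.3g}", " sigma(n) =", f"{marg[1]:.3g}")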

  15. Constraining Substellar Magnetic Dynamos using Auroral Radio Emission

    NASA Astrophysics Data System (ADS)

    Kao, Melodie; Hallinan, Gregg; Pineda, J. Sebastian; Escala, Ivanna; Burgasser, Adam J.; Stevenson, David J.

    2017-01-01

    An important outstanding problem in dynamo theory is understanding how magnetic fields are generated and sustained in fully convective stellar objects. A number of models for possible dynamo mechanisms in this regime have been proposed, but constraining data on magnetic field strengths and topologies across a wide range of mass, age, rotation rate, and temperature are sorely lacking, particularly in the brown dwarf regime. Detections of highly circularly polarized pulsed radio emission provide our only window into magnetic field measurements for objects in the ultracool brown dwarf regime. However, these detections are very rare; previous radio surveys encompassing ~60 L6 or later targets have yielded only one detection. We have developed a selection strategy for biasing survey targets based on possible optical and infrared tracers of auroral activity. Using our selection strategy, we previously observed six late L and T dwarfs with the Jansky Very Large Array (VLA) and detected the presence of highly circularly polarized radio emission for five targets. Our initial detections at 4-8 GHz provided the most robust constraints on dynamo theory in this regime, confirming magnetic fields >2.5 kG. To further develop our understanding of magnetic fields in the ultracool brown dwarf mass regime bridging planets and stars, we present constraints on surface magnetic field strengths for two Y dwarfs, as well as higher frequency observations of the previously detected L/T dwarfs corresponding to ~3.6 kG fields. By carefully comparing magnetic field measurements derived from auroral radio emission to measurements derived from Zeeman broadening and Zeeman Doppler imaging, we provide tentative evidence that the dynamo operating in this mass regime may be inconsistent with predicted values from currently in vogue models. This suggests that parameters beyond convective flux may influence magnetic field generation in brown dwarfs.

  16. Prediction of noise constrained optimum takeoff procedures

    NASA Technical Reports Server (NTRS)

    Padula, S. L.

    1980-01-01

    An optimization method is used to predict safe, maximum-performance takeoff procedures which satisfy noise constraints at multiple observer locations. The takeoff flight is represented by two-degree-of-freedom dynamical equations with aircraft angle-of-attack and engine power setting as control functions. The engine thrust, mass flow and noise source parameters are assumed to be given functions of the engine power setting and aircraft Mach number. Effective Perceived Noise Levels at the observers are treated as functionals of the control functions. The method is demonstrated by applying it to an Advanced Supersonic Transport aircraft design. The results indicate that automated takeoff procedures (continuously varying controls) can be used to significantly reduce community and certification noise without jeopardizing safety or degrading performance.

  17. Constraining Accreting Binary Populations in Normal Galaxies

    NASA Astrophysics Data System (ADS)

    Lehmer, Bret; Hornschemeier, A.; Basu-Zych, A.; Fragos, T.; Jenkins, L.; Kalogera, V.; Ptak, A.; Tzanavaris, P.; Zezas, A.

    2011-01-01

    X-ray emission from accreting binary systems (X-ray binaries) uniquely probes the binary phase of stellar evolution and the formation of compact objects such as neutron stars and black holes. A detailed understanding of X-ray binary systems is needed to provide physical insight into the formation and evolution of the stars involved, as well as the demographics of interesting binary remnants, such as millisecond pulsars and gravitational wave sources. Our program makes wide use of Chandra observations and complementary multiwavelength data sets (through, e.g., the Spitzer Infrared Nearby Galaxies Survey [SINGS] and the Great Observatories Origins Deep Survey [GOODS]), as well as super-computing facilities, to provide: (1) improved calibrations for correlations between X-ray binary emission and physical properties (e.g., star-formation rate and stellar mass) for galaxies in the local Universe; (2) new physical constraints on accreting binary processes (e.g., the common-envelope phase and mass transfer) through the fitting of X-ray binary synthesis models to observed local galaxy X-ray binary luminosity functions; (3) observational and model constraints on the X-ray evolution of normal galaxies over the last 90% of cosmic history (since z ~ 4) from the Chandra Deep Field surveys and accreting binary synthesis models; and (4) predictions for deeper observations from forthcoming generations of X-ray telescopes (e.g., IXO, WFXT, and Gen-X) to provide a science driver for these missions. In this talk, we highlight the details of our program and discuss recent results.

  18. Prospects for constraining the shape of non-Gaussianity with the scale-dependent bias

    SciTech Connect

    Noreña, Jorge; Verde, Licia; Barenboim, Gabriela; Bosch, Cristian E-mail: liciaverde@icc.ub.edu E-mail: Cristian.Bosch@uv.es

    2012-08-01

    We consider whether the non-Gaussian scale-dependent halo bias can be used not only to constrain the local form of non-Gaussianity but also to distinguish among different shapes. In particular, we ask whether it can constrain the behavior of the primordial three-point function in the squeezed limit, where one of the momenta is much smaller than the other two. This is potentially interesting since the observation of a three-point function with a squeezed limit that behaves like neither the local nor the equilateral template would be a signal of non-trivial dynamics during inflation. To this end we use the quasi-single field inflation model of Chen and Wang [1, 2] as a representative two-parameter model, where one parameter governs the amplitude of non-Gaussianity and the other the shape. We also perform a model-independent analysis by parametrizing the scale-dependent bias as a power law on large scales, where the power is to be constrained from observations. We find that proposed large-scale structure surveys (with characteristics similar to the dark energy task force stage IV surveys) have the potential to distinguish among the squeezed-limit behavior of different bispectrum shapes for a wide range of fiducial model parameters. Thus the halo bias can help discriminate between different models of inflation.

  19. Characterizing and modeling the free recovery and constrained recovery behavior of a polyurethane shape memory polymer

    PubMed Central

    Volk, Brent L; Lagoudas, Dimitris C; Maitland, Duncan J

    2011-01-01

    In this work, tensile tests and one-dimensional constitutive modeling are performed on a high recovery force polyurethane shape memory polymer that is being considered for biomedical applications. The tensile tests investigate the free recovery (zero load) response as well as the constrained displacement recovery (stress recovery) response at extension values up to 25%, and two consecutive cycles are performed during each test. The material is observed to recover 100% of the applied deformation when heated at zero load in the second thermomechanical cycle, and a stress recovery of 1.5 MPa to 4.2 MPa is observed for the constrained displacement recovery experiments. After performing the experiments, the Chen and Lagoudas model is used to simulate and predict the experimental results. The material properties used in the constitutive model – namely the coefficients of thermal expansion, shear moduli, and frozen volume fraction – are calibrated from a single 10% extension free recovery experiment. The model is then used to predict the material response for the remaining free recovery and constrained displacement recovery experiments. The model predictions match well with the experimental data. PMID:22003272

  20. Constrained growth flips the direction of optimal phenological responses among annual plants.

    PubMed

    Lindh, Magnus; Johansson, Jacob; Bolmgren, Kjell; Lundström, Niklas L P; Brännström, Åke; Jonzén, Niclas

    2016-03-01

    Phenological changes among plants due to climate change are well documented, but often hard to interpret. In order to assess the adaptive value of observed changes, we study how annual plants with and without growth constraints should optimize their flowering time when productivity and season length change. We consider growth constraints that depend on the plant's vegetative mass: self-shading, costs for nonphotosynthetic structural tissue, and sibling competition. We derive the optimal flowering time from a dynamic energy allocation model using optimal control theory. We prove that an immediate switch (bang-bang control) from vegetative to reproductive growth is optimal with constrained growth and constant mortality. Increasing mean productivity, while keeping season length constant and growth unconstrained, delayed the optimal flowering time. When growth was constrained and productivity was relatively high, the optimal flowering time advanced instead. When the growth season was extended equally at both ends, the optimal flowering time was advanced under constrained growth and delayed under unconstrained growth. Our results suggest that growth constraints are key factors to consider when interpreting phenological flowering responses. This can help to explain phenological patterns along productivity gradients, and links empirical observations made on calendar scales with life-history theory.

  1. Improving Ocean Angular Momentum Estimates Using a Model Constrained by Data

    NASA Technical Reports Server (NTRS)

    Ponte, Rui M.; Stammer, Detlef; Wunsch, Carl

    2001-01-01

    Ocean angular momentum (OAM) calculations using forward model runs without any data constraints have recently revealed the effects of OAM variability on the Earth's rotation. Here we use an ocean model and its adjoint to estimate OAM values by constraining the model to available oceanic data. The optimization procedure yields substantial changes in OAM, related to adjustments in both motion and mass fields, as well as in the wind stress torques acting on the ocean. Constrained and unconstrained OAM values are discussed in the context of closing the planet's angular momentum budget. The estimation procedure yields noticeable improvements in the agreement with the observed Earth rotation parameters, particularly at the seasonal timescale. The comparison with Earth rotation measurements provides an independent consistency check on the estimated ocean state and underlines the importance of ocean state estimation for quantitative studies of the variable large-scale oceanic mass and circulation fields, including studies of OAM.

  2. Energy losses in thermally cycled optical fibers constrained in small bend radii

    SciTech Connect

    Guild, Eric; Morelli, Gregg

    2012-09-23

    High energy laser pulses were fired into a 365 μm diameter fiber optic cable constrained in small radius-of-curvature bends, resulting in a catastrophic failure. Q-switched laser pulses from a flashlamp-pumped Nd:YAG laser were injected into the cables, and the spatial intensity profile at the exit face of the fiber was observed using an infrared camera. The transmission of the radiation through the tight radii resulted in an asymmetric intensity profile, with one half of the fiber core having a higher peak-to-average energy distribution. Prior to testing, the cables were thermally conditioned while constrained in the small radius-of-curvature bends. Single-bend, double-bend, and U-shaped geometries were tested to characterize various cable routing scenarios.

  3. Gravitational-wave limits from pulsar timing constrain supermassive black hole evolution.

    PubMed

    Shannon, R M; Ravi, V; Coles, W A; Hobbs, G; Keith, M J; Manchester, R N; Wyithe, J S B; Bailes, M; Bhat, N D R; Burke-Spolaor, S; Khoo, J; Levin, Y; Osłowski, S; Sarkissian, J M; van Straten, W; Verbiest, J P W; Wang, J-B

    2013-10-18

    The formation and growth processes of supermassive black holes (SMBHs) are not well constrained. SMBH population models, however, provide specific predictions for the properties of the gravitational-wave background (GWB) from binary SMBHs in merging galaxies throughout the universe. Using observations from the Parkes Pulsar Timing Array, we constrain the fractional GWB energy density (Ω_GW) with 95% confidence to be Ω_GW (H0/73 km s⁻¹ Mpc⁻¹)² < 1.3 × 10⁻⁹ (where H0 is the Hubble constant) at a frequency of 2.8 nanohertz, which is approximately a factor of 6 more stringent than previous limits. We compare our limit to models of the SMBH population and find inconsistencies at confidence levels between 46 and 91%. For example, the standard galaxy formation model implemented in the Millennium Simulation Project is inconsistent with our limit with 50% probability.
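
    The quoted limit is expressed as a fractional energy density; a power-law GWB characteristic strain can be converted to Ω_GW at a given frequency with the standard relation Ω_GW(f) = (2π²/3H0²) f² h_c(f)², as sketched below for an arbitrary illustrative amplitude (not the paper's measured limit).

        import numpy as np

        # Convert a power-law GWB characteristic strain, h_c(f) = A * (f / f_1yr)**(-2/3),
        # into a fractional energy density: Omega_GW(f) = (2 pi^2 / (3 H0^2)) f^2 h_c(f)^2.
        # The amplitude A below is an arbitrary illustrative value, not the paper's limit.
        A = 1.0e-15
        f_1yr = 1.0 / (365.25 * 86400.0)             # Hz
        f = 2.8e-9                                   # Hz, frequency quoted in the abstract
        H0 = 73.0 * 1.0e3 / 3.0857e22                # 73 km/s/Mpc expressed in s^-1

        h_c = A * (f / f_1yr) ** (-2.0 / 3.0)
        omega_gw = (2.0 * np.pi ** 2 / (3.0 * H0 ** 2)) * f ** 2 * h_c ** 2
        print(f"Omega_GW(2.8 nHz) = {omega_gw:.2e} for A = {A:.1e}")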

  4. ALMA Observations of TNOs

    NASA Astrophysics Data System (ADS)

    Butler, Bryan J.; Brown, Michael E.

    2016-10-01

    Some of the most fundamental properties of TNOs are still quite poorly constrained, including diameter and density. Observations at long thermal wavelengths, in the millimeter and submillimeter, hold promise for determining these quantities, at least for the largest of these bodies (and notably for those with companions). Knowing this information can then yield clues as to the formation mechanism of these bodies, allowing us to distinguish between pairwise accretion and other formation scenarios.We have used the Atacama Large Millimeter/Submillimeter Array (ALMA) to observe Orcus, Quaoar, Salacia, and 2002 UX25 at wavelengths of 1.3 and 0.8 mm, in order to constrain the sizes of these bodies. We have also used ALMA to make astrometric observations of the Eris-Dysnomia system, in an attempt to measure the wobble of Eris and hence accurately determine its density. Dysnomia should also be directly detectable in those data, separate from Eris (ALMA has sufficient resolution in the configuration in which the observations were made). Results from these observations will be presented and discussed.

  5. Using seismically constrained magnetotelluric inversion to recover velocity structure in the shallow lithosphere

    NASA Astrophysics Data System (ADS)

    Moorkamp, M.; Fishwick, S.; Jones, A. G.

    2015-12-01

    Typical surface wave tomography can recover well the velocity structure of the upper mantle in the depth range between 70-200 km. For a successful inversion, we have to constrain the crustal structure and assess the impact on the resulting models. In addition, we often observe potentially interesting features in the uppermost lithosphere which are poorly resolved and whose interpretation therefore has to be approached with great care. We are currently developing a seismically constrained magnetotelluric (MT) inversion approach with the aim of better recovering the lithospheric properties (and thus seismic velocities) in these problematic areas. We perform a 3D MT inversion constrained by a fixed seismic velocity model from surface wave tomography. In order to avoid strong bias, we only utilize information on structural boundaries to combine these two methods. Within the region that is well resolved by both methods, we can then extract a velocity-conductivity relationship. By translating the conductivities retrieved from MT into velocities in areas where the velocity model is poorly resolved, we can generate an updated velocity model and test what impact the updated velocities have on the predicted data. We test this new approach using an MT dataset acquired in central Botswana over the Okwa terrane and the adjacent Kaapvaal and Zimbabwe Cratons, together with a tomographic model for the region. Here, both datasets have previously been used to constrain lithospheric structure and show some similarities. We carefully assess the validity of our results by comparing with observations and petrophysical predictions for the conductivity-velocity relationship.

  6. Exploring JWST's Capability to Constrain Habitability on Simulated Terrestrial TESS Planets

    NASA Astrophysics Data System (ADS)

    Tremblay, Luke; Britt, Amber; Batalha, Natasha; Schwieterman, Edward; Arney, Giada; Domagal-Goldman, Shawn; Mandell, Avi; Planetary Systems Laboratory; Virtual Planetary Laboratory

    2017-01-01

    In the following, we have worked to develop a flexible "observability" scale of biologically relevant molecules in the atmospheres of newly discovered exoplanets for the instruments aboard NASA's next flagship mission, the James Webb Space Telescope (JWST). We sought to create such a scale in order to provide the community with a tool with which to optimize target selection for JWST observations based on detections by the upcoming Transiting Exoplanet Survey Satellite (TESS). Current literature has laid the groundwork for defining both biologically relevant molecules and the characteristics that would make a new world "habitable", but it has so far lacked a cohesive analysis of JWST's capabilities to observe these molecules in exoplanet atmospheres and thereby constrain habitability. In developing our Observability Scale, we utilized a range of hypothetical planets (over planetary radii and stellar insolation) and generated three self-consistent atmospheric models (of different molecular compositions) for each of our simulated planets. With these planets and their corresponding atmospheres, we utilized the most accurate JWST instrument simulator, created specifically to process transiting exoplanet spectra. Through careful analysis of these simulated outputs, we were able to determine the relevant parameters that affected JWST's ability to constrain each individual molecular band with statistical accuracy and therefore generate a scale based on those key parameters. As a preliminary test of our Observability Scale, we have also applied it to the list of TESS candidate stars in order to determine JWST's observational capabilities for any soon-to-be-detected planet in those systems.

  7. Constraining Sommerfeld enhanced annihilation cross-sections of dark matter via direct searches

    NASA Astrophysics Data System (ADS)

    Arina, Chiara; Josse-Michaux, François-Xavier; Sahu, Narendra

    2010-08-01

    In a large class of models we show that the light scalar field responsible for the Sommerfeld enhancement in the annihilation of dark matter leads to observable direct detection rates, due to its mixing with the standard model Higgs. As a result the large annihilation cross-section of dark matter at present epoch, required to explain the observed cosmic ray anomalies, can be strongly constrained by direct searches. In particular Sommerfeld boost factors of order of a few hundred are already out of the CDMS-II upper bound at 90% confidence level for reasonable values of the model parameters.

  8. Constraining Cometary Crystal Shapes from IR Spectral Features

    NASA Astrophysics Data System (ADS)

    Wooden, D. H.; Lindsay, S.; Harker, D. E.; Kelley, M. S.; Woodward, C. E.; Murphy, J. R.

    2013-12-01

    A major challenge in deriving the silicate mineralogy of comets is ascertaining how the anisotropic nature of forsterite crystals affects the spectral features' wavelength, relative intensity, and asymmetry. Forsterite features are identified in cometary comae near 10, 11.05-11.2, 16, 19, 23.5, 27.5 and 33 μm [1-10], so accurate models for forsterite's absorption efficiency (Qabs) are a primary requirement to compute IR spectral energy distributions (SEDs, λFλ vs. λ) and constrain the silicate mineralogy of comets. Forsterite is an anisotropic crystal, with three crystallographic axes with distinct indices of refraction for the a-, b-, and c-axis. The shape of a forsterite crystal significantly affects its spectral features [13-16]. We need models that account for crystal shape. The IR absorption efficiencies of forsterite are computed using the discrete dipole approximation (DDA) code DDSCAT [11,12]. Starting from a fiducial crystal shape of a cube, we systematically elongate/reduce one of the crystallographic axes. Also, we elongate/reduce one axis while the lengths of the other two axes are slightly asymmetric (0.8:1.2). The most significant grain shape characteristic that affects the crystalline spectral features is the relative lengths of the crystallographic axes. The second significant grain shape characteristic is breaking the symmetry of all three axes [17]. Synthetic spectral energy distributions using seven crystal shape classes [17] are fit to the observed SED of comet C/1995 O1 (Hale-Bopp). The Hale-Bopp crystalline residual better matches equant, b-platelets, c-platelets, and b-columns spectral shape classes, while a-platelets, a-columns and c-columns worsen the spectral fits. Forsterite condensation and partial evaporation experiments demonstrate that environmental temperature and grain shape are connected [18-20]. Thus, grain shape is a potential probe for protoplanetary disk temperatures where the cometary crystalline forsterite formed. The

  9. Neurally Constrained Modeling of Perceptual Decision Making

    PubMed Central

    Purcell, Braden A.; Heitz, Richard P.; Cohen, Jeremiah Y.; Schall, Jeffrey D.; Logan, Gordon D.; Palmeri, Thomas J.

    2010-01-01

    Stochastic accumulator models account for response time in perceptual decision-making tasks by assuming that perceptual evidence accumulates to a threshold. The present investigation mapped the firing rate of frontal eye field (FEF) visual neurons onto perceptual evidence and the firing rate of FEF movement neurons onto evidence accumulation to test alternative models of how evidence is combined in the accumulation process. The models were evaluated on their ability to predict both response time distributions and movement neuron activity observed in monkeys performing a visual search task. Models that assume gating of perceptual evidence to the accumulating units provide the best account of both behavioral and neural data. These results identify discrete stages of processing with anatomically distinct neural populations and rule out several alternative architectures. The results also illustrate the use of neurophysiological data as a model selection tool and establish a novel framework to bridge computational and neural levels of explanation. PMID:20822291
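
    A gated stochastic accumulator of the general kind described above can be sketched as follows: input above a gate is rectified and integrated to a response threshold, and the first-passage times give a response-time distribution. The drift, gate, threshold, and noise values are illustrative assumptions, not the parameters fitted to FEF data in the paper.

        import numpy as np

        def simulate_rt(n_trials=500, dt=0.001, threshold=1.0, gate=0.15,
                        drift=0.35, noise=0.35, t_max=2.0, seed=4):
            """Gated stochastic accumulator: only supra-gate evidence is integrated."""
            rng = np.random.default_rng(seed)
            rts = np.full(n_trials, np.nan)
            n_steps = int(t_max / dt)
            for trial in range(n_trials):
                acc = 0.0
                for step in range(n_steps):
                    evidence = drift + noise * rng.normal() / np.sqrt(dt)
                    acc += max(evidence - gate, 0.0) * dt    # gating + rectification
                    if acc >= threshold:
                        rts[trial] = (step + 1) * dt         # first-passage time
                        break
            return rts

        rts = simulate_rt()
        valid = rts[~np.isnan(rts)]
        print(f"mean RT = {valid.mean() * 1000:.0f} ms over {valid.size} finished trials")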

  10. Constraining Emission Models of Luminous Blazar Sources

    SciTech Connect

    Sikora, Marek; Stawarz, Lukasz; Moderski, Rafal; Nalewajko, Krzysztof; Madejski, Greg; /KIPAC, Menlo Park /SLAC

    2009-10-30

    Many luminous blazars which are associated with quasar-type active galactic nuclei display broad-band spectra characterized by a large luminosity ratio of their high-energy (γ-ray) and low-energy (synchrotron) spectral components. This large ratio, reaching values up to 100, challenges the standard synchrotron self-Compton models by means of substantial departures from the minimum power condition. Luminous blazars also typically have very hard X-ray spectra, which in turn seem to challenge hadronic scenarios for the high energy blazar emission. As shown in this paper, no such problems are faced by models which involve Comptonization of radiation provided by a broad-line region or dusty molecular torus. The lack or weakness of bulk Compton and Klein-Nishina features indicated by the presently available data favors production of γ-rays via up-scattering of infrared photons from hot dust. This implies that the blazar emission zone is located at parsec-scale distances from the nucleus, and as such is possibly associated with the extended, quasi-stationary reconfinement shocks formed in relativistic outflows. This scenario predicts characteristic timescales for flux changes in luminous blazars of days/weeks, consistent with the variability patterns observed in such systems at infrared, optical and γ-ray frequencies. We also propose that the parsec-scale blazar activity can occasionally be accompanied by dissipative events taking place at sub-parsec distances and powered by internal shocks and/or reconnection of magnetic fields. These could account for the multiwavelength intra-day flares occasionally observed in powerful blazar sources.

  11. Right-Left Approach and Reaching Arm Movements of 4-Month Infants in Free and Constrained Conditions

    ERIC Educational Resources Information Center

    Morange-Majoux, Francoise; Dellatolas, Georges

    2010-01-01

    Recent theories on the evolution of language (e.g. Corballis, 2009) emphasize the interest of early manifestations of manual laterality and manual specialization in human infants. In the present study, left- and right-hand movements towards a midline object were observed in 24 infants aged 4 months in a constrained condition, in which the hands…

  12. Constrained iterations for blind deconvolution and convexity issues

    NASA Astrophysics Data System (ADS)

    Spaletta, Giulia; Caucci, Luca

    2006-12-01

    The need for image restoration arises in many applications of various scientific disciplines, such as medicine and astronomy and, in general, whenever an unknown image must be recovered from blurred and noisy data [M. Bertero, P. Boccacci, Introduction to Inverse Problems in Imaging, Institute of Physics Publishing, Philadelphia, PA, USA, 1998]. The algorithm studied in this work restores the image without the knowledge of the blur, using little a priori information and a blind inverse filter iteration. It represents a variation of the methods proposed in Kundur and Hatzinakos [A novel blind deconvolution scheme for image restoration using recursive filtering, IEEE Trans. Signal Process. 46(2) (1998) 375-390] and Ng et al. [Regularization of RIF blind image deconvolution, IEEE Trans. Image Process. 9(6) (2000) 1130-1134]. The problem of interest here is an inverse one, that cannot be solved by simple filtering since it is ill-posed. The imaging system is assumed to be linear and space-invariant: this allows a simplified relationship between unknown and observed images, described by a point spread function modeling the distortion. The blurring, though, makes the restoration ill-conditioned: regularization is therefore also needed, obtained by adding constraints to the formulation. The restoration is modeled as a constrained minimization: particular attention is given here to the analysis of the objective function and on establishing whether or not it is a convex function, whose minima can be located by classic optimization techniques and descent methods. Numerical examples are applied to simulated data and to real data derived from various applications. Comparison with the behavior of methods [D. Kundur, D. Hatzinakos, A novel blind deconvolution scheme for image restoration using recursive filtering, IEEE Trans. Signal Process. 46(2) (1998) 375-390] and [M. Ng, R.J. Plemmons, S. Qiao, Regularization of RIF Blind Image Deconvolution, IEEE Trans. Image Process. 9

  13. A moving mesh unstaggered constrained transport scheme for magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Mocz, Philip; Pakmor, Rüdiger; Springel, Volker; Vogelsberger, Mark; Marinacci, Federico; Hernquist, Lars

    2016-11-01

    We present a constrained transport (CT) algorithm for solving the 3D ideal magnetohydrodynamic (MHD) equations on a moving mesh, which maintains the divergence-free condition on the magnetic field to machine precision. Our CT scheme uses an unstructured representation of the magnetic vector potential, making the numerical method simple and computationally efficient. The scheme is implemented in the moving mesh code AREPO. We demonstrate the performance of the approach with simulations of driven MHD turbulence, a magnetized disc galaxy, and a cosmological volume with a primordial magnetic field. We compare the outcomes of these experiments to those obtained with a previously implemented Powell divergence-cleaning scheme. While CT and the Powell technique yield similar results in idealized test problems, some differences are seen in situations more representative of astrophysical flows. In the turbulence simulations, the Powell cleaning scheme artificially grows the mean magnetic field, while CT maintains this conserved quantity of ideal MHD. In the disc simulation, CT gives a slower magnetic field growth rate and saturates to equipartition between the turbulent kinetic energy and magnetic energy, whereas Powell cleaning produces a dynamically dominant magnetic field. Such differences have been observed between adaptive-mesh refinement codes with CT and smoothed-particle hydrodynamics codes with divergence-cleaning. In the cosmological simulation, both approaches give similar magnetic amplification, but Powell exhibits more cell-level noise. CT methods are in general more accurate than divergence-cleaning techniques and, when coupled to a moving mesh, can exploit the advantages of automatic spatial/temporal adaptivity and reduced advection errors, allowing for improved astrophysical MHD simulations.
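
    The core CT property, a discretely divergence-free field obtained by differencing a vector potential along cell edges, can be illustrated with a small 2D toy on a fixed staggered grid. This is a sketch of the idea only, not the unstructured moving-mesh scheme of the paper.

        import numpy as np

        # 2D toy of constrained transport: store the vector potential A_z at cell corners
        # and define face-centred B as finite differences of A_z along cell edges.
        # The discrete divergence of B then vanishes at round-off level by construction.
        nx, ny = 64, 64
        dx = dy = 1.0 / nx
        x = np.linspace(0.0, 1.0, nx + 1)
        y = np.linspace(0.0, 1.0, ny + 1)
        X, Y = np.meshgrid(x, y, indexing="ij")

        # An arbitrary (even noisy) vector potential at the corners.
        A = (np.sin(2 * np.pi * X) * np.cos(4 * np.pi * Y)
             + 0.1 * np.random.default_rng(5).normal(size=X.shape))

        # Face-centred field components: Bx on vertical faces, By on horizontal faces.
        Bx = (A[:, 1:] - A[:, :-1]) / dy           # shape (nx+1, ny)
        By = -(A[1:, :] - A[:-1, :]) / dx          # shape (nx, ny+1)

        # Discrete divergence in each cell cancels exactly (up to floating-point error).
        divB = (Bx[1:, :] - Bx[:-1, :]) / dx + (By[:, 1:] - By[:, :-1]) / dy
        print("max |div B| =", np.abs(divB).max())

    Because B is always the discrete curl of A, any update applied to A (rather than to B directly) preserves this property, which is the essential difference from divergence-cleaning approaches.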

  14. Analyzing and constraining signaling networks: parameter estimation for the user.

    PubMed

    Geier, Florian; Fengos, Georgios; Felizzi, Federico; Iber, Dagmar

    2012-01-01

    The behavior of most dynamical models depends not only on the wiring but also on the kind and strength of interactions, which are reflected in the parameter values of the model. The predictive value of mathematical models therefore critically hinges on the quality of the parameter estimates. Constraining a dynamical model by an appropriate parameterization follows a 3-step process. In an initial step, it is important to evaluate the sensitivity of the parameters of the model with respect to the model output of interest. This analysis points at the identifiability of model parameters and can guide the design of experiments. In the second step, the actual fitting needs to be carried out. This step requires special care as, on the one hand, noisy as well as partial observations can corrupt the identification of system parameters. On the other hand, the solution of the dynamical system usually depends in a highly nonlinear fashion on its parameters and, as a consequence, parameter estimation procedures easily get trapped in local optima. Therefore any useful parameter estimation procedure has to be robust and efficient with respect to both challenges. In the final step, it is important to assess the validity of the optimized model. A number of reviews have been published on the subject. A good, nontechnical overview is provided by Jaqaman and Danuser (Nat Rev Mol Cell Biol 7(11):813-819, 2006) and a classical introduction, focussing on the algorithmic side, is given in Press (Numerical Recipes: The Art of Scientific Computing, Cambridge University Press, 3rd edn., 2007, Chapters 10 and 15). We will focus on the practical issues related to parameter estimation and use a model of the TGFβ-signaling pathway as an educative example. Corresponding parameter estimation software and models based on MATLAB code can be downloaded from the authors' web page ( http://www.bsse.ethz.ch/cobi ).
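
    The 3-step workflow (sensitivity analysis, fitting, validity check) can be sketched on a toy one-species model; the model, data, and tolerances below are illustrative assumptions, not the TGFβ pathway model or the authors' MATLAB software.

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import least_squares

        # Toy model (not the TGF-beta model from the chapter): a species is produced at
        # rate k_prod and degraded at rate k_deg; we observe it at a few time points.
        def simulate(params, t_obs):
            k_prod, k_deg = params
            sol = solve_ivp(lambda t, y: k_prod - k_deg * y, (0.0, t_obs[-1]), [0.0],
                            t_eval=t_obs, rtol=1e-8)
            return sol.y[0]

        rng = np.random.default_rng(6)
        t_obs = np.linspace(0.5, 10.0, 12)
        true = np.array([2.0, 0.4])
        data = simulate(true, t_obs) + rng.normal(0.0, 0.1, t_obs.size)  # noisy observations

        # Step 1: local sensitivity of the output to each parameter (finite differences).
        def sensitivity(params, rel_step=1e-3):
            base = simulate(params, t_obs)
            sens = []
            for i in range(len(params)):
                p = params.copy()
                p[i] *= 1.0 + rel_step
                sens.append((simulate(p, t_obs) - base) / (params[i] * rel_step))
            return np.array(sens)

        print("sensitivity norms:", np.linalg.norm(sensitivity(true), axis=1))

        # Step 2: fit from a deliberately poor initial guess, with positivity bounds.
        fit = least_squares(lambda p: simulate(p, t_obs) - data, x0=[0.5, 1.0],
                            bounds=([1e-6, 1e-6], [10.0, 10.0]))
        print("estimated parameters:", np.round(fit.x, 3))

        # Step 3: a crude validity check -- residuals should resemble the noise level.
        print("residual RMS:", np.round(np.sqrt(np.mean(fit.fun ** 2)), 3))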

  15. Constraining the nature of the asthenosphere

    NASA Astrophysics Data System (ADS)

    Fahy, E. H.; Hall, P.; Faul, U.

    2010-12-01

    Geophysical observations indicate that the oceanic upper mantle has relatively low seismic velocities, high seismic attenuation, and high electrical conductivity at depths of ~80-200 km. These depths coincide with the rheologically weak layer known as the asthenosphere. Three hypotheses have been proposed to account for these observations: 1) the presence of volatiles, namely water, in the oceanic upper mantle; 2) the presence of small-degree partial melts in the oceanic upper mantle; and 3) variations in the physical properties of dry, melt-free peridotite with temperature and pressure. Each of these hypotheses suggests a characteristic distribution of volatiles and melt in the upper mantle, resulting in corresponding spatial variations in viscosity, density, seismic structure, and electrical conductivity. These viscosity and density scenarios can also lead to variations in the onset time and growth rate of thermal instabilities at the base of the overriding lithosphere, which can in turn affect heat flow, bathymetry, and seismic structure. We report on the results of a series of computational geodynamic experiments that evaluate the dynamical consequences of each of the three proposed scenarios. Experiments were conducted using the CitcomCU finite element package to model the evolution of the oceanic lithosphere and flow in the underlying mantle. Our model domain consists of 2048x256x64 elements, corresponding to physical dimensions of 12,800x1600x400 km. These dimensions allow us to consider oceanic lithosphere to ages of ~150 Ma. We adopt the composite rheology law from Billen & Hirth (2007), which combines both diffusion and dislocation creep mechanisms, and consider a range of rheological parameters (e.g., activation energy, activation volume, grain size) as obtained from laboratory deformation experiments [e.g. Hirth & Kohlstedt, 2003]. Melting and volatile content within the model domain are tracked using a Lagrangian particle scheme. Variations in depletion

  16. Constraining the Mean Crustal Thickness on Mercury

    NASA Technical Reports Server (NTRS)

    Nimmo, F.

    2001-01-01

    The topography of Mercury is poorly known, with only limited radar and stereo coverage available. However, radar profiles reveal topographic contrasts of several kilometers over wavelengths of approximately 1000 km. The bulk of Mercury's geologic activity took place within the first 1 Ga of the planet's history, and it is therefore likely that these topographic features derive from this period. On Earth, long wavelength topographic features are supported either convectively, or through some combination of isostasy and flexure. Photographic images show no evidence for plume-like features, nor for plate tectonics; I therefore assume that neither convective support nor Pratt isostasy is operating. The composition and structure of the crust of Mercury are almost unknown. The reflectance spectrum of the surface of Mercury is similar to that of the lunar highlands, which are predominantly plagioclase. Anderson et al. used the observed center-of-mass/center-of-figure offset together with an assumption of Airy isostasy to infer a crustal thickness of 100-300 km. Based on tidal despinning arguments, the early elastic thickness (T_e) of the (unfractured) lithosphere was approximately equal to or less than 100 km. Thrust faults with lengths of up to 500 km and ages of about 4 Ga B.P. are known to exist on Mercury. Assuming a semicircular slip distribution and a typical thrust fault angle of 10 degrees, the likely vertical depth to the base of these faults is about 45 km. More sophisticated modelling gives similar or slightly smaller answers. The depth to the base of faulting and the elastic layer are usually similar on Earth, and both are thought to be thermally controlled. Assuming that the characteristic temperature is about 750 K, the observed fault depth implies that the heat flux at 4 Ga B.P. is unlikely to be less than 20 mW m⁻² for a linear temperature gradient. For an elastic thickness of 45 km, topography at 1000 km wavelength is likely to be about 60

  17. Constraining the aerosol influence on cloud fraction

    NASA Astrophysics Data System (ADS)

    Gryspeerdt, E.; Quaas, J.; Bellouin, N.

    2016-04-01

    Aerosol-cloud interactions have the potential to modify many different cloud properties. There is significant uncertainty in the strength of these aerosol-cloud interactions in analyses of observational data, partly due to the difficulty in separating aerosol effects on clouds from correlations generated by local meteorology. The relationship between aerosol and cloud fraction (CF) is particularly important to determine, due to the strong correlation of CF to other cloud properties and its large impact on radiation. It has also been one of the hardest to quantify from satellites due to the strong meteorological covariations involved. This work presents a new method to analyze the relationship between aerosol optical depth (AOD) and CF. By including information about the cloud droplet number concentration (CDNC), the impact of the meteorological covariations is significantly reduced. This method shows that much of the AOD-CF correlation is explained by relationships other than that mediated by CDNC. By accounting for these, the strength of the global mean AOD-CF relationship is reduced by around 80%. This suggests that the majority of the AOD-CF relationship is due to meteorological covariations, especially in the shallow cumulus regime. Requiring CDNC to mediate the AOD-CF relationship implies an effective anthropogenic radiative forcing from an aerosol influence on liquid CF of -0.48 W m-2 (-0.1 to -0.64 W m-2), although some uncertainty remains due to possible biases in the CDNC retrievals in broken cloud scenes.
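
    A schematic way to see how controlling for CDNC separates the mediated and non-mediated parts of the AOD-CF relationship is a partial-correlation calculation on synthetic data, as sketched below. This is only an illustration of the general idea; the variable names, the synthetic relationships, and the use of a simple linear partial correlation are assumptions and do not reproduce the method of the paper.

        import numpy as np

        def partial_corr(x, y, z):
            # Correlation between x and y after removing the part of each that
            # is linearly explained by z (a stand-in for "mediated by CDNC").
            rx = x - np.polyval(np.polyfit(z, x, 1), z)
            ry = y - np.polyval(np.polyfit(z, y, 1), z)
            return np.corrcoef(rx, ry)[0, 1]

        # Synthetic example: CF depends on CDNC plus a meteorological factor;
        # AOD correlates with both, mimicking meteorological covariation.
        rng = np.random.default_rng(5)
        met = rng.standard_normal(10000)
        cdnc = 0.5 * rng.standard_normal(10000)
        aod = cdnc + 0.8 * met + 0.2 * rng.standard_normal(10000)
        cf = 0.3 * cdnc + 0.6 * met + 0.3 * rng.standard_normal(10000)
        print(np.corrcoef(aod, cf)[0, 1])     # raw AOD-CF correlation
        print(partial_corr(aod, cf, cdnc))    # the part not mediated by CDNC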

  18. Constraining the origin of magnetar flares

    NASA Astrophysics Data System (ADS)

    Link, Bennett

    2014-07-01

    Sudden relaxation of the magnetic field in the core of a magnetar produces mechanical energy primarily in the form of shear waves which propagate to the surface and enter the magnetosphere as relativistic Alfvén waves. Due to a strong impedance mismatch, shear waves excited in the star suffer many reflections before exiting the star. If mechanical energy is deposited in the core and is converted directly to radiation upon propagation to the surface, the rise time of the emission is at least seconds to minutes, and probably minutes to hours for a realistic magnetic field geometry, at odds with observed rise times of ≲10 ms for both small and giant flares. Mechanisms for both small and giant flares that rely on the sudden relaxation of the magnetic field of the core are rendered unviable by the impedance mismatch, requiring the energy that drives these events to be stored in the magnetosphere just before the flare. A corollary to this conclusion is that if the quasi-periodic oscillations seen in giant flares represent stellar oscillations, they must be excited by the magnetosphere, not by mechanical energy released inside the star. Excitation of stellar oscillations by relativistic Alfvén waves in the magnetosphere could be quick enough to excite stellar modes well before a giant flare ends, unless the waves are quickly damped.

  19. Constraining the Statistics of Population III Binaries

    NASA Technical Reports Server (NTRS)

    Stacy, Athena; Bromm, Volker

    2012-01-01

    We perform a cosmological simulation in order to model the growth and evolution of Population III (Pop III) stellar systems in a range of host minihalo environments. A Pop III multiple system forms in each of the ten minihaloes, and the overall mass function is top-heavy compared to the currently observed initial mass function in the Milky Way. Using a sink particle to represent each growing protostar, we examine the binary characteristics of the multiple systems, resolving orbits on scales as small as 20 AU. We find a binary fraction of approx. 36%, with semi-major axes as large as 3000 AU. The distribution of orbital periods is slightly peaked at periods of approx. 900 yr or less, while the distribution of mass ratios is relatively flat. Of all sink particles formed within the ten minihaloes, approx. 50% are lost to mergers with larger sinks, and 50% of the remaining sinks are ejected from their star-forming disks. The large binary fraction may have important implications for Pop III evolution and nucleosynthesis, as well as the final fate of the first stars.

  20. Constraining nonstandard neutrino interactions with electrons

    NASA Astrophysics Data System (ADS)

    Forero, D. V.; Guzzo, M. M.

    2011-07-01

    We update the phenomenological constraints on nonstandard neutrino interactions (NSNI) with electrons, including in the analysis, for the first time, data from LAMPF, Krasnoyarsk, and the latest Texono observations. We assume that NSNI modify the cross section of elastic scattering of (anti)neutrinos off electrons, using reactor and accelerator data, and the cross section of electron-positron annihilation, using the four LEP experiments, in particular, new data from DELPHI. We find more restrictive allowed regions for the NSNI parameters: -0.11 < ε_ee^eR < 0.05 and -0.02 < ε_ee^eL < 0.09 (90% C.L.). We also recalculate the parameters of tauonic flavor, obtaining -0.35 < ε_ττ^eR < 0.50 and -0.51 < ε_ττ^eL < 0.34 (90% C.L.). Although more severe than the limits already present in the literature, our results indicate that NSNI are allowed by the present data as a subleading effect, and the standard electroweak model remains consistent with the experimental picture at 90% C.L. Further improvement of this picture will require substantial effort from upcoming experiments.

  1. Residual flexibility test method for verification of constrained structural models

    NASA Technical Reports Server (NTRS)

    Admire, John R.; Tinker, Michael L.; Ivey, Edward W.

    1994-01-01

    A method is described for deriving constrained modes and frequencies from a reduced model based on a subset of the free-free modes plus the residual effects of neglected modes. The method involves a simple modification of the MacNeal and Rubin component mode representation to allow development of a verified constrained (fixed-base) structural model. Results for two spaceflight structures having translational boundary degrees of freedom show quick convergence of constrained modes using a measurable number of free-free modes plus the boundary partition of the residual flexibility matrix. This paper presents the free-free residual flexibility approach as an alternative test/analysis method when fixed-base testing proves impractical.

  2. Using extracellular action potential recordings to constrain compartmental models.

    PubMed

    Gold, Carl; Henze, Darrell A; Koch, Christof

    2007-08-01

    We investigate the use of extracellular action potential (EAP) recordings for biophysically faithful compartmental models. We ask whether constraining a model to fit the EAP is superior to matching the intracellular action potential (IAP). In agreement with previous studies, we find that the IAP method under-constrains the parameters. As a result, significantly different sets of parameters can have virtually identical IAPs. In contrast, the EAP method results in a much tighter constraint. We find that the distinguishing characteristics of the waveform (but not its amplitude) resulting from the distribution of active conductances are fairly invariant to changes of electrode position and detailed cellular morphology. Based on these results, we conclude that EAP recordings are an excellent source of data for the purpose of constraining compartmental models.

  3. Constrained modes in control theory - Transmission zeros of uniform beams

    NASA Technical Reports Server (NTRS)

    Williams, T.

    1992-01-01

    Mathematical arguments are presented demonstrating that the well-established control system concept of the transmission zero is very closely related to the structural concept of the constrained mode. It is shown that the transmission zeros of a flexible structure form a set of constrained natural frequencies for it, with the constraints depending explicitly on the locations and the types of sensors and actuators used for control. Based on this formulation, an algorithm is derived and used to produce dimensionless plots of the zeros of a uniform beam with a compatible sensor/actuator pair.

  4. Augmented Lagrangian method for constrained nuclear density functional theory

    NASA Astrophysics Data System (ADS)

    Staszczak, A.; Stoitsov, M.; Baran, A.; Nazarewicz, W.

    2010-10-01

    The augmented Lagrangian method (ALM), widely used in quantum chemistry constrained optimization problems, is applied in the context of the nuclear Density Functional Theory (DFT) in the self-consistent constrained Skyrme Hartree-Fock-Bogoliubov (CHFB) variant. The ALM allows precise calculations of multi-dimensional energy surfaces in the space of collective coordinates that are needed to, e.g., determine fission pathways and saddle points; it improves the accuracy of computed derivatives with respect to collective variables that are used to determine collective inertia; and is well adapted to supercomputer applications.
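
    A minimal sketch of the augmented Lagrangian idea for a generic constrained minimization (not the Skyrme-HFB implementation itself) is shown below: the objective is augmented with a multiplier term and a quadratic penalty on the constraint, the augmented function is minimized, and the multipliers are then updated. The toy objective, constraint, and penalty parameter are assumptions chosen only for illustration.

        import numpy as np
        from scipy.optimize import minimize

        def augmented_lagrangian(E, c, x0, lam0, mu=10.0, n_outer=20, tol=1e-8):
            # Minimize E(x) subject to c(x) = 0 (vector of constraints) by
            # repeatedly minimizing the augmented Lagrangian
            #     L(x) = E(x) - lam . c(x) + (mu/2) |c(x)|^2
            # and updating the multipliers lam <- lam - mu * c(x).
            x, lam = np.asarray(x0, float), np.asarray(lam0, float)
            for _ in range(n_outer):
                L = lambda x: E(x) - lam @ c(x) + 0.5 * mu * np.sum(c(x)**2)
                x = minimize(L, x).x
                viol = c(x)
                if np.linalg.norm(viol) < tol:
                    break
                lam = lam - mu * viol
            return x, lam

        # Toy example: minimize x0^2 + x1^2 with the "collective coordinate"
        # x0 + x1 constrained to equal 1.
        E = lambda x: x[0]**2 + x[1]**2
        c = lambda x: np.array([x[0] + x[1] - 1.0])
        x_opt, _ = augmented_lagrangian(E, c, x0=[0.0, 0.0], lam0=[0.0])
        print(x_opt)   # ~ [0.5, 0.5]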

  5. Value, Cost, and Sharing: Open Issues in Constrained Clustering

    NASA Technical Reports Server (NTRS)

    Wagstaff, Kiri L.

    2006-01-01

    Clustering is an important tool for data mining, since it can identify major patterns or trends without any supervision (labeled data). Over the past five years, semi-supervised (constrained) clustering methods have become very popular. These methods began with incorporating pairwise constraints and have developed into more general methods that can learn appropriate distance metrics. However, several important open questions have arisen about which constraints are most useful, how they can be actively acquired, and when and how they should be propagated to neighboring points. This position paper describes these open questions and suggests future directions for constrained clustering research.

  6. Augmented Lagrangian Method for Constrained Nuclear Density Functional Theory

    SciTech Connect

    Staszczak, A.; Stoitsov, Mario; Baran, Andrzej K; Nazarewicz, Witold

    2010-01-01

    The augmented Lagrangian method (ALM), widely used in quantum chemistry constrained optimization problems, is applied in the context of the nuclear Density Functional Theory (DFT) in the self-consistent constrained Skyrme Hartree-Fock-Bogoliubov (CHFB) variant. The ALM allows precise calculations of multidimensional energy surfaces in the space of collective coordinates that are needed to, e.g., determine fission pathways and saddle points; it improves accuracy of computed derivatives with respect to collective variables that are used to determine collective inertia and is well adapted to supercomputer applications.

  7. A lexicographic approach to constrained MDP admission control

    NASA Astrophysics Data System (ADS)

    Panfili, Martina; Pietrabissa, Antonio; Oddi, Guido; Suraci, Vincenzo

    2016-02-01

    This paper proposes a reinforcement learning-based lexicographic approach to the call admission control problem in communication networks. The admission control problem is modelled as a multi-constrained Markov decision process. To overcome the problems of the standard approaches to the solution of constrained Markov decision processes, based on the linear programming formulation or on a Lagrangian approach, a multi-constraint lexicographic approach is defined, and an online implementation based on reinforcement learning techniques is proposed. Simulations validate the proposed approach.

  8. Constrained optimization for image restoration using nonlinear programming

    NASA Technical Reports Server (NTRS)

    Yeh, C.-L.; Chin, R. T.

    1985-01-01

    The constrained optimization problem for image restoration, utilizing incomplete information and partial constraints, is formulated using nonlinear programming techniques. This method restores a distorted image by optimizing a chosen objective function subject to available constraints. The penalty function method of nonlinear programming is used. Both linear and nonlinear objective functions, and linear and nonlinear constraint functions, can be incorporated in the formulation. This formulation provides a generalized approach to solve constrained optimization problems for image restoration. Experiments using this scheme have been performed. The results are compared with those obtained from other restoration methods and the comparative study is presented.
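
    The penalty-function idea can be illustrated with a toy one-dimensional restoration problem: a data-fit objective is minimized while a penalty term, whose weight is gradually increased, enforces a non-negativity constraint on the restored signal. The blur operator, the choice of constraint, and the penalty schedule below are assumptions made for the sketch and are not the formulation of the paper.

        import numpy as np
        from scipy.optimize import minimize

        # Toy 1-D "image" restoration: recover f from g = H f + noise by
        # minimizing a data-fit objective plus a penalty on constraint violation.
        rng = np.random.default_rng(0)
        n = 32
        true = np.sin(np.linspace(0, np.pi, n))
        H = np.array([[np.exp(-0.5 * ((i - j) / 2.0) ** 2) for j in range(n)]
                      for i in range(n)])
        H /= H.sum(axis=1, keepdims=True)               # simple Gaussian blur
        g = H @ true + 0.01 * rng.standard_normal(n)

        def objective(f, rho):
            data_fit = np.sum((H @ f - g) ** 2)         # chosen objective function
            penalty = np.sum(np.minimum(f, 0.0) ** 2)   # violation of f >= 0
            return data_fit + rho * penalty

        f = g.copy()
        for rho in [1.0, 10.0, 100.0]:                  # increasing penalty weight
            f = minimize(objective, f, args=(rho,)).x
        print(np.min(f))                                 # close to non-negative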

  9. Effects of constrained arm swing on vertical center of mass displacement during walking.

    PubMed

    Yang, Hyung Suk; Atkins, Lee T; Jensen, Daniel B; James, C Roger

    2015-10-01

    The purpose of this study was to determine the effects of constraining arm swing on the vertical displacement of the body's center of mass (COM) during treadmill walking and examine several common gait variables that may account for or mask differences in the body's COM motion with and without arm swing. Participants included 20 healthy individuals (10 male, 10 female; age: 27.8 ± 6.8 years). The body's COM displacement, first and second peak vertical ground reaction forces (VGRFs), and lowest VGRF during mid-stance, peak summed bilateral VGRF, lower extremity sagittal joint angles, stride length, and foot contact time were measured with and without arm swing during walking at 1.34 m/s. The body's COM displacement was greater with the arms constrained (arm swing: 4.1 ± 1.2 cm, arm constrained: 4.9 ± 1.2 cm, p < 0.001). Ground reaction force data indicated that the COM displacement increased in both double limb and single limb stance. However, kinematic patterns visually appeared similar between conditions. Shortened stride length and foot contact time also were observed, although these do not seem to account for the increased COM displacement. However, a change in arm COM acceleration might have contributed to the difference. These findings indicate that a change in arm swing causes differences in vertical COM displacement, which could increase energy expenditure.

  10. A multiplicative iterative algorithm for box-constrained penalized likelihood image restoration.

    PubMed

    Chan, Raymond H; Ma, Jun

    2012-07-01

    Image restoration is a computationally intensive problem as a large number of pixel values have to be determined. Since the pixel values of digital images can attain only a finite number of values (e.g., 8-bit images can have only 256 gray levels), one would like to recover an image within some dynamic range. This leads to the imposition of box constraints on the pixel values. The traditional gradient projection methods for constrained optimization can be used to impose box constraints, but they may suffer from either slow convergence or repeated searching for active sets in each iteration. In this paper, we develop a new box-constrained multiplicative iterative (BCMI) algorithm for box-constrained image restoration. The BCMI algorithm just requires pixelwise updates in each iteration, and there is no need to invert any matrices. We give the convergence proof of this algorithm and apply it to total variation image restoration problems, where the observed blurry images contain Poisson, Gaussian, or salt-and-pepper noise.
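
    For context, the sketch below sets up the box-constrained problem the abstract describes and solves it with a plain projected-gradient iteration (one of the traditional baselines mentioned above, not the BCMI algorithm itself): each step takes a gradient step on the data-fit term and then clips the pixel values back into the allowed dynamic range. The operator, step size, and signal are illustrative assumptions.

        import numpy as np

        def box_projected_gradient(A, b, lo=0.0, hi=255.0, step=0.2, n_iter=1000):
            # Projected-gradient baseline for  min ||A x - b||^2  subject to
            # lo <= x <= hi (the box constraint on pixel values).
            x = np.full(A.shape[1], 0.5 * (lo + hi))
            for _ in range(n_iter):
                grad = 2.0 * A.T @ (A @ x - b)
                x = np.clip(x - step * grad, lo, hi)   # gradient step, then project
            return x

        # Tiny example: a 16-"pixel" signal seen through a mildly blurring operator.
        rng = np.random.default_rng(1)
        truth = rng.uniform(0.0, 255.0, 16)
        A = np.eye(16) + 0.05 * rng.standard_normal((16, 16))
        b = A @ truth
        print(np.abs(box_projected_gradient(A, b) - truth).max())   # small error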

  11. Suicide and the Internet: the case of Amanda Todd.

    PubMed

    Lester, David; McSwain, Stephanie; Gunn, John F

    2013-01-01

    In a previous article in this journal, Gunn, Lester and McSwain provided evidence that the ten warning signs for suicide proposed by the American Association of Suicidology are valid for predicting suicidal ideation and behavior. A video posted on YouTube by a 15-year-old girl scored 8-9 out of 10 for these signs. Thirty-nine days after posting the video, the girl killed herself.

  12. 21 CFR 888.3210 - Finger joint metal/metal constrained cemented prosthesis.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Finger joint metal/metal constrained cemented... metal/metal constrained cemented prosthesis. (a) Identification. A finger joint metal/metal constrained..., 1996 for any finger joint metal/metal constrained cemented prosthesis that was in...

  13. 21 CFR 888.3210 - Finger joint metal/metal constrained cemented prosthesis.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Finger joint metal/metal constrained cemented... metal/metal constrained cemented prosthesis. (a) Identification. A finger joint metal/metal constrained..., 1996 for any finger joint metal/metal constrained cemented prosthesis that was in...

  14. 21 CFR 888.3210 - Finger joint metal/metal constrained cemented prosthesis.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Finger joint metal/metal constrained cemented... metal/metal constrained cemented prosthesis. (a) Identification. A finger joint metal/metal constrained..., 1996 for any finger joint metal/metal constrained cemented prosthesis that was in...

  15. 21 CFR 888.3210 - Finger joint metal/metal constrained cemented prosthesis.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Finger joint metal/metal constrained cemented... metal/metal constrained cemented prosthesis. (a) Identification. A finger joint metal/metal constrained..., 1996 for any finger joint metal/metal constrained cemented prosthesis that was in...

  16. 21 CFR 888.3300 - Hip joint metal constrained cemented or uncemented prosthesis.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Hip joint metal constrained cemented or uncemented... metal constrained cemented or uncemented prosthesis. (a) Identification. A hip joint metal constrained... Administration on or before December 26, 1996 for any hip joint metal constrained cemented or...

  17. 21 CFR 888.3210 - Finger joint metal/metal constrained cemented prosthesis.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Finger joint metal/metal constrained cemented... metal/metal constrained cemented prosthesis. (a) Identification. A finger joint metal/metal constrained..., 1996 for any finger joint metal/metal constrained cemented prosthesis that was in...

  18. Constraining primordial magnetic fields with future cosmic shear surveys

    SciTech Connect

    Fedeli, C.; Moscardini, L. E-mail: lauro.moscardini@unibo.it

    2012-11-01

    The origin of astrophysical magnetic fields observed in galaxies and clusters of galaxies is still unclear. One possibility is that primordial magnetic fields generated in the early Universe provide seeds that grow through compression and turbulence during structure formation. A cosmological magnetic field present prior to recombination would produce substantial matter clustering at intermediate/small scales, on top of the standard inflationary power spectrum. In this work we study the effect of this alteration on one particular cosmological observable, cosmic shear. We adopt the semi-analytic halo model in order to describe the non-linear clustering of matter, and feed it with the altered mass variance induced by primordial magnetic fields. We find that the convergence power spectrum is, as expected, substantially enhanced at intermediate/small angular scales, with the exact amplitude of the enhancement depending on the magnitude and power-law index of the magnetic field power spectrum. Specifically, for a fixed amplitude, the effect of magnetic fields is larger for larger spectral indices. We use the predicted statistical errors for a future wide-field cosmic shear survey, on the model of the ESA Cosmic Vision mission Euclid, in order to forecast constraints on the amplitude of primordial magnetic fields as a function of the spectral index. We find that the amplitude will be constrained at the level of ∼ 0.1 nG for n_B ∼ −3, and at the level of ∼ 10^−7 nG for n_B ∼ 3. The latter is at the same level of lower bounds coming from the secondary emission of gamma-ray sources, implying that for high spectral indices Euclid will certainly be able to detect primordial magnetic fields, if they exist. The present study shows how large-scale structure surveys can be used for both understanding the origins of astrophysical magnetic fields and shedding new light on the physics of the pre-recombination Universe.

  19. Multi-asset Black-Scholes model as a variable second class constrained dynamical system

    NASA Astrophysics Data System (ADS)

    Bustamante, M.; Contreras, M.

    2016-09-01

    In this paper, we study the multi-asset Black-Scholes model from a structural point of view. For this, we interpret the multi-asset Black-Scholes equation as a multidimensional one-particle Schrödinger equation. The analysis of the classical Hamiltonian and Lagrangian mechanics associated with this quantum model implies that, in this system, the canonical momenta cannot always be written in terms of the velocities. This feature is a typical characteristic of the constrained systems that appear in high-energy physics. To study this model in the proper form, one must apply Dirac's method for constrained systems. The results of Dirac's analysis indicate that in the correlation parameter space of the multi-asset model, there exists a surface (called the Kummer surface ΣK, where the determinant of the correlation matrix is null) on which the constraint number can vary. We study in detail the cases with N = 2 and N = 3 assets. For these cases, we calculate the propagator of the multi-asset Black-Scholes equation and show that inside the Kummer surface ΣK the propagator is well defined, but outside ΣK the propagator diverges and the option price is not well defined. On ΣK the propagator is obtained as a constrained path integral and its form depends on which region of the Kummer surface the correlation parameters lie in. Thus, the multi-asset Black-Scholes model is an example of a variable constrained dynamical system, a new property that had not been previously observed.

  20. The Emergence of Solar Supergranulation as a Natural Consequence of Rotationally Constrained Interior Convection

    NASA Astrophysics Data System (ADS)

    Featherstone, Nicholas A.; Hindman, Bradley W.

    2016-10-01

    We investigate how rotationally constrained, deep convection might give rise to supergranulation, the largest distinct spatial scale of convection observed in the solar photosphere. While supergranulation is only weakly influenced by rotation, larger spatial scales of convection sample the deep convection zone and are presumably rotationally influenced. We present numerical results from a series of nonlinear, 3D simulations of rotating convection and examine the velocity power distribution realized under a range of Rossby numbers. When rotation is present, the convective power distribution possesses a pronounced peak, at characteristic wavenumber ℓ_peak, whose value increases as the Rossby number is decreased. This distribution of power contrasts with that realized in non-rotating convection, where power increases monotonically from high to low wavenumbers. We find that spatial scales smaller than ℓ_peak behave in analogy to non-rotating convection. Spatial scales larger than ℓ_peak are rotationally constrained and possess substantially reduced power relative to the non-rotating system. We argue that the supergranular scale emerges due to a suppression of power on spatial scales larger than ℓ ≈ 100 owing to the presence of deep, rotationally constrained convection. Supergranulation thus represents the largest non-rotationally constrained mode of solar convection. We conclude that the characteristic spatial scale of supergranulation bounds that of the deep convective motions from above, making supergranulation an indirect measure of the deep-seated dynamics at work in the solar dynamo. Using the spatial scale of supergranulation in conjunction with our numerical results, we estimate an upper bound of 10 m s^-1 for the Sun’s bulk rms convective velocity.

  1. Achieving Procurement Efficiencies in a Budget-Constrained Environment

    DTIC Science & Technology

    2015-05-13

    Briefing charts (Boeing Corporate Contracts): Darryl Scott, Corporate Vice President, Contracts, May 13, 2015; includes material from a 2012 panel on achieving procurement efficiencies in a budget-constrained environment and a comparison of commercial and military production satellites.

  2. Using Diagnostic Text Information to Constrain Situation Models

    ERIC Educational Resources Information Center

    Dutke, Stephan; Baadte, Christiane; Hahnel, Andrea; von Hecker, Ulrich; Rinck, Mike

    2010-01-01

    During reading, the model of the situation described by the text is continuously accommodated to new text input. The hypothesis was tested that readers are particularly sensitive to diagnostic text information that can be used to constrain their existing situation model. In 3 experiments, adult participants read narratives about social situations…

  3. Joint Force Interdependence for a Fiscally Constrained Future

    DTIC Science & Technology

    2013-03-01

    Strategy research project: "Joint Force Interdependence for a Fiscally Constrained Future," by Colonel Daniel P. Ray, United States Army, United States Army War College Class of 2013 (project adviser: Dr. Richard Meinhart). Approved for public release; distribution is unlimited.

  4. Bayesian Item Selection in Constrained Adaptive Testing Using Shadow Tests

    ERIC Educational Resources Information Center

    Veldkamp, Bernard P.

    2010-01-01

    Application of Bayesian item selection criteria in computerized adaptive testing might result in improvement of bias and MSE of the ability estimates. The question remains how to apply Bayesian item selection criteria in the context of constrained adaptive testing, where large numbers of specifications have to be taken into account in the item…

  5. Hamiltonian dynamics and constrained variational calculus: continuous and discrete settings

    NASA Astrophysics Data System (ADS)

    de León, Manuel; Jiménez, Fernando; Martín de Diego, David

    2012-05-01

    The aim of this paper is to study the relationship between Hamiltonian dynamics and constrained variational calculus. We describe both using the notion of Lagrangian submanifolds of convenient symplectic manifolds and using the so-called Tulczyjew triples. The results are also extended to the case of discrete dynamics and nonholonomic mechanics. Interesting applications to the geometrical integration of Hamiltonian systems are obtained.

  6. Constrained variational calculus for higher order classical field theories

    NASA Astrophysics Data System (ADS)

    Campos, Cédric M.; de León, Manuel; Martín de Diego, David

    2010-11-01

    We develop an intrinsic geometrical setting for higher order constrained field theories. As a main tool we use an appropriate generalization of the classical Skinner-Rusk formalism. Some examples of applications are studied, in particular to the geometrical description of optimal control theory for partial differential equations.

  7. Constrained Quantum Mechanics: Chaos in Non-Planar Billiards

    ERIC Educational Resources Information Center

    Salazar, R.; Tellez, G.

    2012-01-01

    We illustrate some of the techniques to identify chaos signatures at the quantum level using as guiding examples some systems where a particle is constrained to move on a radial symmetric, but non-planar, surface. In particular, two systems are studied: the case of a cone with an arbitrary contour or "dunce hat billiard" and the rectangular…

  8. Extracting electron transfer coupling elements from constrained density functional theory

    NASA Astrophysics Data System (ADS)

    Wu, Qin; Van Voorhis, Troy

    2006-10-01

    Constrained density functional theory (DFT) is a useful tool for studying electron transfer (ET) reactions. It can straightforwardly construct the charge-localized diabatic states and give a direct measure of the inner-sphere reorganization energy. In this work, a method is presented for calculating the electronic coupling matrix element (Hab) based on constrained DFT. This method completely avoids the use of ground-state DFT energies because they are known to irrationally predict fractional electron transfer in many cases. Instead it makes use of the constrained DFT energies and the Kohn-Sham wave functions for the diabatic states in a careful way. Test calculations on the Zn2+ and the benzene-Cl atom systems show that the new prescription yields reasonable agreement with the standard generalized Mulliken-Hush method. We then proceed to produce the diabatic and adiabatic potential energy curves along the reaction pathway for intervalence ET in the tetrathiafulvalene-diquinone (Q-TTF-Q) anion. While the unconstrained DFT curve has no reaction barrier and gives Hab ≈ 17 kcal/mol, which qualitatively disagrees with experimental results, the Hab calculated from constrained DFT is about 3 kcal/mol and the generated ground state has a barrier height of 1.70 kcal/mol, successfully predicting (Q-TTF-Q)- to be a class II mixed-valence compound.

  9. How well can future CMB missions constrain cosmic inflation?

    SciTech Connect

    Martin, Jérôme; Vennin, Vincent; Ringeval, Christophe E-mail: christophe.ringeval@uclouvain.be

    2014-10-01

    We study how the next generation of Cosmic Microwave Background (CMB) measurement missions (such as EPIC, LiteBIRD, PRISM and COrE) will be able to constrain the inflationary landscape in the hardest to disambiguate situation in which inflation is simply described by single-field slow-roll scenarios. Considering the proposed PRISM and LiteBIRD satellite designs, we simulate mock data corresponding to five different fiducial models having values of the tensor-to-scalar ratio ranging from 10^-1 down to 10^-7. We then compute the Bayesian evidences and complexities of all Encyclopædia Inflationaris models in order to assess the constraining power of PRISM alone and LiteBIRD complemented with the Planck 2013 data. Within slow-roll inflation, both designs have comparable constraining power and can rule out about three quarters of the inflationary scenarios, compared to one third for Planck 2013 data alone. However, we also show that PRISM can constrain the scalar running and has the capability to detect a violation of slow roll at second order. Finally, our results suggest that describing an inflationary model by its potential shape only, without specifying a reheating temperature, will no longer be possible given the accuracy level reached by the future CMB missions.

  10. Multiply-Constrained Semantic Search in the Remote Associates Test

    ERIC Educational Resources Information Center

    Smith, Kevin A.; Huber, David E.; Vul, Edward

    2013-01-01

    Many important problems require consideration of multiple constraints, such as choosing a job based on salary, location, and responsibilities. We used the Remote Associates Test to study how people solve such multiply-constrained problems by asking participants to make guesses as they came to mind. We evaluated how people generated these guesses…

  11. Constrained Adaptive Beamforming for Improved Contrast in Breast Ultrasound

    DTIC Science & Technology

    2005-06-01

    ...led us to utilize a recently proposed method, Spatial Processing Optimized and Constrained (SPOC). In initial simulations this method not only ... given the parameters of in vivo ultrasound imaging. SPOC Progress: Most of the adaptive beamforming algorithms previously described fail when applied to

  12. Constrained quantum mechanics: chaos in non-planar billiards

    NASA Astrophysics Data System (ADS)

    Salazar, R.; Téllez, G.

    2012-07-01

    We illustrate some of the techniques to identify chaos signatures at the quantum level using as guiding examples some systems where a particle is constrained to move on a radial symmetric, but non-planar, surface. In particular, two systems are studied: the case of a cone with an arbitrary contour or dunce hat billiard and the rectangular billiard with an inner Gaussian surface.

  13. Excision technique in constrained formulations of Einstein equations: collapse scenario

    NASA Astrophysics Data System (ADS)

    Cordero-Carrión, I.; Vasset, N.; Novak, J.; Jaramillo, J. L.

    2015-04-01

    We present a new excision technique used in constrained formulations of the Einstein equations to deal with black holes in numerical simulations. We show the applicability of this scheme in several scenarios. In particular, we present the dynamical evolution of the collapse of a neutron star to a black hole, using the CoCoNuT code and this excision technique.

  14. Reflections on How Color Term Acquisition Is Constrained

    ERIC Educational Resources Information Center

    Pitchford, Nicola J.

    2006-01-01

    Compared with object word learning, young children typically find learning color terms to be a difficult linguistic task. In this reflections article, I consider two questions that are fundamental to investigations into the developmental acquisition of color terms. First, I consider what constrains color term acquisition and how stable these…

  15. Constrained Hartree-Fock and quasi-spin projection

    NASA Astrophysics Data System (ADS)

    Cambiaggio, M. C.; Plastino, A.; Szybisz, L.

    1980-08-01

    The constrained Hartree-Fock approach of Elliott and Evans is studied in detail with reference to two quasi-spin models, and their predictions compared with those arising from a projection method. It is found that the new approach works fairly well, although limitations to its applicability are encountered.

  16. Applications of a Constrained Mechanics Methodology in Economics

    ERIC Educational Resources Information Center

    Janova, Jitka

    2011-01-01

    This paper presents instructive interdisciplinary applications of constrained mechanics calculus in economics on a level appropriate for undergraduate physics education. The aim of the paper is (i) to meet the demand for illustrative examples suitable for presenting the background of the highly expanding research field of econophysics even at the…

  17. Testing a Constrained MPC Controller in a Process Control Laboratory

    ERIC Educational Resources Information Center

    Ricardez-Sandoval, Luis A.; Blankespoor, Wesley; Budman, Hector M.

    2010-01-01

    This paper describes an experiment performed by the fourth year chemical engineering students in the process control laboratory at the University of Waterloo. The objective of this experiment is to test the capabilities of a constrained Model Predictive Controller (MPC) to control the operation of a Double Pipe Heat Exchanger (DPHE) in real time.…

  18. Combining Bayesian methods and aircraft observations to constrain the HO. + NO2 reaction rate

    EPA Science Inventory

    Tropospheric ozone is the third strongest greenhouse gas, and has the highest uncertainty in radiative forcing of the top five greenhouse gases. Throughout the troposphere, ozone is produced by radical oxidation of nitrogen oxides (NOx = NO + NO2). In the uppe...

  19. Constraining Large-Scale Solar Magnetic Field Models with Optical Coronal Observations

    NASA Astrophysics Data System (ADS)

    Uritsky, V. M.; Davila, J. M.; Jones, S. I.

    2015-12-01

    Scientific success of the Solar Probe Plus (SPP) and Solar Orbiter (SO) missions will depend to a large extent on the accuracy of the available coronal magnetic field models describing the connectivity of plasma disturbances in the inner heliosphere with their source regions. We argue that ground based and satellite coronagraph images can provide robust geometric constraints for the next generation of improved coronal magnetic field extrapolation models. In contrast to the previously proposed loop segmentation codes designed for detecting compact closed-field structures above solar active regions, we focus on the large-scale geometry of the open-field coronal regions located at significant radial distances from the solar surface. Details on the new feature detection algorithms will be presented. By applying the developed image processing methodology to high-resolution Mauna Loa Solar Observatory images, we perform an optimized 3D B-line tracing for a full Carrington rotation using the magnetic field extrapolation code presented in a companion talk by S. Jones et al. Tracing results are shown to be in good qualitative agreement with the large-scale configuration of the optical corona. Subsequent phases of the project and the related data products for SPP and SO missions as well as the supporting global heliospheric simulations will be discussed.

  20. Microscopic observation of the segmental orientation autocorrelation function for entangled and constrained polymer chains

    NASA Astrophysics Data System (ADS)

    Mordvinkin, Anton; Saalwächter, Kay

    2017-03-01

    Previous work on probing the dynamics of reptating polymer chains in terms of the segmental orientation autocorrelation function (OACF) by multiple-quantum (MQ) NMR relied on the time-temperature superposition (TTS) principle as applied to normalized double-quantum (DQ) build-up curves. Alternatively, an initial-rise analysis of the latter is also possible. These approaches are subject to uncertainties related to the relevant segmental shift factor or parasitic signals and inhomogeneities distorting the build-up at short times, respectively. Here, we present a simple analytical fitting approach based upon a power-law model of the OACF, by way of which an effective power-law time scaling exponent and the amplitude of the OACF can be estimated from MQ NMR data at any given temperature. This obviates the use of TTS and provides a robust and independent probe of the shape of the OACF. The approach is validated by application to polymer melts of variable molecular weight as well as elastomers. We anticipate a wide range of applications, including the study of physical networks with labile junctions.
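
    The fitting step described above amounts to estimating an amplitude and an effective power-law exponent from correlation data. A generic version of such a fit, done by linear regression in log-log space on synthetic data, is sketched below; it is not the specific analytical DQ build-up expressions of the paper, and the amplitude, exponent, and noise level are assumptions.

        import numpy as np

        def fit_power_law(t, C):
            # Least-squares fit of C(t) = A * t**(-kappa) in log-log space,
            # returning (A, kappa).
            slope, lnA = np.polyfit(np.log(t), np.log(C), 1)
            return np.exp(lnA), -slope

        # Synthetic OACF-like data with 5% multiplicative noise.
        rng = np.random.default_rng(4)
        t = np.logspace(-3, 1, 30)                                   # e.g. ms
        C = 0.02 * t ** -0.4 * (1 + 0.05 * rng.standard_normal(t.size))
        print(fit_power_law(t, C))                                   # ~ (0.02, 0.4)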

  1. Constrained profile retrieval applied to the observation mode of the michelson interferometer for passive atmospheric sounding.

    PubMed

    Steck, T; von Clarmann, T

    2001-07-20

    To investigate the atmosphere of Earth and to detect changes in its environment, the Environmental Satellite will be launched by the European Space Agency in a polar orbit in October 2001. One of its payload instruments is a Fourier spectrometer, the Michelson Interferometer for Passive Atmospheric Sounding, designed to measure the spectral thermal emission of molecules in the atmosphere in a limb-viewing mode. The goal of this experiment is to operationally derive vertical profiles of pressure and temperature as well as of the trace gases O3, H2O, CH4, N2O, NO2, and HNO3 from spectra on a global scale. A major topic in the analysis of the computational methodology for obtaining the profiles is how available a priori knowledge can be used and how this a priori knowledge affects corresponding results. Retrieval methods were compared and it was shown that an optimal estimation formalism can be used in a highly flexible way for this kind of data analysis. Beyond this, diagnostic tools, such as estimated standard deviation, vertical resolution, or degrees of freedom, have been used to characterize the results. Optimized regularization parameters have been determined, and a great effect from the choice of regularization and discretization on the results was demonstrated. In particular, we show that the optimal estimation formalism can be used to emulate purely smoothing constraints.
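
    For a linear forward model, the optimal estimation formalism referred to above reduces to a standard update of the a priori state by the error-weighted measurement departure, and the averaging kernel provides the diagnostics (vertical resolution, degrees of freedom) mentioned in the abstract. The sketch below uses a toy three-level problem; the Jacobian and covariances are illustrative assumptions.

        import numpy as np

        def optimal_estimation(y, K, x_a, S_a, S_e):
            # Linear optimal-estimation retrieval:
            #   x_hat = x_a + G (y - K x_a),
            #   G = (K^T S_e^-1 K + S_a^-1)^-1 K^T S_e^-1
            # Returns the retrieved state, its covariance and the averaging
            # kernel A = G K (trace(A) = degrees of freedom for signal).
            S_e_inv = np.linalg.inv(S_e)
            S_hat = np.linalg.inv(K.T @ S_e_inv @ K + np.linalg.inv(S_a))
            G = S_hat @ K.T @ S_e_inv
            x_hat = x_a + G @ (y - K @ x_a)
            return x_hat, S_hat, G @ K

        # Toy 3-level retrieval with a simple prior and diagonal noise covariance.
        K = np.array([[1.0, 0.5, 0.1],
                      [0.2, 1.0, 0.5],
                      [0.0, 0.3, 1.0]])
        x_true = np.array([1.0, 2.0, 1.5])
        y = K @ x_true + 0.05 * np.random.default_rng(2).standard_normal(3)
        x_hat, S_hat, A = optimal_estimation(y, K, x_a=np.zeros(3),
                                             S_a=np.eye(3), S_e=0.05**2 * np.eye(3))
        print(x_hat, np.trace(A))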

  2. The importance of observations on fluxes to constrain ground water model calibration

    NASA Astrophysics Data System (ADS)

    Vassena, Chiara; Durante, Cinzia; Giudici, Mauro; Ponzini, Giansilvio

    The aquifer system in the alluvial basin bordered by Adda, Po and Oglio rivers (Northern Italy) is characterised by a dual flow regime. In shallow sediments, which constitute a phreatic aquifer with high conductivity, great fluxes are driven by the interaction between ground water and the network of surface water, by the infiltration of rain and irrigation water, and by the fluxes drained from depression springs and river valley terraces. The underlying semiconfined aquifers are characterised by minor fluxes driven by water abstraction from wells of the public Water Works. Since most of the ground water flow occurs in the phreatic aquifer, an equivalent single layer 2D steady state flow model has been calibrated. The identification of the transmissivity field at the scale of the model has been obtained by solving an inverse problem with the comparison model method which requires an initial configuration, i.e., reference head, initial transmissivity field, source terms. Most of the head and source data are related to the phreatic aquifer, but most of the estimates of transmissivity are obtained with field tests conducted in deep wells pumping from the semiconfined aquifers, so that this kind of prior information cannot be used directly for model calibration. The inverse problem is underdetermined and a unique solution is not available. Furthermore information on surface hydrology is poor. Therefore many tests with different hypotheses about the initial configuration have been performed and some of them have been selected and used to initialise the automatic inversion procedure.

  3. Global inventory of nitrogen oxide emissions constrained by space-based observations of NO2 columns

    NASA Astrophysics Data System (ADS)

    Martin, Randall V.; Jacob, Daniel J.; Chance, Kelly; Kurosu, Thomas P.; Palmer, Paul I.; Evans, Mathew J.

    2003-09-01

    We use tropospheric NO2 columns from the Global Ozone Monitoring Experiment (GOME) satellite instrument to derive top-down constraints on emissions of nitrogen oxides (NOx ≡ NO + NO2), and combine these with a priori information from a bottom-up emission inventory (with error weighting) to achieve an optimized a posteriori estimate of the global distribution of surface NOx emissions. Our GOME NO2 retrieval improves on previous work by accounting for scattering and absorption of radiation by aerosols; the effect on the air mass factor (AMF) ranges from +10 to -40% depending on the region. Our AMF also includes local information on relative vertical profiles (shape factors) of NO2 from a global 3-D chemical transport model (GEOS-CHEM); assumption of a globally uniform shape factor, as in most previous retrievals, would introduce regional biases of up to 40% over industrial regions and a factor of 2 over remote regions. We derive a top-down NOx emission inventory from the GOME data by using the local GEOS-CHEM relationship between NO2 columns and NOx emissions. The resulting NOx emissions for industrial regions are aseasonal, despite large seasonal variation in NO2 columns, providing confidence in the method. Top-down errors in monthly NOx emissions are comparable with bottom-up errors over source regions. Annual global a posteriori errors are half of a priori errors. Our global a posteriori estimate for annual land surface NOx emissions (37.7 Tg N yr-1) agrees closely with the GEIA-based a priori (36.4) and with the EDGAR 3.0 bottom-up inventory (36.6), but there are significant regional differences. A posteriori NOx emissions are higher by 50-100% in the Po Valley, Tehran, and Riyadh urban areas, and by 25-35% in Japan and South Africa. Biomass burning emissions from India, central Africa, and Brazil are lower by up to 50%; soil NOx emissions are appreciably higher in the western United States, the Sahel, and southern Europe.
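
    The error-weighted combination of bottom-up and top-down estimates can be illustrated with a simple sketch in which both estimates carry lognormal (multiplicative) errors and are averaged in log space with inverse-variance weights. The specific weighting scheme and the example numbers below are assumptions for illustration and are not necessarily the exact scheme used in the paper.

        import numpy as np

        def a_posteriori(E_prior, err_prior, E_topdown, err_topdown):
            # Error-weighted combination of a bottom-up (a priori) and a top-down
            # emission estimate in log space (lognormal errors assumed).
            # err_* are geometric (multiplicative) 1-sigma error factors.
            w_prior = 1.0 / np.log(err_prior) ** 2
            w_top = 1.0 / np.log(err_topdown) ** 2
            ln_post = (w_prior * np.log(E_prior) + w_top * np.log(E_topdown)) \
                      / (w_prior + w_top)
            err_post = np.exp(1.0 / np.sqrt(w_prior + w_top))
            return np.exp(ln_post), err_post

        # Illustrative grid cell: a priori 1.0 Tg N/yr (factor-2 error) versus a
        # top-down estimate of 1.8 Tg N/yr (factor-1.5 error).
        print(a_posteriori(1.0, 2.0, 1.8, 1.5))   # ~ (1.55, 1.42)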

  4. Constrained optimization schemes for geophysical inversion of seismic data

    NASA Astrophysics Data System (ADS)

    Sosa Aguirre, Uram Anibal

    Many experimental techniques in geophysics advance the understanding of Earth processes by estimating and interpreting Earth structure (e.g., velocity and/or density structure). These techniques use different types of geophysical data which can be collected and analyzed separately, sometimes resulting in inconsistent models of the Earth depending on data quality, methods and assumptions made. This dissertation presents two approaches for geophysical inversion of seismic data based on constrained optimization. In one approach we expand a one-dimensional (1-D) joint inversion least-squares (LSQ) algorithm by introducing a constrained optimization methodology. Then we use the 1-D inversion results to produce 3-D Earth velocity structure models. In the second approach, we provide a unified constrained optimization framework for solving a 1-D inverse wave propagation problem. In Chapter 2 we present a constrained optimization framework for joint inversion. This framework characterizes the 1-D structure of the Earth by using seismic shear wave velocities as a model parameter. We create two geophysical synthetic data sets sensitive to shear velocities, namely receiver function and surface wave dispersion. We validate our approach by comparing our numerical results with a traditional unconstrained method, and we also test the robustness of our approach in the presence of noise. Chapter 3 extends this framework to include an interpolation technique for creating 3-D Earth velocity structure models of the Rio Grande Rift region. Chapter 5 introduces the joint inversion of multiple data sets by adding travel-time delay information in a synthetic setup, leaving open the possibility of including more data sets. Finally, in Chapter 4 we pose a 1-D inverse full-waveform propagation problem as a PDE-constrained optimization program, where we invert for the material properties in terms of shear wave velocities throughout the physical domain. We facilitate the implementation and comparison of different

  5. Constraining Dark Matter-Neutrino Interactions with High-Energy Astrophysical Neutrinos

    NASA Astrophysics Data System (ADS)

    Arguelles, Carlos

    2017-01-01

    IceCube has continued to observe cosmic neutrinos since their discovery. The origin of these cosmic neutrinos is still unknown. Moreover, their arrival directions are compatible with an isotropic distribution. This observation, together with dedicated studies looking for galactic plane correlations, suggests that the observed astrophysical neutrinos are of extragalactic origin. If there is a dark matter-neutrino interaction, then the observed neutrino flux and its spatial distribution would be distorted. We perform a likelihood analysis using four years of IceCube's high energy starting events to constrain the strength of dark matter-neutrino interactions in the context of simplified models. Finally, we compare our results with constraints from cosmology and highlight the complementarity between the two constraints.

  6. Using qflux to constrain modeled Congo Basin rainfall in the CMIP5 ensemble

    NASA Astrophysics Data System (ADS)

    Creese, A.; Washington, R.

    2016-11-01

    Coupled models are the tools by which we diagnose and project future climate, yet in certain regions they are critically underevaluated. The Congo Basin is one such region which has received limited scientific attention, due to the severe scarcity of observational data. There is a large difference in the climatology of rainfall in global coupled climate models over the basin. This study attempts to address this research gap by evaluating modeled rainfall magnitude and distribution amongst global coupled models in the Coupled Model Intercomparison Project 5 (CMIP5) ensemble. Mean monthly rainfall between models varies by up to a factor of 5 in some months, and models disagree on the location of maximum rainfall. The ensemble mean, which is usually considered a "best estimate" of coupled model output, does not agree with any single model, and as such is unlikely to present a possible rainfall state. Moisture flux (qflux) convergence (which is assumed to be better constrained than parameterized rainfall) is found to have a strong relationship with rainfall; strongest correlations occur at 700 hPa in March-May (r = 0.70) and 850 hPa in June-August, September-November, and December-February (r = 0.66, r = 0.71, and r = 0.81). In the absence of observations, this relationship could be used to constrain the wide spectrum of modeled rainfall and give a better understanding of Congo rainfall climatology. Analysis of moisture transport pathways indicates that modeled rainfall is sensitive to the amount of moisture entering the basin. A targeted observation campaign at key Congo Basin boundaries could therefore help to constrain model rainfall.

  7. Neutrino-Electron Scattering in MINERvA for Constraining the NuMI Neutrino Flux

    SciTech Connect

    Park, Jaewon

    2013-01-01

    Neutrino-electron elastic scattering is used as a reference process to constrain the neutrino flux at the Main Injector (NuMI) beam observed by the MINERvA experiment. Prediction of the neutrino flux at accelerator experiments from other methods has a large uncertainty, and this uncertainty degrades measurements of neutrino oscillations and neutrino cross-sections. Neutrino-electron elastic scattering is a rare process, but its cross-section is precisely known. With a sample corresponding to 3.5×10^20 protons on target in the NuMI low-energy neutrino beam, a sample of 120

  8. (Uncertain) Carbonyl Sulfide Plant Fluxes Spatially Constrain (Even More Uncertain) CO2 GPP

    NASA Astrophysics Data System (ADS)

    Hilton, T. W.; Whelan, M.; Kulkarni, S.; Zumkehr, A. L.; Berry, J. A.; Campbell, J. E.

    2015-12-01

    With predictions of future terrestrial carbon dioxide (CO2) gross primary productivity (GPP) remaining stubbornly uncertain, ecosystem carbonyl sulfide (COS) fluxes provide an independent source of information that may be able to reduce that uncertainty. Several open questions must be addressed before COS may be applied widely as a GPP tracer. Here we employ an atmospheric chemistry and transport model (STEM) and airborne atmospheric COS concentration observations to demonstrate that COS plant uptake spatially constrains CO2 GPP even when accounting for soil COS flux uncertainty and COS leaf-scale relative uptake variability and uncertainty.

  9. Sensitivity Analysis Tailored to Constrain 21st Century Terrestrial Carbon-Uptake

    NASA Astrophysics Data System (ADS)

    Muller, S. J.; Gerber, S.

    2013-12-01

    The long-term fate of terrestrial carbon (C) in response to climate change remains a dominant source of uncertainty in Earth-system model projections. Increasing atmospheric CO2 could be mitigated by long-term net uptake of C, through processes such as increased plant productivity due to "CO2-fertilization". Conversely, atmospheric conditions could be exacerbated by long-term net release of C, through processes such as increased decomposition due to higher temperatures. This balance is an important area of study, and a major source of uncertainty in long-term (>year 2050) projections of planetary response to climate change. We present results from an innovative application of sensitivity analysis to LM3V, a dynamic global vegetation model (DGVM), intended to identify observed/observable variables that are useful for constraining long-term projections of C-uptake. We analyzed the sensitivity of cumulative C-uptake by 2100, as modeled by LM3V in response to IPCC AR4 scenario climate data (1860-2100), to perturbations in over 50 model parameters. We concurrently analyzed the sensitivity of over 100 observable model variables, during the extant record period (1970-2010), to the same parameter changes. By correlating the sensitivities of observable variables with the sensitivity of long-term C-uptake we identified model calibration variables that would also constrain long-term C-uptake projections. LM3V employs a coupled carbon-nitrogen cycle to account for N-limitation, and we find that N-related variables have an important role to play in constraining long-term C-uptake. This work has implications for prioritizing field campaigns to collect global data that can help reduce uncertainties in the long-term land-atmosphere C-balance. Though results of this study are specific to LM3V, the processes that characterize this model are not completely divorced from other DGVMs (or reality), and our approach provides valuable insights into how data can be leveraged to be better
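
    The correlation step described above can be sketched schematically: each parameter is perturbed, the resulting changes in the observable variables over the record period and in cumulative C-uptake by 2100 are recorded, and the two sets of sensitivities are correlated across parameters. The toy stand-in model, parameter names, and observables below are assumptions; they only illustrate the bookkeeping, not LM3V itself.

        import numpy as np

        def sensitivity_screen(run_model, params, observables, rel_step=0.1):
            # Perturb each parameter by rel_step, record the change in each
            # observable and in cumulative C-uptake by 2100, then correlate the
            # two sets of sensitivities across parameters.
            base_obs, base_uptake = run_model(params)
            d_obs = {name: [] for name in observables}
            d_uptake = []
            for p in params:
                perturbed = dict(params, **{p: params[p] * (1.0 + rel_step)})
                obs, uptake = run_model(perturbed)
                for name in observables:
                    d_obs[name].append(obs[name] - base_obs[name])
                d_uptake.append(uptake - base_uptake)
            # Observables whose sensitivities track the sensitivity of long-term
            # uptake are candidate calibration targets.
            return {name: np.corrcoef(d_obs[name], d_uptake)[0, 1]
                    for name in observables}

        # Toy stand-in for a DGVM: three parameters, two observables, one projection.
        def toy_model(p):
            obs = {"npp_1970_2010": 60.0 * p["cue"] * p["vmax"],
                   "soil_c_2010": 1500.0 * p["cue"] / p["q10"]}
            uptake_2100 = 250.0 * p["cue"] * np.sqrt(p["vmax"]) / p["q10"]
            return obs, uptake_2100

        print(sensitivity_screen(toy_model, {"cue": 0.5, "vmax": 1.0, "q10": 2.0},
                                 ["npp_1970_2010", "soil_c_2010"]))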

  10. Precision measurements, dark matter direct detection and LHC Higgs searches in a constrained NMSSM

    SciTech Connect

    Belanger, G.; Hugonie, C.; Pukhov, A. E-mail: cyril.hugonie@lpta.univ-montp2.fr

    2009-01-15

    We reexamine the constrained version of the Next-to-Minimal Supersymmetric Standard Model with semi-universal parameters at the GUT scale (CNMSSM). We include constraints from collider searches for Higgs and SUSY particles, the upper bound on the relic density of dark matter, measurements of the muon anomalous magnetic moment and of B-physics observables as well as direct searches for dark matter. We then study the prospects for direct detection of dark matter in large scale detectors and comment on the prospects for discovery of heavy Higgs states at the LHC.

  11. Constraining the Antarctic contribution to interglacial sea-level rise

    NASA Astrophysics Data System (ADS)

    Naish, T.; Mckay, R. M.; Barrett, P. J.; Levy, R. H.; Golledge, N. R.; Deconto, R. M.; Horgan, H. J.; Dunbar, G. B.

    2015-12-01

    Observations, models and paleoclimate reconstructions suggest that Antarctica's marine-based ice sheets behave in an unstable manner with episodes of rapid retreat in response to warming climate. Understanding the processes involved in this "marine ice sheet instability" is key for improving estimates of Antarctic ice sheet contribution to future sea-level rise. Another motivating factor is that far-field sea-level reconstructions and ice sheet models imply global mean sea level (GMSL) was up to 20 m and 10 m higher, respectively, compared with present day, during the interglacials of the warm Pliocene (~4-3 Ma) and Late Pleistocene (at ~400 ka and 125 ka). This was when atmospheric CO2 was between 280 and 400 ppm and global average surface temperatures were 1-3°C warmer, suggesting polar ice sheets are highly sensitive to relatively modest increases in climate forcing. Such magnitudes of GMSL rise not only require near complete melt of the Greenland Ice Sheet and the West Antarctic Ice Sheet, but a substantial retreat of marine-based sectors of East Antarctic Ice Sheet. Recent geological drilling initiatives on the continental margin of Antarctica from both ship- (e.g. IODP; International Ocean Discovery Program) and ice-based (e.g. ANDRILL/Antarctic Geological Drilling) platforms have provided evidence supporting retreat of marine-based ice. However, without direct access through the ice sheet to archives preserved within sub-glacial sedimentary basins, the volume and extent of ice sheet retreat during past interglacials cannot be directly constrained. Sediment cores have been successfully recovered from beneath ice shelves by the ANDRILL Program and ice streams by the WISSARD (Whillans Ice Stream Sub-glacial Access Research Drilling) Project. Together with the potential of the new RAID (Rapid Access Ice Drill) initiative, these demonstrate the technological feasibility of accessing the subglacial bed and deeper sedimentary archives. In this talk I will outline the

  12. Comparative Analysis of Uninhibited and Constrained Avian Wing Aerodynamics

    NASA Astrophysics Data System (ADS)

    Cox, Jordan A.

    The flight of birds has intrigued and motivated man for many years. Bird flight served as the primary inspiration of flying machines developed by Leonardo Da Vinci, Otto Lilienthal, and even the Wright brothers. Avian flight has once again drawn the attention of the scientific community as unmanned aerial vehicles (UAV) are not only becoming more popular, but smaller. Birds are once again influencing the designs of aircraft. Small UAVs operating within flight conditions and low Reynolds numbers common to birds are not yet capable of the high levels of control and agility that birds display with ease. Many researchers believe the potential to improve small UAV performance can be obtained by applying features common to birds such as feathers and flapping flight to small UAVs. Although the effects of feathers on a wing have received some attention, the effects of localized transient feather motion and surface geometry on the flight performance of a wing have been largely overlooked. In this research, the effects of freely moving feathers on a preserved red tailed hawk wing were studied. A series of experiments were conducted to measure the aerodynamic forces on a hawk wing with varying levels of feather movement permitted. Angle of attack and air speed were varied within the natural flight envelope of the hawk. Subsequent identical tests were performed with the feather motion constrained through the use of externally-applied surface treatments. Additional tests involved the study of an absolutely fixed geometry mold-and-cast wing model of the original bird wing. Final tests were also performed after applying surface coatings to the cast wing. High speed videos taken during tests revealed the extent of the feather movement between wing models. Images of the microscopic surface structure of each wing model were analyzed to establish variations in surface geometry between models. Recorded aerodynamic forces were then compared to the known feather motion and surface

  13. Hot galactic winds constrained by the X-ray luminosities of galaxies

    SciTech Connect

    Zhang, Dong; Thompson, Todd A.; Murray, Norman; Quataert, Eliot E-mail: thompson@astronomy.ohio-state.edu

    2014-04-01

    Galactic superwinds may be driven by very hot outflows generated by overlapping supernovae within the host galaxy. We use the Chevalier and Clegg (CC85) wind model and the observed correlation between X-ray luminosities of galaxies and their star formation rates (SFRs) to constrain the mass-loss rates (Ṁ_hot) across a wide range of SFRs, from dwarf starbursts to ultraluminous infrared galaxies. We show that for fixed thermalization and mass-loading efficiencies, the X-ray luminosity of the hot wind scales as L_X ∝ SFR^2, significantly steeper than is observed for star-forming galaxies: L_X ∝ SFR. Using this difference, we constrain the mass-loading and thermalization efficiency of hot galactic winds. For reasonable values of the thermalization efficiency (≲ 1) and for SFR ≳ 10 M_⊙ yr^-1 we find that Ṁ_hot/SFR ≲ 1, which is significantly lower than required by integrated constraints on the efficiency of stellar feedback in galaxies and potentially too low to explain observations of winds from rapidly star-forming galaxies. In addition, we highlight the fact that heavily mass-loaded winds cannot be described by the adiabatic CC85 model because they become strongly radiative.

  14. Toward a deeper understanding of how experiments constrain the underlying physics of heavy-ion collisions

    NASA Astrophysics Data System (ADS)

    Sangaline, Evan; Pratt, Scott

    2016-02-01

    Recent work has provided the means to rigorously determine properties of superhadronic matter from experimental data through the application of broad scale modeling of high-energy nuclear collisions within a Bayesian framework. These studies have provided unprecedented statistical inferences about the physics underlying nuclear collisions by virtue of simultaneously considering a wide range of model parameters and experimental observables. Notably, this approach has been used to constrain both the QCD equation of state and the shear viscosity above the quark-hadron transition. Although the inferences themselves have a clear meaning, the complex nature of the relationships between model parameters and observables has remained relatively obscure. We present here a novel extension of the standard Bayesian Markov-chain Monte Carlo approach that allows for the quantitative determination of how inferences of model parameters are driven by experimental measurements and their uncertainties. This technique is then applied in the context of heavy-ion collisions in order to explore previous results in greater depth. The resulting relationships are useful for identifying model weaknesses, prioritizing future experimental measurements, and, most importantly, developing an intuition for the roles that different observables play in constraining our understanding of the underlying physics.

  15. Constraining sources, transport pathways and process parameters on various scales by atmospheric Lagrangian inversions

    NASA Astrophysics Data System (ADS)

    von Hobe, Marc; Konopka, Paul; Hoffmann, Lars; Griessbach, Sabine; Sumińska-Ebersoldt, Olga; Vernier, Jean-Paul; Plöger, Felix; Tao, Mengchu; Müller, Rolf

    2015-04-01

    Inverse methods have become widely used tools to infer sources and sinks of atmospheric constituents based on observations. Inversion techniques can also help to better constrain input and process parameters and thus improve the underlying models. While the majority of today's inverse model frameworks use the Eulerian concept of transport, the capability of Lagrangian inversion to infer emissions of even ill-constrained sources has been demonstrated (e.g. Stohl et al., 2011). We will discuss Lagrangian inverse modelling as a powerful tool to solve problems on a wide range of scales in terms of spatial and temporal extent as well as complexity. First, two distinct applications on different scales will be presented: i) the retrieval of reaction rates that govern the chlorine-catalyzed ozone destruction in the polar winter along individual trajectories connecting airborne observations in the Arctic in 2010, and ii) the derivation of emission altitudes and transport pathways of sulfate aerosol from the 2011 eruption of the Nabro volcano using CALIPSO satellite observations. Second, the potential and requirements for applications of even higher complexity, e.g. the simultaneous retrieval of source, sink and process parameters on a global scale, will be explored. Stohl, A., et al. 2011. Atmospheric Chemistry and Physics 11, 4333-4351.

  16. Mercury's interior structure constrained by geodesy and present-day thermal state

    NASA Astrophysics Data System (ADS)

    Rivoldini, Attilio; Van Hoolst, Tim; Beuthe, Mikael; Deproost, Marie-Hélène

    2016-10-01

    Recent measurements of Mercury's spin state and gravitational field strongly constrain Mercury's core radius and core density, but provide little information on the size of its inner core. Both a fully molten liquid core and a core differentiated into a large solid inner core and a liquid outer part are consistent with the observations, although the observed tides seem to exclude an extremely large inner core. The observed global magnetic field could be generated even without a growing inner core, since remelting of iron snow inside the core might produce a sufficiently large buoyancy flux to drive magnetic field generation by compositional convection. Further constraints on Mercury's internal structure can be obtained by studying its thermal state. The inner core radius depends mainly on the thermal state and on the light elements present in the core. Secular cooling and subsequent formation of an inner core lead to the global contraction of the planet, estimated to be about 7 km. In this study we combine geodesy data (88-day libration amplitude, polar moment of inertia, and tidal Love number) with the recent estimate of the radial contraction of Mercury and thermal evolution calculations in order to constrain its interior structure and in particular its inner core. We consider bulk compositions that are in agreement with the reducing formation conditions suggested by remote sensing data of Mercury's surface.

  17. Observational Constraints on Planet Nine

    NASA Astrophysics Data System (ADS)

    Payne, Matthew John; Holman, Matthew J.

    2016-10-01

    Recent publications from Batygin & Brown have rekindled interest in the possibility that there is a large (~10 Earth-mass) planet lurking unseen in a distant (a ~ 500 AU) orbit at the edge of the Solar System. Such a massive planet would tidally distort the orbits of the other planets in the Solar System. These distortions can potentially be measured and/or constrained through precise observations of the orbits of the outer planets and distant trans-Neptunian objects. I will discuss our recent (and ongoing) attempts to observationally constrain the possible location of Planet Nine via (a) measurements of the orbit of Pluto, and (b) measurements of the orbit of Saturn derived from the Cassini spacecraft.

  18. Thermodynamics of the Madden-Julian Oscillation in a Regional Model with Constrained Moisture

    SciTech Connect

    Hagos, Samson M.; Leung, Lai-Yung R.; Dudhia, Jimy

    2011-09-01

    In order to identify the main thermodynamic processes that sustain the Madden-Julian Oscillation, an eddy available potential energy budget analysis is performed on a WRF simulation with moisture constrained by observations. The model realistically simulates the two MJO episodes observed during the winter of 2007-2008. The analysis shows that instabilities and damping associated with variations in diabatic heating and energy transport work in concert to provide the MJO with its observed characteristics. The results are used to construct a simplified paradigm of MJO thermodynamics. Furthermore, the effect of moisture nudging on the simulation is analyzed to understand the limitations of the model cumulus parameterization. Without moisture nudging, the parameterization fails to provide adequate low-level (upper-level) moistening during the early (late) stage of the MJO active phase. The moistening plays a critical role in providing stratiform heating variability that is an important source of eddy available potential energy for the model MJO.

  19. CAN THE DIFFERENTIAL EMISSION MEASURE CONSTRAIN THE TIMESCALE OF ENERGY DEPOSITION IN THE CORONA?

    SciTech Connect

    Guennou, C.; Auchere, F.; Bocchialini, K.; Parenti, S.

    2013-09-01

    In this paper, the ability of the Hinode/EIS instrument to detect radiative signatures of coronal heating is investigated. Recent observational studies of active region cores suggest that both the low and high frequency heating mechanisms are consistent with observations. Distinguishing between these possibilities is important for identifying the physical mechanism(s) of the heating. The differential emission measure (DEM) tool is one diagnostic that allows us to make this distinction, through the amplitude of the DEM slope coolward of the coronal peak. It is therefore crucial to understand the uncertainties associated with these measurements. Using proper estimations of the uncertainties involved in the problem of DEM inversion, we derive confidence levels on the observed DEM slope. Results show that the uncertainty in the slope reconstruction strongly depends on the number of lines constraining the slope. Typical uncertainty is estimated to be about ±1.0 in the more favorable cases.

  20. Image coding using entropy-constrained residual vector quantization

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.

    1993-01-01

    The residual vector quantization (RVQ) structure is exploited to produce a variable-length-codeword RVQ. Necessary conditions for the optimality of this RVQ are presented, and a new entropy-constrained RVQ (EC-RVQ) design algorithm is shown to be very effective in designing RVQ codebooks over a wide range of bit rates and vector sizes. The new EC-RVQ has several important advantages. It can outperform entropy-constrained VQ (ECVQ) in terms of peak signal-to-noise ratio (PSNR), memory, and computation requirements. It can also be used to design high-rate codebooks and codebooks with relatively large vector sizes. Experimental results indicate that when the new EC-RVQ is applied to image coding, very high quality is achieved at relatively low bit rates.
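
    As a rough, hedged illustration of the entropy-constrained residual quantization idea summarized above (not the authors' EC-RVQ design algorithm), the Python sketch below builds a two-stage residual vector quantizer with k-means codebooks and biases each codeword assignment by an estimated bit cost; the synthetic data, codebook sizes, and lambda weight are all illustrative assumptions.

```python
# Minimal sketch of entropy-biased two-stage residual vector quantization.
# Illustrative only: data, codebook sizes, and lambda are assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 4))            # stand-in for 4-D training vectors

def entropy_biased_assign(vectors, codebook, probs, lam=0.5):
    """Pick codewords minimizing distortion + lam * ideal codeword bit cost."""
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    bits = -np.log2(probs + 1e-12)        # ideal code length per codeword
    return np.argmin(d + lam * bits[None, :], axis=1)

# Stage 1: quantize the vectors themselves.
km1 = KMeans(n_clusters=16, n_init=4, random_state=0).fit(X)
p1 = np.bincount(km1.labels_, minlength=16) / len(X)
idx1 = entropy_biased_assign(X, km1.cluster_centers_, p1)

# Stage 2: quantize the residuals left by stage 1.
residual = X - km1.cluster_centers_[idx1]
km2 = KMeans(n_clusters=16, n_init=4, random_state=0).fit(residual)
p2 = np.bincount(km2.labels_, minlength=16) / len(residual)
idx2 = entropy_biased_assign(residual, km2.cluster_centers_, p2)

reconstruction = km1.cluster_centers_[idx1] + km2.cluster_centers_[idx2]
print("two-stage MSE:", ((X - reconstruction) ** 2).mean())
```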

  1. Moving Forward to Constrain the Shear Viscosity of QCD Matter.

    PubMed

    Denicol, Gabriel; Monnai, Akihiko; Schenke, Björn

    2016-05-27

    We demonstrate that measurements of rapidity differential anisotropic flow in heavy-ion collisions can constrain the temperature dependence of the shear viscosity to entropy density ratio η/s of QCD matter. Comparing results from hydrodynamic calculations with experimental data from the RHIC, we find evidence for a small η/s≈0.04 in the QCD crossover region and a strong temperature dependence in the hadronic phase. A temperature independent η/s is disfavored by the data. We further show that measurements of the event-by-event flow as a function of rapidity can be used to independently constrain the initial state fluctuations in three dimensions and the temperature dependent transport properties of QCD matter.

  2. A Novel Neural Network for Generally Constrained Variational Inequalities.

    PubMed

    Gao, Xingbao; Liao, Li-Zhi

    2016-06-13

    This paper presents a novel neural network for solving generally constrained variational inequality problems by constructing a system of double projection equations. By defining proper convex energy functions, the proposed neural network is proved to be stable in the sense of Lyapunov and converges to an exact solution of the original problem for any starting point under the weaker cocoercivity condition or the monotonicity condition of the gradient mapping on the linear equation set. Furthermore, two sufficient conditions are provided to ensure the stability of the proposed neural network for a special case. The proposed model overcomes some shortcomings of existing continuous-time neural networks for constrained variational inequality, and its stability only requires some monotonicity conditions of the underlying mapping and the concavity of nonlinear inequality constraints on the equation set. The validity and transient behavior of the proposed neural network are demonstrated by some simulation results.

  3. A Projection Neural Network for Constrained Quadratic Minimax Optimization.

    PubMed

    Liu, Qingshan; Wang, Jun

    2015-11-01

    This paper presents a projection neural network described by a dynamic system for solving constrained quadratic minimax programming problems. Sufficient conditions based on a linear matrix inequality are provided for global convergence of the proposed neural network. Compared with some of the existing neural networks for quadratic minimax optimization, the proposed neural network is capable of solving more general constrained quadratic minimax optimization problems, and it does not include any design parameter. Moreover, the neural network has lower model complexity: its number of state variables equals the dimension of the optimization problem. Simulation results on numerical examples are discussed to demonstrate the effectiveness and characteristics of the proposed neural network.
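
    The abstract does not give the network equations, so the following is only a generic sketch of projection-type saddle-point dynamics for a box-constrained quadratic minimax problem (min over x, max over y of 0.5 x'Ax + x'Qy - 0.5 y'By), integrated by forward Euler; the matrices, bounds, and step sizes are illustrative assumptions rather than the model of Liu and Wang.

```python
# Sketch of projected saddle-point dynamics for a box-constrained
# quadratic minimax problem (assumed data, not the paper's model).
import numpy as np

rng = np.random.default_rng(1)
A = np.diag([2.0, 3.0])                   # convex part in x
B = np.diag([1.0, 4.0])                   # concave part in y (via -0.5 y'By)
Q = rng.normal(size=(2, 2))               # bilinear coupling

lo, hi = -1.0, 1.0                        # box constraints on both variables
proj = lambda z: np.clip(z, lo, hi)

x = np.array([0.9, -0.9])
y = np.array([-0.5, 0.8])
alpha, dt = 0.5, 0.05

for _ in range(2000):
    gx = A @ x + Q @ y                    # gradient of f in x
    gy = Q.T @ x - B @ y                  # gradient of f in y
    x = x + dt * (proj(x - alpha * gx) - x)   # descend in x
    y = y + dt * (proj(y + alpha * gy) - y)   # ascend in y

print("approximate saddle point:", x, y)
```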

  4. Assessing working memory capacity through time-constrained elementary activities.

    PubMed

    Lucidi, Annalisa; Loaiza, Vanessa; Camos, Valérie; Barrouillet, Pierre

    2014-01-01

    Working memory (WM) capacity measured through complex span tasks is among the best predictors of fluid intelligence (Gf). These tasks usually involve maintaining memoranda while performing complex cognitive activities that require a rather high level of education (e.g., reading comprehension, arithmetic), restricting their range of applicability. Because individual differences in such complex activities are nothing more than the concatenation of small differences in their elementary constituents, complex span tasks involving elementary processes should be as good of predictors of Gf as traditional tasks. The present study showed that two latent variables issued from either traditional or new span tasks involving time-constrained elementary activities were similarly correlated with Gf. Moreover, a model with a single unitary WM factor had a similar fit as a model with two distinct WM factors. Thus, time-constrained elementary activities can be integrated in WM tasks, permitting the assessment of WM in a wider range of populations.

  5. CONMIN: A FORTRAN program for constrained function minimization: User's manual

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.

    1973-01-01

    CONMIN is a FORTRAN program, in subroutine form, for the solution of linear or nonlinear constrained optimization problems. The basic optimization algorithm is the Method of Feasible Directions. The user must provide a main calling program and an external routine to evaluate the objective and constraint functions and to provide gradient information. If analytic gradients of the objective or constraint functions are not available, this information is calculated by finite difference. While the program is intended primarily for efficient solution of constrained problems, unconstrained function minimization problems may also be solved, and the conjugate direction method of Fletcher and Reeves is used for this purpose. This manual describes the use of CONMIN and defines all necessary parameters. Sufficient information is provided so that the program can be used without special knowledge of optimization techniques. Sample problems are included to help the user become familiar with CONMIN and to make the program operational.
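
    CONMIN itself is a FORTRAN subroutine and its exact calling sequence is not reproduced here; as a loose analogy only, the sketch below shows the same division of labor (user-supplied objective, constraint, and gradient routines handed to a constrained solver) using SciPy's SLSQP method on an assumed toy problem.

```python
# Hedged illustration only: not CONMIN's API, just the same problem setup
# (objective, inequality constraint, and analytic gradients supplied by the
# user) solved with SciPy's SLSQP method.
import numpy as np
from scipy.optimize import minimize

def objective(x):
    return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

def objective_grad(x):
    return np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] - 1.0)])

# Inequality constraint g(x) >= 0: stay inside the unit disk.
def constraint(x):
    return 1.0 - x[0] ** 2 - x[1] ** 2

def constraint_grad(x):
    return np.array([-2.0 * x[0], -2.0 * x[1]])

result = minimize(objective, x0=np.zeros(2), jac=objective_grad,
                  method="SLSQP",
                  constraints=[{"type": "ineq",
                                "fun": constraint, "jac": constraint_grad}])
print(result.x)   # constrained minimizer lies on the disk boundary
```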

  6. Colorimetric characterization of LCD based on constrained least squares

    NASA Astrophysics Data System (ADS)

    LI, Tong; Xie, Kai; Wang, Qiaojie; Yao, Luyang

    2017-01-01

    To improve the accuracy of colorimetric characterization of liquid crystal displays, a tone-matrix model for display characterization in color management is established using constrained least squares for quadratic polynomial fitting, relating the RGB color space to the CIEXYZ color space. Fifty-one sets of training samples were collected to solve for the model parameters, and the accuracy of the color-space mapping was verified with 100 groups of random verification samples. The experimental results show that the constrained least-squares method yields high color-mapping accuracy: the maximum color difference of the model is 3.8895 and the average color difference is 1.6689, demonstrating that the method improves the colorimetric characterization of liquid crystal displays.
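
    The paper's specific constraints are not spelled out in the abstract, so the sketch below only illustrates the general recipe: a quadratic polynomial mapping from RGB to CIEXYZ fitted by least squares, with the constant term omitted so that black maps to black; the synthetic training data, the 51/100 sample split, and the noise level are assumptions for illustration.

```python
# Sketch of quadratic-polynomial RGB -> CIEXYZ characterization by least
# squares. Synthetic data and the "black maps to black" constraint (no
# intercept column) are illustrative assumptions, not the paper's model.
import numpy as np

def quad_features(rgb):
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.column_stack([r, g, b, r*r, g*g, b*b, r*g, r*b, g*b])

rng = np.random.default_rng(2)
rgb_train = rng.uniform(size=(51, 3))              # 51 training patches
true_M = rng.uniform(size=(9, 3))                  # stand-in ground truth map
xyz_train = quad_features(rgb_train) @ true_M + rng.normal(0, 1e-3, (51, 3))

# Least-squares fit of the 9x3 coefficient matrix.
M, *_ = np.linalg.lstsq(quad_features(rgb_train), xyz_train, rcond=None)

rgb_test = rng.uniform(size=(100, 3))              # 100 verification samples
xyz_pred = quad_features(rgb_test) @ M
print("max abs error:", np.abs(xyz_pred - quad_features(rgb_test) @ true_M).max())
```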

  7. The genetic code constrains yet facilitates Darwinian evolution.

    PubMed

    Firnberg, Elad; Ostermeier, Marc

    2013-08-01

    An important goal of evolutionary biology is to understand the constraints that shape the dynamics and outcomes of evolution. Here, we address the extent to which the structure of the standard genetic code constrains evolution by analyzing adaptive mutations of the antibiotic resistance gene TEM-1 β-lactamase and the fitness distribution of codon substitutions in two influenza hemagglutinin inhibitor genes. We find that the architecture of the genetic code significantly constrains the adaptive exploration of sequence space. However, the constraints endow the code with two advantages: the ability to restrict access to amino acid mutations with a strong negative effect and, most remarkably, the ability to enrich for adaptive mutations. Our findings support the hypothesis that the standard genetic code was shaped by selective pressure to minimize the deleterious effects of mutation yet facilitate the evolution of proteins through imposing an adaptive mutation bias.

  8. Moving Forward to Constrain the Shear Viscosity of QCD Matter

    DOE PAGES

    Denicol, Gabriel; Monnai, Akihiko; Schenke, Björn

    2016-05-26

    In this work, we demonstrate that measurements of rapidity differential anisotropic flow in heavy-ion collisions can constrain the temperature dependence of the shear viscosity to entropy density ratio η/s of QCD matter. Comparing results from hydrodynamic calculations with experimental data from the RHIC, we find evidence for a small η/s ≈ 0.04 in the QCD crossover region and a strong temperature dependence in the hadronic phase. A temperature independent η/s is disfavored by the data. We further show that measurements of the event-by-event flow as a function of rapidity can be used to independently constrain the initial state fluctuations in three dimensions and the temperature dependent transport properties of QCD matter.

  9. Moving Forward to Constrain the Shear Viscosity of QCD Matter

    SciTech Connect

    Denicol, Gabriel; Monnai, Akihiko; Schenke, Björn

    2016-05-26

    In this work, we demonstrate that measurements of rapidity differential anisotropic flow in heavy-ion collisions can constrain the temperature dependence of the shear viscosity to entropy density ratio η/s of QCD matter. Comparing results from hydrodynamic calculations with experimental data from the RHIC, we find evidence for a small η/s ≈ 0.04 in the QCD crossover region and a strong temperature dependence in the hadronic phase. A temperature independent η/s is disfavored by the data. We further show that measurements of the event-by-event flow as a function of rapidity can be used to independently constrain the initial state fluctuations in three dimensions and the temperature dependent transport properties of QCD matter.

  10. Bayesian methods for the analysis of inequality constrained contingency tables.

    PubMed

    Laudy, Olav; Hoijtink, Herbert

    2007-04-01

    A Bayesian methodology for the analysis of inequality constrained models for contingency tables is presented. The problem of interest lies in obtaining the estimates of functions of cell probabilities subject to inequality constraints, testing hypotheses and selection of the best model. Constraints on conditional cell probabilities and on local, global, continuation and cumulative odds ratios are discussed. A Gibbs sampler to obtain a discrete representation of the posterior distribution of the inequality constrained parameters is used. Using this discrete representation, the credibility regions of functions of cell probabilities can be constructed. Posterior model probabilities are used for model selection and hypotheses are tested using posterior predictive checks. The Bayesian methodology proposed is illustrated in two examples.
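
    As a hedged illustration of inequality-constrained posterior inference (a simple Monte Carlo stand-in, not the Gibbs sampler of Laudy and Hoijtink), the sketch below samples 2x2 cell probabilities from an unconstrained Dirichlet posterior and keeps only the draws whose odds ratio satisfies an assumed constraint θ ≥ 1, which also yields an estimate of the posterior probability of the constrained model.

```python
# Sketch of posterior inference under an inequality constraint on a 2x2
# contingency table. The table, prior, and constraint are illustrative.
import numpy as np

rng = np.random.default_rng(3)
counts = np.array([30, 10, 12, 25])          # observed 2x2 table, row-major
prior = np.ones(4)                           # uniform Dirichlet prior

draws = rng.dirichlet(prior + counts, size=20000)
odds_ratio = (draws[:, 0] * draws[:, 3]) / (draws[:, 1] * draws[:, 2])
keep = draws[odds_ratio >= 1.0]              # inequality-constrained posterior

print("posterior P(odds ratio >= 1):", len(keep) / len(draws))
print("constrained posterior mean cells:", keep.mean(axis=0).round(3))
```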

  11. Functional coupling constrains craniofacial diversification in Lake Tanganyika cichlids

    PubMed Central

    Tsuboi, Masahito; Gonzalez-Voyer, Alejandro; Kolm, Niclas

    2015-01-01

    Functional coupling, where a single morphological trait performs multiple functions, is a universal feature of organismal design. Theory suggests that functional coupling may constrain the rate of phenotypic evolution, yet empirical tests of this hypothesis are rare. In fish, the evolutionary transition from guarding the eggs on a sandy/rocky substrate (i.e. substrate guarding) to mouthbrooding introduces a novel function to the craniofacial system and offers an ideal opportunity to test the functional coupling hypothesis. Using a combination of geometric morphometrics and a recently developed phylogenetic comparative method, we found that head morphology evolution was 43% faster in substrate guarding species than in mouthbrooding species. Furthermore, for species in which females were solely responsible for mouthbrooding the males had a higher rate of head morphology evolution than in those with bi-parental mouthbrooding. Our results support the hypothesis that adaptations resulting in functional coupling constrain phenotypic evolution. PMID:25948565

  12. Matter coupling in partially constrained vielbein formulation of massive gravity

    SciTech Connect

    Felice, Antonio De; Gümrükçüoğlu, A. Emir; Heisenberg, Lavinia; Mukohyama, Shinji

    2016-01-04

    We consider a linear effective vielbein matter coupling that does not introduce the Boulware-Deser ghost in ghost-free massive gravity. This is achieved in the partially constrained vielbein formulation. We first introduce the formalism and prove the absence of the ghost at all scales. Next, we investigate the cosmological application of this coupling in the new formulation. We show that even though the background evolution agrees with the metric formulation, the perturbations display important differences in the partially constrained vielbein formulation. We study the cosmological perturbations of the two branches of solutions separately. The tensor perturbations coincide with those in the metric formulation. Concerning the vector and scalar perturbations, the requirement of absence of ghost and gradient instabilities yields a slightly different allowed parameter space.

  13. Social Emotional Optimization Algorithm for Nonlinear Constrained Optimization Problems

    NASA Astrophysics Data System (ADS)

    Xu, Yuechun; Cui, Zhihua; Zeng, Jianchao

    Nonlinear programming is an important branch of operational research and has been applied successfully to various real-life problems. In this paper, a new approach called the social emotional optimization algorithm (SEOA), a swarm intelligence technique that simulates human behavior guided by emotion, is used to solve such problems. Simulation results show that the proposed algorithm is effective and efficient for nonlinear constrained programming problems.

  14. Critical transition in the constrained traveling salesman problem

    NASA Astrophysics Data System (ADS)

    Andrecut, M.; Ali, M. K.

    2001-04-01

    We investigate the finite-size scaling of the mean optimal tour length as a function of the density of obstacles in a constrained variant of the traveling salesman problem (TSP). The computational experiments reveal a critical transition (at ρ_c ≈ 85%) in the dependence of the excess of the mean optimal tour length over the Held-Karp lower bound on the density of obstacles.

  15. Anti-B-B Mixing Constrains Topcolor-Assisted Technicolor

    SciTech Connect

    Burdman, Gustavo; Lane, Kenneth; Rador, Tonguc

    2000-12-06

    We argue that extended technicolor augmented with topcolor requires that all mixing between the third and the first two quark generations resides in the mixing matrix of left-handed down quarks. Then, the anti-B_d-B_d mixing that occurs in topcolor models constrains the coloron and Z' boson masses to be greater than about 5 TeV. This implies fine tuning of the topcolor couplings to better than 1 percent.

  16. Control of the constrained planar simple inverted pendulum

    NASA Technical Reports Server (NTRS)

    Bavarian, B.; Wyman, B. F.; Hemami, H.

    1983-01-01

    Control of a constrained planar inverted pendulum by eigenstructure assignment is considered. Linear feedback is used to stabilize and decouple the system in such a way that specified subspaces of the state space are invariant for the closed-loop system. The effectiveness of the feedback law is tested by digital computer simulation. Pre-compensation by an inverse plant is used to improve performance.
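
    The report's constrained model is not available here, so the following sketch only illustrates pole placement by state feedback on an assumed simple torque-driven inverted pendulum linearized about the upright position, followed by an Euler simulation to confirm stabilization; all numerical values are illustrative.

```python
# Sketch of pole placement for a linearized inverted pendulum,
# theta'' = (g/l) theta + u / (m l^2). Assumed toy model, not the
# constrained planar pendulum of the paper.
import numpy as np

g, l, m = 9.81, 1.0, 1.0
w2 = g / l                      # linearized instability coefficient
b = 1.0 / (m * l ** 2)          # input gain

# Desired closed-loop poles at -2 and -3  ->  s^2 + 5 s + 6.
a1, a0 = 5.0, 6.0
k1 = (a0 + w2) / b              # feedback gain on theta
k2 = a1 / b                     # feedback gain on theta_dot
K = np.array([k1, k2])

x = np.array([0.2, 0.0])        # initial tilt of 0.2 rad
dt = 0.001
for _ in range(5000):
    u = -K @ x                  # state feedback
    xdot = np.array([x[1], w2 * x[0] + b * u])
    x = x + dt * xdot           # forward Euler step

print("state after 5 s:", x)    # should be close to the upright equilibrium
```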

  17. Constraining dark energy through the stability of cosmic structures

    SciTech Connect

    Pavlidou, V.; Tetradis, N.; Tomaras, T.N. E-mail: ntetrad@phys.uoa.gr

    2014-05-01

    For a general dark-energy equation of state, we estimate the maximum possible radius of massive structures that are not destabilized by the acceleration of the cosmological expansion. A comparison with known stable structures constrains the equation of state. The robustness of the constraint can be enhanced through the accumulation of additional astrophysical data and a better understanding of the dynamics of bound cosmic structures.

  18. Constrained and Unconstrained Localization for Automated Inspection of Marine Propellers

    DTIC Science & Technology

    1991-05-01

    This work ...constrained localization allows a subset of the points to have stronger influence on the transformation. The measured points in the context of this work refer

  19. Image restoration by constrained total variation minimization and variants

    NASA Astrophysics Data System (ADS)

    Chambolle, Antonin; Lions, Pierre-Louis

    1995-09-01

    We study a signal or image restoration method proposed by Rudin and Osher, namely, the constrained total variation (TV) minimization. This very powerful method gives excellent results on nearly piecewise-constant signals, but fails on more complicated data. We propose a very general way to combine convex potentials like the TV, and in this setting we introduce a variant that performs better on piecewise regular, but non-constant, signals.
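
    As a rough sketch of the underlying idea (unconstrained, smoothed TV regularization by plain gradient descent, not the constrained formulation or the variant studied by Chambolle and Lions), the code below denoises a synthetic piecewise-constant image by minimizing 0.5||u - f||^2 + λ·TV(u) with periodic-boundary differences; λ, the step size, the smoothing parameter, and the test image are assumptions.

```python
# Sketch of smoothed-TV denoising by gradient descent (assumed parameters,
# periodic boundaries; not the constrained TV algorithm of the paper).
import numpy as np

def tv_denoise(f, lam=0.15, step=0.05, iters=600, eps=1e-3):
    """Gradient descent on 0.5*||u - f||^2 + lam * smoothed TV(u)."""
    u = f.copy()
    for _ in range(iters):
        ux = np.roll(u, -1, axis=1) - u          # forward differences
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux**2 + uy**2 + eps)       # smoothed gradient magnitude
        px, py = ux / mag, uy / mag
        # Adjoint of the forward difference gives the TV gradient.
        grad_tv = (np.roll(px, 1, axis=1) - px) + (np.roll(py, 1, axis=0) - py)
        u -= step * ((u - f) + lam * grad_tv)
    return u

rng = np.random.default_rng(4)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0   # piecewise constant
noisy = clean + 0.2 * rng.normal(size=clean.shape)
denoised = tv_denoise(noisy)
print("noisy MSE   :", ((noisy - clean) ** 2).mean())
print("denoised MSE:", ((denoised - clean) ** 2).mean())
```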

  20. Observational constraints on undulant cosmologies

    SciTech Connect

    Barenboim, Gabriela; Mena Requejo, Olga; Quigg, Chris; /Fermilab

    2005-10-01

    In an undulant universe, cosmic expansion is characterized by alternating periods of acceleration and deceleration. We examine cosmologies in which the dark-energy equation of state varies periodically with the number of e-foldings of the scale factor of the universe, and use observations to constrain the frequency of oscillation. We find a tension between a forceful response to the cosmic coincidence problem and the standard treatment of structure formation.